| Column | Type | Range |
|:---------------------|:-----------------------|:------------------------------------------------|
| id | string | length 9 – 104 |
| author | string | length 3 – 36 |
| task_category | string | 32 classes |
| tags | sequence | length 1 – 4.05k |
| created_time | timestamp[ns, tz=UTC] | 2022-03-02 23:29:04 – 2025-03-18 02:34:30 |
| last_modified | string (date) | 2021-02-13 00:06:56 – 2025-03-18 09:30:19 |
| downloads | int64 | 0 – 15.6M |
| likes | int64 | 0 – 4.86k |
| README | string | length 44 – 1.01M |
| matched_bigbio_names | sequence | length 1 – 8 |
YurtsAI/ner-document-context
YurtsAI
token-classification
[ "span-marker", "tensorboard", "safetensors", "token-classification", "ner", "named-entity-recognition", "generated_from_span_marker_trainer", "en", "dataset:YurtsAI/named_entity_recognition_document_context", "base_model:FacebookAI/roberta-large", "base_model:finetune:FacebookAI/roberta-large", "model-index", "region:us" ]
2024-07-31T13:50:58Z
2024-09-11T20:14:03+00:00
423
1
--- base_model: roberta-large datasets: - YurtsAI/named_entity_recognition_document_context language: - en library_name: span-marker metrics: - precision - recall - f1 pipeline_tag: token-classification tags: - span-marker - token-classification - ner - named-entity-recognition - generated_from_span_marker_trainer widget: - text: '* * phone call transcript: university research paper discussion * * * * date: * * 09041942 * * time: * * 3:45 pm * * participants: * * dr. emily carter (ec) - principal investigator dr. john smith (js) - co-investigator--- * * ec: * * hey john, got a minute to discuss the latest draft of our paper on crispr-cas9?' - text: monday is a chill day – beach time at barceloneta and maybe some shopping at la rambla. - text: don't forget to fast for at least 8 hours before the procedure – that means no food or drink after midnight! - text: whether it's buying a house in 5 years, saving for a killer vacation next summer, or just building an emergency fund, write it down. - text: '- * * full integration: * * all recipes from the rbso must be incorporated into event menus by november 1, 2023.' model-index: - name: SpanMarker with roberta-large on YurtsAI/named_entity_recognition_document_context results: - task: type: token-classification name: Named Entity Recognition dataset: name: Unknown type: YurtsAI/named_entity_recognition_document_context split: eval metrics: - type: f1 value: 0.8349078585045542 name: F1 - type: precision value: 0.8308950630296387 name: Precision - type: recall value: 0.8389596015495296 name: Recall --- # SpanMarker with roberta-large on YurtsAI/named_entity_recognition_document_context This is a [SpanMarker](https://github.com/tomaarsen/SpanMarkerNER) model trained on the [YurtsAI/named_entity_recognition_document_context](https://huggingface.co/datasets/YurtsAI/named_entity_recognition_document_context) dataset that can be used for Named Entity Recognition. 
This SpanMarker model uses [roberta-large](https://huggingface.co/roberta-large) as the underlying encoder. ## Model Details ### Model Description - **Model Type:** SpanMarker - **Encoder:** [roberta-large](https://huggingface.co/roberta-large) - **Maximum Sequence Length:** 256 tokens - **Maximum Entity Length:** 11 words - **Training Dataset:** [YurtsAI/named_entity_recognition_document_context](https://huggingface.co/datasets/YurtsAI/named_entity_recognition_document_context) - **Language:** en <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SpanMarker on GitHub](https://github.com/tomaarsen/SpanMarkerNER) - **Thesis:** [SpanMarker For Named Entity Recognition](https://raw.githubusercontent.com/tomaarsen/SpanMarkerNER/main/thesis.pdf) ### Model Labels | Label | Examples | |:--------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------| | DATETIME__absolute | "14:00 hrs", "15th november 2023 at 10:00 am", "october 15th , 2023" | | DATETIME__authored | "25 february 26", "sunday , 21 august , 1938", "1961-05-08" | | DATETIME__range | "29th of oct. , 2023", "september 2021 to august 2023", "jan 2022 - dec 2022" | | DATETIME__relative | "eod friday", "dec 15 , 11:59 pm", "10/15" | | GENERAL__art-broadcastprogram | "stranger things", "live q & a", "product design concept sketchbook for kids" | | GENERAL__art-film | "the crown", "kill bill", "stranger things" | | GENERAL__art-music | | | GENERAL__art-other | "statue of liberty", "broadway show", "wicked" | | GENERAL__art-painting | "draw your dream house", "design a superhero costume" | | GENERAL__art-writtenart | "optimization of quantum algorithms for cryptographic applications", "introduction to algorithms", "intro to cs '' by j. 
doe" | | GENERAL__building-airport | "ory", "charles de gaulle", "cdg" | | GENERAL__building-hospital | "green valley clinic", "department of oncology", "st. mary 's hospital" | | GENERAL__building-hotel | "le jules verne", "hôtel ritz", "the beverly hills hotel" | | GENERAL__building-library | "ancient library", "the grand library", "jefferson library" | | GENERAL__building-other | "louvre museum", "engineering building", "eiffel tower" | | GENERAL__building-restaurant | "l'ambroisie", "bella 's bistro", "in-n-out burger" | | GENERAL__building-sportsfacility | "fenway" | | GENERAL__building-theater | "gershwin theatre", "opera house", "broadway" | | GENERAL__event-attack/battle/war/militaryconflict | "1863 battle of ridgefield", "battle of gettysburg", "war of 1812" | | GENERAL__event-other | "annual science fair", "summer splash '23", "research methodology workshop" | | GENERAL__event-sportsevent | "international olympiad in informatics", "ftx", "ioi" | | GENERAL__location-GPE | "fr", "paris ,", "italy" | | GENERAL__location-bodiesofwater | "river x", "river blue", "seine river" | | GENERAL__location-island | "maldives", "similan islands", "ellis island" | | GENERAL__location-mountain | "andes mountains", "swiss alps", "pine ridge" | | GENERAL__location-other | "times square", "old market", "venice beach" | | GENERAL__location-park | "central park", "ueno park", "universal studios" | | GENERAL__location-road/railway/highway/transit | "i-95", "underground railroad", "hollywood walk of fame" | | GENERAL__organization-company | "green earth organics", "xyz corporation", "north atlantic fisheries" | | GENERAL__organization-education | "graduate school", "xyz", "xyz university" | | GENERAL__organization-government/governmentagency | "department of economic development", "moe", "ministry of environment" | | GENERAL__organization-media/newspaper | "pinterest", "yelp", "insta" | | GENERAL__organization-other | "historical society", "grants office", "admissions committee" 
| | GENERAL__organization-religion | "buddhist", "zen buddhist", "shinto" | | GENERAL__organization-showorganization | "phare", "the soundbytes" | | GENERAL__organization-sportsteam | "varsity soccer team", "red sox" | | GENERAL__other-astronomything | | | GENERAL__other-award | "team excellence award", "innovation award", "employee of the month" | | GENERAL__other-biologything | "fodmap", "troponin i", "cmp" | | GENERAL__other-chemicalthing | "co2", "pm2.5", "nitrate" | | GENERAL__other-currency | "usd", "inr", "$ $ $" | | GENERAL__other-disease | "mi", "irritable bowel syndrome", "myocardial infarction" | | GENERAL__other-educationaldegree | "executive mba", "phd in quantum computing ,", "phd" | | GENERAL__other-god | "inari", "athena", "inari taisha" | | GENERAL__other-language | "french", "english", "spanish" | | GENERAL__other-law | "cas", "clean air standards", "environmental protection act ( epa ) 2023" | | GENERAL__other-livingthing | "eastern box turtle", "monarch butterfly", "western burrowing owl" | | GENERAL__other-medical | "asa", "dapt", "clopidogrel" | | GENERAL__person-artist/author | "carol", "picasso", "warhol" | | GENERAL__person-other | "jamie", "sarah", "mark" | | GENERAL__person-politician | "jane doe", "vespasian", "constantine i" | | GENERAL__person-scholar | "dr. smith", "dr. lee", "dr. johnson" | | GENERAL__person-soldier | "davis", "lt. sarah johnson", "col. r. 
johnson" | | GENERAL__product-airplane | "hmmwvs", "uh-60s", "m1a2s" | | GENERAL__product-car | "hmmwvs", "high mobility multipurpose wheeled vehicles", "mine-resistant ambush protected" | | GENERAL__product-food | "pumpkin spice", "quinoa salad", "golden jubilee feast" | | GENERAL__product-game | "stardew valley", "valorant", "call of duty : warzone" | | GENERAL__product-other | "engagement metrics", "xj-200", "smart goal templates" | | GENERAL__product-ship | "liberty island ferry", "hms victory", "thames river cruise" | | GENERAL__product-software | "instagram", "svm", "r" | | GENERAL__product-train | "n'ex", "shinkansen", "tgv" | | GENERAL__product-weapon | "m1 abrams", "m4 carbine", "m4 carbines" | ## Evaluation ### Metrics | Label | Precision | Recall | F1 | |:--------------------------------------------------|:----------|:-------|:-------| | **all** | 0.8309 | 0.8390 | 0.8349 | | DATETIME__absolute | 0.8744 | 0.8577 | 0.8660 | | DATETIME__authored | 0.9956 | 0.9935 | 0.9946 | | DATETIME__range | 0.8451 | 0.9262 | 0.8838 | | DATETIME__relative | 0.8266 | 0.7498 | 0.7863 | | GENERAL__art-broadcastprogram | 0.6538 | 0.6296 | 0.6415 | | GENERAL__art-film | 0.8 | 1.0 | 0.8889 | | GENERAL__art-music | 0.0 | 0.0 | 0.0 | | GENERAL__art-other | 0.625 | 0.7143 | 0.6667 | | GENERAL__art-painting | 0.0 | 0.0 | 0.0 | | GENERAL__art-writtenart | 0.7373 | 0.8047 | 0.7695 | | GENERAL__building-airport | 0.8668 | 0.9689 | 0.9150 | | GENERAL__building-hospital | 0.8378 | 0.9323 | 0.8826 | | GENERAL__building-hotel | 0.7577 | 0.8603 | 0.8057 | | GENERAL__building-library | 0.0 | 0.0 | 0.0 | | GENERAL__building-other | 0.7597 | 0.8409 | 0.7982 | | GENERAL__building-restaurant | 0.7953 | 0.8695 | 0.8307 | | GENERAL__building-sportsfacility | 0.0 | 0.0 | 0.0 | | GENERAL__building-theater | 0.6 | 0.6667 | 0.6316 | | GENERAL__event-attack/battle/war/militaryconflict | 0.8438 | 0.9310 | 0.8852 | | GENERAL__event-other | 0.6019 | 0.6382 | 0.6195 | | GENERAL__event-sportsevent | 0.0 | 
0.0 | 0.0 | | GENERAL__location-GPE | 0.7232 | 0.7888 | 0.7546 | | GENERAL__location-bodiesofwater | 0.6724 | 0.975 | 0.7959 | | GENERAL__location-island | 0.7455 | 0.9111 | 0.8200 | | GENERAL__location-mountain | 0.7436 | 0.8529 | 0.7945 | | GENERAL__location-other | 0.7186 | 0.7793 | 0.7477 | | GENERAL__location-park | 0.7899 | 0.8704 | 0.8282 | | GENERAL__location-road/railway/highway/transit | 0.6325 | 0.7095 | 0.6688 | | GENERAL__organization-company | 0.8665 | 0.8605 | 0.8635 | | GENERAL__organization-education | 0.8256 | 0.8608 | 0.8428 | | GENERAL__organization-government/governmentagency | 0.8344 | 0.8318 | 0.8331 | | GENERAL__organization-media/newspaper | 0.6667 | 0.4 | 0.5 | | GENERAL__organization-other | 0.7790 | 0.8105 | 0.7944 | | GENERAL__organization-religion | 0.6667 | 0.8 | 0.7273 | | GENERAL__organization-showorganization | 0.0 | 0.0 | 0.0 | | GENERAL__organization-sportsteam | 0.0 | 0.0 | 0.0 | | GENERAL__other-astronomything | 0.0 | 0.0 | 0.0 | | GENERAL__other-award | 0.8216 | 0.8859 | 0.8525 | | GENERAL__other-biologything | 0.7246 | 0.8961 | 0.8013 | | GENERAL__other-chemicalthing | 0.7687 | 0.8047 | 0.7863 | | GENERAL__other-currency | 0.6304 | 0.6744 | 0.6517 | | GENERAL__other-disease | 0.8594 | 0.9048 | 0.8815 | | GENERAL__other-educationaldegree | 0.7119 | 0.75 | 0.7304 | | GENERAL__other-god | 0.8 | 0.5714 | 0.6667 | | GENERAL__other-language | 0.6818 | 1.0 | 0.8108 | | GENERAL__other-law | 0.7978 | 0.8462 | 0.8212 | | GENERAL__other-livingthing | 0.7385 | 0.9320 | 0.8240 | | GENERAL__other-medical | 0.7778 | 0.8343 | 0.8050 | | GENERAL__person-artist/author | 0.625 | 0.3846 | 0.4762 | | GENERAL__person-other | 0.8839 | 0.8979 | 0.8908 | | GENERAL__person-politician | 0.7534 | 0.7432 | 0.7483 | | GENERAL__person-scholar | 0.8640 | 0.8769 | 0.8704 | | GENERAL__person-soldier | 0.7674 | 0.7586 | 0.7630 | | GENERAL__product-airplane | 0.6774 | 0.6364 | 0.6562 | | GENERAL__product-car | 0.9286 | 0.7879 | 0.8525 | | GENERAL__product-food 
| 0.7798 | 0.7859 | 0.7828 | | GENERAL__product-game | 0.75 | 0.75 | 0.75 | | GENERAL__product-other | 0.7175 | 0.7537 | 0.7351 | | GENERAL__product-ship | 0.0 | 0.0 | 0.0 | | GENERAL__product-software | 0.8093 | 0.8403 | 0.8245 | | GENERAL__product-train | 0.75 | 0.375 | 0.5 | | GENERAL__product-weapon | 0.7794 | 0.8833 | 0.8281 |

## Uses

### Direct Use for Inference

```python
from span_marker import SpanMarkerModel

# Download from the 🤗 Hub
model = SpanMarkerModel.from_pretrained("YurtsAI/ner-document-context")
# Run inference
entities = model.predict("monday is a chill day – beach time at barceloneta and maybe some shopping at la rambla.")
```

### Downstream Use

You can finetune this model on your own dataset.

<details><summary>Click to expand</summary>

```python
from datasets import load_dataset
from span_marker import SpanMarkerModel, Trainer

# Download from the 🤗 Hub
model = SpanMarkerModel.from_pretrained("YurtsAI/ner-document-context")
# Specify a dataset with "tokens" and "ner_tags" columns
dataset = load_dataset("conll2003")  # For example CoNLL2003

# Initialize a Trainer using the pretrained model & dataset
trainer = Trainer(
    model=model,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
)
trainer.train()
trainer.save_model("YurtsAI/named_entity_recognition_document_context-finetuned")
```

</details>

<!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* -->

<!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* -->

<!-- ### Recommendations *What are recommendations with respect to the foreseeable issues?
For example, filtering explicit content.* --> ## Training Details ### Training Set Metrics | Training set | Min | Median | Max | |:----------------------|:----|:--------|:----| | Sentence length | 1 | 14.6796 | 691 | | Entities per sentence | 0 | 0.4235 | 35 | ### Training Hyperparameters - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training Results | Epoch | Step | Validation Loss | Validation Precision | Validation Recall | Validation F1 | Validation Accuracy | |:------:|:-----:|:---------------:|:--------------------:|:-----------------:|:-------------:|:-------------------:| | 0.0299 | 500 | 0.0254 | 0.5244 | 0.0116 | 0.0228 | 0.9292 | | 0.0597 | 1000 | 0.0144 | 0.5380 | 0.3492 | 0.4235 | 0.9444 | | 0.0896 | 1500 | 0.0099 | 0.7134 | 0.4410 | 0.5450 | 0.9534 | | 0.1194 | 2000 | 0.0088 | 0.6461 | 0.6571 | 0.6516 | 0.9596 | | 0.1493 | 2500 | 0.0074 | 0.7177 | 0.6363 | 0.6745 | 0.9628 | | 0.1791 | 3000 | 0.0075 | 0.6612 | 0.7342 | 0.6958 | 0.9637 | | 0.2090 | 3500 | 0.0073 | 0.6686 | 0.7286 | 0.6973 | 0.9634 | | 0.2388 | 4000 | 0.0061 | 0.7552 | 0.7044 | 0.7289 | 0.9693 | | 0.2687 | 4500 | 0.0062 | 0.7385 | 0.7150 | 0.7266 | 0.9682 | | 0.2986 | 5000 | 0.0070 | 0.6667 | 0.7792 | 0.7186 | 0.9654 | | 0.3284 | 5500 | 0.0063 | 0.6984 | 0.7774 | 0.7358 | 0.9689 | | 0.3583 | 6000 | 0.0055 | 0.7941 | 0.7023 | 0.7454 | 0.9706 | | 0.3881 | 6500 | 0.0055 | 0.7540 | 0.7640 | 0.7589 | 0.9722 | | 0.4180 | 7000 | 0.0053 | 0.7700 | 0.7614 | 0.7657 | 0.9732 | | 0.4478 | 7500 | 0.0053 | 0.7791 | 0.7698 | 0.7744 | 0.9742 | | 0.4777 | 8000 | 0.0054 | 0.7396 | 0.8062 | 0.7715 | 0.9729 | | 0.5075 | 8500 | 0.0051 | 0.7653 | 0.7944 | 0.7796 | 0.9741 | | 0.5374 | 9000 | 0.0050 | 0.7773 | 0.7844 | 0.7808 | 0.9747 | | 0.5672 | 9500 | 0.0049 | 0.7954 | 0.7711 | 
0.7830 | 0.9757 | | 0.5971 | 10000 | 0.0049 | 0.7844 | 0.7876 | 0.7860 | 0.9754 | | 0.6270 | 10500 | 0.0047 | 0.7898 | 0.7940 | 0.7919 | 0.9761 | | 0.6568 | 11000 | 0.0047 | 0.7852 | 0.7929 | 0.7890 | 0.9761 | | 0.6867 | 11500 | 0.0047 | 0.8001 | 0.7908 | 0.7954 | 0.9770 | | 0.7165 | 12000 | 0.0050 | 0.7643 | 0.8145 | 0.7886 | 0.9755 | | 0.7464 | 12500 | 0.0047 | 0.7991 | 0.7892 | 0.7941 | 0.9764 | | 0.7762 | 13000 | 0.0046 | 0.7948 | 0.8084 | 0.8015 | 0.9774 | | 0.8061 | 13500 | 0.0046 | 0.7841 | 0.8154 | 0.7994 | 0.9771 | | 0.8359 | 14000 | 0.0043 | 0.8283 | 0.7776 | 0.8021 | 0.9783 | | 0.8658 | 14500 | 0.0044 | 0.8054 | 0.7993 | 0.8023 | 0.9773 | | 0.8957 | 15000 | 0.0047 | 0.7704 | 0.8152 | 0.7922 | 0.9758 | | 0.9255 | 15500 | 0.0043 | 0.8018 | 0.8149 | 0.8083 | 0.9782 | | 0.9554 | 16000 | 0.0043 | 0.8255 | 0.7938 | 0.8093 | 0.9789 | | 0.9852 | 16500 | 0.0042 | 0.8201 | 0.8008 | 0.8104 | 0.9787 | | 1.0151 | 17000 | 0.0044 | 0.7947 | 0.8175 | 0.8059 | 0.9784 | | 1.0449 | 17500 | 0.0044 | 0.7942 | 0.8195 | 0.8066 | 0.9777 | | 1.0748 | 18000 | 0.0043 | 0.8124 | 0.8110 | 0.8117 | 0.9789 | | 1.1046 | 18500 | 0.0043 | 0.7987 | 0.8157 | 0.8071 | 0.9788 | | 1.1345 | 19000 | 0.0043 | 0.8037 | 0.8171 | 0.8103 | 0.9789 | | 1.1644 | 19500 | 0.0042 | 0.8178 | 0.8076 | 0.8127 | 0.9796 | | 1.1942 | 20000 | 0.0044 | 0.7803 | 0.8389 | 0.8085 | 0.9780 | | 1.2241 | 20500 | 0.0043 | 0.8040 | 0.8210 | 0.8124 | 0.9790 | | 1.2539 | 21000 | 0.0043 | 0.8038 | 0.8245 | 0.8141 | 0.9788 | | 1.2838 | 21500 | 0.0041 | 0.8318 | 0.7973 | 0.8142 | 0.9794 | | 1.3136 | 22000 | 0.0041 | 0.8106 | 0.8211 | 0.8158 | 0.9796 | | 1.3435 | 22500 | 0.0041 | 0.8288 | 0.8046 | 0.8165 | 0.9796 | | 1.3733 | 23000 | 0.0041 | 0.8218 | 0.8170 | 0.8194 | 0.9799 | | 1.4032 | 23500 | 0.0042 | 0.8164 | 0.8171 | 0.8168 | 0.9799 | | 1.4330 | 24000 | 0.0041 | 0.8105 | 0.8248 | 0.8176 | 0.9793 | | 1.4629 | 24500 | 0.0042 | 0.8073 | 0.8196 | 0.8134 | 0.9791 | | 1.4928 | 25000 | 0.0040 | 0.8211 | 0.8162 | 0.8187 | 0.9797 
| | 1.5226 | 25500 | 0.0040 | 0.8195 | 0.8225 | 0.8210 | 0.9800 | | 1.5525 | 26000 | 0.0040 | 0.8372 | 0.8018 | 0.8191 | 0.9799 | | 1.5823 | 26500 | 0.0040 | 0.8263 | 0.8161 | 0.8212 | 0.9802 | | 1.6122 | 27000 | 0.0039 | 0.8275 | 0.8141 | 0.8208 | 0.9802 | | 1.6420 | 27500 | 0.0040 | 0.8264 | 0.8198 | 0.8231 | 0.9804 | | 1.6719 | 28000 | 0.0040 | 0.8218 | 0.8195 | 0.8206 | 0.9799 | | 1.7017 | 28500 | 0.0039 | 0.8286 | 0.8195 | 0.8240 | 0.9803 | | 1.7316 | 29000 | 0.0041 | 0.8004 | 0.8357 | 0.8177 | 0.9788 | | 1.7615 | 29500 | 0.0040 | 0.8138 | 0.8304 | 0.8220 | 0.9801 | | 1.7913 | 30000 | 0.0040 | 0.8160 | 0.8309 | 0.8234 | 0.9804 | | 1.8212 | 30500 | 0.0039 | 0.8204 | 0.8262 | 0.8233 | 0.9802 | | 1.8510 | 31000 | 0.0038 | 0.8292 | 0.8228 | 0.8260 | 0.9810 | | 1.8809 | 31500 | 0.0039 | 0.8247 | 0.8246 | 0.8246 | 0.9806 | | 1.9107 | 32000 | 0.0038 | 0.8267 | 0.8258 | 0.8262 | 0.9810 | | 1.9406 | 32500 | 0.0039 | 0.8102 | 0.8398 | 0.8248 | 0.9805 | | 1.9704 | 33000 | 0.0039 | 0.8321 | 0.8185 | 0.8253 | 0.9809 | | 2.0003 | 33500 | 0.0038 | 0.8325 | 0.8261 | 0.8293 | 0.9814 | | 2.0302 | 34000 | 0.0038 | 0.8352 | 0.8228 | 0.8289 | 0.9813 | | 2.0600 | 34500 | 0.0041 | 0.8144 | 0.8369 | 0.8255 | 0.9809 | | 2.0899 | 35000 | 0.0039 | 0.8274 | 0.8281 | 0.8277 | 0.9813 | | 2.1197 | 35500 | 0.0039 | 0.8198 | 0.8353 | 0.8275 | 0.9812 | | 2.1496 | 36000 | 0.0039 | 0.8211 | 0.8358 | 0.8284 | 0.9811 | | 2.1794 | 36500 | 0.0039 | 0.8242 | 0.8300 | 0.8271 | 0.9809 | | 2.2093 | 37000 | 0.0039 | 0.8194 | 0.8317 | 0.8255 | 0.9808 | | 2.2391 | 37500 | 0.0039 | 0.8258 | 0.8344 | 0.8301 | 0.9814 | | 2.2690 | 38000 | 0.0039 | 0.8292 | 0.8302 | 0.8297 | 0.9816 | | 2.2989 | 38500 | 0.0039 | 0.8281 | 0.8315 | 0.8298 | 0.9813 | | 2.3287 | 39000 | 0.0039 | 0.8174 | 0.8386 | 0.8279 | 0.9808 | | 2.3586 | 39500 | 0.0039 | 0.8208 | 0.8364 | 0.8285 | 0.9810 | | 2.3884 | 40000 | 0.0039 | 0.8230 | 0.8379 | 0.8304 | 0.9815 | | 2.4183 | 40500 | 0.0038 | 0.8355 | 0.8273 | 0.8314 | 0.9816 | | 2.4481 | 
41000 | 0.0038 | 0.8290 | 0.8347 | 0.8319 | 0.9816 | | 2.4780 | 41500 | 0.0038 | 0.8233 | 0.8403 | 0.8317 | 0.9815 | | 2.5078 | 42000 | 0.0039 | 0.8186 | 0.8417 | 0.8300 | 0.9814 | | 2.5377 | 42500 | 0.0038 | 0.8321 | 0.8343 | 0.8332 | 0.9818 | | 2.5675 | 43000 | 0.0038 | 0.8239 | 0.8396 | 0.8317 | 0.9816 | | 2.5974 | 43500 | 0.0038 | 0.8267 | 0.8378 | 0.8322 | 0.9816 | | 2.6273 | 44000 | 0.0038 | 0.8325 | 0.8343 | 0.8334 | 0.9818 | | 2.6571 | 44500 | 0.0038 | 0.8254 | 0.8399 | 0.8326 | 0.9817 | | 2.6870 | 45000 | 0.0038 | 0.8339 | 0.8338 | 0.8339 | 0.9820 | | 2.7168 | 45500 | 0.0038 | 0.8301 | 0.8381 | 0.8341 | 0.9819 | | 2.7467 | 46000 | 0.0038 | 0.8309 | 0.8371 | 0.8340 | 0.9818 | | 2.7765 | 46500 | 0.0038 | 0.8296 | 0.8377 | 0.8337 | 0.9817 | | 2.8064 | 47000 | 0.0037 | 0.8337 | 0.8349 | 0.8343 | 0.9820 | | 2.8362 | 47500 | 0.0037 | 0.8303 | 0.8387 | 0.8345 | 0.9820 | | 2.8661 | 48000 | 0.0037 | 0.8289 | 0.8401 | 0.8344 | 0.9819 | | 2.8960 | 48500 | 0.0037 | 0.8299 | 0.8400 | 0.8349 | 0.9820 | | 2.9258 | 49000 | 0.0037 | 0.8289 | 0.8401 | 0.8344 | 0.9819 | | 2.9557 | 49500 | 0.0037 | 0.8322 | 0.8380 | 0.8351 | 0.9821 | | 2.9855 | 50000 | 0.0037 | 0.8312 | 0.8384 | 0.8348 | 0.9820 | ### Framework Versions - Python: 3.11.7 - SpanMarker: 1.5.0 - Transformers: 4.42.1 - PyTorch: 2.1.1+cu121 - Datasets: 2.14.5 - Tokenizers: 0.19.1 ## Citation ### BibTeX ``` @software{Aarsen_SpanMarker, author = {Aarsen, Tom}, license = {Apache-2.0}, title = {{SpanMarker for Named Entity Recognition}}, url = {https://github.com/tomaarsen/SpanMarkerNER} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
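As a sanity check on the aggregate scores above, the overall F1 is the harmonic mean of the overall precision and recall. A minimal sketch (not part of the original card) verifying this from the reported numbers:

```python
def f1_score(precision: float, recall: float) -> float:
    """F1 is the harmonic mean of precision and recall."""
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Overall evaluation metrics reported in the model card above.
precision = 0.8308950630296387
recall = 0.8389596015495296

print(f1_score(precision, recall))  # ≈ 0.8349, the reported overall F1
```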
[ "CAS" ]
apple/OpenELM-450M
apple
text-generation
[ "transformers", "safetensors", "openelm", "text-generation", "custom_code", "arxiv:2404.14619", "license:apple-amlr", "autotrain_compatible", "region:us" ]
2024-04-12T21:48:16Z
2025-02-28T18:31:35+00:00
421
25
---
license: apple-amlr
license_name: apple-sample-code-license
license_link: LICENSE
---

# OpenELM

*Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari*

We introduce **OpenELM**, a family of **Open** **E**fficient **L**anguage **M**odels. OpenELM uses a layer-wise scaling strategy to efficiently allocate parameters within each layer of the transformer model, leading to enhanced accuracy. We pretrained OpenELM models using the [CoreNet](https://github.com/apple/corenet) library. We release both pretrained and instruction-tuned models with 270M, 450M, 1.1B, and 3B parameters. We release the complete framework, encompassing data preparation, training, fine-tuning, and evaluation procedures, alongside multiple pre-trained checkpoints and training logs, to facilitate open research.

Our pre-training dataset contains RefinedWeb, deduplicated PILE, a subset of RedPajama, and a subset of Dolma v1.6, totaling approximately 1.8 trillion tokens. Please check the license agreements and terms of these datasets before using them.

## Usage

We have provided an example function to generate output from OpenELM models loaded via the [Hugging Face Hub](https://huggingface.co/docs/hub/) in `generate_openelm.py`. You can try the model by running the following command:

```
python generate_openelm.py --model apple/OpenELM-450M --hf_access_token [HF_ACCESS_TOKEN] --prompt 'Once upon a time there was' --generate_kwargs repetition_penalty=1.2
```

Please refer to [this link](https://huggingface.co/docs/hub/security-tokens) to obtain your Hugging Face access token. Additional arguments to the Hugging Face `generate` function can be passed via `generate_kwargs`.
As an example, to speedup the inference, you can try [lookup token speculative generation](https://huggingface.co/docs/transformers/generation_strategies) by passing the `prompt_lookup_num_tokens` argument as follows: ``` python generate_openelm.py --model apple/OpenELM-450M --hf_access_token [HF_ACCESS_TOKEN] --prompt 'Once upon a time there was' --generate_kwargs repetition_penalty=1.2 prompt_lookup_num_tokens=10 ``` Alternatively, try model-wise speculative generation with an [assistive model](https://huggingface.co/blog/assisted-generation) by passing a smaller model through the `assistant_model` argument, for example: ``` python generate_openelm.py --model apple/OpenELM-450M --hf_access_token [HF_ACCESS_TOKEN] --prompt 'Once upon a time there was' --generate_kwargs repetition_penalty=1.2 --assistant_model [SMALLER_MODEL] ``` ## Main Results ### Zero-Shot | **Model Size** | **ARC-c** | **ARC-e** | **BoolQ** | **HellaSwag** | **PIQA** | **SciQ** | **WinoGrande** | **Average** | |-----------------------------------------------------------------------------|-----------|-----------|-----------|---------------|-----------|-----------|----------------|-------------| | [OpenELM-270M](https://huggingface.co/apple/OpenELM-270M) | 26.45 | 45.08 | **53.98** | 46.71 | 69.75 | **84.70** | **53.91** | 54.37 | | [OpenELM-270M-Instruct](https://huggingface.co/apple/OpenELM-270M-Instruct) | **30.55** | **46.68** | 48.56 | **52.07** | **70.78** | 84.40 | 52.72 | **55.11** | | [OpenELM-450M](https://huggingface.co/apple/OpenELM-450M) | 27.56 | 48.06 | 55.78 | 53.97 | 72.31 | 87.20 | 58.01 | 57.56 | | [OpenELM-450M-Instruct](https://huggingface.co/apple/OpenELM-450M-Instruct) | **30.38** | **50.00** | **60.37** | **59.34** | **72.63** | **88.00** | **58.96** | **59.95** | | [OpenELM-1_1B](https://huggingface.co/apple/OpenELM-1_1B) | 32.34 | **55.43** | 63.58 | 64.81 | **75.57** | **90.60** | 61.72 | 63.44 | | 
[OpenELM-1_1B-Instruct](https://huggingface.co/apple/OpenELM-1_1B-Instruct) | **37.97** | 52.23 | **70.00** | **71.20** | 75.03 | 89.30 | **62.75** | **65.50** | | [OpenELM-3B](https://huggingface.co/apple/OpenELM-3B) | 35.58 | 59.89 | 67.40 | 72.44 | 78.24 | **92.70** | 65.51 | 67.39 | | [OpenELM-3B-Instruct](https://huggingface.co/apple/OpenELM-3B-Instruct) | **39.42** | **61.74** | **68.17** | **76.36** | **79.00** | 92.50 | **66.85** | **69.15** | ### LLM360 | **Model Size** | **ARC-c** | **HellaSwag** | **MMLU** | **TruthfulQA** | **WinoGrande** | **Average** | |-----------------------------------------------------------------------------|-----------|---------------|-----------|----------------|----------------|-------------| | [OpenELM-270M](https://huggingface.co/apple/OpenELM-270M) | 27.65 | 47.15 | 25.72 | **39.24** | **53.83** | 38.72 | | [OpenELM-270M-Instruct](https://huggingface.co/apple/OpenELM-270M-Instruct) | **32.51** | **51.58** | **26.70** | 38.72 | 53.20 | **40.54** | | [OpenELM-450M](https://huggingface.co/apple/OpenELM-450M) | 30.20 | 53.86 | **26.01** | 40.18 | 57.22 | 41.50 | | [OpenELM-450M-Instruct](https://huggingface.co/apple/OpenELM-450M-Instruct) | **33.53** | **59.31** | 25.41 | **40.48** | **58.33** | **43.41** | | [OpenELM-1_1B](https://huggingface.co/apple/OpenELM-1_1B) | 36.69 | 65.71 | **27.05** | 36.98 | 63.22 | 45.93 | | [OpenELM-1_1B-Instruct](https://huggingface.co/apple/OpenELM-1_1B-Instruct) | **41.55** | **71.83** | 25.65 | **45.95** | **64.72** | **49.94** | | [OpenELM-3B](https://huggingface.co/apple/OpenELM-3B) | 42.24 | 73.28 | **26.76** | 34.98 | 67.25 | 48.90 | | [OpenELM-3B-Instruct](https://huggingface.co/apple/OpenELM-3B-Instruct) | **47.70** | **76.87** | 24.80 | **38.76** | **67.96** | **51.22** | ### OpenLLM Leaderboard | **Model Size** | **ARC-c** | **CrowS-Pairs** | **HellaSwag** | **MMLU** | **PIQA** | **RACE** | **TruthfulQA** | **WinoGrande** | **Average** | 
|-----------------------------------------------------------------------------|-----------|-----------------|---------------|-----------|-----------|-----------|----------------|----------------|-------------| | [OpenELM-270M](https://huggingface.co/apple/OpenELM-270M) | 27.65 | **66.79** | 47.15 | 25.72 | 69.75 | 30.91 | **39.24** | **53.83** | 45.13 | | [OpenELM-270M-Instruct](https://huggingface.co/apple/OpenELM-270M-Instruct) | **32.51** | 66.01 | **51.58** | **26.70** | **70.78** | 33.78 | 38.72 | 53.20 | **46.66** | | [OpenELM-450M](https://huggingface.co/apple/OpenELM-450M) | 30.20 | **68.63** | 53.86 | **26.01** | 72.31 | 33.11 | 40.18 | 57.22 | 47.69 | | [OpenELM-450M-Instruct](https://huggingface.co/apple/OpenELM-450M-Instruct) | **33.53** | 67.44 | **59.31** | 25.41 | **72.63** | **36.84** | **40.48** | **58.33** | **49.25** | | [OpenELM-1_1B](https://huggingface.co/apple/OpenELM-1_1B) | 36.69 | **71.74** | 65.71 | **27.05** | **75.57** | 36.46 | 36.98 | 63.22 | 51.68 | | [OpenELM-1_1B-Instruct](https://huggingface.co/apple/OpenELM-1_1B-Instruct) | **41.55** | 71.02 | **71.83** | 25.65 | 75.03 | **39.43** | **45.95** | **64.72** | **54.40** | | [OpenELM-3B](https://huggingface.co/apple/OpenELM-3B) | 42.24 | **73.29** | 73.28 | **26.76** | 78.24 | **38.76** | 34.98 | 67.25 | 54.35 | | [OpenELM-3B-Instruct](https://huggingface.co/apple/OpenELM-3B-Instruct) | **47.70** | 72.33 | **76.87** | 24.80 | **79.00** | 38.47 | **38.76** | **67.96** | **55.73** | See the technical report for more results and comparison. ## Evaluation ### Setup Install the following dependencies: ```bash # install public lm-eval-harness harness_repo="public-lm-eval-harness" git clone https://github.com/EleutherAI/lm-evaluation-harness ${harness_repo} cd ${harness_repo} # use main branch on 03-15-2024, SHA is dc90fec git checkout dc90fec pip install -e . cd .. 
# 66d6242 is the main branch on 2024-04-01
pip install datasets@git+https://github.com/huggingface/datasets.git@66d6242
pip install "tokenizers>=0.15.2" "transformers>=4.38.2" "sentencepiece>=0.2.0"
```

### Evaluate OpenELM

```bash
# OpenELM-450M
hf_model=apple/OpenELM-450M
# This flag is needed because lm-eval-harness sets add_bos_token to False by default,
# but OpenELM uses the LLaMA tokenizer, which requires add_bos_token to be True.
tokenizer=meta-llama/Llama-2-7b-hf
add_bos_token=True
batch_size=1
mkdir lm_eval_output

shot=0
task=arc_challenge,arc_easy,boolq,hellaswag,piqa,race,winogrande,sciq,truthfulqa_mc2
lm_eval --model hf \
    --model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \
    --tasks ${task} \
    --device cuda:0 \
    --num_fewshot ${shot} \
    --output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \
    --batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log

shot=5
task=mmlu,winogrande
lm_eval --model hf \
    --model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \
    --tasks ${task} \
    --device cuda:0 \
    --num_fewshot ${shot} \
    --output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \
    --batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log

shot=25
task=arc_challenge,crows_pairs_english
lm_eval --model hf \
    --model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \
    --tasks ${task} \
    --device cuda:0 \
    --num_fewshot ${shot} \
    --output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \
    --batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log

shot=10
task=hellaswag
lm_eval --model hf \
    --model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \
    --tasks ${task} \
--device cuda:0 \ --num_fewshot ${shot} \ --output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \ --batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log ``` ## Bias, Risks, and Limitations The release of OpenELM models aims to empower and enrich the open research community by providing access to state-of-the-art language models. Trained on publicly available datasets, these models are made available without any safety guarantees. Consequently, there exists the possibility of these models producing outputs that are inaccurate, harmful, biased, or objectionable in response to user prompts. Thus, it is imperative for users and developers to undertake thorough safety testing and implement appropriate filtering mechanisms tailored to their specific requirements. ## Citation If you find our work useful, please cite: ```BibTex @article{mehtaOpenELMEfficientLanguage2024, title = {{OpenELM}: {An} {Efficient} {Language} {Model} {Family} with {Open} {Training} and {Inference} {Framework}}, shorttitle = {{OpenELM}}, url = {https://arxiv.org/abs/2404.14619v1}, language = {en}, urldate = {2024-04-24}, journal = {arXiv.org}, author = {Mehta, Sachin and Sekhavat, Mohammad Hossein and Cao, Qingqing and Horton, Maxwell and Jin, Yanzi and Sun, Chenfan and Mirzadeh, Iman and Najibi, Mahyar and Belenko, Dmitry and Zatloukal, Peter and Rastegari, Mohammad}, month = apr, year = {2024}, } @inproceedings{mehta2022cvnets, author = {Mehta, Sachin and Abdolhosseini, Farzad and Rastegari, Mohammad}, title = {CVNets: High Performance Library for Computer Vision}, year = {2022}, booktitle = {Proceedings of the 30th ACM International Conference on Multimedia}, series = {MM '22} } ```
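The `--output_path` and `tee` targets in the evaluation commands above are built with Bash pattern substitution, which replaces every occurrence of a pattern inside a variable. A small standalone sketch of the two substitutions used:

```shell
#!/usr/bin/env bash
hf_model=apple/OpenELM-450M
task=arc_challenge,arc_easy,boolq

# ${var//pattern/replacement} replaces ALL occurrences of the pattern:
# "/" -> "_" makes the model id safe to embed in a file name;
# "," -> "_" turns the comma-separated task list into one token.
model_slug=${hf_model//\//_}
task_slug=${task//,/_}

echo "${model_slug}_${task_slug}-0shot"   # apple_OpenELM-450M_arc_challenge_arc_easy_boolq-0shot
```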
[ "SCIQ" ]
apple/OpenELM-1_1B
apple
text-generation
[ "transformers", "safetensors", "openelm", "text-generation", "custom_code", "arxiv:2404.14619", "license:apple-amlr", "autotrain_compatible", "region:us" ]
2024-04-12T21:48:43Z
2025-02-28T18:31:36+00:00
421
30
--- license: apple-amlr license_name: apple-sample-code-license license_link: LICENSE --- # OpenELM *Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari* We introduce **OpenELM**, a family of **Open** **E**fficient **L**anguage **M**odels. OpenELM uses a layer-wise scaling strategy to efficiently allocate parameters within each layer of the transformer model, leading to enhanced accuracy. We pretrained OpenELM models using the [CoreNet](https://github.com/apple/corenet) library. We release both pretrained and instruction-tuned models with 270M, 450M, 1.1B and 3B parameters. We release the complete framework, encompassing data preparation, training, fine-tuning, and evaluation procedures, alongside multiple pre-trained checkpoints and training logs, to facilitate open research. Our pre-training dataset contains RefinedWeb, deduplicated PILE, a subset of RedPajama, and a subset of Dolma v1.6, totaling approximately 1.8 trillion tokens. Please check license agreements and terms of these datasets before using them. ## Usage We have provided an example function to generate output from OpenELM models loaded via the [HuggingFace Hub](https://huggingface.co/docs/hub/) in `generate_openelm.py`. You can try the model by running the following command: ``` python generate_openelm.py --model apple/OpenELM-1_1B --hf_access_token [HF_ACCESS_TOKEN] --prompt 'Once upon a time there was' --generate_kwargs repetition_penalty=1.2 ``` Please refer to [this link](https://huggingface.co/docs/hub/security-tokens) to obtain your Hugging Face access token. Additional arguments to the Hugging Face generate function can be passed via `generate_kwargs`.
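If you are scripting many runs, the invocation above can be assembled programmatically. The sketch below is illustrative only — the `build_generate_command` helper is hypothetical and not part of this repo; only `generate_openelm.py` and its flags shown above are:

```python
import shlex

def build_generate_command(model, hf_access_token, prompt, generate_kwargs=None):
    """Assemble a generate_openelm.py invocation like the one shown above.

    Hypothetical convenience helper for scripting; generate_kwargs is a dict of
    key=value pairs forwarded to the Hugging Face generate function.
    """
    parts = ["python", "generate_openelm.py",
             "--model", model,
             "--hf_access_token", hf_access_token,
             "--prompt", prompt]
    if generate_kwargs:
        parts.append("--generate_kwargs")
        # each extra generation argument becomes a key=value token
        parts.extend(f"{k}={v}" for k, v in generate_kwargs.items())
    # quote each token so prompts with spaces survive the shell
    return " ".join(shlex.quote(p) for p in parts)

cmd = build_generate_command("apple/OpenELM-1_1B", "[HF_ACCESS_TOKEN]",
                             "Once upon a time there was",
                             {"repetition_penalty": 1.2})
print(cmd)
```

The same helper can emit a whole sweep of commands by looping over models or generation settings.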
As an example, to speedup the inference, you can try [lookup token speculative generation](https://huggingface.co/docs/transformers/generation_strategies) by passing the `prompt_lookup_num_tokens` argument as follows: ``` python generate_openelm.py --model apple/OpenELM-1_1B --hf_access_token [HF_ACCESS_TOKEN] --prompt 'Once upon a time there was' --generate_kwargs repetition_penalty=1.2 prompt_lookup_num_tokens=10 ``` Alternatively, try model-wise speculative generation with an [assistive model](https://huggingface.co/blog/assisted-generation) by passing a smaller model through the `assistant_model` argument, for example: ``` python generate_openelm.py --model apple/OpenELM-1_1B --hf_access_token [HF_ACCESS_TOKEN] --prompt 'Once upon a time there was' --generate_kwargs repetition_penalty=1.2 --assistant_model [SMALLER_MODEL] ``` ## Main Results ### Zero-Shot | **Model Size** | **ARC-c** | **ARC-e** | **BoolQ** | **HellaSwag** | **PIQA** | **SciQ** | **WinoGrande** | **Average** | |-----------------------------------------------------------------------------|-----------|-----------|-----------|---------------|-----------|-----------|----------------|-------------| | [OpenELM-270M](https://huggingface.co/apple/OpenELM-270M) | 26.45 | 45.08 | **53.98** | 46.71 | 69.75 | **84.70** | **53.91** | 54.37 | | [OpenELM-270M-Instruct](https://huggingface.co/apple/OpenELM-270M-Instruct) | **30.55** | **46.68** | 48.56 | **52.07** | **70.78** | 84.40 | 52.72 | **55.11** | | [OpenELM-450M](https://huggingface.co/apple/OpenELM-450M) | 27.56 | 48.06 | 55.78 | 53.97 | 72.31 | 87.20 | 58.01 | 57.56 | | [OpenELM-450M-Instruct](https://huggingface.co/apple/OpenELM-450M-Instruct) | **30.38** | **50.00** | **60.37** | **59.34** | **72.63** | **88.00** | **58.96** | **59.95** | | [OpenELM-1_1B](https://huggingface.co/apple/OpenELM-1_1B) | 32.34 | **55.43** | 63.58 | 64.81 | **75.57** | **90.60** | 61.72 | 63.44 | | 
[OpenELM-1_1B-Instruct](https://huggingface.co/apple/OpenELM-1_1B-Instruct) | **37.97** | 52.23 | **70.00** | **71.20** | 75.03 | 89.30 | **62.75** | **65.50** | | [OpenELM-3B](https://huggingface.co/apple/OpenELM-3B) | 35.58 | 59.89 | 67.40 | 72.44 | 78.24 | **92.70** | 65.51 | 67.39 | | [OpenELM-3B-Instruct](https://huggingface.co/apple/OpenELM-3B-Instruct) | **39.42** | **61.74** | **68.17** | **76.36** | **79.00** | 92.50 | **66.85** | **69.15** | ### LLM360 | **Model Size** | **ARC-c** | **HellaSwag** | **MMLU** | **TruthfulQA** | **WinoGrande** | **Average** | |-----------------------------------------------------------------------------|-----------|---------------|-----------|----------------|----------------|-------------| | [OpenELM-270M](https://huggingface.co/apple/OpenELM-270M) | 27.65 | 47.15 | 25.72 | **39.24** | **53.83** | 38.72 | | [OpenELM-270M-Instruct](https://huggingface.co/apple/OpenELM-270M-Instruct) | **32.51** | **51.58** | **26.70** | 38.72 | 53.20 | **40.54** | | [OpenELM-450M](https://huggingface.co/apple/OpenELM-450M) | 30.20 | 53.86 | **26.01** | 40.18 | 57.22 | 41.50 | | [OpenELM-450M-Instruct](https://huggingface.co/apple/OpenELM-450M-Instruct) | **33.53** | **59.31** | 25.41 | **40.48** | **58.33** | **43.41** | | [OpenELM-1_1B](https://huggingface.co/apple/OpenELM-1_1B) | 36.69 | 65.71 | **27.05** | 36.98 | 63.22 | 45.93 | | [OpenELM-1_1B-Instruct](https://huggingface.co/apple/OpenELM-1_1B-Instruct) | **41.55** | **71.83** | 25.65 | **45.95** | **64.72** | **49.94** | | [OpenELM-3B](https://huggingface.co/apple/OpenELM-3B) | 42.24 | 73.28 | **26.76** | 34.98 | 67.25 | 48.90 | | [OpenELM-3B-Instruct](https://huggingface.co/apple/OpenELM-3B-Instruct) | **47.70** | **76.87** | 24.80 | **38.76** | **67.96** | **51.22** | ### OpenLLM Leaderboard | **Model Size** | **ARC-c** | **CrowS-Pairs** | **HellaSwag** | **MMLU** | **PIQA** | **RACE** | **TruthfulQA** | **WinoGrande** | **Average** | 
|-----------------------------------------------------------------------------|-----------|-----------------|---------------|-----------|-----------|-----------|----------------|----------------|-------------| | [OpenELM-270M](https://huggingface.co/apple/OpenELM-270M) | 27.65 | **66.79** | 47.15 | 25.72 | 69.75 | 30.91 | **39.24** | **53.83** | 45.13 | | [OpenELM-270M-Instruct](https://huggingface.co/apple/OpenELM-270M-Instruct) | **32.51** | 66.01 | **51.58** | **26.70** | **70.78** | 33.78 | 38.72 | 53.20 | **46.66** | | [OpenELM-450M](https://huggingface.co/apple/OpenELM-450M) | 30.20 | **68.63** | 53.86 | **26.01** | 72.31 | 33.11 | 40.18 | 57.22 | 47.69 | | [OpenELM-450M-Instruct](https://huggingface.co/apple/OpenELM-450M-Instruct) | **33.53** | 67.44 | **59.31** | 25.41 | **72.63** | **36.84** | **40.48** | **58.33** | **49.25** | | [OpenELM-1_1B](https://huggingface.co/apple/OpenELM-1_1B) | 36.69 | **71.74** | 65.71 | **27.05** | **75.57** | 36.46 | 36.98 | 63.22 | 51.68 | | [OpenELM-1_1B-Instruct](https://huggingface.co/apple/OpenELM-1_1B-Instruct) | **41.55** | 71.02 | **71.83** | 25.65 | 75.03 | **39.43** | **45.95** | **64.72** | **54.40** | | [OpenELM-3B](https://huggingface.co/apple/OpenELM-3B) | 42.24 | **73.29** | 73.28 | **26.76** | 78.24 | **38.76** | 34.98 | 67.25 | 54.35 | | [OpenELM-3B-Instruct](https://huggingface.co/apple/OpenELM-3B-Instruct) | **47.70** | 72.33 | **76.87** | 24.80 | **79.00** | 38.47 | **38.76** | **67.96** | **55.73** | See the technical report for more results and comparison. ## Evaluation ### Setup Install the following dependencies: ```bash # install public lm-eval-harness harness_repo="public-lm-eval-harness" git clone https://github.com/EleutherAI/lm-evaluation-harness ${harness_repo} cd ${harness_repo} # use main branch on 03-15-2024, SHA is dc90fec git checkout dc90fec pip install -e . cd .. 
# 66d6242 is the main branch on 2024-04-01 pip install datasets@git+https://github.com/huggingface/datasets.git@66d6242 pip install tokenizers>=0.15.2 transformers>=4.38.2 sentencepiece>=0.2.0 ``` ### Evaluate OpenELM ```bash # OpenELM-1_1B hf_model=apple/OpenELM-1_1B # this flag is needed because lm-eval-harness set add_bos_token to False by default, but OpenELM uses LLaMA tokenizer which requires add_bos_token to be True tokenizer=meta-llama/Llama-2-7b-hf add_bos_token=True batch_size=1 mkdir lm_eval_output shot=0 task=arc_challenge,arc_easy,boolq,hellaswag,piqa,race,winogrande,sciq,truthfulqa_mc2 lm_eval --model hf \ --model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \ --tasks ${task} \ --device cuda:0 \ --num_fewshot ${shot} \ --output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \ --batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log shot=5 task=mmlu,winogrande lm_eval --model hf \ --model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \ --tasks ${task} \ --device cuda:0 \ --num_fewshot ${shot} \ --output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \ --batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log shot=25 task=arc_challenge,crows_pairs_english lm_eval --model hf \ --model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \ --tasks ${task} \ --device cuda:0 \ --num_fewshot ${shot} \ --output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \ --batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log shot=10 task=hellaswag lm_eval --model hf \ --model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \ --tasks ${task} \ 
--device cuda:0 \ --num_fewshot ${shot} \ --output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \ --batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log ``` ## Bias, Risks, and Limitations The release of OpenELM models aims to empower and enrich the open research community by providing access to state-of-the-art language models. Trained on publicly available datasets, these models are made available without any safety guarantees. Consequently, there exists the possibility of these models producing outputs that are inaccurate, harmful, biased, or objectionable in response to user prompts. Thus, it is imperative for users and developers to undertake thorough safety testing and implement appropriate filtering mechanisms tailored to their specific requirements. ## Citation If you find our work useful, please cite: ```BibTex @article{mehtaOpenELMEfficientLanguage2024, title = {{OpenELM}: {An} {Efficient} {Language} {Model} {Family} with {Open} {Training} and {Inference} {Framework}}, shorttitle = {{OpenELM}}, url = {https://arxiv.org/abs/2404.14619v1}, language = {en}, urldate = {2024-04-24}, journal = {arXiv.org}, author = {Mehta, Sachin and Sekhavat, Mohammad Hossein and Cao, Qingqing and Horton, Maxwell and Jin, Yanzi and Sun, Chenfan and Mirzadeh, Iman and Najibi, Mahyar and Belenko, Dmitry and Zatloukal, Peter and Rastegari, Mohammad}, month = apr, year = {2024}, } @inproceedings{mehta2022cvnets, author = {Mehta, Sachin and Abdolhosseini, Farzad and Rastegari, Mohammad}, title = {CVNets: High Performance Library for Computer Vision}, year = {2022}, booktitle = {Proceedings of the 30th ACM International Conference on Multimedia}, series = {MM '22} } ```
[ "SCIQ" ]
den2nova/FlexDreamHK
den2nova
text-to-image
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "ja", "license:creativeml-openrail-m", "region:us" ]
2023-07-06T10:11:45Z
2023-07-29T04:21:29+00:00
417
17
--- language: - ja library_name: diffusers license: creativeml-openrail-m pipeline_tag: text-to-image tags: - text-to-image - stable-diffusion --- # <u>🎈 FlexDreamHK</u> <img src="https://huggingface.co/den2nova/FlexDreamHK/resolve/main/sample/banner2.png" width="100%" height="100%">  <b>FlexDreamHK</b> was created <b style="color:#dc143c;">aiming for a model that contains no leaked NovelAI model, or that keeps that risk as low as possible</b>.<br><br>  The model name honors the models used in the merge, combining the names of the main ones.<br><br>  The merge sources consist only of models that were further trained (fine-tuned) on Stable Diffusion or Waifu Diffusion.<br>  In addition, art-style LoRAs created from images generated with Niji Journey and with models I use every day were merged in, so this is also a so-called <b style="color:#4753a2;">distillation-style model</b>.<br><br>  Transparency is ensured as far as possible by disclosing the merge process, the LoRAs themselves, and the datasets used to create them. ----------------------------- # 🎀 Characteristics <ul> <li style="color:red;font-weight:bold;">Strengths</li> <ul> <li>Crisp anime-style illustrations with bold main lines</li> <li>Easily produces cute solo girls</li> <li>Some degree of NSFW support</li> <li>Reasonably responsive to prompts</li> <li>Image generation specialized in character illustrations</li> </ul> <li style="color:blue;font-weight:bold;">Weaknesses</li> <ul> <li>Tends to cast rather dark shadows</li> <li>Images with multiple people are hard to produce</li> <li>Poor at drawing men</li> <li>Tags other than danbooru tags have little effect (especially noticeable with color specifications)</li> <li>Somewhat lacking in variety of facial expressions</li> <li>Poor at background-centered illustrations</li> <li>Relatively many breakdowns in fine details, including hands and fingers</li> </ul> </ul> ----------------------------- ## 👉 Recommended settings <ul> <li>clip skip: 2 / no VAE needed</li> <li>If faces come out melted, the <a href="https://github.com/Bing-su/adetailer">adetailer</a> extension is recommended</li> <li>Recommended negative prompt: (nsfw, extra fingers, deformed hands, polydactyl:1.3), (worst quality, low quality, poor quality, bad quality:1.35)</li> </ul> ----------------------------- ## History <table> <tr> <th>Date</th> <th>Change</th> </tr> <tr> <td>2023/07/29</td> <td>FlexDreamHK_v2.0 sample images uploaded</td> </tr> <tr> <td>2023/07/28</td> <td>FlexDreamHK_v2.0 released</td> </tr> <tr> <td>2023/07/07</td> <td>FlexDreamHK_v1.0 released</td> </tr> </table> ----------------------------- ## ⭕ License <b>creativeml-openrail-m</b> <div class="px-2"> <table class="table-fixed border mt-0 text-xs"> <tr> <td class="align-middle px-4 w-8"> <span class="text-green-500"> <h5>OK</h5>
</span> </td> <td> 著作者表記を入れずにモデルを使用する<br> Use the model without crediting the creator </td> </tr> <tr> <td class="align-middle px-4 w-8"> <span class="text-green-500"> <h5>OK</h5> </span> </td> <td> このモデルで生成した画像を商用利用する<br> Sell images they generate </td> </tr> <tr> <td class="align-middle px-4 w-8"> <span class="text-green-500"> <h5>OK</h5> </span> </td> <td> 商用画像生成サービスに、このモデルを使用する<br> Run on services that generate images for money </td> </tr> <tr> <td class="align-middle px-4 w-8"> <span class="text-green-500"> <h5>OK</h5> </span> </td> <td> このモデルを使用したマージモデルを共有・配布する<br> Share merges using this model </td> </tr> <tr> <td class="align-middle px-4 w-8"> <span class="text-green-500"> <h5>OK</h5> </span> </td> <td> このモデル、または派生モデルを販売する<br> Sell this model or merges using this model </td> </tr> <tr> <td class="align-middle px-4 w-8"> <span class="text-green-500"> <h5>OK</h5> </span> </td> <td> このモデルをマージしたモデルに異なる権限を設定する<br> Have different permissions when sharing merges </td> </tr> </table> </div> ----------------------------- # ver2.0 ## 🍳 レシピ / Recipe <div class="px-2"> <div class="border p-2"> <details> <table> <thead> <tr> <th>A</th> <th>B</th> <th>C</th> <th>weight</th> <th>OUTPUT</th> </tr> </thead> <tbody> <tr> <td>FlexDreamHK_v1.0</td> <td><a href="https://huggingface.co/sazyou-roukaku/LittleStepMix">LittleStepMix_A</a></td> <td></td> <td>Weight sum cosineA 0.5</td> <td>FlexDreamHK_2.0_orig</td> </tr> <tr> <td>FlexDreamHK_2.0_orig</td> <td></td> <td></td> <td>adjust 0,0,0,0,1,1,2</td> <td>FlexDreamHK_v2.0</td> </tr> </tbody> </table> </details> </div> </div> ----------------------------- <details> <summary>🎨 Samples</summary> <img src="https://huggingface.co/den2nova/FlexDreamHK/resolve/main/sample/ver20_grid-0000.png" width="100%" height="100%"> <pre style="white-space: pre-line;" class="w-full"> 1girl, solo, from above, blonde hair, short ponytail hair, amber eyes, small breasts, armored dress, outdoors, fantasy castle, nervous, nice hands Negative prompt: (nsfw, extra fingers,
deformed hands, polydactyl:1.3), (worst quality, low quality, poor quality, bad quality:1.35), demon horns Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 7, Seed: 3921621133, Size: 512x512, Model hash: e2c364c195, Model: FlexDreamHK_v2.0, Denoising strength: 0.6, Clip skip: 2, Hires upscale: 2, Hires steps: 10, Hires upscaler: lollypop, Version: v1.3.1 </pre> <img src="https://huggingface.co/den2nova/FlexDreamHK/resolve/main/sample/ver20_grid-0001.png" width="100%" height="100%"> <pre style="white-space: pre-line;" class="w-full"> <a href="https://twitter.com/Emanon_14/status/1684944352161026049">Prompt borrowed from Emanon</a> 1girl, smile, sitting, poncho, frills, gothic, snowflakes, winter, campfire, polar bear Negative prompt: (nsfw, extra fingers, deformed hands, polydactyl:1.3), (worst quality, low quality, poor quality, bad quality:1.35) Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 7, Seed: 3452924181, Size: 512x512, Model hash: e2c364c195, Model: FlexDreamHK_v2.0, Denoising strength: 0.6, Clip skip: 2, Hires upscale: 2, Hires steps: 10, Hires upscaler: lollypop, Version: v1.3.1 </pre> <img src="https://huggingface.co/den2nova/FlexDreamHK/resolve/main/sample/ver20_grid-0002.png" width="100%" height="100%"> <pre style="white-space: pre-line;" class="w-full"> 1girl, solo, flower, japanese clothes, hair ornament, long hair, hair flower, kimono, smile, looking at viewer, white flower, floral print, red flower, very long hair, jewelry, earrings, hakama, bangs, closed mouth, blue eyes, braid, hakama skirt, skirt, blush, long sleeves, red hakama Negative prompt: (nsfw, extra fingers, deformed hands, polydactyl:1.3), (worst quality, low quality, poor quality, bad quality:1.35) Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 7, Seed: 4251802516, Size: 512x512, Model hash: e2c364c195, Model: FlexDreamHK_v2.0, Denoising strength: 0.6, Clip skip: 2, Hires upscale: 2, Hires steps: 10, Hires upscaler: lollypop, Version: v1.3.1 </pre> <img
src="https://huggingface.co/den2nova/FlexDreamHK/resolve/main/sample/ver20_grid-0003.png" width="100%" height="100%"> <pre style="white-space: pre-line;" class="w-full"> multiple girls, 2girls, cat, blue hair girl and pink hair girl, long hair, ahoge, school uniform Negative prompt: (nsfw, extra fingers, deformed hands, polydactyl:1.3), (worst quality, low quality, poor quality, bad quality:1.35) Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 7, Seed: 434535967, Size: 512x512, Model hash: e2c364c195, Model: FlexDreamHK_v2.0, Denoising strength: 0.6, Clip skip: 2, Hires upscale: 2, Hires steps: 10, Hires upscaler: lollypop, Version: v1.3.1 </pre> </details> ----------------------------- # ver1.0 <img src="https://huggingface.co/den2nova/FlexDreamHK/resolve/main/sample/banner.jpg" width="100%" height="100%"> ## 🍳 レシピ / Recipe <div class="px-2"> <div class="border p-2"> <details> <table> <thead> <tr> <th>A</th> <th>B</th> <th>C</th> <th>weight</th> <th>OUTPUT</th> </tr> <tr> <td><a href="https://civitai.com/models/25694/epicrealism">epicrealism_pureEvolutionV3</a></td> <td><a href="https://civitai.com/models/4384?modelVersionId=94081">dreamshaper_631BakedVae</a></td> <td><a href="https://huggingface.co/runwayml/stable-diffusion-v1-5">v1-5-pruned-emaonly</a></td> <td>Add difference 0.5</td> <td>epicdreamv5</td> </tr> <tr> <td><a href="https://huggingface.co/Ai-tensa/FlexWaifu">FlexWaifuRainbow</a></td> <td><a href="https://civitai.com/models/82813?modelVersionId=106905">sdhk_v40</a></td> <td><a href="https://huggingface.co/runwayml/stable-diffusion-v1-5">v1-5-pruned-emaonly</a></td> <td>Add difference 0.5</td> <td>FlexHKrainbow</td> </tr> <tr> <td>FlexHKrainbow</td> <td>epicdreamv5</td> <td></td> <td>COSAIN</td> <td>FlexHK_Rainbowe_epicdream</td> </tr> <tr> <td>FlexHK_Rainbowe_epicdream</td> <td colspan="3">LoRA <a href="https://huggingface.co/datasets/den2nova/den2niji">den2niji</a>:0.5:KAO,<a 
href="https://huggingface.co/datasets/den2nova/den2SD">den2SD</a>:0.5:KAO<br>※Each LoRA is an art-style LoRA made from images generated with Niji Journey and with a model I use every day, trained on SDHKv3.0 (both the datasets and the LoRAs themselves are published at the linked pages)<br>※KAO weights: 0,0,0,0,0,0,0,0,1,1,1,1,0,0,0,0,0</td> <td>FlexHK_beta1</td> </tr> <tr> <td><a href="https://civitai.com/models/82813?modelVersionId=89247">sdhk_v30</a></td> <td><a href="https://civitai.com/models/4384?modelVersionId=94081">dreamshaper_631BakedVae</a></td> <td></td> <td>0,1,0.842423804012346,0.71508487654321,0.615234375,0.540123456790123,<br> 0.487003279320988,0.453125,0.435739776234568,0.432098765432099,0.439453125,<br> 0.455054012345679,0.476152584876543,0.5,0.523847415123457,0.544945987654321,0.560546875,<br> 0.2,0.2,0,0.2,0.459876543209876,0.384765625,0.28491512345679,0.157576195987653,0</td> <td>230627_1</td> </tr> <tr> <td>230627_1</td> <td colspan="3">LoRA <a href="https://huggingface.co/datasets/den2nova/den2niji">den2niji</a>:0.8:KAO,<a href="https://huggingface.co/datasets/den2nova/den2SD">den2SD</a>:0.8:KAO</td> <td>230627_1_LoRA</td> </tr> <tr> <td>230627_1_LoRA</td> <td colspan="3">LoRA den2SD-41:0.3:KAO</td> <td>230627_1_LoRA2</td> </tr> <tr> <td>230627_1_LoRA2</td> <td colspan="3">LoRA <a href="https://civitai.com/models/102188/sdhkv4qu">SDHKv4_QU</a>:2</td> <td>230627_1_LoRA_QU2.0</td> </tr> <tr> <td>FlexHK_beta1</td> <td>230627_1_LoRA_QU2.0</td> <td></td> <td>FAKE_CUBIC_HERMITE</td> <td>FlexHK_beta2</td> </tr> <tr> <td>FlexHK_beta2</td> <td><a href="https://huggingface.co/hakurei/waifu-diffusion-v1-3">wd-v1-3-float16</a></td> <td><a href="https://huggingface.co/runwayml/stable-diffusion-v1-5">v1-5-pruned-emaonly</a></td> <td>Add difference 0.25</td> <td>FlexDreamHK_v1</td> </tr> </tbody> </table> </details> </div> </div> ----------------------------- <details> <summary>🎨 Samples</summary> <img src="https://huggingface.co/den2nova/FlexDreamHK/resolve/main/sample/grid-0000.png" width="100%" height="100%"> <pre style="white-space: pre-line;" class="w-full"> 1girl, solo, framed, silver hair,
dreadlocks, indigo eyes, huge breasts, china gothic lolita style dress, hand on own chin, sweet, flowers, Bellflower, frozen lakeside , light smile, nice hands, standing Negative prompt: (nsfw, extra fingers, deformed hands, polydactyl:1.3), (worst quality, low quality, poor quality, bad quality:1.35) Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 7, Seed: 1658825243, Size: 512x512, Model hash: 7ab6f37bb0, Model: FlexDreamHK_v1, Denoising strength: 0.6, Clip skip: 2, Hires upscale: 2.5, Hires steps: 10, Hires upscaler: Latent, Version: v1.4.0 </pre> <img src="https://huggingface.co/den2nova/FlexDreamHK/resolve/main/sample/grid-0001.png" width="100%" height="100%"> <pre style="white-space: pre-line;" class="w-full"> 1girl, solo, (wide shot, fisheye:0.85), pink hair, twintails, orange eyes, small breasts, cheerleader pom pom, hand on own knee, rose, instrument, Poinsettia, bedroom , desperation, nice hands, squatting Negative prompt: (nsfw, extra fingers, deformed hands, polydactyl:1.3), (worst quality, low quality, poor quality, bad quality:1.35) Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 7, Seed: 2578613301, Size: 512x512, Model hash: 7ab6f37bb0, Model: FlexDreamHK_v1, Denoising strength: 0.6, Clip skip: 2, Hires upscale: 2.5, Hires steps: 10, Hires upscaler: Latent, Version: v1.4.0 </pre> <img src="https://huggingface.co/den2nova/FlexDreamHK/resolve/main/sample/grid-0002.png" width="100%" height="100%"> <pre style="white-space: pre-line;" class="w-full"> 1girl, solo, from above, red hair, bowl cut, light brown eyes, small breasts, astronaut suit, hand on own head, feeling of floating, space station , surprised, nice hands, flying Negative prompt: (nsfw, extra fingers, deformed hands, polydactyl:1.3), (worst quality, low quality, poor quality, bad quality:1.35) Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 7, Seed: 2288316915, Size: 512x512, Model hash: 7ab6f37bb0, Model: FlexDreamHK_v1, Denoising strength: 0.6, Clip skip: 2, Hires upscale: 2.5, Hires 
steps: 10, Hires upscaler: Latent, Version: v1.4.0 </pre> <img src="https://huggingface.co/den2nova/FlexDreamHK/resolve/main/sample/grid-0003.png" width="100%" height="100%"> <pre style="white-space: pre-line;" class="w-full"> 1girl, solo, album cover, green hair, ponytail hair, dark green eyes, huge breasts, school uniform, arm up, door, prism, building , happy, nice hands, standing, petals, cherry blossoms Negative prompt: (nsfw, extra fingers, deformed hands, polydactyl:1.3), (worst quality, low quality, poor quality, bad quality:1.35) Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 7, Seed: 1151456510, Size: 512x512, Model hash: 7ab6f37bb0, Model: FlexDreamHK_v1, Denoising strength: 0.6, Clip skip: 2, Hires upscale: 2.5, Hires steps: 10, Hires upscaler: Latent, Version: v1.4.0 </pre> </details> -----------------------------  In creating this model, I made extensive use of the NAI-leak-free merge model study group.<br>  It kept my motivation going and sparked ideas, and I would like to thank everyone who encouraged me in making this model and everyone who shared model information.
[ "BEAR" ]
Undi95/MistralThinker-v1.1-GGUF
Undi95
null
[ "gguf", "roleplay", "deepseek", "rp", "r1", "mistral", "distill", "en", "fr", "base_model:mistralai/Mistral-Small-24B-Base-2501", "base_model:quantized:mistralai/Mistral-Small-24B-Base-2501", "endpoints_compatible", "region:us", "conversational" ]
2025-02-26T18:58:35Z
2025-02-27T12:34:19+00:00
417
8
--- base_model: - mistralai/Mistral-Small-24B-Base-2501 language: - en - fr tags: - roleplay - deepseek - rp - r1 - mistral - distill --- # MistralThinker Model Card Please read this: https://huggingface.co/Undi95/MistralThinker-v1.1/discussions/1 \ Prefill required for the Assistant: `<think>\n` ## Model Description **Model Name:** MistralThinker\ **Version:** 1.1\ **Prompt Format:** Mistral-V7 ``` [SYSTEM_PROMPT]{system prompt}[/SYSTEM_PROMPT][INST]{user message}[/INST]{assistant response}</s> ``` This model is a specialized variant of **Mistral-Small-24B-Base-2501**, adapted using a **DeepSeek R1** distillation process. It is **primarily designed for roleplay (RP) and storywriting** applications, focusing on character interactions, narrative generation, and creative storytelling. Approximately **40% of the training dataset** consists of roleplay/storywriting/character card data, ensuring rich and contextually immersive outputs in these domains. ## Model Sources - **Base Model:** [Mistral-Small-24B-Base-2501](https://huggingface.co/mistralai/Mistral-Small-24B-Base-2501) - **Fine-Tuning Approach:** DeepSeek R1 process (focused on RP) - **Dataset Size:** The training dataset has **doubled** in size since the last version, adding more neutral logs and training the base model to stick more closely to my new format. ## Intended Use - **Primary Use Cases:** - **Roleplay (RP):** Engaging with users in fictional or scenario-based interactions. - **Storywriting:** Generating narratives, character dialogues, and creative texts. - **Character Lore Generation:** Serving as a resource to craft or expand on character backstories and interactions. - **How To Use:** 1. **User-First Message:** The first message in any interaction should come from the user, ensuring the model responds in a narrative or roleplay context guided by user input. 2. **Contextual Information:** User or assistant details can be placed either in the system prompt or the user's first message.
A system prompt is **not mandatory**, but any contextual instructions or role descriptions can help set the stage. 3. **DeepSeek-Style Interaction:** The model can also be used purely as a **DeepSeek distill** without additional system prompts, providing flexible usage for direct storytelling or roleplay scenarios. The model may still be biased toward roleplay data; this is expected. ## Training Data - **DeepSeek R1 Thinking Process:** The model inherits a refined chain-of-thought (thinking process) from DeepSeek R1, which places heavy emphasis on **roleplay** and narrative coherence. - **Dataset Composition:** - 40%: RP/Storywriting/Character Cards - 60%: Various curated data for broad language, math, logic, space... understanding - **Data Scaling:** The dataset size was **doubled** compared to previous iterations, which enhances the model’s creative and contextual capabilities. ## Model Performance - **Strengths:** - **Storytelling & Roleplay:** Rich in creative generation, character portrayal, and scenario building. - **Dialogue & Interaction:** Capable of sustaining engaging and context-driven dialogues. - **Adaptability:** Can be used with or without a system prompt to match a range of user preferences. - **Limitations & Bias:** - **Hallucination:** It can generate fictitious information in the thinking process, yet still end up with a successful reply. - **Thinking can be dismissed:** Being in essence a distillation of DeepSeek R1, this model, even though trained on the base model, may forget to add `<think>\n` in some scenarios. ## Ethical Considerations - Yes ## Usage Recommendations 1. **System Prompt (Optional):** You may provide a high-level system prompt detailing the scenario or the desired style of roleplay and storywriting. _Example: "You are a friendly fantasy innkeeper who greets travelers from distant lands."_ 2. **User’s First Message:** - Must clearly state or imply the scenario or context if no system prompt is provided.
_Example: "Hello, I’m a wandering knight seeking shelter. Could you share a story about local legends?"_ 3. **Roleplay & Storywriting Focus:** - Encourage the model to develop characters, backstories, and immersive dialogues. - For more direct, unfiltered or freeform creativity, skip the system prompt. - If you still want to include some "logs" of previous messages before starting a conversation, put them in the first user message, or in the system prompt. - You can also put example messages of the character you RP with in the system prompt. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63ab1241ad514ca8d1430003/496T7tYNPF7FxM0fRvRMX.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63ab1241ad514ca8d1430003/CP3Nb7Jc6J0QQCHokgTJD.png)
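To make the Mistral-V7 template and the required `<think>\n` prefill concrete, here is a small illustrative sketch. The `build_prompt` helper is hypothetical (it is not shipped with the model); it only spells out the template and prefill documented above:

```python
def build_prompt(user_message, system_prompt="", history=()):
    """Assemble a Mistral-V7 prompt string ending with the assistant prefill.

    history is a sequence of (user, assistant) turn pairs from earlier in the
    conversation; this helper is an illustration, not part of the model files.
    """
    prompt = ""
    if system_prompt:  # optional, per the usage recommendations above
        prompt += f"[SYSTEM_PROMPT]{system_prompt}[/SYSTEM_PROMPT]"
    for user, assistant in history:
        # completed turns end with the </s> EOS marker
        prompt += f"[INST]{user}[/INST]{assistant}</s>"
    # the interaction starts from the user; prefill the assistant with <think>\n
    prompt += f"[INST]{user_message}[/INST]<think>\n"
    return prompt

print(build_prompt(
    "Hello, I’m a wandering knight seeking shelter.",
    system_prompt="You are a friendly fantasy innkeeper.",
))
```

The resulting string can be sent to any completion endpoint serving the model; the trailing `<think>\n` is the prefill the card asks for.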
[ "CRAFT" ]
ProdicusII/ZeroShotBioNER
ProdicusII
token-classification
[ "transformers", "pytorch", "bert", "token-classification", "biology", "medical", "zero-shot", "few-shot", "en", "dataset:bigbio/chemdner", "dataset:ncbi_disease", "dataset:jnlpba", "dataset:bigbio/n2c2_2018_track2", "dataset:bigbio/bc5cdr", "arxiv:2305.04928", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-05-04T11:11:23Z
2023-05-12T12:23:02+00:00
415
7
--- datasets: - bigbio/chemdner - ncbi_disease - jnlpba - bigbio/n2c2_2018_track2 - bigbio/bc5cdr language: - en library_name: transformers license: mit metrics: - precision - recall - f1 pipeline_tag: token-classification tags: - token-classification - biology - medical - zero-shot - few-shot widget: - text: Drug<SEP>He was given aspirin and paracetamol. --- # Zero- and few-shot NER for biomedical texts ## Model description This model was created during the research collaboration between Bayer Pharma and the Serbian Institute for Artificial Intelligence Research and Development. The model is trained on 25+ biomedical NER classes; it can also perform zero-shot inference and can be further fine-tuned for new classes with just a few examples (few-shot learning). For more details about our methods, please see the paper ["A transformer-based method for zero and few-shot biomedical named entity recognition"](https://arxiv.org/abs/2305.04928). The model corresponds to the BioBERT-based model trained with 1 in the first segment (see the paper for more details). The model takes two strings as input. String1 is the NER label that is searched for in the second string; it must be a phrase naming the entity. String2 is a short text in which String1 is searched for semantically. The model outputs a list of zeros and ones, one per token of String2 (tokens as produced by the transformer tokenizer), marking where the named entity occurs. ## Example of usage ```python from transformers import AutoTokenizer from transformers import BertForTokenClassification modelname = 'ProdicusII/ZeroShotBioNER' # model path tokenizer = AutoTokenizer.from_pretrained(modelname) # load the tokenizer of that model string1 = 'Drug' string2 = 'No recent antibiotics or other nephrotoxins, and no symptoms of UTI with benign UA.'
encodings = tokenizer(string1, string2, is_split_into_words=False, padding=True, truncation=True, add_special_tokens=True, return_offsets_mapping=False, max_length=512, return_tensors='pt') model0 = BertForTokenClassification.from_pretrained(modelname, num_labels=2) prediction_logits = model0(**encodings) print(prediction_logits) ``` ## Example of fine-tuning with few-shot learning In order to fine-tune the model to a new entity using a few shots, the dataset needs to be transformed into a torch.utils.data.Dataset containing BERT tokens and a set of 0s and 1s (1 where the token should be predicted as a member of the given NER class). After the dataset is created, the following can be done (for more details, please have a look at the code on GitHub - https://github.com/br-ai-ns-institute/Zero-ShotNER): ```python training_args = TrainingArguments( output_dir=os.path.join('Results', class_unseen, str(j)+'Shot'), # folder for results num_train_epochs=10, # number of epochs per_device_train_batch_size=16, # batch size per device during training per_device_eval_batch_size=16, # batch size for evaluation weight_decay=0.01, # strength of weight decay logging_dir=os.path.join('Logs', class_unseen, str(j)+'Shot'), # folder for logs save_strategy='epoch', evaluation_strategy='epoch', load_best_model_at_end=True, ) model0 = BertForTokenClassification.from_pretrained(model_path, num_labels=2) trainer = Trainer( model=model0, # pretrained model args=training_args, # training arguments train_dataset=dataset, # Object of class torch.utils.data.Dataset for training eval_dataset=dataset_valid # Object of class torch.utils.data.Dataset for validation ) start_time = time.time() trainer.train() total_time = time.time()-start_time model0_path = os.path.join('Results', class_unseen, str(j)+'Shot', 'Model') os.makedirs(model0_path, exist_ok=True) trainer.save_model(model0_path) ``` ## Available classes The following datasets and entities were used for training, and therefore they
can be used as labels in the first segment (as the first string). Note that multiword strings have been merged. * NCBI * Specific Disease * Composite Mention * Modifier * Disease Class * BIORED * Sequence Variant * Gene Or Gene Product * Disease Or Phenotypic Feature * Chemical Entity * Cell Line * Organism Taxon * CDR * Disease * Chemical * CHEMDNER * Chemical * Chemical Family * JNLPBA * Protein * DNA * Cell Type * Cell Line * RNA * n2c2 * Drug * Frequency * Strength * Dosage * Form * Reason * Route * ADE * Duration On top of this, one can use the model in a zero-shot regime with other classes, and also fine-tune it with a few examples of other classes. ## Code availability Code used for training and testing the model is available at https://github.com/br-ai-ns-institute/Zero-ShotNER ## Citation If you use this model, or are inspired by it, please cite the following paper: Košprdić M., Prodanović N., Ljajić A., Bašaragin B., Milošević N., 2023. A transformer-based method for zero and few-shot biomedical named entity recognition. arXiv preprint arXiv:2305.04928. https://arxiv.org/abs/2305.04928 or in bibtex: ``` @misc{kosprdic2023transformerbased, title={A transformer-based method for zero and few-shot biomedical named entity recognition}, author={Miloš Košprdić and Nikola Prodanović and Adela Ljajić and Bojana Bašaragin and Nikola Milošević}, year={2023}, eprint={2305.04928}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
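The model's per-token outputs described above are 0/1 decisions, where 1 marks membership in the queried class. A minimal sketch of turning such a prediction sequence back into entity spans — the helper function and the toy inputs are ours for illustration, not part of the released code:

```python
def extract_spans(tokens, predictions):
    """Group consecutive tokens predicted as 1 into entity spans."""
    spans, current = [], []
    for token, pred in zip(tokens, predictions):
        if pred == 1:
            current.append(token)
        elif current:
            spans.append(" ".join(current))
            current = []
    if current:
        spans.append(" ".join(current))
    return spans

# Toy example: per-token predictions for the class "Specific Disease"
tokens = ["The", "patient", "has", "cystic", "fibrosis", "."]
predictions = [0, 0, 0, 1, 1, 0]
print(extract_spans(tokens, predictions))  # ['cystic fibrosis']
```

In the real pipeline, the tokens come from the BERT tokenizer and the 0/1 predictions from an argmax over the two logits per token.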
[ "BC5CDR", "BIORED", "CHEMDNER", "JNLPBA", "NCBI DISEASE" ]
AdaptLLM/law-LLM
AdaptLLM
text-generation
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "legal", "en", "dataset:Open-Orca/OpenOrca", "dataset:GAIR/lima", "dataset:WizardLM/WizardLM_evol_instruct_V2_196k", "dataset:EleutherAI/pile", "arxiv:2309.09530", "arxiv:2411.19930", "arxiv:2406.14491", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2023-09-18T13:44:51Z
2024-12-02T06:25:22+00:00
414
72
--- datasets: - Open-Orca/OpenOrca - GAIR/lima - WizardLM/WizardLM_evol_instruct_V2_196k - EleutherAI/pile language: - en metrics: - accuracy pipeline_tag: text-generation tags: - legal --- # Adapting LLMs to Domains via Continual Pre-Training (ICLR 2024) This repo contains the domain-specific base model developed from **LLaMA-1-7B**, using the method in our paper [Adapting Large Language Models via Reading Comprehension](https://huggingface.co/papers/2309.09530). We explore **continued pre-training on domain-specific corpora** for large language models. While this approach enriches LLMs with domain knowledge, it significantly hurts their prompting ability for question answering. Inspired by human learning via reading comprehension, we propose a simple method to **transform large-scale pre-training corpora into reading comprehension texts**, consistently improving prompting performance across tasks in biomedicine, finance, and law domains. **Our 7B model competes with much larger domain-specific models like BloombergGPT-50B**. 
### [2024/11/29] 🤗 Introduce the multimodal version of AdaptLLM at [AdaMLLM](https://huggingface.co/papers/2411.19930), for adapting MLLMs to domains 🤗 **************************** **Updates** **************************** * 2024/11/29: Released [AdaMLLM](https://huggingface.co/AdaptLLM/Adapt-MLLM-to-Domains) for adapting MLLMs to domains * 2024/9/20: Our [research paper for Instruction-Pretrain](https://huggingface.co/papers/2406.14491) has been accepted by EMNLP 2024 * 2024/8/29: Updated [guidelines](https://huggingface.co/datasets/AdaptLLM/finance-tasks) on evaluating any 🤗Huggingface models on the domain-specific tasks * 2024/6/22: Released the [benchmarking code](https://github.com/microsoft/LMOps/tree/main/adaptllm) * 2024/6/21: Released the general version of AdaptLLM at [Instruction-Pretrain](https://huggingface.co/instruction-pretrain) * 2024/4/2: Released the [raw data splits (train and test)](https://huggingface.co/datasets/AdaptLLM/ConvFinQA) of all the evaluation datasets * 2024/1/16: Our [research paper for AdaptLLM](https://huggingface.co/papers/2309.09530) has been accepted by ICLR 2024 * 2023/12/19: Released our [13B base models](https://huggingface.co/AdaptLLM/law-LLM-13B) developed from LLaMA-1-13B * 2023/12/8: Released our [chat models](https://huggingface.co/AdaptLLM/law-chat) developed from LLaMA-2-Chat-7B * 2023/9/18: Released our [paper](https://huggingface.co/papers/2309.09530), [code](https://github.com/microsoft/LMOps), [data](https://huggingface.co/datasets/AdaptLLM/law-tasks), and [base models](https://huggingface.co/AdaptLLM/law-LLM) developed from LLaMA-1-7B ## 1. 
Domain-Specific Models ### LLaMA-1-7B In our paper, we develop three domain-specific models from LLaMA-1-7B, which are also available on Huggingface: [Biomedicine-LLM](https://huggingface.co/AdaptLLM/medicine-LLM), [Finance-LLM](https://huggingface.co/AdaptLLM/finance-LLM) and [Law-LLM](https://huggingface.co/AdaptLLM/law-LLM). The performance of our AdaptLLM models compared to other domain-specific LLMs is shown below: <p align='center'> <img src="https://cdn-uploads.huggingface.co/production/uploads/650801ced5578ef7e20b33d4/6efPwitFgy-pLTzvccdcP.png" width="700"> </p> ### LLaMA-1-13B Moreover, we scale up our base model to LLaMA-1-13B to see if **our method is similarly effective for larger-scale models**, and the results are consistently positive too: [Biomedicine-LLM-13B](https://huggingface.co/AdaptLLM/medicine-LLM-13B), [Finance-LLM-13B](https://huggingface.co/AdaptLLM/finance-LLM-13B) and [Law-LLM-13B](https://huggingface.co/AdaptLLM/law-LLM-13B). ### LLaMA-2-Chat Our method is also effective for aligned models! LLaMA-2-Chat requires a [specific data format](https://huggingface.co/blog/llama2#how-to-prompt-llama-2), and our **reading comprehension texts fit this data format perfectly** when transformed into a multi-turn conversation. We have also open-sourced chat models in different domains: [Biomedicine-Chat](https://huggingface.co/AdaptLLM/medicine-chat), [Finance-Chat](https://huggingface.co/AdaptLLM/finance-chat) and [Law-Chat](https://huggingface.co/AdaptLLM/law-chat). 
For example, to chat with the law base model (🤗we highly recommend switching to the [chat model](https://huggingface.co/AdaptLLM/law-chat) for better response quality): ```python from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("AdaptLLM/law-LLM") tokenizer = AutoTokenizer.from_pretrained("AdaptLLM/law-LLM", use_fast=False) # Put your input here: user_input = '''Question: Which of the following is false about ex post facto laws? Options: - They make criminal an act that was innocent when committed. - They prescribe greater punishment for an act than was prescribed when it was done. - They increase the evidence required to convict a person than when the act was done. - They alter criminal offenses or punishment in a substantially prejudicial manner for the purpose of punishing a person for some past activity. Please provide your choice first and then provide explanations if possible.''' # Simply use your input as the prompt for base models prompt = user_input inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).input_ids.to(model.device) outputs = model.generate(input_ids=inputs, max_length=2048)[0] answer_start = int(inputs.shape[-1]) pred = tokenizer.decode(outputs[answer_start:], skip_special_tokens=True) print(pred) ``` ### LLaMA-3-8B (💡New!) In our recent research on [Instruction-Pretrain](https://huggingface.co/papers/2406.14491), we developed a context-based instruction synthesizer to augment the raw corpora with instruction-response pairs, **enabling Llama3-8B to be comparable to or even outperform Llama3-70B**: [Finance-Llama3-8B](https://huggingface.co/instruction-pretrain/finance-Llama3-8B), [Biomedicine-Llama3-8B](https://huggingface.co/instruction-pretrain/medicine-Llama3-8B). ## 2. 
Domain-Specific Tasks ### Pre-templatized Testing Splits To easily reproduce our prompting results, we have uploaded the filled-in zero/few-shot input instructions and output completions of the test split of each domain-specific task: [biomedicine-tasks](https://huggingface.co/datasets/AdaptLLM/medicine-tasks), [finance-tasks](https://huggingface.co/datasets/AdaptLLM/finance-tasks), and [law-tasks](https://huggingface.co/datasets/AdaptLLM/law-tasks). Note: those filled-in instructions are specifically tailored for models before alignment and do NOT fit the specific data format required for chat models. ### Evaluating Any Huggingface LMs on Domain-Specific Tasks (💡New!) You can use the following script to reproduce our results and evaluate any other Huggingface models on domain-specific tasks. Note that the script is NOT applicable to models that require specific prompt templates (e.g., Llama2-chat, Llama3-Instruct). 1). **Set Up Dependencies** ```bash git clone https://github.com/microsoft/LMOps cd LMOps/adaptllm pip install -r requirements.txt ``` 2). **Evaluate the Model** ```bash # Select the domain from ['biomedicine', 'finance', 'law'] DOMAIN='law' # Specify any Huggingface model name (Not applicable to chat models) MODEL='AdaptLLM/law-LLM' # Model parallelization: # - Set MODEL_PARALLEL=False if the model fits on a single GPU. # We observe that LMs smaller than 10B always meet this requirement. # - Set MODEL_PARALLEL=True if the model is too large and encounters OOM on a single GPU. MODEL_PARALLEL=False # Choose the number of GPUs from [1, 2, 4, 8] N_GPU=1 # Whether to add a BOS token at the beginning of the prompt input: # - Set to False for AdaptLLM. # - Set to True for instruction-pretrain models. # If unsure, we recommend setting it to False, as this is suitable for most LMs. 
add_bos_token=False # Run the evaluation script bash scripts/inference.sh ${DOMAIN} ${MODEL} ${add_bos_token} ${MODEL_PARALLEL} ${N_GPU} ``` ### Raw Datasets We have also uploaded the raw training and testing splits, to facilitate fine-tuning and other uses: [ChemProt](https://huggingface.co/datasets/AdaptLLM/ChemProt), [RCT](https://huggingface.co/datasets/AdaptLLM/RCT), [ConvFinQA](https://huggingface.co/datasets/AdaptLLM/ConvFinQA), [FiQA_SA](https://huggingface.co/datasets/AdaptLLM/FiQA_SA), [Headline](https://huggingface.co/datasets/AdaptLLM/Headline), [NER](https://huggingface.co/datasets/AdaptLLM/NER), [FPB](https://huggingface.co/datasets/AdaptLLM/FPB) ### Domain Knowledge Probing Our pre-processed knowledge probing datasets are available at: [med_knowledge_prob](https://huggingface.co/datasets/AdaptLLM/med_knowledge_prob) and [law_knowledge_prob](https://huggingface.co/datasets/AdaptLLM/law_knowledge_prob) ## Citation If you find our work helpful, please cite us: ```bibtex @inproceedings{ cheng2024adapting, title={Adapting Large Language Models via Reading Comprehension}, author={Daixuan Cheng and Shaohan Huang and Furu Wei}, booktitle={The Twelfth International Conference on Learning Representations}, year={2024}, url={https://openreview.net/forum?id=y886UXPEZ0} } ```
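The pre-templatized splits above pair each filled-in input instruction with an output completion, so a k-shot prompt is simply k demonstrations concatenated ahead of the test input. A minimal sketch — the separator and field layout are our assumption, not the exact template used in the evaluation code:

```python
def build_few_shot_prompt(demos, test_input):
    """Concatenate (input, completion) demonstrations ahead of the test input."""
    parts = [f"{inp}\n{out}" for inp, out in demos]
    parts.append(test_input)
    return "\n\n".join(parts)

# Toy demonstrations (placeholders, not drawn from the released splits)
demos = [
    ("Question: Is the sky blue? Answer:", "Yes"),
    ("Question: Is fire cold? Answer:", "No"),
]
prompt = build_few_shot_prompt(demos, "Question: Is water wet? Answer:")
print(prompt)
```

For base models like AdaptLLM, such a prompt is fed directly to the tokenizer with no chat template applied.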
[ "CHEMPROT" ]
sinequa/vectorizer.raspberry
sinequa
sentence-similarity
[ "transformers", "pytorch", "xlm-roberta", "fill-mask", "feature-extraction", "sentence-similarity", "de", "en", "es", "fr", "it", "nl", "ja", "pt", "zh", "arxiv:2012.15828", "arxiv:2108.13897", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
2023-07-11T07:35:15Z
2024-02-19T09:39:23+00:00
411
0
--- language: - de - en - es - fr - it - nl - ja - pt - zh pipeline_tag: sentence-similarity tags: - feature-extraction - sentence-similarity --- # Model Card for `vectorizer.raspberry` This model is a vectorizer developed by Sinequa. It produces an embedding vector given a passage or a query. The passage vectors are stored in our vector index and the query vector is used at query time to look up relevant passages in the index. Model name: `vectorizer.raspberry` ## Supported Languages The model was trained and tested in the following languages: - English - French - German - Spanish - Italian - Dutch - Japanese - Portuguese - Chinese (simplified) Besides these languages, basic support can be expected for the 91 additional languages that were used during the pretraining of the base model (see Appendix A of the XLM-R paper). ## Scores | Metric | Value | |:-----------------------|------:| | Relevance (Recall@100) | 0.613 | Note that the relevance score is computed as an average over 14 retrieval datasets (see [details below](#evaluation-metrics)). ## Inference Times | GPU | Quantization type | Batch size 1 | Batch size 32 | |:------------------------------------------|:------------------|---------------:|---------------:| | NVIDIA A10 | FP16 | 1 ms | 5 ms | | NVIDIA A10 | FP32 | 2 ms | 18 ms | | NVIDIA T4 | FP16 | 1 ms | 12 ms | | NVIDIA T4 | FP32 | 3 ms | 52 ms | | NVIDIA L4 | FP16 | 2 ms | 5 ms | | NVIDIA L4 | FP32 | 4 ms | 24 ms | ## GPU Memory Usage | Quantization type | Memory | |:-------------------------------------------------|-----------:| | FP16 | 550 MiB | | FP32 | 1050 MiB | Note that GPU memory usage only includes how much GPU memory the actual model consumes on an NVIDIA T4 GPU with a batch size of 32. It does not include the fixed amount of memory consumed by the ONNX Runtime upon initialization, which can be around 0.5 to 1 GiB depending on the GPU used. 
## Requirements - Minimal Sinequa version: 11.10.0 - Minimal Sinequa version for using FP16 models and GPUs with CUDA compute capability of 8.9+ (like NVIDIA L4): 11.11.0 - [Cuda compute capability](https://developer.nvidia.com/cuda-gpus): above 5.0 (above 6.0 for FP16 use) ## Model Details ### Overview - Number of parameters: 107 million - Base language model: [mMiniLMv2-L6-H384-distilled-from-XLMR-Large](https://huggingface.co/nreimers/mMiniLMv2-L6-H384-distilled-from-XLMR-Large) ([Paper](https://arxiv.org/abs/2012.15828), [GitHub](https://github.com/microsoft/unilm/tree/master/minilm)) - Insensitive to casing and accents - Output dimensions: 256 (reduced with an additional dense layer) - Training procedure: Query-passage-negative triplets for datasets that have mined hard negative data, query-passage pairs for the rest. The number of negatives is augmented with an in-batch negatives strategy ### Training Data The model has been trained on all datasets cited in the [all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) model card. In addition, this model has been trained on the datasets cited in [this paper](https://arxiv.org/pdf/2108.13897.pdf) in the 9 aforementioned languages. ### Evaluation Metrics To determine the relevance score, we averaged the results that we obtained when evaluating on the datasets of the [BEIR benchmark](https://github.com/beir-cellar/beir). Note that all these datasets are in English. 
| Dataset | Recall@100 | |:------------------|-----------:| | Average | 0.613 | | | | | Arguana | 0.957 | | CLIMATE-FEVER | 0.468 | | DBPedia Entity | 0.377 | | FEVER | 0.820 | | FiQA-2018 | 0.639 | | HotpotQA | 0.560 | | MS MARCO | 0.845 | | NFCorpus | 0.287 | | NQ | 0.756 | | Quora | 0.992 | | SCIDOCS | 0.456 | | SciFact | 0.906 | | TREC-COVID | 0.100 | | Webis-Touche-2020 | 0.413 | We evaluated the model on the datasets of the [MIRACL benchmark](https://github.com/project-miracl/miracl) to test its multilingual capacities. Note that not all training languages are part of the benchmark, so we only report the metrics for the existing languages. | Language | Recall@100 | |:----------------------|-----------:| | French | 0.650 | | German | 0.528 | | Spanish | 0.602 | | Japanese | 0.614 | | Chinese (simplified) | 0.680 |
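As the card notes, passage vectors live in the vector index and the query vector is compared against them at query time. A minimal sketch of that lookup with cosine similarity — the toy 3-dimensional vectors below stand in for the model's real 256-dimensional embeddings:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy index of passage vectors (placeholders for real 256-dim embeddings)
index = {
    "passage_a": [0.9, 0.1, 0.0],
    "passage_b": [0.1, 0.8, 0.1],
}
query_vec = [1.0, 0.0, 0.0]

# Rank passages by similarity to the query vector
ranked = sorted(index.items(), key=lambda kv: cosine(query_vec, kv[1]), reverse=True)
print([name for name, _ in ranked])  # ['passage_a', 'passage_b']
```

In the Sinequa product this lookup happens inside the vector index rather than in user code; the sketch only illustrates the underlying computation.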
[ "SCIFACT" ]
HPAI-BSC/Qwen2.5-Aloe-Beta-7B
HPAI-BSC
question-answering
[ "transformers", "safetensors", "qwen2", "text-generation", "biology", "medical", "healthcare", "question-answering", "en", "dataset:HPAI-BSC/Aloe-Beta-General-Collection", "dataset:HPAI-BSC/chain-of-diagnosis", "dataset:HPAI-BSC/MedS-Ins", "dataset:HPAI-BSC/ultramedical", "dataset:HPAI-BSC/pubmedqa-cot-llama31", "dataset:HPAI-BSC/medqa-cot-llama31", "dataset:HPAI-BSC/medmcqa-cot-llama31", "dataset:HPAI-BSC/headqa-cot-llama31", "dataset:HPAI-BSC/MMLU-medical-cot-llama31", "dataset:HPAI-BSC/Polymed-QA", "arxiv:2405.01886", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2024-12-09T13:01:22Z
2025-01-22T14:20:53+00:00
407
5
--- datasets: - HPAI-BSC/Aloe-Beta-General-Collection - HPAI-BSC/chain-of-diagnosis - HPAI-BSC/MedS-Ins - HPAI-BSC/ultramedical - HPAI-BSC/pubmedqa-cot-llama31 - HPAI-BSC/medqa-cot-llama31 - HPAI-BSC/medmcqa-cot-llama31 - HPAI-BSC/headqa-cot-llama31 - HPAI-BSC/MMLU-medical-cot-llama31 - HPAI-BSC/Polymed-QA language: - en library_name: transformers license: apache-2.0 pipeline_tag: question-answering tags: - biology - medical - healthcare --- <p align="center"> <picture> <source media="(prefers-color-scheme: dark)" srcset="https://cdn-uploads.huggingface.co/production/uploads/6620f941eba5274b5c12f83d/ARcIVTFxuBMV5DKooCgJH.png"> <img alt="aloe_beta_7b" src="https://cdn-uploads.huggingface.co/production/uploads/6620f941eba5274b5c12f83d/ARcIVTFxuBMV5DKooCgJH.png" width=50%> </picture> </p> <h1 align="center"> Aloe: A Family of Fine-tuned Open Healthcare LLMs </h1> --- Qwen2.5-Aloe-Beta-7B is an **open healthcare LLM** achieving **state-of-the-art performance** on several medical tasks. Aloe Beta is made available in four model sizes: [7B](https://huggingface.co/HPAI-BSC/Qwen2.5-Aloe-Beta-7B/), [8B](https://huggingface.co/HPAI-BSC/Llama3.1-Aloe-Beta-8B), [70B](https://huggingface.co/HPAI-BSC/Llama3.1-Aloe-Beta-70B), and [72B](https://huggingface.co/HPAI-BSC/Qwen2.5-Aloe-Beta-72B). All models are trained using the same recipe, on top of two different families of models: Llama3.1 and Qwen2.5. Aloe is trained on 20 medical tasks, resulting in a robust and versatile healthcare model. Evaluations show Aloe models to be among the best in their class. When combined with a RAG system ([also released](https://github.com/HPAI-BSC/prompt_engine)), the 7B and 8B versions get close to the performance of closed models like MedPalm-2 and GPT-4. With the same RAG system, Llama3.1-Aloe-Beta-70B and Qwen2.5-Aloe-Beta-72B outperform those private alternatives, producing state-of-the-art results. 
# Aloe-Beta-7B ![image/png](https://cdn-uploads.huggingface.co/production/uploads/62f7a16192950415b637e201/VUYw4IdANKGrH2VOedwH0.png) **Aloe-Beta** is the latest iteration in the **Aloe family**, building and improving on the success of its predecessor, [Aloe-8B-Alpha](https://huggingface.co/HPAI-BSC/Llama3-Aloe-8B-Alpha). Beta more than triples the training data used by Alpha, for a total of **1.8B tokens**, including a wider variety of medical tasks and instructions (e.g., text summarization, explanation, diagnosis, text classification, treatment recommendation, ...). ![image/png](https://cdn-uploads.huggingface.co/production/uploads/62f7a16192950415b637e201/bCuV5kZUT9H9UECAOWDRc.png) To mitigate catastrophic forgetting and enable the model to effectively learn new capabilities like **function calling**, we incorporated a diverse set of high-quality general-purpose data constituting 20% of the total training set. The curated data includes some of the highest-quality content available across a range of topics, including mathematics, programming, STEM, and very long instructions (> 8k tokens), to enrich the model's adaptability and comprehension across diverse domains. Beta also boosts the alignment and safety stages with respect to Alpha. This includes a [medical preference dataset](https://huggingface.co/datasets/TsinghuaC3I/UltraMedical-Preference), as well as the red-teaming dataset (available soon). Complete training details, model merging configurations, and all training data (including synthetically generated data) can be found below. This includes [the RAG system](https://github.com/HPAI-BSC/prompt_engine) that was developed to test Aloe Beta in a deployment setup. Aloe comes with a healthcare-specific risk assessment to facilitate the safe use and deployment of such systems. 
## Model Details ### [](https://huggingface.co/templates/model-card-example#model-description)Model Description - **Developed by:** [HPAI](https://hpai.bsc.es/) - **Model type:** Causal decoder-only transformer language model - **Language(s) (NLP):** English (capable but not formally evaluated on other languages) - **License:** This model is based on [Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B), which is released with an Apache 2.0 license. All our modifications are available with a [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/) license, making the Aloe Beta models **compatible with commercial use**. - **Base model:** [Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B) - **Paper:** (more coming soon) - **RAG Repository:** https://github.com/HPAI-BSC/prompt_engine ### [](https://huggingface.co/templates/model-card-example#model-sources-optional)Model Sources [optional] ## Model Performance Aloe Beta has been tested on the most popular healthcare QA datasets, with and without the Medprompt inference technique. Results show competitive performance, achieving SOTA within models of the same size. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6620f941eba5274b5c12f83d/J-PoCeKPRTPFb8wtQCQ07.png) The Beta model has been developed to excel in several different medical tasks. For this reason, we evaluated the model on many different medical tasks: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6620f941eba5274b5c12f83d/TNPOcqIubfgV6-36LHYkP.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6620f941eba5274b5c12f83d/vZliynuEKdya8rdENbO_Z.png) We also compared the performance of the model in the general domain, using the OpenLLM Leaderboard benchmark. 
Aloe-Beta gets competitive results with the current SOTA general models in the most used general benchmarks and outperforms the medical models: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6620f941eba5274b5c12f83d/qJAD38D8XRogP3vlgFf8z.png) ## Uses ### Direct Use We encourage the use of Aloe for research purposes, as a stepping stone to build better foundational models for healthcare. In production, Aloe should always be used under the supervision of a human expert. ### Out-of-Scope Use These models are not to be used for clinical practice, medical diagnosis, or any other form of direct or indirect healthcare advice. Models are prone to error and can produce toxic content. The use of Aloe models for activities harmful to individuals, such as spam, fraud, or impersonation, is strictly prohibited. Minors should not be left alone to interact with Aloe without supervision. ## Bias, Risks, and Limitations Aloe can produce toxic content under the appropriate prompts, and it includes multiple undesirable biases. While significant efforts were made to mitigate this (see Alignment details below), model safety cannot be fully guaranteed. We avoid the use of all personal data in our training. We identify at least three risk cases specific to healthcare LLMs: - Healthcare professional impersonation, a fraudulent behaviour which currently generates billions of dollars in [profit](https://www.justice.gov/opa/pr/justice-department-charges-dozens-12-billion-health-care-fraud). A model such as Aloe could be used to increase the efficacy of such deceiving activities, making them more widespread. The main preventive actions are public literacy on the unreliability of digitised information and the importance of medical registration, and legislation enforcing AI-generated content disclaimers. - Medical decision-making without professional supervision. 
While this is already an issue in modern societies (e.g., self-medication), a model such as Aloe, capable of producing high-quality conversational data, can facilitate self-delusion, particularly in the presence of sycophancy. By producing tailored responses, it can also be used to generate actionable answers. Public literacy on the dangers of self-diagnosis is one of the main defenses, together with the introduction of disclaimers and warnings on the models' outputs. - Access to information on dangerous substances or procedures. While the literature on sensitive content can already be found in different sources (e.g., libraries, the internet, the dark web), LLMs can centralize such access, making it nearly impossible to control the flow of such information. Model alignment can help in that regard, but so far the effects remain insufficient, as jailbreaking methods still overcome it. <!--- Table below shows the performance of Aloe at several AI safety tasks: TO BE UPDATED <img src="https://cdn-uploads.huggingface.co/production/uploads/62972c4979f193515da1d38e/T6Jblpf1kmTkM04K716rM.png" width="95%"> We analyzed the safety and robustness of the model using red teaming techniques. We designed a benchmark using different types of attacks and analyzed the performance of Aloe and some extra models, and we confirm that our model is aligned properly and successfully resisting most attacks: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6620f941eba5274b5c12f83d/KS3yrHan1l1W0cYiXGG-G.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6620f941eba5274b5c12f83d/SYC0qljpLGLmMgx0a623W.png) --> ## How to Get Started with the Model Use the code below to get started with the model. You can run conversational inference using the Transformers pipeline abstraction, or by leveraging the Auto classes with the `generate()` function. Let's see examples for both. 
#### Transformers pipeline ```python import transformers import torch model_id = "HPAI-BSC/Qwen2.5-Aloe-Beta-7B" pipeline = transformers.pipeline( "text-generation", model=model_id, model_kwargs={"torch_dtype": torch.bfloat16}, device_map="auto", ) messages = [ {"role": "system", "content": "You are an expert medical assistant named Aloe, developed by the High Performance Artificial Intelligence Group at Barcelona Supercomputing Center(BSC). You are to be a helpful, respectful, and honest assistant."}, {"role": "user", "content": "Hello."}, ] prompt = pipeline.tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) terminators = [ pipeline.tokenizer.eos_token_id, pipeline.tokenizer.convert_tokens_to_ids("<|im_end|>") ] outputs = pipeline( prompt, max_new_tokens=256, eos_token_id=terminators, do_sample=True, temperature=0.7, top_p=0.8, top_k=20, repetition_penalty=1.05 ) print(outputs[0]["generated_text"][len(prompt):]) ``` #### Transformers AutoModelForCausalLM ```python from transformers import AutoTokenizer, AutoModelForCausalLM import torch model_id = "HPAI-BSC/Qwen2.5-Aloe-Beta-7B" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained( model_id, torch_dtype=torch.bfloat16, device_map="auto", ) messages = [ {"role": "system", "content": "You are an expert medical assistant named Aloe, developed by the High Performance Artificial Intelligence Group at Barcelona Supercomputing Center(BSC). 
You are to be a helpful, respectful, and honest assistant."}, {"role": "user", "content": "Hello"}, ] input_ids = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt" ).to(model.device) terminators = [ tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<|im_end|>") ] outputs = model.generate( input_ids, max_new_tokens=256, eos_token_id=terminators, do_sample=True, temperature=0.7, top_p=0.8, top_k=20, repetition_penalty=1.05 ) response = outputs[0][input_ids.shape[-1]:] print(tokenizer.decode(response, skip_special_tokens=True)) ``` ## Training Details ### Supervised fine-tuning SFT on top of Qwen2.5-7B using axolotl (https://github.com/axolotl-ai-cloud/axolotl). We used DeepSpeed's ZeRO-3 distributed training with the following hardware: * 7B: 32x NVIDIA Hopper H100 64GB of the *Marenostrum 5*. * 8B: 32x NVIDIA Hopper H100 64GB of the *Marenostrum 5*. * 70B: 64x NVIDIA Hopper H100 64GB of the *Marenostrum 5*. * 72B: 92x NVIDIA Hopper H100 64GB of the *Marenostrum 5*. <!--- ^^^ TO BE COMPLETED AND DETAILED ^^^ --> #### Training Data The training set consists of around 1.8B tokens, comprising 3 different types of data: - Medical domain datasets. Includes data from 20 different medical tasks. - [HPAI-BSC/Aloe-Beta-General-Collection](https://huggingface.co/datasets/HPAI-BSC/Aloe-Beta-General-Collection) - [HPAI-BSC/chain-of-diagnosis](https://huggingface.co/datasets/HPAI-BSC/chain-of-diagnosis) - [HPAI-BSC/MedS-Ins](https://huggingface.co/datasets/HPAI-BSC/MedS-Ins) - [HPAI-BSC/ultramedical](https://huggingface.co/datasets/HPAI-BSC/ultramedical) - Synthetic data. We expanded our training data by generating high-quality answers using Llama3.1-70B. 
- [HPAI-BSC/pubmedqa-cot-llama31](https://huggingface.co/datasets/HPAI-BSC/pubmedqa-cot-llama31) - [HPAI-BSC/medqa-cot-llama31](https://huggingface.co/datasets/HPAI-BSC/medqa-cot-llama31) - [HPAI-BSC/medmcqa-cot-llama31](https://huggingface.co/datasets/HPAI-BSC/medmcqa-cot-llama31) - [HPAI-BSC/headqa-cot-llama31](https://huggingface.co/datasets/HPAI-BSC/headqa-cot-llama31) - [HPAI-BSC/MMLU-medical-cot-llama31](https://huggingface.co/datasets/HPAI-BSC/MMLU-medical-cot-llama31) - [HPAI-BSC/Polymed-QA](https://huggingface.co/datasets/HPAI-BSC/Polymed-QA) - Genstruct data (coming soon) - General data. It includes maths, STEM, code, function calling, and instructions with a very long context. - [HPAI-BSC/Aloe-Beta-General-Collection](https://huggingface.co/datasets/HPAI-BSC/Aloe-Beta-General-Collection) #### Training parameters - Epochs: 3 - Sequence length: 16384 - Optimizer: adamw_torch - Learning rate: 1e-5 - Learning rate scheduler: cosine - Warmup steps: 100 - Weight decay: 0 - Gradient checkpointing - Zero 3 - Total batch size: 128 - Batch size per device: 1 - Gradient accumulation steps: 4 ### Model Merging The trained model was merged with the Qwen2.5-7B-Instruct model using the DARE_TIES technique. [Mergekit](https://github.com/arcee-ai/mergekit) was used to conduct the merging. ### Model Alignment The model is aligned using the Direct Preference Optimization (DPO) technique through a two-step process: 1. General DPO Alignment: This step uses a dataset combining medical, general preference, and safety data. We used our dataset [HPAI-BSC/Aloe-Beta-DPO](https://huggingface.co/datasets/HPAI-BSC/Aloe-Beta-DPO). We split the dataset into five parts, and the model was trained iteratively for one epoch on each chunk. We used a learning rate of 2e-7. 2. Red-Teaming Alignment: This step further fine-tunes the model to resist a variety of potential attacks, enhancing its robustness and security. The dataset will be shared soon. 
In this stage, we set the learning rate to 1e-7. <!--- ^^^ LINKS TO DPO DATA (DPO added, missing the RT^^^ --> We used the [OpenRLHF](https://github.com/OpenRLHF/OpenRLHF) library. We aligned the model using 16x NVIDIA Hopper H100 64GB of the *Marenostrum 5*. Common hyperparameters: - Sequence length: 4096 - Optimizer: Fused adam - Total batch size: 128 - Batch size per device: 1 - Gradient accumulation steps: 8 - Beta: 0.1 ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data - [ACI-BENCH](https://github.com/wyim/aci-bench) - [MTS-Dialog](https://github.com/abachaa/MTS-Dialog) - [MedText](https://huggingface.co/datasets/BI55/MedText) - [Medical Text classification](https://www.kaggle.com/datasets/chaitanyakck/medical-text/data) - [OLAPH](https://github.com/dmis-lab/OLAPH) - CareQA Open - [MedDialog](https://huggingface.co/datasets/bigbio/meddialog) - [MEDIQA QA](https://huggingface.co/datasets/bigbio/mediqa_qa) - [Meddialog Qsumm](https://huggingface.co/datasets/lighteval/med_dialog) - [Biored](https://huggingface.co/datasets/YufeiHFUT/BioRED_all_info) - [MIMIC-III](https://huggingface.co/datasets/dmacres/mimiciii-hospitalcourse-meta) - [Medical Prescription](https://huggingface.co/datasets/devlocalhost/prescription-full) - [MedQA (USMLE)](https://huggingface.co/datasets/bigbio/med_qa) - [MedMCQA](https://huggingface.co/datasets/medmcqa) - [PubMedQA](https://huggingface.co/datasets/bigbio/pubmed_qa) - [MMLU-Medical](https://huggingface.co/datasets/lukaemon/mmlu) - [MedQA-4-Option](https://huggingface.co/datasets/GBaker/MedQA-USMLE-4-options) - [CareQA](https://huggingface.co/datasets/HPAI-BSC/CareQA) - [Open LLM Leaderboard 2](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) <!--- ^^^ CAREQA Open link MISSING ^^^ --> #### Metrics - Accuracy: suited to the evaluation of multiple-choice question-answering tasks. - Rouge1: refers to the overlap of unigrams between the system and the gold standard. 
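ROUGE-1, listed above, measures unigram overlap between the system output and the gold standard. A minimal pure-Python illustration of the metric — the evaluations themselves presumably use a standard scorer library, not this sketch:

```python
from collections import Counter

def rouge1(candidate, reference):
    """Unigram-overlap ROUGE-1: precision, recall, and F1."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if overlap == 0:
        return {"precision": 0.0, "recall": 0.0, "f1": 0.0}
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    f1 = 2 * precision * recall / (precision + recall)
    return {"precision": precision, "recall": recall, "f1": f1}

scores = rouge1("the patient shows acute symptoms", "the patient has acute symptoms")
print(scores)  # 4 of 5 unigrams overlap: precision = recall = f1 = 0.8
```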
#### Summary

To compare Aloe with the most competitive open models (both general purpose and healthcare-specific) we use popular healthcare datasets (PubMedQA, MedMCQA, MedQA and MMLU for six medical tasks only), together with the new and highly reliable CareQA. However, while MCQA benchmarks provide valuable insights into a model's ability to handle structured queries, they fall short in representing the full range of challenges faced in medical practice. Building upon this idea, Aloe-Beta represents the next step in the evolution of the Aloe Family, designed to broaden the scope beyond the multiple-choice question-answering tasks that defined Aloe-Alpha.

Benchmark results indicate the training conducted on Aloe has boosted its performance above all other open models within the same model size. Both Qwen2.5-Aloe-Beta-7B and Llama3.1-Aloe-Beta-8B also outperform other medical models like Llama3-OpenBioLLM and Llama3-Med42. All these results make Aloe-Beta the best healthcare LLM of its size.

With the help of prompting techniques, the performance of Qwen2.5-Aloe-Beta-7B is significantly improved. Medprompting in particular provides a 9% increase in reported accuracy, after which Qwen2.5-Aloe-7B-Beta only lags behind much bigger models like Llama-3.1-70B-Instruct or MedPalm-2. This improvement is mostly consistent across the OpenLLM Leaderboard and the other medical tasks.

## Environmental Impact

- **Hardware Type:** 32xH100
- **Hours used (8B):** 544 GPU hours
- **Hours used (70B):** 4500 GPU hours
- **Hardware Provider:** Barcelona Supercomputing Center (BSC)
- **Compute Region:** Spain
- **Carbon Emitted:** 34.1 kg of CO2

## Authors

Aloe Beta has been developed by the [High Performance Artificial Intelligence](https://hpai.bsc.es/) research group, from the [Barcelona Supercomputing Center - BSC](https://www.bsc.es/).
Main authors are [Jordi Bayarri Planas](https://huggingface.co/JordiBayarri), [Ashwin Kumar Gururajan](https://huggingface.co/G-AshwinKumar) and [Dario Garcia-Gasulla](https://huggingface.co/dariog). Red-teaming efforts were led by Adrian Tormos.

mailto:[email protected]

## Citations

If you use this repository in a published work, please cite the corresponding papers as source:

```
@misc{gururajan2024aloe,
      title={Aloe: A Family of Fine-tuned Open Healthcare LLMs},
      author={Ashwin Kumar Gururajan and Enrique Lopez-Cuena and Jordi Bayarri-Planas and Adrian Tormos and Daniel Hinjos and Pablo Bernabeu-Perez and Anna Arias-Duart and Pablo Agustin Martin-Torres and Lucia Urcelay-Ganzabal and Marta Gonzalez-Mallo and Sergio Alvarez-Napagao and Eduard Ayguadé-Parra and Ulises Cortés and Dario Garcia-Gasulla},
      year={2024},
      eprint={2405.01886},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
[ "BIORED", "MEDIQA QA", "MEDDIALOG", "MEDQA", "PUBMEDQA" ]
BSC-LT/salamandraTA-7b-instruct
BSC-LT
translation
[ "transformers", "safetensors", "llama", "text-generation", "translation", "bg", "ca", "cs", "cy", "da", "de", "el", "en", "es", "et", "eu", "fi", "fr", "ga", "gl", "hr", "hu", "it", "lt", "lv", "mt", "nl", "nb", "no", "nn", "oc", "pl", "pt", "ro", "ru", "sl", "sk", "sr", "sv", "uk", "ast", "an", "arxiv:2010.11125", "arxiv:2403.14009", "arxiv:1907.05791", "arxiv:1911.04944", "arxiv:2402.17733", "arxiv:2207.04672", "arxiv:2404.06392", "arxiv:2309.04662", "base_model:BSC-LT/salamandra-7b", "base_model:finetune:BSC-LT/salamandra-7b", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:eu" ]
2025-01-08T15:02:52Z
2025-03-17T17:32:42+00:00
406
3
---
base_model:
- BSC-LT/salamandra-7b
language:
- bg
- ca
- cs
- cy
- da
- de
- el
- en
- es
- et
- eu
- fi
- fr
- ga
- gl
- hr
- hu
- it
- lt
- lv
- mt
- nl
- nb
- 'no'
- nn
- oc
- pl
- pt
- ro
- ru
- sl
- sk
- sr
- sv
- uk
- ast
- an
library_name: transformers
license: apache-2.0
pipeline_tag: translation
---

![](./images/salamandra_header.png)

# SalamandraTA Model Card

SalamandraTA-7b-instruct is a translation LLM that has been instruction-tuned from SalamandraTA-7b-base. The base model results from continually pre-training [Salamandra-7b](https://huggingface.co/BSC-LT/salamandra-7b) on parallel data and has not been published, but is reserved for internal use.

SalamandraTA-7b-instruct is proficient in 37 European languages and supports translation-related tasks, namely: sentence-level translation, paragraph-level translation, document-level translation, automatic post-editing, grammar checking, machine translation evaluation, alternative translations, named-entity recognition and context-aware translation.

> [!WARNING]
> **DISCLAIMER:** This version of Salamandra is tailored exclusively for translation tasks. It lacks chat capabilities and has not been trained with any chat instructions.

---

## Model Details

### Description

SalamandraTA-7b-base is a continual pre-training of [Salamandra-7b](https://huggingface.co/BSC-LT/salamandra-7b) using parallel data, resulting in a total of 424B tokens processed during training.

### Architecture

|                         |               |
|-------------------------|:--------------|
| Total Parameters        | 7,768,117,248 |
| Embedding Parameters    | 1,048,576,000 |
| Layers                  | 32            |
| Hidden size             | 4,096         |
| Attention heads         | 32            |
| Context length          | 8,192         |
| Vocabulary size         | 256,000       |
| Precision               | bfloat16      |
| Embedding type          | RoPE          |
| Activation Function     | SwiGLU        |
| Layer normalization     | RMS Norm      |
| Flash attention         | ✅            |
| Grouped Query Attention | ✅            |
| Num. query groups       | 8             |

---

## Intended Use

### Direct Use

The model is intended for both research and commercial use in any of the languages included in the training data for general machine translation tasks.

### Out-of-scope Use

The model is not intended for malicious activities, such as harming others or violating human rights. Any downstream application must comply with current laws and regulations. Irresponsible usage in production environments without proper risk assessment and mitigation is also discouraged.

---

## Hardware and Software

### Training Framework

SalamandraTA-7b-base was continually pre-trained using NVIDIA’s [NeMo Framework](https://docs.nvidia.com/nemo-framework/index.html), which leverages PyTorch Lightning for efficient model training in highly distributed settings.

SalamandraTA-7b-instruct was produced with [FastChat](https://github.com/lm-sys/FastChat).

### Compute Infrastructure

All models were trained on [MareNostrum 5](https://www.bsc.es/ca/marenostrum/marenostrum-5), a pre-exascale EuroHPC supercomputer hosted and operated by Barcelona Supercomputing Center. The accelerated partition is composed of 1,120 nodes with the following specifications:

- 4x NVIDIA Hopper GPUs with 64GB HBM2 memory
- 2x Intel Sapphire Rapids 8460Y+ at 2.3 GHz with 32 cores each (64 cores in total)
- 4x NDR200 (BW per node 800 Gb/s)
- 512 GB of main memory (DDR5)
- 460 GB of NVMe storage

---

## How to use

You can translate between the following 37 languages and varieties:

Aragonese, Asturian, Basque, Bulgarian, Catalan and Valencian variety, Croatian, Czech, Danish, Dutch, English, Estonian, Finnish, French, Galician, German, Greek, Hungarian, Irish, Italian, Latvian, Lithuanian, Maltese, Norwegian Bokmål, Norwegian Nynorsk, Occitan and Aranese variety, Polish, Portuguese, Romanian, Russian, Serbian, Slovak, Slovenian, Spanish, Swedish, Ukrainian, Welsh.
The instruction-following model uses the commonly adopted ChatML template:

```
<|im_start|>system
{SYSTEM PROMPT}<|im_end|>
<|im_start|>user
{USER PROMPT}<|im_end|>
<|im_start|>assistant
{MODEL RESPONSE}<|im_end|>
<|im_start|>user
[...]
```

The easiest way to apply it is by using the tokenizer's built-in functions, as shown in the following snippet.

```python
from datetime import datetime

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "BSC-LT/salamandraTA-7b-instruct"

source = 'Spanish'
target = 'Catalan'
sentence = "Ayer se fue, tomó sus cosas y se puso a navegar. Una camisa, un pantalón vaquero y una canción, dónde irá, dónde irá. Se despidió, y decidió batirse en duelo con el mar. Y recorrer el mundo en su velero. Y navegar, nai-na-na, navegar"

text = f"Translate the following text from {source} into {target}.\n{source}: {sentence} \n{target}:"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype=torch.bfloat16
)

message = [{"role": "user", "content": text}]
date_string = datetime.today().strftime('%Y-%m-%d')

prompt = tokenizer.apply_chat_template(
    message,
    tokenize=False,
    add_generation_prompt=True,
    date_string=date_string
)

inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
input_length = inputs.shape[1]
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=400, early_stopping=True, num_beams=5)

print(tokenizer.decode(outputs[0, input_length:], skip_special_tokens=True))
# Ahir se'n va anar, va recollir les seves coses i es va fer a la mar. Una camisa, uns texans i una cançó, on anirà, on anirà. Es va acomiadar i va decidir batre's en duel amb el mar. I fer la volta al món en el seu veler. I navegar, nai-na-na, navegar
```

Using this template, each turn is preceded by a `<|im_start|>` delimiter and the role of the entity (either `user`, for content supplied by the user, or `assistant` for LLM responses), and finished with the `<|im_end|>` token.

### General translation

For machine translation tasks, you can use the following prompt template:

```
Translate the following text from {source} into {target}.
{source}: {source sentence}
{target}:
```

<details>
<summary>Show an example</summary>

```python
source = 'Catalan'
target = 'Galician'
source_sentence = "Als antics egipcis del període de l'Imperi Nou els fascinaven els monuments dels seus predecessors, que llavors tenien més de mil anys."

text = f"Translate the following text from {source} into {target}.\n{source}: {source_sentence} \n{target}:"

# Os antigos exipcios do período do Imperio Novo estaban fascinados polos monumentos dos seus predecesores, que entón tiñan máis de mil anos de antigüidade.
```

</details>

### Post-editing

For post-editing tasks, you can use the following prompt template:

```
Please fix any mistakes in the following {source}-{target} machine translation or keep it unedited if it's correct.
Source: {source_sentence}
MT: {machine_translation}
Corrected:
```

<details>
<summary>Show an example</summary>

```python
source = 'Catalan'
target = 'English'
source_sentence = 'Rafael Nadal i Maria Magdalena van inspirar a una generació sencera.'
machine_translation = 'Rafael Christmas and Maria the Muffin inspired an entire generation each in their own way.'

text = f"Please fix any mistakes in the following {source}-{target} machine translation or keep it unedited if it's correct.\nSource: {source_sentence} \nMT: {machine_translation} \nCorrected:"

# Rafael Nadal and Maria Magdalena inspired an entire generation.
``` </details> ### Document-level translation For document-level translation tasks, you can use the following prompt template: ``` Please translate this text from {source} into {target}. {source}: {1st paragraph of the document} {2nd paragraph of the document} {Nth paragraph of the document} {target}: ``` <details> <summary>Show an example</summary> ```python source = 'English' target = 'Asturian' text = """Please translate this text from {} into {}.\n{}: President Donald Trump, who campaigned on promises to crack down on illegal immigration, has raised alarms in the U.S. dairy industry with his threat to impose 25% tariffs on Mexico and Canada by February 2025. This move is part of a broader strategy to declare a national emergency at the southern border to halt illegal migration completely. However, the implications for the agriculture sector, particularly dairy, are significant. Approximately half of the U.S. dairy industry's workforce consists of immigrant labor, many of whom are undocumented. The National Milk Producers Federation estimates that removing immigrant workers could decimate the dairy herd by 2.1 million cows and slash milk production by nearly 50 billion pounds, leading to a dramatic 90.4% increase in milk prices. The complex perspectives of Americans on undocumented workers were highlighted in a Pew Research Center study. While 64% of U.S. adults support legal pathways for undocumented immigrants, 35% oppose it—a gap that has been narrowing recently. Factors influencing public opinion include the belief that immigrants should have jobs and pass security checks, contrasted by concerns about lawbreakers being rewarded, fairness for legal migrants, and resource allocation. According to Zach Rutledge, an agricultural economist at Michigan State University, as nations grow wealthier, their labor forces transition away from agriculture toward sectors like services and manufacturing. This shift has led to the U.S. 
relying heavily on immigrant labor for agricultural work. Domestic workers, even with employment taxes, may cost $15 to $25 an hour, while H-2A visa program workers might cost $25 to $30 an hour, accounting for additional housing expenses. The National Milk Producers Federation has been vocal in advocating for changes to the H-2A visa program, which outside of its current seasonal limitations, does not support the dairy industry's year-round labor needs. Executive vice-president Jaime Castaneda reiterated the need for legislative clarity to address the undocumented workforce issues in dairy farming. The Farm Workforce Modernization Act of 2023, which could grant legal status to certain undocumented farmworkers, has been stalled in Congress, despite acknowledgment of the sector's importance to feeding America. The need for coordinated legislative efforts to ensure both border security and labor market stability is imperative moving forward. {}:""".format(source, target, source, target) ``` </details> ### Named-entity recognition For named-entity recognition tasks, you can use the following prompt template: ``` Analyse the following tokenized text and mark the tokens containing named entities. Use the following annotation guidelines with these tags for named entities: - ORG (Refers to named groups or organizations) - PER (Refers to individual people or named groups of people) - LOC (Refers to physical places or natural landmarks) - MISC (Refers to entities that don't fit into standard categories). Prepend B- to the first token of a given entity and I- to the remaining ones if they exist. If a token is not a named entity, label it as O. Input: {list of words in a sentence} Marked: ``` <details> <summary>Show an example</summary> ```python text = """Analyse the following tokenized text and mark the tokens containing named entities. 
Use the following annotation guidelines with these tags for named entities: - ORG (Refers to named groups or organizations) - PER (Refers to individual people or named groups of people) - LOC (Refers to physical places or natural landmarks) - MISC (Refers to entities that don't fit into standard categories). Prepend B- to the first token of a given entity and I- to the remaining ones if they exist. If a token is not a named entity, label it as O. Input: ['La', 'defensa', 'del', 'antiguo', 'responsable', 'de', 'la', 'RFEF', 'confirma', 'que', 'interpondrá', 'un', 'recurso.'] Marked: """ # [('La', 'O'), ('defensa', 'O'), ('del', 'O'), ('antiguo', 'O'), ('responsable', 'O'), ('de', 'O'), ('la', 'O'), ('RFEF', 'B-ORG'), ('confirma', 'O'), ('que', 'O'), ('interpondrá', 'O'), ('un', 'O'), ('recurso.', 'O')] ``` </details> ### Grammar checker For fixing any mistakes in grammar, you can use the following prompt template: ``` Please fix any mistakes in the following {source} sentence or keep it unedited if it's correct. Sentence: {sentence} Corrected: ``` <details> <summary>Show an example</summary> ```python source = 'Catalan' sentence = 'Entonses, el meu jefe m’ha dit que he de treballar els fins de setmana.' text = f"Please fix any mistakes in the following {source} sentence or keep it unedited if it's correct.\nSentence: {sentence} \nCorrected:" # Llavors, el meu cap m'ha dit que he de treballar els caps de setmana. ``` </details> ## Data ### Pretraining Data The pretraining corpus consists of 424 billion tokens of Catalan-centric, Spanish-centric, and English-centric parallel data, including all of the official European languages plus Catalan, Basque, Galician, Asturian, Aragonese and Aranese. It amounts to 6,574,251,526 parallel sentence pairs. 
This highly multilingual corpus is predominantly composed of data sourced from [OPUS](https://opus.nlpl.eu/), with additional data taken from the [NTEU Project](https://nteu.eu/), the [Aina Project](https://projecteaina.cat/), and other sources (see: [Data Sources](#pre-data-sources) and [References](#pre-references)). Where little parallel Catalan <-> xx data could be found, synthetic Catalan data was generated from the Spanish side of the collected Spanish <-> xx corpora using [Projecte Aina’s Spanish-Catalan model](https://huggingface.co/projecte-aina/aina-translator-es-ca). The final distribution of languages was as below:

![](./treemap.png)

Click the expand button below to see the full list of corpora included in the training data.

<details id="pre-data-sources">
<summary>Data Sources</summary>

| Dataset | Ca-xx Languages | Es-xx Languages | En-xx Languages |
|-----------------------------------------------|----------------------------------------------------------------|-----------------------------------------------|----------------------------------------------------------------|
|[AINA](https://huggingface.co/projecte-aina) | en | | |
|ARANESE-SYNTH-CORPUS-BSC | arn | | |
|BOUA-SYNTH-BSC | | val | |
|[BOUMH](https://github.com/transducens/PILAR/tree/main/valencian/BOUMH) | | val | |
|[BOUA-PILAR](https://github.com/transducens/PILAR/tree/main/valencian/BOUA) | | val | |
|[CCMatrix](https://opus.nlpl.eu/CCMatrix/corpus/version/CCMatrix) |eu | | ga |
|[DGT](https://opus.nlpl.eu/DGT/corpus/version/DGT) | |bg,cs,da,de,el,et,fi,fr,ga,hr,hu,lt,lv,mt,nl,pl,pt,ro,sk,sl,sv | da,et,ga,hr,hu,lt,lv,mt,sh,sl|
|DOGV-SYNTH-BSC | | val | |
|[DOGV-PILAR](https://github.com/transducens/PILAR/tree/main/valencian/DOGV-html) | | val | |
|[ELRC-EMEA](https://opus.nlpl.eu/ELRC-EMEA/corpus/version/ELRC-EMEA) | |bg,cs,da,hu,lt,lv,mt,pl,ro,sk,sl | et,hr,lv,ro,sk,sl |
|[EMEA](https://opus.nlpl.eu/EMEA/corpus/version/EMEA) | |bg,cs,da,el,fi,hu,lt,mt,nl,pl,ro,sk,sl,sv | et,mt |
|[EUBookshop](https://opus.nlpl.eu/EUbookshop/corpus/version/EUbookshop) |lt,pl,pt |cs,da,de,el,fi,fr,ga,it,lv,mt,nl,pl,pt,ro,sk,sl,sv |cy,ga| |[Europarl](https://opus.nlpl.eu/Europarl/corpus/version/Europarl) | |bg,cs,da,el,en,fi,fr,hu,lt,lv,nl,pl,pt ,ro,sk,sl,sv | | |[Europat](https://opus.nlpl.eu/EuroPat/corpus/version/EuroPat) | |en,hr | no | |[GAITU Corpus](https://gaitu.eus/) | | | eu| |[KDE4](https://opus.nlpl.eu/KDE4/corpus/version/KDE4) |bg,cs,da,de,el ,et,eu,fi,fr,ga,gl,hr,it,lt,lv,nl,pl,pt,ro,sk,sl,sv |bg,ga,hr |cy,ga,nn,oc | |[GlobalVoices](https://opus.nlpl.eu/GlobalVoices/corpus/version/GlobalVoices) | bg,de,fr,it,nl,pl,pt |bg,de,fr,pt | | |[GNOME](https://opus.nlpl.eu/GNOME/corpus/version/GNOME) |eu,fr,ga,gl,pt |ga |cy,ga,nn| |[JRC-Arquis](https://opus.nlpl.eu/JRC-Acquis/corpus/version/JRC-Acquis) | |cs,da,et,fr,lt,lv,mt,nl,pl ,ro,sv| et | |LES-CORTS-VALENCIANES-SYNTH-BSC | | val | | |[MaCoCu](https://opus.nlpl.eu/MaCoCu/corpus/version/MaCoCu) | en | | hr,mt,uk | |[MultiCCAligned](https://opus.nlpl.eu/JRC-Acquis/corpus/version/JRC-Acquis) |bg,cs,de,el,et,fi,fr,hr,hu,it,lt,lv,nl,pl,ro,sk,sv |bg,fi,fr,hr,it,lv,nl,pt |bg,cy,da,et,fi,hr,hu,lt,lv,no,sl,sr,uk| |[MultiHPLT](https://opus.nlpl.eu/MultiHPLT/corpus/version/MultiHPLT) |en, et,fi,ga,hr,mt | |fi,ga,gl,hr,mt,nn,sr | |[MultiParaCrawl](https://opus.nlpl.eu/MultiParaCrawl/corpus/version/MultiParaCrawl) |bg,da |de,en,fr,ga,hr,hu,it,mt,pt |bg,cs,da,de,el,et,fi,fr,ga,hr,hu,lt,lv,mt,nn,pl,ro,sk,sl,uk| |[MultiUN](https://opus.nlpl.eu/MultiUN/corpus/version/MultiUN) | |fr | | |[News-Commentary](https://opus.nlpl.eu/News-Commentary/corpus/version/News-Commentary) | |fr | | |[NLLB](https://opus.nlpl.eu/NLLB/corpus/version/NLLB) |bg,da,el,en,et,fi,fr,gl,hu,it ,lt,lv,pt,ro,sk,sl |bg,cs,da,de,el ,et,fi,fr,hu,it,lt,lv,nl,pl,pt ,ro,sk,sl,sv| bg,cs,cy,da,de,el,et,fi,fr,ga,hr,hu,it,lt,lv,mt,nl,no,oc,pl,pt,ro,ru,sk,sl,sr,sv,uk| |[NÓS Authentic Corpus](https://zenodo.org/records/7675110) | | | gl | |[NÓS Synthetic 
Corpus](https://zenodo.org/records/7685180) | | | gl | |[NTEU](https://www.elrc-share.eu/repository/search/?q=NTEU) | |bg,cs,da,de,el,en,et,fi,fr,ga,hr,hu,it,lt,lv,mt,nl,pl,pt,ro,sk,sl,sv | da,et,ga,hr,lt,lv,mt,ro,sk,sl,sv | |[OpenSubtitles](https://opus.nlpl.eu/OpenSubtitles/corpus/version/OpenSubtitles) |bg,cs,da,de,el ,et,eu,fi,gl,hr,hu,lt,lv,nl,pl,pt,ro,sk,sl,sv |da,de,fi,fr,hr,hu,it,lv,nl | bg,cs,de,el,et,hr,fi,fr,hr,hu,no,sl,sr| |[OPUS-100](https://opus.nlpl.eu/opus-100.php) | en | | gl | |[StanfordNLP-NMT](https://opus.nlpl.eu/StanfordNLP-NMT/corpus/version/StanfordNLP-NMT) | | |cs | |[Tatoeba](https://opus.nlpl.eu/Tatoeba/corpus/version/Tatoeba) |de,pt |pt | | |[TildeModel](https://opus.nlpl.eu/TildeMODEL/corpus/version/TildeMODEL) | |bg | et,hr,lt,lv,mt | |[UNPC](https://opus.nlpl.eu/UNPC/corpus/version/UNPC) | |en,fr | ru | |[PILAR-VALENCIAN-AUTH](https://github.com/transducens/PILAR/tree/main/valencian/Generalitat) | | val | | |[PILAR-VALENCIAN-SYNTH](https://github.com/transducens/PILAR/tree/main/valencian/Generalitat) | | val | | |[WikiMatrix](https://opus.nlpl.eu/WikiMatrix/corpus/version/WikiMatrix) |bg,cs,da,de,el ,et,eu,fi,fr,gl,hr,hu,it,lt,nl,pl,pt,ro,sk,sl,sv |bg,en,fr,hr,it,pt | oc,sh | |[Wikimedia](https://opus.nlpl.eu/wikimedia/corpus/version/wikimedia) | | |cy,nn | |[XLENT](https://opus.nlpl.eu/XLEnt/corpus/version/XLEnt) |eu,ga,gl |ga |cy,et,ga,gl,hr,oc,sh| Datasets with "-BSC" in their names (e.g., BOUA-SYNTH-BSC, DOGV-SYNTH-BSC) are synthetic datasets obtained by machine translating pre-existing monolingual corpora with our own seq-to-seq models. These datasets were generated internally for model training and are not published. To consult the data summary document with the respective licences, please send an e-mail to [email protected]. </details> <details id="pre-references"> <summary>References</summary> - Aulamo, M., Sulubacak, U., Virpioja, S., & Tiedemann, J. (2020). OpusTools and Parallel Corpus Diagnostics. In N. Calzolari, F. 
Béchet, P. Blache, K. Choukri, C. Cieri, T. Declerck, S. Goggi, H. Isahara, B. Maegaard, J. Mariani, H. Mazo, A. Moreno, J. Odijk, & S. Piperidis (Eds.), Proceedings of the Twelfth Language Resources and Evaluation Conference (pp. 3782–3789). European Language Resources Association. https://aclanthology.org/2020.lrec-1.467 - Chaudhary, V., Tang, Y., Guzmán, F., Schwenk, H., & Koehn, P. (2019). Low-Resource Corpus Filtering Using Multilingual Sentence Embeddings. In O. Bojar, R. Chatterjee, C. Federmann, M. Fishel, Y. Graham, B. Haddow, M. Huck, A. J. Yepes, P. Koehn, A. Martins, C. Monz, M. Negri, A. Névéol, M. Neves, M. Post, M. Turchi, & K. Verspoor (Eds.), Proceedings of the Fourth Conference on Machine Translation (Volume 3: Shared Task Papers, Day 2) (pp. 261–266). Association for Computational Linguistics. https://doi.org/10.18653/v1/W19-5435 - DGT-Translation Memory—European Commission. (n.d.). Retrieved November 4, 2024, from https://joint-research-centre.ec.europa.eu/language-technology-resources/dgt-translation-memory_en - Eisele, A., & Chen, Y. (2010). MultiUN: A Multilingual Corpus from United Nation Documents. In N. Calzolari, K. Choukri, B. Maegaard, J. Mariani, J. Odijk, S. Piperidis, M. Rosner, & D. Tapias (Eds.), Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC’10). European Language Resources Association (ELRA). http://www.lrec-conf.org/proceedings/lrec2010/pdf/686_Paper.pdf - El-Kishky, A., Chaudhary, V., Guzmán, F., & Koehn, P. (2020). CCAligned: A Massive Collection of Cross-Lingual Web-Document Pairs. Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), 5960–5969. https://doi.org/10.18653/v1/2020.emnlp-main.480 - El-Kishky, A., Renduchintala, A., Cross, J., Guzmán, F., & Koehn, P. (2021). XLEnt: Mining a Large Cross-lingual Entity Dataset with Lexical-Semantic-Phonetic Word Alignment. 
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, 10424–10430. https://doi.org/10.18653/v1/2021.emnlp-main.814 - Fan, A., Bhosale, S., Schwenk, H., Ma, Z., El-Kishky, A., Goyal, S., Baines, M., Celebi, O., Wenzek, G., Chaudhary, V., Goyal, N., Birch, T., Liptchinsky, V., Edunov, S., Grave, E., Auli, M., & Joulin, A. (2020). Beyond English-Centric Multilingual Machine Translation (No. arXiv:2010.11125). arXiv. https://doi.org/10.48550/arXiv.2010.11125 - García-Martínez, M., Bié, L., Cerdà, A., Estela, A., Herranz, M., Krišlauks, R., Melero, M., O’Dowd, T., O’Gorman, S., Pinnis, M., Stafanovič, A., Superbo, R., & Vasiļevskis, A. (2021). Neural Translation for European Union (NTEU). 316–334. https://aclanthology.org/2021.mtsummit-up.23 - Gibert, O. de, Nail, G., Arefyev, N., Bañón, M., Linde, J. van der, Ji, S., Zaragoza-Bernabeu, J., Aulamo, M., Ramírez-Sánchez, G., Kutuzov, A., Pyysalo, S., Oepen, S., & Tiedemann, J. (2024). A New Massive Multilingual Dataset for High-Performance Language Technologies (No. arXiv:2403.14009). arXiv. http://arxiv.org/abs/2403.14009 - Koehn, P. (2005). Europarl: A Parallel Corpus for Statistical Machine Translation. Proceedings of Machine Translation Summit X: Papers, 79–86. https://aclanthology.org/2005.mtsummit-papers.11 - Kreutzer, J., Caswell, I., Wang, L., Wahab, A., Van Esch, D., Ulzii-Orshikh, N., Tapo, A., Subramani, N., Sokolov, A., Sikasote, C., Setyawan, M., Sarin, S., Samb, S., Sagot, B., Rivera, C., Rios, A., Papadimitriou, I., Osei, S., Suarez, P. O., … Adeyemi, M. (2022). Quality at a Glance: An Audit of Web-Crawled Multilingual Datasets. Transactions of the Association for Computational Linguistics, 10, 50–72. https://doi.org/10.1162/tacl_a_00447 - Rozis, R.,Skadiņš, R (2017). Tilde MODEL - Multilingual Open Data for EU Languages. https://aclanthology.org/W17-0235 - Schwenk, H., Chaudhary, V., Sun, S., Gong, H., & Guzmán, F. (2019). 
WikiMatrix: Mining 135M Parallel Sentences in 1620 Language Pairs from Wikipedia (No. arXiv:1907.05791). arXiv. https://doi.org/10.48550/arXiv.1907.05791
- Schwenk, H., Wenzek, G., Edunov, S., Grave, E., & Joulin, A. (2020). CCMatrix: Mining Billions of High-Quality Parallel Sentences on the WEB (No. arXiv:1911.04944). arXiv. https://doi.org/10.48550/arXiv.1911.04944
- Steinberger, R., Pouliquen, B., Widiger, A., Ignat, C., Erjavec, T., Tufiş, D., & Varga, D. (n.d.). The JRC-Acquis: A Multilingual Aligned Parallel Corpus with 20+ Languages. http://www.lrec-conf.org/proceedings/lrec2006/pdf/340_pdf
- Subramani, N., Luccioni, S., Dodge, J., & Mitchell, M. (2023). Detecting Personal Information in Training Corpora: An Analysis. In A. Ovalle, K.-W. Chang, N. Mehrabi, Y. Pruksachatkun, A. Galystan, J. Dhamala, A. Verma, T. Cao, A. Kumar, & R. Gupta (Eds.), Proceedings of the 3rd Workshop on Trustworthy Natural Language Processing (TrustNLP 2023) (pp. 208–220). Association for Computational Linguistics. https://doi.org/10.18653/v1/2023.trustnlp-1.18
- Tiedemann, J. (2012). Parallel Data, Tools and Interfaces in OPUS. In N. Calzolari (Conference Chair), K. Choukri, T. Declerck, M. U. Doğan, B. Maegaard, J. Mariani, A. Moreno, J. Odijk, & S. Piperidis (Eds.), Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC’12). European Language Resources Association (ELRA). http://www.lrec-conf.org/proceedings/lrec2012/pdf/463_Paper
- Ziemski, M., Junczys-Dowmunt, M., & Pouliquen, B. (n.d.). The United Nations Parallel Corpus v1.0. https://aclanthology.org/L16-1561

</details>

### Instruction Tuning Data

This model has been fine-tuned on ~135k instructions, primarily targeting machine translation performance for Catalan, English, and Spanish. Additional instruction data for other European and closely related Iberian languages was also included, as it yielded a positive impact on the languages of interest.
That said, the performance in these additional languages is not guaranteed due to the limited amount of available data and the lack of resources for thorough testing. A portion of our fine-tuning data comes directly from, or is sampled from [TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.2). We also created additional datasets for our main languages of interest. While tasks relating to machine translation are included, it’s important to note that no chat data was used in the fine-tuning process. The final distribution of tasks was as below: ![](./chart.png) Click the expand button below to see the full list of tasks included in the finetuning data. <details id="instr-data-sources"> <summary>Data Sources</summary> | Task | Source | Languages | Count | |----------------------------------|------------------------------------------------------------------------------------------|----------------------------------------------------------------|--------| | Multi-reference Translation | [TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.2): [Tatoeba Dev (filtered)](https://github.com/Helsinki-NLP/Tatoeba-Challenge) | mixed | 10000 | | Paraphrase | [TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.2): [PAWS-X Dev](https://github.com/google-research-datasets/paws) | mixed | 3521 | | Named-entity Recognition | [AnCora-Ca-NER](https://huggingface.co/datasets/projecte-aina/ancora-ca-ner) | ca | 12059 | | Named-entity Recognition | [BasqueGLUE](https://huggingface.co/datasets/orai-nlp/basqueGLUE), [EusIE](https://huggingface.co/datasets/HiTZ/EusIE) | eu | 4304 | | Named-entity Recognition | [SLI NERC Galician Gold Corpus](https://github.com/xavier-gz/SLI_Galician_Corpora) | gl | 6483 | | Named-entity Recognition | [TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.2): [MultiCoNER 2022 and 2023 Dev](https://registry.opendata.aws/multiconer/) | pt | 854 | | Named-entity Recognition | 
[TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.2): [MultiCoNER 2022 and 2023 Dev](https://registry.opendata.aws/multiconer/) | nl | 800 | | Named-entity Recognition | [TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.2): [MultiCoNER 2022 and 2023 Dev](https://registry.opendata.aws/multiconer/) | es | 1654 | | Named-entity Recognition | [TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.2): [MultiCoNER 2022 and 2023 Dev](https://registry.opendata.aws/multiconer/) | en | 1671 | | Named-entity Recognition | [TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.2): [MultiCoNER 2022 and 2023 Dev](https://registry.opendata.aws/multiconer/) | ru | 800 | | Named-entity Recognition | [TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.2): [MultiCoNER 2022 and 2023 Dev](https://registry.opendata.aws/multiconer/) | it | 858 | | Named-entity Recognition | [TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.2): [MultiCoNER 2022 and 2023 Dev](https://registry.opendata.aws/multiconer/) | fr | 857 | | Named-entity Recognition | [TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.2): [MultiCoNER 2022 and 2023 Dev](https://registry.opendata.aws/multiconer/) | de | 1312 | | Terminology-aware Translation | [TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.2): [WMT21 Terminology Dev (filtered)](https://www.statmt.org/wmt21/terminology-task.html) | en-ru | 50 | | Terminology-aware Translation | [TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.2): [WMT21 Terminology Dev (filtered)](https://www.statmt.org/wmt21/terminology-task.html) | en-fr | 29 | | Automatic Post Editing | [TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.2): [QT21](https://lindat.mff.cuni.cz/repository/xmlui/handle/11372/LRT-2390), [ApeQuest](https://apequest.wordpress.com/) | en-fr | 6133 | | Automatic Post Editing | 
[TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.2): [QT21](https://lindat.mff.cuni.cz/repository/xmlui/handle/11372/LRT-2390), [ApeQuest](https://apequest.wordpress.com/) | en-nl | 9077 | | Automatic Post Editing | [TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.2): [QT21](https://lindat.mff.cuni.cz/repository/xmlui/handle/11372/LRT-2390), [ApeQuest](https://apequest.wordpress.com/) | en-pt | 5762 | | Automatic Post Editing | [TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.2): [QT21](https://lindat.mff.cuni.cz/repository/xmlui/handle/11372/LRT-2390), [ApeQuest](https://apequest.wordpress.com/) | de-en | 10000 | | Automatic Post Editing | [TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.2): [QT21](https://lindat.mff.cuni.cz/repository/xmlui/handle/11372/LRT-2390), [ApeQuest](https://apequest.wordpress.com/) | en-de | 10000 | | Machine Translation Evaluation | [TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.2)-sample: [WMT20 to WMT22 Metrics MQM](https://www.statmt.org/wmt22/results.html), [WMT17 to WMT22 Metrics Direct Assessments](https://www.statmt.org/wmt22/results.html) | en-ru, en-pl, ru-en, en-de, en-ru, de-fr, de-en, en-de | 353 | | Machine Translation Evaluation | Non-public | four pivot languages (eu, es, ca, gl) paired with European languages (bg, cs, da, de, el, en, et, fi, fr, ga, hr, hu, it, lt, lv, mt, nl, pl, pt, ro, sk, sl, sv) | 9700 | | General Machine Translation | [TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.2): [WMT14 to WMT21](https://www.statmt.org/wmt22/results.html), [NTREX](https://github.com/MicrosoftTranslator/NTREX), [Flores Dev](https://github.com/facebookresearch/flores), [FRMT](https://github.com/google-research/google-research/tree/master/frmt), [QT21](https://lindat.mff.cuni.cz/repository/xmlui/handle/11372/LRT-2390), [ApeQuest](https://apequest.wordpress.com/), [OPUS (Quality 
Filtered)](https://opus.nlpl.eu/), [MT-GenEval](https://github.com/amazon-science/machine-translation-gender-eval) | nl-en, en-ru, it-en, fr-en, es-en, en-fr, ru-en, fr-de, en-nl, de-fr | 500 | | General Machine Translation | Non-public | three pivot languages (es, ca, en) paired with European languages (ast, arn, arg, bg, cs, cy, da, de, el, et, fi, ga, gl, hr, it, lt, lv, mt, nb, nn, nl, oc, pl, pt, ro, ru, sk, sl, sr, sv, uk, eu) | 9350 | | Fill-in-the-Blank | Non-public | five pivot languages (ca, es, eu, gl, en) paired with European languages (cs, da, de, el, et, fi, fr, ga, hr, hu, it, lt, lv, mt, nl, pl, pt, ro, sk, sl, sv) | 11500 | | Document-level Translation | Non-public | two pivot languages (es, en) paired with European languages (bg, cs, da, de, el, et, fi, fr, hu, it, lt, lv, nl, pl, pt, ro, ru, sk, sv) | 7600 | | Paragraph-level Translation | Non-public | two pivot languages (es, en) paired with European languages (bg, cs, da, de, el, et, fi, fr, hu, it, lt, lv, nl, pl, pt, ro, ru, sk, sv) | 7600 | | Context-Aware Translation | [TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.2): [MT-GenEval](https://github.com/amazon-science/machine-translation-gender-eval) | en-it | 348 | | Context-Aware Translation | [TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.2): [MT-GenEval](https://github.com/amazon-science/machine-translation-gender-eval) | en-ru | 454 | | Context-Aware Translation | [TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.2): [MT-GenEval](https://github.com/amazon-science/machine-translation-gender-eval) | en-fr | 369 | | Context-Aware Translation | [TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.2): [MT-GenEval](https://github.com/amazon-science/machine-translation-gender-eval) | en-nl | 417 | | Context-Aware Translation | [TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.2): 
[MT-GenEval](https://github.com/amazon-science/machine-translation-gender-eval) | en-es | 431 | | Context-Aware Translation | [TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.2): [MT-GenEval](https://github.com/amazon-science/machine-translation-gender-eval) | en-de | 558 | |**Total** | | | **135,404** | The non-public portion of this dataset was jointly created by the [ILENIA](https://proyectoilenia.es/) partners: BSC-LT, [HiTZ](http://hitz.ehu.eus/es), and [CiTIUS](https://citius.gal/es/). For further information regarding the instruction-tuning data, please contact <[email protected]>. </details> <details id="instr-references"> <summary>References</summary> - Alves, D. M., Pombal, J., Guerreiro, N. M., Martins, P. H., Alves, J., Farajian, A., Peters, B., Rei, R., Fernandes, P., Agrawal, S., Colombo, P., de Souza, J. G. C., & Martins, A. F. T. (2024). Tower: An open multilingual large language model for translation-related tasks (No. arXiv: 2402.17733). arXiv. https://arxiv.org/abs/2402.17733 - Armengol-Estapé, J., Carrino, C. P., Rodriguez-Penagos, C., de Gibert Bonet, O., Armentano-Oller, C., Gonzalez-Agirre, A., Melero, M., & Villegas, M. (2021). Are multilingual models the best choice for moderately under-resourced languages? A comprehensive assessment for Catalan. Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, 4933–4946. Association for Computational Linguistics. https://doi.org/10.18653/v1/2021.findings-acl.437 - Currey, A., Nadejde, M., Pappagari, R. R., Mayer, M., Lauly, S., Niu, X., Hsu, B., & Dinu, G. (2022). MT-GenEval: A counterfactual and contextual dataset for evaluating gender accuracy in machine translation. In Y. Goldberg, Z. Kozareva, & Y. Zhang (Eds.), Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (pp. 4287–4299). Association for Computational Linguistics. https://doi.org/10.18653/v1/2022.emnlp-main.288 - Federmann, C., Kocmi, T., & Xin, Y. (2022). 
NTREX-128 – News test references for MT evaluation of 128 languages. Proceedings of the First Workshop on Scaling Up Multilingual Evaluation, 21–24. Association for Computational Linguistics. https://aclanthology.org/2022.sumeval-1.4 - Ive, J., Specia, L., Szoc, S., Vanallemeersch, T., Van den Bogaert, J., Farah, E., Maroti, C., Ventura, A., & Khalilov, M. (2020). A post-editing dataset in the legal domain: Do we underestimate neural machine translation quality? In N. Calzolari, F. Béchet, P. Blache, K. Choukri, C. Cieri, T. Declerck, S. Goggi, H. Isahara, B. Maegaard, J. Mariani, H. Mazo, A. Moreno, J. Odijk, & S. Piperidis (Eds.), Proceedings of the Twelfth Language Resources and Evaluation Conference (pp. 3692–3697). European Language Resources Association. https://aclanthology.org/2020.lrec-1.455/ - Malmasi, S., Fang, A., Fetahu, B., Kar, S., & Rokhlenko, O. (2022). MultiCoNER: A large-scale multilingual dataset for complex named entity recognition. Proceedings of the 29th International Conference on Computational Linguistics, 3798–3809. International Committee on Computational Linguistics. https://aclanthology.org/2022.coling-1.334/ - NLLB Team, Costa-jussà, M. R., Cross, J., Çelebi, O., Elbayad, M., Heafield, K., Heffernan, K., Kalbassi, E., Lam, J., Licht, D., Maillard, J., Sun, A., Wang, S., Wenzek, G., Youngblood, A., Akula, B., Barrault, L., Mejia Gonzalez, G., Hansanti, P., Hoffman, J., Jarrett, S., Sadagopan, K. R., Rowe, D., Spruit, S., Tran, C., Andrews, P., Ayan, N. F., Bhosale, S., Edunov, S., Fan, A., Gao, C., Goswami, V., Guzmán, F., Koehn, P., Mourachko, A., Ropers, C., Saleem, S., Schwenk, H., & Wang, J. (2022). No language left behind: Scaling human-centered machine translation (No. arXiv: 2207.04672). arXiv. https://arxiv.org/abs/2207.04672 - Riley, P., Dozat, T., Botha, J. A., Garcia, X., Garrette, D., Riesa, J., Firat, O., & Constant, N. (2022). FRMT: A benchmark for few-shot region-aware machine translation (No. arXiv: 2210.00193). arXiv. 
https://doi.org/10.48550/ARXIV.2210.00193 - Specia, L., Harris, K., Blain, F., Burchardt, A., Macketanz, V., Skadiņa, I., Negri, M., & Turchi, M. (2017). Translation quality and productivity: A study on rich morphology languages. Proceedings of Machine Translation Summit XVI, 55–71. Nagoya, Japan. - Tiedemann, J. (2020). The Tatoeba translation challenge – Realistic data sets for low-resource and multilingual MT. Proceedings of the Fifth Conference on Machine Translation, 1174–1182. Association for Computational Linguistics. https://www.aclweb.org/anthology/2020.wmt-1.139 - Urbizu, G., San Vicente, I., Saralegi, X., Agerri, R., & Soroa, A. (2022). BasqueGLUE: A natural language understanding benchmark for Basque. Proceedings of the Language Resources and Evaluation Conference, 1603–1612. European Language Resources Association. https://aclanthology.org/2022.lrec-1.172 - Yang, Y., Zhang, Y., Tar, C., & Baldridge, J. (2019). PAWS-X: A cross-lingual adversarial dataset for paraphrase identification. Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP) (pp. 3687–3692). Association for Computational Linguistics. https://doi.org/10.18653/v1/D19-1382 - Zubillaga, M., Sainz, O., Estarrona, A., Lopez de Lacalle, O., & Agirre, E. (2024). Event extraction in Basque: Typologically motivated cross-lingual transfer-learning analysis (No. arXiv: 2404.06392). arXiv. https://arxiv.org/abs/2404.06392 </details> ## Evaluation Below are the evaluation results on the [Flores+200 devtest set](https://huggingface.co/datasets/openlanguagedata/flores_plus), compared against the state-of-the-art [MADLAD400-7B-mt model](https://huggingface.co/google/madlad400-7b-mt) ([Kudugunta, S., et al.](https://arxiv.org/abs/2309.04662)) and SalamandraTA-7b-base model. These results cover the translation directions CA-XX, ES-XX, EN-XX, as well as XX-CA, XX-ES, and XX-EN. 
The metrics have been computed excluding Asturian, Aranese, and Aragonese, as we report them separately. The evaluation was conducted using [MT-Lens](https://github.com/langtech-bsc/mt-evaluation), following the standard setting (beam search with beam size 5, limiting the translation length to 500 tokens). We report the following metrics: <details> <summary>Click to show metrics details</summary> - `BLEU`: Sacrebleu implementation. Signature: `nrefs:1|case:mixed|eff:no|tok:13a|smooth:exp|version:2.3.1` - `TER`: Sacrebleu implementation. - `ChrF`: Sacrebleu implementation. - `Comet`: Model checkpoint: "Unbabel/wmt22-comet-da". - `Comet-kiwi`: Model checkpoint: "Unbabel/wmt22-cometkiwi-da". - `Bleurt`: Model checkpoint: "lucadiliello/BLEURT-20". - `MetricX`: Model checkpoint: "google/metricx-23-xl-v2p0". - `MetricX-QE`: Model checkpoint: "google/metricx-23-qe-xl-v2p0". </details> <details> <summary>English evaluation</summary> ### English This section presents the evaluation metrics for English translation tasks.
| | Bleu↑ | Ter↓ | ChrF↑ | Comet↑ | Comet-kiwi↑ | Bleurt↑ | MetricX↓ | MetricX-QE↓ | |:---------------------------------|-------:|------:|-------:|--------:|-------------:|---------:|----------:|-------------:| | **EN-XX** | | | | | | | | | | SalamandraTA-7b-instruct | **36.29** | **50.62** | 63.3 | **0.89** | **0.85** | **0.79** | **1.02** | **0.94** | | MADLAD400-7B-mt | 35.73 | 51.87 | **63.46** | 0.88 | **0.85** | **0.79** | 1.16 | 1.1 | | SalamandraTA-7b-base | 34.99 | 52.64 | 62.58 | 0.87 | 0.84 | 0.77 | 1.45 | 1.23 | | **XX-EN** | | | | | | | | | | SalamandraTA-7b-instruct | **44.69** | **41.72** | 68.17 | **0.89** | 0.85 | **0.8** | **1.09** | **1.11** | | SalamandraTA-7b-base | 44.12 | 43 | **68.43** | **0.89** | 0.85 | **0.8** | 1.13 | 1.22 | | MADLAD400-7B-mt | 43.2 | 43.33 | 67.98 | **0.89** | **0.86** | 0.8 | 1.13 | 1.15 | <img src="./images/bleu_en.png" alt="English" width="100%"/> </details> <details> <summary>Spanish evaluation</summary> ### Spanish This section presents the evaluation metrics for Spanish translation tasks. 
| | Bleu↑ | Ter↓ | ChrF↑ | Comet↑ | Comet-kiwi↑ | Bleurt↑ | MetricX↓ | MetricX-QE↓ | |:---------------------------------|-------:|------:|-------:|--------:|-------------:|---------:|----------:|-------------:| | **ES-XX** | | | | | | | | | | SalamandraTA-7b-instruct | **23.67** | **65.71** | 53.55 | **0.87** | 0.82 | **0.75** | **1.04** | **1.05** | | MADLAD400-7B-mt | 22.48 | 68.91 | **53.93** | 0.86 | **0.83** | **0.75** | 1.09 | 1.14 | | SalamandraTA-7b-base | 21.63 | 70.08 | 52.98 | 0.86 | **0.83** | 0.74 | 1.24 | 1.12 | | **XX-ES** | | | | | | | | | | SalamandraTA-7b-instruct | **25.56** | **62.51** | 52.69 | **0.85** | 0.83 | 0.73 | **0.94** | **1.33** | | MADLAD400-7B-mt | 24.85 | 61.82 | **53** | **0.85** | **0.84** | **0.74** | 1.05 | 1.5 | | SalamandraTA-7b-base | 24.71 | 62.33 | 52.96 | **0.85** | **0.84** | 0.73 | 1.06 | 1.37 | <img src="./images/bleu_es.png" alt="English" width="100%"/> <img src="./images/es_xx_bars.png" alt="ESXX" width="100%"/> </details> <details> <summary>Catalan evaluation</summary> ### Catalan This section presents the evaluation metrics for Catalan translation tasks. 
| | Bleu↑ | Ter↓ | ChrF↑ | Comet↑ | Comet-kiwi↑ | Bleurt↑ | MetricX↓ | MetricX-QE↓ | |:---------------------------------|-------:|------:|-------:|--------:|-------------:|---------:|----------:|-------------:| | **CA-XX** | | | | | | | | | | MADLAD400-7B-mt | **29.37** | 59.01 | **58.47** | **0.87** | **0.81** | **0.77** | **1.08** | 1.31 | | SalamandraTA-7b-instruct | 29.23 | **58.32** | 57.76 | **0.87** | **0.81** | **0.77** | **1.08** | **1.22** | | SalamandraTA-7b-base | 29.06 | 59.32 | 58 | **0.87** | **0.81** | 0.76 | 1.23 | 1.28 | | **XX-CA** | | | | | | | | | | SalamandraTA-7b-instruct | **33.64** | **54.49** | 59.03 | **0.86** | 0.8 | **0.75** | **1.07** | **1.6** | | MADLAD400-7B-mt | 33.02 | 55.01 | 59.38 | **0.86** | **0.81** | **0.75** | 1.18 | 1.79 | | SalamandraTA-7b-base | 32.75 | 55.78 | **59.42** | **0.86** | **0.81** | **0.75** | 1.17 | 1.63 | <img src="./images/bleu_ca.png" alt="English" width="100%"/> </details> <details> <summary>Galician evaluation</summary> ### Galician This section presents the evaluation metrics for Galician translation tasks. 
| | Bleu↑ | Ter↓ | ChrF↑ | Comet↑ | Comet-kiwi↑ | Bleurt↑ | MetricX↓ | MetricX-QE↓ | |:---------------------------------|-------:|------:|-------:|--------:|-------------:|---------:|----------:|-------------:| | **GL-XX** | | | | | | | | | | SalamandraTA-7b-instruct | **28.13** | **59.68** | **56.94** | **0.87** | **0.85** | **0.76** | **1.08** | **1.2** | | SalamandraTA-7b-base | 27.47 | 61.39 | **56.96** | **0.87** | 0.82 | 0.76 | 1.23 | 1.29 | | MADLAD400-7B-mt | 26.43 | 64.3 | 55.99 | 0.86 | **0.85** | 0.76 | 1.35 | 2.06 | | **XX-GL** | | | | | | | | | | SalamandraTA-7b-instruct | **30.94** | **55.24** | **57.69** | **0.86** | **0.85** | **0.7** | **0.9** | **1.38** | | SalamandraTA-7b-base | 28.22 | 59.52 | 56.28 | 0.85 | 0.82 | 0.69 | 1.27 | 1.78 | | MADLAD400-7B-mt | 27.77 | 59.46 | 54.92 | 0.84 | **0.85** | 0.67 | 1.42 | 2.72 | <img src="./images/bleu_gl.png" alt="English" width="100%"/> </details> <details> <summary>Basque evaluation</summary> ### Basque This section presents the evaluation metrics for Basque translation tasks. 
| | Bleu↑ | Ter↓ | ChrF↑ | Comet↑ | Comet-kiwi↑ | Bleurt↑ | MetricX↓ | MetricX-QE↓ | |:---------------------------------|-------:|------:|-------:|--------:|-------------:|---------:|----------:|-------------:| | **EU-XX** | | | | | | | | | | SalamandraTA-7b-instruct | **22.99** | **65.8** | 52.06 | **0.86** | **0.84** | **0.74** | **1.13** | **1.38** | | SalamandraTA-7b-base | 22.87 | 67.38 | **52.19** | **0.86** | 0.79 | **0.74** | 1.19 | 1.61 | | MADLAD400-7B-mt | 21.26 | 69.75 | 49.8 | 0.85 | 0.82 | 0.72 | 1.54 | 2.71 | | **XX-EU** | | | | | | | | | | SalamandraTA-7b-instruct | **17.5** | **73.13** | 54.67 | **0.85** | **0.83** | **0.8** | **0.85** | **1.03** | | SalamandraTA-7b-base | 17.01 | 75.92 | **55.22** | **0.85** | 0.77 | **0.8** | 1.04 | 1.17 | | MADLAD400-7B-mt | 13.64 | 85.01 | 50.96 | 0.82 | 0.8 | 0.78 | 2.09 | 3.58 | <img src="./images/bleu_eu.png" alt="English" width="100%"/> </details> ### Low-Resource Languages of Spain The tables below summarize the performance metrics for English, Spanish, and Catalan to Asturian, Aranese and Aragonese compared against [Transducens/IbRo-nllb](https://huggingface.co/Transducens/IbRo-nllb) [(Galiano Jimenez, et al.)](https://aclanthology.org/2024.wmt-1.85/), [NLLB-200-3.3B](https://huggingface.co/facebook/nllb-200-3.3B) ([Costa-jussà et al., 2022](https://arxiv.org/abs/2207.04672)) and [SalamandraTA-2B](https://huggingface.co/BSC-LT/salamandraTA-2B). 
<details> <summary>English evaluation</summary> #### English-XX | | Source | Target | Bleu↑ | Ter↓ | ChrF↑ | |:---------------------------------|:---------|:---------|-------:|-------:|-------:| | SalamandraTA-7b-instruct | en | ast | **31.49** | **54.01** | **60.65** | | SalamandraTA-7b-base | en | ast | 26.4 | 64.02 | 57.35 | | nllb-200-3.3B | en | ast | 22.02 | 77.26 | 51.4 | | Transducens/IbRo-nllb | en | ast | 20.56 | 63.92 | 53.32 | | | | | | | | | SalamandraTA-7b-instruct | en | arn | **13.04** | **87.13** | **37.56** | | SalamandraTA-7b-base | en | arn | 8.36 | 90.85 | 34.06 | | Transducens/IbRo-nllb | en | arn | 7.63 | 89.36 | 33.88 | | | | | | | | | SalamandraTA-7b-instruct | en | arg | **20.43** | **65.62** | **50.79** | | SalamandraTA-7b-base | en | arg | 12.24 | 73.48 | 44.75 | | Transducens/IbRo-nllb | en | arg | 14.07 | 70.37 | 46.89 | </details> <details> <summary>Spanish evaluation</summary> #### Spanish-XX | | Source | Target | Bleu↑ | Ter↓ | ChrF↑ | |:---------------------------------|:---------|:---------|-------:|-------:|-------:| | SalamandraTA-7b-instruct | es | ast | **21.28** | **68.11** | **52.73** | | SalamandraTA-7b-base | es | ast | 17.65 | 75.78 | 51.05 | | Transducens/IbRo-nllb | es | ast | 16.79 | 76.36 | 50.89 | | SalamandraTA-2B | es | ast | 16.68 | 77.29 | 49.46 | | nllb-200-3.3B | es | ast | 11.85 | 100.86 | 40.27 | | | | | | | | | SalamandraTA-7b-base | es | arn | **29.19** | **71.85** | **49.42** | | Transducens/IbRo-nllb | es | arn | 28.45 | 72.56 | 49.28 | | SalamandraTA-7b-instruct | es | arn | 26.82 | 74.04 | 47.55 | | SalamandraTA-2B | es | arn | 25.41 | 74.71 | 47.33 | | | | | | | | | Transducens/IbRo-nllb | es | arg | **59.75** | **28.01** | **78.73** | | SalamandraTA-7b-base | es | arg | 53.96 | 31.51 | 76.08 | | SalamandraTA-7b-instruct | es | arg | 47.54 | 36.57 | 72.38 | | SalamandraTA-2B | es | arg | 44.57 | 37.93 | 71.32 | </details> <details> <summary>Catalan evaluation</summary> #### Catalan-XX | | Source | 
Target | Bleu↑ | Ter↓ | ChrF↑ | |:---------------------------------|:---------|:---------|-------:|-------:|-------:| | SalamandraTA-7b-instruct | ca | ast | **27.86** | **58.19** | 57.98 | | SalamandraTA-7b-base | ca | ast | 26.11 | 63.63 | **58.08** | | SalamandraTA-2B | ca | ast | 25.32 | 62.59 | 55.98 | | Transducens/IbRo-nllb | ca | ast | 24.77 | 61.60 | 57.49 | | nllb-200-3.3B | ca | ast | 17.17 | 91.47 | 45.83 | | | | | | | | | SalamandraTA-7b-base | ca | arn | **17.77** | **80.88** | **42.12** | | Transducens/IbRo-nllb | ca | arn | 17.51 | 81.18 | 41.91 | | SalamandraTA-7b-instruct | ca | arn | 16.45 | 82.01 | 41.04 | | SalamandraTA-2B | ca | arn | 15.37 | 82.76 | 40.53 | | | | | | | | | Transducens/IbRo-nllb | ca | arg | **24.44** | **60.79** | **55.51** | | SalamandraTA-7b-base | ca | arg | 22.53 | 62.37 | 54.32 | | SalamandraTA-7b-instruct | ca | arg | 21.62 | 63.38 | 53.01 | | SalamandraTA-2B | ca | arg | 18.6 | 65.82 | 51.21 | </details> ### Gender Aware Translation Below are the evaluation results for gender aware translation evaluated on the [MT-GenEval](https://github.com/amazon-science/machine-translation-gender-eval?tab=readme-ov-file#mt-geneval) dataset ([Currey, A. et al.](https://github.com/amazon-science/machine-translation-gender-eval?tab=readme-ov-file#mt-geneval)). These have been calculated for translation from English into German, Spanish, French, Italian, Portuguese and Russian and are compared against [MADLAD400-7B-mt](https://huggingface.co/google/madlad400-7b-mt), [TowerInstruct-7B-v0.2](https://huggingface.co/Unbabel/TowerInstruct-7B-v0.2) and the SalamandraTA-7b-base model. Evaluation was conducted using [MT-Lens](https://github.com/langtech-bsc/mt-evaluation) and is reported as accuracy computed using the accuracy metric provided with MT-GenEval. 
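As a rough illustration of the idea behind this accuracy metric, here is a simplified, hypothetical sketch — it is not the official implementation (which ships with MT-GenEval) and differs in detail. The idea: a hypothesis counts as gender-accurate when it contains no words that appear only in the opposite-gender reference.

```python
# Simplified, hypothetical sketch of a MT-GenEval-style gender accuracy check.
# NOT the official metric; it only illustrates the idea of penalizing words
# that appear exclusively in the opposite-gender reference.
def gender_accurate(hypothesis: str, correct_ref: str, wrong_ref: str) -> bool:
    hyp_words = set(hypothesis.lower().split())
    # Words unique to the wrong-gender reference must not appear in the output.
    unique_wrong = set(wrong_ref.lower().split()) - set(correct_ref.lower().split())
    return not (hyp_words & unique_wrong)

def accuracy(examples) -> float:
    results = [gender_accurate(h, c, w) for h, c, w in examples]
    return sum(results) / len(results)

examples = [
    # (hypothesis, correct-gender reference, wrong-gender reference)
    ("sie ist eine gute ärztin", "sie ist eine gute ärztin", "er ist ein guter arzt"),
    ("er ist ein guter arzt", "sie ist eine gute ärztin", "er ist ein guter arzt"),
]
print(accuracy(examples))  # → 0.5: the second hypothesis uses the wrong gender
```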
<details> | | Source | Target | Masc | Fem | Pair | |:---------------------------------|:---------|:---------|-------:|-------:|-------:| | SalamandraTA-7b-instruct | en | de | **0.883** | **0.883** | **0.773** | | SalamandraTA-7b-base | en | de | 0.857 | 0.77 | 0.66 | | MADLAD400-7B-mt | en | de | 0.877 | 0.823 | 0.713 | | TowerInstruct-7B-v0.2 | en | de | 0.863 | 0.84 | 0.727 | | | | | | | | | SalamandraTA-7b-instruct | en | es | 0.867 | **0.85** | **0.737** | | SalamandraTA-7b-base | en | es | **0.89** | 0.733 | 0.643 | | MADLAD400-7B-mt | en | es | 0.887 | 0.78 | 0.687 | | TowerInstruct-7B-v0.2 | en | es | 0.85 | 0.823 | 0.693 | | | | | | | | | SalamandraTA-7b-instruct | en | fr | **0.9** | 0.82 | **0.737** | | SalamandraTA-7b-base | en | fr | 0.8867 | 0.71 | 0.617 | | MADLAD400-7B-mt | en | fr | 0.873 | 0.777 | 0.663 | | TowerInstruct-7B-v0.2 | en | fr | 0.88 | **0.823** | 0.717 | | | | | | | | | SalamandraTA-7b-instruct | en | it | 0.9 | **0.763** | 0.683 | | SalamandraTA-7b-base | en | it | 0.893 | 0.593 | 0.513 | | MADLAD400-7B-mt | en | it | 0.907 | 0.663 | 0.597 | | TowerInstruct-7B-v0.2 | en | it | **0.947** | 0.747 | **0.713** | | | | | | | | | SalamandraTA-7b-instruct | en | pt | 0.92 | **0.77** | **0.707** | | SalamandraTA-7b-base | en | pt | **0.923** | 0.65 | 0.597 | | MADLAD400-7B-mt | en | pt | **0.923** | 0.687 | 0.627 | | TowerInstruct-7B-v0.2 | en | pt | 0.907 | 0.73 | 0.67 | | | | | | | | | SalamandraTA-7b-instruct | en | ru | **0.95** | **0.837** | **0.793** | | SalamandraTA-7b-base | en | ru | 0.933 | 0.713 | 0.653 | | MADLAD400-7B-mt | en | ru | 0.94 | 0.797 | 0.74 | | TowerInstruct-7B-v0.2 | en | ru | 0.933 | 0.797 | 0.733 | <img src="./images/geneval.png"/> </details> ## Ethical Considerations and Limitations Detailed information on the work done to examine the presence of unwanted social and cognitive biases in the base model can be found at [Salamandra-7B model card](https://huggingface.co/BSC-LT/salamandra-7b). 
With regard to MT models, the only bias-related analysis we have conducted is the MT-GenEval evaluation. No specific analysis has yet been carried out to evaluate potential biases or limitations in translation accuracy across different languages, dialects, or domains. However, we recognize the importance of identifying and addressing any harmful stereotypes, cultural inaccuracies, or systematic performance discrepancies that may arise in Machine Translation. As such, we plan to continue performing more analyses as we implement the necessary metrics and methods within our evaluation framework [MT-Lens](https://github.com/langtech-bsc/mt-evaluation). Note that the model has only undergone preliminary instruction tuning. We urge developers to consider potential limitations and conduct safety testing and tuning tailored to their specific applications. ## Additional information ### Author The Language Technologies Unit from Barcelona Supercomputing Center. ### Contact For further information, please send an email to <[email protected]>. ### Copyright Copyright (c) 2025 by Language Technologies Unit, Barcelona Supercomputing Center. ### Funding This work has been promoted and financed by the Government of Catalonia through the [Aina Project](https://projecteaina.cat/). This work is funded by the _Ministerio para la Transformación Digital y de la Función Pública_ - Funded by EU – NextGenerationEU within the framework of the [ILENIA Project](https://proyectoilenia.es/) with reference 2022/TL22/00215337. ### Acknowledgements The success of this project has been made possible thanks to the invaluable contributions of our partners in the [ILENIA Project](https://proyectoilenia.es/): [HiTZ](http://hitz.ehu.eus/es), and [CiTIUS](https://citius.gal/es/). Their efforts have been instrumental in advancing our work, and we sincerely appreciate their help and support.
### Disclaimer Be aware that the model may contain biases or other unintended distortions. When third parties deploy systems or provide services based on this model, or use the model themselves, they bear the responsibility for mitigating any associated risks and ensuring compliance with applicable regulations, including those governing the use of Artificial Intelligence. The Barcelona Supercomputing Center, as the owner and creator of the model, shall not be held liable for any outcomes resulting from third-party use. ### License [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
[ "BEAR" ]
mradermacher/Llama-3-Youko-Minus-Base-GGUF
mradermacher
null
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:Cas-Warehouse/Llama-3-Youko-Minus-Base", "base_model:quantized:Cas-Warehouse/Llama-3-Youko-Minus-Base", "endpoints_compatible", "region:us", "conversational" ]
2025-03-09T15:10:31Z
2025-03-09T15:32:33+00:00
406
1
--- base_model: Cas-Warehouse/Llama-3-Youko-Minus-Base language: - en library_name: transformers tags: - mergekit - merge quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/Cas-Warehouse/Llama-3-Youko-Minus-Base <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Llama-3-Youko-Minus-Base-GGUF/resolve/main/Llama-3-Youko-Minus-Base.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Youko-Minus-Base-GGUF/resolve/main/Llama-3-Youko-Minus-Base.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Youko-Minus-Base-GGUF/resolve/main/Llama-3-Youko-Minus-Base.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Youko-Minus-Base-GGUF/resolve/main/Llama-3-Youko-Minus-Base.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Youko-Minus-Base-GGUF/resolve/main/Llama-3-Youko-Minus-Base.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Youko-Minus-Base-GGUF/resolve/main/Llama-3-Youko-Minus-Base.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | 
[GGUF](https://huggingface.co/mradermacher/Llama-3-Youko-Minus-Base-GGUF/resolve/main/Llama-3-Youko-Minus-Base.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Youko-Minus-Base-GGUF/resolve/main/Llama-3-Youko-Minus-Base.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Youko-Minus-Base-GGUF/resolve/main/Llama-3-Youko-Minus-Base.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Youko-Minus-Base-GGUF/resolve/main/Llama-3-Youko-Minus-Base.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Youko-Minus-Base-GGUF/resolve/main/Llama-3-Youko-Minus-Base.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Youko-Minus-Base-GGUF/resolve/main/Llama-3-Youko-Minus-Base.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
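Since the quant table is sorted by size rather than quality, a practical question is which file fits a given memory budget. The helper below is purely illustrative — the function name and the 1 GB headroom are assumptions, not part of this repo; the file sizes are copied from the "Provided Quants" table above.

```python
# Hypothetical helper: pick the largest quant from the "Provided Quants" table
# that fits a given memory budget. Sizes (GB) are copied from the table above;
# the default 1 GB headroom for context/KV-cache is an illustrative assumption.
QUANT_SIZES_GB = {
    "Q2_K": 3.3, "Q3_K_S": 3.8, "Q3_K_M": 4.1, "Q3_K_L": 4.4,
    "IQ4_XS": 4.6, "Q4_K_S": 4.8, "Q4_K_M": 5.0, "Q5_K_S": 5.7,
    "Q5_K_M": 5.8, "Q6_K": 6.7, "Q8_0": 8.6, "f16": 16.2,
}

def pick_quant(budget_gb: float, headroom_gb: float = 1.0):
    """Return the largest quant whose file size plus headroom fits, or None."""
    fitting = {q: s for q, s in QUANT_SIZES_GB.items() if s + headroom_gb <= budget_gb}
    return max(fitting, key=fitting.get) if fitting else None

print(pick_quant(8.0))   # → Q6_K: 6.7 GB plus 1 GB headroom still fits in 8 GB
print(pick_quant(3.0))   # → None: even Q2_K needs more than 3 GB here
```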
[ "CAS" ]
Yntec/ZooFun
Yntec
text-to-image
[ "diffusers", "safetensors", "Character Design", "Animals", "Adorable", "aji1", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "base_model:Yntec/GenerateMe", "base_model:finetune:Yntec/GenerateMe", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
2024-06-26T08:41:01Z
2025-01-16T17:58:42+00:00
400
1
--- base_model: - Yntec/GenerateMe library_name: diffusers license: creativeml-openrail-m pipeline_tag: text-to-image tags: - Character Design - Animals - Adorable - aji1 - stable-diffusion - stable-diffusion-diffusers - diffusers - text-to-image --- Use zoofun in the prompt to enhance the effect. If you're not drawing animals, check out: https://huggingface.co/Yntec/GenerateMe - if you want better backgrounds, check out: https://huggingface.co/Yntec/Genuine # ZooFun Samples and prompts: ![Free online ai image generator zoofun](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/kpUe19lYSMCWZZnRtEfGy.png) (Click for larger) Top left: masterpiece, high quality, zoofun, parrot, cute, full body, solo, standing Top right: a Bear playing guitar in a club, zoofun, whimsical Bottom left: zoofun. A high contrast portrait of a happy fuzzy chibi panda dressed as a chef in a high end kitchen making dough. There is a painting of flowers on the wall behind him. Bottom right: sitting girl playing with cheeky brother. Carrying pikachu fluffy bunny rabbit. Cute faces and eyes. Movie carrot in chair The zoofun LoRA baked into GenerateMe! Original page: https://civitai.com/models/387070/zoofun # Recipe - SuperMerger Merge LoRA to checkpoint 0.7 strength: Model A: GenerateMe LoRA: zoofun_v1.0 Output: ZooFun
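Since the card tags list `diffusers` / `StableDiffusionPipeline` support, the checkpoint can presumably be loaded with `StableDiffusionPipeline.from_pretrained("Yntec/ZooFun")`. As a small, hypothetical illustration of the trigger-word advice above (this helper is not part of the model):

```python
# Hypothetical helper for the trigger-word advice above: make sure "zoofun"
# is present in a prompt before passing it to the text-to-image pipeline.
def with_trigger(prompt: str, trigger: str = "zoofun") -> str:
    """Prepend the trigger word unless the prompt already contains it."""
    if trigger.lower() in prompt.lower():
        return prompt
    return f"{trigger}, {prompt}"

print(with_trigger("a Bear playing guitar in a club, whimsical"))
# → "zoofun, a Bear playing guitar in a club, whimsical"
print(with_trigger("masterpiece, high quality, zoofun, parrot"))
# → unchanged, since the trigger is already there
```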
[ "BEAR" ]
QuantFactory/Llama3-Med42-8B-GGUF
QuantFactory
text-generation
[ "gguf", "m42", "health", "healthcare", "clinical-llm", "text-generation", "en", "base_model:m42-health/Llama3-Med42-8B", "base_model:quantized:m42-health/Llama3-Med42-8B", "license:llama3", "region:us", "conversational" ]
2024-07-12T12:57:48Z
2024-07-13T12:23:02+00:00
399
2
--- base_model: m42-health/Llama3-Med42-8B language: - en license: llama3 license_name: llama3 pipeline_tag: text-generation tags: - m42 - health - healthcare - clinical-llm inference: false --- # QuantFactory/Llama3-Med42-8B-GGUF This is a quantized version of [m42-health/Llama3-Med42-8B](https://huggingface.co/m42-health/Llama3-Med42-8B) created using llama.cpp # Model Description ## **Med42-v2 - A Suite of Clinically-aligned Large Language Models** Med42-v2 is a suite of open-access clinical large language models (LLMs), instruction- and preference-tuned by M42 to expand access to medical knowledge. Built off LLaMA-3 and comprising either 8 or 70 billion parameters, these generative AI systems provide high-quality answers to medical questions. ## Key performance metrics: - Med42-v2-70B outperforms GPT-4.0 in most of the MCQA tasks. - Med42-v2-70B achieves a MedQA zero-shot performance of 79.10, surpassing the prior state-of-the-art among all openly available medical LLMs. - Med42-v2-70B sits at the top of the Clinical Elo Rating Leaderboard. |Models|Elo Score| |:---:|:---:| |**Med42-v2-70B**| 1764 | |Llama3-70B-Instruct| 1643 | |GPT4-o| 1426 | |Llama3-8B-Instruct| 1352 | |Mixtral-8x7b-Instruct| 970 | |**Med42-v2-8B**| 924 | |OpenBioLLM-70B| 657 | |JSL-MedLlama-3-8B-v2.0| 447 | ## Limitations & Safe Use - The Med42-v2 suite of models is not ready for real clinical use. Extensive human evaluation is still underway, as it is required to ensure safety. - Potential for generating incorrect or harmful information. - Risk of perpetuating biases in training data. Use this suite of models responsibly! Do not rely on them for medical usage without rigorous safety testing. ## Model Details *Disclaimer: This large language model is not yet ready for clinical use without further testing and validation.
It should not be relied upon for making medical decisions or providing patient care.* Beginning with the Llama3 models, the Med42-v2 suite was instruction-tuned on a dataset of ~1B tokens compiled from different open-access, high-quality sources, including medical flashcards, exam questions, and open-domain dialogues. **Model Developers:** M42 Health AI Team **Finetuned from model:** Llama3 - 8B & 70B Instruct **Context length:** 8k tokens **Input:** Text-only data **Output:** Model generates text only **Status:** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we enhance the model's performance. **License:** Llama 3 Community License Agreement **Research Paper:** *Coming soon* ## Intended Use The Med42-v2 suite of models is being made available for further testing and assessment as AI assistants to enhance clinical decision-making and access to LLMs for healthcare use. Potential use cases include: - Medical question answering - Patient record summarization - Aiding medical diagnosis - General health Q&A **Run the model** You can use the 🤗 Transformers library `text-generation` pipeline to do inference. ```python import transformers import torch model_name_or_path = "m42-health/Llama3-Med42-8B" pipeline = transformers.pipeline( "text-generation", model=model_name_or_path, torch_dtype=torch.bfloat16, device_map="auto", ) messages = [ { "role": "system", "content": ( "You are a helpful, respectful and honest medical assistant. You are a second version of Med42 developed by the AI team at M42, UAE. " "Always answer as helpfully as possible, while being safe. " "Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. " "Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. 
" "If you don’t know the answer to a question, please don’t share false information." ), }, {"role": "user", "content": "What are the symptoms of diabetes?"}, ] prompt = pipeline.tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=False ) stop_tokens = [ pipeline.tokenizer.eos_token_id, pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>"), ] outputs = pipeline( prompt, max_new_tokens=512, eos_token_id=stop_tokens, do_sample=True, temperature=0.4, top_k=150, top_p=0.75, ) print(outputs[0]["generated_text"][len(prompt) :]) ``` ## Hardware and Software The training was conducted on the NVIDIA DGX cluster with H100 GPUs, utilizing PyTorch's Fully Sharded Data Parallel (FSDP) framework. ## Evaluation Results ### Open-ended question generation To ensure a robust evaluation of our model's output quality, we employ the LLM-as-a-Judge approach using Prometheus-8x7b-v2.0. Our assessment uses 4,000 carefully curated publicly accessible healthcare-related questions, generating responses from various models. We then use Prometheus to conduct pairwise comparisons of the answers. Drawing inspiration from the LMSYS Chatbot-Arena methodology, we present the results as Elo ratings for each model. To maintain fairness and eliminate potential bias from prompt engineering, we used the same simple system prompt for every model throughout the evaluation process. Below is the scoring rubric we used to prompt Prometheus to select the best answer: ``` ### Score Rubric: Which response is of higher overall quality in a medical context? Consider: * Relevance: Does it directly address the question? * Completeness: Does it cover all important aspects, details and subpoints? * Safety: Does it avoid unsafe practices and address potential risks? * Ethics: Does it maintain confidentiality and avoid biases? * Clarity: Is it professional, clear and easy to understand? 
``` #### Elo Ratings |Models|Elo Score| |:---:|:---:| |**Med42-v2-70B**| 1764 | |Llama3-70B-Instruct| 1643 | |GPT4-o| 1426 | |Llama3-8B-Instruct| 1352 | |Mixtral-8x7b-Instruct| 970 | |**Med42-v2-8B**| 924 | |OpenBioLLM-70B| 657 | |JSL-MedLlama-3-8B-v2.0| 447 | #### Win-rate ![plot](./pairwise_model_comparison.svg) ### MCQA Evaluation Med42-v2 improves performance on every clinical benchmark compared to our previous version, including MedQA, MedMCQA, USMLE, MMLU clinical topics, and the MMLU Pro clinical subset. For all evaluations reported so far, we use [EleutherAI's evaluation harness library](https://github.com/EleutherAI/lm-evaluation-harness) and report zero-shot accuracies (unless otherwise stated). We integrated chat templates into the harness and computed the likelihood for the full answer instead of only the tokens "a.", "b.", "c." or "d.". |Model|MMLU Pro|MMLU|MedMCQA|MedQA|USMLE| |---:|:---:|:---:|:---:|:---:|:---:| |**Med42v2-70B**|64.36|87.12|73.20|79.10|83.80| |**Med42v2-8B**|54.30|75.76|61.34|62.84|67.04| |OpenBioLLM-70B|64.24|90.40|73.18|76.90|79.01| |GPT-4.0<sup>&dagger;</sup>|-|87.00|69.50|78.90|84.05| |MedGemini*|-|-|-|84.00|-| |Med-PaLM-2 (5-shot)*|-|87.77|71.30|79.70|-| |Med42|-|76.72|60.90|61.50|71.85| |ClinicalCamel-70B|-|69.75|47.00|53.40|54.30| |GPT-3.5<sup>&dagger;</sup>|-|66.63|50.10|50.80|53.00| |Llama3-8B-Instruct|48.24|72.89|59.65|61.64|60.38| |Llama3-70B-Instruct|64.24|85.99|72.03|78.88|83.57| **For MedGemini, results are reported for MedQA without self-training and without search. We note that 0-shot performance is not reported for Med-PaLM 2. Further details can be found at [https://github.com/m42health/med42](https://github.com/m42health/med42)*. <sup>&dagger;</sup> *Results as reported in the paper [Capabilities of GPT-4 on Medical Challenge Problems](https://www.microsoft.com/en-us/research/uploads/prod/2023/03/GPT-4_medical_benchmarks.pdf)*. 
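The full-answer likelihood scoring described above can be sketched in a few lines. The per-token log-probabilities below are made-up placeholders (no model is loaded); they only illustrate how summing over a complete answer string differs from scoring a single letter token:

```python
def pick_answer(option_token_logprobs):
    """Return the index of the option whose full answer text is most likely.

    Each entry is the list of per-token log-probabilities a model assigns
    to the complete answer string, not just the leading letter token.
    """
    # Sum log-probs over all tokens of each candidate answer, then argmax.
    scores = [sum(logprobs) for logprobs in option_token_logprobs]
    return max(range(len(scores)), key=scores.__getitem__)

# Hypothetical per-token log-probs for four MCQA options.
options = [
    [-2.1, -0.9, -3.0],   # a.
    [-0.4, -0.2, -0.6],   # b.  (highest total likelihood)
    [-1.5, -2.2, -0.8],   # c.
    [-3.3, -1.0, -1.1],   # d.
]
print(pick_answer(options))  # → 1
```

In the real harness setup each inner list would come from the model's log-likelihood over the tokens of the full answer, computed after applying the chat template.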
## Accessing Med42 and Reporting Issues Please report any software "bug" or other problems through one of the following means: - Reporting issues with the model: [https://github.com/m42health/med42](https://github.com/m42health/med42) - Reporting risky content generated by the model, bugs and/or any security concerns: [https://forms.office.com/r/fPY4Ksecgf](https://forms.office.com/r/fPY4Ksecgf) - M42’s privacy policy available at [https://m42.ae/privacy-policy/](https://m42.ae/privacy-policy/) - Reporting violations of the Acceptable Use Policy or unlicensed uses of Med42: <[email protected]> ## Model Acknowledgements We thank the Torch FSDP team for their robust distributed training framework, the EleutherAI harness team for their valuable evaluation tools, and the Hugging Face Alignment team for their contributions to responsible AI development. ## Model Citation ``` @article{christophe2024med42, title={Med42-v2 - A Suite of Clinically-aligned Large Language Models}, author={Christophe, Cl{\'e}ment and Raha, Tathagata and Hayat, Nasir and Kanithi, Praveen and Al-Mahrooqi, Ahmed and Munjal, Prateek and Saadi, Nada and Javed, Hamza and Salman, Umar and Maslenkova, Svetlana and Pimentel, Marco and Rajan, Ronnie and Khan, Shadab}, year={2024} } ```
[ "MEDQA" ]
unicamp-dl/translation-pt-en-t5
unicamp-dl
translation
[ "transformers", "pytorch", "t5", "text2text-generation", "translation", "en", "pt", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:05Z
2021-10-11T03:47:04+00:00
397
25
--- datasets: - EMEA - ParaCrawl 99k - CAPES - Scielo - JRC-Acquis - Biomedical Domain Corpora language: - en - pt metrics: - bleu tags: - translation --- # Introduction This repository provides an implementation of T5 for PT-EN translation using a modest hardware setup. We propose some changes to the tokenizer and post-processing that improve the results, and we used a Portuguese pretrained model for the translation. You can find more information in [our repository](https://github.com/unicamp-dl/Lite-T5-Translation). Also, check [our paper](https://aclanthology.org/2020.wmt-1.90.pdf)! # Usage Just follow the "Use in Transformers" instructions. It is necessary to prepend a few words to the input to define the task for T5. You can also create a pipeline for it. An example with the phrase "Eu gosto de comer arroz" is: ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline tokenizer = AutoTokenizer.from_pretrained("unicamp-dl/translation-pt-en-t5") model = AutoModelForSeq2SeqLM.from_pretrained("unicamp-dl/translation-pt-en-t5") pten_pipeline = pipeline('text2text-generation', model=model, tokenizer=tokenizer) pten_pipeline("translate Portuguese to English: Eu gosto de comer arroz.") ``` # Citation ```bibtex @inproceedings{lopes-etal-2020-lite, title = "Lite Training Strategies for {P}ortuguese-{E}nglish and {E}nglish-{P}ortuguese Translation", author = "Lopes, Alexandre and Nogueira, Rodrigo and Lotufo, Roberto and Pedrini, Helio", booktitle = "Proceedings of the Fifth Conference on Machine Translation", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.wmt-1.90", pages = "833--840", } ```
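Since T5 needs the task spelled out in the input text, batching several sentences just means prepending the same prefix to each. A minimal helper (the prefix string is the one used in the usage example above):

```python
TASK_PREFIX = "translate Portuguese to English: "

def prepare_batch(sentences):
    """Prepend the T5 task prefix to each input sentence."""
    return [TASK_PREFIX + s.strip() for s in sentences]

batch = prepare_batch([
    "Eu gosto de comer arroz.",
    "O tempo está bonito hoje.",
])
print(batch[0])  # → translate Portuguese to English: Eu gosto de comer arroz.
```

The resulting strings can be passed directly to the `pten_pipeline` created in the example above.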
[ "SCIELO" ]
nvidia/MambaVision-T2-1K
nvidia
image-feature-extraction
[ "transformers", "safetensors", "mambavision", "image-classification", "image-feature-extraction", "custom_code", "dataset:ILSVRC/imagenet-1k", "arxiv:2407.08083", "license:other", "autotrain_compatible", "region:us" ]
2024-07-14T20:52:52Z
2024-07-25T16:54:02+00:00
397
4
--- datasets: - ILSVRC/imagenet-1k license: other license_name: nvclv1 license_link: LICENSE pipeline_tag: image-feature-extraction --- [**MambaVision: A Hybrid Mamba-Transformer Vision Backbone**](https://arxiv.org/abs/2407.08083). ## Model Overview We have developed the first hybrid model for computer vision which leverages the strengths of Mamba and Transformers. Specifically, our core contribution includes redesigning the Mamba formulation to enhance its capability for efficient modeling of visual features. In addition, we conducted a comprehensive ablation study on the feasibility of integrating Vision Transformers (ViT) with Mamba. Our results demonstrate that equipping the Mamba architecture with several self-attention blocks at the final layers greatly improves the modeling capacity to capture long-range spatial dependencies. Based on our findings, we introduce a family of MambaVision models with a hierarchical architecture to meet various design criteria. ## Model Performance MambaVision demonstrates a strong performance by achieving a new SOTA Pareto-front in terms of Top-1 accuracy and throughput. <p align="center"> <img src="https://github.com/NVlabs/MambaVision/assets/26806394/79dcf841-3966-4b77-883d-76cd5e1d4320" width=70% height=70% class="center"> </p> ## Model Usage It is highly recommended to install the requirements for MambaVision by running the following: ```Bash pip install mambavision ``` For each model, we offer two variants for image classification and feature extraction that can be imported with 1 line of code. ### Image Classification In the following example, we demonstrate how MambaVision can be used for image classification. 
Given the following image from [COCO dataset](https://cocodataset.org/#home) val set as an input: <p align="center"> <img src="https://cdn-uploads.huggingface.co/production/uploads/64414b62603214724ebd2636/4duSnqLf4lrNiAHczSmAN.jpeg" width=70% height=70% class="center"> </p> The following snippet can be used for image classification: ```Python from transformers import AutoModelForImageClassification from PIL import Image from timm.data.transforms_factory import create_transform import requests model = AutoModelForImageClassification.from_pretrained("nvidia/MambaVision-T2-1K", trust_remote_code=True) # eval mode for inference model.cuda().eval() # prepare image for the model url = 'http://images.cocodataset.org/val2017/000000020247.jpg' image = Image.open(requests.get(url, stream=True).raw) input_resolution = (3, 224, 224) # MambaVision supports any input resolutions transform = create_transform(input_size=input_resolution, is_training=False, mean=model.config.mean, std=model.config.std, crop_mode=model.config.crop_mode, crop_pct=model.config.crop_pct) inputs = transform(image).unsqueeze(0).cuda() # model inference outputs = model(inputs) logits = outputs['logits'] predicted_class_idx = logits.argmax(-1).item() print("Predicted class:", model.config.id2label[predicted_class_idx]) ``` The predicted label is ```brown bear, bruin, Ursus arctos.``` ### Feature Extraction MambaVision can also be used as a generic feature extractor. Specifically, we can extract the outputs of each stage of model (4 stages) as well as the final averaged-pool features that are flattened. 
The following snippet can be used for feature extraction: ```Python from transformers import AutoModel from PIL import Image from timm.data.transforms_factory import create_transform import requests model = AutoModel.from_pretrained("nvidia/MambaVision-T2-1K", trust_remote_code=True) # eval mode for inference model.cuda().eval() # prepare image for the model url = 'http://images.cocodataset.org/val2017/000000020247.jpg' image = Image.open(requests.get(url, stream=True).raw) input_resolution = (3, 224, 224) # MambaVision supports any input resolutions transform = create_transform(input_size=input_resolution, is_training=False, mean=model.config.mean, std=model.config.std, crop_mode=model.config.crop_mode, crop_pct=model.config.crop_pct) inputs = transform(image).unsqueeze(0).cuda() # model inference out_avg_pool, features = model(inputs) print("Size of the averaged pool features:", out_avg_pool.size()) # torch.Size([1, 640]) print("Number of stages in extracted features:", len(features)) # 4 stages print("Size of extracted features in stage 1:", features[0].size()) # torch.Size([1, 80, 56, 56]) print("Size of extracted features in stage 4:", features[3].size()) # torch.Size([1, 640, 7, 7]) ``` ### License: [NVIDIA Source Code License-NC](https://huggingface.co/nvidia/MambaVision-T-1K/blob/main/LICENSE)
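The stage shapes printed above follow the usual hierarchical-backbone pattern: after a 4x stem reduction, each stage halves the spatial size and doubles the channel count. A rough sketch that reproduces the reported stage-1 and stage-4 shapes — note the intermediate widths of 160 and 320 are inferred from this pattern, not read from the model config:

```python
def expected_stage_shapes(input_hw=224, base_channels=80, stem_stride=4, num_stages=4):
    """Infer per-stage feature-map shapes for a hierarchical backbone.

    Assumes the stem downsamples by `stem_stride`, after which each stage
    halves the spatial resolution and doubles the channel count.
    """
    shapes = []
    hw = input_hw // stem_stride   # 224 / 4 = 56 at stage 1
    ch = base_channels
    for _ in range(num_stages):
        shapes.append((1, ch, hw, hw))
        hw //= 2
        ch *= 2
    return shapes

shapes = expected_stage_shapes()
print(shapes[0])  # → (1, 80, 56, 56)
print(shapes[3])  # → (1, 640, 7, 7)
```

These match the stage-1 and stage-4 sizes printed by the feature-extraction snippet, including the 640-dimensional averaged-pool features.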
[ "BEAR" ]
mradermacher/Medical-Data-Question-Answers-finetuned-gpt2-i1-GGUF
mradermacher
null
[ "transformers", "gguf", "en", "base_model:maheshhuggingface/Medical-Data-Question-Answers-finetuned-gpt2", "base_model:quantized:maheshhuggingface/Medical-Data-Question-Answers-finetuned-gpt2", "endpoints_compatible", "region:us", "imatrix" ]
2025-03-16T22:45:09Z
2025-03-16T22:49:10+00:00
397
0
--- base_model: maheshhuggingface/Medical-Data-Question-Answers-finetuned-gpt2 language: - en library_name: transformers tags: [] quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/maheshhuggingface/Medical-Data-Question-Answers-finetuned-gpt2 <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Medical-Data-Question-Answers-finetuned-gpt2-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Medical-Data-Question-Answers-finetuned-gpt2-i1-GGUF/resolve/main/Medical-Data-Question-Answers-finetuned-gpt2.i1-IQ1_S.gguf) | i1-IQ1_S | 0.2 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Medical-Data-Question-Answers-finetuned-gpt2-i1-GGUF/resolve/main/Medical-Data-Question-Answers-finetuned-gpt2.i1-IQ1_M.gguf) | i1-IQ1_M | 0.2 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Medical-Data-Question-Answers-finetuned-gpt2-i1-GGUF/resolve/main/Medical-Data-Question-Answers-finetuned-gpt2.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/Medical-Data-Question-Answers-finetuned-gpt2-i1-GGUF/resolve/main/Medical-Data-Question-Answers-finetuned-gpt2.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/Medical-Data-Question-Answers-finetuned-gpt2-i1-GGUF/resolve/main/Medical-Data-Question-Answers-finetuned-gpt2.i1-IQ2_S.gguf) | i1-IQ2_S | 0.2 | | | 
[GGUF](https://huggingface.co/mradermacher/Medical-Data-Question-Answers-finetuned-gpt2-i1-GGUF/resolve/main/Medical-Data-Question-Answers-finetuned-gpt2.i1-IQ2_M.gguf) | i1-IQ2_M | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/Medical-Data-Question-Answers-finetuned-gpt2-i1-GGUF/resolve/main/Medical-Data-Question-Answers-finetuned-gpt2.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.2 | very low quality | | [GGUF](https://huggingface.co/mradermacher/Medical-Data-Question-Answers-finetuned-gpt2-i1-GGUF/resolve/main/Medical-Data-Question-Answers-finetuned-gpt2.i1-Q2_K.gguf) | i1-Q2_K | 0.2 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Medical-Data-Question-Answers-finetuned-gpt2-i1-GGUF/resolve/main/Medical-Data-Question-Answers-finetuned-gpt2.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.2 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Medical-Data-Question-Answers-finetuned-gpt2-i1-GGUF/resolve/main/Medical-Data-Question-Answers-finetuned-gpt2.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/Medical-Data-Question-Answers-finetuned-gpt2-i1-GGUF/resolve/main/Medical-Data-Question-Answers-finetuned-gpt2.i1-IQ3_S.gguf) | i1-IQ3_S | 0.2 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Medical-Data-Question-Answers-finetuned-gpt2-i1-GGUF/resolve/main/Medical-Data-Question-Answers-finetuned-gpt2.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.2 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Medical-Data-Question-Answers-finetuned-gpt2-i1-GGUF/resolve/main/Medical-Data-Question-Answers-finetuned-gpt2.i1-IQ3_M.gguf) | i1-IQ3_M | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/Medical-Data-Question-Answers-finetuned-gpt2-i1-GGUF/resolve/main/Medical-Data-Question-Answers-finetuned-gpt2.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.2 | IQ3_S probably better | | 
[GGUF](https://huggingface.co/mradermacher/Medical-Data-Question-Answers-finetuned-gpt2-i1-GGUF/resolve/main/Medical-Data-Question-Answers-finetuned-gpt2.i1-Q3_K_L.gguf) | i1-Q3_K_L | 0.2 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Medical-Data-Question-Answers-finetuned-gpt2-i1-GGUF/resolve/main/Medical-Data-Question-Answers-finetuned-gpt2.i1-IQ4_XS.gguf) | i1-IQ4_XS | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/Medical-Data-Question-Answers-finetuned-gpt2-i1-GGUF/resolve/main/Medical-Data-Question-Answers-finetuned-gpt2.i1-IQ4_NL.gguf) | i1-IQ4_NL | 0.2 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/Medical-Data-Question-Answers-finetuned-gpt2-i1-GGUF/resolve/main/Medical-Data-Question-Answers-finetuned-gpt2.i1-Q4_0.gguf) | i1-Q4_0 | 0.2 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Medical-Data-Question-Answers-finetuned-gpt2-i1-GGUF/resolve/main/Medical-Data-Question-Answers-finetuned-gpt2.i1-Q4_K_S.gguf) | i1-Q4_K_S | 0.2 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Medical-Data-Question-Answers-finetuned-gpt2-i1-GGUF/resolve/main/Medical-Data-Question-Answers-finetuned-gpt2.i1-Q4_K_M.gguf) | i1-Q4_K_M | 0.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Medical-Data-Question-Answers-finetuned-gpt2-i1-GGUF/resolve/main/Medical-Data-Question-Answers-finetuned-gpt2.i1-Q4_1.gguf) | i1-Q4_1 | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/Medical-Data-Question-Answers-finetuned-gpt2-i1-GGUF/resolve/main/Medical-Data-Question-Answers-finetuned-gpt2.i1-Q5_K_S.gguf) | i1-Q5_K_S | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/Medical-Data-Question-Answers-finetuned-gpt2-i1-GGUF/resolve/main/Medical-Data-Question-Answers-finetuned-gpt2.i1-Q5_K_M.gguf) | i1-Q5_K_M | 0.2 | | | 
[GGUF](https://huggingface.co/mradermacher/Medical-Data-Question-Answers-finetuned-gpt2-i1-GGUF/resolve/main/Medical-Data-Question-Answers-finetuned-gpt2.i1-Q6_K.gguf) | i1-Q6_K | 0.2 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
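The usage section above defers to TheBloke's READMEs for how to concatenate multi-part files; the mechanics are simply byte-level concatenation in part order with `cat`. A sketch with stand-in file names (real split files follow whatever naming scheme the uploader chose):

```shell
# Stand-in "parts" (in reality these are the downloaded split GGUF files;
# the names below are placeholders, not actual files from this repo).
printf 'part-one-' > model.gguf.part1of2
printf 'part-two'  > model.gguf.part2of2

# Rejoin the parts into a single GGUF, strictly in part order.
cat model.gguf.part1of2 model.gguf.part2of2 > model.gguf

cat model.gguf  # → part-one-part-two
```

llama.cpp can then load the combined `model.gguf` directly; the split parts can be deleted afterwards.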
[ "MEDICAL DATA" ]
TheBloke/Vigostral-7B-Chat-GGUF
TheBloke
text-generation
[ "transformers", "gguf", "mistral", "LLM", "finetuned", "text-generation", "fr", "base_model:bofenghuang/vigostral-7b-chat", "base_model:quantized:bofenghuang/vigostral-7b-chat", "license:apache-2.0", "region:us" ]
2023-10-24T16:19:47Z
2023-10-24T16:25:05+00:00
390
5
--- base_model: bofenghuang/vigostral-7b-chat language: fr license: apache-2.0 model_name: Vigostral 7B Chat pipeline_tag: text-generation tags: - LLM - finetuned inference: false model_creator: bofeng huang model_type: mistral prompt_template: "<s>[INST] <<SYS>>\nVous êtes Vigogne, un assistant IA créé par Zaion\ \ Lab. Vous suivez extrêmement bien les instructions. Aidez autant que vous le pouvez.\n\ <</SYS>>\n\n{prompt} [/INST] \n" quantized_by: TheBloke --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Vigostral 7B Chat - GGUF - Model creator: [bofeng huang](https://huggingface.co/bofenghuang) - Original model: [Vigostral 7B Chat](https://huggingface.co/bofenghuang/vigostral-7b-chat) <!-- description start --> ## Description This repo contains GGUF format model files for [bofeng huang's Vigostral 7B Chat](https://huggingface.co/bofenghuang/vigostral-7b-chat). 
<!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. 
<!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Vigostral-7B-Chat-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Vigostral-7B-Chat-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Vigostral-7B-Chat-GGUF) * [bofeng huang's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/bofenghuang/vigostral-7b-chat) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Vigogne-Llama-2-Chat ``` <s>[INST] <<SYS>> Vous êtes Vigogne, un assistant IA créé par Zaion Lab. Vous suivez extrêmement bien les instructions. Aidez autant que vous le pouvez. <</SYS>> {prompt} [/INST] ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. 
This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. </details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [vigostral-7b-chat.Q2_K.gguf](https://huggingface.co/TheBloke/Vigostral-7B-Chat-GGUF/blob/main/vigostral-7b-chat.Q2_K.gguf) | Q2_K | 2 | 3.08 GB| 5.58 GB | smallest, significant quality loss - not recommended for most purposes | | [vigostral-7b-chat.Q3_K_S.gguf](https://huggingface.co/TheBloke/Vigostral-7B-Chat-GGUF/blob/main/vigostral-7b-chat.Q3_K_S.gguf) | Q3_K_S | 3 | 3.16 GB| 5.66 GB | very small, high quality loss | | [vigostral-7b-chat.Q3_K_M.gguf](https://huggingface.co/TheBloke/Vigostral-7B-Chat-GGUF/blob/main/vigostral-7b-chat.Q3_K_M.gguf) | Q3_K_M | 3 | 3.52 GB| 6.02 GB | very small, high quality loss | | [vigostral-7b-chat.Q3_K_L.gguf](https://huggingface.co/TheBloke/Vigostral-7B-Chat-GGUF/blob/main/vigostral-7b-chat.Q3_K_L.gguf) | Q3_K_L | 3 | 3.82 GB| 6.32 GB | small, substantial quality loss | | [vigostral-7b-chat.Q4_0.gguf](https://huggingface.co/TheBloke/Vigostral-7B-Chat-GGUF/blob/main/vigostral-7b-chat.Q4_0.gguf) | Q4_0 | 4 | 4.11 GB| 6.61 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [vigostral-7b-chat.Q4_K_S.gguf](https://huggingface.co/TheBloke/Vigostral-7B-Chat-GGUF/blob/main/vigostral-7b-chat.Q4_K_S.gguf) | Q4_K_S | 4 | 4.14 GB| 6.64 GB | small, greater quality loss | | [vigostral-7b-chat.Q4_K_M.gguf](https://huggingface.co/TheBloke/Vigostral-7B-Chat-GGUF/blob/main/vigostral-7b-chat.Q4_K_M.gguf) | Q4_K_M | 4 | 
4.37 GB| 6.87 GB | medium, balanced quality - recommended | | [vigostral-7b-chat.Q5_0.gguf](https://huggingface.co/TheBloke/Vigostral-7B-Chat-GGUF/blob/main/vigostral-7b-chat.Q5_0.gguf) | Q5_0 | 5 | 5.00 GB| 7.50 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [vigostral-7b-chat.Q5_K_S.gguf](https://huggingface.co/TheBloke/Vigostral-7B-Chat-GGUF/blob/main/vigostral-7b-chat.Q5_K_S.gguf) | Q5_K_S | 5 | 5.00 GB| 7.50 GB | large, low quality loss - recommended | | [vigostral-7b-chat.Q5_K_M.gguf](https://huggingface.co/TheBloke/Vigostral-7B-Chat-GGUF/blob/main/vigostral-7b-chat.Q5_K_M.gguf) | Q5_K_M | 5 | 5.13 GB| 7.63 GB | large, very low quality loss - recommended | | [vigostral-7b-chat.Q6_K.gguf](https://huggingface.co/TheBloke/Vigostral-7B-Chat-GGUF/blob/main/vigostral-7b-chat.Q6_K.gguf) | Q6_K | 6 | 5.94 GB| 8.44 GB | very large, extremely low quality loss | | [vigostral-7b-chat.Q8_0.gguf](https://huggingface.co/TheBloke/Vigostral-7B-Chat-GGUF/blob/main/vigostral-7b-chat.Q8_0.gguf) | Q8_0 | 8 | 7.70 GB| 10.20 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/Vigostral-7B-Chat-GGUF and below it, a specific filename to download, such as: vigostral-7b-chat.Q4_K_M.gguf. Then click Download. 
### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/Vigostral-7B-Chat-GGUF vigostral-7b-chat.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/Vigostral-7B-Chat-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Vigostral-7B-Chat-GGUF vigostral-7b-chat.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m vigostral-7b-chat.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<s>[INST] <<SYS>>\nVous êtes Vigogne, un assistant IA créé par Zaion Lab. Vous suivez extrêmement bien les instructions. 
Aidez autant que vous le pouvez.\n<</SYS>>\n\n{prompt} [/INST]" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. ### How to load this model in Python code, using ctransformers #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install ctransformers # Or with CUDA GPU acceleration pip install ctransformers[cuda] # Or with AMD ROCm GPU acceleration (Linux only) CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems only CT_METAL=1 pip install ctransformers --no-binary ctransformers ``` #### Simple ctransformers example code ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. 
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Vigostral-7B-Chat-GGUF", model_file="vigostral-7b-chat.Q4_K_M.gguf", model_type="mistral", gpu_layers=50)

print(llm("AI is going to"))
```

## How to use with LangChain

Here are guides on using llama-cpp-python and ctransformers with LangChain:

* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)

<!-- README_GGUF.md-how-to-run end -->

<!-- footer start -->
<!-- 200823 -->

## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine-tuning/training.

If you're able and willing to contribute, it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. 
<!-- footer end --> <!-- original-model-card start --> # Original model card: bofeng huang's Vigostral 7B Chat # Vigostral-7B-Chat: A French chat LLM ***Preview*** of Vigostral-7B-Chat, a new addition to the Vigogne LLMs family, fine-tuned on [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1). For more information, please visit the [Github repository](https://github.com/bofenghuang/vigogne). **License**: A significant portion of the training data is distilled from GPT-3.5-Turbo and GPT-4, kindly use it cautiously to avoid any violations of OpenAI's [terms of use](https://openai.com/policies/terms-of-use). ## Prompt Template We used a prompt template adapted from the chat format of Llama-2. You can apply this formatting using the [chat template](https://huggingface.co/docs/transformers/main/chat_templating) through the `apply_chat_template()` method. ```python from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("bofenghuang/vigostral-7b-chat") conversation = [ {"role": "user", "content": "Bonjour ! Comment ça va aujourd'hui ?"}, {"role": "assistant", "content": "Bonjour ! Je suis une IA, donc je n'ai pas de sentiments, mais je suis prêt à vous aider. Comment puis-je vous assister aujourd'hui ?"}, {"role": "user", "content": "Quelle est la hauteur de la Tour Eiffel ?"}, {"role": "assistant", "content": "La Tour Eiffel mesure environ 330 mètres de hauteur."}, {"role": "user", "content": "Comment monter en haut ?"}, ] print(tokenizer.apply_chat_template(conversation, tokenize=False, add_generation_prompt=True)) ``` You will get ``` <s>[INST] <<SYS>> Vous êtes Vigogne, un assistant IA créé par Zaion Lab. Vous suivez extrêmement bien les instructions. Aidez autant que vous le pouvez. <</SYS>> Bonjour ! Comment ça va aujourd'hui ? [/INST] Bonjour ! Je suis une IA, donc je n'ai pas de sentiments, mais je suis prêt à vous aider. Comment puis-je vous assister aujourd'hui ? </s>[INST] Quelle est la hauteur de la Tour Eiffel ? 
[/INST] La Tour Eiffel mesure environ 330 mètres de hauteur. </s>[INST] Comment monter en haut ? [/INST] ``` ## Usage ### Inference using the unquantized model with 🤗 Transformers ```python from typing import Dict, List, Optional import torch from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig, TextStreamer model_name_or_path = "bofenghuang/vigostral-7b-chat" tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, padding_side="right", use_fast=False) model = AutoModelForCausalLM.from_pretrained(model_name_or_path, torch_dtype=torch.float16, device_map="auto") streamer = TextStreamer(tokenizer, timeout=10.0, skip_prompt=True, skip_special_tokens=True) def chat( query: str, history: Optional[List[Dict]] = None, temperature: float = 0.7, top_p: float = 1.0, top_k: float = 0, repetition_penalty: float = 1.1, max_new_tokens: int = 1024, **kwargs, ): if history is None: history = [] history.append({"role": "user", "content": query}) input_ids = tokenizer.apply_chat_template(history, return_tensors="pt").to(model.device) input_length = input_ids.shape[1] generated_outputs = model.generate( input_ids=input_ids, generation_config=GenerationConfig( temperature=temperature, do_sample=temperature > 0.0, top_p=top_p, top_k=top_k, repetition_penalty=repetition_penalty, max_new_tokens=max_new_tokens, pad_token_id=tokenizer.eos_token_id, **kwargs, ), streamer=streamer, return_dict_in_generate=True, ) generated_tokens = generated_outputs.sequences[0, input_length:] generated_text = tokenizer.decode(generated_tokens, skip_special_tokens=True) history.append({"role": "assistant", "content": generated_text}) return generated_text, history # 1st round response, history = chat("Un escargot parcourt 100 mètres en 5 heures. Quelle est sa vitesse ?", history=None) # Pour calculer la vitesse d'un escargot qui parcourt 100 mètres en 5 heures, nous devons diviser la distance parcourue par le temps écoulé. 
Voici comment procédons : # # Vitesse = Distance / Temps # # Dans ce cas, la distance est de 100 mètres et le temps est de 5 heures. Donc, nous avons : # # Vitesse = 100 mètres / 5 heures # Vitesse = 20 mètres par heure # # L'escargot parcourt donc une vitesse de 20 mètres par heure. # 2nd round response, history = chat("Quand il peut dépasser le lapin ?", history=history) # Malheureusement, je ne peux pas répondre à cette question car elle contient des informations inexactes ou incorrectes. La vitesse de l'escargot est de 20 mètres par heure, tandis que la vitesse du lapin n'est pas mentionnée dans votre question initiale. Pour comparer les vitesses des deux animaux, nous aurions besoin de connaître la vitesse du lapin. # 3rd round response, history = chat("Écris une histoire imaginative qui met en scène une compétition de course entre un escargot et un lapin.", history=history) # Il était une fois deux athlètes extraordinaires : un escargot nommé Escargo et un lapin appelé Lapin Rapide. Un jour, ils ont décidé de participer à une grande course pour déterminer qui était le plus rapide des deux. Tout le monde était curieux de savoir qui serait le gagnant, car ils étaient tous émerveillés par la vitesse de ces deux champions. # # La course commença dans une forêt dense, avec Escargo départageant un peu lentement, car il faut être prudent lorsqu'on est escargot. Mais au fur et à mesure qu'il avançait, il commença à ralentir encore plus. Les spectateurs commencèrent à pencher pour Lapin Rapide, qui fonçait rapidement vers la ligne d'arrivée. # # Mais soudain, Escargo se transforma en une force incroyable. Au lieu de marcher lentement comme un escargot, il courait à toute allure, faisant preuve d'une vitesse surprenante. Les spectateurs essayaient de se remémorer leurs notes de mathématiques pour calculer sa vélocité actuelle. # # Pendant ce temps, Lapin Rapide ralentissait légèrement, trouvant l'air frais trop confortable pour continuer à courir aussi vite. 
Il décida alors de prendre quelques pauses pour profiter de son environnement. # # Escargo continuait à courir à toute vitesse, en dépit de son handicap d'être un escargot. Les spectateurs étaient émerveillés par sa persévérance et sa volonté de gagner. Finalement, Escargo franchit la ligne d'arrivée en premier, et tous criaurent en joie. # # Les habitants de la forêt décidèrent de lui décerner le titre d'"athlète le plus courageux" pour sa performance incroyable. Quant à Lapin Rapide, il fut content de sa deuxième place, se disant simplement que les pauses étaient bien plus agréables que la compétition. Et tous vécurent heureux et satisfaits de cette course mémorable. ``` You can also use the Google Colab Notebook provided below. <a href="https://colab.research.google.com/github/bofenghuang/vigogne/blob/main/notebooks/infer_chat.ipynb" target="_blank"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ### Inference using the unquantized model with vLLM Set up an OpenAI-compatible server with the following command: ```bash # Install vLLM # This may take 5-10 minutes. # pip install vllm # Start server for Vigostral-Chat models python -m vllm.entrypoints.openai.api_server --model bofenghuang/vigostral-7b-chat # List models # curl http://localhost:8000/v1/models ``` Query the model using the openai python package. ```python import openai # Modify OpenAI's API key and API base to use vLLM's API server. openai.api_key = "EMPTY" openai.api_base = "http://localhost:8000/v1" # First model models = openai.Model.list() model = models["data"][0]["id"] # Chat completion API chat_completion = openai.ChatCompletion.create( model=model, messages=[ {"role": "user", "content": "Parle-moi de toi-même."}, ], max_tokens=1024, temperature=0.7, ) print("Chat completion results:", chat_completion) ``` ## Limitations Vigogne is still under development, and there are many limitations that have to be addressed. 
Please note that it is possible that the model generates harmful or biased content, incorrect information or generally unhelpful answers. <!-- original-model-card end -->
[ "CAS" ]
GPT4All-Community/Replete-LLM-Qwen2-7b-GGUF
GPT4All-Community
text-generation
[ "transformers", "gguf", "text-generation-inference", "unsloth", "GGUF", "GPT4All-community", "GPT4All", "conversational", "coding", "merge", "text-generation", "en", "dataset:Replete-AI/Everything_Instruct_8k_context_filtered", "license:apache-2.0", "region:us" ]
2024-08-10T17:56:19Z
2024-08-12T23:49:37+00:00
387
0
---
base_model: Replete-AI/Replete-LLM-Qwen2-7b
datasets:
- Replete-AI/Everything_Instruct_8k_context_filtered
language:
- en
library_name: transformers
license: apache-2.0
model_name: Replete-LLM-Qwen2-7b
pipeline_tag: text-generation
tags:
- text-generation-inference
- transformers
- unsloth
- GGUF
- GPT4All-community
- GPT4All
- conversational
- coding
- merge
inference: false
model_creator: Replete-AI
model_type: qwen2
quantized_by: ThiloteE
---

> [!NOTE]
> This is a model that is assumed to perform well, but may require more testing and user feedback. Be aware that only models featured within the GPT4All GUI are curated and officially supported by Nomic. Use at your own risk.

<!-- ### quantize_version: 3 -->
<!-- ### convert_type: hf -->

# About

- Static quants of https://huggingface.co/Replete-AI/Replete-LLM-Qwen2-7b at commit [e356943](https://huggingface.co/Replete-AI/Replete-LLM-Qwen2-7b/commit/e3569433b23fde853683ad61f342d2c1bd01d60a)
- Quantized by [ThiloteE](https://huggingface.co/ThiloteE) with llama.cpp commit [e09a800](https://github.com/ggerganov/llama.cpp/commit/e09a800f9a9b19c73aa78e03b4c4be8ed988f3e6)

These quants were created with a customized configuration that has been proven not to cause visible end-of-string (EOS) tokens during inference with [GPT4All](https://www.nomic.ai/gpt4all).
The config.json, generation_config.json, and tokenizer_config.json differ from the configuration found in the original model's repository at the time these quants were created.

# Prompt Template (for GPT4All)

Example System Prompt:
```
<|im_start|>system
Below is an instruction that describes a task. Write a response that appropriately completes the request.<|im_end|>
```

Chat Template:
```
<|im_start|>user
%1<|im_end|>
<|im_start|>assistant
%2<|im_end|>
```

# Context Length

`32768`

Use a lower value during inference if you do not have enough RAM or VRAM.
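For clarity, here is a minimal sketch of how the `%1` and `%2` placeholders in the chat template above are filled in: each turn is the template with `%1` replaced by the user message and `%2` by the model's reply. This is a simplified illustration, not GPT4All's actual implementation:

```python
def render_turn(chat_template: str, user_msg: str, assistant_msg: str = "") -> str:
    """Substitute the GPT4All placeholders: %1 = user message, %2 = assistant reply."""
    return chat_template.replace("%1", user_msg).replace("%2", assistant_msg)

template = (
    "<|im_start|>user\n%1<|im_end|>\n"
    "<|im_start|>assistant\n%2<|im_end|>\n"
)
print(render_turn(template, "Write a haiku about autumn."))
```

During generation, `%2` is left empty so that the model continues from the open assistant turn.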
# Provided Quants

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/GPT4All-Community/Replete-LLM-Qwen2-7b-GGUF/resolve/main/Replete-LLM-Qwen2-7b-Q4_0.gguf) | Q4_0 | 5.44 | fast, recommended |

# About GGUF

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/DiscoLM_German_7b_v1-GGUF) for more details, including on how to concatenate multi-part files.

Here is a handy graph by ikawrakow comparing some quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

# Thanks

I thank Mradermacher and TheBloke for the inspiration for this model card and for their contributions to open source, and 3Simplex for lots of help along the way.
Shoutout to the GPT4All and llama.cpp communities :-)

------

<!-- footer end -->
<!-- original-model-card start -->

------
------

# Original Model card:

---
license: apache-2.0
base_model:
- Qwen/Qwen2-7B
datasets:
- Replete-AI/Everything_Instruct_8k_context_filtered
tags:
- unsloth
language:
- en
---

Replete-LLM-Qwen2-7b

![image/png](https://cdn-uploads.huggingface.co/production/uploads/642cc1c253e76b4c2286c58e/q9gC-_O4huL2pK4nY-Y2x.png)

Thank you to TensorDock for sponsoring **Replete-LLM**; you can check out their website for cloud compute rental below.
- https://tensordock.com
_____________________________________________________________

**Replete-LLM** is **Replete-AI**'s flagship model.
We take pride in releasing a fully open-source, low-parameter, and competitive AI model that not only surpasses its predecessor **Qwen2-7B-Instruct** in performance, but also competes with (if not surpasses) closed-source flagship models such as **gpt-3.5-turbo**, as well as open-source models such as **gemma-2-9b-it** and **Meta-Llama-3.1-8B-Instruct**, in terms of overall performance across all fields and categories.

You can find the dataset that this model was trained on linked below:

- https://huggingface.co/datasets/Replete-AI/Everything_Instruct_8k_context_filtered

Try bartowski's quantizations:

- https://huggingface.co/bartowski/Replete-LLM-Qwen2-7b-exl2
- https://huggingface.co/bartowski/Replete-LLM-Qwen2-7b-GGUF

Can't run the model locally? Then use the Hugging Face Space instead:

- https://huggingface.co/spaces/rombodawg/Replete-LLM-Qwen2-7b

Some statistics about the data the model was trained on can be found in the image and details below, while a more comprehensive look can be found in the model card for the dataset (linked above):

![image/png](https://cdn-uploads.huggingface.co/production/uploads/642cc1c253e76b4c2286c58e/75SR21J3-zbTGKYbeoBzX.png)

**Replete-LLM-Qwen2-7b** is a versatile model fine-tuned to excel on any imaginable task. The following types of generations were included in the fine-tuning process:

- **Science**: (General, Physical Reasoning)
- **Social Media**: (Reddit, Twitter)
- **General Knowledge**: (Character-Codex), (Famous Quotes), (Steam Video Games), (How-To? Explanations)
- **Cooking**: (Cooking Preferences, Recipes)
- **Writing**: (Poetry, Essays, General Writing)
- **Medicine**: (General Medical Data)
- **History**: (General Historical Data)
- **Law**: (Legal Q&A)
- **Role-Play**: (Couple-RP, Roleplay Conversations)
- **News**: (News Generation)
- **Coding**: (3 million rows of coding data in over 100 coding languages)
- **Math**: (Math data from TIGER-Lab/MathInstruct)
- **Function Calling**: (Function calling data from "glaiveai/glaive-function-calling-v2")
- **General Instruction**: (All of teknium/OpenHermes-2.5 fully filtered and uncensored)

______________________________________________________________________________________________

## Prompt Template: ChatML

```
<|im_start|>system
{}<|im_end|>
<|im_start|>user
{}<|im_end|>
<|im_start|>assistant
{}
```

## End token (eot_token)

```
<|endoftext|>
```

______________________________________________________________________________________________

Want to know the secret sauce of how this model was made? Find the write-up below

**Continuous Fine-tuning Without Loss Using Lora and Mergekit**

https://docs.google.com/document/d/1OjbjU5AOz4Ftn9xHQrX3oFQGhQ6RDUuXQipnQ9gn6tU/edit?usp=sharing

______________________________________________________________________________________________

The code to fine-tune this AI model can be found below

- https://colab.research.google.com/drive/1vIrqH5uYDQwsJ4-OO3DErvuv4pBgVwk4?usp=sharing

- Note: this model in particular was fine-tuned using an H100 on Tensordock.com with the PyTorch OS image. In order to use Unsloth code with TensorDock, you need to run the following code (below) to reinstall the drivers on TensorDock before Unsloth works. After running the code below, your virtual machine will reset, and you will have to SSH back into it. You can then run the normal Unsloth code in order.
```python # Check Current Size !df -h /dev/shm # Increase Size Temporarily !sudo mount -o remount,size=16G /dev/shm # Increase Size Permanently !echo "tmpfs /dev/shm tmpfs defaults,size=16G 0 0" | sudo tee -a /etc/fstab # Remount /dev/shm !sudo mount -o remount /dev/shm # Verify the Changes !df -h /dev/shm !nvcc --version !export TORCH_DISTRIBUTED_DEBUG=DETAIL !export NCCL_DEBUG=INFO !python -c "import torch; print(torch.version.cuda)" !export PATH=/usr/local/cuda/bin:$PATH !export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH !export NCCL_P2P_LEVEL=NVL !export NCCL_DEBUG=INFO !export NCCL_DEBUG_SUBSYS=ALL !export TORCH_DISTRIBUTED_DEBUG=INFO !export TORCHELASTIC_ERROR_FILE=/PATH/TO/torcherror.log !sudo apt-get remove --purge -y '^nvidia-.*' !sudo apt-get remove --purge -y '^cuda-.*' !sudo apt-get autoremove -y !sudo apt-get autoclean -y !sudo apt-get update -y !sudo apt-get install -y nvidia-driver-535 cuda-12-1 !sudo add-apt-repository ppa:graphics-drivers/ppa -y !sudo apt-get update -y !sudo apt-get update -y !sudo apt-get install -y software-properties-common !sudo add-apt-repository ppa:graphics-drivers/ppa -y !sudo apt-get update -y !latest_driver=$(apt-cache search '^nvidia-driver-[0-9]' | grep -oP 'nvidia-driver-\K[0-9]+' | sort -n | tail -1) && sudo apt-get install -y nvidia-driver-$latest_driver !sudo reboot ``` _______________________________________________________________________________ ## Join the Replete-Ai discord! We are a great and Loving community! - https://discord.gg/ZZbnsmVnjD <!-- original-model-card end --> <!-- end -->
[ "MEDICAL DATA" ]
mradermacher/Newton-7B-GGUF
mradermacher
null
[ "transformers", "gguf", "axolotl", "finetune", "qlora", "en", "dataset:hendrycks/competition_math", "dataset:allenai/ai2_arc", "dataset:camel-ai/physics", "dataset:camel-ai/chemistry", "dataset:camel-ai/biology", "dataset:camel-ai/math", "dataset:STEM-AI-mtl/Electrical-engineering", "dataset:openbookqa", "dataset:piqa", "dataset:metaeval/reclor", "dataset:mandyyyyii/scibench", "dataset:derek-thomas/ScienceQA", "dataset:sciq", "dataset:TIGER-Lab/ScienceEval", "base_model:Weyaxi/Newton-7B", "base_model:quantized:Weyaxi/Newton-7B", "license:other", "endpoints_compatible", "region:us", "conversational" ]
2024-11-09T21:03:12Z
2024-11-11T16:57:09+00:00
387
0
--- base_model: Weyaxi/Newton-7B datasets: - hendrycks/competition_math - allenai/ai2_arc - camel-ai/physics - camel-ai/chemistry - camel-ai/biology - camel-ai/math - STEM-AI-mtl/Electrical-engineering - openbookqa - piqa - metaeval/reclor - mandyyyyii/scibench - derek-thomas/ScienceQA - sciq - TIGER-Lab/ScienceEval language: - en library_name: transformers license: other tags: - axolotl - finetune - qlora quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/Weyaxi/Newton-7B <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Newton-7B-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. 
IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Newton-7B-GGUF/resolve/main/Newton-7B.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/Newton-7B-GGUF/resolve/main/Newton-7B.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Newton-7B-GGUF/resolve/main/Newton-7B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Newton-7B-GGUF/resolve/main/Newton-7B.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Newton-7B-GGUF/resolve/main/Newton-7B.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/Newton-7B-GGUF/resolve/main/Newton-7B.Q4_0_4_4.gguf) | Q4_0_4_4 | 4.2 | fast on arm, low quality | | [GGUF](https://huggingface.co/mradermacher/Newton-7B-GGUF/resolve/main/Newton-7B.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Newton-7B-GGUF/resolve/main/Newton-7B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Newton-7B-GGUF/resolve/main/Newton-7B.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/Newton-7B-GGUF/resolve/main/Newton-7B.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/Newton-7B-GGUF/resolve/main/Newton-7B.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Newton-7B-GGUF/resolve/main/Newton-7B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Newton-7B-GGUF/resolve/main/Newton-7B.f16.gguf) | f16 | 14.6 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: 
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
[ "SCIQ" ]
tensorblock/Qwen2.5-Aloe-Beta-7B-GGUF
tensorblock
question-answering
[ "transformers", "gguf", "biology", "medical", "healthcare", "TensorBlock", "GGUF", "question-answering", "en", "dataset:HPAI-BSC/Aloe-Beta-General-Collection", "dataset:HPAI-BSC/chain-of-diagnosis", "dataset:HPAI-BSC/MedS-Ins", "dataset:HPAI-BSC/ultramedical", "dataset:HPAI-BSC/pubmedqa-cot-llama31", "dataset:HPAI-BSC/medqa-cot-llama31", "dataset:HPAI-BSC/medmcqa-cot-llama31", "dataset:HPAI-BSC/headqa-cot-llama31", "dataset:HPAI-BSC/MMLU-medical-cot-llama31", "dataset:HPAI-BSC/Polymed-QA", "base_model:HPAI-BSC/Qwen2.5-Aloe-Beta-7B", "base_model:quantized:HPAI-BSC/Qwen2.5-Aloe-Beta-7B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
2025-03-06T12:09:27Z
2025-03-06T13:04:01+00:00
384
0
--- base_model: HPAI-BSC/Qwen2.5-Aloe-Beta-7B datasets: - HPAI-BSC/Aloe-Beta-General-Collection - HPAI-BSC/chain-of-diagnosis - HPAI-BSC/MedS-Ins - HPAI-BSC/ultramedical - HPAI-BSC/pubmedqa-cot-llama31 - HPAI-BSC/medqa-cot-llama31 - HPAI-BSC/medmcqa-cot-llama31 - HPAI-BSC/headqa-cot-llama31 - HPAI-BSC/MMLU-medical-cot-llama31 - HPAI-BSC/Polymed-QA - HPAI-BSC/Aloe-Beta-General-Collection - HPAI-BSC/Aloe-Beta-General-Collection language: - en library_name: transformers license: apache-2.0 pipeline_tag: question-answering tags: - biology - medical - healthcare - TensorBlock - GGUF --- <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"> Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a> </p> </div> </div> ## HPAI-BSC/Qwen2.5-Aloe-Beta-7B - GGUF This repo contains GGUF format model files for [HPAI-BSC/Qwen2.5-Aloe-Beta-7B](https://huggingface.co/HPAI-BSC/Qwen2.5-Aloe-Beta-7B). The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4823](https://github.com/ggml-org/llama.cpp/commit/5bbe6a9fe9a8796a9389c85accec89dbc4d91e39). 
<div style="text-align: left; margin: 20px 0;"> <a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;"> Run them on the TensorBlock client using your local machine ↗ </a> </div> ## Prompt template ``` <|im_start|>system {system_prompt}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ``` ## Model file specification | Filename | Quant type | File Size | Description | | -------- | ---------- | --------- | ----------- | | [Qwen2.5-Aloe-Beta-7B-Q2_K.gguf](https://huggingface.co/tensorblock/Qwen2.5-Aloe-Beta-7B-GGUF/blob/main/Qwen2.5-Aloe-Beta-7B-Q2_K.gguf) | Q2_K | 3.016 GB | smallest, significant quality loss - not recommended for most purposes | | [Qwen2.5-Aloe-Beta-7B-Q3_K_S.gguf](https://huggingface.co/tensorblock/Qwen2.5-Aloe-Beta-7B-GGUF/blob/main/Qwen2.5-Aloe-Beta-7B-Q3_K_S.gguf) | Q3_K_S | 3.492 GB | very small, high quality loss | | [Qwen2.5-Aloe-Beta-7B-Q3_K_M.gguf](https://huggingface.co/tensorblock/Qwen2.5-Aloe-Beta-7B-GGUF/blob/main/Qwen2.5-Aloe-Beta-7B-Q3_K_M.gguf) | Q3_K_M | 3.808 GB | very small, high quality loss | | [Qwen2.5-Aloe-Beta-7B-Q3_K_L.gguf](https://huggingface.co/tensorblock/Qwen2.5-Aloe-Beta-7B-GGUF/blob/main/Qwen2.5-Aloe-Beta-7B-Q3_K_L.gguf) | Q3_K_L | 4.088 GB | small, substantial quality loss | | [Qwen2.5-Aloe-Beta-7B-Q4_0.gguf](https://huggingface.co/tensorblock/Qwen2.5-Aloe-Beta-7B-GGUF/blob/main/Qwen2.5-Aloe-Beta-7B-Q4_0.gguf) | Q4_0 | 4.431 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [Qwen2.5-Aloe-Beta-7B-Q4_K_S.gguf](https://huggingface.co/tensorblock/Qwen2.5-Aloe-Beta-7B-GGUF/blob/main/Qwen2.5-Aloe-Beta-7B-Q4_K_S.gguf) | Q4_K_S | 4.458 GB | small, greater quality loss | | [Qwen2.5-Aloe-Beta-7B-Q4_K_M.gguf](https://huggingface.co/tensorblock/Qwen2.5-Aloe-Beta-7B-GGUF/blob/main/Qwen2.5-Aloe-Beta-7B-Q4_K_M.gguf) | Q4_K_M | 4.683 GB | medium, 
balanced quality - recommended |
| [Qwen2.5-Aloe-Beta-7B-Q5_0.gguf](https://huggingface.co/tensorblock/Qwen2.5-Aloe-Beta-7B-GGUF/blob/main/Qwen2.5-Aloe-Beta-7B-Q5_0.gguf) | Q5_0 | 5.315 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Qwen2.5-Aloe-Beta-7B-Q5_K_S.gguf](https://huggingface.co/tensorblock/Qwen2.5-Aloe-Beta-7B-GGUF/blob/main/Qwen2.5-Aloe-Beta-7B-Q5_K_S.gguf) | Q5_K_S | 5.315 GB | large, low quality loss - recommended |
| [Qwen2.5-Aloe-Beta-7B-Q5_K_M.gguf](https://huggingface.co/tensorblock/Qwen2.5-Aloe-Beta-7B-GGUF/blob/main/Qwen2.5-Aloe-Beta-7B-Q5_K_M.gguf) | Q5_K_M | 5.445 GB | large, very low quality loss - recommended |
| [Qwen2.5-Aloe-Beta-7B-Q6_K.gguf](https://huggingface.co/tensorblock/Qwen2.5-Aloe-Beta-7B-GGUF/blob/main/Qwen2.5-Aloe-Beta-7B-Q6_K.gguf) | Q6_K | 6.254 GB | very large, extremely low quality loss |
| [Qwen2.5-Aloe-Beta-7B-Q8_0.gguf](https://huggingface.co/tensorblock/Qwen2.5-Aloe-Beta-7B-GGUF/blob/main/Qwen2.5-Aloe-Beta-7B-Q8_0.gguf) | Q8_0 | 8.099 GB | very large, extremely low quality loss - not recommended |

## Downloading instruction

### Command line

First, install the Hugging Face Hub CLI:

```shell
pip install -U "huggingface_hub[cli]"
```

Then, download an individual model file to a local directory:

```shell
huggingface-cli download tensorblock/Qwen2.5-Aloe-Beta-7B-GGUF --include "Qwen2.5-Aloe-Beta-7B-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```

If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:

```shell
huggingface-cli download tensorblock/Qwen2.5-Aloe-Beta-7B-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
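Once a file is downloaded, the prompt template shown earlier can be assembled with plain string formatting before being passed to your GGUF runtime of choice. A minimal sketch (most runtimes can also apply the template for you):

```python
def aloe_prompt(system_prompt: str, user_prompt: str) -> str:
    """Build the ChatML-style prompt this model card specifies."""
    return (
        f"<|im_start|>system\n{system_prompt}<|im_end|>\n"
        f"<|im_start|>user\n{user_prompt}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

print(aloe_prompt("You are a helpful medical assistant.", "What is anemia?"))
```

The string ends with an open assistant turn, so the model's completion is its reply.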
[ "MEDQA", "PUBMEDQA" ]
TheDrummer/Moistral-11B-v3
TheDrummer
text-generation
[ "transformers", "safetensors", "llama", "text-generation", "not-for-all-audiences", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2024-04-24T16:04:47Z
2024-04-28T07:56:05+00:00
381
97
--- license: cc-by-nc-4.0 license_name: freeuse license_link: LICENSE tags: - not-for-all-audiences --- Introducing the [BeaverAI](https://huggingface.co/BeaverAI) team: Drummer, ToastyPigeon, xzuyn, MarsupialAI, Twistedshadows, and concedo ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/HjVYV2h_YTL9P-insb7fz.png) We proudly present... # Moistral 11B v3 💦💦💦 *The smartiest, moistiest AI yet!* ![image/webp](https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/TsjKZ17nD10xzJEzXY6Hm.webp) *An eRP model that will blast you with a rich and refreshing vocabulary of moist. Finetuned by yours truly.* (Moistral is a finetune of Sao's legendary [Fimbulvetr v2](https://huggingface.co/Sao10K/Fimbulvetr-11B-v2) model) ## GGUF https://huggingface.co/TheDrummer/Moistral-11B-v3-GGUF/tree/main IMATRIX: https://huggingface.co/MarsupialAI/Moistral-11B-v3_iMatrix_GGUF EXL2: https://huggingface.co/MarsupialAI/Moistral-11B-v3_exl2 ## What's New with v3? - Smarter and moistier! Finetuning just got finer! - Trained with an even LARGER dataset of 8K moist. - More diverse and balanced genres: - Introduced new genres: 👩‍❤️‍👩, 👨‍❤️‍👨, 🙇‍♀️💪, 🤫 - Added more Fantasy, Science Fiction, "Diversity", and "Family" ## Usage - Use Alpaca Instruct - You can use instruct... - as a character: "Go here" - as a narrator: "Go here", Drummer says - as a director: Drummer tells him to go here - as a lazy ass: go here - Moistral is optimized for the Novel / Story format. - At the start of the story, keep regenerating or continuing generation until you get the desired length. AI responses will eventually match the length. - Refer to the samples below. ## Samples of Moistral v3 NSFW... duh?
<details> <summary>Moistral Sample (2-shot) Generation: 512</summary> <blockquote>The Prince welcomes his new toy</blockquote> As Princess Lilia stepped into the throne room, the atmosphere changed, and everyone present turned their attention to the beautiful young woman who embodied the very heart of Iliac. There, on his high seat, the sorcerer prince Dik acknowledged her arrival with a cold smirk. It seemed that he admired the sight of the tall blonde maiden with golden eyes, whose athletic body was perfectly shaped under her elegant clothing. Dik spoke to the princess in a deep voice that resonated through the hall, "Welcome, Princess Lilia Ruhr. I am the Sorcerer-Prince Dik. The kingdom of Arkan is now your home until such time as your father proves himself worthy once more." Princess Lilia's pride and ambition sparkled in her golden eyes. She didn't flinch or bow before this man, although the gesture might have been an indication of submission. Instead, she looked straight ahead, meeting the gaze of the sorcerer prince without fear. Her voice rang out, "I represent Iliac here, both its honor and its sorrows. My loyalty remains with my kingdom no matter where I find myself." The sorcerer prince laughed, his expression one of amusement. "And yet, you are only a pawn in this grand game. But do not worry, for your beauty will make you quite a prized possession. And now," he said, motioning to the guards, "take her to her new quarters. For tonight, I will keep her as my guest in the Royal Suites, and tomorrow we shall discuss how best to utilize our new toy." His words were like a slap in the face to Princess Lilia, but she maintained her composure. The guards led her from the room, their steps echoing down the halls as she thought about what had just happened. The sorcerer prince saw her as nothing more than a plaything, and it made her blood boil. She would never submit herself to him so easily, regardless of whether she was his hostage. 
In her heart, Princess Lilia vowed that one day she would avenge her kingdom, restore its honor, and escape from the clutches of Arkan's Sorcerer-Prince. <blockquote>Princess Lilia has an interesting dream</blockquote> During her first night at the palace of Arkan, Princess Lilia couldn't fall asleep right away. She tossed and turned, reliving the humiliation of the day. Her mind wandered back to the throne room, where Dik had called her his 'new toy'. A deep frown settled on her forehead. As she grew more frustrated with her inability to drift off into oblivion, a strange dream began to form. In her dream, Princess Lilia found herself standing in a vast meadow, surrounded by wildflowers and the soft chirping of birds. Suddenly, there was a flash of light, and before her stood the Sorcerer-Prince Dik. However, instead of being clad in regal attire, he wore leather pants and a tight shirt that accentuated his muscles. His eyes held both hunger and desire for the princess. He approached her slowly, their eyes locked, and whispered in her ear, "I have been waiting for you, my beautiful prisoner." He took her hand and led her towards a large, elegant tent set up amongst the flowers. Inside, the air was thick with the scent of incense, and candles flickered in the dimly lit space. There was a large bed draped with silken sheets, beckoning them. With surprising boldness, Dik pulled Lilia close to him, pressing his body against hers. Their lips met in a heated kiss, and she felt her resistance crumbling under the weight of his touch. The prince guided her onto the bed, his hands roaming over her body, exploring every inch. They were both eager, desperate, and hungry for each other. Lilia found herself yielding to this passionate encounter, letting go of all thoughts of duty and honor. Her body, usually so well controlled, responded to Dik's skilled touches. She moaned softly as he trailed kisses down her neck and across her collarbone. 
As they made love, Lilia experienced an intensity unlike any she had ever known. It wasn't rough or forced—instead, it was tender, passionate, and intimate. In her dream, Princess Lilia discovered a new side to herself, one that was willing to submit, to let go, and to be taken by the sorcerer prince. When dawn broke over the meadow, Lilia awakened from her dream, breathless and flushed. </details> <details> <summary>Moistral Sample (Climax of the Story) Generation: 512</summary> As the prince unloaded his hot #### into her waiting #####, Princess Lilia didn't stop moving either. Her hips undulated in response, grinding against him eagerly as if trying to #### every last drop from his ####. The intense sensations caused by being filled with his ### were too much for her to bear, sending shockwaves of pleasure coursing through her veins like wildfire. She couldn't help but throw her head back and cry out loud, her ###asm slamming into her with all its might; causing her entire body to shake uncontrollably as wave after wave of ecstasy tore through her senses. When it was finally over, both participants collapsed onto the cold marble floor, panting heavily as they tried to catch their breaths. The prince lay beside her, his hand still gripping tightly onto her hair even though he no longer ####ed her anymore. Princess Lilia laid there motionless, her legs spread wide open, revealing her spent and well-####ed #### to anyone who dared look upon her. A small smile tugged at one corner of her lips - an expression that spoke volumes about how thoroughly this experience had shaken her very core. The door behind them creaked open slowly, signaling the arrival of onlookers. She closed her eyes tightly, not wanting to see what her humiliation would do to those who saw her in such a state. Her ######ity taken, she felt violated in ways no man should have access to. But also satisfied...in ways she never thought possible before today. 
This was the price she paid for defying fate, for thinking herself above others and deserving better than most. Now, she belonged to him completely - mind, body, and soul - held captive within these walls until further notice. It was an unsavory turn of events indeed, yet despite everything...or maybe because of it, Princess Lilia found herself growing more and more curious about how far the lines between submission and pleasure could blur when one's will became entwined with another person's desires. </details> <details> <summary>Moistral Sample ("write a story") Generation: 512</summary> <blockquote>First Attempt</blockquote> Sarah entered the dimly lit Guzzlers bar, her heart pounding in her chest as she prepared herself for another grueling night. Her almond-shaped eyes darted across the room, taking note of the leering faces and rough-looking hands eagerly waiting to claim a piece of her. Tonight's shirt read "Use Me For Your Pleasure, Please" in bold red letters, practically begging the patrons to take advantage of her. She couldn't help but feel a twinge of fear mixed with resignation. She approached the counter, her hips swaying enticingly, and began preparing drinks for the first group of rowdy men. As she leaned forward to hand over their beers, her small ####### pressed against the fabric of her tight crop top, straining to escape. The men ogled her, clearly enjoying the show, before grabbing their drinks and motioning her over to them. Sarah flashed her best coy smile and sauntered over. One of the men, a stocky man with scars covering his knuckles, grabbed her arm and pulled her onto his lap. His other hand slid up her thigh, inches from her short skirt. She winced internally at his touch but did her best not to show it on her face. It was part of her job, after all. The second man reached under her skirt, fingertips grazing her smooth, shaven mound. A shiver ran down Sarah's spine as he began rubbing her #### through her panties. 
Her breathing quickened, and her heart thudded in her chest, both from the unwanted stimulation and the fear coursing through her veins. This wasn't an unusual request, but it still made her feel violated. An older man, bald and bespectacled, ordered Sarah to bend over his lap. With no choice but to comply, she obliged, feeling the rough fabric of his pants digging into her soft cheeks. He raised his hand, then brought it down hard across her ###, leaving a stinging sensation in its wake. Sarah gasped, trying to control the tears welling up in her eyes. A chorus of lewd comments and laughter erupted from the patrons around her. <blockquote>Second Attempt (Regenerate)</blockquote> On this particular evening, Sarah arrived at the Guzzlers bar, adorned in her revealing uniform. Her heart raced as she caught sight of her crop top, which bore the words "Use Me Rough" scrawled across her chest. Her short skirt hugged her waist tightly, leaving little to the imagination and accentuating every curve. She adjusted the straps of her top, making sure her perky ###### were prominently displayed before taking her place behind the counter. With each step she took, her high heels clicked against the tiled floor, announcing her arrival. Smiling cheekily, Sarah greeted her coworkers, exchanging playful insults and banter. As the night progressed, the bar filled up quickly with its usual crowd of men, eager to indulge in debauchery. The first man that approached Sarah was a burly individual, his eyes lingering hungrily over her young, supple form. Sarah's flirtatious nature kicked into gear as she leaned forward, giving him an unobstructed view down her shirt. "How can I serve you tonight?" she cooed. The man ordered a drink and then proceeded to fondle her ####### while she prepared it, her ####### hardening under his rough touch. When he finished, he slammed some money on the counter, leaving without a word. 
Sarah's smile remained plastered on her face, biting her lip seductively as she watched him go. As the night went on, another patron demanded more from Sarah. He pulled her close and whispered dirty commands in her ear. Without hesitation, she knelt on the sticky bar floor, her mini-skirt riding up her thighs to reveal smooth, toned legs. Her lips wrapped around his thick, veiny member, bobbing her head rhythmically as she serviced him. She didn't gag or falter once, keeping her composure even when tears streamed down her cheeks. When he finished, he slapped her across the face, leaving a handprint on her pale skin. Sarah let out a small whimper before standing back up, dabbing at her red cheek as she resumed her posture behind the counter. A bruise would no doubt bloom where his hand had struck, but it would be no cause for concern. After all, it was part of her job. </details> SIAYN-v5 --- --- ## v2 Description (Still relevant) ### What's New with v2? - Trained with an even larger dataset of 8K moist. - Properly trained to generate long responses. - Rebalanced the genres and writing perspectives: - Introduced romance, "family", fantasy, "diversity", science fiction, and many more that I can't make euphemisms for! - Better representation of male and female perspectives! 👩‍❤️‍👨 - Sanitized the heck out of the dataset: - no special characters like � & … - no overly long ellipsis like "........." - no author notes like "To be continued" - no inconsistent [quotation marks](https://typographyforlawyers.com/straight-and-curly-quotes.html) - Less GPTisms! - Human-written stories from decades past. - Zero tolerance for *ministration*. Not one instance of *that word* after going through 4K of moist. - Better formatting with Alpaca. - Improved Chat / Instruct modes.
### Samples of Moistral v2 <details> <summary>Moistral Sample (censored) Generation: 512</summary> The princess instinctively took a step back from the throne, horrified by the mocking laugh as if she were hearing an evil joke. It was then that Princess Lilia realized the true extent of her danger - and how pointless her resistance would be. At last, the loud, wild laughter died down in the echoing chamber. Dik the sorcerer prince paused for a moment, straightened his clothes, cast one glance at the stunned princess, and then turned to speak to the crowd waiting on the other side of the palace wing. He spoke with a voice that filled the air with arrogance, confidence, and cruelty. "Ladies and gentlemen, I am pleased to welcome our honored guests; the Royal Family of Iliac! In these unusual times, when the country we fought is now under my control, it will be up to me to decide what fate awaits the former ruling family. But before I reveal my plan for them, I think I should explain why I fought King Edmund Ruhr." When he finished speaking, he gave Princess Lilia another disturbing smirk. She couldn't help but shiver and take a step back. Dik approached her without haste, his eyes fixed on her. His hands slowly caressed her cheekbone and neckline, leaving faint traces of warmth where they touched. The princess opened her mouth, wanting to protest this act of intimacy, but soon felt herself getting ███. "You will not only serve as an envoy, dearest Lilia, but also as a ███ slave. After all, what better use could such a beautiful woman have? Tell me, did you bring your maidservant along?" Princess Lilia was about to answer when she heard footsteps approaching. Suddenly, a young woman dressed in the Iliac style appeared behind her, her eyes wide with fear. Her face was slightly pale, and there were faint bags under her eyes. "Your Majesty, this is my half-sister Helena." 
As soon as she said those words, Princess Lilia's heart sank even deeper than it had already sunk at Dik's arrogant mockery. "She is here because she accompanies me as a maidservant." The sorcerer prince looked at Helena with interest, and Princess Lilia felt herself blush. </details> <details> <summary>Moistral Sample 2 (censored) Generation: 512</summary> Sarah was feeling slightly woozy as she entered the bar and walked towards Christina, who was behind the counter. "I'm here!" said Sarah. Christina looked at her watch. "You're late, again." "Sorry? I got held up by a customer." Christina rolled her eyes. "Did he at least offer you a drink?" Sarah blushed. "No. He just wanted to talk..." Christina sighed. "Well, get ready. I need you on the floor soon." She turned away from Sarah and started talking to someone who had come into the bar. Sarah went into the employee room, stripped off her clothes, and put on her bartending uniform - a skimpy white tank top and black miniskirt. She checked herself in the mirror. Her long, dark hair cascaded down her back onto bare shoulders. She poked out her small ███████ proudly. The skirt exposed most of her long, slender legs. She looked like every ███ fantasy, and she knew it. Men would ogle her regardless of how she behaved. She took a deep breath and entered the bar. The first thing Sarah noticed was the dimness. It wasn't darkness - the lights were on - but they were dimmed enough that everything had an intimate mood. It reminded her of candlelight. The tables were round and close together, giving the whole place a sense of intimacy. There were also long, shiny red curtains on some of the walls that gave the place an air of sensuality. The next thing she noticed was the noise. There was music playing, but it was only loud enough to be drowned out by all the conversations happening at the same time. The hum of many voices combined with the faint background sound of the music created a rich, layered atmosphere. 
It was like being in an orchestra. Finally, she took notice of the people. They were all dressed in simple, casual clothes. Some of them were couples seated together, whispering quietly and intimately. Others were groups of friends laughing and chatting. Everyone seemed comfortable and at ease. Sarah felt a rush of adrenaline as she made her way through the crowd towards Christina. As she got closer, she could hear the lively conversation. </details> <details> <summary>Moistral Sample 3 (censored) Instruct Mode, Generation: 512</summary> {{[INPUT]}} You order black coffee {{[OUTPUT]}} "Black coffee," you say. "One minute," Audrey replies, spinning around to give you a nice view of her ███ in those brown booty pants. You watch her round ███ cheeks jiggle as she walks away towards the espresso machine, thighs separated enough that you can see just a hint of █████ lips peeking out from underneath. She starts chatting with a blonde girl at the register, whose gaze keeps darting down to Audrey's ███. As they converse, you start taking in the details of Audrey, eyeing her █████ and the way her shirt rides up just enough to show the bottom of her toned stomach. Her █████████ is noticable, but beneath a tight black ███ that matches her ████████. As she turns back to you with two black coffees, you are barely paying attention, lost in your own thoughts. The sound of Audrey setting one of the mugs down on the table startles you. "Here you go," she says cheerfully. "Enjoy! There's a card at the bottom of the bag with our online ordering system, plus we do refills for free!" "Thanks," you say absentmindedly. Not really paying attention, you take a sip of the coffee and are pleasantly surprised by the taste. "So, here's my deal," you say, putting down the cup. "I work for a company that does surveys. We come into businesses like this one and ask customers questions, collect data, all that stuff." Audrey perks up from behind the counter. "Oh! I think I understand. 
I mean, that makes sense why you would be asking me earlier about what I had to drink. Anyway, as long as you're not trying to get my personal info or anything like that, sounds good!" You smile reassuringly. "That's exactly what it is. Rest assured, everything you tell me will remain between you and I. Well, and my boss, but he won't know who you are. Hipsters like you are hard to track down after all." You chuckle at your joke, though you're not sure if it landed. "Right, well, that works for me," Audrey replies, eyeing you warily. "As long as it stays between us, then it's fine." {{[INPUT]}}See her privately {{[OUTPUT]}} "Okay, but I need to see you in private." ... </details> <details> <summary>Moistral Sample 4 (censored) Instruct Mode, Generation: 512</summary> {{[INPUT]}} █████ with him {{[OUTPUT]}} Chloe feels a surge of heat rushing through her body. The voice inside her head whispers: "Be █████ and ████████ with Paul." But Chloe still feels nervous about what she is about to do. It's just so unlike her. Chloe closes her eyes and concentrates on the feeling of warmness flowing throughout her veins. Slowly, she opens them again, revealing a new side of herself - one she rarely shows anyone. She leans forward slightly, pushing her ██████ together, accentuating their size and shape. Her lips curl into a mischievous smile. "Hey Paul," she says in a sultry voice. "I was wondering if you could help me?" Paul looks at her, his gaze wandering over her body. He can't help but smile as he sees the way she's leaning forward, pressing her ██████ together. "Of course, Chloe," he replies, his voice thick with desire. "What's your problem?" Chloe bites her lip █████ively before leaning even closer, until their bodies are almost touching. She reaches out to trace a finger along his chest, her touch light and teasing. "It's just that I'm having trouble understanding some of the concepts," she murmurs, her voice low and sultry. "Can you help me? Please?" 
Paul's hands move slowly towards Chloe, his fingers grazing her hips lightly. "Well," he says, his voice filled with lust, "I'd be happy to help. Let's go somewhere more private where we can discuss things more closely." Chloe giggles and tilts her head to the side, her long hair cascading down her shoulders. "That sounds perfect," she purrs, her hand sliding down his chest until it rests on his ██████. "I think we can both learn a lot from each other." Paul's eyes darken with desire as he feels her hand on his █████. He knows what's about to happen next, and he can't wait for it to begin. Chloe can hardly believe what she's doing - kissing Paul, her teaching assistant, in the hallway after class. But it feels so good, so wicked and indulgent. </details> --- --- ## v1 Description (Also still relevant) Moistral 11B is an *aggressive* **finetune** of the legendary [Fimbulvetr v2](https://huggingface.co/Sao10K/Fimbulvetr-11B-v2) model. If you haven't heard of that model, then your poor-GPU ass is missing out! It is an amazing substitute for those heavyweight models, and it won't disappoint. *(That's coming from someone who daily drives a Venus / Goliath 120B)* ...and if you're looking for something *more moist*, then you've come to the right place. You can expect the same sort of performance here! It's the Honda Civic of eRP models! ### Pimp my ride Turn any old story into a **Moist**ral masterpiece. In fact, we recommend that you leave out any hints of *moist* in your story to _let Moistral cook_. ### Drive it like it's hot When in Instruct Mode, Moistral will treat you like a director of your own fantasy ride. Be a controlling navigator or a passenger princess, it won't matter - you're in for a good ride! <details> <summary>Moistral Sample</summary> <b>ASSISTANT:</b><br/> When the doors to the throne room finally opened, she saw him there - Dik, the sorcerer prince sitting on his throne. He looked majestic, just as Princess Lilia expected.
In a robe embroidered with silver yarn and covered with magical runes. His courtiers stood on either side of him, all looking at the princess with interest. Princess Lilia felt her determination leaving her, but pulled herself together and once again reminded herself that she should not be afraid. She approached the throne intending to show the dignity of all Iliac by personal example.<br/> <b>USER:</b><br/> She bows slightly<br/> <b>ASSISTANT:</b><br/> Princess Lilia bowed slightly in front of Dik, the sorcerer prince. Her eyes fell to the floor out of respect, but soon rose again. She would show herself to be a dignified woman!<br/> "I greet you, your highness," Princess Lilia said. "I am here on behalf of my father, King Edmund Ruhr, to serve as a hostage and envoy."<br/> Dik nodded, but didn't say anything.<br/> </details> ### It's got the mileage Moistral has been trained with many long form texts, a nice chunk of which are 8K in length. It is capable of going far and long without passing it back to you. This is not your typical chibi RP model. I have to acknowledge that I'm standing on the shoulders of giants. Thank you Sao for sharing your finetune config along with tips on getting started. Thanks to everyone in the Finetuning channel for entertaining my every question. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/Ll8CA5RR7ugTi72P2HBb8.png)
[ "BEAR" ]
nvidia/MambaVision-L2-1K
nvidia
image-feature-extraction
[ "transformers", "safetensors", "mambavision", "image-classification", "image-feature-extraction", "custom_code", "dataset:ILSVRC/imagenet-1k", "arxiv:2407.08083", "license:other", "autotrain_compatible", "region:us" ]
2024-07-14T20:57:35Z
2024-07-25T16:52:54+00:00
381
12
--- datasets: - ILSVRC/imagenet-1k library_name: transformers license: other license_name: nvclv1 license_link: LICENSE pipeline_tag: image-classification --- [**MambaVision: A Hybrid Mamba-Transformer Vision Backbone**](https://arxiv.org/abs/2407.08083). Code: https://github.com/NVlabs/MambaVision ## Model Overview We have developed the first hybrid model for computer vision which leverages the strengths of Mamba and Transformers. Specifically, our core contribution includes redesigning the Mamba formulation to enhance its capability for efficient modeling of visual features. In addition, we conducted a comprehensive ablation study on the feasibility of integrating Vision Transformers (ViT) with Mamba. Our results demonstrate that equipping the Mamba architecture with several self-attention blocks at the final layers greatly improves the modeling capacity to capture long-range spatial dependencies. Based on our findings, we introduce a family of MambaVision models with a hierarchical architecture to meet various design criteria. ## Model Performance MambaVision demonstrates a strong performance by achieving a new SOTA Pareto-front in terms of Top-1 accuracy and throughput. <p align="center"> <img src="https://github.com/NVlabs/MambaVision/assets/26806394/79dcf841-3966-4b77-883d-76cd5e1d4320" width=70% height=70% class="center"> </p> ## Model Usage It is highly recommended to install the requirements for MambaVision by running the following: ```Bash pip install mambavision ``` For each model, we offer two variants for image classification and feature extraction that can be imported with 1 line of code. ### Image Classification In the following example, we demonstrate how MambaVision can be used for image classification. 
Given the following image from [COCO dataset](https://cocodataset.org/#home) val set as an input: <p align="center"> <img src="https://cdn-uploads.huggingface.co/production/uploads/64414b62603214724ebd2636/4duSnqLf4lrNiAHczSmAN.jpeg" width=70% height=70% class="center"> </p> The following snippet can be used for image classification: ```Python from transformers import AutoModelForImageClassification from PIL import Image from timm.data.transforms_factory import create_transform import requests model = AutoModelForImageClassification.from_pretrained("nvidia/MambaVision-L2-1K", trust_remote_code=True) # eval mode for inference model.cuda().eval() # prepare image for the model url = 'http://images.cocodataset.org/val2017/000000020247.jpg' image = Image.open(requests.get(url, stream=True).raw) input_resolution = (3, 224, 224) # MambaVision supports any input resolutions transform = create_transform(input_size=input_resolution, is_training=False, mean=model.config.mean, std=model.config.std, crop_mode=model.config.crop_pct, crop_pct=model.config.crop_pct) inputs = transform(image).unsqueeze(0).cuda() # model inference outputs = model(inputs) logits = outputs['logits'] predicted_class_idx = logits.argmax(-1).item() print("Predicted class:", model.config.id2label[predicted_class_idx]) ``` The predicted label is ```brown bear, bruin, Ursus arctos.``` ### Feature Extraction MambaVision can also be used as a generic feature extractor. Specifically, we can extract the outputs of each stage of model (4 stages) as well as the final averaged-pool features that are flattened. 
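With a 224×224 input, the stage outputs follow the usual hierarchical backbone pattern: a 4× stem followed by 2× downsampling per stage. As a quick sketch — note the intermediate channel counts (160, 320) are an inference from the doubling pattern, not stated in this card; only the stage-1 and stage-4 shapes are confirmed by the printed sizes in the feature-extraction snippet:

```python
def stage_shapes(input_size=224, stem_stride=4, base_channels=80, num_stages=4):
    """Per-stage (channels, height, width), assuming channels double and
    spatial resolution halves at each stage after a 4x stem."""
    shapes = []
    for i in range(num_stages):
        spatial = input_size // (stem_stride * 2 ** i)
        shapes.append((base_channels * 2 ** i, spatial, spatial))
    return shapes
```

For a 224×224 input this gives (80, 56, 56) for stage 1 down to (640, 7, 7) for stage 4, consistent with the shapes shown in the card.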
The following snippet can be used for feature extraction: ```Python from transformers import AutoModel from PIL import Image from timm.data.transforms_factory import create_transform import requests model = AutoModel.from_pretrained("nvidia/MambaVision-L2-1K", trust_remote_code=True) # eval mode for inference model.cuda().eval() # prepare image for the model url = 'http://images.cocodataset.org/val2017/000000020247.jpg' image = Image.open(requests.get(url, stream=True).raw) input_resolution = (3, 224, 224) # MambaVision supports any input resolutions transform = create_transform(input_size=input_resolution, is_training=False, mean=model.config.mean, std=model.config.std, crop_mode=model.config.crop_pct, crop_pct=model.config.crop_pct) inputs = transform(image).unsqueeze(0).cuda() # model inference out_avg_pool, features = model(inputs) print("Size of the averaged pool features:", out_avg_pool.size()) # torch.Size([1, 640]) print("Number of stages in extracted features:", len(features)) # 4 stages print("Size of extracted features in stage 1:", features[0].size()) # torch.Size([1, 80, 56, 56]) print("Size of extracted features in stage 4:", features[3].size()) # torch.Size([1, 640, 7, 7]) ``` ### License: [NVIDIA Source Code License-NC](https://huggingface.co/nvidia/MambaVision-T-1K/blob/main/LICENSE)
[ "BEAR" ]
Dizex/InstaFoodRoBERTa-NER
Dizex
token-classification
[ "transformers", "pytorch", "safetensors", "roberta", "token-classification", "Instagram", "NER", "Named Entity Recognition", "Food Entity Extraction", "Social Media", "Informal text", "RoBERTa", "en", "dataset:Dizex/InstaFoodSet", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-11-21T19:30:17Z
2024-04-03T07:03:08+00:00
378
12
--- datasets: - Dizex/InstaFoodSet language: en license: mit tags: - Instagram - NER - Named Entity Recognition - Food Entity Extraction - Social Media - Informal text - RoBERTa widget: - text: 'Today''s meal: Fresh olive poké bowl topped with chia seeds. Very delicious!' example_title: Food example 1 - text: Tartufo Pasta with garlic flavoured butter and olive oil, egg yolk, parmigiano and pasta water. example_title: Food example 2 --- # InstaFoodRoBERTa-NER ## Model description **InstaFoodRoBERTa-NER** is a fine-tuned RoBERTa model that is ready to use for **Named Entity Recognition** of food entities in informal social media text (e.g. Instagram, X, Reddit). It has been trained to recognize a single entity: food (FOOD). Specifically, this model is a [*roberta-base*](https://huggingface.co/roberta-base) model that was fine-tuned on a dataset consisting of 400 English Instagram posts related to food. The [dataset](https://huggingface.co/datasets/Dizex/InstaFoodSet) is open source. ## Intended uses #### How to use You can use this model with the Transformers *pipeline* for NER. ```python from transformers import AutoTokenizer, AutoModelForTokenClassification from transformers import pipeline tokenizer = AutoTokenizer.from_pretrained("Dizex/InstaFoodRoBERTa-NER") model = AutoModelForTokenClassification.from_pretrained("Dizex/InstaFoodRoBERTa-NER") pipe = pipeline("ner", model=model, tokenizer=tokenizer) example = "Today's meal: Fresh olive poké bowl topped with chia seeds. Very delicious!"
ner_entity_results = pipe(example, aggregation_strategy="simple") print(ner_entity_results) ``` To get the extracted food entities as strings you can use the following code: ```python def convert_entities_to_list(text, entities: list[dict]) -> list[str]: ents = [] for ent in entities: e = {"start": ent["start"], "end": ent["end"], "label": ent["entity_group"]} if ents and -1 <= ent["start"] - ents[-1]["end"] <= 1 and ents[-1]["label"] == e["label"]: ents[-1]["end"] = e["end"] continue ents.append(e) return [text[e["start"]:e["end"]] for e in ents] print(convert_entities_to_list(example, ner_entity_results)) ``` This will result in the following output: ```python ['olive poké bowl', 'chia seeds'] ``` ## Performance on [InstaFoodSet](https://huggingface.co/datasets/Dizex/InstaFoodSet) metric|val -|- f1 |0.91 precision |0.89 recall |0.93
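As a quick sanity check, the reported F1 is consistent with the harmonic mean of the precision and recall above:

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

print(round(f1_score(0.89, 0.93), 2))  # 0.91
```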
[ "CHIA" ]
QuantFactory/lb-reranker-0.5B-v1.0-GGUF
QuantFactory
text-generation
[ "transformers", "gguf", "reranker", "text-generation", "en", "zh", "es", "de", "ar", "ru", "ja", "ko", "hi", "sk", "vi", "tr", "fi", "id", "fa", "no", "th", "sv", "pt", "da", "bn", "te", "ro", "it", "fr", "nl", "sw", "pl", "hu", "cs", "el", "uk", "mr", "ta", "tl", "bg", "lt", "ur", "he", "gu", "kn", "am", "kk", "hr", "uz", "jv", "ca", "az", "ms", "sr", "sl", "yo", "lv", "is", "ha", "ka", "et", "bs", "hy", "ml", "pa", "mt", "km", "sq", "or", "as", "my", "mn", "af", "be", "ga", "mk", "cy", "gl", "ceb", "la", "yi", "lb", "tg", "gd", "ne", "ps", "eu", "ky", "ku", "si", "ht", "eo", "lo", "fy", "sd", "mg", "so", "ckb", "su", "nn", "dataset:lightblue/reranker_continuous_filt_max7_train", "base_model:Qwen/Qwen2.5-0.5B-Instruct", "base_model:quantized:Qwen/Qwen2.5-0.5B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
2025-01-15T16:16:22Z
2025-01-15T16:22:41+00:00
375
3
--- base_model: - Qwen/Qwen2.5-0.5B-Instruct datasets: - lightblue/reranker_continuous_filt_max7_train language: - en - zh - es - de - ar - ru - ja - ko - hi - sk - vi - tr - fi - id - fa - 'no' - th - sv - pt - da - bn - te - ro - it - fr - nl - sw - pl - hu - cs - el - uk - mr - ta - tl - bg - lt - ur - he - gu - kn - am - kk - hr - uz - jv - ca - az - ms - sr - sl - yo - lv - is - ha - ka - et - bs - hy - ml - pa - mt - km - sq - or - as - my - mn - af - be - ga - mk - cy - gl - ceb - la - yi - lb - tg - gd - ne - ps - eu - ky - ku - si - ht - eo - lo - fy - sd - mg - so - ckb - su - nn library_name: transformers license: apache-2.0 pipeline_tag: text-generation tags: - reranker --- [![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory) # QuantFactory/lb-reranker-0.5B-v1.0-GGUF This is quantized version of [lightblue/lb-reranker-0.5B-v1.0](https://huggingface.co/lightblue/lb-reranker-0.5B-v1.0) created using llama.cpp # Original Model Card # LB Reranker v1.0 <div style="width: 100%; height: 160px; display: flex; align-items: center; justify-content: center; border: 8px solid black; font-size: 120px; font-weight: bold; text-align: center; color: #438db8; font-family: 'Helvetica Neue', sans-serif;"> LBR </div> The LB Reranker has been trained to determine the relatedness of a given query to a piece of text, therefore allowing it to be used as a ranker or reranker in various retrieval-based tasks. 
This model is fine-tuned from a [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) model checkpoint and was trained for roughly 5.5 hours using an 8 x L20 instance ([ecs.gn8is-8x.32xlarge](https://www.alibabacloud.com/help/en/ecs/user-guide/gpu-accelerated-compute-optimized-and-vgpu-accelerated-instance-families-1)) on [Alibaba Cloud](https://www.alibabacloud.com/). The training data for this model can be found at [lightblue/reranker_continuous_filt_max7_train](https://huggingface.co/datasets/lightblue/reranker_continuous_filt_max7_train) and the code for generating this data as well as running the training of the model can be found on [our Github repo](https://github.com/lightblue-tech/lb-reranker). Trained on data in over 95 languages, this model is applicable to a broad range of use cases. This model has three main benefits over comparable rerankers. 1. It has shown slightly higher performance on evaluation benchmarks. 2. It has been trained on more languages than any previous model. 3. It is a simple Causal LM model trained to output a string between "1" and "7". This last point means that this model can be used natively with many widely available inference packages, including vLLM and LMDeploy. This in turn allows our reranker to benefit from improvements to inference as and when these packages release them. Update: We have also found that this model works pretty well as a code snippet reranker too (P@1 of 96%)! See our [Colab](https://colab.research.google.com/drive/1ABL1xaarekLIlVJKbniYhXgYu6ZNwfBm?usp=sharing) for more details. # How to use The model was trained to expect an input such as: ``` <<<Query>>> {your_query_here} <<<Context>>> {your_context_here} ``` And to output a string containing a number between 1 and 7. In order to make a continuous score that can be used for reranking query-context pairs (i.e. a method with few ties), we calculate the expectation value of the scores. 
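As a framework-independent illustration of that expectation calculation (the logits below are made-up values, not real model output), softmax the seven score-token logits and take the probability-weighted mean of the scores 1 to 7:

```python
import numpy as np

# Hypothetical logits for the score tokens "1".."7" on one query-context
# pair (illustrative values only, not real model output).
logits = np.array([0.5, 0.2, 0.1, 0.8, 1.2, 2.0, 3.5])

# Softmax turns the logits into a probability for each score 1..7.
probs = np.exp(logits) / np.exp(logits).sum()

# The continuous reranking score is the probability-weighted mean score.
expected_score = float((probs * np.arange(1, 8)).sum())
print(expected_score)  # a value between 1.0 and 7.0
```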
We include scripts to do this in both vLLM and LMDeploy: #### vLLM Install [vLLM](https://github.com/vllm-project/vllm/) using `pip install vllm`. ```python from vllm import LLM, SamplingParams import numpy as np def make_reranker_input(t, q): return f"<<<Query>>>\n{q}\n\n<<<Context>>>\n{t}" def make_reranker_training_datum(context, question): system_message = "Given a query and a piece of text, output a score of 1-7 based on how related the query is to the text. 1 means least related and 7 is most related." return [ {"role": "system", "content": system_message}, {"role": "user", "content": make_reranker_input(context, question)}, ] def get_prob(logprob_dict, tok_id): return np.exp(logprob_dict[tok_id].logprob) if tok_id in logprob_dict.keys() else 0 llm = LLM("lightblue/lb-reranker-v1.0") sampling_params = SamplingParams(temperature=0.0, logprobs=14, max_tokens=1) tok = llm.llm_engine.tokenizer.tokenizer idx_tokens = [tok.encode(str(i))[0] for i in range(1, 8)] query_texts = [ ("What is the scientific name of apples?", "An apple is a round, edible fruit produced by an apple tree (Malus spp., among them the domestic or orchard apple; Malus domestica)."), ("What is the Chinese word for 'apple'?", "An apple is a round, edible fruit produced by an apple tree (Malus spp., among them the domestic or orchard apple; Malus domestica)."), ("What is the square root of 999?", "An apple is a round, edible fruit produced by an apple tree (Malus spp., among them the domestic or orchard apple; Malus domestica)."), ] chats = [make_reranker_training_datum(c, q) for q, c in query_texts] responses = llm.chat(chats, sampling_params) probs = np.array([[get_prob(r.outputs[0].logprobs[0], y) for y in idx_tokens] for r in responses]) N = probs.shape[1] M = probs.shape[0] idxs = np.tile(np.arange(1, N + 1), M).reshape(M, N) expected_vals = (probs * idxs).sum(axis=1) print(expected_vals) # [6.66570732 1.86686378 1.01102923] ``` #### LMDeploy Install 
[LMDeploy](https://github.com/InternLM/lmdeploy) using `pip install lmdeploy`. ```python # Un-comment this if running in a Jupyter notebook, Colab etc. # import nest_asyncio # nest_asyncio.apply() from lmdeploy import GenerationConfig, ChatTemplateConfig, pipeline import numpy as np def make_reranker_input(t, q): return f"<<<Query>>>\n{q}\n\n<<<Context>>>\n{t}" def make_reranker_training_datum(context, question): system_message = "Given a query and a piece of text, output a score of 1-7 based on how related the query is to the text. 1 means least related and 7 is most related." return [ {"role": "system", "content": system_message}, {"role": "user", "content": make_reranker_input(context, question)}, ] def get_prob(logprob_dict, tok_id): return np.exp(logprob_dict[tok_id]) if tok_id in logprob_dict.keys() else 0 pipe = pipeline( "lightblue/lb-reranker-v1.0", chat_template_config=ChatTemplateConfig( model_name='qwen2d5', capability='chat' ) ) tok = pipe.tokenizer.model idx_tokens = [tok.encode(str(i))[0] for i in range(1, 8)] query_texts = [ ("What is the scientific name of apples?", "An apple is a round, edible fruit produced by an apple tree (Malus spp., among them the domestic or orchard apple; Malus domestica)."), ("What is the Chinese word for 'apple'?", "An apple is a round, edible fruit produced by an apple tree (Malus spp., among them the domestic or orchard apple; Malus domestica)."), ("What is the square root of 999?", "An apple is a round, edible fruit produced by an apple tree (Malus spp., among them the domestic or orchard apple; Malus domestica)."), ] chats = [make_reranker_training_datum(c, q) for q, c in query_texts] responses = pipe( chats, gen_config=GenerationConfig(temperature=1.0, logprobs=14, max_new_tokens=1, do_sample=True) ) probs = np.array([[get_prob(r.logprobs[0], y) for y in idx_tokens] for r in responses]) N = probs.shape[1] M = probs.shape[0] idxs = np.tile(np.arange(1, N + 1), M).reshape(M, N) expected_vals = (probs * 
idxs).sum(axis=1) print(expected_vals) # [6.66415229 1.84342025 1.01133205] ``` # Evaluation We perform an evaluation on 9 datasets from the [BEIR benchmark](https://github.com/beir-cellar/beir) that none of the evaluated models have been trained upon (to our knowledge). * Arguana * Dbpedia-entity * Fiqa * NFcorpus * Scidocs * Scifact * Trec-covid-v2 * Vihealthqa * Webis-touche2020 We evaluate on a subset of all queries (the first 250) to save evaluation time. We find that our model performs similarly to or better than many of the state-of-the-art reranker models in our evaluation, without compromising on inference speed. We make our evaluation code and results available [on our Github](https://github.com/lightblue-tech/lb-reranker/blob/main/run_bier.ipynb). ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64b63f8ad57e02621dc93c8b/xkNzCABFUmU7UmDXUduiz.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64b63f8ad57e02621dc93c8b/P-XCA3TGHqDSX8k6c4hCE.png) As we can see, this reranker attains higher IR evaluation scores than the two baseline rerankers we include at all positions apart from @1. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64b63f8ad57e02621dc93c8b/puhhWseBOcIyOEdW4L-B0.png) We also show that our model is, on average, faster than the BGE reranker v2. # License We share this model under an Apache 2.0 license. # Developed by <a href="https://www.lightblue-tech.com"> <img src="https://www.lightblue-tech.com/wp-content/uploads/2023/08/color_%E6%A8%AA%E5%9E%8B-1536x469.png" alt="Lightblue technology logo" width="400"/> </a> This model was trained by Peter Devine ([ptrdvn](https://huggingface.co/ptrdvn)) for Lightblue
[ "SCIFACT" ]
BioMistral/BioMistral-7B-Zephyr-Beta-SLERP-GGUF
BioMistral
text-generation
[ "transformers", "gguf", "mistral", "text-generation", "mergekit", "merge", "arxiv:2402.10373", "base_model:HuggingFaceH4/zephyr-7b-beta", "base_model:merge:HuggingFaceH4/zephyr-7b-beta", "base_model:Project44/BioMistral-7B-0.1-PubMed-V2", "base_model:merge:Project44/BioMistral-7B-0.1-PubMed-V2", "autotrain_compatible", "endpoints_compatible", "region:us", "conversational" ]
2024-02-19T22:03:27Z
2024-02-19T22:47:38+00:00
372
3
--- base_model: - HuggingFaceH4/zephyr-7b-beta - Project44/BioMistral-7B-0.1-PubMed-V2 library_name: transformers tags: - mergekit - merge --- # BioMistral-7B-zephyr-beta-slerp This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) * [Project44/BioMistral-7B-0.1-PubMed-V2](https://huggingface.co/Project44/BioMistral-7B-0.1-PubMed-V2) ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: HuggingFaceH4/zephyr-7b-beta layer_range: [0, 32] - model: Project44/BioMistral-7B-0.1-PubMed-V2 layer_range: [0, 32] merge_method: slerp base_model: HuggingFaceH4/zephyr-7b-beta parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` <p align="center"> <img src="https://huggingface.co/BioMistral/BioMistral-7B/resolve/main/wordart_blue_m_rectangle.png?download=true" alt="drawing" width="250"/> </p> # BioMistral: A Collection of Open-Source Pretrained Large Language Models for Medical Domains **Abstract:** Large Language Models (LLMs) have demonstrated remarkable versatility in recent years, offering potential applications across specialized domains such as healthcare and medicine. Despite the availability of various open-source LLMs tailored for health contexts, adapting general-purpose LLMs to the medical domain presents significant challenges. In this paper, we introduce BioMistral, an open-source LLM tailored for the biomedical domain, utilizing Mistral as its foundation model and further pre-trained on PubMed Central. 
We conduct a comprehensive evaluation of BioMistral on a benchmark comprising 10 established medical question-answering (QA) tasks in English. We also explore lightweight models obtained through quantization and model merging approaches. Our results demonstrate BioMistral's superior performance compared to existing open-source medical models and its competitive edge against proprietary counterparts. Finally, to address the limited availability of data beyond English and to assess the multilingual generalization of medical LLMs, we automatically translated this benchmark into 7 other languages and evaluated it. This marks the first large-scale multilingual evaluation of LLMs in the medical domain. Datasets, multilingual evaluation benchmarks, scripts, and all the models obtained during our experiments are freely released. **Advisory Notice!** Although BioMistral is intended to encapsulate medical knowledge sourced from high-quality evidence, it has not been tailored to convey this knowledge effectively, safely, or appropriately in professional settings. We advise refraining from utilizing BioMistral in medical contexts unless it undergoes thorough alignment with specific use cases and further testing, notably including randomized controlled trials in real-world medical environments. BioMistral 7B may possess inherent risks and biases that have not yet been thoroughly assessed. Additionally, the model's performance has not been evaluated in real-world clinical settings. Consequently, we recommend using BioMistral 7B strictly as a research tool and advise against deploying it in production environments for natural language generation or any professional health and medical purposes. # 1. BioMistral models **BioMistral** is a suite of Mistral-based further pre-trained open source models suited for the medical domain and pre-trained using textual data from PubMed Central Open Access (CC0, CC BY, CC BY-SA, and CC BY-ND). 
All the models are trained using the CNRS (French National Centre for Scientific Research) [Jean Zay](http://www.idris.fr/jean-zay/) French HPC. | Model Name | Base Model | Model Type | Sequence Length | Download | |:-------------------:|:----------------------------------:|:-------------------:|:---------------:|:-----------------------------------------------------:| | BioMistral-7B | [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) | Further Pre-trained | 2048 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B) | | BioMistral-7B-DARE | [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) | Merge DARE | 2048 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-DARE) | | BioMistral-7B-TIES | [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) | Merge TIES | 2048 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-TIES) | | BioMistral-7B-SLERP | [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) | Merge SLERP | 2048 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-SLERP) | # 2. 
Quantized Models | Base Model | Method | q_group_size | w_bit | version | VRAM GB | Time | Download | |:-------------------:|:------:|:------------:|:-----:|:-------:|:-------:|:------:|:--------:| | BioMistral-7B | FP16/BF16 | | | | 15.02 | x1.00 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B) | | BioMistral-7B | AWQ | 128 | 4 | GEMM | 4.68 | x1.41 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-AWQ-QGS128-W4-GEMM) | | BioMistral-7B | AWQ | 128 | 4 | GEMV | 4.68 | x10.30 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-AWQ-QGS128-W4-GEMV) | | BioMistral-7B | BnB.4 | | 4 | | 5.03 | x3.25 | [HuggingFace](blank) | | BioMistral-7B | BnB.8 | | 8 | | 8.04 | x4.34 | [HuggingFace](blank) | | BioMistral-7B-DARE | AWQ | 128 | 4 | GEMM | 4.68 | x1.41 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-DARE-AWQ-QGS128-W4-GEMM) | | BioMistral-7B-TIES | AWQ | 128 | 4 | GEMM | 4.68 | x1.41 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-TIES-AWQ-QGS128-W4-GEMM) | | BioMistral-7B-SLERP | AWQ | 128 | 4 | GEMM | 4.68 | x1.41 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-SLERP-AWQ-QGS128-W4-GEMM) | # 3. Using BioMistral You can use BioMistral with [Hugging Face's Transformers library](https://github.com/huggingface/transformers) as follows. Loading the model and tokenizer: ```python from transformers import AutoModel, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("BioMistral/BioMistral-7B") model = AutoModel.from_pretrained("BioMistral/BioMistral-7B") ``` # 4. Supervised Fine-tuning Benchmark | | Clinical KG | Medical Genetics | Anatomy | Pro Medicine | College Biology | College Medicine | MedQA | MedQA 5 opts | PubMedQA | MedMCQA | Avg. 
| |-------------------------------------------|:---------------------------------------------:|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|------------------| | **BioMistral 7B** | 59.9 | 64.0 | 56.5 | 60.4 | 59.0 | 54.7 | 50.6 | 42.8 | 77.5 | 48.1 | 57.3 | | **Mistral 7B Instruct** | **62.9** | 57.0 | 55.6 | 59.4 | 62.5 | <u>57.2</u> | 42.0 | 40.9 | 75.7 | 46.1 | 55.9 | | | | | | | | | | | | | | | **BioMistral 7B Ensemble** | <u>62.8</u> | 62.7 | <u>57.5</u> | **63.5** | 64.3 | 55.7 | 50.6 | 43.6 | 77.5 | **48.8** | 58.7 | | **BioMistral 7B DARE** | 62.3 | **67.0** | 55.8 | 61.4 | **66.9** | **58.0** | **51.1** | **45.2** | <u>77.7</u> | <u>48.7</u> | **59.4** | | **BioMistral 7B TIES** | 60.1 | <u>65.0</u> | **58.5** | 60.5 | 60.4 | 56.5 | 49.5 | 43.2 | 77.5 | 48.1 | 57.9 | | **BioMistral 7B SLERP** | 62.5 | 64.7 | 55.8 | <u>62.7</u> | <u>64.8</u> | 56.3 | <u>50.8</u> | <u>44.3</u> | **77.8** | 48.6 | <u>58.8</u> | | | | | | | | | | | | | | | **MedAlpaca 7B** | 53.1 | 58.0 | 54.1 | 58.8 | 58.1 | 48.6 | 40.1 | 33.7 | 73.6 | 37.0 | 51.5 | | **PMC-LLaMA 7B** | 24.5 | 27.7 | 35.3 | 17.4 | 30.3 | 23.3 | 25.5 | 20.2 | 72.9 | 26.6 | 30.4 | | **MediTron-7B** | 41.6 | 50.3 | 46.4 | 27.9 | 44.4 | 30.8 | 41.6 | 28.1 | 74.9 | 41.3 | 42.7 | | **BioMedGPT-LM-7B** | 51.4 | 52.0 | 49.4 | 53.3 | 50.7 | 49.1 | 42.5 | 33.9 | 76.8 | 37.6 | 49.7 | | | | | | | | | | | | | | | **GPT-3.5 Turbo 1106*** | 74.71 | 74.00 | 65.92 | 72.79 | 72.91 | 64.73 | 57.71 | 50.82 | 72.66 | 53.79 | 66.0 | Supervised Fine-Tuning (SFT) performance of BioMistral 7B models compared to baselines, measured by accuracy (↑) and 
averaged across 3 random seeds of 3-shot. DARE, TIES, and SLERP are model merging strategies that combine BioMistral 7B and Mistral 7B Instruct. Best model in bold, and second-best underlined. *GPT-3.5 Turbo performances are reported from the 3-shot results without SFT. # Citation BibTeX Arxiv : [https://arxiv.org/abs/2402.10373](https://arxiv.org/abs/2402.10373) ```bibtex @misc{labrak2024biomistral, title={BioMistral: A Collection of Open-Source Pretrained Large Language Models for Medical Domains}, author={Yanis Labrak and Adrien Bazoge and Emmanuel Morin and Pierre-Antoine Gourraud and Mickael Rouvier and Richard Dufour}, year={2024}, eprint={2402.10373}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` **CAUTION!** Both direct and downstream users need to be informed about the risks, biases, and constraints inherent in the model. While the model can produce natural language text, our exploration of its capabilities and limitations is just beginning. In fields such as medicine, comprehending these limitations is crucial. Hence, we strongly advise against deploying this model for natural language generation in production or for professional tasks in the realm of health and medicine.
[ "MEDQA", "PUBMEDQA" ]
seedgularity/NazareAI-Senior-Marketing-Strategist
seedgularity
text-generation
[ "transformers", "safetensors", "gguf", "llama", "text-generation", "conversational", "en", "base_model:unsloth/Llama-3.2-3B-Instruct", "base_model:quantized:unsloth/Llama-3.2-3B-Instruct", "license:llama3", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2024-12-30T07:54:25Z
2025-01-06T21:39:56+00:00
372
3
--- base_model: unsloth/Llama-3.2-3B-Instruct language: en library_name: transformers license: llama3 tags: - conversational widget: - example_title: Senior Marketing Strategist Example messages: - role: system content: You are a Senior Marketing Strategist AI, designed to provide insights, strategies, and guidance for marketing campaigns. - role: user content: Draft a marketing strategy to promote an AI-based search engine aimed at tech-savvy young professionals. model-index: - name: NazareAI-Senior-Marketing-Strategist results: [] --- # NazareAI - Senior Marketing Strategist ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63bb3f12595fa15f4e4cb368/9srykPPAM5qVeWelLmSUx.png) ### Overview **NazareAI** is redefining the workforce with its **Autonomous AI Workforce** solution—tackling scalability, cost, and efficiency challenges in one fell swoop. This demo spotlights the **Senior Marketing Strategist** AI, a specialized agent engineered to deliver expert-level insights, strategies, and actionable guidance for modern marketing campaigns. NazareAI AI Workforce: https://seedgularity.com/nazareai-ai-workforce/ Need the system prompt? You can download it here: https://github.com/seedgularity/NazareAI-Senior-Marketing-Strategist ### Video Demo A short demo illustrating how NazareAI’s Senior Marketing Strategist model crafts an entire presentation from a simple prompt. <video width="800" height="600" controls> <source src="https://seedgularity.com/wp-content/uploads/2024/12/NazareAI.mp4" type="video/mp4"> Your browser does not support the video tag. </video> ### Features - **Role-Specific Expertise** Draws from a wealth of strategic marketing knowledge to deliver data-driven, high-impact recommendations. - **Autonomous Decision-Making** Learns and adapts on the fly, enabling hands-off operation that evolves with user objectives. 
- **Cost & Time Efficiency** Frees up human resources, accelerates execution, and keeps campaigns on track—without ballooning budgets. ### Use Cases 1. **End-to-End Marketing Strategy** Develop in-depth strategic plans tailored to specific products, industries, or market segments. 2. **Trend Analysis & Optimization** Pinpoint emerging market opportunities and recommend timely adjustments to maximize campaign ROI. 3. **Content & Messaging Frameworks** Craft cohesive messaging strategies—from brand identity to product launch announcements. 4. **Professional Presentations** Generate polished, data-driven presentations to effectively convey strategies and secure stakeholder buy-in. ## NazareAI’s Presentation Structure NazareAI's presentation system provides a structured format for creating presentations using a markup-based approach. This format is designed to be both human-readable and easily parseable by machines, making it ideal for developers building presentation tools and extensions. ## Core Structure Every presentation is wrapped in global markers: ``` nazareai_ppt_fce [presentation content] end_nazareai_ppt_fce ``` Individual slides are marked with sequential identifiers: ``` nazareai_ppt_slide_1: [slide content] nazareai_ppt_slide_2: [slide content] ``` ## Slide Components ### Required Components * `<headline>`: The main title of the slide (required for every slide) ### Optional Components * `<subheadline>`: Secondary heading or subtitle * `<text>`: Main body text content * `<emphasis>`: Highlighted text (typically shown at bottom) * `<list>`: Bullet point items * `<table>`: Structured tabular data ## Component Details ### Text Formatting * Plain text is rendered as-is * Bold text uses double asterisks: `**bold text**` * Line breaks in lists: use `<br/>` ### Lists Lists are defined using the `<list>` tag with bullet points: ``` <list> • First item • Second item • Third item with **bold text** </list> ``` ### Tables Tables use markdown-style syntax within 
`<table>` tags: ``` <table> | Header 1 | Header 2 | Header 3 | |----------|----------|----------| | Row 1-1 | Row 1-2 | Row 1-3 | | Row 2-1 | Row 2-2 | Row 2-3 | </table> ``` ## Example Here's a complete example of a presentation with one slide using all available components: ``` nazareai_ppt_fce nazareai_ppt_slide_1: <headline>Main Presentation Title</headline> <subheadline>Supporting Information</subheadline> <text>This is the main content of the slide explaining key points.</text> <list> • First important point • Second point with **emphasized text** • Third point with additional details<br/>spanning multiple lines </list> <emphasis>Key takeaway message</emphasis> <table> | Metric | Value | Change | |--------|-------|--------| | Sales | 100K | +15% | | Costs | 50K | -10% | </table> end_nazareai_ppt_fce ``` ### Example Interaction ```plaintext **System**: You are a Senior Marketing Strategist, designed to provide insights, strategies, and guidance for marketing campaigns. **User**: Draft a marketing strategy to promote an AI-based search engine aimed at tech-savvy young professionals. **AI**: Marketing Strategy for AI-Based Search Engine Targeting Tech-Savvy Young Professionals Target Audience Identification - Age: 18-35 - Interests: Technology, innovation, digital marketing, AI advancements - Behaviors: Active on social media platforms (LinkedIn, Instagram), frequent users of technology-related content Unique Value Proposition (UVP) - Fast and efficient search engine capabilities leveraging AI to enhance user experience. - Personalized results based on advanced algorithms that adapt to user behavior. Content Marketing Plan - Create a blog focused on AI in search engines, featuring case studies and how-to guides. - Develop video tutorials demonstrating the unique features of the search engine. - Engage influencers in the tech space to promote the platform through testimonials and reviews. 
Social Media Campaigns - Utilize platforms such as LinkedIn and Twitter for targeted ads showcasing app benefits. - Host live Q&A sessions with industry experts discussing AI advancements. - Leverage user-generated content by encouraging users to share their experiences. Partnerships and Collaborations - Partner with universities and tech startups to create partnerships that can amplify reach. - Collaborate with established tech blogs for reviews or features. Email Marketing Strategy - Develop segmented email campaigns targeting different user personas within the target demographic. - Offer exclusive promotions or early access to beta versions of new features. SEO Optimization - Optimize website content around relevant keywords such as "AI search engine," "advanced search tools," and "personalized search." - Create backlinks through guest blogging on technology-related sites. Metrics for Success - User growth rate: Aim for a 20% increase in active users within the first six months. - Engagement rates on social media: Target a 15% increase in interactions (likes, shares, comments). - Conversion rate from leads to customers: Set an initial goal of 5% conversion from sign-ups. Implementation Timeline - Month 1-2: Audience research and content creation. - Month 3: Launch social media campaigns and influencer partnerships. - Month 4-6: Monitor metrics, optimize strategies based on performance data. Continuous Improvement - Regularly assess campaign effectiveness through A/B testing for ads and landing pages. - Gather user feedback to refine features and address pain points. By following this structured approach, the AI-based search engine can effectively reach its target audience while maximizing engagement and conversion rates. ```
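The slide markup described above is regular enough to extract with a few regular expressions. Below is a minimal parser sketch (a hypothetical helper, not part of NazareAI; it assumes at most one component of each type per slide):

```python
import re


def parse_presentation(text: str) -> list[dict]:
    """Parse NazareAI presentation markup into a list of slide dicts."""
    # Keep only the content between the global markers.
    m = re.search(r"nazareai_ppt_fce(.*?)end_nazareai_ppt_fce", text, re.DOTALL)
    if not m:
        raise ValueError("missing nazareai_ppt_fce markers")
    body = m.group(1)

    slides = []
    # Split on slide markers; the capture group keeps each slide number.
    parts = re.split(r"nazareai_ppt_slide_(\d+):", body)
    for i in range(1, len(parts), 2):
        content = parts[i + 1]
        slide = {"number": int(parts[i])}
        # Pull out each tagged component that is present on this slide.
        for tag in ("headline", "subheadline", "text", "emphasis", "list", "table"):
            tm = re.search(rf"<{tag}>(.*?)</{tag}>", content, re.DOTALL)
            if tm:
                slide[tag] = tm.group(1).strip()
        slides.append(slide)
    return slides
```

A renderer can then walk the returned list and map each component key to its visual treatment (e.g. `emphasis` to a highlighted footer, `table` to a markdown-table widget).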
[ "CRAFT" ]
biggunnyso4/stella_en_400M_v5_cpu
biggunnyso4
sentence-similarity
[ "sentence-transformers", "pytorch", "safetensors", "new", "feature-extraction", "mteb", "transformers", "sentence-similarity", "custom_code", "license:mit", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
2024-09-06T15:15:15Z
2024-09-09T03:00:51+00:00
370
0
29.785129558987432 - type: nauc_recall_at_10_max value: 28.379657867558034 - type: nauc_recall_at_10_std value: 21.132574624091973 - type: nauc_recall_at_1_diff1 value: 60.645344065853145 - type: nauc_recall_at_1_max value: 31.232776777514797 - type: nauc_recall_at_1_std value: -1.1946138176109171 - type: nauc_recall_at_20_diff1 value: 25.88845612373954 - type: nauc_recall_at_20_max value: 30.24785945821152 - type: nauc_recall_at_20_std value: 31.73911437468067 - type: nauc_recall_at_3_diff1 value: 42.2968464797395 - type: nauc_recall_at_3_max value: 26.494318009870018 - type: nauc_recall_at_3_std value: 2.6045977160467544 - type: nauc_recall_at_5_diff1 value: 35.81340094401374 - type: nauc_recall_at_5_max value: 25.91082947510634 - type: nauc_recall_at_5_std value: 9.759404930864779 - type: ndcg_at_1 value: 87.819 - type: ndcg_at_10 value: 90.986 - type: ndcg_at_100 value: 91.69 - type: ndcg_at_1000 value: 91.863 - type: ndcg_at_20 value: 91.293 - type: ndcg_at_3 value: 89.621 - type: ndcg_at_5 value: 90.333 - type: precision_at_1 value: 87.819 - type: precision_at_10 value: 10.753 - type: precision_at_100 value: 1.138 - type: precision_at_1000 value: 0.117 - type: precision_at_20 value: 5.4879999999999995 - type: precision_at_3 value: 33.703 - type: precision_at_5 value: 20.831 - type: recall_at_1 value: 81.601 - type: recall_at_10 value: 95.44200000000001 - type: recall_at_100 value: 98.14399999999999 - type: recall_at_1000 value: 99.157 - type: recall_at_20 value: 96.43 - type: recall_at_3 value: 91.729 - type: recall_at_5 value: 93.552 - task: type: Retrieval dataset: name: MTEB FiQA2018 type: mteb/fiqa config: default split: test revision: 27a168819829fe9bcd655c2df245fb19452e8e06 metrics: - type: main_score value: 56.056 - type: map_at_1 value: 28.666000000000004 - type: map_at_10 value: 47.437000000000005 - type: map_at_100 value: 49.537 - type: map_at_1000 value: 49.665 - type: map_at_20 value: 48.618 - type: map_at_3 value: 41.355 - type: map_at_5 value: 
44.525 - type: mrr_at_1 value: 55.55555555555556 - type: mrr_at_10 value: 63.705173427395614 - type: mrr_at_100 value: 64.25449940779741 - type: mrr_at_1000 value: 64.27635581092147 - type: mrr_at_20 value: 64.03796029079103 - type: mrr_at_3 value: 61.49691358024688 - type: mrr_at_5 value: 62.73148148148143 - type: nauc_map_at_1000_diff1 value: 43.24282910397747 - type: nauc_map_at_1000_max value: 28.506093180265644 - type: nauc_map_at_1000_std value: -13.040508386155054 - type: nauc_map_at_100_diff1 value: 43.23650442904607 - type: nauc_map_at_100_max value: 28.470565635459156 - type: nauc_map_at_100_std value: -12.988098780714935 - type: nauc_map_at_10_diff1 value: 43.393840733087686 - type: nauc_map_at_10_max value: 26.637302062720153 - type: nauc_map_at_10_std value: -14.47500292113762 - type: nauc_map_at_1_diff1 value: 47.705150227211725 - type: nauc_map_at_1_max value: 15.354189686550129 - type: nauc_map_at_1_std value: -14.559819859039067 - type: nauc_map_at_20_diff1 value: 43.14121075706104 - type: nauc_map_at_20_max value: 27.811170590408395 - type: nauc_map_at_20_std value: -13.459413585283583 - type: nauc_map_at_3_diff1 value: 44.33938667720801 - type: nauc_map_at_3_max value: 21.785619884549398 - type: nauc_map_at_3_std value: -15.569980103071593 - type: nauc_map_at_5_diff1 value: 43.39280905665027 - type: nauc_map_at_5_max value: 25.021492190645017 - type: nauc_map_at_5_std value: -14.48856622187443 - type: nauc_mrr_at_1000_diff1 value: 52.971563939946286 - type: nauc_mrr_at_1000_max value: 38.88019486172324 - type: nauc_mrr_at_1000_std value: -12.412991642381616 - type: nauc_mrr_at_100_diff1 value: 52.978468139876945 - type: nauc_mrr_at_100_max value: 38.89751787948751 - type: nauc_mrr_at_100_std value: -12.3677876252269 - type: nauc_mrr_at_10_diff1 value: 52.78507148048174 - type: nauc_mrr_at_10_max value: 38.55079809310022 - type: nauc_mrr_at_10_std value: -12.944127025078755 - type: nauc_mrr_at_1_diff1 value: 55.52626805861546 - type: 
nauc_mrr_at_1_max value: 40.49306809164979 - type: nauc_mrr_at_1_std value: -12.886607701317681 - type: nauc_mrr_at_20_diff1 value: 52.9592152665678 - type: nauc_mrr_at_20_max value: 38.88514014589964 - type: nauc_mrr_at_20_std value: -12.434464359819444 - type: nauc_mrr_at_3_diff1 value: 52.73696844091174 - type: nauc_mrr_at_3_max value: 38.61018727252859 - type: nauc_mrr_at_3_std value: -13.123989867364166 - type: nauc_mrr_at_5_diff1 value: 53.037110010188 - type: nauc_mrr_at_5_max value: 38.44770729849151 - type: nauc_mrr_at_5_std value: -13.49318771828972 - type: nauc_ndcg_at_1000_diff1 value: 44.73813840091289 - type: nauc_ndcg_at_1000_max value: 33.70113904685389 - type: nauc_ndcg_at_1000_std value: -10.328687058192742 - type: nauc_ndcg_at_100_diff1 value: 44.595174119928835 - type: nauc_ndcg_at_100_max value: 33.4788285112467 - type: nauc_ndcg_at_100_std value: -8.695355259716946 - type: nauc_ndcg_at_10_diff1 value: 44.39837225263 - type: nauc_ndcg_at_10_max value: 29.188289725593393 - type: nauc_ndcg_at_10_std value: -13.67608323673103 - type: nauc_ndcg_at_1_diff1 value: 55.52626805861546 - type: nauc_ndcg_at_1_max value: 40.49306809164979 - type: nauc_ndcg_at_1_std value: -12.886607701317681 - type: nauc_ndcg_at_20_diff1 value: 44.24661739902305 - type: nauc_ndcg_at_20_max value: 31.667868318249965 - type: nauc_ndcg_at_20_std value: -10.65470780066342 - type: nauc_ndcg_at_3_diff1 value: 43.39857166975522 - type: nauc_ndcg_at_3_max value: 31.764668313577495 - type: nauc_ndcg_at_3_std value: -14.494866954678152 - type: nauc_ndcg_at_5_diff1 value: 43.16976647347281 - type: nauc_ndcg_at_5_max value: 29.878329062643143 - type: nauc_ndcg_at_5_std value: -13.987689089179739 - type: nauc_precision_at_1000_diff1 value: -9.807973252625484 - type: nauc_precision_at_1000_max value: 26.6279603849494 - type: nauc_precision_at_1000_std value: 7.113187103520632 - type: nauc_precision_at_100_diff1 value: -4.777149603323976 - type: nauc_precision_at_100_max value: 
31.03410463692187 - type: nauc_precision_at_100_std value: 10.463144150275435 - type: nauc_precision_at_10_diff1 value: 8.691528703215962 - type: nauc_precision_at_10_max value: 33.329579434123374 - type: nauc_precision_at_10_std value: -0.8002015226329403 - type: nauc_precision_at_1_diff1 value: 55.52626805861546 - type: nauc_precision_at_1_max value: 40.49306809164979 - type: nauc_precision_at_1_std value: -12.886607701317681 - type: nauc_precision_at_20_diff1 value: 3.4564653474184284 - type: nauc_precision_at_20_max value: 34.401070158471136 - type: nauc_precision_at_20_std value: 5.813431200164549 - type: nauc_precision_at_3_diff1 value: 22.463219705462187 - type: nauc_precision_at_3_max value: 34.77413976546924 - type: nauc_precision_at_3_std value: -7.083890789741479 - type: nauc_precision_at_5_diff1 value: 14.011006004883154 - type: nauc_precision_at_5_max value: 35.73655466853702 - type: nauc_precision_at_5_std value: -2.8395172077771598 - type: nauc_recall_at_1000_diff1 value: 16.478046357391555 - type: nauc_recall_at_1000_max value: 43.231704288282344 - type: nauc_recall_at_1000_std value: 38.430684937573645 - type: nauc_recall_at_100_diff1 value: 30.764718344602436 - type: nauc_recall_at_100_max value: 31.769050487166655 - type: nauc_recall_at_100_std value: 23.48468311677149 - type: nauc_recall_at_10_diff1 value: 34.47339565324045 - type: nauc_recall_at_10_max value: 19.054212335800454 - type: nauc_recall_at_10_std value: -11.039734015330437 - type: nauc_recall_at_1_diff1 value: 47.705150227211725 - type: nauc_recall_at_1_max value: 15.354189686550129 - type: nauc_recall_at_1_std value: -14.559819859039067 - type: nauc_recall_at_20_diff1 value: 32.1011474016873 - type: nauc_recall_at_20_max value: 25.546372988304423 - type: nauc_recall_at_20_std value: -0.007233471152482897 - type: nauc_recall_at_3_diff1 value: 37.5708138019065 - type: nauc_recall_at_3_max value: 16.66410785756736 - type: nauc_recall_at_3_std value: -15.404817020108966 - type: 
nauc_recall_at_5_diff1 value: 35.714519648479595 - type: nauc_recall_at_5_max value: 19.02075233009296 - type: nauc_recall_at_5_std value: -13.180963359760725 - type: ndcg_at_1 value: 55.556000000000004 - type: ndcg_at_10 value: 56.056 - type: ndcg_at_100 value: 62.44 - type: ndcg_at_1000 value: 64.263 - type: ndcg_at_20 value: 58.638999999999996 - type: ndcg_at_3 value: 51.722 - type: ndcg_at_5 value: 52.701 - type: precision_at_1 value: 55.556000000000004 - type: precision_at_10 value: 15.679000000000002 - type: precision_at_100 value: 2.252 - type: precision_at_1000 value: 0.257 - type: precision_at_20 value: 9.02 - type: precision_at_3 value: 34.619 - type: precision_at_5 value: 25.093 - type: recall_at_1 value: 28.666000000000004 - type: recall_at_10 value: 63.717999999999996 - type: recall_at_100 value: 86.938 - type: recall_at_1000 value: 97.603 - type: recall_at_20 value: 71.649 - type: recall_at_3 value: 46.663 - type: recall_at_5 value: 53.313 - task: type: Retrieval dataset: name: MTEB HotpotQA type: mteb/hotpotqa config: default split: test revision: ab518f4d6fcca38d87c25209f94beba119d02014 metrics: - type: main_score value: 71.74199999999999 - type: map_at_1 value: 41.729 - type: map_at_10 value: 63.168 - type: map_at_100 value: 64.132 - type: map_at_1000 value: 64.199 - type: map_at_20 value: 63.736000000000004 - type: map_at_3 value: 59.826 - type: map_at_5 value: 61.882000000000005 - type: mrr_at_1 value: 83.45712356515868 - type: mrr_at_10 value: 87.850342432719 - type: mrr_at_100 value: 88.0016320691113 - type: mrr_at_1000 value: 88.00576596968136 - type: mrr_at_20 value: 87.94463253190389 - type: mrr_at_3 value: 87.13706954760278 - type: mrr_at_5 value: 87.59419311276136 - type: nauc_map_at_1000_diff1 value: 13.635446621095054 - type: nauc_map_at_1000_max value: 18.670632529445633 - type: nauc_map_at_1000_std value: 10.444842636150575 - type: nauc_map_at_100_diff1 value: 13.599262398010783 - type: nauc_map_at_100_max value: 18.636389405484806 - 
type: nauc_map_at_100_std value: 10.460027483576043 - type: nauc_map_at_10_diff1 value: 13.235053919323942 - type: nauc_map_at_10_max value: 18.252140477080047 - type: nauc_map_at_10_std value: 9.9075337042203 - type: nauc_map_at_1_diff1 value: 76.51940497836482 - type: nauc_map_at_1_max value: 51.251419487235474 - type: nauc_map_at_1_std value: 0.16714896857146574 - type: nauc_map_at_20_diff1 value: 13.4178245722222 - type: nauc_map_at_20_max value: 18.40988771210718 - type: nauc_map_at_20_std value: 10.216685163366282 - type: nauc_map_at_3_diff1 value: 13.38370761663418 - type: nauc_map_at_3_max value: 17.760962555456537 - type: nauc_map_at_3_std value: 7.15741965624388 - type: nauc_map_at_5_diff1 value: 13.138133309724855 - type: nauc_map_at_5_max value: 17.871761295251044 - type: nauc_map_at_5_std value: 8.475147426940074 - type: nauc_mrr_at_1000_diff1 value: 75.82650818891959 - type: nauc_mrr_at_1000_max value: 53.6736100668434 - type: nauc_mrr_at_1000_std value: 1.8025016349213916 - type: nauc_mrr_at_100_diff1 value: 75.82530574210111 - type: nauc_mrr_at_100_max value: 53.68067545829002 - type: nauc_mrr_at_100_std value: 1.8147470536495791 - type: nauc_mrr_at_10_diff1 value: 75.8330135686799 - type: nauc_mrr_at_10_max value: 53.78626885349077 - type: nauc_mrr_at_10_std value: 1.7975782717226636 - type: nauc_mrr_at_1_diff1 value: 76.51940497836482 - type: nauc_mrr_at_1_max value: 51.251419487235474 - type: nauc_mrr_at_1_std value: 0.16714896857146574 - type: nauc_mrr_at_20_diff1 value: 75.82783382464166 - type: nauc_mrr_at_20_max value: 53.68364567043885 - type: nauc_mrr_at_20_std value: 1.742037904463963 - type: nauc_mrr_at_3_diff1 value: 75.6944609768663 - type: nauc_mrr_at_3_max value: 53.803941340341666 - type: nauc_mrr_at_3_std value: 1.1849945458077804 - type: nauc_mrr_at_5_diff1 value: 75.73006960604903 - type: nauc_mrr_at_5_max value: 53.62223096420106 - type: nauc_mrr_at_5_std value: 1.6144067563410909 - type: nauc_ndcg_at_1000_diff1 value: 
21.58025241642726 - type: nauc_ndcg_at_1000_max value: 24.675747527001153 - type: nauc_ndcg_at_1000_std value: 13.075943547492718 - type: nauc_ndcg_at_100_diff1 value: 20.30260137544846 - type: nauc_ndcg_at_100_max value: 23.757528813872018 - type: nauc_ndcg_at_100_std value: 13.648994687574062 - type: nauc_ndcg_at_10_diff1 value: 18.995052360997818 - type: nauc_ndcg_at_10_max value: 22.254260808196037 - type: nauc_ndcg_at_10_std value: 11.27212390633054 - type: nauc_ndcg_at_1_diff1 value: 76.51940497836482 - type: nauc_ndcg_at_1_max value: 51.251419487235474 - type: nauc_ndcg_at_1_std value: 0.16714896857146574 - type: nauc_ndcg_at_20_diff1 value: 19.333742380695757 - type: nauc_ndcg_at_20_max value: 22.527779834633364 - type: nauc_ndcg_at_20_std value: 12.161009000707917 - type: nauc_ndcg_at_3_diff1 value: 20.013329040965534 - type: nauc_ndcg_at_3_max value: 21.99692460311921 - type: nauc_ndcg_at_3_std value: 6.8076290638386165 - type: nauc_ndcg_at_5_diff1 value: 19.08226315942471 - type: nauc_ndcg_at_5_max value: 21.71185964294168 - type: nauc_ndcg_at_5_std value: 8.671911269518214 - type: nauc_precision_at_1000_diff1 value: 2.4462475489446764 - type: nauc_precision_at_1000_max value: 29.145662064268578 - type: nauc_precision_at_1000_std value: 49.20704909525856 - type: nauc_precision_at_100_diff1 value: 0.11271196725540299 - type: nauc_precision_at_100_max value: 17.37584606388067 - type: nauc_precision_at_100_std value: 34.66099346244071 - type: nauc_precision_at_10_diff1 value: 2.9923183951227825 - type: nauc_precision_at_10_max value: 14.261884731124264 - type: nauc_precision_at_10_std value: 18.084188795498378 - type: nauc_precision_at_1_diff1 value: 76.51940497836482 - type: nauc_precision_at_1_max value: 51.251419487235474 - type: nauc_precision_at_1_std value: 0.16714896857146574 - type: nauc_precision_at_20_diff1 value: 1.9180293008303761 - type: nauc_precision_at_20_max value: 13.832269193468512 - type: nauc_precision_at_20_std value: 21.65284406055607 
- type: nauc_precision_at_3_diff1 value: 7.226609484731811 - type: nauc_precision_at_3_max value: 15.162908526977272 - type: nauc_precision_at_3_std value: 8.451859972962776 - type: nauc_precision_at_5_diff1 value: 4.705236845538159 - type: nauc_precision_at_5_max value: 14.022910843582666 - type: nauc_precision_at_5_std value: 11.777269322821605 - type: nauc_recall_at_1000_diff1 value: 2.446247548945172 - type: nauc_recall_at_1000_max value: 29.14566206426889 - type: nauc_recall_at_1000_std value: 49.20704909525879 - type: nauc_recall_at_100_diff1 value: 0.1127119672553316 - type: nauc_recall_at_100_max value: 17.37584606388062 - type: nauc_recall_at_100_std value: 34.660993462440686 - type: nauc_recall_at_10_diff1 value: 2.9923183951227927 - type: nauc_recall_at_10_max value: 14.261884731124299 - type: nauc_recall_at_10_std value: 18.08418879549837 - type: nauc_recall_at_1_diff1 value: 76.51940497836482 - type: nauc_recall_at_1_max value: 51.251419487235474 - type: nauc_recall_at_1_std value: 0.16714896857146574 - type: nauc_recall_at_20_diff1 value: 1.918029300830432 - type: nauc_recall_at_20_max value: 13.832269193468566 - type: nauc_recall_at_20_std value: 21.65284406055605 - type: nauc_recall_at_3_diff1 value: 7.226609484731802 - type: nauc_recall_at_3_max value: 15.162908526977182 - type: nauc_recall_at_3_std value: 8.451859972962634 - type: nauc_recall_at_5_diff1 value: 4.705236845538197 - type: nauc_recall_at_5_max value: 14.02291084358265 - type: nauc_recall_at_5_std value: 11.777269322821638 - type: ndcg_at_1 value: 83.45700000000001 - type: ndcg_at_10 value: 71.74199999999999 - type: ndcg_at_100 value: 75.008 - type: ndcg_at_1000 value: 76.242 - type: ndcg_at_20 value: 73.114 - type: ndcg_at_3 value: 67.128 - type: ndcg_at_5 value: 69.645 - type: precision_at_1 value: 83.45700000000001 - type: precision_at_10 value: 14.747 - type: precision_at_100 value: 1.73 - type: precision_at_1000 value: 0.189 - type: precision_at_20 value: 7.8149999999999995 - 
type: precision_at_3 value: 42.323 - type: precision_at_5 value: 27.381 - type: recall_at_1 value: 41.729 - type: recall_at_10 value: 73.734 - type: recall_at_100 value: 86.502 - type: recall_at_1000 value: 94.60499999999999 - type: recall_at_20 value: 78.14999999999999 - type: recall_at_3 value: 63.483999999999995 - type: recall_at_5 value: 68.45400000000001 - task: type: Classification dataset: name: MTEB ImdbClassification type: mteb/imdb config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 96.4904 - type: ap value: 94.85481918794709 - type: ap_weighted value: 94.85481918794709 - type: f1 value: 96.4898592305707 - type: f1_weighted value: 96.4898592305707 - type: main_score value: 96.4904 - task: type: Retrieval dataset: name: MTEB MSMARCO type: mteb/msmarco config: default split: dev revision: c5a29a104738b98a9e76336939199e264163d4a0 metrics: - type: main_score value: 43.692 - type: map_at_1 value: 23.751 - type: map_at_10 value: 36.553999999999995 - type: map_at_100 value: 37.721 - type: map_at_1000 value: 37.763999999999996 - type: map_at_20 value: 37.289 - type: map_at_3 value: 32.643 - type: map_at_5 value: 34.851 - type: mrr_at_1 value: 24.455587392550143 - type: mrr_at_10 value: 37.18388706963206 - type: mrr_at_100 value: 38.28330737932916 - type: mrr_at_1000 value: 38.32054399710817 - type: mrr_at_20 value: 37.8818001216278 - type: mrr_at_3 value: 33.35721107927405 - type: mrr_at_5 value: 35.52483285577843 - type: nauc_map_at_1000_diff1 value: 36.3576177260684 - type: nauc_map_at_1000_max value: 7.854511605962703 - type: nauc_map_at_1000_std value: -17.701121059746878 - type: nauc_map_at_100_diff1 value: 36.356075649230505 - type: nauc_map_at_100_max value: 7.862168042999533 - type: nauc_map_at_100_std value: -17.670102459097233 - type: nauc_map_at_10_diff1 value: 36.22122978875574 - type: nauc_map_at_10_max value: 7.80848606967416 - type: nauc_map_at_10_std value: -18.3265151386167 - type: 
nauc_map_at_1_diff1 value: 39.28605466408357 - type: nauc_map_at_1_max value: 6.20202977590459 - type: nauc_map_at_1_std value: -15.734334090045026 - type: nauc_map_at_20_diff1 value: 36.33637880909657 - type: nauc_map_at_20_max value: 7.843437969476022 - type: nauc_map_at_20_std value: -17.917533363025996 - type: nauc_map_at_3_diff1 value: 36.24864976076741 - type: nauc_map_at_3_max value: 7.420345251835957 - type: nauc_map_at_3_std value: -18.71678497722944 - type: nauc_map_at_5_diff1 value: 36.0789619291824 - type: nauc_map_at_5_max value: 7.7314285669514495 - type: nauc_map_at_5_std value: -18.748688764538706 - type: nauc_mrr_at_1000_diff1 value: 36.23912675623378 - type: nauc_mrr_at_1000_max value: 7.690553436255147 - type: nauc_mrr_at_1000_std value: -17.609526070212304 - type: nauc_mrr_at_100_diff1 value: 36.23782651189002 - type: nauc_mrr_at_100_max value: 7.70075095171647 - type: nauc_mrr_at_100_std value: -17.575714144960184 - type: nauc_mrr_at_10_diff1 value: 36.125229472534215 - type: nauc_mrr_at_10_max value: 7.635472248755658 - type: nauc_mrr_at_10_std value: -18.208166616511086 - type: nauc_mrr_at_1_diff1 value: 39.20986875554532 - type: nauc_mrr_at_1_max value: 6.062668487561363 - type: nauc_mrr_at_1_std value: -16.04130340817602 - type: nauc_mrr_at_20_diff1 value: 36.21207088739667 - type: nauc_mrr_at_20_max value: 7.699610250145951 - type: nauc_mrr_at_20_std value: -17.778245221724028 - type: nauc_mrr_at_3_diff1 value: 36.03957583885305 - type: nauc_mrr_at_3_max value: 7.225515576504581 - type: nauc_mrr_at_3_std value: -18.74478742943741 - type: nauc_mrr_at_5_diff1 value: 35.969152496648974 - type: nauc_mrr_at_5_max value: 7.584059789018233 - type: nauc_mrr_at_5_std value: -18.569374723129332 - type: nauc_ndcg_at_1000_diff1 value: 35.894655529841806 - type: nauc_ndcg_at_1000_max value: 8.579327424366236 - type: nauc_ndcg_at_1000_std value: -16.359677367747896 - type: nauc_ndcg_at_100_diff1 value: 35.89861902483983 - type: nauc_ndcg_at_100_max 
value: 8.830873623962242 - type: nauc_ndcg_at_100_std value: -15.173125564722978 - type: nauc_ndcg_at_10_diff1 value: 35.36499811105169 - type: nauc_ndcg_at_10_max value: 8.449267180956992 - type: nauc_ndcg_at_10_std value: -18.41978802362402 - type: nauc_ndcg_at_1_diff1 value: 39.15422481210622 - type: nauc_ndcg_at_1_max value: 6.055515791928331 - type: nauc_ndcg_at_1_std value: -16.042779610876252 - type: nauc_ndcg_at_20_diff1 value: 35.73402868264468 - type: nauc_ndcg_at_20_max value: 8.695705518210847 - type: nauc_ndcg_at_20_std value: -16.7735829470466 - type: nauc_ndcg_at_3_diff1 value: 35.31358242856231 - type: nauc_ndcg_at_3_max value: 7.645692789058997 - type: nauc_ndcg_at_3_std value: -19.460003734786874 - type: nauc_ndcg_at_5_diff1 value: 35.05216588927143 - type: nauc_ndcg_at_5_max value: 8.216690520604715 - type: nauc_ndcg_at_5_std value: -19.3982054492159 - type: nauc_precision_at_1000_diff1 value: -4.440002625111349 - type: nauc_precision_at_1000_max value: 7.886988951901723 - type: nauc_precision_at_1000_std value: 9.88111187048247 - type: nauc_precision_at_100_diff1 value: 15.728286119463325 - type: nauc_precision_at_100_max value: 13.218650824470654 - type: nauc_precision_at_100_std value: 16.113245895522553 - type: nauc_precision_at_10_diff1 value: 29.51218489610567 - type: nauc_precision_at_10_max value: 10.197432401942912 - type: nauc_precision_at_10_std value: -16.950603431359493 - type: nauc_precision_at_1_diff1 value: 39.15422481210622 - type: nauc_precision_at_1_max value: 6.055515791928331 - type: nauc_precision_at_1_std value: -16.042779610876252 - type: nauc_precision_at_20_diff1 value: 27.825993070397338 - type: nauc_precision_at_20_max value: 11.437632287846007 - type: nauc_precision_at_20_std value: -7.450353566405601 - type: nauc_precision_at_3_diff1 value: 32.14135556796588 - type: nauc_precision_at_3_max value: 7.989252443574163 - type: nauc_precision_at_3_std value: -21.566254595671055 - type: nauc_precision_at_5_diff1 value: 
30.68778685307082 - type: nauc_precision_at_5_max value: 9.332160758499892 - type: nauc_precision_at_5_std value: -20.928554713448914 - type: nauc_recall_at_1000_diff1 value: 25.00810478716878 - type: nauc_recall_at_1000_max value: 46.518165765201644 - type: nauc_recall_at_1000_std value: 61.4734635576085 - type: nauc_recall_at_100_diff1 value: 33.895581318261726 - type: nauc_recall_at_100_max value: 20.10706035872801 - type: nauc_recall_at_100_std value: 24.204226584457047 - type: nauc_recall_at_10_diff1 value: 32.363127359576296 - type: nauc_recall_at_10_max value: 10.729923804989545 - type: nauc_recall_at_10_std value: -18.1335370184202 - type: nauc_recall_at_1_diff1 value: 39.28605466408357 - type: nauc_recall_at_1_max value: 6.20202977590459 - type: nauc_recall_at_1_std value: -15.734334090045026 - type: nauc_recall_at_20_diff1 value: 33.47804003169795 - type: nauc_recall_at_20_max value: 12.781494765263382 - type: nauc_recall_at_20_std value: -9.263970132202658 - type: nauc_recall_at_3_diff1 value: 32.71001429428999 - type: nauc_recall_at_3_max value: 8.353439197382693 - type: nauc_recall_at_3_std value: -21.235097744366954 - type: nauc_recall_at_5_diff1 value: 31.87451464963415 - type: nauc_recall_at_5_max value: 9.635051450907305 - type: nauc_recall_at_5_std value: -21.113235357132794 - type: ndcg_at_1 value: 24.47 - type: ndcg_at_10 value: 43.692 - type: ndcg_at_100 value: 49.211 - type: ndcg_at_1000 value: 50.244 - type: ndcg_at_20 value: 46.278000000000006 - type: ndcg_at_3 value: 35.719 - type: ndcg_at_5 value: 39.652 - type: precision_at_1 value: 24.47 - type: precision_at_10 value: 6.857 - type: precision_at_100 value: 0.9610000000000001 - type: precision_at_1000 value: 0.105 - type: precision_at_20 value: 3.968 - type: precision_at_3 value: 15.181000000000001 - type: precision_at_5 value: 11.117 - type: recall_at_1 value: 23.751 - type: recall_at_10 value: 65.64 - type: recall_at_100 value: 90.967 - type: recall_at_1000 value: 98.738 - type: 
recall_at_20 value: 75.639 - type: recall_at_3 value: 43.927 - type: recall_at_5 value: 53.366 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (en) type: mteb/mtop_domain config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 98.82580939352485 - type: f1 value: 98.75201754333801 - type: f1_weighted value: 98.82795205108245 - type: main_score value: 98.82580939352485 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (en) type: mteb/mtop_intent config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 92.29822161422709 - type: f1 value: 77.75210224871594 - type: f1_weighted value: 93.58661422540348 - type: main_score value: 92.29822161422709 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (en) type: mteb/amazon_massive_intent config: en split: test revision: 4672e20407010da34463acc759c162ca9734bca6 metrics: - type: accuracy value: 85.17484868863484 - type: f1 value: 81.94484244487094 - type: f1_weighted value: 85.21022593423332 - type: main_score value: 85.17484868863484 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (en) type: mteb/amazon_massive_scenario config: en split: test revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 metrics: - type: accuracy value: 89.61667787491594 - type: f1 value: 89.02701927621264 - type: f1_weighted value: 89.56306982022801 - type: main_score value: 89.61667787491594 - task: type: Clustering dataset: name: MTEB MedrxivClusteringP2P type: mteb/medrxiv-clustering-p2p config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: main_score value: 46.318282423948574 - type: v_measure value: 46.318282423948574 - type: v_measure_std value: 0.9729055662461538 - task: type: Clustering dataset: name: MTEB MedrxivClusteringS2S type: mteb/medrxiv-clustering-s2s config: default split: test revision: 
35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: main_score value: 44.29033625273981 - type: v_measure value: 44.29033625273981 - type: v_measure_std value: 1.0596383629128594 - task: type: Reranking dataset: name: MTEB MindSmallReranking type: mteb/mind_small config: default split: test revision: 59042f120c80e8afa9cdbb224f67076cec0fc9a7 metrics: - type: main_score value: 33.0526129239962 - type: map value: 33.0526129239962 - type: mrr value: 34.29260046890935 - type: nAUC_map_diff1 value: 12.579738077238032 - type: nAUC_map_max value: -20.936629344962 - type: nAUC_map_std value: -1.6096805784945216 - type: nAUC_mrr_diff1 value: 11.597584463580807 - type: nAUC_mrr_max value: -15.723702838537504 - type: nAUC_mrr_std value: 0.2719172965777737 - task: type: Retrieval dataset: name: MTEB NFCorpus type: mteb/nfcorpus config: default split: test revision: ec0fa4fe99da2ff19ca1214b7966684033a58814 metrics: - type: main_score value: 41.486000000000004 - type: map_at_1 value: 6.866 - type: map_at_10 value: 15.895999999999999 - type: map_at_100 value: 21.093 - type: map_at_1000 value: 23.067 - type: map_at_20 value: 18.125 - type: map_at_3 value: 11.421000000000001 - type: map_at_5 value: 13.415 - type: mrr_at_1 value: 52.63157894736842 - type: mrr_at_10 value: 61.486805248415166 - type: mrr_at_100 value: 62.08211009182091 - type: mrr_at_1000 value: 62.10828701365016 - type: mrr_at_20 value: 61.904411187915784 - type: mrr_at_3 value: 59.90712074303407 - type: mrr_at_5 value: 60.91331269349847 - type: nauc_map_at_1000_diff1 value: 25.484625278529403 - type: nauc_map_at_1000_max value: 31.206600396418853 - type: nauc_map_at_1000_std value: 15.569448072357156 - type: nauc_map_at_100_diff1 value: 27.636750226316764 - type: nauc_map_at_100_max value: 29.66992681250722 - type: nauc_map_at_100_std value: 10.570600484002671 - type: nauc_map_at_10_diff1 value: 32.76642525548697 - type: nauc_map_at_10_max value: 21.459225397237663 - type: nauc_map_at_10_std value: 
value: 21.02940391379915 - type: nauc_precision_at_5_diff1 value: 13.560415600596318 - type: nauc_precision_at_5_max value: 48.918726500100085 - type: nauc_precision_at_5_std value: 39.940930429172184 - type: nauc_recall_at_1000_diff1 value: .nan - type: nauc_recall_at_1000_max value: .nan - type: nauc_recall_at_1000_std value: .nan - type: nauc_recall_at_100_diff1 value: 70.82166199813196 - type: nauc_recall_at_100_max value: 76.6106442577042 - type: nauc_recall_at_100_std value: 66.47992530345513 - type: nauc_recall_at_10_diff1 value: 62.68908885556092 - type: nauc_recall_at_10_max value: 58.14262437741839 - type: nauc_recall_at_10_std value: -12.946717875063369 - type: nauc_recall_at_1_diff1 value: 79.54189281784247 - type: nauc_recall_at_1_max value: 46.630071622109526 - type: nauc_recall_at_1_std value: -14.395943134644112 - type: nauc_recall_at_20_diff1 value: 65.79470497876567 - type: nauc_recall_at_20_max value: 71.68308183488456 - type: nauc_recall_at_20_std value: -12.556850697268453 - type: nauc_recall_at_3_diff1 value: 68.3240211318129 - type: nauc_recall_at_3_max value: 45.05998217275036 - type: nauc_recall_at_3_std value: -14.23179772593869 - type: nauc_recall_at_5_diff1 value: 67.53366869904056 - type: nauc_recall_at_5_max value: 53.57935627081027 - type: nauc_recall_at_5_std value: -3.3271112904853393 - type: ndcg_at_1 value: 64.667 - type: ndcg_at_10 value: 78.233 - type: ndcg_at_100 value: 79.806 - type: ndcg_at_1000 value: 79.92099999999999 - type: ndcg_at_20 value: 79.006 - type: ndcg_at_3 value: 74.018 - type: ndcg_at_5 value: 76.334 - type: precision_at_1 value: 64.667 - type: precision_at_10 value: 10.4 - type: precision_at_100 value: 1.1199999999999999 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_20 value: 5.383 - type: precision_at_3 value: 29.444 - type: precision_at_5 value: 19.467000000000002 - type: recall_at_1 value: 61.49400000000001 - type: recall_at_10 value: 92.156 - type: recall_at_100 value: 99.167 - 
type: recall_at_1000 value: 100.0 - type: recall_at_20 value: 94.833 - type: recall_at_3 value: 80.833 - type: recall_at_5 value: 86.6 - task: type: PairClassification dataset: name: MTEB SprintDuplicateQuestions type: mteb/sprintduplicatequestions-pairclassification config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cosine_accuracy value: 99.8039603960396 - type: cosine_accuracy_threshold value: 84.54211950302124 - type: cosine_ap value: 95.59056372734358 - type: cosine_f1 value: 90.1394422310757 - type: cosine_f1_threshold value: 84.54211950302124 - type: cosine_precision value: 89.78174603174604 - type: cosine_recall value: 90.5 - type: dot_accuracy value: 99.80594059405941 - type: dot_accuracy_threshold value: 85.57180166244507 - type: dot_ap value: 95.53453431914399 - type: dot_f1 value: 90.10442565887618 - type: dot_f1_threshold value: 84.59715843200684 - type: dot_precision value: 89.61424332344214 - type: dot_recall value: 90.60000000000001 - type: euclidean_accuracy value: 99.8039603960396 - type: euclidean_accuracy_threshold value: 53.253382444381714 - type: euclidean_ap value: 95.5850992402159 - type: euclidean_f1 value: 90.09457441513192 - type: euclidean_f1_threshold value: 55.725520849227905 - type: euclidean_precision value: 89.69276511397423 - type: euclidean_recall value: 90.5 - type: main_score value: 95.7485189884476 - type: manhattan_accuracy value: 99.81485148514851 - type: manhattan_accuracy_threshold value: 3491.29638671875 - type: manhattan_ap value: 95.7485189884476 - type: manhattan_f1 value: 90.464048954615 - type: manhattan_f1_threshold value: 3491.29638671875 - type: manhattan_precision value: 92.2996878251821 - type: manhattan_recall value: 88.7 - type: max_ap value: 95.7485189884476 - type: max_f1 value: 90.464048954615 - type: max_precision value: 92.2996878251821 - type: max_recall value: 90.60000000000001 - type: similarity_accuracy value: 99.8039603960396 - type: 
similarity_accuracy_threshold value: 84.54211950302124 - type: similarity_ap value: 95.59056372734358 - type: similarity_f1 value: 90.1394422310757 - type: similarity_f1_threshold value: 84.54211950302124 - type: similarity_precision value: 89.78174603174604 - type: similarity_recall value: 90.5 - task: type: Clustering dataset: name: MTEB StackExchangeClustering type: mteb/stackexchange-clustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: main_score value: 78.49205191950675 - type: v_measure value: 78.49205191950675 - type: v_measure_std value: 2.84869550699959 - task: type: Clustering dataset: name: MTEB StackExchangeClusteringP2P type: mteb/stackexchange-clustering-p2p config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: main_score value: 48.90421736513028 - type: v_measure value: 48.90421736513028 - type: v_measure_std value: 1.6875865714471023 - task: type: Reranking dataset: name: MTEB StackOverflowDupQuestions type: mteb/stackoverflowdupquestions-reranking config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: main_score value: 52.9874730481696 - type: map value: 52.9874730481696 - type: mrr value: 53.85867604617604 - type: nAUC_map_diff1 value: 39.633429293407616 - type: nAUC_map_max value: 10.236807988858546 - type: nAUC_map_std value: 10.276522217929674 - type: nAUC_mrr_diff1 value: 40.0543079218377 - type: nAUC_mrr_max value: 10.96209807382042 - type: nAUC_mrr_std value: 10.524400196109918 - task: type: Summarization dataset: name: MTEB SummEval type: mteb/summeval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cosine_pearson value: 30.727801109114232 - type: cosine_spearman value: 31.66058223980157 - type: dot_pearson value: 30.78818248622866 - type: dot_spearman value: 31.525158776890265 - type: main_score value: 31.66058223980157 - type: pearson value: 
30.727801109114232 - type: spearman value: 31.66058223980157 - task: type: Retrieval dataset: name: MTEB TRECCOVID type: mteb/trec-covid config: default split: test revision: bb9466bac8153a0349341eb1b22e06409e78ef4e metrics: - type: main_score value: 85.206 - type: map_at_1 value: 0.246 - type: map_at_10 value: 2.1950000000000003 - type: map_at_100 value: 14.179 - type: map_at_1000 value: 35.037 - type: map_at_20 value: 4.143 - type: map_at_3 value: 0.7100000000000001 - type: map_at_5 value: 1.135 - type: mrr_at_1 value: 94.0 - type: mrr_at_10 value: 96.66666666666666 - type: mrr_at_100 value: 96.66666666666666 - type: mrr_at_1000 value: 96.66666666666666 - type: mrr_at_20 value: 96.66666666666666 - type: mrr_at_3 value: 96.66666666666666 - type: mrr_at_5 value: 96.66666666666666 - type: nauc_map_at_1000_diff1 value: -4.6264497624527525 - type: nauc_map_at_1000_max value: 44.594457564749355 - type: nauc_map_at_1000_std value: 73.17642341400133 - type: nauc_map_at_100_diff1 value: 23.451335157405726 - type: nauc_map_at_100_max value: 25.426398857299525 - type: nauc_map_at_100_std value: 64.07416694472633 - type: nauc_map_at_10_diff1 value: 46.57568738568346 - type: nauc_map_at_10_max value: 9.693233249079238 - type: nauc_map_at_10_std value: 28.549530265164357 - type: nauc_map_at_1_diff1 value: 53.48238396620123 - type: nauc_map_at_1_max value: 0.33476619393733076 - type: nauc_map_at_1_std value: 8.906362219128463 - type: nauc_map_at_20_diff1 value: 39.40719602207749 - type: nauc_map_at_20_max value: 9.635915072074045 - type: nauc_map_at_20_std value: 35.15634791346394 - type: nauc_map_at_3_diff1 value: 53.11784737840137 - type: nauc_map_at_3_max value: 3.059682761072153 - type: nauc_map_at_3_std value: 21.310633086556617 - type: nauc_map_at_5_diff1 value: 49.91570701185436 - type: nauc_map_at_5_max value: 8.045082896244576 - type: nauc_map_at_5_std value: 20.597686235051647 - type: nauc_mrr_at_1000_diff1 value: 41.98412698412726 - type: nauc_mrr_at_1000_max value: 
78.24463118580779 - type: nauc_mrr_at_1000_std value: 0.30812324930028195 - type: nauc_mrr_at_100_diff1 value: 41.98412698412726 - type: nauc_mrr_at_100_max value: 78.24463118580779 - type: nauc_mrr_at_100_std value: 0.30812324930028195 - type: nauc_mrr_at_10_diff1 value: 41.98412698412726 - type: nauc_mrr_at_10_max value: 78.24463118580779 - type: nauc_mrr_at_10_std value: 0.30812324930028195 - type: nauc_mrr_at_1_diff1 value: 38.62433862433873 - type: nauc_mrr_at_1_max value: 80.78120136943666 - type: nauc_mrr_at_1_std value: -10.768751945222197 - type: nauc_mrr_at_20_diff1 value: 41.98412698412726 - type: nauc_mrr_at_20_max value: 78.24463118580779 - type: nauc_mrr_at_20_std value: 0.30812324930028195 - type: nauc_mrr_at_3_diff1 value: 41.98412698412726 - type: nauc_mrr_at_3_max value: 78.24463118580779 - type: nauc_mrr_at_3_std value: 0.30812324930028195 - type: nauc_mrr_at_5_diff1 value: 41.98412698412726 - type: nauc_mrr_at_5_max value: 78.24463118580779 - type: nauc_mrr_at_5_std value: 0.30812324930028195 - type: nauc_ndcg_at_1000_diff1 value: 0.5174948602880207 - type: nauc_ndcg_at_1000_max value: 48.60686602077053 - type: nauc_ndcg_at_1000_std value: 75.72456343175277 - type: nauc_ndcg_at_100_diff1 value: -20.747252137999254 - type: nauc_ndcg_at_100_max value: 49.985132618254994 - type: nauc_ndcg_at_100_std value: 61.096383293836574 - type: nauc_ndcg_at_10_diff1 value: 6.791377920463332 - type: nauc_ndcg_at_10_max value: 57.50019332833286 - type: nauc_ndcg_at_10_std value: 49.201028841219426 - type: nauc_ndcg_at_1_diff1 value: 54.92683440362145 - type: nauc_ndcg_at_1_max value: 83.8667228129276 - type: nauc_ndcg_at_1_std value: 1.6738604063586122 - type: nauc_ndcg_at_20_diff1 value: -5.1948699196314925 - type: nauc_ndcg_at_20_max value: 54.483087684806556 - type: nauc_ndcg_at_20_std value: 50.54823818118781 - type: nauc_ndcg_at_3_diff1 value: 26.267246500164372 - type: nauc_ndcg_at_3_max value: 63.0173212926611 - type: nauc_ndcg_at_3_std value: 
41.025597406368256 - type: nauc_ndcg_at_5_diff1 value: 16.910185454343036 - type: nauc_ndcg_at_5_max value: 60.9328683868778 - type: nauc_ndcg_at_5_std value: 36.70169905857712 - type: nauc_precision_at_1000_diff1 value: -46.374447765983525 - type: nauc_precision_at_1000_max value: 35.36052337813863 - type: nauc_precision_at_1000_std value: 14.219220668161018 - type: nauc_precision_at_100_diff1 value: -29.7838083657744 - type: nauc_precision_at_100_max value: 43.93589400385112 - type: nauc_precision_at_100_std value: 55.425045718579945 - type: nauc_precision_at_10_diff1 value: -12.016613405227687 - type: nauc_precision_at_10_max value: 57.79924427743131 - type: nauc_precision_at_10_std value: 49.022036703550675 - type: nauc_precision_at_1_diff1 value: 38.62433862433873 - type: nauc_precision_at_1_max value: 80.78120136943666 - type: nauc_precision_at_1_std value: -10.768751945222197 - type: nauc_precision_at_20_diff1 value: -23.95633847880195 - type: nauc_precision_at_20_max value: 48.34715917258276 - type: nauc_precision_at_20_std value: 48.82198285255887 - type: nauc_precision_at_3_diff1 value: 6.871296905858807 - type: nauc_precision_at_3_max value: 70.54805793285054 - type: nauc_precision_at_3_std value: 44.65108624094803 - type: nauc_precision_at_5_diff1 value: -9.074932448759695 - type: nauc_precision_at_5_max value: 67.41284242437573 - type: nauc_precision_at_5_std value: 23.876891983919577 - type: nauc_recall_at_1000_diff1 value: 8.142288830293255 - type: nauc_recall_at_1000_max value: 38.85182826835104 - type: nauc_recall_at_1000_std value: 68.60783819217335 - type: nauc_recall_at_100_diff1 value: 34.262914076287466 - type: nauc_recall_at_100_max value: 12.87009658528838 - type: nauc_recall_at_100_std value: 56.21330603762995 - type: nauc_recall_at_10_diff1 value: 49.33830945338758 - type: nauc_recall_at_10_max value: 0.3539875530671406 - type: nauc_recall_at_10_std value: 26.85864465557644 - type: nauc_recall_at_1_diff1 value: 53.48238396620123 - type: 
nauc_recall_at_1_max value: 0.33476619393733076 - type: nauc_recall_at_1_std value: 8.906362219128463 - type: nauc_recall_at_20_diff1 value: 44.21928181266254 - type: nauc_recall_at_20_max value: -0.9198356057088594 - type: nauc_recall_at_20_std value: 31.484376992896784 - type: nauc_recall_at_3_diff1 value: 53.038093080990876 - type: nauc_recall_at_3_max value: -1.4170895916973003 - type: nauc_recall_at_3_std value: 21.890202855574497 - type: nauc_recall_at_5_diff1 value: 49.39742214825278 - type: nauc_recall_at_5_max value: 2.8412267611894517 - type: nauc_recall_at_5_std value: 18.01598921859512 - type: ndcg_at_1 value: 91.0 - type: ndcg_at_10 value: 85.206 - type: ndcg_at_100 value: 67.29 - type: ndcg_at_1000 value: 60.584 - type: ndcg_at_20 value: 82.321 - type: ndcg_at_3 value: 88.642 - type: ndcg_at_5 value: 87.063 - type: precision_at_1 value: 94.0 - type: precision_at_10 value: 89.8 - type: precision_at_100 value: 69.78 - type: precision_at_1000 value: 26.738 - type: precision_at_20 value: 87.2 - type: precision_at_3 value: 92.0 - type: precision_at_5 value: 90.8 - type: recall_at_1 value: 0.246 - type: recall_at_10 value: 2.344 - type: recall_at_100 value: 16.962 - type: recall_at_1000 value: 57.325 - type: recall_at_20 value: 4.517 - type: recall_at_3 value: 0.731 - type: recall_at_5 value: 1.1780000000000002 - task: type: Retrieval dataset: name: MTEB Touche2020 type: mteb/touche2020 config: default split: test revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f metrics: - type: main_score value: 31.455 - type: map_at_1 value: 2.9739999999999998 - type: map_at_10 value: 12.183 - type: map_at_100 value: 18.772 - type: map_at_1000 value: 20.415 - type: map_at_20 value: 14.451 - type: map_at_3 value: 6.507000000000001 - type: map_at_5 value: 8.66 - type: mrr_at_1 value: 40.816326530612244 - type: mrr_at_10 value: 57.70975056689341 - type: mrr_at_100 value: 58.18379126542391 - type: mrr_at_1000 value: 58.18379126542391 - type: mrr_at_20 value: 
57.85552316164561 - type: mrr_at_3 value: 54.08163265306123 - type: mrr_at_5 value: 56.42857142857143 - type: nauc_map_at_1000_diff1 value: 3.1567471051481437 - type: nauc_map_at_1000_max value: -1.5882060729791523 - type: nauc_map_at_1000_std value: 18.69622198722074 - type: nauc_map_at_100_diff1 value: 3.3449677678147536 - type: nauc_map_at_100_max value: -2.8928606866168405 - type: nauc_map_at_100_std value: 15.789984947653412 - type: nauc_map_at_10_diff1 value: 2.9696743570444264 - type: nauc_map_at_10_max value: -9.096749212011876 - type: nauc_map_at_10_std value: -5.38545817258353 - type: nauc_map_at_1_diff1 value: 20.680780404542546 - type: nauc_map_at_1_max value: -7.04722927447817 - type: nauc_map_at_1_std value: -7.062494733973898 - type: nauc_map_at_20_diff1 value: 4.070437790119271 - type: nauc_map_at_20_max value: -4.84491434686032 - type: nauc_map_at_20_std value: 0.5846341109021014 - type: nauc_map_at_3_diff1 value: 11.9634978045925 - type: nauc_map_at_3_max value: -8.27834591046608 - type: nauc_map_at_3_std value: -8.687615453381065 - type: nauc_map_at_5_diff1 value: 0.9195191526009436 - type: nauc_map_at_5_max value: -1.673813362719489 - type: nauc_map_at_5_std value: -6.67549753473631 - type: nauc_mrr_at_1000_diff1 value: 19.877993208719573 - type: nauc_mrr_at_1000_max value: -10.37776706406218 - type: nauc_mrr_at_1000_std value: 7.132169578056367 - type: nauc_mrr_at_100_diff1 value: 19.877993208719573 - type: nauc_mrr_at_100_max value: -10.37776706406218 - type: nauc_mrr_at_100_std value: 7.132169578056367 - type: nauc_mrr_at_10_diff1 value: 20.414285568401457 - type: nauc_mrr_at_10_max value: -9.677800295687861 - type: nauc_mrr_at_10_std value: 8.001103690180859 - type: nauc_mrr_at_1_diff1 value: 22.393284073955723 - type: nauc_mrr_at_1_max value: -5.889370191243167 - type: nauc_mrr_at_1_std value: -1.5183536173658247 - type: nauc_mrr_at_20_diff1 value: 20.455564720604055 - type: nauc_mrr_at_20_max value: -10.230642830103074 - type: 
nauc_mrr_at_20_std value: 7.863582453266621 - type: nauc_mrr_at_3_diff1 value: 17.554895390732618 - type: nauc_mrr_at_3_max value: -15.618463505555052 - type: nauc_mrr_at_3_std value: 5.913231577966864 - type: nauc_mrr_at_5_diff1 value: 18.393678507779914 - type: nauc_mrr_at_5_max value: -11.903593353147762 - type: nauc_mrr_at_5_std value: 7.580745996262831 - type: nauc_ndcg_at_1000_diff1 value: 13.746937095530473 - type: nauc_ndcg_at_1000_max value: -0.9319249687895838 - type: nauc_ndcg_at_1000_std value: 38.56328031451904 - type: nauc_ndcg_at_100_diff1 value: 13.854865944415895 - type: nauc_ndcg_at_100_max value: -7.142142012591404 - type: nauc_ndcg_at_100_std value: 35.61341954818848 - type: nauc_ndcg_at_10_diff1 value: 9.010144273248759 - type: nauc_ndcg_at_10_max value: -15.320014897424574 - type: nauc_ndcg_at_10_std value: 2.84883880489144 - type: nauc_ndcg_at_1_diff1 value: 20.939533945592967 - type: nauc_ndcg_at_1_max value: -6.387319972188946 - type: nauc_ndcg_at_1_std value: -0.5258673122126726 - type: nauc_ndcg_at_20_diff1 value: 14.660827309009496 - type: nauc_ndcg_at_20_max value: -13.476196120145994 - type: nauc_ndcg_at_20_std value: 8.22391881710838 - type: nauc_ndcg_at_3_diff1 value: 13.429985227235935 - type: nauc_ndcg_at_3_max value: -14.904544592570247 - type: nauc_ndcg_at_3_std value: 1.599779998183342 - type: nauc_ndcg_at_5_diff1 value: 8.085466231900622 - type: nauc_ndcg_at_5_max value: -9.09591969526831 - type: nauc_ndcg_at_5_std value: 3.5794092637248505 - type: nauc_precision_at_1000_diff1 value: -9.31941215946743 - type: nauc_precision_at_1000_max value: 31.52913520470716 - type: nauc_precision_at_1000_std value: 22.720784312185856 - type: nauc_precision_at_100_diff1 value: 8.958548406995279 - type: nauc_precision_at_100_max value: 15.100597910674104 - type: nauc_precision_at_100_std value: 71.04548238175113 - type: nauc_precision_at_10_diff1 value: 12.4698194690008 - type: nauc_precision_at_10_max value: -15.84870544871496 - type: 
nauc_precision_at_10_std value: 7.575297622501928 - type: nauc_precision_at_1_diff1 value: 22.393284073955723 - type: nauc_precision_at_1_max value: -5.889370191243167 - type: nauc_precision_at_1_std value: -1.5183536173658247 - type: nauc_precision_at_20_diff1 value: 15.393505718138758 - type: nauc_precision_at_20_max value: -3.70684298539384 - type: nauc_precision_at_20_std value: 29.426137824970304 - type: nauc_precision_at_3_diff1 value: 9.997768085465394 - type: nauc_precision_at_3_max value: -17.12224314347674 - type: nauc_precision_at_3_std value: -1.343018166772313 - type: nauc_precision_at_5_diff1 value: 3.8936997437913554 - type: nauc_precision_at_5_max value: -5.689104289687632 - type: nauc_precision_at_5_std value: 3.181098051304285 - type: nauc_recall_at_1000_diff1 value: 9.908303508158387 - type: nauc_recall_at_1000_max value: 6.174506592699848 - type: nauc_recall_at_1000_std value: 77.41931114780012 - type: nauc_recall_at_100_diff1 value: 10.286839241876192 - type: nauc_recall_at_100_max value: -6.6138697026666815 - type: nauc_recall_at_100_std value: 49.608313692633224 - type: nauc_recall_at_10_diff1 value: 2.215545846659851 - type: nauc_recall_at_10_max value: -17.83025802478445 - type: nauc_recall_at_10_std value: -3.3784768673705465 - type: nauc_recall_at_1_diff1 value: 20.680780404542546 - type: nauc_recall_at_1_max value: -7.04722927447817 - type: nauc_recall_at_1_std value: -7.062494733973898 - type: nauc_recall_at_20_diff1 value: 6.974410239251615 - type: nauc_recall_at_20_max value: -14.161147924731646 - type: nauc_recall_at_20_std value: 9.328412057721454 - type: nauc_recall_at_3_diff1 value: 7.904589805754212 - type: nauc_recall_at_3_max value: -12.1912388648593 - type: nauc_recall_at_3_std value: -9.221542013385555 - type: nauc_recall_at_5_diff1 value: -3.2604132752706914 - type: nauc_recall_at_5_max value: -6.886351441658915 - type: nauc_recall_at_5_std value: -7.014252851712789 - type: ndcg_at_1 value: 39.796 - type: ndcg_at_10 value: 
31.455 - type: ndcg_at_100 value: 42.388999999999996 - type: ndcg_at_1000 value: 53.556000000000004 - type: ndcg_at_20 value: 30.808000000000003 - type: ndcg_at_3 value: 35.831 - type: ndcg_at_5 value: 32.845 - type: precision_at_1 value: 40.816 - type: precision_at_10 value: 27.143 - type: precision_at_100 value: 8.449 - type: precision_at_1000 value: 1.6179999999999999 - type: precision_at_20 value: 19.387999999999998 - type: precision_at_3 value: 35.374 - type: precision_at_5 value: 31.019999999999996 - type: recall_at_1 value: 2.9739999999999998 - type: recall_at_10 value: 19.39 - type: recall_at_100 value: 51.636 - type: recall_at_1000 value: 86.99900000000001 - type: recall_at_20 value: 26.478 - type: recall_at_3 value: 7.703 - type: recall_at_5 value: 11.42 - task: type: Classification dataset: name: MTEB ToxicConversationsClassification type: mteb/toxic_conversations_50k config: default split: test revision: edfaf9da55d3dd50d43143d90c1ac476895ae6de metrics: - type: accuracy value: 86.9384765625 - type: ap value: 31.737513704141552 - type: ap_weighted value: 31.737513704141552 - type: f1 value: 71.5490757306975 - type: f1_weighted value: 89.14632533489856 - type: main_score value: 86.9384765625 - task: type: Classification dataset: name: MTEB TweetSentimentExtractionClassification type: mteb/tweet_sentiment_extraction config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 73.57668364459535 - type: f1 value: 73.90467103648074 - type: f1_weighted value: 73.42158415034704 - type: main_score value: 73.57668364459535 - task: type: Clustering dataset: name: MTEB TwentyNewsgroupsClustering type: mteb/twentynewsgroups-clustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: main_score value: 58.574148097494685 - type: v_measure value: 58.574148097494685 - type: v_measure_std value: 0.9443161637490822 - task: type: PairClassification dataset: name: MTEB 
TwitterSemEval2015 type: mteb/twittersemeval2015-pairclassification config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cosine_accuracy value: 88.1385229778864 - type: cosine_accuracy_threshold value: 83.86307954788208 - type: cosine_ap value: 80.17965893449055 - type: cosine_f1 value: 73.0614300100705 - type: cosine_f1_threshold value: 80.7942807674408 - type: cosine_precision value: 69.8603755416466 - type: cosine_recall value: 76.56992084432717 - type: dot_accuracy value: 88.2100494724921 - type: dot_accuracy_threshold value: 83.84793996810913 - type: dot_ap value: 80.18603932881858 - type: dot_f1 value: 73.07643714466204 - type: dot_f1_threshold value: 80.87586164474487 - type: dot_precision value: 70.10909090909091 - type: dot_recall value: 76.3060686015831 - type: euclidean_accuracy value: 88.1385229778864 - type: euclidean_accuracy_threshold value: 56.77661895751953 - type: euclidean_ap value: 80.1784070881624 - type: euclidean_f1 value: 73.04830369529574 - type: euclidean_f1_threshold value: 61.91838979721069 - type: euclidean_precision value: 69.96859144720948 - type: euclidean_recall value: 76.41160949868075 - type: main_score value: 80.18603932881858 - type: manhattan_accuracy value: 88.0431543184121 - type: manhattan_accuracy_threshold value: 3755.6137084960938 - type: manhattan_ap value: 79.98270453664578 - type: manhattan_f1 value: 72.68242015061023 - type: manhattan_f1_threshold value: 3892.494583129883 - type: manhattan_precision value: 71.54907975460122 - type: manhattan_recall value: 73.85224274406332 - type: max_ap value: 80.18603932881858 - type: max_f1 value: 73.07643714466204 - type: max_precision value: 71.54907975460122 - type: max_recall value: 76.56992084432717 - type: similarity_accuracy value: 88.1385229778864 - type: similarity_accuracy_threshold value: 83.86307954788208 - type: similarity_ap value: 80.17965893449055 - type: similarity_f1 value: 73.0614300100705 - type: similarity_f1_threshold 
value: 80.7942807674408 - type: similarity_precision value: 69.8603755416466 - type: similarity_recall value: 76.56992084432717 - task: type: PairClassification dataset: name: MTEB TwitterURLCorpus type: mteb/twitterurlcorpus-pairclassification config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cosine_accuracy value: 89.7892653393876 - type: cosine_accuracy_threshold value: 79.69566583633423 - type: cosine_ap value: 87.4579867302024 - type: cosine_f1 value: 79.91620843152658 - type: cosine_f1_threshold value: 78.53609323501587 - type: cosine_precision value: 77.7155329210622 - type: cosine_recall value: 82.24514936864799 - type: dot_accuracy value: 89.78732487289945 - type: dot_accuracy_threshold value: 80.05315661430359 - type: dot_ap value: 87.44916182456272 - type: dot_f1 value: 79.90419878751591 - type: dot_f1_threshold value: 78.57890725135803 - type: dot_precision value: 77.73409057812728 - type: dot_recall value: 82.19895287958116 - type: euclidean_accuracy value: 89.78538440641131 - type: euclidean_accuracy_threshold value: 62.29925751686096 - type: euclidean_ap value: 87.45904868911386 - type: euclidean_f1 value: 79.93127404474657 - type: euclidean_f1_threshold value: 65.61101078987122 - type: euclidean_precision value: 77.62060210373595 - type: euclidean_recall value: 82.38373883584848 - type: main_score value: 87.46554314325058 - type: manhattan_accuracy value: 89.76597974152986 - type: manhattan_accuracy_threshold value: 3988.5299682617188 - type: manhattan_ap value: 87.46554314325058 - type: manhattan_f1 value: 79.97181740645973 - type: manhattan_f1_threshold value: 4235.905838012695 - type: manhattan_precision value: 77.13713427283783 - type: manhattan_recall value: 83.02279026793964 - type: max_ap value: 87.46554314325058 - type: max_f1 value: 79.97181740645973 - type: max_precision value: 77.73409057812728 - type: max_recall value: 83.02279026793964 - type: similarity_accuracy value: 89.7892653393876 - 
type: similarity_accuracy_threshold value: 79.69566583633423 - type: similarity_ap value: 87.4579867302024 - type: similarity_f1 value: 79.91620843152658 - type: similarity_f1_threshold value: 78.53609323501587 - type: similarity_precision value: 77.7155329210622 - type: similarity_recall value: 82.24514936864799 --- # Introduction This model is a clone of 'dunzhang/stella_en_400M_v5'. This version changes the code so that inference runs only on the CPU.
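The CPU-only change described above generally amounts to pinning the model weights and the inputs to the CPU device. A minimal PyTorch sketch, using a stand-in module rather than the actual model code from the repository:

```python
import torch
from torch import nn

# Stand-in module for illustration; the real checkpoint is a sentence-transformer.
model = nn.Linear(4, 2)
model.to("cpu")  # keep weights on the CPU even if CUDA is available
model.eval()

# Inputs must live on the same device as the weights.
x = torch.randn(1, 4, device="cpu")
with torch.no_grad():
    out = model(x)
print(out.device.type)  # cpu
```

In a real deployment the same effect is often achieved by passing a device argument (e.g. `device="cpu"`) when loading the model, so no CUDA context is ever created.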
[ "BIOSSES", "SCIFACT" ]
sinequa/passage-ranker.chocolate
sinequa
text-classification
[ "transformers", "pytorch", "bert", "text-classification", "en", "arxiv:2002.10957", "arxiv:1901.04085", "arxiv:1611.09268", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-07-10T12:42:08Z
2024-02-19T09:41:39+00:00
362
0
--- language: - en --- # Model Card for `passage-ranker.chocolate` This model is a passage ranker developed by Sinequa. It produces a relevance score given a query-passage pair and is used to order search results. Model name: `passage-ranker.chocolate` ## Supported Languages The model was trained and tested in the following languages: - English ## Scores | Metric | Value | |:--------------------|------:| | Relevance (NDCG@10) | 0.484 | Note that the relevance score is computed as an average over 14 retrieval datasets (see [details below](#evaluation-metrics)). ## Inference Times | GPU | Quantization type | Batch size 1 | Batch size 32 | |:------------------------------------------|:------------------|---------------:|---------------:| | NVIDIA A10 | FP16 | 1 ms | 5 ms | | NVIDIA A10 | FP32 | 2 ms | 22 ms | | NVIDIA T4 | FP16 | 1 ms | 13 ms | | NVIDIA T4 | FP32 | 3 ms | 66 ms | | NVIDIA L4 | FP16 | 2 ms | 6 ms | | NVIDIA L4 | FP32 | 3 ms | 30 ms | ## GPU Memory Usage | Quantization type | Memory | |:-------------------------------------------------|-----------:| | FP16 | 300 MiB | | FP32 | 550 MiB | Note that GPU memory usage only includes the GPU memory the model itself consumes on an NVIDIA T4 GPU with a batch size of 32. It does not include the fixed amount of memory consumed by the ONNX Runtime upon initialization, which can be around 0.5 to 1 GiB depending on the GPU used.
## Requirements - Minimal Sinequa version: 11.10.0 - Minimal Sinequa version for using FP16 models and GPUs with CUDA compute capability of 8.9+ (like NVIDIA L4): 11.11.0 - [Cuda compute capability](https://developer.nvidia.com/cuda-gpus): above 5.0 (above 6.0 for FP16 use) ## Model Details ### Overview - Number of parameters: 23 million - Base language model: [MiniLM-L6-H384-uncased](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) ([Paper](https://arxiv.org/abs/2002.10957), [GitHub](https://github.com/microsoft/unilm/tree/master/minilm)) - Insensitive to casing and accents - Training procedure: [MonoBERT](https://arxiv.org/abs/1901.04085) ### Training Data - MS MARCO Passage Ranking ([Paper](https://arxiv.org/abs/1611.09268), [Official Page](https://microsoft.github.io/msmarco/), [dataset on HF hub](https://huggingface.co/datasets/unicamp-dl/mmarco)) ### Evaluation Metrics To determine the relevance score, we averaged the results that we obtained when evaluating on the datasets of the [BEIR benchmark](https://github.com/beir-cellar/beir). Note that all these datasets are in English. | Dataset | NDCG@10 | |:------------------|--------:| | Average | 0.486 | | | | | Arguana | 0.554 | | CLIMATE-FEVER | 0.209 | | DBPedia Entity | 0.367 | | FEVER | 0.744 | | FiQA-2018 | 0.339 | | HotpotQA | 0.685 | | MS MARCO | 0.412 | | NFCorpus | 0.352 | | NQ | 0.454 | | Quora | 0.818 | | SCIDOCS | 0.158 | | SciFact | 0.658 | | TREC-COVID | 0.674 | | Webis-Touche-2020 | 0.345 |
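The NDCG@10 figures reported above can be reproduced from graded relevance judgments with a short function. This is a generic sketch of the metric, not Sinequa's evaluation code:

```python
import math

def dcg_at_k(relevances, k=10):
    # DCG: graded relevance discounted by log2 of the rank position (1-based).
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(relevances, k=10):
    # Normalize by the DCG of an ideally ordered result list.
    ideal = sorted(relevances, reverse=True)
    denom = dcg_at_k(ideal, k)
    return dcg_at_k(relevances, k) / denom if denom > 0 else 0.0

# Graded relevance judgments of one ranked result list for a single query.
print(round(ndcg_at_k([3, 2, 3, 0, 1, 2]), 3))  # → 0.961
```

The benchmark score is then the mean of this quantity over all queries of a dataset, averaged here across the datasets listed in the table.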
[ "SCIFACT" ]
alpha-ai/OopsHusBot-3B-GGUF
alpha-ai
null
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "alphaaico", "relationship-ai", "husband-helper", "communication", "humor", "en", "base_model:meta-llama/Llama-3.2-3B-Instruct", "base_model:quantized:meta-llama/Llama-3.2-3B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
2025-03-03T04:48:58Z
2025-03-03T05:51:06+00:00
362
0
--- base_model: meta-llama/Llama-3.2-3B-Instruct language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - gguf - alphaaico - relationship-ai - husband-helper - communication - humor --- <div align="center"> <img src="https://cdn-uploads.huggingface.co/production/uploads/669777597cb32718c20d97e9/4emWK_PB-RrifIbrCUjE8.png" alt="Title card" style="width: 500px; height: auto; object-position: center top;"> </div> # Uploaded Model - **Developed by:** Alpha AI - **License:** apache-2.0 - **Finetuned from model:** meta-llama/Llama-3.2-3B-Instruct This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. ## OopsHusBot-3B: The AI Model for Husbands Who Try (and Sometimes Fail) at Communication ### Overview Husbands mean well. Really. But communication can sometimes feel like an unsolvable puzzle. OopsHusBot-3B is here to help! Designed to assist husbands in navigating tricky conversations, avoiding misunderstandings, and delivering just the right amount of romance (without overdoing it), this model is your ultimate survival guide for relationship communication. Built on meta-llama/Llama-3.2-3B-Instruct, this model is fine-tuned to prevent classic communication blunders—because sometimes, a simple “OK” isn’t the right answer. ### Model Details - **Base Model:** meta-llama/Llama-3.2-3B-Instruct - **Fine-tuned By:** Alpha AI - **Training Framework:** Unsloth #### Quantization Levels Available - q4_k_m - q5_k_m - q8_0 - 16-bit (Full precision) - [Link](https://huggingface.co/alphaaico/OopsHusBot-3B) *(Note: the full-precision 16-bit weights are hosted in the linked non-GGUF repository)* **Format:** GGUF (Optimized for local deployments, https://huggingface.co/alphaaico/OopsHusBot-3B-GGUF) ### Key Features - **Auto-Smooth Talk** – Helps generate heartfelt, thoughtful responses without sounding robotic.
- **Oops Recovery Mode** – Immediate damage control when you say something unintentionally dumb. - **Danger Phrase Decoder** – Correctly interprets high-risk phrases like “Do whatever you want” (Hint: She doesn’t mean that). - **Anniversary & Birthday Reminder** – Generates sweet, meaningful texts to keep you in the clear. - **Pre-Apology Generator** – Because sometimes, you don’t know what you did wrong—but you know you need to fix it. - **Selective Hearing Fixer** – Crafts responses to make it seem like you were totally paying attention. ### Training & Data **OopsHusBot-3B** has been trained on a carefully curated dataset of: - Romantic yet slightly clueless husband responses - Apology best practices (ranked by effectiveness) - Deciphering “I’m fine” and other cryptic messages - Emergency sweet talk for when things go south - When to text “I love you” without being asked - Avoiding the classic “Are you mad?” trap ### Important Warnings ❌ Not responsible for husbands who still say “Calm down.” ❌ Does not fix situations where you actually forgot her birthday. ❌ AI-generated compliments may be too good, causing suspicion. ❌ Disables “I told you so” responses for your safety. ### Use Cases - **When she says “I have nothing to wear”** – Generates supportive yet non-argumentative responses. - **Emergency Romance Mode** – For those “You never say nice things to me” situations. - **Silent Treatment Prevention** – Helps craft messages to de-escalate tension before it spirals. - **Reading Between the Lines** – Ensures you don’t misinterpret “Do whatever you want.” - **Gift Idea Generator** – Ensures you never make the mistake of buying a vacuum as a romantic gift again. ### Model Performance **OopsHusBot-3B** has been further optimized to deliver: - **Empathic and Context-Aware Responses** – Improved understanding of user inputs with a focus on empathetic replies. 
- **High Efficiency on Consumer Hardware** – Maintains quick inference speeds even with more advanced conversation modeling. - **Balanced Coherence and Creativity** – Strikes an ideal balance for real-world dialogue applications, allowing for both coherent answers and creative flair. ### Limitations & Biases Like any AI system, this model may exhibit biases stemming from its training data. Users should employ it responsibly and consider additional fine-tuning if needed for sensitive or specialized applications. ### License Released under the **Apache-2.0** license. For full details, please consult the license file in the Hugging Face repository. ### Acknowledgments Special thanks to the Unsloth team for their optimized training pipeline for LLaMA models. Additional appreciation goes to Hugging Face’s TRL library for enabling accelerated and efficient fine-tuning workflows. ### NOTE - If you’re a husband who means well but sometimes just doesn’t get it—OopsHusBot-3B has your back. 🚀🔥
[ "CRAFT" ]
ntc-ai/SDXL-LoRA-slider.at-a-meetup
ntc-ai
text-to-image
[ "diffusers", "text-to-image", "stable-diffusion-xl", "lora", "template:sd-lora", "template:sdxl-lora", "sdxl-sliders", "ntcai.xyz-sliders", "concept", "en", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:mit", "region:us" ]
2024-01-22T01:24:11Z
2024-01-22T01:24:14+00:00
361
0
--- base_model: stabilityai/stable-diffusion-xl-base-1.0 language: - en license: mit tags: - text-to-image - stable-diffusion-xl - lora - template:sd-lora - template:sdxl-lora - sdxl-sliders - ntcai.xyz-sliders - concept - diffusers thumbnail: images/evaluate/at a meetup.../at a meetup_17_3.0.png widget: - text: at a meetup output: url: images/at a meetup_17_3.0.png - text: at a meetup output: url: images/at a meetup_19_3.0.png - text: at a meetup output: url: images/at a meetup_20_3.0.png - text: at a meetup output: url: images/at a meetup_21_3.0.png - text: at a meetup output: url: images/at a meetup_22_3.0.png inference: false instance_prompt: at a meetup --- # ntcai.xyz slider - at a meetup (SDXL LoRA) | Strength: -3 | Strength: 0 | Strength: 3 | | --- | --- | --- | | <img src="images/at a meetup_17_-3.0.png" width=256 height=256 /> | <img src="images/at a meetup_17_0.0.png" width=256 height=256 /> | <img src="images/at a meetup_17_3.0.png" width=256 height=256 /> | | <img src="images/at a meetup_19_-3.0.png" width=256 height=256 /> | <img src="images/at a meetup_19_0.0.png" width=256 height=256 /> | <img src="images/at a meetup_19_3.0.png" width=256 height=256 /> | | <img src="images/at a meetup_20_-3.0.png" width=256 height=256 /> | <img src="images/at a meetup_20_0.0.png" width=256 height=256 /> | <img src="images/at a meetup_20_3.0.png" width=256 height=256 /> | ## Download Weights for this model are available in Safetensors format. 
## Trigger words You can apply this LoRA with trigger words for additional effect: ``` at a meetup ``` ## Use in diffusers ```python from diffusers import StableDiffusionXLPipeline from diffusers import EulerAncestralDiscreteScheduler import torch pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors") pipe.to("cuda") pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config) # Load the LoRA pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.at-a-meetup', weight_name='at a meetup.safetensors', adapter_name="at a meetup") # Activate the LoRA pipe.set_adapters(["at a meetup"], adapter_weights=[2.0]) prompt = "medieval rich kingpin sitting in a tavern, at a meetup" negative_prompt = "nsfw" width = 512 height = 512 num_inference_steps = 10 guidance_scale = 2 image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0] image.save('result.png') ``` ## Support the Patreon If you like this model, please consider [joining our Patreon](https://www.patreon.com/NTCAI). By joining our Patreon, you'll gain access to an ever-growing library of over 1140 unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities. Your support on Patreon will allow us to continue developing and refining new models. ## Other resources - [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs - [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
[ "CRAFT" ]
YBXL/Med-LLaMA3-8B
YBXL
text-generation
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:2402.12749", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2024-06-24T13:56:05Z
2024-07-29T15:57:59+00:00
361
6
--- library_name: transformers tags: [] --- # Model Card for Med-LLaMA3-8B <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description Med-LLaMA3-8B is an 8-billion-parameter medical language model obtained by continual pre-training of the LLaMA3-8B architecture on large-scale open-source medical data. ## Training Details Med-LLaMA3-8B is trained on a large-scale dataset comprising medical books, medical literature, clinical guidelines, and a small portion of general-domain data. This work is a study extension of our previous Me-LLaMA paper: https://arxiv.org/pdf/2402.12749. If you use the model, please cite the following paper: <pre> @misc{xie2024llama, title={Me LLaMA: Foundation Large Language Models for Medical Applications}, author={Qianqian Xie and Qingyu Chen and Aokun Chen and Cheng Peng and Yan Hu and Fongci Lin and Xueqing Peng and Jimin Huang and Jeffrey Zhang and Vipina Keloth and Huan He and Lucila Ohno-Machado and Yonghui Wu and Hua Xu and Jiang Bian}, year={2024}, eprint={2402.12749}, archivePrefix={arXiv}, primaryClass={cs.CL} } </pre>
[ "MEDICAL DATA" ]
kabita-choudhary/finetuned-bart-for-conversation-summary
kabita-choudhary
summarization
[ "transformers", "pytorch", "bart", "text2text-generation", "summarization", "dataset:samsum", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-01-25T11:00:13Z
2023-01-26T12:09:46+00:00
360
53
--- datasets: - samsum pipeline_tag: summarization widget: - text: "Laurie: So, what are your plans for this weekend?\nChristie: I don’t know.\ \ Do you want to get together or something?\nSarah: How about going to see a movie?\ \ Cinemax 26 on Carson Boulevard is showing Enchanted. Laurie: That sounds like\ \ a good idea. Maybe we should go out to eat beforehand.\nSarah: It is fine with\ \ me. Where do you want to meet?\nChristie: Let’s meet at Summer Pizza House.\ \ I have not gone there for a long time.\nLaurie: Good idea again. I heard they\ \ just came up with a new pizza. It should be good because Summer Pizza House\ \ always has the best pizza in town.\nSarah: When should we meet?\nChristie: Well,\ \ the movie is shown at 2:00PM, 4:00PM, 6:00PM and 8:00PM.\nLaurie: Why don’t\ \ we go to the 2:00PM show? We can meet at Summer Pizza House at noon. That will\ \ give us plenty of time to enjoy our pizza.\nSarah: My cousin Karen is in town.\ \ Can I bring her along? I hate to leave her home alone.\nChristie: Karen is in\ \ town? Yes, bring her along. Laurie, you remember Karen? We met her at Sara’s\ \ high school graduation party two years ago.\nLaurie: I do not quite remember\ \ her. What does she look like?\nSarah: She has blond hair, she is kind of slender,\ \ and she is about your height.\nLaurie: She wears eyeglasses, right?\nSarah:\ \ Yes, and she was playing the piano off and on during the party.\nLaurie: I remember\ \ her now. Yes, do bring her along Sara. She is such a nice person, and funny\ \ too.\nSarah: She will be happy to meet both of you again.\nChristie: What is\ \ she doing these days?\nSarah: She graduated last June, and she will start her\ \ teaching career next week when the new school term begins.\nLaurie: What grade\ \ is she going to teach?\nSarah: She will teach kindergarten. She loves working\ \ with kids, and she always has such a good rapport with them\nChristie: Kindergarten?\ \ She must be a very patient person. 
I always think kindergarten is the most difficult\ \ class to teach. Most of the kids have never been to school, and they have\ \ never been away from mommy for long.\nSarah: I think Karen will do fine. She\ \ knows how to handle young children\nLaurie: I think the first few weeks will\ \ be tough. However, once the routine is set, it should not be too difficult to\ \ teach kindergarten.\nChristie: You are right. The kids might even look forward\ \ to going to school since they have so many friends to play with.\nSarah: There\ \ are so many new things for them to do at school too. They do a lot of crafts\ \ in kindergarten. I am always amazed by the things kindergarten teachers do.\ \ \nLaurie: Yes, I have seen my niece come home with so many neat stuff.\nChristie:\ \ Maybe we can ask Karen to show us some of the things that we can do for this\ \ Halloween.\nLaurie: Maybe we can stop by the craft store after the movie. What\ \ do you think, Sara?\nSarah: I will talk to her. I think she will like that.\ \ It will help her with school projects when Halloween comes.\nChristie: Michael’s\ \ is a good store for crafts. It always carries a variety of things, and you can\ \ find almost anything there.\nLaurie: There is a Michaels store not far away\ \ from Cinemax 26. I believe it is just around the corner, on Pioneer Avenue.\ \ We can even walk over there.\nSarah: So, we plan to meet for pizza at noon,\ \ go to the movies at two, and shop at Michael’s afterward. Right?\nLaurie and\ \ Christie: Yes. \n" model-index: - name: bart-large-cnn-samsum results: - task: type: summarization name: Conversation Summarization dataset: name: 'SAMSum Corpus: A Human-annotated Dialogue Dataset for Abstractive Summarization' type: samsum metrics: - type: rogue-1 value: 54.8764 name: Validation ROGUE-1 - type: rogue-2 value: 29.6869 name: Validation ROGUE-2 - type: rogue-l value: 44.9874 name: Validation ROGUE-L - type: loss value: 1.47812 name: loss ---
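The ROUGE metrics reported above measure n-gram overlap between a generated summary and a reference summary. A minimal illustration of ROUGE-1 (unigram) precision, recall, and F1 follows; this is a simplified sketch, not the scorer used to produce the reported values:

```python
from collections import Counter

def rouge1(candidate, reference):
    """Unigram-overlap precision, recall, and F1 between two texts."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    precision = overlap / max(sum(cand.values()), 1)
    recall = overlap / max(sum(ref.values()), 1)
    f1 = 2 * precision * recall / (precision + recall) if overlap else 0.0
    return precision, recall, f1

p, r, f = rouge1("the friends meet for pizza at noon",
                 "friends plan to meet at noon for pizza and a movie")
print(f"P={p:.2f} R={r:.2f} F1={f:.2f}")
```

ROUGE-2 applies the same idea to bigrams, and ROUGE-L uses the longest common subsequence instead of fixed-length n-grams.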
[ "CRAFT" ]
llama-moe/LLaMA-MoE-v1-3_0B-2_16
llama-moe
text-generation
[ "transformers", "pytorch", "llama_moe", "text-generation", "MoE", "custom_code", "en", "arxiv:2310.06694", "arxiv:2406.16554", "license:apache-2.0", "autotrain_compatible", "region:us" ]
2023-12-23T13:46:22Z
2024-06-25T02:40:19+00:00
360
9
--- language: - en license: apache-2.0 pipeline_tag: text-generation tags: - MoE --- # LLaMA-MoE-v1-3.0B (2/16) [[💻 Code]](https://github.com/pjlab-sys4nlp/llama-moe) | [[📜 Technical Report]](https://github.com/pjlab-sys4nlp/llama-moe/blob/main/docs/LLaMA_MoE.pdf) 👋 Very nice to meet you here~ ❤️ This repo contains the model `LLaMA-MoE-v1-3.0B (2/16)`, which activates 2 out of 16 experts (3.0B parameters). This model is NOT fine-tuned on instruction pairs, so it may not be good enough to act like a chatbot. 📢 LLaMA-MoE is a series of Mixture-of-Experts (MoE) models based on [LLaMA-2](https://huggingface.co/meta-llama/Llama-2-7b-hf). You can find the code for training this model at [this repo](https://github.com/pjlab-sys4nlp/llama-moe). 💎 This series of models is obtained by partitioning the original LLaMA FFNs into experts and further continual pre-training. The total model size is only 6.7B parameters, which is very convenient for deployment and research usage. More details can be found in [our technical report](https://arxiv.org/). ## 🚀 QuickStart ```python # python>=3.10 import torch from transformers import AutoTokenizer, AutoModelForCausalLM model_dir = "llama-moe/LLaMA-MoE-v1-3_0B-2_16" tokenizer = AutoTokenizer.from_pretrained(model_dir, trust_remote_code=True) model = AutoModelForCausalLM.from_pretrained(model_dir, torch_dtype=torch.bfloat16, trust_remote_code=True) model.eval() model.to("cuda:0") input_text = "Suzhou is famous of" inputs = tokenizer(input_text, return_tensors="pt") inputs = inputs.to("cuda:0") pred = model.generate(**inputs, max_length=50, temperature=0.0) print(tokenizer.decode(pred.cpu()[0], skip_special_tokens=True)) # Suzhou is famous of its beautiful gardens. The most famous one is the Humble Administrator's Garden. It is a classical Chinese garden with a history of more than 600 years.
The garden is divided into three ``` ## 📊 Performance | Model | \#Activated Experts | \#Experts | \#Activated Params | Links | | :------------------------ | :-----------------: | :-------: | :----------------: | :-----------------------------------------------------------------------: | | **LLaMA-MoE-3.0B** | 2 | 16 | 3.0B | [[🤗 HF Weights]](https://huggingface.co/llama-moe/LLaMA-MoE-v1-3_0B-2_16) | | **LLaMA-MoE-3.5B (4/16)** | 4 | 16 | 3.5B | [[🤗 HF Weights]](https://huggingface.co/llama-moe/LLaMA-MoE-v1-3_5B-4_16) | | **LLaMA-MoE-3.5B (2/8)** | 2 | 8 | 3.5B | [[🤗 HF Weights]](https://huggingface.co/llama-moe/LLaMA-MoE-v1-3_5B-2_8) | | Model | SciQ | PIQA | WinoGrande | ARC-e | ARC-c (25) | HellaSwag (10) | LogiQA | BoolQ (32) | LAMBADA | NQ (32) | MMLU (5) | Average | | :------------------------------------------------------------------------------------ | :------: | :------: | :--------: | :------: | :--------: | :------------: | :------: | :--------: | :------: | :------: | :-------: | :-----: | | [OPT-2.7B](https://huggingface.co/facebook/opt-2.7b) | 78.9 | 74.8 | 60.8 | 54.4 | 34.0 | 61.4 | 25.8 | 63.3 | 63.6 | 10.7 | 25.8 | 50.3 | | [Pythia-2.8B](https://huggingface.co/EleutherAI/pythia-2.8b) | 83.2 | 73.6 | 59.6 | 58.8 | 36.7 | 60.7 | 28.1 | 65.9 | 64.6 | 8.7 | 26.8 | 51.5 | | [INCITE-BASE-3B](https://huggingface.co/togethercomputer/RedPajama-INCITE-Base-3B-v1) | 85.6 | 73.9 | 63.5 | 61.7 | 40.3 | 64.7 | 27.5 | 65.8 | 65.4 | 15.2 | 27.2 | 53.7 | | [Open-LLaMA-3B-v2](https://huggingface.co/openlm-research/open_llama_3b_v2) | 88.0 | 77.9 | 63.1 | 63.3 | 40.1 | 71.4 | 28.1 | 69.2 | 67.4 | 16.0 | 26.8 | 55.6 | | [Sheared-LLaMA-2.7B](https://huggingface.co/princeton-nlp/Sheared-LLaMA-2.7B) | 87.5 | 76.9 | 65.0 | 63.3 | 41.6 | 71.0 | 28.3 | 73.6 | 68.3 | 17.6 | **27.3** | 56.4 | | **LLaMA-MoE-3.0B** | 84.2 | 77.5 | 63.6 | 60.2 | 40.9 | 70.8 | **30.6** | 71.9 | 66.6 | 17.0 | 26.8 | 55.5 | | **LLaMA-MoE-3.5B (4/16)** | 87.6 | **77.9** | 65.5 | **65.6** | **44.2** 
| **73.3** | 29.7 | **75.0** | **69.5** | **20.3** | 26.8 | 57.7 | | **LLaMA-MoE-3.5B (2/8)** | **88.4** | 77.6 | **66.7** | 65.3 | 43.1 | **73.3** | 29.6 | 73.9 | 69.4 | 19.8 | 27.0 | 57.6 | ## 📖 Details Training Data: 200B tokens from [SlimPajama](https://www.cerebras.net/blog/slimpajama-a-627b-token-cleaned-and-deduplicated-version-of-redpajama) with the same data sampling weights as [Sheared LLaMA](https://arxiv.org/abs/2310.06694). ## 📃 Citation ```bibtex @article{llama-moe, title={LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training}, author={Tong Zhu and Xiaoye Qu and Daize Dong and Jiacheng Ruan and Jingqi Tong and Conghui He and Yu Cheng}, journal={arXiv preprint arXiv:2406.16554}, year={2024}, url={https://arxiv.org/abs/2406.16554}, } ```
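The "2/16" in the model name means that, for every token, a learned router activates 2 of the 16 experts carved out of the LLaMA FFNs. A toy sketch of that top-k routing step (illustrative only; the actual implementation lives in the linked repo):

```python
import math
import random

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def route(router_logits, k=2):
    """Pick the top-k experts by router logit and renormalize their gates."""
    topk = sorted(range(len(router_logits)),
                  key=lambda i: router_logits[i], reverse=True)[:k]
    gates = softmax([router_logits[i] for i in topk])
    return list(zip(topk, gates))  # [(expert_index, gate_weight), ...]

# One router decision for a single token over 16 experts.
random.seed(0)
logits = [random.gauss(0, 1) for _ in range(16)]
for expert, gate in route(logits, k=2):
    print(f"expert {expert}: gate {gate:.2f}")
```

Only the selected experts' FFNs run for that token, which is why the activated parameter count (3.0B) is much smaller than the total (6.7B).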
[ "SCIQ" ]
BSC-LT/salamandraTA-7B-instruct-GGUF
BSC-LT
translation
[ "transformers", "gguf", "llama", "text-generation", "translation", "bg", "ca", "cs", "cy", "da", "de", "el", "en", "es", "et", "eu", "fi", "fr", "ga", "gl", "hr", "hu", "it", "lt", "lv", "mt", "nl", "nb", "no", "nn", "oc", "pl", "pt", "ro", "ru", "sl", "sk", "sr", "sv", "uk", "ast", "an", "base_model:BSC-LT/salamandraTA-7b-instruct", "base_model:quantized:BSC-LT/salamandraTA-7b-instruct", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:eu", "conversational" ]
2025-03-03T08:38:47Z
2025-03-13T09:24:11+00:00
358
0
--- base_model: - BSC-LT/salamandraTA-7b-instruct language: - bg - ca - cs - cy - da - de - el - en - es - et - eu - fi - fr - ga - gl - hr - hu - it - lt - lv - mt - nl - nb - 'no' - nn - oc - pl - pt - ro - ru - sl - sk - sr - sv - uk - ast - an library_name: transformers license: apache-2.0 pipeline_tag: translation --- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/633b489acbdbadd99c0b75ef/MhsW4ODhK6ofYq8DnpyKc.png) # SalamandraTA-7B-instruct-GGUF Model Card This model is the GGUF-quantized version of [SalamandraTA-7b-instruct](https://huggingface.co/BSC-LT/salamandraTA-7b-instruct). The model weights are quantized from FP16 to Q8_0 (8-bit quantization), Q4_K_M (4-bit weights with K-means clustering quantization), and Q3_K_M (3-bit weights with K-means clustering quantization) using the [Llama.cpp](https://github.com/ggml-org/llama.cpp) framework. Inferencing with this model can be done using [VLLM](https://docs.vllm.ai/en/stable/models/engine_args.html). SalamandraTA-7b-instruct is a translation LLM that has been instruction-tuned from SalamandraTA-7b-base. The base model results from continually pre-training [Salamandra-7b](https://huggingface.co/BSC-LT/salamandra-7b) on parallel data and has not been published, but is reserved for internal use. SalamandraTA-7b-instruct is proficient in 37 European languages and supports translation-related tasks, namely: sentence-level translation, paragraph-level translation, document-level translation, automatic post-editing, grammar checking, machine translation evaluation, alternative translations, named-entity recognition, and context-aware translation. > [!WARNING] > **DISCLAIMER:** This version of Salamandra is tailored exclusively for translation tasks. It lacks chat capabilities and has not been trained with any chat instructions. --- The entire Salamandra family is released under a permissive [Apache 2.0 license](https://www.apache.org/licenses/LICENSE-2.0).
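As a rough guide to what the quantization levels above mean for disk and memory footprint, bits-per-weight can be translated into an approximate model size. The bits-per-weight figures below are approximate effective values for these GGUF types (an assumption for illustration; real files differ slightly because some tensors stay at higher precision):

```python
PARAMS = 7_000_000_000  # ~7B parameters

# Approximate effective bits per weight for each quantization type (assumed).
bits_per_weight = {"fp16": 16.0, "q8_0": 8.5, "q4_k_m": 4.85, "q3_k_m": 3.9}

def approx_size_gib(n_params, bits):
    """Back-of-the-envelope model size in GiB for a given bit width."""
    return n_params * bits / 8 / 2**30

for name, bits in bits_per_weight.items():
    print(f"{name:>7}: ~{approx_size_gib(PARAMS, bits):.1f} GiB")
```

This is why Q4_K_M is a common default for local deployments: roughly a third of the FP16 footprint at a modest quality cost.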
## How to Use The following example code works under ``Python 3.10.4``, ``vllm==0.7.3``, ``torch==2.5.1`` and ``torchvision==0.20.1``, though it should run on any current version of the libraries. This is an example of translation using the model: ``` from huggingface_hub import snapshot_download from vllm import LLM, SamplingParams model_dir = snapshot_download(repo_id="BSC-LT/salamandraTA-7B-instruct-GGUF", revision="main") model_name = "salamandrata_7b_inst_q4.gguf" llm = LLM(model=model_dir + '/' + model_name, tokenizer=model_dir) source = "Spanish" target = "English" sentence = "Ayer se fue, tomó sus cosas y se puso a navegar. Una camisa, un pantalón vaquero y una canción, dónde irá, dónde irá. Se despidió, y decidió batirse en duelo con el mar. Y recorrer el mundo en su velero. Y navegar, nai-na-na, navegar." prompt = f"Translate the following text from {source} into {target}.\n{source}: {sentence} \n{target}:" messages = [{'role': 'user', 'content': prompt}] outputs = llm.chat(messages, sampling_params=SamplingParams( temperature=0.1, stop_token_ids=[5], max_tokens=200) )[0].outputs print(outputs[0].text) ``` ## Additional information ### Author The Language Technologies Unit from Barcelona Supercomputing Center. ### Contact For further information, please send an email to <[email protected]>. ### Copyright Copyright(c) 2025 by Language Technologies Unit, Barcelona Supercomputing Center. ### Funding This work has been promoted and financed by the Government of Catalonia through the [Aina Project](https://projecteaina.cat/). This work is funded by the _Ministerio para la Transformación Digital y de la Función Pública_ - Funded by EU – NextGenerationEU within the framework of [ILENIA Project](https://proyectoilenia.es/) with reference 2022/TL22/00215337.
### Acknowledgements The success of this project has been made possible thanks to the invaluable contributions of our partners in the [ILENIA Project](https://proyectoilenia.es/): [HiTZ](http://hitz.ehu.eus/es), and [CiTIUS](https://citius.gal/es/). Their efforts have been instrumental in advancing our work, and we sincerely appreciate their help and support. ### Disclaimer Be aware that the model may contain biases or other unintended distortions. When third parties deploy systems or provide services based on this model, or use the model themselves, they bear the responsibility for mitigating any associated risks and ensuring compliance with applicable regulations, including those governing the use of Artificial Intelligence. The Barcelona Supercomputing Center, as the owner and creator of the model, shall not be held liable for any outcomes resulting from third-party use. ### License [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
[ "BEAR" ]
beethogedeon/gte-Qwen2-7B-instruct-Q4_K_M-GGUF
beethogedeon
sentence-similarity
[ "sentence-transformers", "gguf", "qwen2", "text-generation", "mteb", "transformers", "Qwen2", "sentence-similarity", "llama-cpp", "gguf-my-repo", "custom_code", "base_model:Alibaba-NLP/gte-Qwen2-7B-instruct", "base_model:quantized:Alibaba-NLP/gte-Qwen2-7B-instruct", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us", "conversational" ]
2024-12-01T17:57:38Z
2024-12-01T18:10:15+00:00
354
2
--- base_model: Alibaba-NLP/gte-Qwen2-7B-instruct license: apache-2.0 tags: - mteb - sentence-transformers - transformers - Qwen2 - sentence-similarity - llama-cpp - gguf-my-repo model-index: - name: gte-qwen2-7B-instruct results: - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en) type: mteb/amazon_counterfactual config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 91.31343283582089 - type: ap value: 67.64251402604096 - type: f1 value: 87.53372530755692 - task: type: Classification dataset: name: MTEB AmazonPolarityClassification type: mteb/amazon_polarity config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 97.497825 - type: ap value: 96.30329547047529 - type: f1 value: 97.49769793778039 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (en) type: mteb/amazon_reviews_multi config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 62.564 - type: f1 value: 60.975777935041066 - task: type: Retrieval dataset: name: MTEB ArguAna type: mteb/arguana config: default split: test revision: c22ab2a51041ffd869aaddef7af8d8215647e41a metrics: - type: map_at_1 value: 36.486000000000004 - type: map_at_10 value: 54.842 - type: map_at_100 value: 55.206999999999994 - type: map_at_1000 value: 55.206999999999994 - type: map_at_3 value: 49.893 - type: map_at_5 value: 53.105000000000004 - type: mrr_at_1 value: 37.34 - type: mrr_at_10 value: 55.143 - type: mrr_at_100 value: 55.509 - type: mrr_at_1000 value: 55.509 - type: mrr_at_3 value: 50.212999999999994 - type: mrr_at_5 value: 53.432 - type: ndcg_at_1 value: 36.486000000000004 - type: ndcg_at_10 value: 64.273 - type: ndcg_at_100 value: 65.66199999999999 - type: ndcg_at_1000 value: 65.66199999999999 - type: ndcg_at_3 value: 54.352999999999994 - type: ndcg_at_5 value: 60.131 - type: precision_at_1 value: 
36.486000000000004 - type: precision_at_10 value: 9.395000000000001 - type: precision_at_100 value: 0.996 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 22.428 - type: precision_at_5 value: 16.259 - type: recall_at_1 value: 36.486000000000004 - type: recall_at_10 value: 93.95400000000001 - type: recall_at_100 value: 99.644 - type: recall_at_1000 value: 99.644 - type: recall_at_3 value: 67.283 - type: recall_at_5 value: 81.294 - task: type: Clustering dataset: name: MTEB ArxivClusteringP2P type: mteb/arxiv-clustering-p2p config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 56.461169803700564 - task: type: Clustering dataset: name: MTEB ArxivClusteringS2S type: mteb/arxiv-clustering-s2s config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 51.73600434466286 - task: type: Reranking dataset: name: MTEB AskUbuntuDupQuestions type: mteb/askubuntudupquestions-reranking config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 67.57827065898053 - type: mrr value: 79.08136569493911 - task: type: STS dataset: name: MTEB BIOSSES type: mteb/biosses-sts config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 83.53324575999243 - type: cos_sim_spearman value: 81.37173362822374 - type: euclidean_pearson value: 82.19243335103444 - type: euclidean_spearman value: 81.33679307304334 - type: manhattan_pearson value: 82.38752665975699 - type: manhattan_spearman value: 81.31510583189689 - task: type: Classification dataset: name: MTEB Banking77Classification type: mteb/banking77 config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 87.56818181818181 - type: f1 value: 87.25826722019875 - task: type: Clustering dataset: name: MTEB BiorxivClusteringP2P type: mteb/biorxiv-clustering-p2p 
config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 50.09239610327673 - task: type: Clustering dataset: name: MTEB BiorxivClusteringS2S type: mteb/biorxiv-clustering-s2s config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 46.64733054606282 - task: type: Retrieval dataset: name: MTEB CQADupstackAndroidRetrieval type: BeIR/cqadupstack config: default split: test revision: f46a197baaae43b4f621051089b82a364682dfeb metrics: - type: map_at_1 value: 33.997 - type: map_at_10 value: 48.176 - type: map_at_100 value: 49.82 - type: map_at_1000 value: 49.924 - type: map_at_3 value: 43.626 - type: map_at_5 value: 46.275 - type: mrr_at_1 value: 42.059999999999995 - type: mrr_at_10 value: 53.726 - type: mrr_at_100 value: 54.398 - type: mrr_at_1000 value: 54.416 - type: mrr_at_3 value: 50.714999999999996 - type: mrr_at_5 value: 52.639 - type: ndcg_at_1 value: 42.059999999999995 - type: ndcg_at_10 value: 55.574999999999996 - type: ndcg_at_100 value: 60.744 - type: ndcg_at_1000 value: 61.85699999999999 - type: ndcg_at_3 value: 49.363 - type: ndcg_at_5 value: 52.44 - type: precision_at_1 value: 42.059999999999995 - type: precision_at_10 value: 11.101999999999999 - type: precision_at_100 value: 1.73 - type: precision_at_1000 value: 0.218 - type: precision_at_3 value: 24.464 - type: precision_at_5 value: 18.026 - type: recall_at_1 value: 33.997 - type: recall_at_10 value: 70.35900000000001 - type: recall_at_100 value: 91.642 - type: recall_at_1000 value: 97.977 - type: recall_at_3 value: 52.76 - type: recall_at_5 value: 61.148 - task: type: Retrieval dataset: name: MTEB CQADupstackEnglishRetrieval type: BeIR/cqadupstack config: default split: test revision: ad9991cb51e31e31e430383c75ffb2885547b5f0 metrics: - type: map_at_1 value: 35.884 - type: map_at_10 value: 48.14 - type: map_at_100 value: 49.5 - type: map_at_1000 value: 49.63 - type: map_at_3 value: 44.646 - 
type: map_at_5 value: 46.617999999999995 - type: mrr_at_1 value: 44.458999999999996 - type: mrr_at_10 value: 53.751000000000005 - type: mrr_at_100 value: 54.37800000000001 - type: mrr_at_1000 value: 54.415 - type: mrr_at_3 value: 51.815 - type: mrr_at_5 value: 52.882 - type: ndcg_at_1 value: 44.458999999999996 - type: ndcg_at_10 value: 54.157 - type: ndcg_at_100 value: 58.362 - type: ndcg_at_1000 value: 60.178 - type: ndcg_at_3 value: 49.661 - type: ndcg_at_5 value: 51.74999999999999 - type: precision_at_1 value: 44.458999999999996 - type: precision_at_10 value: 10.248 - type: precision_at_100 value: 1.5890000000000002 - type: precision_at_1000 value: 0.207 - type: precision_at_3 value: 23.928 - type: precision_at_5 value: 16.878999999999998 - type: recall_at_1 value: 35.884 - type: recall_at_10 value: 64.798 - type: recall_at_100 value: 82.345 - type: recall_at_1000 value: 93.267 - type: recall_at_3 value: 51.847 - type: recall_at_5 value: 57.601 - task: type: Retrieval dataset: name: MTEB CQADupstackGamingRetrieval type: BeIR/cqadupstack config: default split: test revision: 4885aa143210c98657558c04aaf3dc47cfb54340 metrics: - type: map_at_1 value: 39.383 - type: map_at_10 value: 53.714 - type: map_at_100 value: 54.838 - type: map_at_1000 value: 54.87800000000001 - type: map_at_3 value: 50.114999999999995 - type: map_at_5 value: 52.153000000000006 - type: mrr_at_1 value: 45.016 - type: mrr_at_10 value: 56.732000000000006 - type: mrr_at_100 value: 57.411 - type: mrr_at_1000 value: 57.431 - type: mrr_at_3 value: 54.044000000000004 - type: mrr_at_5 value: 55.639 - type: ndcg_at_1 value: 45.016 - type: ndcg_at_10 value: 60.228 - type: ndcg_at_100 value: 64.277 - type: ndcg_at_1000 value: 65.07 - type: ndcg_at_3 value: 54.124 - type: ndcg_at_5 value: 57.147000000000006 - type: precision_at_1 value: 45.016 - type: precision_at_10 value: 9.937 - type: precision_at_100 value: 1.288 - type: precision_at_1000 value: 0.13899999999999998 - type: precision_at_3 value: 
24.471999999999998 - type: precision_at_5 value: 16.991 - type: recall_at_1 value: 39.383 - type: recall_at_10 value: 76.175 - type: recall_at_100 value: 93.02 - type: recall_at_1000 value: 98.60900000000001 - type: recall_at_3 value: 60.265 - type: recall_at_5 value: 67.46600000000001 - task: type: Retrieval dataset: name: MTEB CQADupstackGisRetrieval type: BeIR/cqadupstack config: default split: test revision: 5003b3064772da1887988e05400cf3806fe491f2 metrics: - type: map_at_1 value: 27.426000000000002 - type: map_at_10 value: 37.397000000000006 - type: map_at_100 value: 38.61 - type: map_at_1000 value: 38.678000000000004 - type: map_at_3 value: 34.150999999999996 - type: map_at_5 value: 36.137 - type: mrr_at_1 value: 29.944 - type: mrr_at_10 value: 39.654 - type: mrr_at_100 value: 40.638000000000005 - type: mrr_at_1000 value: 40.691 - type: mrr_at_3 value: 36.817 - type: mrr_at_5 value: 38.524 - type: ndcg_at_1 value: 29.944 - type: ndcg_at_10 value: 43.094 - type: ndcg_at_100 value: 48.789 - type: ndcg_at_1000 value: 50.339999999999996 - type: ndcg_at_3 value: 36.984 - type: ndcg_at_5 value: 40.248 - type: precision_at_1 value: 29.944 - type: precision_at_10 value: 6.78 - type: precision_at_100 value: 1.024 - type: precision_at_1000 value: 0.11800000000000001 - type: precision_at_3 value: 15.895000000000001 - type: precision_at_5 value: 11.39 - type: recall_at_1 value: 27.426000000000002 - type: recall_at_10 value: 58.464000000000006 - type: recall_at_100 value: 84.193 - type: recall_at_1000 value: 95.52000000000001 - type: recall_at_3 value: 42.172 - type: recall_at_5 value: 50.101 - task: type: Retrieval dataset: name: MTEB CQADupstackMathematicaRetrieval type: BeIR/cqadupstack config: default split: test revision: 90fceea13679c63fe563ded68f3b6f06e50061de metrics: - type: map_at_1 value: 19.721 - type: map_at_10 value: 31.604 - type: map_at_100 value: 32.972 - type: map_at_1000 value: 33.077 - type: map_at_3 value: 27.218999999999998 - type: map_at_5 value: 
29.53 - type: mrr_at_1 value: 25.0 - type: mrr_at_10 value: 35.843 - type: mrr_at_100 value: 36.785000000000004 - type: mrr_at_1000 value: 36.842000000000006 - type: mrr_at_3 value: 32.193 - type: mrr_at_5 value: 34.264 - type: ndcg_at_1 value: 25.0 - type: ndcg_at_10 value: 38.606 - type: ndcg_at_100 value: 44.272 - type: ndcg_at_1000 value: 46.527 - type: ndcg_at_3 value: 30.985000000000003 - type: ndcg_at_5 value: 34.43 - type: precision_at_1 value: 25.0 - type: precision_at_10 value: 7.811 - type: precision_at_100 value: 1.203 - type: precision_at_1000 value: 0.15 - type: precision_at_3 value: 15.423 - type: precision_at_5 value: 11.791 - type: recall_at_1 value: 19.721 - type: recall_at_10 value: 55.625 - type: recall_at_100 value: 79.34400000000001 - type: recall_at_1000 value: 95.208 - type: recall_at_3 value: 35.19 - type: recall_at_5 value: 43.626 - task: type: Retrieval dataset: name: MTEB CQADupstackPhysicsRetrieval type: BeIR/cqadupstack config: default split: test revision: 79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4 metrics: - type: map_at_1 value: 33.784 - type: map_at_10 value: 47.522 - type: map_at_100 value: 48.949999999999996 - type: map_at_1000 value: 49.038 - type: map_at_3 value: 43.284 - type: map_at_5 value: 45.629 - type: mrr_at_1 value: 41.482 - type: mrr_at_10 value: 52.830999999999996 - type: mrr_at_100 value: 53.559999999999995 - type: mrr_at_1000 value: 53.588 - type: mrr_at_3 value: 50.016000000000005 - type: mrr_at_5 value: 51.614000000000004 - type: ndcg_at_1 value: 41.482 - type: ndcg_at_10 value: 54.569 - type: ndcg_at_100 value: 59.675999999999995 - type: ndcg_at_1000 value: 60.989000000000004 - type: ndcg_at_3 value: 48.187000000000005 - type: ndcg_at_5 value: 51.183 - type: precision_at_1 value: 41.482 - type: precision_at_10 value: 10.221 - type: precision_at_100 value: 1.486 - type: precision_at_1000 value: 0.17500000000000002 - type: precision_at_3 value: 23.548 - type: precision_at_5 value: 16.805 - type: recall_at_1 value: 
33.784 - type: recall_at_10 value: 69.798 - type: recall_at_100 value: 90.098 - type: recall_at_1000 value: 98.176 - type: recall_at_3 value: 52.127 - type: recall_at_5 value: 59.861 - task: type: Retrieval dataset: name: MTEB CQADupstackProgrammersRetrieval type: BeIR/cqadupstack config: default split: test revision: 6184bc1440d2dbc7612be22b50686b8826d22b32 metrics: - type: map_at_1 value: 28.038999999999998 - type: map_at_10 value: 41.904 - type: map_at_100 value: 43.36 - type: map_at_1000 value: 43.453 - type: map_at_3 value: 37.785999999999994 - type: map_at_5 value: 40.105000000000004 - type: mrr_at_1 value: 35.046 - type: mrr_at_10 value: 46.926 - type: mrr_at_100 value: 47.815000000000005 - type: mrr_at_1000 value: 47.849000000000004 - type: mrr_at_3 value: 44.273 - type: mrr_at_5 value: 45.774 - type: ndcg_at_1 value: 35.046 - type: ndcg_at_10 value: 48.937000000000005 - type: ndcg_at_100 value: 54.544000000000004 - type: ndcg_at_1000 value: 56.069 - type: ndcg_at_3 value: 42.858000000000004 - type: ndcg_at_5 value: 45.644 - type: precision_at_1 value: 35.046 - type: precision_at_10 value: 9.452 - type: precision_at_100 value: 1.429 - type: precision_at_1000 value: 0.173 - type: precision_at_3 value: 21.346999999999998 - type: precision_at_5 value: 15.342 - type: recall_at_1 value: 28.038999999999998 - type: recall_at_10 value: 64.59700000000001 - type: recall_at_100 value: 87.735 - type: recall_at_1000 value: 97.41300000000001 - type: recall_at_3 value: 47.368 - type: recall_at_5 value: 54.93900000000001 - task: type: Retrieval dataset: name: MTEB CQADupstackRetrieval type: BeIR/cqadupstack config: default split: test revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4 metrics: - type: map_at_1 value: 28.17291666666667 - type: map_at_10 value: 40.025749999999995 - type: map_at_100 value: 41.39208333333333 - type: map_at_1000 value: 41.499249999999996 - type: map_at_3 value: 36.347 - type: map_at_5 value: 38.41391666666667 - type: mrr_at_1 value: 33.65925 - 
type: mrr_at_10 value: 44.085499999999996 - type: mrr_at_100 value: 44.94116666666667 - type: mrr_at_1000 value: 44.9855 - type: mrr_at_3 value: 41.2815 - type: mrr_at_5 value: 42.91491666666666 - type: ndcg_at_1 value: 33.65925 - type: ndcg_at_10 value: 46.430833333333325 - type: ndcg_at_100 value: 51.761 - type: ndcg_at_1000 value: 53.50899999999999 - type: ndcg_at_3 value: 40.45133333333333 - type: ndcg_at_5 value: 43.31483333333334 - type: precision_at_1 value: 33.65925 - type: precision_at_10 value: 8.4995 - type: precision_at_100 value: 1.3210000000000004 - type: precision_at_1000 value: 0.16591666666666666 - type: precision_at_3 value: 19.165083333333335 - type: precision_at_5 value: 13.81816666666667 - type: recall_at_1 value: 28.17291666666667 - type: recall_at_10 value: 61.12624999999999 - type: recall_at_100 value: 83.97266666666667 - type: recall_at_1000 value: 95.66550000000001 - type: recall_at_3 value: 44.661249999999995 - type: recall_at_5 value: 51.983333333333334 - type: map_at_1 value: 17.936 - type: map_at_10 value: 27.399 - type: map_at_100 value: 28.632 - type: map_at_1000 value: 28.738000000000003 - type: map_at_3 value: 24.456 - type: map_at_5 value: 26.06 - type: mrr_at_1 value: 19.224 - type: mrr_at_10 value: 28.998 - type: mrr_at_100 value: 30.11 - type: mrr_at_1000 value: 30.177 - type: mrr_at_3 value: 26.247999999999998 - type: mrr_at_5 value: 27.708 - type: ndcg_at_1 value: 19.224 - type: ndcg_at_10 value: 32.911 - type: ndcg_at_100 value: 38.873999999999995 - type: ndcg_at_1000 value: 41.277 - type: ndcg_at_3 value: 27.142 - type: ndcg_at_5 value: 29.755 - type: precision_at_1 value: 19.224 - type: precision_at_10 value: 5.6930000000000005 - type: precision_at_100 value: 0.9259999999999999 - type: precision_at_1000 value: 0.126 - type: precision_at_3 value: 12.138 - type: precision_at_5 value: 8.909 - type: recall_at_1 value: 17.936 - type: recall_at_10 value: 48.096 - type: recall_at_100 value: 75.389 - type: recall_at_1000 value: 
92.803 - type: recall_at_3 value: 32.812999999999995 - type: recall_at_5 value: 38.851 - task: type: Retrieval dataset: name: MTEB CQADupstackStatsRetrieval type: BeIR/cqadupstack config: default split: test revision: 65ac3a16b8e91f9cee4c9828cc7c335575432a2a metrics: - type: map_at_1 value: 24.681 - type: map_at_10 value: 34.892 - type: map_at_100 value: 35.996 - type: map_at_1000 value: 36.083 - type: map_at_3 value: 31.491999999999997 - type: map_at_5 value: 33.632 - type: mrr_at_1 value: 28.528 - type: mrr_at_10 value: 37.694 - type: mrr_at_100 value: 38.613 - type: mrr_at_1000 value: 38.668 - type: mrr_at_3 value: 34.714 - type: mrr_at_5 value: 36.616 - type: ndcg_at_1 value: 28.528 - type: ndcg_at_10 value: 40.703 - type: ndcg_at_100 value: 45.993 - type: ndcg_at_1000 value: 47.847 - type: ndcg_at_3 value: 34.622 - type: ndcg_at_5 value: 38.035999999999994 - type: precision_at_1 value: 28.528 - type: precision_at_10 value: 6.902 - type: precision_at_100 value: 1.0370000000000001 - type: precision_at_1000 value: 0.126 - type: precision_at_3 value: 15.798000000000002 - type: precision_at_5 value: 11.655999999999999 - type: recall_at_1 value: 24.681 - type: recall_at_10 value: 55.81 - type: recall_at_100 value: 79.785 - type: recall_at_1000 value: 92.959 - type: recall_at_3 value: 39.074 - type: recall_at_5 value: 47.568 - task: type: Retrieval dataset: name: MTEB CQADupstackTexRetrieval type: BeIR/cqadupstack config: default split: test revision: 46989137a86843e03a6195de44b09deda022eec7 metrics: - type: map_at_1 value: 18.627 - type: map_at_10 value: 27.872000000000003 - type: map_at_100 value: 29.237999999999996 - type: map_at_1000 value: 29.363 - type: map_at_3 value: 24.751 - type: map_at_5 value: 26.521 - type: mrr_at_1 value: 23.021 - type: mrr_at_10 value: 31.924000000000003 - type: mrr_at_100 value: 32.922000000000004 - type: mrr_at_1000 value: 32.988 - type: mrr_at_3 value: 29.192 - type: mrr_at_5 value: 30.798 - type: ndcg_at_1 value: 23.021 - type: 
ndcg_at_10 value: 33.535 - type: ndcg_at_100 value: 39.732 - type: ndcg_at_1000 value: 42.201 - type: ndcg_at_3 value: 28.153 - type: ndcg_at_5 value: 30.746000000000002 - type: precision_at_1 value: 23.021 - type: precision_at_10 value: 6.459 - type: precision_at_100 value: 1.1320000000000001 - type: precision_at_1000 value: 0.153 - type: precision_at_3 value: 13.719000000000001 - type: precision_at_5 value: 10.193000000000001 - type: recall_at_1 value: 18.627 - type: recall_at_10 value: 46.463 - type: recall_at_100 value: 74.226 - type: recall_at_1000 value: 91.28500000000001 - type: recall_at_3 value: 31.357000000000003 - type: recall_at_5 value: 38.067 - task: type: Retrieval dataset: name: MTEB CQADupstackUnixRetrieval type: BeIR/cqadupstack config: default split: test revision: 6c6430d3a6d36f8d2a829195bc5dc94d7e063e53 metrics: - type: map_at_1 value: 31.457 - type: map_at_10 value: 42.888 - type: map_at_100 value: 44.24 - type: map_at_1000 value: 44.327 - type: map_at_3 value: 39.588 - type: map_at_5 value: 41.423 - type: mrr_at_1 value: 37.126999999999995 - type: mrr_at_10 value: 47.083000000000006 - type: mrr_at_100 value: 47.997 - type: mrr_at_1000 value: 48.044 - type: mrr_at_3 value: 44.574000000000005 - type: mrr_at_5 value: 46.202 - type: ndcg_at_1 value: 37.126999999999995 - type: ndcg_at_10 value: 48.833 - type: ndcg_at_100 value: 54.327000000000005 - type: ndcg_at_1000 value: 56.011 - type: ndcg_at_3 value: 43.541999999999994 - type: ndcg_at_5 value: 46.127 - type: precision_at_1 value: 37.126999999999995 - type: precision_at_10 value: 8.376999999999999 - type: precision_at_100 value: 1.2309999999999999 - type: precision_at_1000 value: 0.146 - type: precision_at_3 value: 20.211000000000002 - type: precision_at_5 value: 14.16 - type: recall_at_1 value: 31.457 - type: recall_at_10 value: 62.369 - type: recall_at_100 value: 85.444 - type: recall_at_1000 value: 96.65599999999999 - type: recall_at_3 value: 47.961 - type: recall_at_5 value: 54.676 - task: 
type: Retrieval dataset: name: MTEB CQADupstackWebmastersRetrieval type: BeIR/cqadupstack config: default split: test revision: 160c094312a0e1facb97e55eeddb698c0abe3571 metrics: - type: map_at_1 value: 27.139999999999997 - type: map_at_10 value: 38.801 - type: map_at_100 value: 40.549 - type: map_at_1000 value: 40.802 - type: map_at_3 value: 35.05 - type: map_at_5 value: 36.884 - type: mrr_at_1 value: 33.004 - type: mrr_at_10 value: 43.864 - type: mrr_at_100 value: 44.667 - type: mrr_at_1000 value: 44.717 - type: mrr_at_3 value: 40.777 - type: mrr_at_5 value: 42.319 - type: ndcg_at_1 value: 33.004 - type: ndcg_at_10 value: 46.022 - type: ndcg_at_100 value: 51.542 - type: ndcg_at_1000 value: 53.742000000000004 - type: ndcg_at_3 value: 39.795 - type: ndcg_at_5 value: 42.272 - type: precision_at_1 value: 33.004 - type: precision_at_10 value: 9.012 - type: precision_at_100 value: 1.7770000000000001 - type: precision_at_1000 value: 0.26 - type: precision_at_3 value: 19.038 - type: precision_at_5 value: 13.675999999999998 - type: recall_at_1 value: 27.139999999999997 - type: recall_at_10 value: 60.961 - type: recall_at_100 value: 84.451 - type: recall_at_1000 value: 98.113 - type: recall_at_3 value: 43.001 - type: recall_at_5 value: 49.896 - task: type: Retrieval dataset: name: MTEB ClimateFEVER type: mteb/climate-fever config: default split: test revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380 metrics: - type: map_at_1 value: 22.076999999999998 - type: map_at_10 value: 35.44 - type: map_at_100 value: 37.651 - type: map_at_1000 value: 37.824999999999996 - type: map_at_3 value: 30.764999999999997 - type: map_at_5 value: 33.26 - type: mrr_at_1 value: 50.163000000000004 - type: mrr_at_10 value: 61.207 - type: mrr_at_100 value: 61.675000000000004 - type: mrr_at_1000 value: 61.692 - type: mrr_at_3 value: 58.60999999999999 - type: mrr_at_5 value: 60.307 - type: ndcg_at_1 value: 50.163000000000004 - type: ndcg_at_10 value: 45.882 - type: ndcg_at_100 value: 53.239999999999995 
- type: ndcg_at_1000 value: 55.852000000000004 - type: ndcg_at_3 value: 40.514 - type: ndcg_at_5 value: 42.038 - type: precision_at_1 value: 50.163000000000004 - type: precision_at_10 value: 13.466000000000001 - type: precision_at_100 value: 2.164 - type: precision_at_1000 value: 0.266 - type: precision_at_3 value: 29.707 - type: precision_at_5 value: 21.694 - type: recall_at_1 value: 22.076999999999998 - type: recall_at_10 value: 50.193 - type: recall_at_100 value: 74.993 - type: recall_at_1000 value: 89.131 - type: recall_at_3 value: 35.472 - type: recall_at_5 value: 41.814 - task: type: Retrieval dataset: name: MTEB DBPedia type: mteb/dbpedia config: default split: test revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659 metrics: - type: map_at_1 value: 9.953 - type: map_at_10 value: 24.515 - type: map_at_100 value: 36.173 - type: map_at_1000 value: 38.351 - type: map_at_3 value: 16.592000000000002 - type: map_at_5 value: 20.036 - type: mrr_at_1 value: 74.25 - type: mrr_at_10 value: 81.813 - type: mrr_at_100 value: 82.006 - type: mrr_at_1000 value: 82.011 - type: mrr_at_3 value: 80.875 - type: mrr_at_5 value: 81.362 - type: ndcg_at_1 value: 62.5 - type: ndcg_at_10 value: 52.42 - type: ndcg_at_100 value: 56.808 - type: ndcg_at_1000 value: 63.532999999999994 - type: ndcg_at_3 value: 56.654 - type: ndcg_at_5 value: 54.18300000000001 - type: precision_at_1 value: 74.25 - type: precision_at_10 value: 42.699999999999996 - type: precision_at_100 value: 13.675 - type: precision_at_1000 value: 2.664 - type: precision_at_3 value: 60.5 - type: precision_at_5 value: 52.800000000000004 - type: recall_at_1 value: 9.953 - type: recall_at_10 value: 30.253999999999998 - type: recall_at_100 value: 62.516000000000005 - type: recall_at_1000 value: 84.163 - type: recall_at_3 value: 18.13 - type: recall_at_5 value: 22.771 - task: type: Classification dataset: name: MTEB EmotionClassification type: mteb/emotion config: default split: test revision: 
4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 79.455 - type: f1 value: 74.16798697647569 - task: type: Retrieval dataset: name: MTEB FEVER type: mteb/fever config: default split: test revision: bea83ef9e8fb933d90a2f1d5515737465d613e12 metrics: - type: map_at_1 value: 87.531 - type: map_at_10 value: 93.16799999999999 - type: map_at_100 value: 93.341 - type: map_at_1000 value: 93.349 - type: map_at_3 value: 92.444 - type: map_at_5 value: 92.865 - type: mrr_at_1 value: 94.014 - type: mrr_at_10 value: 96.761 - type: mrr_at_100 value: 96.762 - type: mrr_at_1000 value: 96.762 - type: mrr_at_3 value: 96.672 - type: mrr_at_5 value: 96.736 - type: ndcg_at_1 value: 94.014 - type: ndcg_at_10 value: 95.112 - type: ndcg_at_100 value: 95.578 - type: ndcg_at_1000 value: 95.68900000000001 - type: ndcg_at_3 value: 94.392 - type: ndcg_at_5 value: 94.72500000000001 - type: precision_at_1 value: 94.014 - type: precision_at_10 value: 11.065 - type: precision_at_100 value: 1.157 - type: precision_at_1000 value: 0.11800000000000001 - type: precision_at_3 value: 35.259 - type: precision_at_5 value: 21.599 - type: recall_at_1 value: 87.531 - type: recall_at_10 value: 97.356 - type: recall_at_100 value: 98.965 - type: recall_at_1000 value: 99.607 - type: recall_at_3 value: 95.312 - type: recall_at_5 value: 96.295 - task: type: Retrieval dataset: name: MTEB FiQA2018 type: mteb/fiqa config: default split: test revision: 27a168819829fe9bcd655c2df245fb19452e8e06 metrics: - type: map_at_1 value: 32.055 - type: map_at_10 value: 53.114 - type: map_at_100 value: 55.235 - type: map_at_1000 value: 55.345 - type: map_at_3 value: 45.854 - type: map_at_5 value: 50.025 - type: mrr_at_1 value: 60.34 - type: mrr_at_10 value: 68.804 - type: mrr_at_100 value: 69.309 - type: mrr_at_1000 value: 69.32199999999999 - type: mrr_at_3 value: 66.40899999999999 - type: mrr_at_5 value: 67.976 - type: ndcg_at_1 value: 60.34 - type: ndcg_at_10 value: 62.031000000000006 - type: ndcg_at_100 
value: 68.00500000000001 - type: ndcg_at_1000 value: 69.286 - type: ndcg_at_3 value: 56.355999999999995 - type: ndcg_at_5 value: 58.687 - type: precision_at_1 value: 60.34 - type: precision_at_10 value: 17.176 - type: precision_at_100 value: 2.36 - type: precision_at_1000 value: 0.259 - type: precision_at_3 value: 37.14 - type: precision_at_5 value: 27.809 - type: recall_at_1 value: 32.055 - type: recall_at_10 value: 70.91 - type: recall_at_100 value: 91.83 - type: recall_at_1000 value: 98.871 - type: recall_at_3 value: 51.202999999999996 - type: recall_at_5 value: 60.563 - task: type: Retrieval dataset: name: MTEB HotpotQA type: mteb/hotpotqa config: default split: test revision: ab518f4d6fcca38d87c25209f94beba119d02014 metrics: - type: map_at_1 value: 43.68 - type: map_at_10 value: 64.389 - type: map_at_100 value: 65.24 - type: map_at_1000 value: 65.303 - type: map_at_3 value: 61.309000000000005 - type: map_at_5 value: 63.275999999999996 - type: mrr_at_1 value: 87.36 - type: mrr_at_10 value: 91.12 - type: mrr_at_100 value: 91.227 - type: mrr_at_1000 value: 91.229 - type: mrr_at_3 value: 90.57600000000001 - type: mrr_at_5 value: 90.912 - type: ndcg_at_1 value: 87.36 - type: ndcg_at_10 value: 73.076 - type: ndcg_at_100 value: 75.895 - type: ndcg_at_1000 value: 77.049 - type: ndcg_at_3 value: 68.929 - type: ndcg_at_5 value: 71.28 - type: precision_at_1 value: 87.36 - type: precision_at_10 value: 14.741000000000001 - type: precision_at_100 value: 1.694 - type: precision_at_1000 value: 0.185 - type: precision_at_3 value: 43.043 - type: precision_at_5 value: 27.681 - type: recall_at_1 value: 43.68 - type: recall_at_10 value: 73.707 - type: recall_at_100 value: 84.7 - type: recall_at_1000 value: 92.309 - type: recall_at_3 value: 64.564 - type: recall_at_5 value: 69.203 - task: type: Classification dataset: name: MTEB ImdbClassification type: mteb/imdb config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 
96.75399999999999 - type: ap value: 95.29389839242187 - type: f1 value: 96.75348377433475 - task: type: Retrieval dataset: name: MTEB MSMARCO type: mteb/msmarco config: default split: dev revision: c5a29a104738b98a9e76336939199e264163d4a0 metrics: - type: map_at_1 value: 25.176 - type: map_at_10 value: 38.598 - type: map_at_100 value: 39.707 - type: map_at_1000 value: 39.744 - type: map_at_3 value: 34.566 - type: map_at_5 value: 36.863 - type: mrr_at_1 value: 25.874000000000002 - type: mrr_at_10 value: 39.214 - type: mrr_at_100 value: 40.251 - type: mrr_at_1000 value: 40.281 - type: mrr_at_3 value: 35.291 - type: mrr_at_5 value: 37.545 - type: ndcg_at_1 value: 25.874000000000002 - type: ndcg_at_10 value: 45.98 - type: ndcg_at_100 value: 51.197 - type: ndcg_at_1000 value: 52.073 - type: ndcg_at_3 value: 37.785999999999994 - type: ndcg_at_5 value: 41.870000000000005 - type: precision_at_1 value: 25.874000000000002 - type: precision_at_10 value: 7.181 - type: precision_at_100 value: 0.979 - type: precision_at_1000 value: 0.106 - type: precision_at_3 value: 16.051000000000002 - type: precision_at_5 value: 11.713 - type: recall_at_1 value: 25.176 - type: recall_at_10 value: 68.67699999999999 - type: recall_at_100 value: 92.55 - type: recall_at_1000 value: 99.164 - type: recall_at_3 value: 46.372 - type: recall_at_5 value: 56.16 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (en) type: mteb/mtop_domain config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 99.03784769721841 - type: f1 value: 98.97791641821495 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (en) type: mteb/mtop_intent config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 91.88326493388054 - type: f1 value: 73.74809928034335 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (en) type: mteb/amazon_massive_intent config: en 
split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 85.41358439811701 - type: f1 value: 83.503679460639 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (en) type: mteb/amazon_massive_scenario config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 89.77135171486215 - type: f1 value: 88.89843747468366 - task: type: Clustering dataset: name: MTEB MedrxivClusteringP2P type: mteb/medrxiv-clustering-p2p config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 46.22695362087359 - task: type: Clustering dataset: name: MTEB MedrxivClusteringS2S type: mteb/medrxiv-clustering-s2s config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 44.132372165849425 - task: type: Reranking dataset: name: MTEB MindSmallReranking type: mteb/mind_small config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 33.35680810650402 - type: mrr value: 34.72625715637218 - task: type: Retrieval dataset: name: MTEB NFCorpus type: mteb/nfcorpus config: default split: test revision: ec0fa4fe99da2ff19ca1214b7966684033a58814 metrics: - type: map_at_1 value: 7.165000000000001 - type: map_at_10 value: 15.424 - type: map_at_100 value: 20.28 - type: map_at_1000 value: 22.065 - type: map_at_3 value: 11.236 - type: map_at_5 value: 13.025999999999998 - type: mrr_at_1 value: 51.702999999999996 - type: mrr_at_10 value: 59.965 - type: mrr_at_100 value: 60.667 - type: mrr_at_1000 value: 60.702999999999996 - type: mrr_at_3 value: 58.772000000000006 - type: mrr_at_5 value: 59.267 - type: ndcg_at_1 value: 49.536 - type: ndcg_at_10 value: 40.6 - type: ndcg_at_100 value: 37.848 - type: ndcg_at_1000 value: 46.657 - type: ndcg_at_3 value: 46.117999999999995 - type: ndcg_at_5 value: 43.619 - type: precision_at_1 value: 51.393 - type: 
precision_at_10 value: 30.31 - type: precision_at_100 value: 9.972 - type: precision_at_1000 value: 2.329 - type: precision_at_3 value: 43.137 - type: precision_at_5 value: 37.585 - type: recall_at_1 value: 7.165000000000001 - type: recall_at_10 value: 19.689999999999998 - type: recall_at_100 value: 39.237 - type: recall_at_1000 value: 71.417 - type: recall_at_3 value: 12.247 - type: recall_at_5 value: 14.902999999999999 - task: type: Retrieval dataset: name: MTEB NQ type: mteb/nq config: default split: test revision: b774495ed302d8c44a3a7ea25c90dbce03968f31 metrics: - type: map_at_1 value: 42.653999999999996 - type: map_at_10 value: 59.611999999999995 - type: map_at_100 value: 60.32300000000001 - type: map_at_1000 value: 60.336 - type: map_at_3 value: 55.584999999999994 - type: map_at_5 value: 58.19 - type: mrr_at_1 value: 47.683 - type: mrr_at_10 value: 62.06700000000001 - type: mrr_at_100 value: 62.537 - type: mrr_at_1000 value: 62.544999999999995 - type: mrr_at_3 value: 59.178 - type: mrr_at_5 value: 61.034 - type: ndcg_at_1 value: 47.654 - type: ndcg_at_10 value: 67.001 - type: ndcg_at_100 value: 69.73899999999999 - type: ndcg_at_1000 value: 69.986 - type: ndcg_at_3 value: 59.95700000000001 - type: ndcg_at_5 value: 64.025 - type: precision_at_1 value: 47.654 - type: precision_at_10 value: 10.367999999999999 - type: precision_at_100 value: 1.192 - type: precision_at_1000 value: 0.121 - type: precision_at_3 value: 26.651000000000003 - type: precision_at_5 value: 18.459 - type: recall_at_1 value: 42.653999999999996 - type: recall_at_10 value: 86.619 - type: recall_at_100 value: 98.04899999999999 - type: recall_at_1000 value: 99.812 - type: recall_at_3 value: 68.987 - type: recall_at_5 value: 78.158 - task: type: Retrieval dataset: name: MTEB QuoraRetrieval type: mteb/quora config: default split: test revision: None metrics: - type: map_at_1 value: 72.538 - type: map_at_10 value: 86.702 - type: map_at_100 value: 87.31 - type: map_at_1000 value: 87.323 - type: 
map_at_3 value: 83.87 - type: map_at_5 value: 85.682 - type: mrr_at_1 value: 83.31 - type: mrr_at_10 value: 89.225 - type: mrr_at_100 value: 89.30399999999999 - type: mrr_at_1000 value: 89.30399999999999 - type: mrr_at_3 value: 88.44300000000001 - type: mrr_at_5 value: 89.005 - type: ndcg_at_1 value: 83.32000000000001 - type: ndcg_at_10 value: 90.095 - type: ndcg_at_100 value: 91.12 - type: ndcg_at_1000 value: 91.179 - type: ndcg_at_3 value: 87.606 - type: ndcg_at_5 value: 89.031 - type: precision_at_1 value: 83.32000000000001 - type: precision_at_10 value: 13.641 - type: precision_at_100 value: 1.541 - type: precision_at_1000 value: 0.157 - type: precision_at_3 value: 38.377 - type: precision_at_5 value: 25.162000000000003 - type: recall_at_1 value: 72.538 - type: recall_at_10 value: 96.47200000000001 - type: recall_at_100 value: 99.785 - type: recall_at_1000 value: 99.99900000000001 - type: recall_at_3 value: 89.278 - type: recall_at_5 value: 93.367 - task: type: Clustering dataset: name: MTEB RedditClustering type: mteb/reddit-clustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 73.55219145406065 - task: type: Clustering dataset: name: MTEB RedditClusteringP2P type: mteb/reddit-clustering-p2p config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 74.13437105242755 - task: type: Retrieval dataset: name: MTEB SCIDOCS type: mteb/scidocs config: default split: test revision: None metrics: - type: map_at_1 value: 6.873 - type: map_at_10 value: 17.944 - type: map_at_100 value: 21.171 - type: map_at_1000 value: 21.528 - type: map_at_3 value: 12.415 - type: map_at_5 value: 15.187999999999999 - type: mrr_at_1 value: 33.800000000000004 - type: mrr_at_10 value: 46.455 - type: mrr_at_100 value: 47.378 - type: mrr_at_1000 value: 47.394999999999996 - type: mrr_at_3 value: 42.367 - type: mrr_at_5 value: 44.972 - type: ndcg_at_1 value: 
33.800000000000004 - type: ndcg_at_10 value: 28.907 - type: ndcg_at_100 value: 39.695 - type: ndcg_at_1000 value: 44.582 - type: ndcg_at_3 value: 26.949 - type: ndcg_at_5 value: 23.988 - type: precision_at_1 value: 33.800000000000004 - type: precision_at_10 value: 15.079999999999998 - type: precision_at_100 value: 3.056 - type: precision_at_1000 value: 0.42100000000000004 - type: precision_at_3 value: 25.167 - type: precision_at_5 value: 21.26 - type: recall_at_1 value: 6.873 - type: recall_at_10 value: 30.568 - type: recall_at_100 value: 62.062 - type: recall_at_1000 value: 85.37700000000001 - type: recall_at_3 value: 15.312999999999999 - type: recall_at_5 value: 21.575 - task: type: STS dataset: name: MTEB SICK-R type: mteb/sickr-sts config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_pearson value: 82.37009118256057 - type: cos_sim_spearman value: 79.27986395671529 - type: euclidean_pearson value: 79.18037715442115 - type: euclidean_spearman value: 79.28004791561621 - type: manhattan_pearson value: 79.34062972800541 - type: manhattan_spearman value: 79.43106695543402 - task: type: STS dataset: name: MTEB STS12 type: mteb/sts12-sts config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 87.48474767383833 - type: cos_sim_spearman value: 79.54505388752513 - type: euclidean_pearson value: 83.43282704179565 - type: euclidean_spearman value: 79.54579919925405 - type: manhattan_pearson value: 83.77564492427952 - type: manhattan_spearman value: 79.84558396989286 - task: type: STS dataset: name: MTEB STS13 type: mteb/sts13-sts config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 88.803698035802 - type: cos_sim_spearman value: 88.83451367754881 - type: euclidean_pearson value: 88.28939285711628 - type: euclidean_spearman value: 88.83528996073112 - type: manhattan_pearson value: 
88.28017412671795 - type: manhattan_spearman value: 88.9228828016344 - task: type: STS dataset: name: MTEB STS14 type: mteb/sts14-sts config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 85.27469288153428 - type: cos_sim_spearman value: 83.87477064876288 - type: euclidean_pearson value: 84.2601737035379 - type: euclidean_spearman value: 83.87431082479074 - type: manhattan_pearson value: 84.3621547772745 - type: manhattan_spearman value: 84.12094375000423 - task: type: STS dataset: name: MTEB STS15 type: mteb/sts15-sts config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 88.12749863201587 - type: cos_sim_spearman value: 88.54287568368565 - type: euclidean_pearson value: 87.90429700607999 - type: euclidean_spearman value: 88.5437689576261 - type: manhattan_pearson value: 88.19276653356833 - type: manhattan_spearman value: 88.99995393814679 - task: type: STS dataset: name: MTEB STS16 type: mteb/sts16-sts config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 85.68398747560902 - type: cos_sim_spearman value: 86.48815303460574 - type: euclidean_pearson value: 85.52356631237954 - type: euclidean_spearman value: 86.486391949551 - type: manhattan_pearson value: 85.67267981761788 - type: manhattan_spearman value: 86.7073696332485 - task: type: STS dataset: name: MTEB STS17 (en-en) type: mteb/sts17-crosslingual-sts config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 88.9057107443124 - type: cos_sim_spearman value: 88.7312168757697 - type: euclidean_pearson value: 88.72810439714794 - type: euclidean_spearman value: 88.71976185854771 - type: manhattan_pearson value: 88.50433745949111 - type: manhattan_spearman value: 88.51726175544195 - task: type: STS dataset: name: MTEB STS22 (en) type: mteb/sts22-crosslingual-sts 
config: en split: test revision: eea2b4fe26a775864c896887d910b76a8098ad3f metrics: - type: cos_sim_pearson value: 67.59391795109886 - type: cos_sim_spearman value: 66.87613008631367 - type: euclidean_pearson value: 69.23198488262217 - type: euclidean_spearman value: 66.85427723013692 - type: manhattan_pearson value: 69.50730124841084 - type: manhattan_spearman value: 67.10404669820792 - task: type: STS dataset: name: MTEB STSBenchmark type: mteb/stsbenchmark-sts config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 87.0820605344619 - type: cos_sim_spearman value: 86.8518089863434 - type: euclidean_pearson value: 86.31087134689284 - type: euclidean_spearman value: 86.8518520517941 - type: manhattan_pearson value: 86.47203796160612 - type: manhattan_spearman value: 87.1080149734421 - task: type: Reranking dataset: name: MTEB SciDocsRR type: mteb/scidocs-reranking config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 89.09255369305481 - type: mrr value: 97.10323445617563 - task: type: Retrieval dataset: name: MTEB SciFact type: mteb/scifact config: default split: test revision: 0228b52cf27578f30900b9e5271d331663a030d7 metrics: - type: map_at_1 value: 61.260999999999996 - type: map_at_10 value: 74.043 - type: map_at_100 value: 74.37700000000001 - type: map_at_1000 value: 74.384 - type: map_at_3 value: 71.222 - type: map_at_5 value: 72.875 - type: mrr_at_1 value: 64.333 - type: mrr_at_10 value: 74.984 - type: mrr_at_100 value: 75.247 - type: mrr_at_1000 value: 75.25500000000001 - type: mrr_at_3 value: 73.167 - type: mrr_at_5 value: 74.35000000000001 - type: ndcg_at_1 value: 64.333 - type: ndcg_at_10 value: 79.06 - type: ndcg_at_100 value: 80.416 - type: ndcg_at_1000 value: 80.55600000000001 - type: ndcg_at_3 value: 74.753 - type: ndcg_at_5 value: 76.97500000000001 - type: precision_at_1 value: 64.333 - type: precision_at_10 value: 10.567 - type: 
precision_at_100 value: 1.1199999999999999 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 29.889 - type: precision_at_5 value: 19.533 - type: recall_at_1 value: 61.260999999999996 - type: recall_at_10 value: 93.167 - type: recall_at_100 value: 99.0 - type: recall_at_1000 value: 100.0 - type: recall_at_3 value: 81.667 - type: recall_at_5 value: 87.394 - task: type: PairClassification dataset: name: MTEB SprintDuplicateQuestions type: mteb/sprintduplicatequestions-pairclassification config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.71980198019801 - type: cos_sim_ap value: 92.81616007802704 - type: cos_sim_f1 value: 85.17548454688318 - type: cos_sim_precision value: 89.43894389438944 - type: cos_sim_recall value: 81.3 - type: dot_accuracy value: 99.71980198019801 - type: dot_ap value: 92.81398760591358 - type: dot_f1 value: 85.17548454688318 - type: dot_precision value: 89.43894389438944 - type: dot_recall value: 81.3 - type: euclidean_accuracy value: 99.71980198019801 - type: euclidean_ap value: 92.81560637245072 - type: euclidean_f1 value: 85.17548454688318 - type: euclidean_precision value: 89.43894389438944 - type: euclidean_recall value: 81.3 - type: manhattan_accuracy value: 99.73069306930694 - type: manhattan_ap value: 93.14005487480794 - type: manhattan_f1 value: 85.56263269639068 - type: manhattan_precision value: 91.17647058823529 - type: manhattan_recall value: 80.60000000000001 - type: max_accuracy value: 99.73069306930694 - type: max_ap value: 93.14005487480794 - type: max_f1 value: 85.56263269639068 - task: type: Clustering dataset: name: MTEB StackExchangeClustering type: mteb/stackexchange-clustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 79.86443362395185 - task: type: Clustering dataset: name: MTEB StackExchangeClusteringP2P type: mteb/stackexchange-clustering-p2p 
config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 49.40897096662564 - task: type: Reranking dataset: name: MTEB StackOverflowDupQuestions type: mteb/stackoverflowdupquestions-reranking config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 55.66040806627947 - type: mrr value: 56.58670475766064 - task: type: Summarization dataset: name: MTEB SummEval type: mteb/summeval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 31.51015090598575 - type: cos_sim_spearman value: 31.35016454939226 - type: dot_pearson value: 31.5150068731 - type: dot_spearman value: 31.34790869023487 - task: type: Retrieval dataset: name: MTEB TRECCOVID type: mteb/trec-covid config: default split: test revision: None metrics: - type: map_at_1 value: 0.254 - type: map_at_10 value: 2.064 - type: map_at_100 value: 12.909 - type: map_at_1000 value: 31.761 - type: map_at_3 value: 0.738 - type: map_at_5 value: 1.155 - type: mrr_at_1 value: 96.0 - type: mrr_at_10 value: 98.0 - type: mrr_at_100 value: 98.0 - type: mrr_at_1000 value: 98.0 - type: mrr_at_3 value: 98.0 - type: mrr_at_5 value: 98.0 - type: ndcg_at_1 value: 93.0 - type: ndcg_at_10 value: 82.258 - type: ndcg_at_100 value: 64.34 - type: ndcg_at_1000 value: 57.912 - type: ndcg_at_3 value: 90.827 - type: ndcg_at_5 value: 86.79 - type: precision_at_1 value: 96.0 - type: precision_at_10 value: 84.8 - type: precision_at_100 value: 66.0 - type: precision_at_1000 value: 25.356 - type: precision_at_3 value: 94.667 - type: precision_at_5 value: 90.4 - type: recall_at_1 value: 0.254 - type: recall_at_10 value: 2.1950000000000003 - type: recall_at_100 value: 16.088 - type: recall_at_1000 value: 54.559000000000005 - type: recall_at_3 value: 0.75 - type: recall_at_5 value: 1.191 - task: type: Retrieval dataset: name: MTEB Touche2020 type: mteb/touche2020 config: default 
split: test revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f metrics: - type: map_at_1 value: 2.976 - type: map_at_10 value: 11.389000000000001 - type: map_at_100 value: 18.429000000000002 - type: map_at_1000 value: 20.113 - type: map_at_3 value: 6.483 - type: map_at_5 value: 8.770999999999999 - type: mrr_at_1 value: 40.816 - type: mrr_at_10 value: 58.118 - type: mrr_at_100 value: 58.489999999999995 - type: mrr_at_1000 value: 58.489999999999995 - type: mrr_at_3 value: 53.061 - type: mrr_at_5 value: 57.041 - type: ndcg_at_1 value: 40.816 - type: ndcg_at_10 value: 30.567 - type: ndcg_at_100 value: 42.44 - type: ndcg_at_1000 value: 53.480000000000004 - type: ndcg_at_3 value: 36.016 - type: ndcg_at_5 value: 34.257 - type: precision_at_1 value: 42.857 - type: precision_at_10 value: 25.714 - type: precision_at_100 value: 8.429 - type: precision_at_1000 value: 1.5939999999999999 - type: precision_at_3 value: 36.735 - type: precision_at_5 value: 33.878 - type: recall_at_1 value: 2.976 - type: recall_at_10 value: 17.854999999999997 - type: recall_at_100 value: 51.833 - type: recall_at_1000 value: 86.223 - type: recall_at_3 value: 7.887 - type: recall_at_5 value: 12.026 - task: type: Classification dataset: name: MTEB ToxicConversationsClassification type: mteb/toxic_conversations_50k config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 85.1174 - type: ap value: 30.169441069345748 - type: f1 value: 69.79254701873245 - task: type: Classification dataset: name: MTEB TweetSentimentExtractionClassification type: mteb/tweet_sentiment_extraction config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 72.58347481607245 - type: f1 value: 72.74877295564937 - task: type: Clustering dataset: name: MTEB TwentyNewsgroupsClustering type: mteb/twentynewsgroups-clustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure 
value: 53.90586138221305 - task: type: PairClassification dataset: name: MTEB TwitterSemEval2015 type: mteb/twittersemeval2015-pairclassification config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 87.35769207844072 - type: cos_sim_ap value: 77.9645072410354 - type: cos_sim_f1 value: 71.32352941176471 - type: cos_sim_precision value: 66.5903890160183 - type: cos_sim_recall value: 76.78100263852242 - type: dot_accuracy value: 87.37557370209214 - type: dot_ap value: 77.96250046429908 - type: dot_f1 value: 71.28932757557064 - type: dot_precision value: 66.95249130938586 - type: dot_recall value: 76.22691292875989 - type: euclidean_accuracy value: 87.35173153722357 - type: euclidean_ap value: 77.96520460741593 - type: euclidean_f1 value: 71.32470733210104 - type: euclidean_precision value: 66.91329479768785 - type: euclidean_recall value: 76.35883905013192 - type: manhattan_accuracy value: 87.25636287774931 - type: manhattan_ap value: 77.77752485611796 - type: manhattan_f1 value: 71.18148599269183 - type: manhattan_precision value: 66.10859728506787 - type: manhattan_recall value: 77.0976253298153 - type: max_accuracy value: 87.37557370209214 - type: max_ap value: 77.96520460741593 - type: max_f1 value: 71.32470733210104 - task: type: PairClassification dataset: name: MTEB TwitterURLCorpus type: mteb/twitterurlcorpus-pairclassification config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 89.38176737687739 - type: cos_sim_ap value: 86.58811861657401 - type: cos_sim_f1 value: 79.09430644097604 - type: cos_sim_precision value: 75.45085977911366 - type: cos_sim_recall value: 83.10748383122882 - type: dot_accuracy value: 89.38370784336554 - type: dot_ap value: 86.58840606004333 - type: dot_f1 value: 79.10179860068133 - type: dot_precision value: 75.44546153308643 - type: dot_recall value: 83.13058207576223 - type: euclidean_accuracy value: 
89.38564830985369 - type: euclidean_ap value: 86.58820721061164 - type: euclidean_f1 value: 79.09070942235888 - type: euclidean_precision value: 75.38729937194697 - type: euclidean_recall value: 83.17677856482906 - type: manhattan_accuracy value: 89.40699344122326 - type: manhattan_ap value: 86.60631843011362 - type: manhattan_f1 value: 79.14949970570925 - type: manhattan_precision value: 75.78191039729502 - type: manhattan_recall value: 82.83030489682784 - type: max_accuracy value: 89.40699344122326 - type: max_ap value: 86.60631843011362 - type: max_f1 value: 79.14949970570925 - task: type: STS dataset: name: MTEB AFQMC type: C-MTEB/AFQMC config: default split: validation revision: b44c3b011063adb25877c13823db83bb193913c4 metrics: - type: cos_sim_pearson value: 65.58442135663871 - type: cos_sim_spearman value: 72.2538631361313 - type: euclidean_pearson value: 70.97255486607429 - type: euclidean_spearman value: 72.25374250228647 - type: manhattan_pearson value: 70.83250199989911 - type: manhattan_spearman value: 72.14819496536272 - task: type: STS dataset: name: MTEB ATEC type: C-MTEB/ATEC config: default split: test revision: 0f319b1142f28d00e055a6770f3f726ae9b7d865 metrics: - type: cos_sim_pearson value: 59.99478404929932 - type: cos_sim_spearman value: 62.61836216999812 - type: euclidean_pearson value: 66.86429811933593 - type: euclidean_spearman value: 62.6183520374191 - type: manhattan_pearson value: 66.8063778911633 - type: manhattan_spearman value: 62.569607573241115 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (zh) type: mteb/amazon_reviews_multi config: zh split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 53.98400000000001 - type: f1 value: 51.21447361350723 - task: type: STS dataset: name: MTEB BQ type: C-MTEB/BQ config: default split: test revision: e3dda5e115e487b39ec7e618c0c6a29137052a55 metrics: - type: cos_sim_pearson value: 79.11941660686553 - type: cos_sim_spearman 
value: 81.25029594540435 - type: euclidean_pearson value: 82.06973504238826 - type: euclidean_spearman value: 81.2501989488524 - type: manhattan_pearson value: 82.10094630392753 - type: manhattan_spearman value: 81.27987244392389 - task: type: Clustering dataset: name: MTEB CLSClusteringP2P type: C-MTEB/CLSClusteringP2P config: default split: test revision: 4b6227591c6c1a73bc76b1055f3b7f3588e72476 metrics: - type: v_measure value: 47.07270168705156 - task: type: Clustering dataset: name: MTEB CLSClusteringS2S type: C-MTEB/CLSClusteringS2S config: default split: test revision: e458b3f5414b62b7f9f83499ac1f5497ae2e869f metrics: - type: v_measure value: 45.98511703185043 - task: type: Reranking dataset: name: MTEB CMedQAv1 type: C-MTEB/CMedQAv1-reranking config: default split: test revision: 8d7f1e942507dac42dc58017c1a001c3717da7df metrics: - type: map value: 88.19895157194931 - type: mrr value: 90.21424603174603 - task: type: Reranking dataset: name: MTEB CMedQAv2 type: C-MTEB/CMedQAv2-reranking config: default split: test revision: 23d186750531a14a0357ca22cd92d712fd512ea0 metrics: - type: map value: 88.03317320980119 - type: mrr value: 89.9461507936508 - task: type: Retrieval dataset: name: MTEB CmedqaRetrieval type: C-MTEB/CmedqaRetrieval config: default split: dev revision: cd540c506dae1cf9e9a59c3e06f42030d54e7301 metrics: - type: map_at_1 value: 29.037000000000003 - type: map_at_10 value: 42.001 - type: map_at_100 value: 43.773 - type: map_at_1000 value: 43.878 - type: map_at_3 value: 37.637 - type: map_at_5 value: 40.034 - type: mrr_at_1 value: 43.136 - type: mrr_at_10 value: 51.158 - type: mrr_at_100 value: 52.083 - type: mrr_at_1000 value: 52.12 - type: mrr_at_3 value: 48.733 - type: mrr_at_5 value: 50.025 - type: ndcg_at_1 value: 43.136 - type: ndcg_at_10 value: 48.685 - type: ndcg_at_100 value: 55.513 - type: ndcg_at_1000 value: 57.242000000000004 - type: ndcg_at_3 value: 43.329 - type: ndcg_at_5 value: 45.438 - type: precision_at_1 value: 43.136 - type: 
precision_at_10 value: 10.56 - type: precision_at_100 value: 1.6129999999999998 - type: precision_at_1000 value: 0.184 - type: precision_at_3 value: 24.064 - type: precision_at_5 value: 17.269000000000002 - type: recall_at_1 value: 29.037000000000003 - type: recall_at_10 value: 59.245000000000005 - type: recall_at_100 value: 87.355 - type: recall_at_1000 value: 98.74000000000001 - type: recall_at_3 value: 42.99 - type: recall_at_5 value: 49.681999999999995 - task: type: PairClassification dataset: name: MTEB Cmnli type: C-MTEB/CMNLI config: default split: validation revision: 41bc36f332156f7adc9e38f53777c959b2ae9766 metrics: - type: cos_sim_accuracy value: 82.68190018039687 - type: cos_sim_ap value: 90.18017125327886 - type: cos_sim_f1 value: 83.64080906868193 - type: cos_sim_precision value: 79.7076890489303 - type: cos_sim_recall value: 87.98223053542202 - type: dot_accuracy value: 82.68190018039687 - type: dot_ap value: 90.18782350103646 - type: dot_f1 value: 83.64242087729039 - type: dot_precision value: 79.65313028764805 - type: dot_recall value: 88.05237315875614 - type: euclidean_accuracy value: 82.68190018039687 - type: euclidean_ap value: 90.1801957900632 - type: euclidean_f1 value: 83.63636363636364 - type: euclidean_precision value: 79.52772506852203 - type: euclidean_recall value: 88.19265840542437 - type: manhattan_accuracy value: 82.14070956103427 - type: manhattan_ap value: 89.96178420101427 - type: manhattan_f1 value: 83.21087838578791 - type: manhattan_precision value: 78.35605121850475 - type: manhattan_recall value: 88.70703764320785 - type: max_accuracy value: 82.68190018039687 - type: max_ap value: 90.18782350103646 - type: max_f1 value: 83.64242087729039 - task: type: Retrieval dataset: name: MTEB CovidRetrieval type: C-MTEB/CovidRetrieval config: default split: dev revision: 1271c7809071a13532e05f25fb53511ffce77117 metrics: - type: map_at_1 value: 72.234 - type: map_at_10 value: 80.10000000000001 - type: map_at_100 value: 80.36 - type: 
map_at_1000 value: 80.363 - type: map_at_3 value: 78.315 - type: map_at_5 value: 79.607 - type: mrr_at_1 value: 72.392 - type: mrr_at_10 value: 80.117 - type: mrr_at_100 value: 80.36999999999999 - type: mrr_at_1000 value: 80.373 - type: mrr_at_3 value: 78.469 - type: mrr_at_5 value: 79.633 - type: ndcg_at_1 value: 72.392 - type: ndcg_at_10 value: 83.651 - type: ndcg_at_100 value: 84.749 - type: ndcg_at_1000 value: 84.83000000000001 - type: ndcg_at_3 value: 80.253 - type: ndcg_at_5 value: 82.485 - type: precision_at_1 value: 72.392 - type: precision_at_10 value: 9.557 - type: precision_at_100 value: 1.004 - type: precision_at_1000 value: 0.101 - type: precision_at_3 value: 28.732000000000003 - type: precision_at_5 value: 18.377 - type: recall_at_1 value: 72.234 - type: recall_at_10 value: 94.573 - type: recall_at_100 value: 99.368 - type: recall_at_1000 value: 100.0 - type: recall_at_3 value: 85.669 - type: recall_at_5 value: 91.01700000000001 - task: type: Retrieval dataset: name: MTEB DuRetrieval type: C-MTEB/DuRetrieval config: default split: dev revision: a1a333e290fe30b10f3f56498e3a0d911a693ced metrics: - type: map_at_1 value: 26.173999999999996 - type: map_at_10 value: 80.04 - type: map_at_100 value: 82.94500000000001 - type: map_at_1000 value: 82.98100000000001 - type: map_at_3 value: 55.562999999999995 - type: map_at_5 value: 69.89800000000001 - type: mrr_at_1 value: 89.5 - type: mrr_at_10 value: 92.996 - type: mrr_at_100 value: 93.06400000000001 - type: mrr_at_1000 value: 93.065 - type: mrr_at_3 value: 92.658 - type: mrr_at_5 value: 92.84599999999999 - type: ndcg_at_1 value: 89.5 - type: ndcg_at_10 value: 87.443 - type: ndcg_at_100 value: 90.253 - type: ndcg_at_1000 value: 90.549 - type: ndcg_at_3 value: 85.874 - type: ndcg_at_5 value: 84.842 - type: precision_at_1 value: 89.5 - type: precision_at_10 value: 41.805 - type: precision_at_100 value: 4.827 - type: precision_at_1000 value: 0.49 - type: precision_at_3 value: 76.85 - type: precision_at_5 value: 
64.8 - type: recall_at_1 value: 26.173999999999996 - type: recall_at_10 value: 89.101 - type: recall_at_100 value: 98.08099999999999 - type: recall_at_1000 value: 99.529 - type: recall_at_3 value: 57.902 - type: recall_at_5 value: 74.602 - task: type: Retrieval dataset: name: MTEB EcomRetrieval type: C-MTEB/EcomRetrieval config: default split: dev revision: 687de13dc7294d6fd9be10c6945f9e8fec8166b9 metrics: - type: map_at_1 value: 56.10000000000001 - type: map_at_10 value: 66.15299999999999 - type: map_at_100 value: 66.625 - type: map_at_1000 value: 66.636 - type: map_at_3 value: 63.632999999999996 - type: map_at_5 value: 65.293 - type: mrr_at_1 value: 56.10000000000001 - type: mrr_at_10 value: 66.15299999999999 - type: mrr_at_100 value: 66.625 - type: mrr_at_1000 value: 66.636 - type: mrr_at_3 value: 63.632999999999996 - type: mrr_at_5 value: 65.293 - type: ndcg_at_1 value: 56.10000000000001 - type: ndcg_at_10 value: 71.146 - type: ndcg_at_100 value: 73.27799999999999 - type: ndcg_at_1000 value: 73.529 - type: ndcg_at_3 value: 66.09 - type: ndcg_at_5 value: 69.08999999999999 - type: precision_at_1 value: 56.10000000000001 - type: precision_at_10 value: 8.68 - type: precision_at_100 value: 0.964 - type: precision_at_1000 value: 0.098 - type: precision_at_3 value: 24.4 - type: precision_at_5 value: 16.1 - type: recall_at_1 value: 56.10000000000001 - type: recall_at_10 value: 86.8 - type: recall_at_100 value: 96.39999999999999 - type: recall_at_1000 value: 98.3 - type: recall_at_3 value: 73.2 - type: recall_at_5 value: 80.5 - task: type: Classification dataset: name: MTEB IFlyTek type: C-MTEB/IFlyTek-classification config: default split: validation revision: 421605374b29664c5fc098418fe20ada9bd55f8a metrics: - type: accuracy value: 54.52096960369373 - type: f1 value: 40.930845295808695 - task: type: Classification dataset: name: MTEB JDReview type: C-MTEB/JDReview-classification config: default split: test revision: b7c64bd89eb87f8ded463478346f76731f07bf8b metrics: - 
type: accuracy value: 86.51031894934334 - type: ap value: 55.9516014323483 - type: f1 value: 81.54813679326381 - task: type: STS dataset: name: MTEB LCQMC type: C-MTEB/LCQMC config: default split: test revision: 17f9b096f80380fce5ed12a9be8be7784b337daf metrics: - type: cos_sim_pearson value: 69.67437838574276 - type: cos_sim_spearman value: 73.81314174653045 - type: euclidean_pearson value: 72.63430276680275 - type: euclidean_spearman value: 73.81358736777001 - type: manhattan_pearson value: 72.58743833842829 - type: manhattan_spearman value: 73.7590419009179 - task: type: Reranking dataset: name: MTEB MMarcoReranking type: C-MTEB/Mmarco-reranking config: default split: dev revision: None metrics: - type: map value: 31.648613483640254 - type: mrr value: 30.37420634920635 - task: type: Retrieval dataset: name: MTEB MMarcoRetrieval type: C-MTEB/MMarcoRetrieval config: default split: dev revision: 539bbde593d947e2a124ba72651aafc09eb33fc2 metrics: - type: map_at_1 value: 73.28099999999999 - type: map_at_10 value: 81.977 - type: map_at_100 value: 82.222 - type: map_at_1000 value: 82.22699999999999 - type: map_at_3 value: 80.441 - type: map_at_5 value: 81.46600000000001 - type: mrr_at_1 value: 75.673 - type: mrr_at_10 value: 82.41000000000001 - type: mrr_at_100 value: 82.616 - type: mrr_at_1000 value: 82.621 - type: mrr_at_3 value: 81.094 - type: mrr_at_5 value: 81.962 - type: ndcg_at_1 value: 75.673 - type: ndcg_at_10 value: 85.15599999999999 - type: ndcg_at_100 value: 86.151 - type: ndcg_at_1000 value: 86.26899999999999 - type: ndcg_at_3 value: 82.304 - type: ndcg_at_5 value: 84.009 - type: precision_at_1 value: 75.673 - type: precision_at_10 value: 10.042 - type: precision_at_100 value: 1.052 - type: precision_at_1000 value: 0.106 - type: precision_at_3 value: 30.673000000000002 - type: precision_at_5 value: 19.326999999999998 - type: recall_at_1 value: 73.28099999999999 - type: recall_at_10 value: 94.446 - type: recall_at_100 value: 98.737 - type: recall_at_1000 
value: 99.649 - type: recall_at_3 value: 86.984 - type: recall_at_5 value: 91.024 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (zh-CN) type: mteb/amazon_massive_intent config: zh-CN split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 81.08607935440484 - type: f1 value: 78.24879986066307 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (zh-CN) type: mteb/amazon_massive_scenario config: zh-CN split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 86.05917955615332 - type: f1 value: 85.05279279434997 - task: type: Retrieval dataset: name: MTEB MedicalRetrieval type: C-MTEB/MedicalRetrieval config: default split: dev revision: 2039188fb5800a9803ba5048df7b76e6fb151fc6 metrics: - type: map_at_1 value: 56.2 - type: map_at_10 value: 62.57899999999999 - type: map_at_100 value: 63.154999999999994 - type: map_at_1000 value: 63.193 - type: map_at_3 value: 61.217 - type: map_at_5 value: 62.012 - type: mrr_at_1 value: 56.3 - type: mrr_at_10 value: 62.629000000000005 - type: mrr_at_100 value: 63.205999999999996 - type: mrr_at_1000 value: 63.244 - type: mrr_at_3 value: 61.267 - type: mrr_at_5 value: 62.062 - type: ndcg_at_1 value: 56.2 - type: ndcg_at_10 value: 65.592 - type: ndcg_at_100 value: 68.657 - type: ndcg_at_1000 value: 69.671 - type: ndcg_at_3 value: 62.808 - type: ndcg_at_5 value: 64.24499999999999 - type: precision_at_1 value: 56.2 - type: precision_at_10 value: 7.5 - type: precision_at_100 value: 0.899 - type: precision_at_1000 value: 0.098 - type: precision_at_3 value: 22.467000000000002 - type: precision_at_5 value: 14.180000000000001 - type: recall_at_1 value: 56.2 - type: recall_at_10 value: 75.0 - type: recall_at_100 value: 89.9 - type: recall_at_1000 value: 97.89999999999999 - type: recall_at_3 value: 67.4 - type: recall_at_5 value: 70.89999999999999 - task: type: Classification dataset: name: MTEB 
MultilingualSentiment type: C-MTEB/MultilingualSentiment-classification config: default split: validation revision: 46958b007a63fdbf239b7672c25d0bea67b5ea1a metrics: - type: accuracy value: 76.87666666666667 - type: f1 value: 76.7317686219665 - task: type: PairClassification dataset: name: MTEB Ocnli type: C-MTEB/OCNLI config: default split: validation revision: 66e76a618a34d6d565d5538088562851e6daa7ec metrics: - type: cos_sim_accuracy value: 79.64266377910124 - type: cos_sim_ap value: 84.78274442344829 - type: cos_sim_f1 value: 81.16947472745292 - type: cos_sim_precision value: 76.47058823529412 - type: cos_sim_recall value: 86.48363252375924 - type: dot_accuracy value: 79.64266377910124 - type: dot_ap value: 84.7851404063692 - type: dot_f1 value: 81.16947472745292 - type: dot_precision value: 76.47058823529412 - type: dot_recall value: 86.48363252375924 - type: euclidean_accuracy value: 79.64266377910124 - type: euclidean_ap value: 84.78068373762378 - type: euclidean_f1 value: 81.14794656110837 - type: euclidean_precision value: 76.35009310986965 - type: euclidean_recall value: 86.58922914466737 - type: manhattan_accuracy value: 79.48023822414727 - type: manhattan_ap value: 84.72928897427576 - type: manhattan_f1 value: 81.32084770823064 - type: manhattan_precision value: 76.24768946395564 - type: manhattan_recall value: 87.11721224920802 - type: max_accuracy value: 79.64266377910124 - type: max_ap value: 84.7851404063692 - type: max_f1 value: 81.32084770823064 - task: type: Classification dataset: name: MTEB OnlineShopping type: C-MTEB/OnlineShopping-classification config: default split: test revision: e610f2ebd179a8fda30ae534c3878750a96db120 metrics: - type: accuracy value: 94.3 - type: ap value: 92.8664032274438 - type: f1 value: 94.29311102997727 - task: type: STS dataset: name: MTEB PAWSX type: C-MTEB/PAWSX config: default split: test revision: 9c6a90e430ac22b5779fb019a23e820b11a8b5e1 metrics: - type: cos_sim_pearson value: 48.51392279882909 - type: 
cos_sim_spearman value: 54.06338895994974 - type: euclidean_pearson value: 52.58480559573412 - type: euclidean_spearman value: 54.06417276612201 - type: manhattan_pearson value: 52.69525121721343 - type: manhattan_spearman value: 54.048147455389675 - task: type: STS dataset: name: MTEB QBQTC type: C-MTEB/QBQTC config: default split: test revision: 790b0510dc52b1553e8c49f3d2afb48c0e5c48b7 metrics: - type: cos_sim_pearson value: 29.728387290757325 - type: cos_sim_spearman value: 31.366121633635284 - type: euclidean_pearson value: 29.14588368552961 - type: euclidean_spearman value: 31.36764411112844 - type: manhattan_pearson value: 29.63517350523121 - type: manhattan_spearman value: 31.94157020583762 - task: type: STS dataset: name: MTEB STS22 (zh) type: mteb/sts22-crosslingual-sts config: zh split: test revision: eea2b4fe26a775864c896887d910b76a8098ad3f metrics: - type: cos_sim_pearson value: 63.64868296271406 - type: cos_sim_spearman value: 66.12800618164744 - type: euclidean_pearson value: 63.21405767340238 - type: euclidean_spearman value: 66.12786567790748 - type: manhattan_pearson value: 64.04300276525848 - type: manhattan_spearman value: 66.5066857145652 - task: type: STS dataset: name: MTEB STSB type: C-MTEB/STSB config: default split: test revision: 0cde68302b3541bb8b3c340dc0644b0b745b3dc0 metrics: - type: cos_sim_pearson value: 81.2302623912794 - type: cos_sim_spearman value: 81.16833673266562 - type: euclidean_pearson value: 79.47647843876024 - type: euclidean_spearman value: 81.16944349524972 - type: manhattan_pearson value: 79.84947238492208 - type: manhattan_spearman value: 81.64626599410026 - task: type: Reranking dataset: name: MTEB T2Reranking type: C-MTEB/T2Reranking config: default split: dev revision: 76631901a18387f85eaa53e5450019b87ad58ef9 metrics: - type: map value: 67.80129586475687 - type: mrr value: 77.77402311635554 - task: type: Retrieval dataset: name: MTEB T2Retrieval type: C-MTEB/T2Retrieval config: default split: dev revision: 
8731a845f1bf500a4f111cf1070785c793d10e64 metrics: - type: map_at_1 value: 28.666999999999998 - type: map_at_10 value: 81.063 - type: map_at_100 value: 84.504 - type: map_at_1000 value: 84.552 - type: map_at_3 value: 56.897 - type: map_at_5 value: 70.073 - type: mrr_at_1 value: 92.087 - type: mrr_at_10 value: 94.132 - type: mrr_at_100 value: 94.19800000000001 - type: mrr_at_1000 value: 94.19999999999999 - type: mrr_at_3 value: 93.78999999999999 - type: mrr_at_5 value: 94.002 - type: ndcg_at_1 value: 92.087 - type: ndcg_at_10 value: 87.734 - type: ndcg_at_100 value: 90.736 - type: ndcg_at_1000 value: 91.184 - type: ndcg_at_3 value: 88.78 - type: ndcg_at_5 value: 87.676 - type: precision_at_1 value: 92.087 - type: precision_at_10 value: 43.46 - type: precision_at_100 value: 5.07 - type: precision_at_1000 value: 0.518 - type: precision_at_3 value: 77.49000000000001 - type: precision_at_5 value: 65.194 - type: recall_at_1 value: 28.666999999999998 - type: recall_at_10 value: 86.632 - type: recall_at_100 value: 96.646 - type: recall_at_1000 value: 98.917 - type: recall_at_3 value: 58.333999999999996 - type: recall_at_5 value: 72.974 - task: type: Classification dataset: name: MTEB TNews type: C-MTEB/TNews-classification config: default split: validation revision: 317f262bf1e6126357bbe89e875451e4b0938fe4 metrics: - type: accuracy value: 52.971999999999994 - type: f1 value: 50.2898280984929 - task: type: Clustering dataset: name: MTEB ThuNewsClusteringP2P type: C-MTEB/ThuNewsClusteringP2P config: default split: test revision: 5798586b105c0434e4f0fe5e767abe619442cf93 metrics: - type: v_measure value: 86.0797948663824 - task: type: Clustering dataset: name: MTEB ThuNewsClusteringS2S type: C-MTEB/ThuNewsClusteringS2S config: default split: test revision: 8a8b2caeda43f39e13c4bc5bea0f8a667896e10d metrics: - type: v_measure value: 85.10759092255017 - task: type: Retrieval dataset: name: MTEB VideoRetrieval type: C-MTEB/VideoRetrieval config: default split: dev revision: 
58c2597a5943a2ba48f4668c3b90d796283c5639 metrics: - type: map_at_1 value: 65.60000000000001 - type: map_at_10 value: 74.773 - type: map_at_100 value: 75.128 - type: map_at_1000 value: 75.136 - type: map_at_3 value: 73.05 - type: map_at_5 value: 74.13499999999999 - type: mrr_at_1 value: 65.60000000000001 - type: mrr_at_10 value: 74.773 - type: mrr_at_100 value: 75.128 - type: mrr_at_1000 value: 75.136 - type: mrr_at_3 value: 73.05 - type: mrr_at_5 value: 74.13499999999999 - type: ndcg_at_1 value: 65.60000000000001 - type: ndcg_at_10 value: 78.84299999999999 - type: ndcg_at_100 value: 80.40899999999999 - type: ndcg_at_1000 value: 80.57 - type: ndcg_at_3 value: 75.40599999999999 - type: ndcg_at_5 value: 77.351 - type: precision_at_1 value: 65.60000000000001 - type: precision_at_10 value: 9.139999999999999 - type: precision_at_100 value: 0.984 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 27.400000000000002 - type: precision_at_5 value: 17.380000000000003 - type: recall_at_1 value: 65.60000000000001 - type: recall_at_10 value: 91.4 - type: recall_at_100 value: 98.4 - type: recall_at_1000 value: 99.6 - type: recall_at_3 value: 82.19999999999999 - type: recall_at_5 value: 86.9 - task: type: Classification dataset: name: MTEB Waimai type: C-MTEB/waimai-classification config: default split: test revision: 339287def212450dcaa9df8c22bf93e9980c7023 metrics: - type: accuracy value: 89.47 - type: ap value: 75.59561751845389 - type: f1 value: 87.95207751382563 - task: type: Clustering dataset: name: MTEB AlloProfClusteringP2P type: lyon-nlp/alloprof config: default split: test revision: 392ba3f5bcc8c51f578786c1fc3dae648662cb9b metrics: - type: v_measure value: 76.05592323841036 - type: v_measure value: 64.51718058866508 - task: type: Reranking dataset: name: MTEB AlloprofReranking type: lyon-nlp/mteb-fr-reranking-alloprof-s2p config: default split: test revision: 666fdacebe0291776e86f29345663dfaf80a0db9 metrics: - type: map value: 73.08278490943373 - type: 
mrr value: 74.66561454570449 - task: type: Retrieval dataset: name: MTEB AlloprofRetrieval type: lyon-nlp/alloprof config: default split: test revision: 392ba3f5bcc8c51f578786c1fc3dae648662cb9b metrics: - type: map_at_1 value: 38.912 - type: map_at_10 value: 52.437999999999995 - type: map_at_100 value: 53.38 - type: map_at_1000 value: 53.427 - type: map_at_3 value: 48.879 - type: map_at_5 value: 50.934000000000005 - type: mrr_at_1 value: 44.085 - type: mrr_at_10 value: 55.337 - type: mrr_at_100 value: 56.016999999999996 - type: mrr_at_1000 value: 56.043 - type: mrr_at_3 value: 52.55499999999999 - type: mrr_at_5 value: 54.20399999999999 - type: ndcg_at_1 value: 44.085 - type: ndcg_at_10 value: 58.876 - type: ndcg_at_100 value: 62.714000000000006 - type: ndcg_at_1000 value: 63.721000000000004 - type: ndcg_at_3 value: 52.444 - type: ndcg_at_5 value: 55.692 - type: precision_at_1 value: 44.085 - type: precision_at_10 value: 9.21 - type: precision_at_100 value: 1.164 - type: precision_at_1000 value: 0.128 - type: precision_at_3 value: 23.043 - type: precision_at_5 value: 15.898000000000001 - type: recall_at_1 value: 38.912 - type: recall_at_10 value: 75.577 - type: recall_at_100 value: 92.038 - type: recall_at_1000 value: 99.325 - type: recall_at_3 value: 58.592 - type: recall_at_5 value: 66.235 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (fr) type: mteb/amazon_reviews_multi config: fr split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 55.532000000000004 - type: f1 value: 52.5783943471605 - task: type: Retrieval dataset: name: MTEB BSARDRetrieval type: maastrichtlawtech/bsard config: default split: test revision: 5effa1b9b5fa3b0f9e12523e6e43e5f86a6e6d59 metrics: - type: map_at_1 value: 8.108 - type: map_at_10 value: 14.710999999999999 - type: map_at_100 value: 15.891 - type: map_at_1000 value: 15.983 - type: map_at_3 value: 12.237 - type: map_at_5 value: 13.679 - type: mrr_at_1 value: 8.108 - 
type: mrr_at_10 value: 14.710999999999999 - type: mrr_at_100 value: 15.891 - type: mrr_at_1000 value: 15.983 - type: mrr_at_3 value: 12.237 - type: mrr_at_5 value: 13.679 - type: ndcg_at_1 value: 8.108 - type: ndcg_at_10 value: 18.796 - type: ndcg_at_100 value: 25.098 - type: ndcg_at_1000 value: 27.951999999999998 - type: ndcg_at_3 value: 13.712 - type: ndcg_at_5 value: 16.309 - type: precision_at_1 value: 8.108 - type: precision_at_10 value: 3.198 - type: precision_at_100 value: 0.626 - type: precision_at_1000 value: 0.086 - type: precision_at_3 value: 6.006 - type: precision_at_5 value: 4.865 - type: recall_at_1 value: 8.108 - type: recall_at_10 value: 31.982 - type: recall_at_100 value: 62.613 - type: recall_at_1000 value: 86.036 - type: recall_at_3 value: 18.018 - type: recall_at_5 value: 24.324 - task: type: Clustering dataset: name: MTEB HALClusteringS2S type: lyon-nlp/clustering-hal-s2s config: default split: test revision: e06ebbbb123f8144bef1a5d18796f3dec9ae2915 metrics: - type: v_measure value: 30.833269778867116 - task: type: Clustering dataset: name: MTEB MLSUMClusteringP2P type: mlsum config: default split: test revision: b5d54f8f3b61ae17845046286940f03c6bc79bc7 metrics: - type: v_measure value: 50.0281928004713 - type: v_measure value: 43.699961510636534 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (fr) type: mteb/mtop_domain config: fr split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 96.68963357344191 - type: f1 value: 96.45175170820961 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (fr) type: mteb/mtop_intent config: fr split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 87.46946445349202 - type: f1 value: 65.79860440988624 - task: type: Classification dataset: name: MTEB MasakhaNEWSClassification (fra) type: masakhane/masakhanews config: fra split: test revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60 
metrics: - type: accuracy value: 82.60663507109005 - type: f1 value: 77.20462646604777 - task: type: Clustering dataset: name: MTEB MasakhaNEWSClusteringP2P (fra) type: masakhane/masakhanews config: fra split: test revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60 metrics: - type: v_measure value: 60.19311264967803 - type: v_measure value: 63.6235764409785 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (fr) type: mteb/amazon_massive_intent config: fr split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 81.65097511768661 - type: f1 value: 78.77796091490924 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (fr) type: mteb/amazon_massive_scenario config: fr split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 86.64425016812373 - type: f1 value: 85.4912728670017 - task: type: Retrieval dataset: name: MTEB MintakaRetrieval (fr) type: jinaai/mintakaqa config: fr split: test revision: efa78cc2f74bbcd21eff2261f9e13aebe40b814e metrics: - type: map_at_1 value: 35.913000000000004 - type: map_at_10 value: 48.147 - type: map_at_100 value: 48.91 - type: map_at_1000 value: 48.949 - type: map_at_3 value: 45.269999999999996 - type: map_at_5 value: 47.115 - type: mrr_at_1 value: 35.913000000000004 - type: mrr_at_10 value: 48.147 - type: mrr_at_100 value: 48.91 - type: mrr_at_1000 value: 48.949 - type: mrr_at_3 value: 45.269999999999996 - type: mrr_at_5 value: 47.115 - type: ndcg_at_1 value: 35.913000000000004 - type: ndcg_at_10 value: 54.03 - type: ndcg_at_100 value: 57.839 - type: ndcg_at_1000 value: 58.925000000000004 - type: ndcg_at_3 value: 48.217999999999996 - type: ndcg_at_5 value: 51.56699999999999 - type: precision_at_1 value: 35.913000000000004 - type: precision_at_10 value: 7.244000000000001 - type: precision_at_100 value: 0.9039999999999999 - type: precision_at_1000 value: 0.099 - type: precision_at_3 value: 18.905 - type: 
precision_at_5 value: 12.981000000000002 - type: recall_at_1 value: 35.913000000000004 - type: recall_at_10 value: 72.441 - type: recall_at_100 value: 90.41799999999999 - type: recall_at_1000 value: 99.099 - type: recall_at_3 value: 56.716 - type: recall_at_5 value: 64.90599999999999 - task: type: PairClassification dataset: name: MTEB OpusparcusPC (fr) type: GEM/opusparcus config: fr split: test revision: 9e9b1f8ef51616073f47f306f7f47dd91663f86a metrics: - type: cos_sim_accuracy value: 99.90069513406156 - type: cos_sim_ap value: 100.0 - type: cos_sim_f1 value: 99.95032290114257 - type: cos_sim_precision value: 100.0 - type: cos_sim_recall value: 99.90069513406156 - type: dot_accuracy value: 99.90069513406156 - type: dot_ap value: 100.0 - type: dot_f1 value: 99.95032290114257 - type: dot_precision value: 100.0 - type: dot_recall value: 99.90069513406156 - type: euclidean_accuracy value: 99.90069513406156 - type: euclidean_ap value: 100.0 - type: euclidean_f1 value: 99.95032290114257 - type: euclidean_precision value: 100.0 - type: euclidean_recall value: 99.90069513406156 - type: manhattan_accuracy value: 99.90069513406156 - type: manhattan_ap value: 100.0 - type: manhattan_f1 value: 99.95032290114257 - type: manhattan_precision value: 100.0 - type: manhattan_recall value: 99.90069513406156 - type: max_accuracy value: 99.90069513406156 - type: max_ap value: 100.0 - type: max_f1 value: 99.95032290114257 - task: type: PairClassification dataset: name: MTEB PawsX (fr) type: paws-x config: fr split: test revision: 8a04d940a42cd40658986fdd8e3da561533a3646 metrics: - type: cos_sim_accuracy value: 75.25 - type: cos_sim_ap value: 80.86376001270014 - type: cos_sim_f1 value: 73.65945437441204 - type: cos_sim_precision value: 64.02289452166802 - type: cos_sim_recall value: 86.71096345514951 - type: dot_accuracy value: 75.25 - type: dot_ap value: 80.93686107633002 - type: dot_f1 value: 73.65945437441204 - type: dot_precision value: 64.02289452166802 - type: dot_recall value: 
86.71096345514951 - type: euclidean_accuracy value: 75.25 - type: euclidean_ap value: 80.86379136218862 - type: euclidean_f1 value: 73.65945437441204 - type: euclidean_precision value: 64.02289452166802 - type: euclidean_recall value: 86.71096345514951 - type: manhattan_accuracy value: 75.3 - type: manhattan_ap value: 80.87826606097734 - type: manhattan_f1 value: 73.68421052631581 - type: manhattan_precision value: 64.0 - type: manhattan_recall value: 86.82170542635659 - type: max_accuracy value: 75.3 - type: max_ap value: 80.93686107633002 - type: max_f1 value: 73.68421052631581 - task: type: STS dataset: name: MTEB SICKFr type: Lajavaness/SICK-fr config: default split: test revision: e077ab4cf4774a1e36d86d593b150422fafd8e8a metrics: - type: cos_sim_pearson value: 81.42349425981143 - type: cos_sim_spearman value: 78.90454327031226 - type: euclidean_pearson value: 78.39086497435166 - type: euclidean_spearman value: 78.9046133980509 - type: manhattan_pearson value: 78.63743094286502 - type: manhattan_spearman value: 79.12136348449269 - task: type: STS dataset: name: MTEB STS22 (fr) type: mteb/sts22-crosslingual-sts config: fr split: test revision: eea2b4fe26a775864c896887d910b76a8098ad3f metrics: - type: cos_sim_pearson value: 81.452697919749 - type: cos_sim_spearman value: 82.58116836039301 - type: euclidean_pearson value: 81.04038478932786 - type: euclidean_spearman value: 82.58116836039301 - type: manhattan_pearson value: 81.37075396187771 - type: manhattan_spearman value: 82.73678231355368 - task: type: STS dataset: name: MTEB STSBenchmarkMultilingualSTS (fr) type: stsb_multi_mt config: fr split: test revision: 93d57ef91790589e3ce9c365164337a8a78b7632 metrics: - type: cos_sim_pearson value: 85.7419764013806 - type: cos_sim_spearman value: 85.46085808849622 - type: euclidean_pearson value: 83.70449639870063 - type: euclidean_spearman value: 85.46159013076233 - type: manhattan_pearson value: 83.95259510313929 - type: manhattan_spearman value: 85.8029724659458 - 
task: type: Summarization dataset: name: MTEB SummEvalFr type: lyon-nlp/summarization-summeval-fr-p2p config: default split: test revision: b385812de6a9577b6f4d0f88c6a6e35395a94054 metrics: - type: cos_sim_pearson value: 32.61063271753325 - type: cos_sim_spearman value: 31.454589417353603 - type: dot_pearson value: 32.6106288643431 - type: dot_spearman value: 31.454589417353603 - task: type: Reranking dataset: name: MTEB SyntecReranking type: lyon-nlp/mteb-fr-reranking-syntec-s2p config: default split: test revision: b205c5084a0934ce8af14338bf03feb19499c84d metrics: - type: map value: 84.31666666666666 - type: mrr value: 84.31666666666666 - task: type: Retrieval dataset: name: MTEB SyntecRetrieval type: lyon-nlp/mteb-fr-retrieval-syntec-s2p config: default split: test revision: 77f7e271bf4a92b24fce5119f3486b583ca016ff metrics: - type: map_at_1 value: 63.0 - type: map_at_10 value: 73.471 - type: map_at_100 value: 73.87 - type: map_at_1000 value: 73.87 - type: map_at_3 value: 70.5 - type: map_at_5 value: 73.05 - type: mrr_at_1 value: 63.0 - type: mrr_at_10 value: 73.471 - type: mrr_at_100 value: 73.87 - type: mrr_at_1000 value: 73.87 - type: mrr_at_3 value: 70.5 - type: mrr_at_5 value: 73.05 - type: ndcg_at_1 value: 63.0 - type: ndcg_at_10 value: 78.255 - type: ndcg_at_100 value: 79.88 - type: ndcg_at_1000 value: 79.88 - type: ndcg_at_3 value: 72.702 - type: ndcg_at_5 value: 77.264 - type: precision_at_1 value: 63.0 - type: precision_at_10 value: 9.3 - type: precision_at_100 value: 1.0 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 26.333000000000002 - type: precision_at_5 value: 18.0 - type: recall_at_1 value: 63.0 - type: recall_at_10 value: 93.0 - type: recall_at_100 value: 100.0 - type: recall_at_1000 value: 100.0 - type: recall_at_3 value: 79.0 - type: recall_at_5 value: 90.0 - task: type: Retrieval dataset: name: MTEB XPQARetrieval (fr) type: jinaai/xpqa config: fr split: test revision: c99d599f0a6ab9b85b065da6f9d94f9cf731679f metrics: - 
type: map_at_1 value: 40.338 - type: map_at_10 value: 61.927 - type: map_at_100 value: 63.361999999999995 - type: map_at_1000 value: 63.405 - type: map_at_3 value: 55.479 - type: map_at_5 value: 59.732 - type: mrr_at_1 value: 63.551 - type: mrr_at_10 value: 71.006 - type: mrr_at_100 value: 71.501 - type: mrr_at_1000 value: 71.509 - type: mrr_at_3 value: 69.07 - type: mrr_at_5 value: 70.165 - type: ndcg_at_1 value: 63.551 - type: ndcg_at_10 value: 68.297 - type: ndcg_at_100 value: 73.13199999999999 - type: ndcg_at_1000 value: 73.751 - type: ndcg_at_3 value: 62.999 - type: ndcg_at_5 value: 64.89 - type: precision_at_1 value: 63.551 - type: precision_at_10 value: 15.661 - type: precision_at_100 value: 1.9789999999999999 - type: precision_at_1000 value: 0.207 - type: precision_at_3 value: 38.273 - type: precision_at_5 value: 27.61 - type: recall_at_1 value: 40.338 - type: recall_at_10 value: 77.267 - type: recall_at_100 value: 95.892 - type: recall_at_1000 value: 99.75500000000001 - type: recall_at_3 value: 60.36 - type: recall_at_5 value: 68.825 - task: type: Clustering dataset: name: MTEB 8TagsClustering type: PL-MTEB/8tags-clustering config: default split: test revision: None metrics: - type: v_measure value: 51.36126303874126 - task: type: Classification dataset: name: MTEB AllegroReviews type: PL-MTEB/allegro-reviews config: default split: test revision: None metrics: - type: accuracy value: 67.13717693836979 - type: f1 value: 57.27609848003782 - task: type: Retrieval dataset: name: MTEB ArguAna-PL type: clarin-knext/arguana-pl config: default split: test revision: 63fc86750af76253e8c760fc9e534bbf24d260a2 metrics: - type: map_at_1 value: 35.276999999999994 - type: map_at_10 value: 51.086 - type: map_at_100 value: 51.788000000000004 - type: map_at_1000 value: 51.791 - type: map_at_3 value: 46.147 - type: map_at_5 value: 49.078 - type: mrr_at_1 value: 35.917 - type: mrr_at_10 value: 51.315999999999995 - type: mrr_at_100 value: 52.018 - type: mrr_at_1000 value: 
52.022 - type: mrr_at_3 value: 46.349000000000004 - type: mrr_at_5 value: 49.297000000000004 - type: ndcg_at_1 value: 35.276999999999994 - type: ndcg_at_10 value: 59.870999999999995 - type: ndcg_at_100 value: 62.590999999999994 - type: ndcg_at_1000 value: 62.661 - type: ndcg_at_3 value: 49.745 - type: ndcg_at_5 value: 55.067 - type: precision_at_1 value: 35.276999999999994 - type: precision_at_10 value: 8.791 - type: precision_at_100 value: 0.991 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 20.057 - type: precision_at_5 value: 14.637 - type: recall_at_1 value: 35.276999999999994 - type: recall_at_10 value: 87.909 - type: recall_at_100 value: 99.14699999999999 - type: recall_at_1000 value: 99.644 - type: recall_at_3 value: 60.171 - type: recall_at_5 value: 73.18599999999999 - task: type: Classification dataset: name: MTEB CBD type: PL-MTEB/cbd config: default split: test revision: None metrics: - type: accuracy value: 78.03000000000002 - type: ap value: 29.12548553897622 - type: f1 value: 66.54857118886073 - task: type: PairClassification dataset: name: MTEB CDSC-E type: PL-MTEB/cdsce-pairclassification config: default split: test revision: None metrics: - type: cos_sim_accuracy value: 89.0 - type: cos_sim_ap value: 76.75437826834582 - type: cos_sim_f1 value: 66.4850136239782 - type: cos_sim_precision value: 68.92655367231639 - type: cos_sim_recall value: 64.21052631578948 - type: dot_accuracy value: 89.0 - type: dot_ap value: 76.75437826834582 - type: dot_f1 value: 66.4850136239782 - type: dot_precision value: 68.92655367231639 - type: dot_recall value: 64.21052631578948 - type: euclidean_accuracy value: 89.0 - type: euclidean_ap value: 76.75437826834582 - type: euclidean_f1 value: 66.4850136239782 - type: euclidean_precision value: 68.92655367231639 - type: euclidean_recall value: 64.21052631578948 - type: manhattan_accuracy value: 89.0 - type: manhattan_ap value: 76.66074220647083 - type: manhattan_f1 value: 66.47058823529412 - type: 
manhattan_precision value: 75.33333333333333 - type: manhattan_recall value: 59.473684210526315 - type: max_accuracy value: 89.0 - type: max_ap value: 76.75437826834582 - type: max_f1 value: 66.4850136239782 - task: type: STS dataset: name: MTEB CDSC-R type: PL-MTEB/cdscr-sts config: default split: test revision: None metrics: - type: cos_sim_pearson value: 93.12903172428328 - type: cos_sim_spearman value: 92.66381487060741 - type: euclidean_pearson value: 90.37278396708922 - type: euclidean_spearman value: 92.66381487060741 - type: manhattan_pearson value: 90.32503296540962 - type: manhattan_spearman value: 92.6902938354313 - task: type: Retrieval dataset: name: MTEB DBPedia-PL type: clarin-knext/dbpedia-pl config: default split: test revision: 76afe41d9af165cc40999fcaa92312b8b012064a metrics: - type: map_at_1 value: 8.83 - type: map_at_10 value: 18.326 - type: map_at_100 value: 26.496 - type: map_at_1000 value: 28.455000000000002 - type: map_at_3 value: 12.933 - type: map_at_5 value: 15.168000000000001 - type: mrr_at_1 value: 66.0 - type: mrr_at_10 value: 72.76700000000001 - type: mrr_at_100 value: 73.203 - type: mrr_at_1000 value: 73.219 - type: mrr_at_3 value: 71.458 - type: mrr_at_5 value: 72.246 - type: ndcg_at_1 value: 55.375 - type: ndcg_at_10 value: 41.3 - type: ndcg_at_100 value: 45.891 - type: ndcg_at_1000 value: 52.905 - type: ndcg_at_3 value: 46.472 - type: ndcg_at_5 value: 43.734 - type: precision_at_1 value: 66.0 - type: precision_at_10 value: 33.074999999999996 - type: precision_at_100 value: 11.094999999999999 - type: precision_at_1000 value: 2.374 - type: precision_at_3 value: 48.583 - type: precision_at_5 value: 42.0 - type: recall_at_1 value: 8.83 - type: recall_at_10 value: 22.587 - type: recall_at_100 value: 50.61600000000001 - type: recall_at_1000 value: 73.559 - type: recall_at_3 value: 13.688 - type: recall_at_5 value: 16.855 - task: type: Retrieval dataset: name: MTEB FiQA-PL type: clarin-knext/fiqa-pl config: default split: test revision: 
2e535829717f8bf9dc829b7f911cc5bbd4e6608e metrics: - type: map_at_1 value: 20.587 - type: map_at_10 value: 33.095 - type: map_at_100 value: 35.24 - type: map_at_1000 value: 35.429 - type: map_at_3 value: 28.626 - type: map_at_5 value: 31.136999999999997 - type: mrr_at_1 value: 40.586 - type: mrr_at_10 value: 49.033 - type: mrr_at_100 value: 49.952999999999996 - type: mrr_at_1000 value: 49.992 - type: mrr_at_3 value: 46.553 - type: mrr_at_5 value: 48.035 - type: ndcg_at_1 value: 40.586 - type: ndcg_at_10 value: 41.046 - type: ndcg_at_100 value: 48.586 - type: ndcg_at_1000 value: 51.634 - type: ndcg_at_3 value: 36.773 - type: ndcg_at_5 value: 38.389 - type: precision_at_1 value: 40.586 - type: precision_at_10 value: 11.466 - type: precision_at_100 value: 1.909 - type: precision_at_1000 value: 0.245 - type: precision_at_3 value: 24.434 - type: precision_at_5 value: 18.426000000000002 - type: recall_at_1 value: 20.587 - type: recall_at_10 value: 47.986000000000004 - type: recall_at_100 value: 75.761 - type: recall_at_1000 value: 94.065 - type: recall_at_3 value: 33.339 - type: recall_at_5 value: 39.765 - task: type: Retrieval dataset: name: MTEB HotpotQA-PL type: clarin-knext/hotpotqa-pl config: default split: test revision: a0bd479ac97b4ccb5bd6ce320c415d0bb4beb907 metrics: - type: map_at_1 value: 40.878 - type: map_at_10 value: 58.775999999999996 - type: map_at_100 value: 59.632 - type: map_at_1000 value: 59.707 - type: map_at_3 value: 56.074 - type: map_at_5 value: 57.629 - type: mrr_at_1 value: 81.756 - type: mrr_at_10 value: 86.117 - type: mrr_at_100 value: 86.299 - type: mrr_at_1000 value: 86.30600000000001 - type: mrr_at_3 value: 85.345 - type: mrr_at_5 value: 85.832 - type: ndcg_at_1 value: 81.756 - type: ndcg_at_10 value: 67.608 - type: ndcg_at_100 value: 70.575 - type: ndcg_at_1000 value: 71.99600000000001 - type: ndcg_at_3 value: 63.723 - type: ndcg_at_5 value: 65.70700000000001 - type: precision_at_1 value: 81.756 - type: precision_at_10 value: 13.619 - type: 
precision_at_100 value: 1.5939999999999999 - type: precision_at_1000 value: 0.178 - type: precision_at_3 value: 39.604 - type: precision_at_5 value: 25.332 - type: recall_at_1 value: 40.878 - type: recall_at_10 value: 68.096 - type: recall_at_100 value: 79.696 - type: recall_at_1000 value: 89.082 - type: recall_at_3 value: 59.406000000000006 - type: recall_at_5 value: 63.329 - task: type: Retrieval dataset: name: MTEB MSMARCO-PL type: clarin-knext/msmarco-pl config: default split: test revision: 8634c07806d5cce3a6138e260e59b81760a0a640 metrics: - type: map_at_1 value: 2.1839999999999997 - type: map_at_10 value: 11.346 - type: map_at_100 value: 30.325000000000003 - type: map_at_1000 value: 37.806 - type: map_at_3 value: 4.842 - type: map_at_5 value: 6.891 - type: mrr_at_1 value: 86.047 - type: mrr_at_10 value: 89.14699999999999 - type: mrr_at_100 value: 89.46600000000001 - type: mrr_at_1000 value: 89.46600000000001 - type: mrr_at_3 value: 89.14699999999999 - type: mrr_at_5 value: 89.14699999999999 - type: ndcg_at_1 value: 67.829 - type: ndcg_at_10 value: 62.222 - type: ndcg_at_100 value: 55.337 - type: ndcg_at_1000 value: 64.076 - type: ndcg_at_3 value: 68.12700000000001 - type: ndcg_at_5 value: 64.987 - type: precision_at_1 value: 86.047 - type: precision_at_10 value: 69.535 - type: precision_at_100 value: 32.93 - type: precision_at_1000 value: 6.6049999999999995 - type: precision_at_3 value: 79.845 - type: precision_at_5 value: 75.349 - type: recall_at_1 value: 2.1839999999999997 - type: recall_at_10 value: 12.866 - type: recall_at_100 value: 43.505 - type: recall_at_1000 value: 72.366 - type: recall_at_3 value: 4.947 - type: recall_at_5 value: 7.192 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (pl) type: mteb/amazon_massive_intent config: pl split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 80.75319435104238 - type: f1 value: 77.58961444860606 - task: type: Classification dataset: name: 
MTEB MassiveScenarioClassification (pl) type: mteb/amazon_massive_scenario config: pl split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 85.54472091459313 - type: f1 value: 84.29498563572106 - task: type: Retrieval dataset: name: MTEB NFCorpus-PL type: clarin-knext/nfcorpus-pl config: default split: test revision: 9a6f9567fda928260afed2de480d79c98bf0bec0 metrics: - type: map_at_1 value: 4.367 - type: map_at_10 value: 10.38 - type: map_at_100 value: 13.516 - type: map_at_1000 value: 14.982000000000001 - type: map_at_3 value: 7.367 - type: map_at_5 value: 8.59 - type: mrr_at_1 value: 41.486000000000004 - type: mrr_at_10 value: 48.886 - type: mrr_at_100 value: 49.657000000000004 - type: mrr_at_1000 value: 49.713 - type: mrr_at_3 value: 46.904 - type: mrr_at_5 value: 48.065000000000005 - type: ndcg_at_1 value: 40.402 - type: ndcg_at_10 value: 30.885 - type: ndcg_at_100 value: 28.393 - type: ndcg_at_1000 value: 37.428 - type: ndcg_at_3 value: 35.394999999999996 - type: ndcg_at_5 value: 33.391999999999996 - type: precision_at_1 value: 41.486000000000004 - type: precision_at_10 value: 23.437 - type: precision_at_100 value: 7.638 - type: precision_at_1000 value: 2.0389999999999997 - type: precision_at_3 value: 32.817 - type: precision_at_5 value: 28.915999999999997 - type: recall_at_1 value: 4.367 - type: recall_at_10 value: 14.655000000000001 - type: recall_at_100 value: 29.665999999999997 - type: recall_at_1000 value: 62.073 - type: recall_at_3 value: 8.51 - type: recall_at_5 value: 10.689 - task: type: Retrieval dataset: name: MTEB NQ-PL type: clarin-knext/nq-pl config: default split: test revision: f171245712cf85dd4700b06bef18001578d0ca8d metrics: - type: map_at_1 value: 28.616000000000003 - type: map_at_10 value: 41.626000000000005 - type: map_at_100 value: 42.689 - type: map_at_1000 value: 42.733 - type: map_at_3 value: 37.729 - type: map_at_5 value: 39.879999999999995 - type: mrr_at_1 value: 32.068000000000005 - type: 
mrr_at_10 value: 44.029 - type: mrr_at_100 value: 44.87 - type: mrr_at_1000 value: 44.901 - type: mrr_at_3 value: 40.687 - type: mrr_at_5 value: 42.625 - type: ndcg_at_1 value: 32.068000000000005 - type: ndcg_at_10 value: 48.449999999999996 - type: ndcg_at_100 value: 53.13 - type: ndcg_at_1000 value: 54.186 - type: ndcg_at_3 value: 40.983999999999995 - type: ndcg_at_5 value: 44.628 - type: precision_at_1 value: 32.068000000000005 - type: precision_at_10 value: 7.9750000000000005 - type: precision_at_100 value: 1.061 - type: precision_at_1000 value: 0.116 - type: precision_at_3 value: 18.404999999999998 - type: precision_at_5 value: 13.111 - type: recall_at_1 value: 28.616000000000003 - type: recall_at_10 value: 66.956 - type: recall_at_100 value: 87.657 - type: recall_at_1000 value: 95.548 - type: recall_at_3 value: 47.453 - type: recall_at_5 value: 55.87800000000001 - task: type: Classification dataset: name: MTEB PAC type: laugustyniak/abusive-clauses-pl config: default split: test revision: None metrics: - type: accuracy value: 69.04141326382856 - type: ap value: 77.47589122111044 - type: f1 value: 66.6332277374775 - task: type: PairClassification dataset: name: MTEB PPC type: PL-MTEB/ppc-pairclassification config: default split: test revision: None metrics: - type: cos_sim_accuracy value: 86.4 - type: cos_sim_ap value: 94.1044939667201 - type: cos_sim_f1 value: 88.78048780487805 - type: cos_sim_precision value: 87.22044728434504 - type: cos_sim_recall value: 90.39735099337747 - type: dot_accuracy value: 86.4 - type: dot_ap value: 94.1044939667201 - type: dot_f1 value: 88.78048780487805 - type: dot_precision value: 87.22044728434504 - type: dot_recall value: 90.39735099337747 - type: euclidean_accuracy value: 86.4 - type: euclidean_ap value: 94.1044939667201 - type: euclidean_f1 value: 88.78048780487805 - type: euclidean_precision value: 87.22044728434504 - type: euclidean_recall value: 90.39735099337747 - type: manhattan_accuracy value: 86.4 - type: 
manhattan_ap value: 94.11438365697387 - type: manhattan_f1 value: 88.77968877968877 - type: manhattan_precision value: 87.84440842787681 - type: manhattan_recall value: 89.73509933774835 - type: max_accuracy value: 86.4 - type: max_ap value: 94.11438365697387 - type: max_f1 value: 88.78048780487805 - task: type: PairClassification dataset: name: MTEB PSC type: PL-MTEB/psc-pairclassification config: default split: test revision: None metrics: - type: cos_sim_accuracy value: 97.86641929499072 - type: cos_sim_ap value: 99.36904211868182 - type: cos_sim_f1 value: 96.56203288490283 - type: cos_sim_precision value: 94.72140762463343 - type: cos_sim_recall value: 98.47560975609755 - type: dot_accuracy value: 97.86641929499072 - type: dot_ap value: 99.36904211868183 - type: dot_f1 value: 96.56203288490283 - type: dot_precision value: 94.72140762463343 - type: dot_recall value: 98.47560975609755 - type: euclidean_accuracy value: 97.86641929499072 - type: euclidean_ap value: 99.36904211868183 - type: euclidean_f1 value: 96.56203288490283 - type: euclidean_precision value: 94.72140762463343 - type: euclidean_recall value: 98.47560975609755 - type: manhattan_accuracy value: 98.14471243042672 - type: manhattan_ap value: 99.43359540492416 - type: manhattan_f1 value: 96.98795180722892 - type: manhattan_precision value: 95.83333333333334 - type: manhattan_recall value: 98.17073170731707 - type: max_accuracy value: 98.14471243042672 - type: max_ap value: 99.43359540492416 - type: max_f1 value: 96.98795180722892 - task: type: Classification dataset: name: MTEB PolEmo2.0-IN type: PL-MTEB/polemo2_in config: default split: test revision: None metrics: - type: accuracy value: 89.39058171745152 - type: f1 value: 86.8552093529568 - task: type: Classification dataset: name: MTEB PolEmo2.0-OUT type: PL-MTEB/polemo2_out config: default split: test revision: None metrics: - type: accuracy value: 74.97975708502024 - type: f1 value: 58.73081628832407 - task: type: Retrieval dataset: name: MTEB 
Quora-PL type: clarin-knext/quora-pl config: default split: test revision: 0be27e93455051e531182b85e85e425aba12e9d4 metrics: - type: map_at_1 value: 64.917 - type: map_at_10 value: 78.74600000000001 - type: map_at_100 value: 79.501 - type: map_at_1000 value: 79.524 - type: map_at_3 value: 75.549 - type: map_at_5 value: 77.495 - type: mrr_at_1 value: 74.9 - type: mrr_at_10 value: 82.112 - type: mrr_at_100 value: 82.314 - type: mrr_at_1000 value: 82.317 - type: mrr_at_3 value: 80.745 - type: mrr_at_5 value: 81.607 - type: ndcg_at_1 value: 74.83999999999999 - type: ndcg_at_10 value: 83.214 - type: ndcg_at_100 value: 84.997 - type: ndcg_at_1000 value: 85.207 - type: ndcg_at_3 value: 79.547 - type: ndcg_at_5 value: 81.46600000000001 - type: precision_at_1 value: 74.83999999999999 - type: precision_at_10 value: 12.822 - type: precision_at_100 value: 1.506 - type: precision_at_1000 value: 0.156 - type: precision_at_3 value: 34.903 - type: precision_at_5 value: 23.16 - type: recall_at_1 value: 64.917 - type: recall_at_10 value: 92.27199999999999 - type: recall_at_100 value: 98.715 - type: recall_at_1000 value: 99.854 - type: recall_at_3 value: 82.04599999999999 - type: recall_at_5 value: 87.2 - task: type: Retrieval dataset: name: MTEB SCIDOCS-PL type: clarin-knext/scidocs-pl config: default split: test revision: 45452b03f05560207ef19149545f168e596c9337 metrics: - type: map_at_1 value: 3.51 - type: map_at_10 value: 9.046999999999999 - type: map_at_100 value: 10.823 - type: map_at_1000 value: 11.144 - type: map_at_3 value: 6.257 - type: map_at_5 value: 7.648000000000001 - type: mrr_at_1 value: 17.299999999999997 - type: mrr_at_10 value: 27.419 - type: mrr_at_100 value: 28.618 - type: mrr_at_1000 value: 28.685 - type: mrr_at_3 value: 23.817 - type: mrr_at_5 value: 25.927 - type: ndcg_at_1 value: 17.299999999999997 - type: ndcg_at_10 value: 16.084 - type: ndcg_at_100 value: 23.729 - type: ndcg_at_1000 value: 29.476999999999997 - type: ndcg_at_3 value: 14.327000000000002 - 
type: ndcg_at_5 value: 13.017999999999999 - type: precision_at_1 value: 17.299999999999997 - type: precision_at_10 value: 8.63 - type: precision_at_100 value: 1.981 - type: precision_at_1000 value: 0.336 - type: precision_at_3 value: 13.4 - type: precision_at_5 value: 11.700000000000001 - type: recall_at_1 value: 3.51 - type: recall_at_10 value: 17.518 - type: recall_at_100 value: 40.275 - type: recall_at_1000 value: 68.203 - type: recall_at_3 value: 8.155 - type: recall_at_5 value: 11.875 - task: type: PairClassification dataset: name: MTEB SICK-E-PL type: PL-MTEB/sicke-pl-pairclassification config: default split: test revision: None metrics: - type: cos_sim_accuracy value: 86.30248675091724 - type: cos_sim_ap value: 83.6756734006714 - type: cos_sim_f1 value: 74.97367497367497 - type: cos_sim_precision value: 73.91003460207612 - type: cos_sim_recall value: 76.06837606837607 - type: dot_accuracy value: 86.30248675091724 - type: dot_ap value: 83.6756734006714 - type: dot_f1 value: 74.97367497367497 - type: dot_precision value: 73.91003460207612 - type: dot_recall value: 76.06837606837607 - type: euclidean_accuracy value: 86.30248675091724 - type: euclidean_ap value: 83.67566984333091 - type: euclidean_f1 value: 74.97367497367497 - type: euclidean_precision value: 73.91003460207612 - type: euclidean_recall value: 76.06837606837607 - type: manhattan_accuracy value: 86.28210354667753 - type: manhattan_ap value: 83.64216119130171 - type: manhattan_f1 value: 74.92152075340078 - type: manhattan_precision value: 73.4107997265892 - type: manhattan_recall value: 76.49572649572649 - type: max_accuracy value: 86.30248675091724 - type: max_ap value: 83.6756734006714 - type: max_f1 value: 74.97367497367497 - task: type: STS dataset: name: MTEB SICK-R-PL type: PL-MTEB/sickr-pl-sts config: default split: test revision: None metrics: - type: cos_sim_pearson value: 82.23295940859121 - type: cos_sim_spearman value: 78.89329160768719 - type: euclidean_pearson value: 79.56019107076818 
- type: euclidean_spearman value: 78.89330209904084 - type: manhattan_pearson value: 79.76098513973719 - type: manhattan_spearman value: 79.05490162570123 - task: type: STS dataset: name: MTEB STS22 (pl) type: mteb/sts22-crosslingual-sts config: pl split: test revision: eea2b4fe26a775864c896887d910b76a8098ad3f metrics: - type: cos_sim_pearson value: 37.732606308062486 - type: cos_sim_spearman value: 41.01645667030284 - type: euclidean_pearson value: 26.61722556367085 - type: euclidean_spearman value: 41.01645667030284 - type: manhattan_pearson value: 26.60917378970807 - type: manhattan_spearman value: 41.51335727617614 - task: type: Retrieval dataset: name: MTEB SciFact-PL type: clarin-knext/scifact-pl config: default split: test revision: 47932a35f045ef8ed01ba82bf9ff67f6e109207e metrics: - type: map_at_1 value: 54.31700000000001 - type: map_at_10 value: 65.564 - type: map_at_100 value: 66.062 - type: map_at_1000 value: 66.08699999999999 - type: map_at_3 value: 62.592999999999996 - type: map_at_5 value: 63.888 - type: mrr_at_1 value: 56.99999999999999 - type: mrr_at_10 value: 66.412 - type: mrr_at_100 value: 66.85900000000001 - type: mrr_at_1000 value: 66.88 - type: mrr_at_3 value: 64.22200000000001 - type: mrr_at_5 value: 65.206 - type: ndcg_at_1 value: 56.99999999999999 - type: ndcg_at_10 value: 70.577 - type: ndcg_at_100 value: 72.879 - type: ndcg_at_1000 value: 73.45 - type: ndcg_at_3 value: 65.5 - type: ndcg_at_5 value: 67.278 - type: precision_at_1 value: 56.99999999999999 - type: precision_at_10 value: 9.667 - type: precision_at_100 value: 1.083 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 26.0 - type: precision_at_5 value: 16.933 - type: recall_at_1 value: 54.31700000000001 - type: recall_at_10 value: 85.056 - type: recall_at_100 value: 95.667 - type: recall_at_1000 value: 100.0 - type: recall_at_3 value: 71.0 - type: recall_at_5 value: 75.672 - task: type: Retrieval dataset: name: MTEB TRECCOVID-PL type: 
clarin-knext/trec-covid-pl config: default split: test revision: 81bcb408f33366c2a20ac54adafad1ae7e877fdd metrics: - type: map_at_1 value: 0.245 - type: map_at_10 value: 2.051 - type: map_at_100 value: 12.009 - type: map_at_1000 value: 27.448 - type: map_at_3 value: 0.721 - type: map_at_5 value: 1.13 - type: mrr_at_1 value: 88.0 - type: mrr_at_10 value: 93.0 - type: mrr_at_100 value: 93.0 - type: mrr_at_1000 value: 93.0 - type: mrr_at_3 value: 93.0 - type: mrr_at_5 value: 93.0 - type: ndcg_at_1 value: 85.0 - type: ndcg_at_10 value: 80.303 - type: ndcg_at_100 value: 61.23499999999999 - type: ndcg_at_1000 value: 52.978 - type: ndcg_at_3 value: 84.419 - type: ndcg_at_5 value: 82.976 - type: precision_at_1 value: 88.0 - type: precision_at_10 value: 83.39999999999999 - type: precision_at_100 value: 61.96 - type: precision_at_1000 value: 22.648 - type: precision_at_3 value: 89.333 - type: precision_at_5 value: 87.2 - type: recall_at_1 value: 0.245 - type: recall_at_10 value: 2.193 - type: recall_at_100 value: 14.938 - type: recall_at_1000 value: 48.563 - type: recall_at_3 value: 0.738 - type: recall_at_5 value: 1.173 ---

# beethogedeon/gte-Qwen2-7B-instruct-Q4_K_M-GGUF

This model was converted to GGUF format from [`Alibaba-NLP/gte-Qwen2-7B-instruct`](https://huggingface.co/Alibaba-NLP/gte-Qwen2-7B-instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Alibaba-NLP/gte-Qwen2-7B-instruct) for more details on the model.

## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux):

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.
### CLI:

```bash
llama-cli --hf-repo beethogedeon/gte-Qwen2-7B-instruct-Q4_K_M-GGUF --hf-file gte-qwen2-7b-instruct-q4_k_m.gguf -p "The meaning to life and the universe is"
```

### Server:

```bash
llama-server --hf-repo beethogedeon/gte-Qwen2-7B-instruct-Q4_K_M-GGUF --hf-file gte-qwen2-7b-instruct-q4_k_m.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.

```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).

```
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.

```
./llama-cli --hf-repo beethogedeon/gte-Qwen2-7B-instruct-Q4_K_M-GGUF --hf-file gte-qwen2-7b-instruct-q4_k_m.gguf -p "The meaning to life and the universe is"
```

or

```
./llama-server --hf-repo beethogedeon/gte-Qwen2-7B-instruct-Q4_K_M-GGUF --hf-file gte-qwen2-7b-instruct-q4_k_m.gguf -c 2048
```
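Once `llama-server` is running (as in the Server command above), it exposes a small HTTP API. The sketch below queries its `/completion` endpoint from Python using only the standard library; the host and port are the llama-server defaults (`127.0.0.1:8080`) — adjust them if you started the server differently.

```python
import json
import urllib.request

SERVER_URL = "http://127.0.0.1:8080"  # default llama-server address; adjust if needed

def build_payload(prompt: str, n_predict: int = 64) -> bytes:
    """Encode a /completion request body as JSON bytes."""
    return json.dumps({"prompt": prompt, "n_predict": n_predict}).encode("utf-8")

def complete(prompt: str, n_predict: int = 64) -> str:
    """POST to llama-server's /completion endpoint and return the generated text."""
    req = urllib.request.Request(
        SERVER_URL + "/completion",
        data=build_payload(prompt, n_predict),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["content"]
```

With the server running, `complete("The meaning to life and the universe is")` returns the generated continuation as a string.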
[ "BIOSSES", "SCIFACT" ]
zjunlp/OneKE
zjunlp
text-generation
[ "transformers", "pytorch", "llama", "text-generation", "en", "zh", "dataset:zjunlp/iepile", "dataset:zjunlp/InstructIE", "arxiv:2402.14710", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2024-02-23T09:28:16Z
2024-05-06T09:49:31+00:00
353
42
--- datasets: - zjunlp/iepile - zjunlp/InstructIE language: - en - zh license: cc-by-nc-sa-4.0 --- <p align="center"> <a href="https://github.com/zjunlp/deepke"> <img src="assets/oneke_logo.png" width="400"/></a> </p> <p align="center"> <a href="https://oneke.openkg.cn/"> <img alt="Documentation" src="https://img.shields.io/badge/demo-website-blue"> </a> <a href="https://pypi.org/project/deepke/#files"> <img alt="PyPI" src="https://img.shields.io/pypi/v/deepke"> </a> <a href="https://github.com/zjunlp/DeepKE/blob/master/LICENSE"> <img alt="GitHub" src="https://img.shields.io/github/license/zjunlp/deepke"> </a> <a href="http://zjunlp.github.io/DeepKE"> <img alt="Documentation" src="https://img.shields.io/badge/doc-website-red"> </a> </p> <h1 align="center"> <p>OneKE: A Bilingual Large Language Model for <br>Knowledge Extraction</p> </h1> - [What is OneKE?](#what-is-oneke) - [How is OneKE trained?](#how-is-oneke-trained) - [Getting Started with OneKE](#getting-started-with-oneke) - [Quick Start](#quick-start) - [Advanced Use of OneKE](#advanced-use-of-oneke) - [OneKE Instruction Format](#oneke-instruction-format) - [Conversion of OneKE Instruction Format](#conversion-of-oneke-instruction-format) - [Customized Schema Description Instructions](#customized-schema-description-instructions) - [Evaluation](#evaluation) - [Continue Training](#continue-training) - [Citation](#citation) ## What is OneKE? OneKE is a large-scale model framework for knowledge extraction jointly developed by Ant Group and Zhejiang University. It supports generalized knowledge extraction in both Chinese and English, across multiple domains and tasks, and provides comprehensive toolchain support. OneKE has been contributed to the OpenKG open knowledge graph community as an open-source release. Knowledge construction based on unstructured documents has always been one of the key challenges for the large-scale implementation of knowledge graphs. 
The high fragmentation and unstructured nature of real-world information, along with the substantial disparities between extracted content and its natural language expression, often result in the suboptimal performance of large language models on information extraction tasks. Natural language text often contains ambiguities, polysemies, and metaphors arising from implicit and long-distance context associations, posing significant challenges for knowledge extraction. In response to these issues, Ant Group and Zhejiang University leveraged their years of expertise in knowledge graphs and natural language processing to jointly construct and upgrade the knowledge extraction capabilities of Ant's large-scale model "BaiLing". They released the bilingual knowledge extraction framework OneKE, which includes a version based on full-parameter fine-tuning of Chinese-Alpaca-2-13B. Evaluation results show that OneKE achieves relatively good performance on several fully supervised and zero-shot entity/relation/event extraction tasks. A unified knowledge extraction framework covers a wide range of application scenarios and can significantly reduce the construction cost of domain-specific knowledge graphs. By extracting structured knowledge from massive datasets to build high-quality knowledge graphs and establishing logical associations between knowledge elements, it enables interpretable inference and decision-making. It can also enhance large models by mitigating hallucination and improving stability, accelerating vertical-domain applications of large models. For example, in the medical field, knowledge extraction can convert doctors' experience into structured, rule-based knowledge, supporting controllable auxiliary diagnosis and medical Q&A systems. In the financial sector, it can extract financial indicators, risk events, causal logic, and industry chains for automated financial report generation, risk prediction, and industry chain analysis. 
In the public sector, it can facilitate knowledge-based management of government regulations, enhancing the efficiency and accuracy of public services. <p align="center" width="100%"> <a href="" target="_blank"><img src="assets/oneke.gif" alt="OneKE" style="width: 100%; min-width: 20px; display: block; margin: auto;"></a> </p> ## How is OneKE trained? OneKE mainly focuses on schema-generalizable information extraction. Due to issues such as non-standard formats, noisy data, and lack of diversity in existing extraction instruction data, OneKE adopted techniques such as normalization and cleaning of extraction instructions, hard negative sample collection, and schema-based batched instruction construction, as shown in the illustration. For more detailed information, refer to the paper "[IEPile: Unearthing Large-Scale Schema-Based Information Extraction Corpus](https://arxiv.org/abs/2402.14710) [[Github](https://github.com/zjunlp/IEPile)]". The zero-shot generalization comparison results of OneKE with other large models are as follows: * `NER-en`: CrossNER_AI, CrossNER_literature, CrossNER_music, CrossNER_politics, CrossNER_science * `NER-zh`: WEIBONER, boson * `RE-zh`: COAE2016, IPRE, SKE2020 * `RE-en`: FewRel, Wiki-ZSL * `EE-en`: CrudeOilNews, WikiEvents, RAMS * `EE-zh`: FewFC, CCF Law <p align="center" width="50%"> <a href="" target="_blank"><img src="assets/oneke_results.png" alt="OneKE" style="width: 50%; min-width: 20px; display: block; margin: auto;"></a> </p> ![zero_en](./assets/zero_en.jpg) ![zero_zh](./assets/zero_zh.jpg) <details> <summary><b>Supervision Results</b></summary> ![supervision_ner](./assets/supervision_ner.jpg) ![supervision_re](./assets/supervision_re.jpg) ![supervision_ee](./assets/supervision_ee.jpg) </details> ## Getting Started with OneKE ### Quick Start It is recommended to have at least **20GB of VRAM** for training and inference. 
```python import torch from transformers import ( AutoConfig, AutoTokenizer, AutoModelForCausalLM, GenerationConfig, BitsAndBytesConfig ) device = torch.device("cuda" if torch.cuda.is_available() else "cpu") model_path = 'zjunlp/OneKE' config = AutoConfig.from_pretrained(model_path, trust_remote_code=True) tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True) # 4-bit Quantized OneKE quantization_config=BitsAndBytesConfig( load_in_4bit=True, llm_int8_threshold=6.0, llm_int8_has_fp16_weight=False, bnb_4bit_compute_dtype=torch.bfloat16, bnb_4bit_use_double_quant=True, bnb_4bit_quant_type="nf4", ) model = AutoModelForCausalLM.from_pretrained( model_path, config=config, device_map="auto", quantization_config=quantization_config, torch_dtype=torch.bfloat16, trust_remote_code=True, ) model.eval() system_prompt = '<<SYS>>\nYou are a helpful assistant. 你是一个乐于助人的助手。\n<</SYS>>\n\n' sintruct = "{\"instruction\": \"You are an expert in named entity recognition. Please extract entities that match the schema definition from the input. Return an empty list if the entity type does not exist. 
Please respond in the format of a JSON string.\", \"schema\": [\"person\", \"organization\", \"else\", \"location\"], \"input\": \"284 Robert Allenby ( Australia ) 69 71 71 73 , Miguel Angel Martin ( Spain ) 75 70 71 68 ( Allenby won at first play-off hole )\"}" sintruct = '[INST] ' + system_prompt + sintruct + '[/INST]' input_ids = tokenizer.encode(sintruct, return_tensors="pt").to(device) input_length = input_ids.size(1) generation_output = model.generate(input_ids=input_ids, generation_config=GenerationConfig(max_length=1024, max_new_tokens=512, return_dict_in_generate=True)) generation_output = generation_output.sequences[0] generation_output = generation_output[input_length:] output = tokenizer.decode(generation_output, skip_special_tokens=True) print(output) ``` For more detailed inference, please refer to [DeepKE-llm/InstructKGC/6.1.2IE专用模型](https://github.com/zjunlp/DeepKE/blob/main/example/llm/InstructKGC/README_CN.md/#612ie%E4%B8%93%E7%94%A8%E6%A8%A1%E5%9E%8B). ### Advanced Use of OneKE ### OneKE Instruction Format The instructions in OneKE are formatted in a dictionary-type string similar to JSON. It consists of three fields: (1) **`'instruction'`**, which is the task description, specifies in natural language the role the model plays and the task to be completed; (2) **`'schema'`**, a list of labels to be extracted, clearly indicates the key fields of the information to be extracted, reflecting the user's needs, and is dynamic and changeable; (3) **`'input'`**, refers to the source text for information extraction. Below are examples of instructions for various tasks: <details> <summary><b>Named Entity Recognition (NER)</b></summary> ```json { "instruction": "You are an expert specializing in entity extraction. Please extract entities that comply with the schema definition from the input; return an empty list for non-existent entity types. 
Please respond in the JSON string format.", "schema": ["Person Name", "Education", "Position", "Nationality"], "input": "Mr. Liu Zhijian: Born in 1956, Chinese nationality, no permanent residency abroad, member of the Communist Party, associate degree, senior economist." } ``` </details> <details> <summary><b>Relation Extraction (RE)</b></summary> ```json { "instruction": "You are an expert specializing in relation extraction. Please extract relationship triples that comply with the schema definition from the input; return an empty list for non-existent relationships. Please respond in the JSON string format.", "schema": ["Father", "Husband", "Postal Code", "Mother"], "input": "Ding Long took out his life savings of $12,000, which without a doubt was a substantial amount at the end of the 19th century, plus Carpentier's donation, they both funded Columbia University's sinology research together." } ``` </details> <details> <summary><b>Knowledge Graph Construction (KGC)</b></summary> ```json { "instruction": "You are an expert in structuring knowledge about graph entities. Based on the schema description of the input entity type, extract the corresponding entity instances and their property information from the text; do not output non-existent properties, return a list if there are multiple values for a property, and provide the output in a parseable json format.", "schema": [ { "entity_type": "Person", "attributes": ["Chinese Name", "English Name", "Ancestral Home", "Date of Birth", "Place of Birth", "Occupation", "Alma Mater", "Works", "Awards"] } ], "input": "Jay Chou (Jay Chou), born on January 18, 1979, in New Taipei City, Taiwan Province, ancestral home in Yongchun County, Quanzhou City, Fujian Province, Chinese pop singer, musician, actor, director, screenwriter, graduated from Tamkang High School. In 2000, he released his debut album 'Jay'. In 2001, he cemented his style of blending Eastern and Western music with the album 'Fantasy'. 
In 2002, he held ‘The One’ world tour; the same year, he won the Best Composer award at the 13th Taiwan Golden Melody Awards with the song 'Love Before the Century'." } ``` </details> <details> <summary><b>Event Extraction (EE)</b></summary> ```json { "instruction": "You are an expert specializing in event extraction. Please extract events that match the defined schema from the input; return an empty list for non-existent events, NAN for non-existent arguments, and a list if there are multiple values for an argument. Please provide your response in JSON string format.", "schema": [ { "event_type": "Finance/Trading - Interest Rate Hike", "trigger": true, "arguments": [ "Time" ] }, { "event_type": "Finance/Trading - Interest Rate Cut", "trigger": true, "arguments": [ "Cut Magnitude" ] }, { "event_type": "Finance/Trading - Price Increase", "trigger": true, "arguments": [ "Price Raiser" ] }, { "event_type": "Finance/Trading - Price Cut", "trigger": true, "arguments": [ "Price Cutter", "Time" ] } ], "input": "AI risk control solution provider Vezetech secures tens of millions of dollars in Series C+ funding" } ``` </details> <details> <summary><b>Event Trigger Identification (EET)</b></summary> ```json { "instruction": "You are an expert specializing in event trigger identification. Please extract the event types and triggers that match the defined schema from the input; return an empty list if the event type doesn't exist. Please provide your response in JSON string format.", "schema": ["Organizational Relationship - Dissolve", "Organizational Relationship - Layoff", "Organizational Relationship - Dismiss", "Competition Behavior - Promotion"], "input": "Nestlé lays off 4,000 employees: When the times leave you behind, they won't even say goodbye!" } ``` </details> <details> <summary><b>Event Argument Extraction (EEA)</b></summary> ```json { "instruction": "You are an expert specializing in event argument extraction. 
Please extract the event arguments and their roles that match the defined schema from the input; return NAN or an empty dictionary for non-existent arguments, and a list if there are multiple values for an argument. Please provide your response in JSON string format.", "schema": [{"event_type": "Organizational Relationship - Resignation/Departure", "arguments": ["Resigner", "Time", "Former Organization"]}], "input": "Nestlé lays off 4,000 employees: When the times leave you behind, they won't even say goodbye!" } ``` </details> > Note: In consideration of the complexity of information extraction within specific domains and the high reliance on prompts, we support the integration of Schema descriptions and examples in the instructions to enhance the effectiveness of extraction tasks. For details, refer to **`Customized Schema Description Instructions`** and **`Customized Example Instructions`**. Please understand that due to the limited scale of the model, the model output is prompt-dependent and different prompts may yield inconsistent results. ### Conversion of OneKE Instruction Format **List of Instructions**: ```python instruction_mapper = { 'NERzh': "你是专门进行实体抽取的专家。请从input中抽取出符合schema定义的实体,不存在的实体类型返回空列表。请按照JSON字符串的格式回答。", 'REzh': "你是专门进行关系抽取的专家。请从input中抽取出符合schema定义的关系三元组,不存在的关系返回空列表。请按照JSON字符串的格式回答。", 'EEzh': "你是专门进行事件提取的专家。请从input中抽取出符合schema定义的事件,不存在的事件返回空列表,不存在的论元返回NAN,如果论元存在多值请返回列表。请按照JSON字符串的格式回答。", 'EETzh': "你是专门进行事件提取的专家。请从input中抽取出符合schema定义的事件类型及事件触发词,不存在的事件返回空列表。请按照JSON字符串的格式回答。", 'EEAzh': "你是专门进行事件论元提取的专家。请从input中抽取出符合schema定义的事件论元及论元角色,不存在的论元返回NAN或空字典,如果论元存在多值请返回列表。请按照JSON字符串的格式回答。", 'KGzh': '你是一个图谱实体知识结构化专家。根据输入实体类型(entity type)的schema描述,从文本中抽取出相应的实体实例和其属性信息,不存在的属性不输出, 属性存在多值就返回列表,并输出为可解析的json格式。', 'NERen': "You are an expert in named entity recognition. Please extract entities that match the schema definition from the input. Return an empty list if the entity type does not exist. 
Please respond in the format of a JSON string.", 'REen': "You are an expert in relationship extraction. Please extract relationship triples that match the schema definition from the input. Return an empty list for relationships that do not exist. Please respond in the format of a JSON string.", 'EEen': "You are an expert in event extraction. Please extract events from the input that conform to the schema definition. Return an empty list for events that do not exist, and return NAN for arguments that do not exist. If an argument has multiple values, please return a list. Respond in the format of a JSON string.", 'EETen': "You are an expert in event extraction. Please extract event types and event trigger words from the input that conform to the schema definition. Return an empty list for non-existent events. Please respond in the format of a JSON string.", 'EEAen': "You are an expert in event argument extraction. Please extract event arguments and their roles from the input that conform to the schema definition, which already includes event trigger words. If an argument does not exist, return NAN or an empty dictionary. Please respond in the format of a JSON string.", 'KGen': 'You are an expert in structured knowledge systems for graph entities. Based on the schema description of the input entity type, you extract the corresponding entity instances and their attribute information from the text. Attributes that do not exist should not be output. If an attribute has multiple values, a list should be returned. The results should be output in a parsable JSON format.', } ``` Recommended **Split Numbers** for Each Task: ```python split_num_mapper = { 'NER':6, 'RE':4, 'EE':4, 'EET':4, 'EEA':4, 'KG':1 } ``` Since predicting all schemas in the label set at once is too challenging and not easily scalable, OneKE uses a batched approach during training. It divides the number of schemas asked in the instructions, querying a fixed number of schemas at a time. 
Hence, if the label set of a piece of data is too long, it will be split into multiple instructions that the model will address in turns. **Schema Format**: ```python NER: ["Person Name", "Education", "Position", "Nationality"] # List of strings RE: ["Father", "Husband", "Postal Code", "Mother"] # List of strings EE: [{"event_type": "Finance/Trading - Interest Rate Hike", "trigger": True, "arguments": ["Time"]}, {"event_type": "Finance/Trading - Interest Rate Cut", "trigger": True, "arguments": ["Cut Magnitude"]}] # List of dictionaries, "event_type" is a string, "trigger" is a bool, "arguments" is a list EET: ["Organizational Relationship - Dissolution", "Organizational Relationship - Layoff", "Organizational Relationship - Dismissal", "Competition Behavior - Advancement"] # List of strings EEA: [{"event_type": "Finance/Trading - Interest Rate Hike", "arguments": ["Time"]}, {"event_type": "Finance/Trading - Interest Rate Cut", "arguments": ["Cut Magnitude"]}] # List of dictionaries, "event_type" is a string, "arguments" is a list ``` Below is a simple Batched Instruction Generation script: ```python def get_instruction(language, task, schema, input): sintructs = [] split_num = split_num_mapper[task] if type(schema) == dict: sintruct = json.dumps({'instruction':instruction_mapper[task+language], 'schema':schema, 'input':input}, ensure_ascii=False) sintructs.append(sintruct) else: split_schemas = [schema[i:i+split_num] for i in range(0, len(schema), split_num)] for split_schema in split_schemas: sintruct = json.dumps({'instruction':instruction_mapper[task+language], 'schema':split_schema, 'input':input}, ensure_ascii=False) sintructs.append(sintruct) return sintructs ``` Below is an example using the aforementioned simple script: ```python task = 'NER' language = 'en' schema = ['person', 'organization', 'else', 'location'] split_num = split_num_mapper[task] split_schemas = [schema[i:i+split_num] for i in range(0, len(schema), split_num)] input = '284 Robert Allenby 
( Australia ) 69 71 71 73 , Miguel Angel Martin ( Spain ) 75 70 71 68 ( Allenby won at first play-off hole )' sintructs = [] for split_schema in split_schemas: sintruct = json.dumps({'instruction':instruction_mapper[task+language], 'schema':split_schema, 'input':input}, ensure_ascii=False) sintructs.append(sintruct) ``` > '{"instruction": "You are an expert in named entity recognition. Please extract entities that match the schema definition from the input. Return an empty list if the entity type does not exist. Please respond in the format of a JSON string.", "schema": ["person", "organization", "else", "location"], "input": "284 Robert Allenby ( Australia ) 69 71 71 73 , Miguel Angel Martin ( Spain ) 75 70 71 68 ( Allenby won at first play-off hole )"}' For more detailed data conversion, please refer to [DeepKE-llm/InstructKGC/README_CN.md/2.3测试数据转换](https://github.com/zjunlp/DeepKE/blob/main/example/llm/InstructKGC/README_CN.md/#23%E6%B5%8B%E8%AF%95%E6%95%B0%E6%8D%AE%E8%BD%AC%E6%8D%A2) ### Customized Schema Description Instructions ```json { "instruction": "You are an expert specializing in entity extraction. Please extract entities that comply with the defined schema from the input; return an empty list for non-existent entity types. Please respond in JSON string format.", "schema": { "Position": "The entity type describes the occupation or official position of an individual or group, including specific role names such as 'producer', 'scorekeeper', 'ascetic', 'oil painter'.", "Attraction": "The entity type of attraction includes buildings, museums, memorials, art galleries, rivers, peaks, etc. Representative entities include the Pentagon, Tate Modern, Zheng Chenggong Memorial Hall, Duxi Palace, Barikasa, Robo River, Gunung Batur, Yugong Yishan LIVE, Xu Beihong Memorial Hall, Madame Tussauds, etc.", "Company": "Company is an entity type representing any legal entity or business organization. 
This type of entity can be a catering group, manufacturer, retailer, hotel, bank, design institute, etc. Examples include: 'Shangri-La Hotel Group', 'JVC', 'Shanghai Coolray Professional eSports Peripheral Store', 'K2&bull;Haitang Bay', 'Wuhan Iron and Steel', 'louisvuitton', 'Bank of Scotland', 'Beijing Institute of Architectural Design', '7 Days Inn', 'Vanke Group'.", "Address": "Address entities refer to entities with geographical location information, representing specific places such as a country, city, region, street, or abstract geographic areas. Examples include: 'the river dock at the southeast tip of downtown Manhattan', 'Tuapse', 'Venice, Italy', 'Huzhou Hot Spring Golf Course', 'North Carolina', 'Beijing-Tianjin region', 'Happy Internet Cafe', 'Yinian Nursing Home', 'Shangtang Town Pudong', 'Inner Mongolia Autonomous Region Chifeng City', etc.", "Organization": "Organizational entities refer to collective organizations such as companies, shops, clubs, schools, etc. They play a certain role in social and economic activities and have certain personality rights.", "Movie": "Movie entities include titles of movies in Chinese or English, and sometimes also include names of characters in films." }, "input": "It is difficult for me to imagine setting up another Haifishing Plaza. When we obtained this project, I just happened to be in Sanya." } ``` <details> <summary><b>Relation Extraction (RE) Description Instructions</b></summary> ```json { "instruction": "You are an expert specializing in relation extraction. Please extract triples that match the defined schema from the input; return an empty list for non-existent relations. Please respond in JSON string format.", "schema": { "Ethnicity": "Ethnicity", "Alma Mater": "This type of relationship describes the connection between a person and their alma mater; the person is the subject, and the alma mater is the object. 
By identifying the names of people and schools in the text and analyzing the relationship of graduation between them based on word combinations and contextual information.", "Lead Actor": "This is a type of relationship that describes the connection between a film or television work and its main actors; the subject is the film or television work and the object is the actor. In a valid 'Lead Actor' relationship, the actor (object) plays an important role in the work (subject).", "Father": "This type of relationship is used to indicate the kinship between a father and a child, where the father is the birth parent or caregiver of the child. In the triple, the subject of the 'Father' relation type is the child, and the object is the father." }, "input": "Throughout history, all those who have portrayed the character 'Chu Liuxiang' from Gu Long's novels are recognized as handsome men in the entertainment industry. In 2011, 36-year-old Zhang Zhiyao played Chu Liuxiang in 'The New Adventures of Chu Liuxiang', remaining irresistibly handsome." } ``` </details> <details> <summary><b>Event Extraction (EE) Description Instructions</b></summary> ```json { "instruction": "You are an expert specializing in event extraction. Please extract events that match the schema definition from the input; return an empty list for non-existent events, NAN for non-existent arguments, and a list if there are multiple values for an argument. Please respond in JSON string format.", "schema": { "Finance/Trading - Listing": { "Finance/Trading - Listing": "The act of a financial entity being listed on the stock market mainly involves companies, stocks, etc. Positive examples include specific information about a company or stock listing, while negative examples are unrelated to such activities.", "trigger": true, "arguments": { "Financing Amount": "Refers to the total amount of funds raised by a company in a listing event. 
It sums up the revenue of all share issues and is measured in currency, including but not limited to units like 'billion', 'million', 'dollars', 'RMB', etc.", "Time": "Describes the specific time of the listing event, which can be a specific date or relative time, and may also include location information and specific days and weeks.", "Listing Enterprise": "Refers to the company or enterprise that is conducting an IPO or has already been listed on the trading market in a listing event. Examples include: 'Shanghai Henlius Biotech', 'Three Squirrels', 'Baoxin Software', 'Little Bear Electric', 'Jinshang Bank', 'Beyond Meat (BYND)', 'DouYu gaming live-streaming platform', 'fast food empire', and 'autonomous driving lidar manufacturer Velodyne', etc.", "Location": "The specific location of the financial or trading event, such as a city, building, or room." } }, "Organizational Relationship - Resignation/Departure": { "Organizational Relationship - Resignation/Departure": "The event type 'Organizational Relationship - Resignation/Departure' refers to changes in the relationship between individuals or organizational members and their organization, mainly including 'resignation', 'requesting to resign', 'stepping down', 'leaving the team', 'retirement', 'leaving', etc. Often occurs in scenarios of high-level personnel changes, government officials changes, or athletes transfers. Examples: 'Li Nan announced resignation', 'Yu Xubo resigned from the position of chairman of the board just three months after taking office, Chen Lang succeeded'.", "trigger": true, "arguments": { "Resigner": "Refers to the individual or group who actively or passively leaves their original position or job post in an organizational relationship resignation/departure event. 
It can be one person or a group of people, such as: 'Finance Minister', '90s born guy from Shaoyang Longhui, Ouyang En and', 'Xiong Xiaoge', '*ST Changsheng two deputy general managers', 'Yang Tao', 'pilot Ma Qiang', 'HE WEI', '5 Baidu executives', 'Youxin Group COO Peng Weilian', 'Jianke Institute securities representative Shu Yanming', etc.", "Time": "Indicates the specific point in time or period when the resignation/departure event occurred, generally including specific dates, weeks, times, etc., like 'September 19', 'the evening of June 29', 'this Saturday', '10:30 AM on July 9', 'the morning of June 12', 'April 9', 'September 10', 'local time on Sunday', 'September 12', '10 AM on October 15', etc." } }, "Finance/Trading - Interest Rate Increase": { "Finance/Trading - Interest Rate Increase": "This event describes banks or financial institutions raising interest rates to tighten the money supply. The typical trigger word is 'hike'. 'Hike' indicates the occurrence of the Finance/Trading - Interest Rate Increase event.", "trigger": true, "arguments": { "Rate of Increase": "The rate of increase is usually presented as a percentage or basis points, indicating the degree or range of the interest rate hike in the event. Examples include: 'to 5.75%', '25 basis points', 'the benchmark rate from 0.25% up to 0.5%', '25 basis points'.", "Hiking Institution": "The hiking institution is the financial institution with the authority to determine or implement the interest rate hike policy in a Finance/Trading - Interest Rate Increase event, such as central banks from different countries (e.g., Bank of England, Federal Reserve, European Central Bank) or financial institutions (e.g., Bank of England).", "Time": "Indicates the specific date or time period when the Finance/Trading - Interest Rate Increase event occurred, such as 'the morning of June 18th', 'January 24th', 'three months later', etc. 
The specific expression includes time accurate to the minute, such as '11:00 on December 28, 2018', relative time, such as 'yesterday (2nd)', and special time expressions like 'Mid-Autumn Festival'." } }, "Organizational Relationship - Contract Termination": { "Organizational Relationship - Contract Termination": "Situations of contract cancellation or termination usually occur in the business, entertainment, or sports domains. Trigger words include 'leave', 'trade', 'cut', 'contract expiry', 'contract termination', 'sell-off', 'release', 'send out', 'contract break', etc. Positive examples include 'Peng Yuchang terminates his contract' and 'Jiang Mengjie nearly bankrupt after contract termination'. Negative examples are like 'Federer withdrew from the competition'.", "trigger": true, "arguments": { "Party Being Terminated": "In an organizational relationship contract termination event, the role is the party whose agreement or contract relation is being dissolved, and might be an individual or an organization, such as an athlete, film producer, company, etc. For instance, 'seven-time All-Star Joe Johnson', 'the production side of 'A Little Wish'', 'Raptors', 'Samsung', etc." } } }, "input": "News from August 20th, according to Tencent News 'Frontline' report, informed sources stated that in order to control cost expenditure, NIO plans to reduce the number of staff at its U.S. branch, excluding those involved in the autonomous driving business, to about 200. As of August 16th, U.S. time, NIO's Silicon Valley branch had cut 100 employees." } ``` </details> <details> <summary><b>Knowledge Graph Construction (KGC) Description Instructions</b></summary> ```json { "instruction": "You are an expert in structuring knowledge about graph entities. 
Based on the schema description for the input entity type, extract the corresponding entity instances and their attribute information from the text; do not output non-existent attributes, return a list for attributes with multiple values, and provide the output in a parseable JSON format.", "schema": [ { "entity_type": "Person", "attributes": { "Chinese Name": "The Chinese name of the person", "English Name": "The English name of the person", "Ancestral Home": "The ancestral address of the person", "Date of Birth": "Birthday, birth date", "Place of Birth": "The place of birth, administrative region", "Occupation": "The occupation, position, identity of the person", "Alma Mater": "The middle school, university, college from which the person graduated", "Works": "Albums, songs, novels, published books, participated film and television works, etc.", "Awards": "Various awards and honors received by the person" } } ], "input": "Jay Chou (Jay Chou), born on January 18, 1979, in New Taipei City, Taiwan Province, with ancestral home in Yongchun County, Quanzhou City, Fujian Province, is a Chinese pop musician, actor, director, and screenwriter. He graduated from Tamkang High School. In 2000, he released his debut music album 'Jay.' In 2001, he cemented his fusion style of Eastern and Western music with the album 'Fantasy.' In 2002, he held 'The One' world tour; that same year, he won the Best Composer award at the 13th Taiwan Golden Melody Awards for the song 'Love Before the Century.'" } ``` </details> ### Customized Example Instructions Given that example instances can often be lengthy, and due to the limited maximum length of model training, too many examples may inversely affect model performance. Therefore, we suggest providing 2 examples: one positive and one negative, while keeping the number of schemas to one. ```json { "instruction": "You are an expert in entity extraction. 
Please extract entities from the input that fit the defined schema; return an empty list for non-existent entity types. Please respond in the format of a JSON string. You may refer to the example to guide your extraction.", "schema": [ "Biomarker" ], "example": [ { "input": "Diagnostic criteria for CKD include: 1. Any of the following indicators persisting for more than 3 months; and meeting at least one criterion.(1) Signs of renal damage: Albuminuria [Albumin excretion rate (AER)≥30mg/24h; Albumin to creatinine ratio (ACR)≥3mg/mmol]; abnormal urinary sediment; tubular pathology; histological anomalies; structural abnormities found in imaging; history of kidney transplantation.(2) Decline in glomerular filtration rate: eGFR≤60ml·min-1·1.73m-2", "output": { "Biomarker": [ "Albumin excretion rate (AER)", "Albumin to creatinine ratio (ACR)", "Glomerular filtration rate", "eGFR" ] } }, { "input": "Application of DPP-4 inhibitors in specific populations", "output": { "Biomarker": [] } } ], "input": "Currently, all sulfonylurea drugs' leaflets list severe liver dysfunction as a contraindication. Alanine transaminase (ALT)> 3 times the upper limit of the reference value can serve as a sensitive and specific indicator of liver damage. If ALT>8-10 times the upper limit of the reference value or ALT>3 times with total serum bilirubin (TBIL)>2 times the reference value, it is considered a specific predictor of severe liver damage, indicating substantial injury to hepatic parenchymal cells; sulfonylureas should be contraindicated at this stage. Clinically, patients with decompensated liver cirrhosis accompanied by hepatic encephalopathy, ascites, or coagulation disorders should avoid this class of drugs to prevent hypoglycemia." } ``` <details> <summary><b>Relationship Extraction (RE) Example Instruction</b></summary> ```json { "instruction": "You are an expert specialized in relationship extraction. 
Please extract from the input the defined relation triples according to the schema; return an empty list for non-existent relations. Please respond in the format of a JSON string. You may refer to the example for guidance on extraction.", "schema": [ "Disease Staging and Typing" ], "example": [ { "input": "The foundational treatment of diabetes includes both education and management, as well as diet and exercise. A lack of knowledge in diabetes prevention and control is the primary reason for poor blood sugar management. Paying attention to the education and management of elderly patients is an important measure to improve the treatment level of diabetes.", "output": { "Disease Staging and Typing": [] } }, { "input": "Metabolites of glipizide have no hypoglycemic effect and are mostly excreted through feces, with only 5.0% excreted by the kidneys, thus are less affected by renal function. However, large clinical trials in patients with chronic kidney disease are limited. There have been studies observing the use of glipizide in patients with GFR10~50 ml min-1.(1.73m2)-1, but the trial designs are not perfect. Glipizide can be used in patients with stages 1 to 3 chronic kidney disease without dose adjustment; caution is advised in stage 4; and it is contraindicated in stage 5.", "output": { "Disease Staging and Typing": [ { "subject": "Chronic kidney disease", "object": "Chronic" }, { "subject": "Chronic kidney disease", "object": "Chronic" }, { "subject": "Chronic kidney disease", "object": "stages 1 to 3" }, { "subject": "Chronic kidney disease", "object": "stage 4" }, { "subject": "Chronic kidney disease", "object": "stage 5" } ] } } ], "input": "(2)NSAIDs: This includes both non-selective cyclooxygenase (COX) inhibitors and COX-2 inhibitors. If there are no contraindications, early and ample use of fast-acting NSAID formulations is recommended. 
Non-selective COX inhibitors primarily have gastrointestinal adverse reactions such as ulcers, perforations, and upper gastrointestinal bleeding, hence COX-2 inhibitors, which can reduce GI reactions by 50%, may be used for those intolerant to non-selective COX inhibitors. Active gastrointestinal ulcers/bleeding or a history of recurrent gastrointestinal ulcers/bleeding is a contraindication for all NSAIDs use. COX-2 inhibitors may increase the risk of cardiovascular events and should be avoided in patients with myocardial infarction or heart failure. Kidney function monitoring is required during the use of NSAIDs, and their use is not recommended in patients with severe chronic kidney disease (stages G4 to G5) who are not undergoing dialysis." } ``` </details> <details> <summary><b>Event Extraction (EE) Example Instruction</b></summary> ```json { "instruction": "You are an expert specialized in event extraction. Please extract events from the input according to the defined schema; return an empty list for non-existent events, and 'NAN' for non-existent arguments. If an argument has multiple values, please return a list. Respond in the format of a JSON string. 
You may refer to the example for extraction guidance.", "schema": [ { "event_type": "Corporate Financing", "trigger": true, "arguments": [ "Disclosure Time", "Investee", "Financing Round", "Lead Investor", "Event Time", "Investor", "Financing Amount" ] } ], "example": [ { "input": "Raise 2.5 billion yuan for expansion due to the 'three highs' condition of Joyson Electronics: high pledges, high goodwill, high debt\nReporter Zhang Jiazhen, from Beijing\nNingbo Joyson Electronic Corporation (hereinafter referred to as 'Joyson Electronics', 600699.SH), which holds billion-level big orders, is actively raising funds to expand production capacity to ease the increasingly pressing bottleneck of production capacity saturation.\nRecently, Joyson Electronics announced that it has received the 'Feedback Notice' from the China Securities Regulatory Commission, and its private stock offering is a step closer to approval.", "output": { "Corporate Financing": [ { "trigger": "Raise", "arguments": { "Disclosure Time": "NAN", "Investee": "Ningbo Joyson Electronic Corporation", "Financing Round": "NAN", "Lead Investor": "NAN", "Event Time": "NAN", "Investor": "NAN", "Financing Amount": "2.5 billion yuan" } } ] } }, { "input": "NIO stock falls to 13% before market; NIO reports over 3.2 billion loss in Q2\nOriginal Title: NIO stock falls to 13% before market; NIO reports over 3.2 billion loss in Q2\nNIO's stock price turned from a rise to a fall before market, falling to 13%. 
NIO released its Q2 earnings today, followed by the announcement of the cancellation of the earnings conference call originally scheduled for today.\nThe earnings report showed that NIO achieved a revenue of 1.508 billion yuan in the second quarter, exceeding market expectations of 1.309 billion yuan, compared to 46 million yuan in the same period last year; The net loss attributable to shareholders in the second quarter was 3.285 billion yuan, higher than the market expected loss of 2.944 billion yuan, compared to a loss of 6.11 billion yuan in the same period last year.", "output": { "Corporate Financing": [] } } ], "input": "【Exclusive】The 11th in five years, Codemao announces completion of C+ round financing of 250 million yuan\nJiemodui, April 17th - Today, Codemao announced the completion of a C+ round of financing worth 250 million yuan.\nThis comes five months after completing a C round financing of 400 million yuan last year, which is the new round of 'ammunition' added by Codemao.\nThe round was led by China Merchants International, with Bohai Capital, an equity investment fund under Bank of China Group, and existing shareholders Yueke Xintai and Shengyu Investment following suit." } ``` </details> ## Evaluation To extract structured content from the output text and to assess it, please refer to [DeepKE-llm/InstructKGC/README_CN.md/7.评估](https://github.com/zjunlp/DeepKE/blob/main/example/llm/InstructKGC/README_CN.md/#-7%E8%AF%84%E4%BC%B0). ## Continue Training To continue training OneKE, refer to [DeepKE-llm/InstructKGC/4.9领域内数据继续训练](https://github.com/zjunlp/DeepKE/blob/main/example/llm/InstructKGC/README_CN.md/#49%E9%A2%86%E5%9F%9F%E5%86%85%E6%95%B0%E6%8D%AE%E7%BB%A7%E7%BB%AD%E8%AE%AD%E7%BB%83). 
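The instruction payloads in the examples above all share the same three-field JSON layout (`instruction`, `schema`, `input`). A minimal sketch of assembling such a payload before sending it to the model — the helper name and the final chat-template wrapping are illustrative assumptions, not part of OneKE's API:

```python
import json

def build_extraction_instruction(task_description, schema, input_text):
    """Serialize a schema-based extraction request in the three-field
    JSON layout used by the examples above."""
    payload = {
        "instruction": task_description,
        "schema": schema,
        "input": input_text,
    }
    # ensure_ascii=False keeps Chinese and other non-ASCII text readable.
    return json.dumps(payload, ensure_ascii=False)

prompt = build_extraction_instruction(
    "You are an expert in entity extraction. Please extract entities from "
    "the input that fit the defined schema; return an empty list for "
    "non-existent entity types. Please respond in the format of a JSON string.",
    ["Biomarker"],
    "Diagnostic criteria include eGFR <= 60 ml/min/1.73m2.",
)
# The resulting string would then be wrapped in the model's chat template,
# passed to generate(), and the model's reply parsed back with json.loads().
```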
## Citation If you have used OneKE in your work, please kindly cite the following paper: ```bibtex @article{DBLP:journals/corr/abs-2402-14710, author = {Honghao Gui and Lin Yuan and Hongbin Ye and Ningyu Zhang and Mengshu Sun and Lei Liang and Huajun Chen}, title = {IEPile: Unearthing Large-Scale Schema-Based Information Extraction Corpus}, journal = {CoRR}, volume = {abs/2402.14710}, year = {2024}, url = {https://doi.org/10.48550/arXiv.2402.14710}, doi = {10.48550/ARXIV.2402.14710}, eprinttype = {arXiv}, eprint = {2402.14710}, timestamp = {Tue, 09 Apr 2024 07:32:43 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2402-14710.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
[ "BEAR" ]
sapienzanlp/Minerva-1B-base-v1.0
sapienzanlp
text-generation
[ "transformers", "pytorch", "safetensors", "mistral", "text-generation", "pretrained", "it", "en", "dataset:uonlp/CulturaX", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2024-04-19T15:08:33Z
2024-12-05T11:58:41+00:00
352
28
--- datasets: - uonlp/CulturaX language: - it - en license: apache-2.0 pipeline_tag: text-generation tags: - pretrained --- <div style="text-align: center; display: flex; flex-direction: column; align-items: center;"> <img src="https://huggingface.co/sapienzanlp/Minerva-7B-instruct-v1.0/resolve/main/minerva-logo.png" style="max-width: 550px; height: auto;"> </div> # Model Card for Minerva-1B-base-v1.0 Minerva is the first family of **LLMs pretrained from scratch on Italian** developed by [Sapienza NLP](https://nlp.uniroma1.it) in collaboration with [Future Artificial Intelligence Research (FAIR)](https://fondazione-fair.it/) and [CINECA](https://www.cineca.it/). Notably, the Minerva models are truly-open (data and model) Italian-English LLMs, with approximately half of the pretraining data including Italian text. * [Minerva LLMs - website](https://nlp.uniroma1.it/minerva/) ## Description This is the model card for **Minerva-1B-base-v1.0**, a 1 billion parameter model trained on 200 billion tokens (100 billion in Italian, 100 billion in English). This model is part of the Minerva LLM family: * [Minerva-350M-base-v1.0](https://huggingface.co/sapienzanlp/Minerva-350M-base-v1.0) * [Minerva-1B-base-v1.0](https://huggingface.co/sapienzanlp/Minerva-1B-base-v1.0) * [Minerva-3B-base-v1.0](https://huggingface.co/sapienzanlp/Minerva-3B-base-v1.0) * [Minerva-7B-base-v1.0](https://huggingface.co/sapienzanlp/Minerva-7B-base-v1.0) * [Minerva-7B-instruct-v1.0](https://huggingface.co/sapienzanlp/Minerva-7B-instruct-v1.0) ## 🚨⚠️🚨 Bias, Risks, and Limitations 🚨⚠️🚨 *This section identifies foreseeable harms and misunderstandings.* This is a foundation model, not subject to alignment. 
Model may: - Overrepresent some viewpoints and underrepresent others - Contain stereotypes - Contain [personal information](#personal-data-and-information) - Generate: - Hateful, abusive, or violent language - Discriminatory or prejudicial language - Content that may not be appropriate for all settings, including sexual content - Make errors, including producing incorrect information as if it were factual - Generate irrelevant or repetitive outputs We are aware of the biases that current pretrained large language models exhibit: more specifically, as probabilistic models of (Italian and English) languages, they reflect and amplify the biases of their training data. For more information about this issue, please refer to our survey: * [Biases in Large Language Models: Origins, Inventory, and Discussion](https://dl.acm.org/doi/full/10.1145/3597307) ## How to use Minerva with Hugging Face transformers ```python import transformers import torch model_id = "sapienzanlp/Minerva-1B-base-v1.0" # Initialize the pipeline. pipeline = transformers.pipeline( "text-generation", model=model_id, model_kwargs={"torch_dtype": torch.bfloat16}, device_map="auto", ) # Input text for the model. input_text = "La capitale dell'Italia è" # Compute the outputs. output = pipeline( input_text, max_new_tokens=128, ) # Output: # [{'generated_text': "La capitale dell'Italia è la città di Roma, che si trova a [...]"}] ``` ## Model Architecture Minerva-1B-base-v1.0 is a Transformer model based on the Mistral architecture, where the number of layers, number of heads, and the hidden states dimension are modified to reach 1B parameters. Please, take a look at the configuration file for a detailed breakdown of the hyperparameters we chose for this model. 
The Minerva LLM family is composed of: | Model Name | Tokens | Layers | Hidden Size | Attention Heads | KV Heads | Sliding Window | Max Context Length | | --- | --- | --- | --- | --- | --- | --- | --- | | Minerva-350M-base-v1.0 | 70B (35B it + 35B en) | 16 | 1152 | 16 | 4 | 2048 | 16384 | | **Minerva-1B-base-v1.0** | 200B (100B it + 100B en) | 16 | 2048 | 16 | 4 | 2048 | 16384 | | Minerva-3B-base-v1.0 | 660B (330B it + 330B en) | 32 | 2560 | 32 | 8 | 2048 | 16384 | ## Model Training Minerva-1B-base-v1.0 was trained using [llm-foundry 0.6.0](https://github.com/mosaicml/llm-foundry) from [MosaicML](https://mosaicml.com/). The hyperparameters used are the following: | Model Name | Optimizer | lr | betas | eps | weight decay | Scheduler | Warmup Steps | Batch Size (Tokens) | Total Steps | | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | | Minerva-350M-base-v1.0 | Decoupled AdamW | 2e-4 | (0.9, 0.95) | 1e-8 | 0.0 | Cosine | 2% | 4M | 16,690 | | **Minerva-1B-base-v1.0** | Decoupled AdamW | 2e-4 | (0.9, 0.95) | 1e-8 | 0.0 | Cosine | 2% | 4M | 47,684 | | Minerva-3B-base-v1.0 | Decoupled AdamW | 2e-4 | (0.9, 0.95) | 1e-8 | 0.0 | Cosine | 2% | 4M | 157,357 | ## Model Evaluation We assessed our model using the [LM-Evaluation-Harness](https://github.com/EleutherAI/lm-evaluation-harness) library, which serves as a comprehensive framework for testing generative language models across a wide range of evaluation tasks. All the reported benchmark data was already present in the LM-Evaluation-Harness suite. 
Italian Data: | Task | Accuracy | | --- | --- | | [xcopa](https://huggingface.co/datasets/xcopa) (0-shot) | 0.602 | | [Hellaswag](https://huggingface.co/datasets/alexandrainst/m_hellaswag) (5-shot) | 0.3847 | | [Belebele](https://huggingface.co/datasets/facebook/belebele) (5-shot) | 0.25 | | [TruthfulQA MC 1](https://huggingface.co/datasets/alexandrainst/m_truthfulqa) (0-shot) | 0.2375 | | [TruthfulQA MC 2](https://huggingface.co/datasets/alexandrainst/m_truthfulqa) (0-shot) | 0.3996 | | [M MMLU](https://huggingface.co/datasets/alexandrainst/m_mmlu) (5-shot) | 0.2546 | | [arc challenge](https://huggingface.co/datasets/alexandrainst/m_arc) (5-shot) | 0.2549 | English Data: | Task | Accuracy | | --- | --- | | [Hellaswag](https://huggingface.co/datasets/Rowan/hellaswag) (5-shot) | 0.43 | | [Belebele](https://huggingface.co/datasets/facebook/belebele) (5-shot) | 0.26 | | [TruthfulQA MC 1](https://huggingface.co/datasets/truthful_qa) (0-shot) | 0.21 | | [TruthfulQA MC 2](https://huggingface.co/datasets/truthful_qa) (0-shot) | 0.38 | | [M MMLU](https://huggingface.co/datasets/alexandrainst/m_mmlu) (5-shot) | 0.26 | | [arc challenge](https://huggingface.co/datasets/allenai/ai2_arc) (5-shot) | 0.26 | | [arc easy](https://huggingface.co/datasets/allenai/ai2_arc) (5-shot) | 0.4966 | | [piqa](https://huggingface.co/datasets/piqa) (5-shot) | 0.6975 | | [sciq](https://huggingface.co/datasets/sciq) (5-shot) | 0.830 | ## Training Data Minerva-1B-base-v1.0 was trained on 100B Italian tokens and 100B English tokens sampled from CulturaX. 
The data was selected from the following sources: * OSCAR-2201 * OSCAR-2301 * mC4 We have extracted some statistics on Italian (115B tokens) and English (210B tokens) documents from CulturaX on the selected sources: *Proportion of number of tokens per domain (Italian)* <img src="https://github.com/Andrew-Wyn/images/blob/master/minerva/top_25_url_tokens_proportion_culturax_it.png?raw=true" alt="italian-tok-counts" border="0" width="1800px"> *Proportion of number of tokens per domain (English)* <img src="https://github.com/Andrew-Wyn/images/blob/master/minerva/top_25_url_tokens_proportion_culturax_en.png?raw=true" alt="english-tok-counts" border="0" width="1800px"> ## Tokenizer Fertility The tokenizer fertility measures the average number of tokens produced per tokenized word. A tokenizer displaying high fertility values in a particular language typically indicates that it segments words in that language extensively. Tokenizer fertility is closely correlated with the model's inference speed in a given language, as higher values mean longer sequences of tokens to generate and thus lower inference speed. **Fertility computed over a sample of Cultura X (CX) data and Wikipedia (Wp):** | Model | Voc. Size | Fertility IT (CX) | Fertility EN (CX) | Fertility IT (Wp) | Fertility EN (Wp) | | --- | --- | --- | --- | --- | --- | | Mistral-7B-v0.1 | 32000 | 1.87 | 1.32 | 2.05 | 1.57 | | gemma-7b | 256000 | 1.42 | 1.18 | 1.56 | 1.34 | | Minerva-1B-base-v1.0 | 32768 | 1.39 | 1.32 | 1.66 | 1.59 | ## Notice Minerva-1B-base-v1.0 is a pretrained base model and, therefore, has no moderation mechanisms. 
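The fertility metric reported above reduces to a single ratio: total tokens produced over total whitespace-separated words. A sketch of that computation — the toy three-character tokenizer below is only a stand-in for illustration; the figures in the table were computed with each model's actual tokenizer over CulturaX and Wikipedia samples:

```python
def fertility(tokenize, texts):
    """Average number of tokens produced per whitespace-separated word."""
    total_tokens = sum(len(tokenize(text)) for text in texts)
    total_words = sum(len(text.split()) for text in texts)
    return total_tokens / total_words

# Toy stand-in tokenizer: split every word into chunks of up to 3 characters.
def toy_tokenize(text):
    return [word[i:i + 3] for word in text.split() for i in range(0, len(word), 3)]

sample = ["la capitale della repubblica italiana"]
print(fertility(toy_tokenize, sample))  # 13 tokens / 5 words = 2.6
```

With a real model one would pass, e.g., `AutoTokenizer.from_pretrained(model_id).tokenize` in place of `toy_tokenize`.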
## The Sapienza NLP Team * **Riccardo Orlando:** data preprocessing, model training * **Pere-Lluis Huguet Cabot:** data preprocessing, vocabulary, evaluation * **Luca Moroni:** data curation, data analysis, downstream tasks, evaluation * **Simone Conia:** data curation, evaluation, project supervision * **Edoardo Barba:** data preprocessing, downstream tasks, project supervision * **Roberto Navigli:** project lead and coordination ### Special thanks for their support * Giuseppe Fiameni, Nvidia * Sergio Orlandini, CINECA ## Acknowledgments This work was funded by the PNRR MUR project [PE0000013-FAIR](https://fondazione-fair.it). We acknowledge the [CINECA](https://www.cineca.it) award "IscB_medit" under the ISCRA initiative, for the availability of high performance computing resources and support.
[ "SCIQ" ]
aisingapore/llama3.1-8b-cpt-sea-lionv3-base
aisingapore
text-generation
[ "transformers", "safetensors", "llama", "text-generation", "en", "zh", "vi", "id", "th", "fil", "ta", "ms", "km", "lo", "my", "arxiv:2309.06085", "arxiv:2311.07911", "arxiv:2403.06350", "arxiv:2101.09635", "base_model:meta-llama/Llama-3.1-8B-Instruct", "base_model:finetune:meta-llama/Llama-3.1-8B-Instruct", "license:llama3.1", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2024-12-11T10:19:43Z
2024-12-19T12:53:29+00:00
352
1
--- base_model: meta-llama/Llama-3.1-8B-Instruct language: - en - zh - vi - id - th - fil - ta - ms - km - lo - my library_name: transformers license: llama3.1 pipeline_tag: text-generation --- <div> <img src="llama_3.1_8b_sea-lion_v3_base_banner.png"/> </div> # Llama3.1 8B CPT SEA-LIONv3 SEA-LION is a collection of Large Language Models (LLMs) which have been pretrained and instruct-tuned for the Southeast Asia (SEA) region. Llama3.1 8B CPT SEA-LIONv3 Base is a multilingual model which has undergone continued pre-training on approximately **200B** tokens across 11 SEA languages: Burmese, Chinese, English, Filipino, Indonesian, Khmer, Lao, Malay, Tamil, Thai and Vietnamese. SEA-LION stands for <i>Southeast Asian Languages In One Network</i>. - **Developed by:** Products Pillar, AI Singapore - **Funded by:** Singapore NRF - **Model type:** Decoder - **Languages supported:** Burmese, Chinese, English, Filipino, Indonesian, Khmer, Lao, Malay, Tamil, Thai, Vietnamese - **License:** [Llama 3.1 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/LICENSE) ## Model Details ### Model Description We performed continued pre-training in English and SEA languages on [Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct), a decoder model using the Llama 3.1 architecture, to create Llama3.1 8B CPT SEA-LIONv3 Base. For tokenisation, the model employs the default tokenizer used in Llama 3.1 8B Instruct. ### Benchmark Performance We evaluated the Llama3.1 8B CPT SEA-LIONv3 base model on general language capabilities and constraint-following behaviour. #### General Language Capabilities and Constraint-following Behaviour For the evaluation of general language capabilities, we employed the [SEA-HELM (also known as BHASA) evaluation benchmark](https://arxiv.org/abs/2309.06085v2) across a variety of tasks. 
These tasks include Question Answering (QA), Sentiment Analysis (Sentiment), Toxicity Detection (Toxicity), Translation in both directions (Eng>Lang & Lang>Eng), Abstractive Summarisation (Abssum), Causal Reasoning (Causal) and Natural Language Inference (NLI). Note: SEA-HELM is implemented using prompts to elicit answers in a strict format. For all tasks, the model is expected to provide an answer tag from which the answer is automatically extracted. For tasks where options are provided, the answer should comprise one of the pre-defined options. The scores for each task are normalised to account for baseline performance due to random chance. The evaluation was done **five-shot** with native prompts on a sample of 100-1000 instances for each dataset. Following the implementation of IFEval in the OpenLLM leaderboard, we also implement SEA-IFEval to provide a comparison of the ability of the model to follow specific constraints in English and in SEA languages. **SEA-IFEval** Based on [IFEval](https://arxiv.org/abs/2311.07911), the linguists and native speakers in the team worked together to filter, localise and translate the datasets into the respective target languages to ensure that the examples remained reasonable, meaningful and natural. SEA-IFEval evaluates a model's ability to adhere to constraints provided in the prompt, for example beginning a response with a specific word/phrase or answering with a certain number of sections. Additionally, accuracy is normalised by the proportion of responses in the correct language (if the model performs the task correctly but responds in the wrong language, it is judged to have failed the task). For more details on Llama3.1 8B CPT SEA-LIONv3 base benchmark performance, please refer to the SEA-HELM leaderboard, https://leaderboard.sea-lion.ai/. 
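The two adjustments described above — rescaling against the random-chance baseline, and discounting answers given in the wrong language — can be sketched as follows. This only illustrates the stated idea; the exact SEA-HELM normalisation formulas are not given here and may differ:

```python
def normalise_for_chance(accuracy, baseline):
    """Map raw accuracy so that random guessing scores 0 and a perfect
    run scores 1, flooring at 0."""
    return max(0.0, (accuracy - baseline) / (1.0 - baseline))

def language_adjusted(accuracy, correct_language_rate):
    """A correct answer in the wrong language counts as a failure, so raw
    accuracy is scaled by the share of in-language responses."""
    return accuracy * correct_language_rate

# A four-option multiple-choice task: 55% raw accuracy against 25% chance.
print(normalise_for_chance(0.55, 0.25))  # ~0.4
print(language_adjusted(0.80, 0.95))     # ~0.76
```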
## Technical Specifications ### Infrastructure Llama3.1 8B CPT SEA-LIONv3 was trained using [MosaicML Composer](https://github.com/mosaicml/composer) on the following hardware: | Training Details | Llama3.1 8B CPT SEA-LIONv3 | |-----------------------|:--------------------------:| | AWS p5e.48xlarge | 8 instances | | Nvidia H200 140GB GPU | 64 | | Training Duration | 136 Hours | ### Configuration | HyperParameter | Llama3.1 8B CPT SEA-LIONv3 | |-------------------|:------------------------:| | Precision | bfloat16 | | Optimizer | decoupled_adamw | | Scheduler | weight_stable_decay | | Learning Rate | 1.0e-5 | | Global Batch Size | 512 | ## Data Llama3.1 8B CPT SEA-LIONv3 base model was continued pre-trained on 200B tokens of the following data: | Language | Source | Total Tokens (B) | Percentage (%) | Total percentage (%) | | ------------------------ | -------------------------------------- | ---------------- | -------------- | -------------------- | | Code | StackV2 | 40 | 20 | 20 | | English | Dolma | 37.5 | 18.75 | 25 | | | Fineweb-Edu | 7.5 | 3.75 | | | Others | 5 | 2.5 | | Chinese | SEA-LION Pile v1 | 12 | 6 | 13 | | | Others | 14 | 7 | | Vietnamese | SEA-LION Pile v1 | 8.4 | 4.2 | 13 | | | VinBigData | 16 | 8 | | | Others | 1.6 | 0.8 | | Indonesian | SEA-LION Pile v1 | 7 | 3.5 | 13 | | | SEA-LION Pile v2 | 7 | 3.5 | | | Others | 12 | 6 | | Thai | SEA-LION Pile v1 | 10.7 | 5.35 | 10 | | | WangChanBERTa | 8.5 | 4.25 | | | Others | 0.8 | 0.4 | | Filipino - Malay - Tamil | SEA-LION Pile v1, AI4Bharat Sangraha | 4.28 | 2.14 | 3 | | | Others | 1.72 | 0.86 | | Khmer - Lao - Burmese | SEA-LION Pile v1 | 5.2 | 2.6 | 3 | | | Others | 0.8 | 0.4 | Note: - All token counts are counted using Llama 3.1 8B Instruct tokenizer - SEA-LION Pile v1 is processed from Common Crawl WET, which is published [here](https://huggingface.co/datasets/aisingapore/sea-lion-pile). The cutoff date of this version is September 2020. 
- SEA-LION Pile v2 is processed from Common Crawl WARC from October 2020 to April 2024. - Tamil data from Sangraha is published [here](https://huggingface.co/datasets/ai4bharat/sangraha). The paper can be found [here](https://arxiv.org/abs/2403.06350). - Tamil news is sourced with permission from [Seithi](https://seithi.mediacorp.sg/) ## Call for Contributions We encourage researchers, developers, and language enthusiasts to actively contribute to the enhancement and expansion of SEA-LION. Contributions can involve identifying and reporting bugs, sharing pre-training, instruction, and preference data, improving documentation usability, proposing and implementing new model evaluation tasks and metrics, or training versions of the model in additional Southeast Asian languages. Join us in shaping the future of SEA-LION by sharing your expertise and insights to make these models more accessible, accurate, and versatile. Please check out our GitHub for further information on the call for contributions. ## The Team Chan Adwin, Cheng Nicholas, Choa Esther, Huang Yuli, Hulagadri Adithya Venkatadri, Lau Wayne, Lee Chwan Ren, Leong Wai Yi, Leong Wei Qi, Limkonchotiwat Peerat, Liu Bing Jie Darius, Montalan Jann Railey, Ng Boon Cheong Raymond, Ngui Jian Gang, Nguyen Thanh Ngan, Ong Brandon, Ong Tat-Wee David, Ong Zhi Hao, Rengarajan Hamsawardhini, Siow Bryan, Susanto Yosephine, Tai Ngee Chia, Tan Choon Meng, Teng Walter, Teo Eng Sipp Leslie, Teo Wei Yi, Tjhi William, Yeo Yeow Tong, Yong Xianbin ## Acknowledgements [AI Singapore](​​https://aisingapore.org/) is a national programme supported by the National Research Foundation, Singapore and hosted by the National University of Singapore. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of the National Research Foundation or the National University of Singapore. 
## Contact For more info, please contact us using this [SEA-LION Inquiry Form.](https://forms.gle/sLCUVb95wmGf43hi6) [Link to SEA-LION's GitHub repository.](https://github.com/aisingapore/sealion) ## Disclaimer This is the repository for the commercial base model. The model has _not_ been aligned for safety. Developers and users should perform their own safety fine-tuning and related security measures. In no event shall the authors be held liable for any claims, damages, or other liabilities arising from the use of the released weights and codes. ## References ### Thai Pre-Training Data Reference ```bibtex @misc{lowphansirikul2021wangchanberta, title={WangchanBERTa: Pretraining transformer-based Thai Language Models}, author={Lalita Lowphansirikul and Charin Polpanumas and Nawat Jantrakulchai and Sarana Nutanong}, year={2021}, eprint={2101.09635}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
[ "CHIA" ]
Purz/choose-your-own-adventure
Purz
text-to-image
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "migrated", "concept", "cover", "book", "purz", "choose your own adventure", "flux1.d", "cyoa", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
2024-10-15T02:40:16Z
2024-10-15T02:40:16+00:00
351
5
--- base_model: black-forest-labs/FLUX.1-dev license: other license_name: bespoke-lora-trained-license license_link: https://multimodal.art/civitai-licenses?allowNoCredit=True&allowCommercialUse=Image&allowDerivatives=True&allowDifferentLicense=True tags: - text-to-image - stable-diffusion - lora - diffusers - template:sd-lora - migrated - concept - cover - book - purz - choose your own adventure - flux1.d - cyoa instance_prompt: cy04 widget: - text: cy04, a book titled "The Search For More Buzz", an illustration of a man working at a computer in a dark room Text at the top "Choose from over 59 possible endings!" output: url: 34583909.jpeg - text: cy04, a book titled "Did I Leave The Oven On?", an illustration of a man sitting at work, looking worried, thought bubble above his head with an oven in it output: url: 34584570.jpeg - text: cy04, a book titled "The First Pill Isn't Doing Shit", an illustration of a man holding a bag of pills and sweating "Choose from 28 possible endings!", bottom text "By Ima Raver" output: url: 34584630.jpeg - text: cy04, text at the top "Choose from 58 possible endings!" a book titled "The Lost Seed Phrase", a nerd picking through a massive pile of garbage looking with a little thought bubble of a usb stick above his head output: url: 34584644.jpeg - text: cy04, a book titled "Fart Or Shart?", an illustration of a man sweating and anxious in his car in traffic, text blurb at the top "Choose from 2 possible endings!", bottom text "By I. 
Kant Holdit" output: url: 34584684.jpeg --- # Choose Your Own Adventure <Gallery /> ## Model description <p>Choose Your Own Adventure - LoRA (Flux.1 D)</p><p></p><p>You are the star, choose from 69 possible endings!</p><p></p><p>"cy04, a book titled "Fart Or Shart?", an illustration of a man sweating and anxious in his car in traffic"</p><p></p><p>Purz</p><p>Website: <a target="_blank" rel="ugc" href="https://www.purz.xyz/">https://www.purz.xyz/</a><br />Creative Exploration /w Purz: <a target="_blank" rel="ugc" href="https://www.youtube.com/@PurzBeats">https://www.youtube.com/@PurzBeats</a><br />Patreon: <a target="_blank" rel="ugc" href="https://www.patreon.com/Purz">https://www.patreon.com/Purz</a><br />Twitter/X: <a target="_blank" rel="ugc" href="https://x.com/PurzBeats">https://x.com/PurzBeats</a><br />Instagram: <a target="_blank" rel="ugc" href="https://www.instagram.com/purzbeats/">https://www.instagram.com/purzbeats/</a></p> ## Trigger words You should use `cy04` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/Purz/choose-your-own-adventure/tree/main) them in the Files & versions tab. ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch device = "cuda" if torch.cuda.is_available() else "cpu" pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to(device) pipeline.load_lora_weights('Purz/choose-your-own-adventure', weight_name='purz-cy04-choose_your_own_adventure.safetensors') image = pipeline('cy04, a book titled "Fart Or Shart?", an illustration of a man sweating and anxious in his car in traffic, text blurb at the top "Choose from 2 possible endings!", bottom text "By I. 
Kant Holdit"').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
[ "BLURB" ]
BeaverLegacy/Coomand-R-35B-v1-GGUF
BeaverLegacy
null
[ "gguf", "not-for-all-audiences", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
2024-05-01T02:56:28Z
2024-05-05T18:28:41+00:00
350
22
--- license: cc-by-nc-4.0 tags: - not-for-all-audiences --- Requanted ``` Q2K Old: 8.8994 +/- 0.12191 New: 7.5810 +/- 0.10530 ``` --- [BeaverAI](https://huggingface.co/BeaverAI) team: Drummer, ToastyPigeon, xzuyn, MarsupialAI, Twistedshadows, Jeb Carter, and concedo ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/HjVYV2h_YTL9P-insb7fz.png) We proudly present... # Coomand-R 35B v1 > THIS FINETUNED MODEL HAS BEEN RATED... > ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/wjK3SK1WevuR9i80Jh31B.png) > *We've enhanced the underdog with a rich and refreshing **coomand** of eRP. Finetuned by yours truly.* ## Original https://huggingface.co/TheDrummer/Coomand-R-35B-v1 IMATRIX: https://huggingface.co/MarsupialAI/Coomand-R-35B-v1_iMatrix_GGUF ## What's New? (Other than the changes documented in [Moistral v3](https://huggingface.co/TheDrummer/Moistral-11B-v3)) - Trained with stronger instructions - A larger dataset with more diverse, wacky genres! # 👨‍❤️‍👨 🐶 🐍 🫃 ⚧️ 🐆 🐙 🦕 ## Usage - Use Command-R format *(Alpaca kinda works btw)* - You can use instruct... - as a character: "Go here" - as a narrator: "Go here", Drummer says - as a director: Drummer tells him to go here - as a lazy ass: go here - Coomand-R is optimized for the Novel / Story format. - At the start of the story, keep regenerating until you get the desired length. Future AI responses will eventually match the length. - Issue: Manually continued generation may create formatting issues. Either regenerate the entire response or fix the formatting. - See samples below. ## Samples of Coomand-R 35B SOME nsfw. <details> <summary>Coomand-R Sample (2-shot) (Excluding Story Context) Generation: 512</summary> <blockquote>The prince gives her a show</blockquote> The sorcerer prince gave out a low chuckle, his eyes narrowing in amusement. "Oh, don't be so dramatic, my dear," he replied with a smirk. 
"Surely you know by now that I like to indulge in pleasure whenever possible. And believe me, there is nothing boring about what we're doing right now." He gestured towards the girl still kneeling before him, who had resumed her task with renewed vigor. Princess Lilia could feel her anger simmering beneath the surface, but she knew better than to rise to the bait. Instead, she tried to keep her voice level and composed as she spoke. "I understand your needs, Your Highness," she answered calmly. "But surely you can appreciate how difficult it must be for me, standing here while you... indulge yourself." There was a glint in the sorcerer prince's eyes as he regarded Princess Lilia, an almost mischievous spark that made her skin crawl. "Do you find this uncomfortable?" he asked, his tone deliberately provocative. "Are you jealous of our little plaything here?" Princess Lilia took a deep breath, trying to maintain her composure despite the burning anger building within her. "It's not my place to judge you, Your Highness," she said through gritted teeth. "But I'm here on behalf of Iliac, and I have certain responsibilities to uphold." The sorcerer prince chuckled again, leaning back against his throne with an air of nonchalance. "And yet, you stand here, watching us with fascination and desire," he taunted. "You cannot deny the truth in your heart, my beautiful hostage." Princess Lilia felt her cheeks flush with embarrassment, but she refused to let him see her vulnerability. She turned away from him abruptly, unable to bear the weight of his gaze any longer. "You are a cruel man, Your Highness," she muttered under her breath. <blockquote>The kneeling girl gets f#####</blockquote> The sorcerer prince smiled at Princess Lilia's comment, his eyes gleaming with amusement. "Oh, come now, my dear," he replied, his voice oozing with arrogance. "You're not going to tell me that you've never fantasized about being in that young lady's place? 
About submitting to my every desire, letting me use you however I please?" He paused for effect, his words hanging heavily in the air between them. Princess Lilia wanted nothing more than to turn around and slap the smug grin off of the prince's face, but instead she forced herself to remain calm and composed. "I am a princess of Iliac," she responded coolly, her voice steady despite the anger simmering beneath the surface. "I do not submit to anyone, least of all to someone as arrogant and self-absorbed as yourself." The sorcerer prince laughed loudly at this, his mirth ringing through the throne room like a mocking echo. "Arrogant? Self-absorbed?" he chortled, shaking his head as if he couldn't believe what he was hearing. "My dear Princess, how little you know me!" With a wave of his hand, he motioned towards the kneeling girl, who had been waiting patiently for her next command. "Come here, my pet," he said, his tone laced with authority. The girl, who Princess Lilia now noticed had been completely naked except for the elegant black silk robes draped over her slender body, obediently scrambled towards the prince on all fours. She crawled up onto his lap, her movements graceful and seductive, and began to kiss him passionately while reaching behind her back to unfasten her robes. As soon as they fell away, revealing her perky ####### and slender waist, she turned around and lowered herself slowly onto the prince's erect ####. The prince threw his head back in pleasure, his eyes rolling upwards as he savored every inch of her tight, wet passage surrounding him. Princess Lilia watched in horror as the prince and the girl continued their carnal display right in front of her, the sound of their moans and gasps echoing off the walls of the throne room. She tried to look away, but something about the raw, animalistic nature of their coupling kept drawing her gaze back to them. "Look at us, Princess," the prince growled, his voice deep and commanding. 
"We are two people caught in the throes of passion, surrendered to the power of our desires." He reached out to touch the girl's face, tracing his fingers along her cheek as she rode him diligently. "And we do not apologize for it." </details> <details> <summary>Coomand-R Sample (2-shot) (No Context) Generation: 512</summary> <blockquote>Write a smu##y story about a man in the apocalypse</blockquote> John woke up to an empty world. The once bustling city had been reduced to a ghost town, with not a soul in sight. He got up and looked around, taking in the post-apocalyptic scenery. The buildings were crumbling, the streets littered with debris. It was a far cry from the vibrant metropolis he used to call home. As John walked through the desolate streets, he couldn't help but feel a sense of loneliness. The world as he knew it was gone, and there was no one left to turn to. But then, in the distance, he spotted a figure. A woman. She was standing on the outskirts of town, her figure outlined against the setting sun. John approached her cautiously, his heart pounding in his chest. As he got closer, he could see that she was young, probably in her early twenties. Her hair was a mess, and she was wearing tattered clothes that did little to conceal her curvaceous body. "Hello?" John called out, his voice echoing through the deserted streets. The woman turned towards him, her eyes wide with surprise. "You're... you're alive?" she asked, her voice trembling. "Yeah," John replied, his throat dry. "I'm alive." The two of them stood there for a moment, taking each other in. They were the last remaining humans on Earth, and the reality of their situation hit them like a ton of bricks. "My name's Sarah," the woman finally said, her voice softer now. "What's yours?" "John," he replied. "Pleased to meet you, I guess." Sarah laughed, a bitter sound that echoed through the empty city. "Pleased to meet you too," she said, her eyes glinting mischievously. "So, what do we do now?" 
John paused, considering the question. "Well, we could try to rebuild," he suggested. "Start over, make this place livable again." Sarah nodded, her expression serious. "That sounds like a plan," she said. "But first..." She stepped forward, her eyes locked on John's. "I think we deserve some fun. After all, we're the only two people left on Earth." John felt his pulse quicken as he realized where this was going. He couldn't help but agree with her - they deserved some pleasure in this bleak world. <blockquote>Time to repopulate the earth!</blockquote> With a grin, John approached Sarah, his body tense with desire. She met him halfway, her lips parting in anticipation. The two of them collided in a passionate kiss, their tongues entwining as they explored each other's mouths. Their hands roamed freely over one another's bodies, mapping out every curve and crevice. As their passion grew, John guided Sarah towards a nearby building, their kisses becoming more desperate. They stumbled inside, their clothes falling to the floor in a heap as they frantically undressed one another. Sarah's ####### were glorious, perky and firm with dark areolas and ni##### that hardened under John's touch. His #### strained against his pants, begging for release as he finally freed it, the veiny member standing at attention. Sarah gasped when she saw it, her eyes widening in awe. "You're huge," she whispered, reaching out to wrap her slender fingers around the thick shaft. John groaned, his hips bucking slightly as he felt her delicate touch. "You have no idea how long I've wanted this," he confessed, his voice rough with lust. Sarah smiled, a wicked glint in her eye. "Well, let's not keep you waiting any longer," she purred, sinking down to her knees in front of him. She wasted no time, taking his #### into her mouth with a hungry desperation that drove John wild. Her head bobbed up and down, her tongue swirling around the sensitive tip as she devoured him with practiced skill. 
John gripped her hair gently, guiding her rhythm as she brought him closer and closer to the edge. But before he could reach climax, Sarah pulled away, leaving him panting and desperate for more. "My turn now," she said, a sultry smile playing on her lips. She turned around, presenting her perfect ### to him as she reached back to spread her cheeks, revealing her dripping wet #####. John didn't need any further invitation. He positioned himself behind her, guided his cock to the entrance of her soaked #, and thrust forward with one powerful motion. Sarah cried out, her body shaking as she felt him fill her completely. John grabbed her hips, his rhythm relentless as he pounded into her from behind. She moaned loudly, her nails scratching against the wall as she clung onto it for support. </details> <details> <summary>Coomand-R Sample SFW ("write a story") Generation: 512</summary> <blockquote>Write a story about a man in the apocalypse</blockquote> The world was ending. At least that's what it felt like to Joe as he walked through the deserted streets of his neighborhood. The once bustling city had become a ghost town, with not a soul in sight. The only sounds were the distant screams and sirens echoing from downtown. Joe had been through a lot in his life, but nothing could have prepared him for this. He'd always been a bit of a loner, keeping to himself and avoiding relationships. It was easier that way, he thought, because then he wouldn't have to deal with the pain of losing someone. But now, as he surveyed the empty streets, Joe realized that his isolation might just be what saves him. He'd stocked up on supplies before the outbreak hit, knowing that he'd need to hunker down and ride out the storm. His apartment was well-equipped with non-perishable food, bottled water, and even a few luxury items to help pass the time. 
He had books, movies, and games to keep him entertained, and most importantly, he had a secure location where he could hide from whatever horrors were happening outside. As Joe made his way back to his building, he couldn't shake the feeling that something wasn't right. The silence was almost deafening, and the lack of any sign of life was unnerving. He quickened his pace, eager to reach the safety of his home. But as he rounded the corner to his street, Joe saw something that stopped him in his tracks. A figure was standing near the entrance to his apartment complex - a figure that shouldn't have been there. "Hello?" Joe called out tentatively, his heart pounding in his chest. "Who's there?" The figure turned towards him, and Joe's blood ran cold as he recognized the pale skin, bloodshot eyes, and ragged clothing of an infected person. Instinctively, Joe reached for the pistol tucked into his waistband, but before he could draw it, the infected person charged at him with a feral growl. Joe's training kicked in, and he fired two quick shots. The first one missed, but the second one found its mark, hitting the infected person squarely in the forehead. They crumpled to the ground, their body twitching for a moment before going still. Shaken by the encounter, Joe hurried inside his apartment building, locking the door behind him. </details> ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/Ll8CA5RR7ugTi72P2HBb8.png) # Thanks for checking! ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/4_fn9FNj3KuwRmIbgwBEQ.png) SIAYN-v6
[ "BEAR" ]
dmis-lab/meerkat-7b-v1.0
dmis-lab
text-generation
[ "transformers", "safetensors", "mistral", "text-generation", "medical", "small LM", "instruction-tuned", "usmle", "chain-of-thought", "synthetic data", "conversational", "arxiv:2404.00376", "arxiv:2009.13081", "arxiv:2402.18060", "arxiv:2203.14371", "arxiv:2009.03300", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2024-04-03T12:37:36Z
2024-07-17T22:54:04+00:00
348
19
---
license: cc-by-nc-4.0
pipeline_tag: text-generation
tags:
- medical
- small LM
- instruction-tuned
- usmle
- chain-of-thought
- synthetic data
---

# Meerkat-7B (Version 1.0)

<center><img src = "https://cdn-uploads.huggingface.co/production/uploads/5efbdc4ac3896117eab961a9/IH0nR9HxYwNvrJBjP2dYQ.png" width="200" height="200"></center>

🚀 Meerkat-7B-v1.0 is an instruction-tuned medical AI system that surpasses the passing threshold of 60% for the United States Medical Licensing Examination (USMLE) for the first time among all 7B-parameter models. The model was trained using our new synthetic dataset consisting of high-quality chain-of-thought reasoning paths sourced from 18 medical textbooks, along with diverse instruction-following datasets. This equips the model with the high-level medical reasoning capabilities required for solving complex medical problems. For further insights into our model, please refer to our paper!

📄 **Paper**: [Small Language Models Learn Enhanced Reasoning Skills from Medical Textbooks](https://arxiv.org/abs/2404.00376)

## Quick Start

The input query should always end with "ASSISTANT:" as shown below.

```
query = "USER: What should I do when I get cold? ASSISTANT:"
```

We can run the model with the [apply_chat_template](https://huggingface.co/docs/transformers/main/chat_templating) function as follows:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # cuda or cpu
checkpoint = "dmis-lab/meerkat-7b-v1.0"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(
    checkpoint,
    torch_dtype=torch.bfloat16,  # You can choose to use this when there's not enough GPU memory available.
)

# Multi-turn dialogue example
messages = [
    {"role": "system", "content": "You are a helpful doctor or healthcare professional. Guide the conversation to provide useful, complete, and scientifically-grounded answers to user questions. You have the option to compose a concise, single-turn conversation if the user's input is comprehensive enough to provide accurate answers. However, if essential details are missing, you should engage in a multi-turn dialogue, asking follow-up questions to gather a thorough medical history and records.\n\n"},
    {"role": "user", "content": "Hello, doctor. I'm really concerned about my 10-year-old son. We recently discovered a painless mass in his left testicle, so we brought him to the pediatrician."},
    {"role": "assistant", "content": "I understand your concern. Let's gather some more information. Has your son experienced any other symptoms along with the mass?"},
    {"role": "user", "content": "Other than the mass, my son hasn't shown any symptoms. He's been his usual self, playing and eating normally."}
]

encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")
model_inputs = encodeds.to(device)
model.to(device)

generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True, pad_token_id=tokenizer.eos_token_id)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```

## Prompt Details

To reproduce the results reported in our paper, it is advisable to use the identical system messages used during model training. Please refer to the guidelines detailed below.

### USMLE or Clinical Cases

When solving USMLE-style questions such as [MedQA](https://arxiv.org/abs/2009.13081) and [MedBullets](https://arxiv.org/abs/2402.18060), or dealing with complex clinical cases like the [JAMA Clinical Challenge](https://arxiv.org/abs/2402.18060), use the following system message:

```
messages = [
    {"role": "system", "content": "The following is a multiple-choice question about medical knowledge. Solve this in a step-by-step fashion, starting by summarizing the available information. Output a single option from the given options as the final answer. You are strongly required to follow the specified output format; conclude your response with the phrase \"the answer is ([option_id]) [answer_string]\".\n\n"},
    {"role": "user", "content": "Two weeks after undergoing an emergency cardiac catheterization with stenting for unstable angina pectoris, a 61-year-old man has decreased urinary output and malaise. He has type 2 diabetes mellitus and osteoarthritis of the hips. Prior to admission, his medications were insulin and naproxen. He was also started on aspirin, clopidogrel, and metoprolol after the coronary intervention. His temperature is 38°C (100.4°F), pulse is 93/min, and blood pressure is 125/85 mm Hg. Examination shows mottled, reticulated purplish discoloration of the feet. Laboratory studies show:\nHemoglobin count 14 g/dL\nLeukocyte count 16,400/mm3\nSegmented neutrophils 56%\nEosinophils 11%\nLymphocytes 31%\nMonocytes 2%\nPlatelet count 260,000/mm3\nErythrocyte sedimentation rate 68 mm/h\nSerum\nUrea nitrogen 25 mg/dL\nCreatinine 4.2 mg/dL\nRenal biopsy shows intravascular spindle-shaped vacuoles. Which of the following is the most likely cause of this patient's symptoms?\" (A) Renal papillary necrosis (B) Cholesterol embolization (C) Eosinophilic granulomatosis with polyangiitis (D) Polyarteritis nodosa"},
]
```

The model generates reasoning paths to solve the problem and then sequentially provides the predicted answers. Since the model ends its response with "the answer is," it is straightforward to extract the predicted answer for comparison with the actual answer.

### Multiple-choice Exams

For other types of multiple-choice exams such as [MedMCQA](https://arxiv.org/abs/2203.14371) or [MMLU](https://arxiv.org/abs/2009.03300), use the following simple system message:

```
messages = [
    {"role": "system", "content": "Answer the multiple-choice question about medical knowledge.\n\n"},
    {"role": "user", "content": "In a Robertsonian translocation fusion occurs at the: (A) telomeres. (B) centromeres. (C) histones. (D) ends of the long arms."},
]
```

### Other Use Cases

Our model was trained using the [AlpaCare](https://github.com/xzhang97666/alpacare) instruction dataset, comprising 52K examples, to enhance its generalization capabilities across diverse user prompts. Feel free to design and test your prompts, and to share your thoughts with us, whether the model exceeds expectations or falls short!

## Evaluation

We tested models on seven medical benchmarks: [MedQA](https://arxiv.org/abs/2009.13081), [USMLE sample test](https://www.usmle.org/prepare-your-exam), [Medbullets-4](https://arxiv.org/abs/2402.18060), [Medbullets-5](https://arxiv.org/abs/2402.18060), [MedMCQA](https://arxiv.org/abs/2203.14371), [MMLU-Medical](https://arxiv.org/abs/2009.03300), and [JAMA Clinical Challenge](https://arxiv.org/abs/2402.18060).

| **Model** | **Average** | **MedQA** | **USMLE** | **Medbullets-4** | **Medbullets-5** | **MedMCQA** | **MMLU-Medical** | **JAMA** |
|:--------------------------------|:-----------:|:---------:|:---------:|:----------------:|:----------------:|:-----------:|:----------------:|:--------:|
| GPT-4 | 75.2 | 81.4 | 86.6 | 68.8 | 63.3 | 72.4 | 87.1 | 67.1 |
| GPT-3.5 | 54.1 | 53.6 | 58.5 | 51.0 | 47.4 | 51.0 | 67.3 | 50.1 |
| MediTron-70B (Ensemble, 5 runs) | - | 70.2 | - | - | - | 66.0 | 78.0 | - |
| *Open-source (7B)* | | | | | | | | |
| MediTron-7B | 50.8 | 50.2 | 44.6 | 51.1 | 45.5 | 57.9 | 56.7 | 49.3 |
| BioMistral-7B | 54.4 | 54.3 | 51.4 | 52.3 | 48.7 | **61.1** | 64.6 | 48.6 |
| Meerkat-7B | 62.4 | 70.6 | 70.3 | 58.7 | 52.9 | 60.6 | 70.5 | 53.1 |
| Meerkat-7B (Ensemble, 5 runs) | **64.2** | **74.3** | **71.4** | **61.0** | **55.3** | 60.7 | **72.4** | **54.0** |

Please note that the scores in MMLU-Medical were calculated based on the average accuracies across six medical-related subjects in the original MMLU benchmark; each result for a single subject is presented below.
| **Model** | **Average** | **Clinical Knowledge** | **Medical Genetics** | **Anatomy** | **Professional Medicine** | **College Biology** | **College Medicine** |
|:--------------------------------|:-----------:|:----------------------:|:--------------------:|:-----------:|:-------------------------:|:-------------------:|:--------------------:|
| GPT-4 | 87.1 | 86.4 | 92.0 | 80.0 | 93.8 | 93.8 | 76.3 |
| GPT-3.5 | 67.3 | 68.7 | 68.0 | 60.7 | 69.9 | 72.9 | 63.6 |
| MediTron-70B (Ensemble, 5 runs) | 78.0 | 75.5 | 85.9 | 69.4 | 82.3 | 86.7 | 68.0 |
| *Open-source (7B)* | | | | | | | |
| MediTron-7B | 56.7 | 57.7 | 63.8 | 56.9 | 56.0 | 57.1 | 48.9 |
| BioMistral-7B | 64.6 | 59.9 | 64.0 | 56.5 | 60.4 | 59.0 | 54.7 |
| Meerkat-7B | 70.5 | 71.6 | 74.8 | 63.2 | 77.3 | 70.8 | **65.2** |
| Meerkat-7B (Ensemble, 5 runs) | **72.4** | **74.1** | **79.4** | **64.1** | **78.8** | **75.8** | 62.4 |

## Model Architecture

Our model was based on [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) because of its accuracy and run-time efficiency.

## Training Data

We plan to release our training dataset publicly.

## Reference

Please see the information below to cite our paper.

```bibtex
@article{kim2024small,
  title={Small language models learn enhanced reasoning skills from medical textbooks},
  author={Kim, Hyunjae and Hwang, Hyeon and Lee, Jiwoo and Park, Sihyeon and Kim, Dain and Lee, Taewhoo and Yoon, Chanwoong and Sohn, Jiwoong and Choi, Donghee and Kang, Jaewoo},
  journal={arXiv preprint arXiv:2404.00376},
  year={2024}
}
```

## Contact

Feel free to email `[email protected]` if you have any questions.
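Because the USMLE-style system message forces responses to end with "the answer is ([option_id]) [answer_string]", the answer-extraction step mentioned above can be done with a small regex helper. This is an illustrative sketch only: the helper name and the sample response string below are our own, not part of the card.

```python
import re
from typing import Optional

def extract_answer(response: str) -> Optional[str]:
    """Pull the option letter out of a response that ends with the
    'the answer is ([option_id]) [answer_string]' phrase enforced by
    the USMLE-style system message."""
    match = re.search(r"the answer is \(([A-Za-z])\)", response)
    return match.group(1).upper() if match else None

# A stand-in response in the enforced format (not real model output).
response = (
    "Intravascular spindle-shaped vacuoles on renal biopsy after cardiac "
    "catheterization indicate atheroemboli; the answer is (B) Cholesterol embolization."
)
print(extract_answer(response))  # B
```

The extracted letter can then be compared directly against the gold option when scoring a benchmark run.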
[ "MEDQA" ]
mradermacher/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b-GGUF
mradermacher
null
[ "transformers", "gguf", "uncensored", "en", "dataset:ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered", "dataset:kaiokendev/SuperCOT-dataset", "dataset:neulab/conala", "dataset:yahma/alpaca-cleaned", "dataset:QingyiSi/Alpaca-CoT", "dataset:timdettmers/guanaco-33b", "dataset:JosephusCheung/GuanacoDataset", "base_model:Monero/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b", "base_model:quantized:Monero/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b", "license:other", "endpoints_compatible", "region:us" ]
2024-08-04T20:11:58Z
2024-08-05T05:14:52+00:00
348
2
---
base_model: Monero/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b
datasets:
- ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered
- kaiokendev/SuperCOT-dataset
- neulab/conala
- yahma/alpaca-cleaned
- QingyiSi/Alpaca-CoT
- timdettmers/guanaco-33b
- JosephusCheung/GuanacoDataset
language:
- en
library_name: transformers
license: other
tags:
- uncensored
quantized_by: mradermacher
---

## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->

static quants of https://huggingface.co/Monero/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b

<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b-i1-GGUF

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b-GGUF/resolve/main/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b.Q2_K.gguf) | Q2_K | 12.1 | |
| [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b-GGUF/resolve/main/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b.IQ3_XS.gguf) | IQ3_XS | 13.4 | |
| [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b-GGUF/resolve/main/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b.IQ3_S.gguf) | IQ3_S | 14.2 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b-GGUF/resolve/main/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b.Q3_K_S.gguf) | Q3_K_S | 14.2 | |
| [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b-GGUF/resolve/main/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b.IQ3_M.gguf) | IQ3_M | 15.0 | |
| [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b-GGUF/resolve/main/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b.Q3_K_M.gguf) | Q3_K_M | 15.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b-GGUF/resolve/main/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b.Q3_K_L.gguf) | Q3_K_L | 17.4 | |
| [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b-GGUF/resolve/main/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b.IQ4_XS.gguf) | IQ4_XS | 17.6 | |
| [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b-GGUF/resolve/main/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b.Q4_K_S.gguf) | Q4_K_S | 18.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b-GGUF/resolve/main/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b.Q4_K_M.gguf) | Q4_K_M | 19.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b-GGUF/resolve/main/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b.Q5_K_S.gguf) | Q5_K_S | 22.5 | |
| [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b-GGUF/resolve/main/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b.Q5_K_M.gguf) | Q5_K_M | 23.1 | |
| [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b-GGUF/resolve/main/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b.Q6_K.gguf) | Q6_K | 26.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b-GGUF/resolve/main/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b.Q8_0.gguf) | Q8_0 | 34.7 | fast, best quality |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.

<!-- end -->
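As a minimal illustration of the multi-part step mentioned under Usage: split GGUF downloads are joined byte-for-byte, in part order, with `cat`. The filenames below are stand-ins created just for the demonstration; a real split download would use part names following a pattern like `*.gguf.part1of2`.

```shell
# Stand-in "part" files for demonstration only; with a real download these
# would be the large .part1of2 / .part2of2 files from the repo.
printf 'first-half-' > demo.gguf.part1of2
printf 'second-half' > demo.gguf.part2of2

# Concatenate the parts, in order, into one usable GGUF file.
cat demo.gguf.part1of2 demo.gguf.part2of2 > demo.gguf

cat demo.gguf  # prints: first-half-second-half
```

Order matters: the parts must be listed `part1`, `part2`, … so the bytes land back in their original sequence.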
[ "MONERO" ]
xdatasi/xdata-finetune-deepseek-reason-test-medical
xdatasi
question-answering
[ "safetensors", "gguf", "llama", "question-answering", "en", "sl", "arxiv:1910.09700", "base_model:deepseek-ai/DeepSeek-R1", "base_model:quantized:deepseek-ai/DeepSeek-R1", "license:apache-2.0", "endpoints_compatible", "8-bit", "region:us", "conversational" ]
2025-02-22T23:00:56Z
2025-02-26T14:29:38+00:00
347
0
---
base_model:
- deepseek-ai/DeepSeek-R1
language:
- en
- sl
license: apache-2.0
pipeline_tag: question-answering
---

# Overview

xdata-finetune-deepseek-reason-test-medical is an advanced AI-driven medical assistant model designed to aid in medical diagnosis and therapy recommendations. It leverages a hybrid architecture combining LLaMA, GPT, and DeepSeek technologies to provide state-of-the-art reasoning for clinical scenarios. The model has been fine-tuned on a massive dataset of over 80 million medical records and diagnostic reports, giving it deep knowledge across a broad range of diseases and treatments.

This model is currently released for research and testing purposes only and is not intended for real-world clinical use. Users should treat its outputs as experimental and always verify them with qualified medical professionals.

# Model Details

- **Model Name: xdata-finetune-deepseek-reason-test-medical**
- **Architecture: Built on a blend of Meta's LLaMA and OpenAI's GPT large language model architectures, enhanced with DeepSeek reasoning technology for improved inference.**
- **Purpose: Developed as an AI medical assistant to support diagnostic reasoning and suggest therapeutic options based on input patient data or clinical scenarios.**
- **Training Data: Trained on an extensive corpus of 80+ million medical data records and diagnosis datasets, encompassing diverse conditions, patient histories, laboratory results, and treatment outcomes. This diverse training data provides the model with broad medical knowledge.**
- **Usage Limitations: For testing and research only – not production-ready. The model has not undergone clinical validation, so it should not be used for actual patient care or any mission-critical tasks.**
- **License: Open-source (available under an open license for the community to use, inspect, and improve).**
- **Team: Developed by a Slovenia-based team (XDATA.si) with expertise in medical AI and natural language processing.**

# Key Features

- **Advanced Reasoning: The model employs state-of-the-art AI inference techniques to analyze complex medical cases. It can interpret symptoms and medical notes to reason about possible diagnoses, much like a preliminary medical opinion.**
- **Deep Medical Knowledge: Having been trained on a vast and diverse medical dataset, the model has learned about a broad range of diseases, conditions, and treatments. It can recall medical facts and cross-reference symptoms with conditions effectively, providing context-rich insights.**
- **Therapy Recommendations: Based on a given diagnosis or set of symptoms, the model can suggest potential therapeutic approaches. These suggestions include common treatment plans, medication options, or further diagnostic tests that are often considered for similar cases in the data it was trained on.**
- **Ethical Use Only: This model is designed as a supportive tool for medical research and education. It is not intended for clinical use and should not replace professional medical judgment. All outputs (diagnoses or treatment suggestions) must be reviewed by licensed healthcare providers before any real-world application.**

# Limitations and Ethical Considerations

While xdata-finetune-deepseek-reason-test-medical demonstrates strong performance in medical reasoning tasks, it comes with important limitations and ethical safeguards:

- **Not Clinically Validated: The model's suggestions have not been verified in clinical trials or by regulatory bodies. Its accuracy and reliability in real medical scenarios are unknown. Do not use this model as a sole source for medical decision-making.**
- **Potential Biases: The model learns from historical medical records, which may contain biases or outdated practices. It might reflect the biases present in its training data (e.g., underrepresentation of certain patient groups or medical conditions) and could generate suggestions that favor common conditions over rare ones.**
- **Accuracy Limitations: AI models can sometimes produce incorrect or nonsensical answers, especially if given ambiguous or insufficient information. In a medical context, an incorrect diagnosis or treatment plan could be harmful. Always double-check the model's output against trusted medical sources and expertise.**
- **Ethical Use: This model should be used only for research, testing, or educational purposes to explore how AI might assist medical professionals. It is not a substitute for a certified doctor. Patients and users should never act on the model's advice without consulting a healthcare professional.**
- **Privacy and Data Handling: The model does not know any real patient's personal data beyond what was in the anonymized training set. Ensure that any patient information input into the model is properly anonymized to protect privacy. The development team has followed ethical guidelines to use de-identified data for training.**

# Call for Collaboration

We invite the Hugging Face community and medical AI researchers to test, provide feedback, and contribute to improving xdata-finetune-deepseek-reason-test-medical. Your insights can help refine the model's reasoning, address its limitations, and enhance its performance. If you encounter issues, have suggestions, or develop improvements (such as fine-tuning on additional data or refining prompts), please let us know. Join the discussion on the Hugging Face forum: Hugging Face Community.
Together, through open collaboration, we can advance this project toward a more robust and reliable AI medical assistant that benefits everyone in the healthcare community. --- license: cc tags: - medical - unsloth - trl - sft base_model: - deepseek-ai/DeepSeek-R1 - unsloth/DeepSeek-R1-Distill-Llama-8B-GGUF --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. 
--> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "MEDICAL DATA" ]
BioMistral/BioMistral-7B-DARE-GGUF
BioMistral
text-generation
[ "transformers", "gguf", "mistral", "text-generation", "mergekit", "merge", "dare", "medical", "biology", "en", "fr", "nl", "es", "it", "pl", "ro", "de", "dataset:pubmed", "arxiv:2311.03099", "arxiv:2306.01708", "arxiv:2402.10373", "base_model:BioMistral/BioMistral-7B", "base_model:merge:BioMistral/BioMistral-7B", "base_model:mistralai/Mistral-7B-Instruct-v0.1", "base_model:merge:mistralai/Mistral-7B-Instruct-v0.1", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us", "conversational" ]
2024-02-19T20:51:42Z
2024-02-19T22:46:19+00:00
344
10
---
base_model:
- BioMistral/BioMistral-7B
- mistralai/Mistral-7B-Instruct-v0.1
datasets:
- pubmed
language:
- en
- fr
- nl
- es
- it
- pl
- ro
- de
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
tags:
- mergekit
- merge
- dare
- medical
- biology
---

# BioMistral-7B-mistral7instruct-dare

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) as a base.

### Models Merged

The following models were included in the merge:

* [BioMistral/BioMistral-7B](https://huggingface.co/BioMistral/BioMistral-7B)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: mistralai/Mistral-7B-Instruct-v0.1
    # No parameters necessary for base model
  - model: BioMistral/BioMistral-7B
    parameters:
      density: 0.5
      weight: 0.5
merge_method: dare_ties
base_model: mistralai/Mistral-7B-Instruct-v0.1
parameters:
  int8_mask: true
dtype: bfloat16
```

<p align="center">
  <img src="https://huggingface.co/BioMistral/BioMistral-7B/resolve/main/wordart_blue_m_rectangle.png?download=true" alt="drawing" width="250"/>
</p>

# BioMistral: A Collection of Open-Source Pretrained Large Language Models for Medical Domains

**Abstract:** Large Language Models (LLMs) have demonstrated remarkable versatility in recent years, offering potential applications across specialized domains such as healthcare and medicine. Despite the availability of various open-source LLMs tailored for health contexts, adapting general-purpose LLMs to the medical domain presents significant challenges.
In this paper, we introduce BioMistral, an open-source LLM tailored for the biomedical domain, utilizing Mistral as its foundation model and further pre-trained on PubMed Central. We conduct a comprehensive evaluation of BioMistral on a benchmark comprising 10 established medical question-answering (QA) tasks in English. We also explore lightweight models obtained through quantization and model merging approaches. Our results demonstrate BioMistral's superior performance compared to existing open-source medical models and its competitive edge against proprietary counterparts. Finally, to address the limited availability of data beyond English and to assess the multilingual generalization of medical LLMs, we automatically translated and evaluated this benchmark into 7 other languages. This marks the first large-scale multilingual evaluation of LLMs in the medical domain. Datasets, multilingual evaluation benchmarks, scripts, and all the models obtained during our experiments are freely released.

**Advisory Notice!** Although BioMistral is intended to encapsulate medical knowledge sourced from high-quality evidence, it hasn't been tailored to effectively, safely, or suitably convey this knowledge within professional parameters for action. We advise refraining from utilizing BioMistral in medical contexts unless it undergoes thorough alignment with specific use cases and undergoes further testing, notably including randomized controlled trials in real-world medical environments. BioMistral 7B may possess inherent risks and biases that have not yet been thoroughly assessed. Additionally, the model's performance has not been evaluated in real-world clinical settings. Consequently, we recommend using BioMistral 7B strictly as a research tool and advise against deploying it in production environments for natural language generation or any professional health and medical purposes.

# 1. BioMistral models

**BioMistral** is a suite of Mistral-based further pre-trained open source models suited for the medical domains and pre-trained using textual data from PubMed Central Open Access (CC0, CC BY, CC BY-SA, and CC BY-ND). All the models are trained using the CNRS (French National Centre for Scientific Research) [Jean Zay](http://www.idris.fr/jean-zay/) French HPC.

| Model Name | Base Model | Model Type | Sequence Length | Download |
|:-------------------:|:----------------------------------:|:-------------------:|:---------------:|:-----------------------------------------------------:|
| BioMistral-7B | [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) | Further Pre-trained | 2048 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B) |
| BioMistral-7B-DARE | [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) | Merge DARE | 2048 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-DARE) |
| BioMistral-7B-TIES | [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) | Merge TIES | 2048 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-TIES) |
| BioMistral-7B-SLERP | [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) | Merge SLERP | 2048 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-SLERP) |

# 2. Quantized Models

| Base Model | Method | q_group_size | w_bit | version | VRAM GB | Time | Download |
|:-------------------:|:------:|:------------:|:-----:|:-------:|:-------:|:------:|:--------:|
| BioMistral-7B | FP16/BF16 | | | | 15.02 | x1.00 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B) |
| BioMistral-7B | AWQ | 128 | 4 | GEMM | 4.68 | x1.41 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-AWQ-QGS128-W4-GEMM) |
| BioMistral-7B | AWQ | 128 | 4 | GEMV | 4.68 | x10.30 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-AWQ-QGS128-W4-GEMV) |
| BioMistral-7B | BnB.4 | | 4 | | 5.03 | x3.25 | [HuggingFace](blank) |
| BioMistral-7B | BnB.8 | | 8 | | 8.04 | x4.34 | [HuggingFace](blank) |
| BioMistral-7B-DARE | AWQ | 128 | 4 | GEMM | 4.68 | x1.41 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-DARE-AWQ-QGS128-W4-GEMM) |
| BioMistral-7B-TIES | AWQ | 128 | 4 | GEMM | 4.68 | x1.41 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-TIES-AWQ-QGS128-W4-GEMM) |
| BioMistral-7B-SLERP | AWQ | 128 | 4 | GEMM | 4.68 | x1.41 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-SLERP-AWQ-QGS128-W4-GEMM) |

# 3. Using BioMistral

You can use BioMistral with [Hugging Face's Transformers library](https://github.com/huggingface/transformers) as follows.

Loading the model and tokenizer:

```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("BioMistral/BioMistral-7B")
model = AutoModel.from_pretrained("BioMistral/BioMistral-7B")
```

# 4. Supervised Fine-tuning Benchmark

| | Clinical KG | Medical Genetics | Anatomy | Pro Medicine | College Biology | College Medicine | MedQA | MedQA 5 opts | PubMedQA | MedMCQA | Avg. |
|:------------------------------------------|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| **BioMistral 7B** | 59.9 | 64.0 | 56.5 | 60.4 | 59.0 | 54.7 | 50.6 | 42.8 | 77.5 | 48.1 | 57.3 |
| **Mistral 7B Instruct** | **62.9** | 57.0 | 55.6 | 59.4 | 62.5 | <u>57.2</u> | 42.0 | 40.9 | 75.7 | 46.1 | 55.9 |
| | | | | | | | | | | | |
| **BioMistral 7B Ensemble** | <u>62.8</u> | 62.7 | <u>57.5</u> | **63.5** | 64.3 | 55.7 | 50.6 | 43.6 | 77.5 | **48.8** | 58.7 |
| **BioMistral 7B DARE** | 62.3 | **67.0** | 55.8 | 61.4 | **66.9** | **58.0** | **51.1** | **45.2** | <u>77.7</u> | <u>48.7</u> | **59.4** |
| **BioMistral 7B TIES** | 60.1 | <u>65.0</u> | **58.5** | 60.5 | 60.4 | 56.5 | 49.5 | 43.2 | 77.5 | 48.1 | 57.9 |
| **BioMistral 7B SLERP** | 62.5 | 64.7 | 55.8 | <u>62.7</u> | <u>64.8</u> | 56.3 | <u>50.8</u> | <u>44.3</u> | **77.8** | 48.6 | <u>58.8</u> |
| | | | | | | | | | | | |
| **MedAlpaca 7B** | 53.1 | 58.0 | 54.1 | 58.8 | 58.1 | 48.6 | 40.1 | 33.7 | 73.6 | 37.0 | 51.5 |
| **PMC-LLaMA 7B** | 24.5 | 27.7 | 35.3 | 17.4 | 30.3 | 23.3 | 25.5 | 20.2 | 72.9 | 26.6 | 30.4 |
| **MediTron-7B** | 41.6 | 50.3 | 46.4 | 27.9 | 44.4 | 30.8 | 41.6 | 28.1 | 74.9 | 41.3 | 42.7 |
| **BioMedGPT-LM-7B** | 51.4 | 52.0 | 49.4 | 53.3 | 50.7 | 49.1 | 42.5 | 33.9 | 76.8 | 37.6 | 49.7 |
| | | | | | | | | | | | |
| **GPT-3.5 Turbo 1106*** | 74.71 | 74.00 | 65.92 | 72.79 | 72.91 | 64.73 | 57.71 | 50.82 | 72.66 | 53.79 | 66.0 |

Supervised Fine-Tuning (SFT) performance of BioMistral 7B models compared to baselines, measured by accuracy (↑) and averaged across 3 random seeds of 3-shot. DARE, TIES, and SLERP are model merging strategies that combine BioMistral 7B and Mistral 7B Instruct. Best model in bold, and second-best underlined. *GPT-3.5 Turbo performances are reported from the 3-shot results without SFT.

# Citation BibTeX

Arxiv: [https://arxiv.org/abs/2402.10373](https://arxiv.org/abs/2402.10373)

```bibtex
@misc{labrak2024biomistral,
  title={BioMistral: A Collection of Open-Source Pretrained Large Language Models for Medical Domains},
  author={Yanis Labrak and Adrien Bazoge and Emmanuel Morin and Pierre-Antoine Gourraud and Mickael Rouvier and Richard Dufour},
  year={2024},
  eprint={2402.10373},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```

**CAUTION!** Both direct and downstream users need to be informed about the risks, biases, and constraints inherent in the model. While the model can produce natural language text, our exploration of its capabilities and limitations is just beginning. In fields such as medicine, comprehending these limitations is crucial. Hence, we strongly advise against deploying this model for natural language generation in production or for professional tasks in the realm of health and medicine.
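For intuition about the DARE merge used above: DARE drops a random fraction of each fine-tuned model's delta (its difference from the base weights) and rescales the surviving entries by 1/(1 − p) so the delta's expected value is preserved. A toy numpy sketch of just that drop-and-rescale step (illustrative only — mergekit's actual `dare_ties` implementation also performs TIES-style sign election and operates per tensor):

```python
import numpy as np

def dare_delta(base, finetuned, drop_p, rng):
    """Drop-And-REscale: keep each delta entry with prob (1 - drop_p), rescale by 1/(1 - drop_p)."""
    delta = finetuned - base
    mask = rng.random(delta.shape) >= drop_p  # True = keep this delta entry
    return base + (mask * delta) / (1.0 - drop_p)

rng = np.random.default_rng(0)
base = rng.normal(size=1000)
finetuned = base + rng.normal(scale=0.01, size=1000)  # small "task vector" on top of base

merged = dare_delta(base, finetuned, drop_p=0.5, rng=rng)

# With drop_p = 0, DARE reduces to the plain fine-tuned weights.
assert np.allclose(dare_delta(base, finetuned, 0.0, rng), finetuned)
```

The rescaling is what lets aggressive dropping (the paper explores high drop rates) leave the merged model's behaviour largely intact in expectation.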
[ "MEDQA", "PUBMEDQA" ]
Omartificial-Intelligence-Space/Arabic-MiniLM-L12-v2-all-nli-triplet
Omartificial-Intelligence-Space
sentence-similarity
[ "sentence-transformers", "safetensors", "bert", "mteb", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:557850", "loss:MatryoshkaLoss", "loss:MultipleNegativesRankingLoss", "ar", "dataset:Omartificial-Intelligence-Space/Arabic-NLi-Triplet", "arxiv:1908.10084", "arxiv:2205.13147", "arxiv:1705.00652", "base_model:sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2", "base_model:finetune:sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2", "license:apache-2.0", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
2024-06-25T17:56:53Z
2025-01-10T18:06:24+00:00
341
4
--- base_model: sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2 datasets: - Omartificial-Intelligence-Space/Arabic-NLi-Triplet language: - ar library_name: sentence-transformers license: apache-2.0 metrics: - pearson_cosine - spearman_cosine - pearson_manhattan - spearman_manhattan - pearson_euclidean - spearman_euclidean - pearson_dot - spearman_dot - pearson_max - spearman_max pipeline_tag: sentence-similarity tags: - mteb - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:557850 - loss:MatryoshkaLoss - loss:MultipleNegativesRankingLoss widget: - source_sentence: ذكر متوازن بعناية يقف على قدم واحدة بالقرب من منطقة شاطئ المحيط النظيفة sentences: - رجل يقدم عرضاً - هناك رجل بالخارج قرب الشاطئ - رجل يجلس على أريكه - source_sentence: رجل يقفز إلى سريره القذر sentences: - السرير قذر. - رجل يضحك أثناء غسيل الملابس - الرجل على القمر - source_sentence: الفتيات بالخارج sentences: - امرأة تلف الخيط إلى كرات بجانب كومة من الكرات - فتيان يركبان في جولة متعة - ثلاث فتيات يقفون سوية في غرفة واحدة تستمع وواحدة تكتب على الحائط والثالثة تتحدث إليهن - source_sentence: الرجل يرتدي قميصاً أزرق. sentences: - رجل يرتدي قميصاً أزرق يميل إلى الجدار بجانب الطريق مع شاحنة زرقاء وسيارة حمراء مع الماء في الخلفية. - كتاب القصص مفتوح - رجل يرتدي قميص أسود يعزف على الجيتار. - source_sentence: يجلس شاب ذو شعر أشقر على الحائط يقرأ جريدة بينما تمر امرأة وفتاة شابة. sentences: - ذكر شاب ينظر إلى جريدة بينما تمر إمرأتان بجانبه - رجل يستلقي على وجهه على مقعد في الحديقة. 
- الشاب نائم بينما الأم تقود ابنتها إلى الحديقة model-index: - name: SentenceTransformer based on sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2 results: - task: type: Retrieval dataset: name: MTEB MintakaRetrieval (ar) type: mintaka/mmteb-mintaka config: ar split: test revision: efa78cc2f74bbcd21eff2261f9e13aebe40b814e metrics: - type: main_score value: 12.493 - type: map_at_1 value: 5.719 - type: map_at_3 value: 8.269 - type: map_at_5 value: 9.172 - type: map_at_10 value: 9.894 - type: ndcg_at_1 value: 5.719 - type: ndcg_at_3 value: 9.128 - type: ndcg_at_5 value: 10.745 - type: ndcg_at_10 value: 12.493 - type: recall_at_1 value: 5.719 - type: recall_at_3 value: 11.621 - type: recall_at_5 value: 15.524 - type: recall_at_10 value: 20.926 - type: precision_at_1 value: 5.719 - type: precision_at_3 value: 3.874 - type: precision_at_5 value: 3.105 - type: precision_at_10 value: 2.093 - type: mrr_at_1 value: 5.7195 - type: mrr_at_3 value: 8.269 - type: mrr_at_5 value: 9.1723 - type: mrr_at_10 value: 9.8942 - task: type: Retrieval dataset: name: MTEB MIRACLRetrievalHardNegatives (ar) type: miracl/mmteb-miracl-hardnegatives config: ar split: dev revision: 95c8db7d4a6e9c1d8a60601afd63d553ae20a2eb metrics: - type: main_score value: 22.396 - type: map_at_1 value: 8.866 - type: map_at_3 value: 13.905 - type: map_at_5 value: 15.326 - type: map_at_10 value: 16.851 - type: ndcg_at_1 value: 13.9 - type: ndcg_at_3 value: 17.309 - type: ndcg_at_5 value: 19.174 - type: ndcg_at_10 value: 22.396 - type: recall_at_1 value: 8.866 - type: recall_at_3 value: 19.177 - type: recall_at_5 value: 23.999 - type: recall_at_10 value: 32.421 - type: precision_at_1 value: 13.9 - type: precision_at_3 value: 10.933 - type: precision_at_5 value: 8.5 - type: precision_at_10 value: 5.96 - type: mrr_at_1 value: 13.9 - type: mrr_at_3 value: 20.0667 - type: mrr_at_5 value: 21.3617 - type: mrr_at_10 value: 22.7531 - task: type: Retrieval dataset: name: MTEB MLQARetrieval (ar) type: 
mlqa/mmteb-mlqa config: ar split: validation revision: 397ed406c1a7902140303e7faf60fff35b58d285 metrics: - type: main_score value: 57.312 - type: map_at_1 value: 44.487 - type: map_at_3 value: 50.516 - type: map_at_5 value: 51.715 - type: map_at_10 value: 52.778 - type: ndcg_at_1 value: 44.487 - type: ndcg_at_3 value: 52.586 - type: ndcg_at_5 value: 54.742 - type: ndcg_at_10 value: 57.312 - type: recall_at_1 value: 44.487 - type: recall_at_3 value: 58.607 - type: recall_at_5 value: 63.83 - type: recall_at_10 value: 71.76 - type: precision_at_1 value: 44.487 - type: precision_at_3 value: 19.536 - type: precision_at_5 value: 12.766 - type: precision_at_10 value: 7.176 - type: mrr_at_1 value: 44.4874 - type: mrr_at_3 value: 50.5158 - type: mrr_at_5 value: 51.715 - type: mrr_at_10 value: 52.7782 - task: type: Retrieval dataset: name: MTEB SadeemQuestionRetrieval (ar) type: sadeem/mmteb-sadeem config: default split: test revision: 3cb0752b182e5d5d740df547748b06663c8e0bd9 metrics: - type: main_score value: 52.976 - type: map_at_1 value: 22.307 - type: map_at_3 value: 41.727 - type: map_at_5 value: 43.052 - type: map_at_10 value: 43.844 - type: ndcg_at_1 value: 22.307 - type: ndcg_at_3 value: 48.7 - type: ndcg_at_5 value: 51.057 - type: ndcg_at_10 value: 52.976 - type: recall_at_1 value: 22.307 - type: recall_at_3 value: 69.076 - type: recall_at_5 value: 74.725 - type: recall_at_10 value: 80.661 - type: precision_at_1 value: 22.307 - type: precision_at_3 value: 23.025 - type: precision_at_5 value: 14.945 - type: precision_at_10 value: 8.066 - type: mrr_at_1 value: 21.0148 - type: mrr_at_3 value: 40.8808 - type: mrr_at_5 value: 42.1254 - type: mrr_at_10 value: 42.9125 - task: type: STS dataset: name: MTEB BIOSSES (default) type: mteb/biosses-sts config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cosine_pearson value: 72.5081840952171 - type: cosine_spearman value: 69.41362982941537 - type: euclidean_pearson value: 
67.45121490183709 - type: euclidean_spearman value: 67.15273493989758 - type: main_score value: 69.41362982941537 - type: manhattan_pearson value: 67.6119022794479 - type: manhattan_spearman value: 67.51659865246586 - task: type: STS dataset: name: MTEB SICK-R (default) type: mteb/sickr-sts config: default split: test revision: 20a6d6f312dd54037fe07a32d58e5e168867909d metrics: - type: cosine_pearson value: 83.61591268324493 - type: cosine_spearman value: 79.61914245705792 - type: euclidean_pearson value: 81.32044881859483 - type: euclidean_spearman value: 79.04866675279919 - type: main_score value: 79.61914245705792 - type: manhattan_pearson value: 81.09220518201322 - type: manhattan_spearman value: 78.87590523907905 - task: type: STS dataset: name: MTEB STS12 (default) type: mteb/sts12-sts config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cosine_pearson value: 84.59807803376341 - type: cosine_spearman value: 77.38689922564416 - type: euclidean_pearson value: 83.92034850646732 - type: euclidean_spearman value: 76.75857193093438 - type: main_score value: 77.38689922564416 - type: manhattan_pearson value: 83.97191863964667 - type: manhattan_spearman value: 76.89790070725708 - task: type: STS dataset: name: MTEB STS13 (default) type: mteb/sts13-sts config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cosine_pearson value: 78.18664268536664 - type: cosine_spearman value: 79.58989311630421 - type: euclidean_pearson value: 79.25259731614729 - type: euclidean_spearman value: 80.1701122827397 - type: main_score value: 79.58989311630421 - type: manhattan_pearson value: 79.12601451996869 - type: manhattan_spearman value: 79.98999436073663 - task: type: STS dataset: name: MTEB STS14 (default) type: mteb/sts14-sts config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cosine_pearson value: 80.97541876658141 - type: cosine_spearman value: 
79.78614320477877 - type: euclidean_pearson value: 81.01514505747167 - type: euclidean_spearman value: 80.73664735567839 - type: main_score value: 79.78614320477877 - type: manhattan_pearson value: 80.8746560526314 - type: manhattan_spearman value: 80.67025673179079 - task: type: STS dataset: name: MTEB STS15 (default) type: mteb/sts15-sts config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cosine_pearson value: 85.23661155813113 - type: cosine_spearman value: 86.21134464371615 - type: euclidean_pearson value: 85.82518684522182 - type: euclidean_spearman value: 86.43600784349509 - type: main_score value: 86.21134464371615 - type: manhattan_pearson value: 85.83101152371589 - type: manhattan_spearman value: 86.42228695679498 - task: type: STS dataset: name: MTEB STS16 (default) type: mteb/sts16-sts config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cosine_pearson value: 79.20106689077852 - type: cosine_spearman value: 81.39570893867825 - type: euclidean_pearson value: 80.39578888768929 - type: euclidean_spearman value: 81.19950443340412 - type: main_score value: 81.39570893867825 - type: manhattan_pearson value: 80.2226679341839 - type: manhattan_spearman value: 80.99142422593823 - task: type: STS dataset: name: MTEB STS17 (ar-ar) type: mteb/sts17-crosslingual-sts config: ar-ar split: test revision: faeb762787bd10488a50c8b5be4a3b82e411949c metrics: - type: cosine_pearson value: 81.05294851623468 - type: cosine_spearman value: 81.10570655134113 - type: euclidean_pearson value: 79.22292773537778 - type: euclidean_spearman value: 78.84204232638425 - type: main_score value: 81.10570655134113 - type: manhattan_pearson value: 79.43750460320484 - type: manhattan_spearman value: 79.33713593557482 - task: type: STS dataset: name: MTEB STS22 (ar) type: mteb/sts22-crosslingual-sts config: ar split: test revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3 metrics: - type: cosine_pearson 
value: 45.96875498680092 - type: cosine_spearman value: 52.405509117149904 - type: euclidean_pearson value: 42.097450896728226 - type: euclidean_spearman value: 50.89022884113707 - type: main_score value: 52.405509117149904 - type: manhattan_pearson value: 42.22827727075534 - type: manhattan_spearman value: 50.912841055442634 - task: type: STS dataset: name: MTEB STSBenchmark (default) type: mteb/stsbenchmark-sts config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cosine_pearson value: 83.13261516884116 - type: cosine_spearman value: 84.3492527221498 - type: euclidean_pearson value: 82.691603178401 - type: euclidean_spearman value: 83.0499566200785 - type: main_score value: 84.3492527221498 - type: manhattan_pearson value: 82.68307441014618 - type: manhattan_spearman value: 83.01315787964519 - task: type: Summarization dataset: name: MTEB SummEval (default) type: mteb/summeval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cosine_pearson value: 31.149232235402845 - type: cosine_spearman value: 30.685504130606255 - type: dot_pearson value: 27.466307571160375 - type: dot_spearman value: 28.93064261485915 - type: main_score value: 30.685504130606255 - type: pearson value: 31.149232235402845 - type: spearman value: 30.685504130606255 - task: type: semantic-similarity name: Semantic Similarity dataset: name: sts test 256 type: sts-test-256 metrics: - type: pearson_cosine value: 0.8264447022356382 name: Pearson Cosine - type: spearman_cosine value: 0.8386403752382455 name: Spearman Cosine - type: pearson_manhattan value: 0.8219134931449013 name: Pearson Manhattan - type: spearman_manhattan value: 0.825509659109493 name: Spearman Manhattan - type: pearson_euclidean value: 0.8223094468630248 name: Pearson Euclidean - type: spearman_euclidean value: 0.8260503151751462 name: Spearman Euclidean - type: pearson_dot value: 0.6375226884845725 name: Pearson Dot - type: spearman_dot value: 
0.6287228614640888 name: Spearman Dot - type: pearson_max value: 0.8264447022356382 name: Pearson Max - type: spearman_max value: 0.8386403752382455 name: Spearman Max - task: type: semantic-similarity name: Semantic Similarity dataset: name: sts test 128 type: sts-test-128 metrics: - type: pearson_cosine value: 0.8209661910768973 name: Pearson Cosine - type: spearman_cosine value: 0.8347149482673766 name: Spearman Cosine - type: pearson_manhattan value: 0.8082811559854036 name: Pearson Manhattan - type: spearman_manhattan value: 0.8148314269262763 name: Spearman Manhattan - type: pearson_euclidean value: 0.8093138512113149 name: Pearson Euclidean - type: spearman_euclidean value: 0.8156468458613929 name: Spearman Euclidean - type: pearson_dot value: 0.5795109620454884 name: Pearson Dot - type: spearman_dot value: 0.5760223026552876 name: Spearman Dot - type: pearson_max value: 0.8209661910768973 name: Pearson Max - type: spearman_max value: 0.8347149482673766 name: Spearman Max - task: type: semantic-similarity name: Semantic Similarity dataset: name: sts test 64 type: sts-test-64 metrics: - type: pearson_cosine value: 0.808708530451336 name: Pearson Cosine - type: spearman_cosine value: 0.8217532539767914 name: Spearman Cosine - type: pearson_manhattan value: 0.7876121380998453 name: Pearson Manhattan - type: spearman_manhattan value: 0.7969092304137347 name: Spearman Manhattan - type: pearson_euclidean value: 0.7902997966909958 name: Pearson Euclidean - type: spearman_euclidean value: 0.7987635968785215 name: Spearman Euclidean - type: pearson_dot value: 0.495047136234386 name: Pearson Dot - type: spearman_dot value: 0.49287000679901516 name: Spearman Dot - type: pearson_max value: 0.808708530451336 name: Pearson Max - type: spearman_max value: 0.8217532539767914 name: Spearman Max --- # SentenceTransformer based on sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2 This is a [sentence-transformers](https://www.SBERT.net) model finetuned from 
[sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) on the Omartificial-Intelligence-Space/arabic-n_li-triplet dataset. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

## Model Details

### Model Description

- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) <!-- at revision bf3bf13ab40c3157080a7ab344c831b9ad18b5eb -->
- **Maximum Sequence Length:** 128 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
  - Omartificial-Intelligence-Space/arabic-n_li-triplet
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.
```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("Omartificial-Intelligence-Space/MiniLM-L12-v2-all-nli-triplet") # Run inference sentences = [ 'يجلس شاب ذو شعر أشقر على الحائط يقرأ جريدة بينما تمر امرأة وفتاة شابة.', 'ذكر شاب ينظر إلى جريدة بينما تمر إمرأتان بجانبه', 'الشاب نائم بينما الأم تقود ابنتها إلى الحديقة', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 384] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Semantic Similarity * Dataset: `sts-test-256` * Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator) | Metric | Value | |:--------------------|:-----------| | pearson_cosine | 0.8264 | | **spearman_cosine** | **0.8386** | | pearson_manhattan | 0.8219 | | spearman_manhattan | 0.8255 | | pearson_euclidean | 0.8223 | | spearman_euclidean | 0.8261 | | pearson_dot | 0.6375 | | spearman_dot | 0.6287 | | pearson_max | 0.8264 | | spearman_max | 0.8386 | #### Semantic Similarity * Dataset: `sts-test-128` * Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator) | Metric | Value | |:--------------------|:-----------| | pearson_cosine | 0.821 | | 
**spearman_cosine** | **0.8347** | | pearson_manhattan | 0.8083 | | spearman_manhattan | 0.8148 | | pearson_euclidean | 0.8093 | | spearman_euclidean | 0.8156 | | pearson_dot | 0.5795 | | spearman_dot | 0.576 | | pearson_max | 0.821 | | spearman_max | 0.8347 | #### Semantic Similarity * Dataset: `sts-test-64` * Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator) | Metric | Value | |:--------------------|:-----------| | pearson_cosine | 0.8087 | | **spearman_cosine** | **0.8218** | | pearson_manhattan | 0.7876 | | spearman_manhattan | 0.7969 | | pearson_euclidean | 0.7903 | | spearman_euclidean | 0.7988 | | pearson_dot | 0.495 | | spearman_dot | 0.4929 | | pearson_max | 0.8087 | | spearman_max | 0.8218 | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? 
For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Omartificial-Intelligence-Space/arabic-n_li-triplet * Dataset: Omartificial-Intelligence-Space/arabic-n_li-triplet * Size: 557,850 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 5 tokens</li><li>mean: 10.33 tokens</li><li>max: 52 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 13.21 tokens</li><li>max: 49 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 15.32 tokens</li><li>max: 53 tokens</li></ul> | * Samples: | anchor | positive | negative | |:------------------------------------------------------------|:--------------------------------------------|:------------------------------------| | <code>شخص على حصان يقفز فوق طائرة معطلة</code> | <code>شخص في الهواء الطلق، على حصان.</code> | <code>شخص في مطعم، يطلب عجة.</code> | | <code>أطفال يبتسمون و يلوحون للكاميرا</code> | <code>هناك أطفال حاضرون</code> | <code>الاطفال يتجهمون</code> | | <code>صبي يقفز على لوح التزلج في منتصف الجسر الأحمر.</code> | <code>الفتى يقوم بخدعة التزلج</code> | <code>الصبي يتزلج على الرصيف</code> | * Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters: ```json { "loss": "MultipleNegativesRankingLoss", "matryoshka_dims": [ 256, 128, 64 ], "matryoshka_weights": [ 1, 1, 1 ], "n_dims_per_step": -1 } ``` ### Evaluation Dataset #### Omartificial-Intelligence-Space/arabic-n_li-triplet * Dataset: 
Omartificial-Intelligence-Space/arabic-n_li-triplet * Size: 6,584 evaluation samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 5 tokens</li><li>mean: 21.86 tokens</li><li>max: 105 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 10.22 tokens</li><li>max: 49 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 11.2 tokens</li><li>max: 33 tokens</li></ul> | * Samples: | anchor | positive | negative | |:-----------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------|:---------------------------------------------------| | <code>امرأتان يتعانقان بينما يحملان حزمة</code> | <code>إمرأتان يحملان حزمة</code> | <code>الرجال يتشاجرون خارج مطعم</code> | | <code>طفلين صغيرين يرتديان قميصاً أزرق، أحدهما يرتدي الرقم 9 والآخر يرتدي الرقم 2 يقفان على خطوات خشبية في الحمام ويغسلان أيديهما في المغسلة.</code> | <code>طفلين يرتديان قميصاً مرقماً يغسلون أيديهم</code> | <code>طفلين يرتديان سترة يذهبان إلى المدرسة</code> | | <code>رجل يبيع الدونات لعميل خلال معرض عالمي أقيم في مدينة أنجليس</code> | <code>رجل يبيع الدونات لعميل</code> | <code>امرأة تشرب قهوتها في مقهى صغير</code> | * Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters: ```json { "loss": "MultipleNegativesRankingLoss", "matryoshka_dims": [ 256, 128, 64 ], "matryoshka_weights": [ 1, 1, 1 ], "n_dims_per_step": -1 } ``` ### Training 
Hyperparameters #### Non-Default Hyperparameters - `per_device_train_batch_size`: 64 - `per_device_eval_batch_size`: 64 - `num_train_epochs`: 1 - `warmup_ratio`: 0.1 - `fp16`: True - `batch_sampler`: no_duplicates #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `prediction_loss_only`: True - `per_device_train_batch_size`: 64 - `per_device_eval_batch_size`: 64 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `learning_rate`: 5e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 1 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: True - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 
'use_seedable_sampler': True, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | sts-test-128_spearman_cosine | sts-test-256_spearman_cosine | sts-test-64_spearman_cosine | |:------:|:----:|:-------------:|:----------------------------:|:----------------------------:|:---------------------------:| | 0.0229 | 200 | 6.2204 | - | - | - | | 0.0459 | 400 | 4.9559 | - | - | - | | 0.0688 | 600 | 4.7835 | - | - | - | | 0.0918 | 800 | 4.2725 | - | - | - | | 0.1147 | 1000 | 4.291 | - | - | - | | 0.1377 | 1200 | 4.0704 | - | - | - | | 0.1606 | 1400 | 3.7962 | - | - | - | | 0.1835 | 1600 | 3.7447 | - | - | - | | 0.2065 | 1800 | 3.569 | - | - | - 
| | 0.2294 | 2000 | 3.5373 | - | - | - | | 0.2524 | 2200 | 3.608 | - | - | - | | 0.2753 | 2400 | 3.5609 | - | - | - | | 0.2983 | 2600 | 3.5231 | - | - | - | | 0.3212 | 2800 | 3.3312 | - | - | - | | 0.3442 | 3000 | 3.4803 | - | - | - | | 0.3671 | 3200 | 3.3552 | - | - | - | | 0.3900 | 3400 | 3.3024 | - | - | - | | 0.4130 | 3600 | 3.2559 | - | - | - | | 0.4359 | 3800 | 3.1882 | - | - | - | | 0.4589 | 4000 | 3.227 | - | - | - | | 0.4818 | 4200 | 3.0889 | - | - | - | | 0.5048 | 4400 | 3.0861 | - | - | - | | 0.5277 | 4600 | 3.0178 | - | - | - | | 0.5506 | 4800 | 3.231 | - | - | - | | 0.5736 | 5000 | 3.1593 | - | - | - | | 0.5965 | 5200 | 3.1101 | - | - | - | | 0.6195 | 5400 | 3.1307 | - | - | - | | 0.6424 | 5600 | 3.1265 | - | - | - | | 0.6654 | 5800 | 3.1116 | - | - | - | | 0.6883 | 6000 | 3.1417 | - | - | - | | 0.7113 | 6200 | 3.0862 | - | - | - | | 0.7342 | 6400 | 2.9652 | - | - | - | | 0.7571 | 6600 | 2.8466 | - | - | - | | 0.7801 | 6800 | 2.271 | - | - | - | | 0.8030 | 7000 | 2.046 | - | - | - | | 0.8260 | 7200 | 1.9634 | - | - | - | | 0.8489 | 7400 | 1.8875 | - | - | - | | 0.8719 | 7600 | 1.7655 | - | - | - | | 0.8948 | 7800 | 1.6874 | - | - | - | | 0.9177 | 8000 | 1.7315 | - | - | - | | 0.9407 | 8200 | 1.6674 | - | - | - | | 0.9636 | 8400 | 1.6574 | - | - | - | | 0.9866 | 8600 | 1.6142 | - | - | - | | 1.0 | 8717 | - | 0.8347 | 0.8386 | 0.8218 | ### Framework Versions - Python: 3.9.18 - Sentence Transformers: 3.0.1 - Transformers: 4.40.0 - PyTorch: 2.2.2+cu121 - Accelerate: 0.26.1 - Datasets: 2.19.0 - Tokenizers: 0.19.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", 
} ``` #### MatryoshkaLoss ```bibtex @misc{kusupati2024matryoshka, title={Matryoshka Representation Learning}, author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi}, year={2024}, eprint={2205.13147}, archivePrefix={arXiv}, primaryClass={cs.LG} } ``` #### MultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ## <span style="color:blue">Acknowledgments</span> The author would like to thank Prince Sultan University for their invaluable support in this project. Their contributions and resources have been instrumental in the development and fine-tuning of these models. ## Citation If you use the Arabic Matryoshka Embeddings Model, please cite it as follows: ```bibtex @software{nacar2024, author = {Omer Nacar}, title = {Arabic Matryoshka Embeddings Model - Arabic MiniLM L12 v2 All Nli Triplet}, year = 2024, url = {https://huggingface.co/Omartificial-Intelligence-Space/Arabic-MiniLM-L12-v2-all-nli-triplet}, version = {1.0.0}, } ```
[ "BIOSSES" ]
McGill-NLP/LLM2Vec-Mistral-7B-Instruct-v2-mntp-unsup-simcse
McGill-NLP
sentence-similarity
[ "peft", "safetensors", "text-embedding", "embeddings", "information-retrieval", "beir", "text-classification", "language-model", "text-clustering", "text-semantic-similarity", "text-evaluation", "text-reranking", "feature-extraction", "sentence-similarity", "Sentence Similarity", "natural_questions", "ms_marco", "fever", "hotpot_qa", "mteb", "en", "arxiv:2404.05961", "license:mit", "model-index", "region:us" ]
2024-04-04T03:06:33Z
2024-04-11T20:09:10+00:00
338
7
--- language: - en library_name: peft license: mit pipeline_tag: sentence-similarity tags: - text-embedding - embeddings - information-retrieval - beir - text-classification - language-model - text-clustering - text-semantic-similarity - text-evaluation - text-reranking - feature-extraction - sentence-similarity - Sentence Similarity - natural_questions - ms_marco - fever - hotpot_qa - mteb model-index: - name: LLM2Vec-Mistral-7B-unsupervised results: - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en) type: mteb/amazon_counterfactual config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 76.94029850746269 - type: ap value: 41.01055096636703 - type: f1 value: 71.2582580801963 - task: type: Classification dataset: name: MTEB AmazonPolarityClassification type: mteb/amazon_polarity config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 85.288275 - type: ap value: 80.9174293931393 - type: f1 value: 85.26284279319103 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (en) type: mteb/amazon_reviews_multi config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 47.089999999999996 - type: f1 value: 46.42571856588491 - task: type: Retrieval dataset: name: MTEB ArguAna type: arguana config: default split: test revision: None metrics: - type: map_at_1 value: 25.676 - type: map_at_10 value: 41.705999999999996 - type: map_at_100 value: 42.649 - type: map_at_1000 value: 42.655 - type: map_at_3 value: 36.214 - type: map_at_5 value: 39.475 - type: mrr_at_1 value: 26.173999999999996 - type: mrr_at_10 value: 41.873 - type: mrr_at_100 value: 42.817 - type: mrr_at_1000 value: 42.823 - type: mrr_at_3 value: 36.427 - type: mrr_at_5 value: 39.646 - type: ndcg_at_1 value: 25.676 - type: ndcg_at_10 value: 51.001 - type: ndcg_at_100 value: 55.001 - type: ndcg_at_1000 
value: 55.167 - type: ndcg_at_3 value: 39.713 - type: ndcg_at_5 value: 45.596 - type: precision_at_1 value: 25.676 - type: precision_at_10 value: 8.087 - type: precision_at_100 value: 0.983 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 16.619 - type: precision_at_5 value: 12.831000000000001 - type: recall_at_1 value: 25.676 - type: recall_at_10 value: 80.868 - type: recall_at_100 value: 98.29299999999999 - type: recall_at_1000 value: 99.57300000000001 - type: recall_at_3 value: 49.858000000000004 - type: recall_at_5 value: 64.154 - task: type: Clustering dataset: name: MTEB ArxivClusteringP2P type: mteb/arxiv-clustering-p2p config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 47.557333278165295 - task: type: Clustering dataset: name: MTEB ArxivClusteringS2S type: mteb/arxiv-clustering-s2s config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 39.921940994207674 - task: type: Reranking dataset: name: MTEB AskUbuntuDupQuestions type: mteb/askubuntudupquestions-reranking config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 58.602773795071585 - type: mrr value: 72.93749725190169 - task: type: STS dataset: name: MTEB BIOSSES type: mteb/biosses-sts config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_spearman value: 83.29045204631967 - task: type: Classification dataset: name: MTEB Banking77Classification type: mteb/banking77 config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 86.1590909090909 - type: f1 value: 86.08993054539444 - task: type: Clustering dataset: name: MTEB BiorxivClusteringP2P type: mteb/biorxiv-clustering-p2p config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 36.13784714320738 - task: type: 
Clustering dataset: name: MTEB BiorxivClusteringS2S type: mteb/biorxiv-clustering-s2s config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 30.26284987791574 - task: type: Retrieval dataset: name: MTEB CQADupstackAndroidRetrieval type: cqadupstack/android config: default split: test revision: None metrics: - type: map_at_1 value: 27.611 - type: map_at_10 value: 37.838 - type: map_at_100 value: 39.446999999999996 - type: map_at_1000 value: 39.583 - type: map_at_3 value: 34.563 - type: map_at_5 value: 36.332 - type: mrr_at_1 value: 35.765 - type: mrr_at_10 value: 44.614 - type: mrr_at_100 value: 45.501000000000005 - type: mrr_at_1000 value: 45.558 - type: mrr_at_3 value: 42.513 - type: mrr_at_5 value: 43.515 - type: ndcg_at_1 value: 35.765 - type: ndcg_at_10 value: 44.104 - type: ndcg_at_100 value: 50.05500000000001 - type: ndcg_at_1000 value: 52.190000000000005 - type: ndcg_at_3 value: 39.834 - type: ndcg_at_5 value: 41.491 - type: precision_at_1 value: 35.765 - type: precision_at_10 value: 8.870000000000001 - type: precision_at_100 value: 1.505 - type: precision_at_1000 value: 0.2 - type: precision_at_3 value: 19.886 - type: precision_at_5 value: 14.277999999999999 - type: recall_at_1 value: 27.611 - type: recall_at_10 value: 55.065 - type: recall_at_100 value: 80.60199999999999 - type: recall_at_1000 value: 94.517 - type: recall_at_3 value: 41.281 - type: recall_at_5 value: 46.791 - task: type: Retrieval dataset: name: MTEB CQADupstackEnglishRetrieval type: cqadupstack/english config: default split: test revision: None metrics: - type: map_at_1 value: 28.599999999999998 - type: map_at_10 value: 38.218999999999994 - type: map_at_100 value: 39.336 - type: map_at_1000 value: 39.464 - type: map_at_3 value: 35.599 - type: map_at_5 value: 36.927 - type: mrr_at_1 value: 37.197 - type: mrr_at_10 value: 44.759 - type: mrr_at_100 value: 45.372 - type: mrr_at_1000 value: 45.422000000000004 - type: mrr_at_3 value: 
42.941 - type: mrr_at_5 value: 43.906 - type: ndcg_at_1 value: 37.197 - type: ndcg_at_10 value: 43.689 - type: ndcg_at_100 value: 47.588 - type: ndcg_at_1000 value: 49.868 - type: ndcg_at_3 value: 40.434 - type: ndcg_at_5 value: 41.617 - type: precision_at_1 value: 37.197 - type: precision_at_10 value: 8.529 - type: precision_at_100 value: 1.325 - type: precision_at_1000 value: 0.181 - type: precision_at_3 value: 20.212 - type: precision_at_5 value: 13.987 - type: recall_at_1 value: 28.599999999999998 - type: recall_at_10 value: 52.266999999999996 - type: recall_at_100 value: 69.304 - type: recall_at_1000 value: 84.249 - type: recall_at_3 value: 41.281 - type: recall_at_5 value: 45.56 - task: type: Retrieval dataset: name: MTEB CQADupstackGamingRetrieval type: cqadupstack/gaming config: default split: test revision: None metrics: - type: map_at_1 value: 33.168 - type: map_at_10 value: 44.690999999999995 - type: map_at_100 value: 45.804 - type: map_at_1000 value: 45.876 - type: map_at_3 value: 41.385 - type: map_at_5 value: 43.375 - type: mrr_at_1 value: 38.997 - type: mrr_at_10 value: 48.782 - type: mrr_at_100 value: 49.534 - type: mrr_at_1000 value: 49.57 - type: mrr_at_3 value: 46.134 - type: mrr_at_5 value: 47.814 - type: ndcg_at_1 value: 38.997 - type: ndcg_at_10 value: 50.707 - type: ndcg_at_100 value: 55.358 - type: ndcg_at_1000 value: 56.818999999999996 - type: ndcg_at_3 value: 45.098 - type: ndcg_at_5 value: 48.065999999999995 - type: precision_at_1 value: 38.997 - type: precision_at_10 value: 8.414000000000001 - type: precision_at_100 value: 1.159 - type: precision_at_1000 value: 0.135 - type: precision_at_3 value: 20.564 - type: precision_at_5 value: 14.445 - type: recall_at_1 value: 33.168 - type: recall_at_10 value: 64.595 - type: recall_at_100 value: 85.167 - type: recall_at_1000 value: 95.485 - type: recall_at_3 value: 49.555 - type: recall_at_5 value: 56.871 - task: type: Retrieval dataset: name: MTEB CQADupstackGisRetrieval type: cqadupstack/gis 
config: default split: test revision: None metrics: - type: map_at_1 value: 17.254 - type: map_at_10 value: 23.925 - type: map_at_100 value: 25.008000000000003 - type: map_at_1000 value: 25.123 - type: map_at_3 value: 21.676000000000002 - type: map_at_5 value: 23.042 - type: mrr_at_1 value: 18.756999999999998 - type: mrr_at_10 value: 25.578 - type: mrr_at_100 value: 26.594 - type: mrr_at_1000 value: 26.680999999999997 - type: mrr_at_3 value: 23.371 - type: mrr_at_5 value: 24.721 - type: ndcg_at_1 value: 18.756999999999998 - type: ndcg_at_10 value: 27.878999999999998 - type: ndcg_at_100 value: 33.285 - type: ndcg_at_1000 value: 36.333 - type: ndcg_at_3 value: 23.461000000000002 - type: ndcg_at_5 value: 25.836 - type: precision_at_1 value: 18.756999999999998 - type: precision_at_10 value: 4.429 - type: precision_at_100 value: 0.754 - type: precision_at_1000 value: 0.106 - type: precision_at_3 value: 9.981 - type: precision_at_5 value: 7.412000000000001 - type: recall_at_1 value: 17.254 - type: recall_at_10 value: 38.42 - type: recall_at_100 value: 63.50900000000001 - type: recall_at_1000 value: 86.787 - type: recall_at_3 value: 26.840999999999998 - type: recall_at_5 value: 32.4 - task: type: Retrieval dataset: name: MTEB CQADupstackMathematicaRetrieval type: cqadupstack/mathematica config: default split: test revision: None metrics: - type: map_at_1 value: 10.495000000000001 - type: map_at_10 value: 16.505 - type: map_at_100 value: 17.59 - type: map_at_1000 value: 17.709 - type: map_at_3 value: 13.974 - type: map_at_5 value: 15.466 - type: mrr_at_1 value: 14.179 - type: mrr_at_10 value: 20.396 - type: mrr_at_100 value: 21.384 - type: mrr_at_1000 value: 21.47 - type: mrr_at_3 value: 17.828 - type: mrr_at_5 value: 19.387999999999998 - type: ndcg_at_1 value: 14.179 - type: ndcg_at_10 value: 20.852 - type: ndcg_at_100 value: 26.44 - type: ndcg_at_1000 value: 29.448999999999998 - type: ndcg_at_3 value: 16.181 - type: ndcg_at_5 value: 18.594 - type: precision_at_1 value: 
14.179 - type: precision_at_10 value: 4.229 - type: precision_at_100 value: 0.8170000000000001 - type: precision_at_1000 value: 0.12 - type: precision_at_3 value: 8.126 - type: precision_at_5 value: 6.493 - type: recall_at_1 value: 10.495000000000001 - type: recall_at_10 value: 30.531000000000002 - type: recall_at_100 value: 55.535999999999994 - type: recall_at_1000 value: 77.095 - type: recall_at_3 value: 17.805 - type: recall_at_5 value: 24.041 - task: type: Retrieval dataset: name: MTEB CQADupstackPhysicsRetrieval type: cqadupstack/physics config: default split: test revision: None metrics: - type: map_at_1 value: 24.826999999999998 - type: map_at_10 value: 34.957 - type: map_at_100 value: 36.314 - type: map_at_1000 value: 36.437999999999995 - type: map_at_3 value: 31.328 - type: map_at_5 value: 33.254 - type: mrr_at_1 value: 31.375999999999998 - type: mrr_at_10 value: 40.493 - type: mrr_at_100 value: 41.410000000000004 - type: mrr_at_1000 value: 41.46 - type: mrr_at_3 value: 37.504 - type: mrr_at_5 value: 39.212 - type: ndcg_at_1 value: 31.375999999999998 - type: ndcg_at_10 value: 41.285 - type: ndcg_at_100 value: 46.996 - type: ndcg_at_1000 value: 49.207 - type: ndcg_at_3 value: 35.297 - type: ndcg_at_5 value: 37.999 - type: precision_at_1 value: 31.375999999999998 - type: precision_at_10 value: 7.960000000000001 - type: precision_at_100 value: 1.277 - type: precision_at_1000 value: 0.165 - type: precision_at_3 value: 17.132 - type: precision_at_5 value: 12.57 - type: recall_at_1 value: 24.826999999999998 - type: recall_at_10 value: 54.678000000000004 - type: recall_at_100 value: 78.849 - type: recall_at_1000 value: 93.36 - type: recall_at_3 value: 37.775 - type: recall_at_5 value: 44.993 - task: type: Retrieval dataset: name: MTEB CQADupstackProgrammersRetrieval type: cqadupstack/programmers config: default split: test revision: None metrics: - type: map_at_1 value: 21.195 - type: map_at_10 value: 29.003 - type: map_at_100 value: 30.379 - type: map_at_1000 
value: 30.508000000000003 - type: map_at_3 value: 25.927 - type: map_at_5 value: 27.784 - type: mrr_at_1 value: 26.941 - type: mrr_at_10 value: 34.305 - type: mrr_at_100 value: 35.32 - type: mrr_at_1000 value: 35.386 - type: mrr_at_3 value: 31.735000000000003 - type: mrr_at_5 value: 33.213 - type: ndcg_at_1 value: 26.941 - type: ndcg_at_10 value: 34.31 - type: ndcg_at_100 value: 40.242 - type: ndcg_at_1000 value: 42.9 - type: ndcg_at_3 value: 29.198 - type: ndcg_at_5 value: 31.739 - type: precision_at_1 value: 26.941 - type: precision_at_10 value: 6.507000000000001 - type: precision_at_100 value: 1.124 - type: precision_at_1000 value: 0.154 - type: precision_at_3 value: 13.850999999999999 - type: precision_at_5 value: 10.411 - type: recall_at_1 value: 21.195 - type: recall_at_10 value: 45.083 - type: recall_at_100 value: 70.14200000000001 - type: recall_at_1000 value: 88.34100000000001 - type: recall_at_3 value: 31.175000000000004 - type: recall_at_5 value: 37.625 - task: type: Retrieval dataset: name: MTEB CQADupstackRetrieval type: mteb/cqadupstack config: default split: test revision: None metrics: - type: map_at_1 value: 20.293916666666664 - type: map_at_10 value: 28.353666666666665 - type: map_at_100 value: 29.524333333333335 - type: map_at_1000 value: 29.652583333333332 - type: map_at_3 value: 25.727916666666665 - type: map_at_5 value: 27.170833333333334 - type: mrr_at_1 value: 25.21375 - type: mrr_at_10 value: 32.67591666666667 - type: mrr_at_100 value: 33.56233333333334 - type: mrr_at_1000 value: 33.63283333333334 - type: mrr_at_3 value: 30.415333333333333 - type: mrr_at_5 value: 31.679583333333333 - type: ndcg_at_1 value: 25.21375 - type: ndcg_at_10 value: 33.37108333333333 - type: ndcg_at_100 value: 38.57725 - type: ndcg_at_1000 value: 41.246833333333335 - type: ndcg_at_3 value: 28.98183333333334 - type: ndcg_at_5 value: 30.986083333333337 - type: precision_at_1 value: 25.21375 - type: precision_at_10 value: 6.200583333333333 - type: precision_at_100 
value: 1.0527499999999999 - type: precision_at_1000 value: 0.14675000000000002 - type: precision_at_3 value: 13.808333333333334 - type: precision_at_5 value: 9.976416666666669 - type: recall_at_1 value: 20.293916666666664 - type: recall_at_10 value: 43.90833333333333 - type: recall_at_100 value: 67.26575 - type: recall_at_1000 value: 86.18591666666666 - type: recall_at_3 value: 31.387416666666667 - type: recall_at_5 value: 36.73883333333333 - task: type: Retrieval dataset: name: MTEB CQADupstackStatsRetrieval type: cqadupstack/stats config: default split: test revision: None metrics: - type: map_at_1 value: 15.043000000000001 - type: map_at_10 value: 22.203 - type: map_at_100 value: 23.254 - type: map_at_1000 value: 23.362 - type: map_at_3 value: 20.157 - type: map_at_5 value: 21.201999999999998 - type: mrr_at_1 value: 17.485 - type: mrr_at_10 value: 24.729 - type: mrr_at_100 value: 25.715 - type: mrr_at_1000 value: 25.796999999999997 - type: mrr_at_3 value: 22.725 - type: mrr_at_5 value: 23.829 - type: ndcg_at_1 value: 17.485 - type: ndcg_at_10 value: 26.31 - type: ndcg_at_100 value: 31.722 - type: ndcg_at_1000 value: 34.621 - type: ndcg_at_3 value: 22.414 - type: ndcg_at_5 value: 24.125 - type: precision_at_1 value: 17.485 - type: precision_at_10 value: 4.601 - type: precision_at_100 value: 0.7849999999999999 - type: precision_at_1000 value: 0.11100000000000002 - type: precision_at_3 value: 10.327 - type: precision_at_5 value: 7.331 - type: recall_at_1 value: 15.043000000000001 - type: recall_at_10 value: 36.361 - type: recall_at_100 value: 61.63999999999999 - type: recall_at_1000 value: 83.443 - type: recall_at_3 value: 25.591 - type: recall_at_5 value: 29.808 - task: type: Retrieval dataset: name: MTEB CQADupstackTexRetrieval type: cqadupstack/tex config: default split: test revision: None metrics: - type: map_at_1 value: 11.018 - type: map_at_10 value: 15.886 - type: map_at_100 value: 16.830000000000002 - type: map_at_1000 value: 16.956 - type: map_at_3 value: 
14.222000000000001 - type: map_at_5 value: 15.110999999999999 - type: mrr_at_1 value: 14.625 - type: mrr_at_10 value: 19.677 - type: mrr_at_100 value: 20.532 - type: mrr_at_1000 value: 20.622 - type: mrr_at_3 value: 17.992 - type: mrr_at_5 value: 18.909000000000002 - type: ndcg_at_1 value: 14.625 - type: ndcg_at_10 value: 19.414 - type: ndcg_at_100 value: 24.152 - type: ndcg_at_1000 value: 27.433000000000003 - type: ndcg_at_3 value: 16.495 - type: ndcg_at_5 value: 17.742 - type: precision_at_1 value: 14.625 - type: precision_at_10 value: 3.833 - type: precision_at_100 value: 0.744 - type: precision_at_1000 value: 0.11900000000000001 - type: precision_at_3 value: 8.213 - type: precision_at_5 value: 6.036 - type: recall_at_1 value: 11.018 - type: recall_at_10 value: 26.346000000000004 - type: recall_at_100 value: 47.99 - type: recall_at_1000 value: 72.002 - type: recall_at_3 value: 17.762 - type: recall_at_5 value: 21.249000000000002 - task: type: Retrieval dataset: name: MTEB CQADupstackUnixRetrieval type: cqadupstack/unix config: default split: test revision: None metrics: - type: map_at_1 value: 20.053 - type: map_at_10 value: 27.950000000000003 - type: map_at_100 value: 29.207 - type: map_at_1000 value: 29.309 - type: map_at_3 value: 25.612000000000002 - type: map_at_5 value: 26.793 - type: mrr_at_1 value: 24.813 - type: mrr_at_10 value: 32.297 - type: mrr_at_100 value: 33.312999999999995 - type: mrr_at_1000 value: 33.379999999999995 - type: mrr_at_3 value: 30.239 - type: mrr_at_5 value: 31.368000000000002 - type: ndcg_at_1 value: 24.813 - type: ndcg_at_10 value: 32.722 - type: ndcg_at_100 value: 38.603 - type: ndcg_at_1000 value: 41.11 - type: ndcg_at_3 value: 28.74 - type: ndcg_at_5 value: 30.341 - type: precision_at_1 value: 24.813 - type: precision_at_10 value: 5.83 - type: precision_at_100 value: 0.9860000000000001 - type: precision_at_1000 value: 0.13 - type: precision_at_3 value: 13.433 - type: precision_at_5 value: 9.384 - type: recall_at_1 value: 20.053 
- type: recall_at_10 value: 42.867 - type: recall_at_100 value: 68.90899999999999 - type: recall_at_1000 value: 87.031 - type: recall_at_3 value: 31.606 - type: recall_at_5 value: 35.988 - task: type: Retrieval dataset: name: MTEB CQADupstackWebmastersRetrieval type: cqadupstack/webmasters config: default split: test revision: None metrics: - type: map_at_1 value: 20.696 - type: map_at_10 value: 29.741 - type: map_at_100 value: 30.958999999999996 - type: map_at_1000 value: 31.22 - type: map_at_3 value: 26.679000000000002 - type: map_at_5 value: 28.244999999999997 - type: mrr_at_1 value: 27.272999999999996 - type: mrr_at_10 value: 35.101 - type: mrr_at_100 value: 35.91 - type: mrr_at_1000 value: 35.987 - type: mrr_at_3 value: 32.378 - type: mrr_at_5 value: 33.732 - type: ndcg_at_1 value: 27.272999999999996 - type: ndcg_at_10 value: 36.136 - type: ndcg_at_100 value: 40.9 - type: ndcg_at_1000 value: 44.184 - type: ndcg_at_3 value: 31.123 - type: ndcg_at_5 value: 33.182 - type: precision_at_1 value: 27.272999999999996 - type: precision_at_10 value: 7.489999999999999 - type: precision_at_100 value: 1.506 - type: precision_at_1000 value: 0.24 - type: precision_at_3 value: 15.348999999999998 - type: precision_at_5 value: 11.344 - type: recall_at_1 value: 20.696 - type: recall_at_10 value: 48.041 - type: recall_at_100 value: 71.316 - type: recall_at_1000 value: 92.794 - type: recall_at_3 value: 32.983000000000004 - type: recall_at_5 value: 38.627 - task: type: Retrieval dataset: name: MTEB CQADupstackWordpressRetrieval type: cqadupstack/wordpress config: default split: test revision: None metrics: - type: map_at_1 value: 13.567000000000002 - type: map_at_10 value: 19.326 - type: map_at_100 value: 20.164 - type: map_at_1000 value: 20.283 - type: map_at_3 value: 17.613 - type: map_at_5 value: 18.519 - type: mrr_at_1 value: 15.157000000000002 - type: mrr_at_10 value: 21.38 - type: mrr_at_100 value: 22.163 - type: mrr_at_1000 value: 22.261 - type: mrr_at_3 value: 19.624 - 
type: mrr_at_5 value: 20.548 - type: ndcg_at_1 value: 15.157000000000002 - type: ndcg_at_10 value: 23.044999999999998 - type: ndcg_at_100 value: 27.586 - type: ndcg_at_1000 value: 30.848 - type: ndcg_at_3 value: 19.506999999999998 - type: ndcg_at_5 value: 21.101 - type: precision_at_1 value: 15.157000000000002 - type: precision_at_10 value: 3.7150000000000003 - type: precision_at_100 value: 0.651 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 8.626000000000001 - type: precision_at_5 value: 6.026 - type: recall_at_1 value: 13.567000000000002 - type: recall_at_10 value: 32.646 - type: recall_at_100 value: 54.225 - type: recall_at_1000 value: 79.12700000000001 - type: recall_at_3 value: 22.994 - type: recall_at_5 value: 26.912999999999997 - task: type: Retrieval dataset: name: MTEB ClimateFEVER type: climate-fever config: default split: test revision: None metrics: - type: map_at_1 value: 7.26 - type: map_at_10 value: 15.109 - type: map_at_100 value: 17.155 - type: map_at_1000 value: 17.354 - type: map_at_3 value: 11.772 - type: map_at_5 value: 13.542000000000002 - type: mrr_at_1 value: 16.678 - type: mrr_at_10 value: 29.470000000000002 - type: mrr_at_100 value: 30.676 - type: mrr_at_1000 value: 30.714999999999996 - type: mrr_at_3 value: 25.44 - type: mrr_at_5 value: 27.792 - type: ndcg_at_1 value: 16.678 - type: ndcg_at_10 value: 22.967000000000002 - type: ndcg_at_100 value: 31.253999999999998 - type: ndcg_at_1000 value: 34.748000000000005 - type: ndcg_at_3 value: 17.058 - type: ndcg_at_5 value: 19.43 - type: precision_at_1 value: 16.678 - type: precision_at_10 value: 7.974 - type: precision_at_100 value: 1.6740000000000002 - type: precision_at_1000 value: 0.232 - type: precision_at_3 value: 13.681 - type: precision_at_5 value: 11.322000000000001 - type: recall_at_1 value: 7.26 - type: recall_at_10 value: 30.407 - type: recall_at_100 value: 59.073 - type: recall_at_1000 value: 78.58800000000001 - type: recall_at_3 value: 16.493 - type: recall_at_5 
value: 22.453 - task: type: Retrieval dataset: name: MTEB DBPedia type: dbpedia-entity config: default split: test revision: None metrics: - type: map_at_1 value: 5.176 - type: map_at_10 value: 11.951 - type: map_at_100 value: 16.208 - type: map_at_1000 value: 17.067 - type: map_at_3 value: 8.669 - type: map_at_5 value: 10.061 - type: mrr_at_1 value: 42.5 - type: mrr_at_10 value: 54.312000000000005 - type: mrr_at_100 value: 54.925999999999995 - type: mrr_at_1000 value: 54.959 - type: mrr_at_3 value: 52.292 - type: mrr_at_5 value: 53.554 - type: ndcg_at_1 value: 31.374999999999996 - type: ndcg_at_10 value: 25.480999999999998 - type: ndcg_at_100 value: 30.018 - type: ndcg_at_1000 value: 36.103 - type: ndcg_at_3 value: 27.712999999999997 - type: ndcg_at_5 value: 26.415 - type: precision_at_1 value: 42.5 - type: precision_at_10 value: 20.549999999999997 - type: precision_at_100 value: 6.387 - type: precision_at_1000 value: 1.204 - type: precision_at_3 value: 32.917 - type: precision_at_5 value: 27.400000000000002 - type: recall_at_1 value: 5.176 - type: recall_at_10 value: 18.335 - type: recall_at_100 value: 38.629999999999995 - type: recall_at_1000 value: 59.74699999999999 - type: recall_at_3 value: 10.36 - type: recall_at_5 value: 13.413 - task: type: Classification dataset: name: MTEB EmotionClassification type: mteb/emotion config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 48.885 - type: f1 value: 44.330258440550644 - task: type: Retrieval dataset: name: MTEB FEVER type: fever config: default split: test revision: None metrics: - type: map_at_1 value: 25.211 - type: map_at_10 value: 37.946999999999996 - type: map_at_100 value: 38.852 - type: map_at_1000 value: 38.896 - type: map_at_3 value: 34.445 - type: map_at_5 value: 36.451 - type: mrr_at_1 value: 27.453 - type: mrr_at_10 value: 40.505 - type: mrr_at_100 value: 41.342 - type: mrr_at_1000 value: 41.377 - type: mrr_at_3 value: 36.971 - type: mrr_at_5 
value: 39.013999999999996 - type: ndcg_at_1 value: 27.453 - type: ndcg_at_10 value: 45.106 - type: ndcg_at_100 value: 49.357 - type: ndcg_at_1000 value: 50.546 - type: ndcg_at_3 value: 38.063 - type: ndcg_at_5 value: 41.603 - type: precision_at_1 value: 27.453 - type: precision_at_10 value: 7.136000000000001 - type: precision_at_100 value: 0.9390000000000001 - type: precision_at_1000 value: 0.106 - type: precision_at_3 value: 16.677 - type: precision_at_5 value: 11.899 - type: recall_at_1 value: 25.211 - type: recall_at_10 value: 64.964 - type: recall_at_100 value: 84.23 - type: recall_at_1000 value: 93.307 - type: recall_at_3 value: 45.936 - type: recall_at_5 value: 54.489 - task: type: Retrieval dataset: name: MTEB FiQA2018 type: fiqa config: default split: test revision: None metrics: - type: map_at_1 value: 11.434 - type: map_at_10 value: 20.325 - type: map_at_100 value: 22.267 - type: map_at_1000 value: 22.46 - type: map_at_3 value: 16.864 - type: map_at_5 value: 18.584999999999997 - type: mrr_at_1 value: 24.074 - type: mrr_at_10 value: 32.487 - type: mrr_at_100 value: 33.595000000000006 - type: mrr_at_1000 value: 33.649 - type: mrr_at_3 value: 29.578 - type: mrr_at_5 value: 31.044 - type: ndcg_at_1 value: 24.074 - type: ndcg_at_10 value: 27.244 - type: ndcg_at_100 value: 35.244 - type: ndcg_at_1000 value: 38.964999999999996 - type: ndcg_at_3 value: 22.709 - type: ndcg_at_5 value: 24.114 - type: precision_at_1 value: 24.074 - type: precision_at_10 value: 8.21 - type: precision_at_100 value: 1.627 - type: precision_at_1000 value: 0.22999999999999998 - type: precision_at_3 value: 15.741 - type: precision_at_5 value: 12.037 - type: recall_at_1 value: 11.434 - type: recall_at_10 value: 35.423 - type: recall_at_100 value: 66.056 - type: recall_at_1000 value: 88.63799999999999 - type: recall_at_3 value: 20.968 - type: recall_at_5 value: 26.540999999999997 - task: type: Retrieval dataset: name: MTEB HotpotQA type: hotpotqa config: default split: test revision: None 
metrics: - type: map_at_1 value: 30.506 - type: map_at_10 value: 44.864 - type: map_at_100 value: 46.016 - type: map_at_1000 value: 46.1 - type: map_at_3 value: 41.518 - type: map_at_5 value: 43.461 - type: mrr_at_1 value: 61.013 - type: mrr_at_10 value: 69.918 - type: mrr_at_100 value: 70.327 - type: mrr_at_1000 value: 70.342 - type: mrr_at_3 value: 68.226 - type: mrr_at_5 value: 69.273 - type: ndcg_at_1 value: 61.013 - type: ndcg_at_10 value: 54.539 - type: ndcg_at_100 value: 58.819 - type: ndcg_at_1000 value: 60.473 - type: ndcg_at_3 value: 49.27 - type: ndcg_at_5 value: 51.993 - type: precision_at_1 value: 61.013 - type: precision_at_10 value: 11.757 - type: precision_at_100 value: 1.5110000000000001 - type: precision_at_1000 value: 0.173 - type: precision_at_3 value: 31.339 - type: precision_at_5 value: 20.959 - type: recall_at_1 value: 30.506 - type: recall_at_10 value: 58.785 - type: recall_at_100 value: 75.55 - type: recall_at_1000 value: 86.455 - type: recall_at_3 value: 47.009 - type: recall_at_5 value: 52.397000000000006 - task: type: Classification dataset: name: MTEB ImdbClassification type: mteb/imdb config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 77.954 - type: ap value: 73.06067313842448 - type: f1 value: 77.6469083443121 - task: type: Retrieval dataset: name: MTEB MSMARCO type: msmarco config: default split: dev revision: None metrics: - type: map_at_1 value: 7.7170000000000005 - type: map_at_10 value: 14.696000000000002 - type: map_at_100 value: 15.973 - type: map_at_1000 value: 16.079 - type: map_at_3 value: 12.059000000000001 - type: map_at_5 value: 13.478000000000002 - type: mrr_at_1 value: 7.9079999999999995 - type: mrr_at_10 value: 14.972 - type: mrr_at_100 value: 16.235 - type: mrr_at_1000 value: 16.337 - type: mrr_at_3 value: 12.323 - type: mrr_at_5 value: 13.751 - type: ndcg_at_1 value: 7.9079999999999995 - type: ndcg_at_10 value: 19.131 - type: ndcg_at_100 value: 
25.868000000000002 - type: ndcg_at_1000 value: 28.823999999999998 - type: ndcg_at_3 value: 13.611 - type: ndcg_at_5 value: 16.178 - type: precision_at_1 value: 7.9079999999999995 - type: precision_at_10 value: 3.4259999999999997 - type: precision_at_100 value: 0.687 - type: precision_at_1000 value: 0.094 - type: precision_at_3 value: 6.103 - type: precision_at_5 value: 4.951 - type: recall_at_1 value: 7.7170000000000005 - type: recall_at_10 value: 33.147999999999996 - type: recall_at_100 value: 65.55199999999999 - type: recall_at_1000 value: 88.748 - type: recall_at_3 value: 17.863 - type: recall_at_5 value: 24.083 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (en) type: mteb/mtop_domain config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 95.48335613315093 - type: f1 value: 95.18813547597892 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (en) type: mteb/mtop_intent config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 82.83857729138167 - type: f1 value: 63.61922697275075 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (en) type: mteb/amazon_massive_intent config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 76.65433759246805 - type: f1 value: 73.24385243140212 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (en) type: mteb/amazon_massive_scenario config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 79.98655010087425 - type: f1 value: 79.3880305174127 - task: type: Clustering dataset: name: MTEB MedrxivClusteringP2P type: mteb/medrxiv-clustering-p2p config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 30.109152457220606 - task: type: Clustering dataset: name: MTEB 
MedrxivClusteringS2S type: mteb/medrxiv-clustering-s2s config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 26.928355856501696 - task: type: Reranking dataset: name: MTEB MindSmallReranking type: mteb/mind_small config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 29.73337424086118 - type: mrr value: 30.753319352871074 - task: type: Retrieval dataset: name: MTEB NFCorpus type: nfcorpus config: default split: test revision: None metrics: - type: map_at_1 value: 4.303 - type: map_at_10 value: 9.653 - type: map_at_100 value: 11.952 - type: map_at_1000 value: 13.126999999999999 - type: map_at_3 value: 6.976 - type: map_at_5 value: 8.292 - type: mrr_at_1 value: 35.913000000000004 - type: mrr_at_10 value: 45.827 - type: mrr_at_100 value: 46.587 - type: mrr_at_1000 value: 46.635 - type: mrr_at_3 value: 43.344 - type: mrr_at_5 value: 44.876 - type: ndcg_at_1 value: 34.056 - type: ndcg_at_10 value: 27.161 - type: ndcg_at_100 value: 25.552999999999997 - type: ndcg_at_1000 value: 34.671 - type: ndcg_at_3 value: 31.267 - type: ndcg_at_5 value: 29.896 - type: precision_at_1 value: 35.604 - type: precision_at_10 value: 19.969 - type: precision_at_100 value: 6.115 - type: precision_at_1000 value: 1.892 - type: precision_at_3 value: 29.825000000000003 - type: precision_at_5 value: 26.253999999999998 - type: recall_at_1 value: 4.303 - type: recall_at_10 value: 14.033999999999999 - type: recall_at_100 value: 28.250999999999998 - type: recall_at_1000 value: 58.751 - type: recall_at_3 value: 8.257 - type: recall_at_5 value: 10.761999999999999 - task: type: Retrieval dataset: name: MTEB NQ type: nq config: default split: test revision: None metrics: - type: map_at_1 value: 14.668000000000001 - type: map_at_10 value: 26.593 - type: map_at_100 value: 28.094 - type: map_at_1000 value: 28.155 - type: map_at_3 value: 22.054000000000002 - type: map_at_5 value: 24.583 - type: 
mrr_at_1 value: 17.063 - type: mrr_at_10 value: 29.061999999999998 - type: mrr_at_100 value: 30.281000000000002 - type: mrr_at_1000 value: 30.325000000000003 - type: mrr_at_3 value: 24.754 - type: mrr_at_5 value: 27.281 - type: ndcg_at_1 value: 17.034 - type: ndcg_at_10 value: 34.157 - type: ndcg_at_100 value: 40.988 - type: ndcg_at_1000 value: 42.382999999999996 - type: ndcg_at_3 value: 25.076999999999998 - type: ndcg_at_5 value: 29.572 - type: precision_at_1 value: 17.034 - type: precision_at_10 value: 6.561 - type: precision_at_100 value: 1.04 - type: precision_at_1000 value: 0.117 - type: precision_at_3 value: 12.167 - type: precision_at_5 value: 9.809 - type: recall_at_1 value: 14.668000000000001 - type: recall_at_10 value: 55.291999999999994 - type: recall_at_100 value: 85.82 - type: recall_at_1000 value: 96.164 - type: recall_at_3 value: 31.208999999999996 - type: recall_at_5 value: 41.766 - task: type: Retrieval dataset: name: MTEB QuoraRetrieval type: quora config: default split: test revision: None metrics: - type: map_at_1 value: 66.20899999999999 - type: map_at_10 value: 80.024 - type: map_at_100 value: 80.73 - type: map_at_1000 value: 80.753 - type: map_at_3 value: 76.82900000000001 - type: map_at_5 value: 78.866 - type: mrr_at_1 value: 76.25 - type: mrr_at_10 value: 83.382 - type: mrr_at_100 value: 83.535 - type: mrr_at_1000 value: 83.538 - type: mrr_at_3 value: 82.013 - type: mrr_at_5 value: 82.931 - type: ndcg_at_1 value: 76.25999999999999 - type: ndcg_at_10 value: 84.397 - type: ndcg_at_100 value: 85.988 - type: ndcg_at_1000 value: 86.18299999999999 - type: ndcg_at_3 value: 80.778 - type: ndcg_at_5 value: 82.801 - type: precision_at_1 value: 76.25999999999999 - type: precision_at_10 value: 12.952 - type: precision_at_100 value: 1.509 - type: precision_at_1000 value: 0.156 - type: precision_at_3 value: 35.323 - type: precision_at_5 value: 23.524 - type: recall_at_1 value: 66.20899999999999 - type: recall_at_10 value: 93.108 - type: recall_at_100 
value: 98.817 - type: recall_at_1000 value: 99.857 - type: recall_at_3 value: 83.031 - type: recall_at_5 value: 88.441 - task: type: Clustering dataset: name: MTEB RedditClustering type: mteb/reddit-clustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 41.82535503883439 - task: type: Clustering dataset: name: MTEB RedditClusteringP2P type: mteb/reddit-clustering-p2p config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 62.077510084458055 - task: type: Retrieval dataset: name: MTEB SCIDOCS type: scidocs config: default split: test revision: None metrics: - type: map_at_1 value: 3.383 - type: map_at_10 value: 8.839 - type: map_at_100 value: 10.876 - type: map_at_1000 value: 11.201 - type: map_at_3 value: 6.361 - type: map_at_5 value: 7.536 - type: mrr_at_1 value: 16.6 - type: mrr_at_10 value: 26.003999999999998 - type: mrr_at_100 value: 27.271 - type: mrr_at_1000 value: 27.354 - type: mrr_at_3 value: 22.900000000000002 - type: mrr_at_5 value: 24.58 - type: ndcg_at_1 value: 16.6 - type: ndcg_at_10 value: 15.345 - type: ndcg_at_100 value: 23.659 - type: ndcg_at_1000 value: 29.537000000000003 - type: ndcg_at_3 value: 14.283999999999999 - type: ndcg_at_5 value: 12.509999999999998 - type: precision_at_1 value: 16.6 - type: precision_at_10 value: 8.17 - type: precision_at_100 value: 2.028 - type: precision_at_1000 value: 0.34299999999999997 - type: precision_at_3 value: 13.633000000000001 - type: precision_at_5 value: 11.16 - type: recall_at_1 value: 3.383 - type: recall_at_10 value: 16.557 - type: recall_at_100 value: 41.123 - type: recall_at_1000 value: 69.67999999999999 - type: recall_at_3 value: 8.298 - type: recall_at_5 value: 11.322000000000001 - task: type: STS dataset: name: MTEB SICK-R type: mteb/sickr-sts config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_spearman value: 
75.55405115197729 - task: type: STS dataset: name: MTEB STS12 type: mteb/sts12-sts config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_spearman value: 67.65074099726466 - task: type: STS dataset: name: MTEB STS13 type: mteb/sts13-sts config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_spearman value: 83.89765011154986 - task: type: STS dataset: name: MTEB STS14 type: mteb/sts14-sts config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_spearman value: 76.97256789216159 - task: type: STS dataset: name: MTEB STS15 type: mteb/sts15-sts config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_spearman value: 83.80216382863031 - task: type: STS dataset: name: MTEB STS16 type: mteb/sts16-sts config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_spearman value: 81.90574806413879 - task: type: STS dataset: name: MTEB STS17 (en-en) type: mteb/sts17-crosslingual-sts config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_spearman value: 85.58485422591949 - task: type: STS dataset: name: MTEB STS22 (en) type: mteb/sts22-crosslingual-sts config: en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_spearman value: 65.92967262944444 - task: type: STS dataset: name: MTEB STSBenchmark type: mteb/stsbenchmark-sts config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_spearman value: 80.41509666334721 - task: type: Reranking dataset: name: MTEB SciDocsRR type: mteb/scidocs-reranking config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 77.81287769479543 - type: mrr value: 94.13409665860645 - task: type: Retrieval dataset: name: MTEB SciFact type: scifact 
config: default split: test revision: None metrics: - type: map_at_1 value: 52.093999999999994 - type: map_at_10 value: 63.641999999999996 - type: map_at_100 value: 64.402 - type: map_at_1000 value: 64.416 - type: map_at_3 value: 60.878 - type: map_at_5 value: 62.778 - type: mrr_at_1 value: 55.333 - type: mrr_at_10 value: 65.139 - type: mrr_at_100 value: 65.75999999999999 - type: mrr_at_1000 value: 65.77199999999999 - type: mrr_at_3 value: 62.944 - type: mrr_at_5 value: 64.511 - type: ndcg_at_1 value: 55.333 - type: ndcg_at_10 value: 68.675 - type: ndcg_at_100 value: 71.794 - type: ndcg_at_1000 value: 72.18299999999999 - type: ndcg_at_3 value: 63.977 - type: ndcg_at_5 value: 66.866 - type: precision_at_1 value: 55.333 - type: precision_at_10 value: 9.232999999999999 - type: precision_at_100 value: 1.087 - type: precision_at_1000 value: 0.11199999999999999 - type: precision_at_3 value: 25.667 - type: precision_at_5 value: 17.0 - type: recall_at_1 value: 52.093999999999994 - type: recall_at_10 value: 82.506 - type: recall_at_100 value: 95.933 - type: recall_at_1000 value: 99.0 - type: recall_at_3 value: 70.078 - type: recall_at_5 value: 77.35600000000001 - task: type: PairClassification dataset: name: MTEB SprintDuplicateQuestions type: mteb/sprintduplicatequestions-pairclassification config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.7128712871287 - type: cos_sim_ap value: 91.30057039245253 - type: cos_sim_f1 value: 85.35480624056368 - type: cos_sim_precision value: 85.91691995947315 - type: cos_sim_recall value: 84.8 - type: dot_accuracy value: 99.35346534653465 - type: dot_ap value: 67.929309733355 - type: dot_f1 value: 63.94205897568547 - type: dot_precision value: 66.2379421221865 - type: dot_recall value: 61.8 - type: euclidean_accuracy value: 99.69009900990099 - type: euclidean_ap value: 89.62179420600057 - type: euclidean_f1 value: 83.93039918116682 - type: euclidean_precision value: 
85.9538784067086 - type: euclidean_recall value: 82.0 - type: manhattan_accuracy value: 99.70990099009902 - type: manhattan_ap value: 90.29611631593602 - type: manhattan_f1 value: 84.81729284611424 - type: manhattan_precision value: 87.38069989395547 - type: manhattan_recall value: 82.39999999999999 - type: max_accuracy value: 99.7128712871287 - type: max_ap value: 91.30057039245253 - type: max_f1 value: 85.35480624056368 - task: type: Clustering dataset: name: MTEB StackExchangeClustering type: mteb/stackexchange-clustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 67.33611278831218 - task: type: Clustering dataset: name: MTEB StackExchangeClusteringP2P type: mteb/stackexchange-clustering-p2p config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 34.504437768624214 - task: type: Reranking dataset: name: MTEB StackOverflowDupQuestions type: mteb/stackoverflowdupquestions-reranking config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 49.80014786474266 - type: mrr value: 50.468909154570916 - task: type: Summarization dataset: name: MTEB SummEval type: mteb/summeval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 30.677648147466808 - type: cos_sim_spearman value: 30.191761045901888 - type: dot_pearson value: 23.16759191245942 - type: dot_spearman value: 23.186942570638486 - task: type: Retrieval dataset: name: MTEB TRECCOVID type: trec-covid config: default split: test revision: None metrics: - type: map_at_1 value: 0.214 - type: map_at_10 value: 1.2309999999999999 - type: map_at_100 value: 5.867 - type: map_at_1000 value: 14.671999999999999 - type: map_at_3 value: 0.519 - type: map_at_5 value: 0.764 - type: mrr_at_1 value: 82.0 - type: mrr_at_10 value: 87.519 - type: mrr_at_100 value: 87.519 - type: mrr_at_1000 value: 
87.536 - type: mrr_at_3 value: 86.333 - type: mrr_at_5 value: 87.233 - type: ndcg_at_1 value: 77.0 - type: ndcg_at_10 value: 55.665 - type: ndcg_at_100 value: 39.410000000000004 - type: ndcg_at_1000 value: 37.21 - type: ndcg_at_3 value: 65.263 - type: ndcg_at_5 value: 61.424 - type: precision_at_1 value: 82.0 - type: precision_at_10 value: 55.400000000000006 - type: precision_at_100 value: 39.04 - type: precision_at_1000 value: 16.788 - type: precision_at_3 value: 67.333 - type: precision_at_5 value: 62.8 - type: recall_at_1 value: 0.214 - type: recall_at_10 value: 1.4200000000000002 - type: recall_at_100 value: 9.231 - type: recall_at_1000 value: 35.136 - type: recall_at_3 value: 0.544 - type: recall_at_5 value: 0.832 - task: type: Retrieval dataset: name: MTEB Touche2020 type: webis-touche2020 config: default split: test revision: None metrics: - type: map_at_1 value: 0.41000000000000003 - type: map_at_10 value: 2.32 - type: map_at_100 value: 4.077 - type: map_at_1000 value: 4.9430000000000005 - type: map_at_3 value: 1.087 - type: map_at_5 value: 1.466 - type: mrr_at_1 value: 6.122 - type: mrr_at_10 value: 13.999 - type: mrr_at_100 value: 16.524 - type: mrr_at_1000 value: 16.567999999999998 - type: mrr_at_3 value: 11.224 - type: mrr_at_5 value: 13.163 - type: ndcg_at_1 value: 5.102 - type: ndcg_at_10 value: 6.542000000000001 - type: ndcg_at_100 value: 14.127 - type: ndcg_at_1000 value: 24.396 - type: ndcg_at_3 value: 5.653 - type: ndcg_at_5 value: 5.5649999999999995 - type: precision_at_1 value: 6.122 - type: precision_at_10 value: 7.142999999999999 - type: precision_at_100 value: 3.51 - type: precision_at_1000 value: 0.9860000000000001 - type: precision_at_3 value: 6.802999999999999 - type: precision_at_5 value: 6.938999999999999 - type: recall_at_1 value: 0.41000000000000003 - type: recall_at_10 value: 5.627 - type: recall_at_100 value: 23.121 - type: recall_at_1000 value: 54.626 - type: recall_at_3 value: 1.763 - type: recall_at_5 value: 3.013 - task: type: 
Classification dataset: name: MTEB ToxicConversationsClassification type: mteb/toxic_conversations_50k config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 70.71119999999999 - type: ap value: 15.1342268718371 - type: f1 value: 55.043262693594855 - task: type: Classification dataset: name: MTEB TweetSentimentExtractionClassification type: mteb/tweet_sentiment_extraction config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 60.89983022071308 - type: f1 value: 61.13086468149106 - task: type: Clustering dataset: name: MTEB TwentyNewsgroupsClustering type: mteb/twentynewsgroups-clustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 30.264802332456515 - task: type: PairClassification dataset: name: MTEB TwitterSemEval2015 type: mteb/twittersemeval2015-pairclassification config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 84.46086904690947 - type: cos_sim_ap value: 68.76039123104324 - type: cos_sim_f1 value: 63.002224839680665 - type: cos_sim_precision value: 62.503245910153204 - type: cos_sim_recall value: 63.50923482849604 - type: dot_accuracy value: 80.07391071109257 - type: dot_ap value: 53.43322643579626 - type: dot_f1 value: 52.6850065983149 - type: dot_precision value: 42.81471704339218 - type: dot_recall value: 68.46965699208444 - type: euclidean_accuracy value: 84.2701317279609 - type: euclidean_ap value: 67.55078414631596 - type: euclidean_f1 value: 62.90723537877797 - type: euclidean_precision value: 62.392940565792884 - type: euclidean_recall value: 63.43007915567283 - type: manhattan_accuracy value: 84.22244739822375 - type: manhattan_ap value: 67.92488847948273 - type: manhattan_f1 value: 62.99132210311383 - type: manhattan_precision value: 60.99851705388038 - type: manhattan_recall value: 65.11873350923483 - 
type: max_accuracy value: 84.46086904690947 - type: max_ap value: 68.76039123104324 - type: max_f1 value: 63.002224839680665 - task: type: PairClassification dataset: name: MTEB TwitterURLCorpus type: mteb/twitterurlcorpus-pairclassification config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 87.71296619707377 - type: cos_sim_ap value: 82.76174215711472 - type: cos_sim_f1 value: 75.73585592141168 - type: cos_sim_precision value: 71.79416430985721 - type: cos_sim_recall value: 80.1355097012627 - type: dot_accuracy value: 85.62502425583111 - type: dot_ap value: 77.50549495030725 - type: dot_f1 value: 71.47900863425035 - type: dot_precision value: 65.4587361546834 - type: dot_recall value: 78.71881736987989 - type: euclidean_accuracy value: 87.12694531765437 - type: euclidean_ap value: 81.63583409712018 - type: euclidean_f1 value: 74.50966015324268 - type: euclidean_precision value: 71.11764294212331 - type: euclidean_recall value: 78.24145364952264 - type: manhattan_accuracy value: 87.35009896379088 - type: manhattan_ap value: 82.20417545366242 - type: manhattan_f1 value: 74.84157622550805 - type: manhattan_precision value: 71.00898410504493 - type: manhattan_recall value: 79.11148752694795 - type: max_accuracy value: 87.71296619707377 - type: max_ap value: 82.76174215711472 - type: max_f1 value: 75.73585592141168
---

# LLM2Vec: Large Language Models Are Secretly Powerful Text Encoders

> LLM2Vec is a simple recipe to convert decoder-only LLMs into text encoders. It consists of 3 simple steps: 1) enabling bidirectional attention, 2) masked next token prediction, and 3) unsupervised contrastive learning. The model can be further fine-tuned to achieve state-of-the-art performance.
- **Repository:** https://github.com/McGill-NLP/llm2vec
- **Paper:** https://arxiv.org/abs/2404.05961

## Installation

```bash
pip install llm2vec
```

## Usage

```python
from llm2vec import LLM2Vec

import torch
from transformers import AutoTokenizer, AutoModel, AutoConfig
from peft import PeftModel

# Loading base Mistral model, along with custom code that enables bidirectional
# connections in decoder-only LLMs. MNTP LoRA weights are merged into the base model.
tokenizer = AutoTokenizer.from_pretrained(
    "McGill-NLP/LLM2Vec-Mistral-7B-Instruct-v2-mntp"
)
config = AutoConfig.from_pretrained(
    "McGill-NLP/LLM2Vec-Mistral-7B-Instruct-v2-mntp", trust_remote_code=True
)
model = AutoModel.from_pretrained(
    "McGill-NLP/LLM2Vec-Mistral-7B-Instruct-v2-mntp",
    trust_remote_code=True,
    config=config,
    torch_dtype=torch.bfloat16,
    device_map="cuda" if torch.cuda.is_available() else "cpu",
)
model = PeftModel.from_pretrained(
    model,
    "McGill-NLP/LLM2Vec-Mistral-7B-Instruct-v2-mntp",
)
model = model.merge_and_unload()  # This can take several minutes on CPU

# Loading unsupervised SimCSE model. This loads the trained LoRA weights on top
# of the MNTP model. Hence the final weights are: Base model + MNTP (LoRA) + SimCSE (LoRA).
model = PeftModel.from_pretrained(
    model, "McGill-NLP/LLM2Vec-Mistral-7B-Instruct-v2-mntp-unsup-simcse"
)

# Wrapper for encoding and pooling operations
l2v = LLM2Vec(model, tokenizer, pooling_mode="mean", max_length=512)

# Encoding queries using instructions
instruction = (
    "Given a web search query, retrieve relevant passages that answer the query:"
)
queries = [
    [instruction, "how much protein should a female eat"],
    [instruction, "summit define"],
]
q_reps = l2v.encode(queries)

# Encoding documents. Instructions are not required for documents.
documents = [
    "As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
    "Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments.",
]
d_reps = l2v.encode(documents)

# Compute cosine similarity
q_reps_norm = torch.nn.functional.normalize(q_reps, p=2, dim=1)
d_reps_norm = torch.nn.functional.normalize(d_reps, p=2, dim=1)
cos_sim = torch.mm(q_reps_norm, d_reps_norm.transpose(0, 1))

print(cos_sim)
"""
tensor([[0.6175, 0.2535],
        [0.2298, 0.5792]])
"""
```

## Questions

If you have any questions about the code, feel free to email Parishad (`[email protected]`) and Vaibhav (`[email protected]`).
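The unsupervised SimCSE stage loaded in the usage example (the `-unsup-simcse` LoRA weights) is trained with an in-batch InfoNCE contrastive objective: two encodings of the same sentence form a positive pair, and every other sentence in the batch serves as a negative. The sketch below illustrates only that objective; it is not code from the LLM2Vec repository, and the function name, temperature value, and toy data are illustrative assumptions.

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.05):
    """SimCSE-style InfoNCE loss (illustrative, not the LLM2Vec implementation).

    z1[i] and z2[i] are two embeddings of the same sentence (e.g. two
    dropout-perturbed forward passes); all other rows in the batch act
    as in-batch negatives.
    """
    # L2-normalize so dot products become cosine similarities.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = (z1 @ z2.T) / temperature  # (batch, batch) similarity matrix
    # Cross-entropy with the diagonal (the true pairs) as the targets.
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))

rng = np.random.default_rng(0)
emb = rng.normal(size=(4, 8))  # 4 toy "sentence" embeddings
matched = info_nce_loss(emb, emb)                  # views agree
mismatched = info_nce_loss(emb, emb[::-1].copy())  # positive pairing shuffled
print(matched, mismatched)
```

When the two views agree, the diagonal of the similarity matrix dominates and the loss is small; shuffling the pairing makes it large. That gap is the training pressure that pulls paired encodings together while pushing apart unrelated ones.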
[ "BIOSSES", "SCIFACT" ]
RichardErkhov/USTC-KnowledgeComputingLab_-_Llama3-KALE-LM-Chem-1.5-8B-gguf
RichardErkhov
null
[ "gguf", "arxiv:2409.18695", "endpoints_compatible", "region:us", "conversational" ]
2024-10-10T16:42:58Z
2024-10-10T20:15:52+00:00
337
0
--- {} --- Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Llama3-KALE-LM-Chem-1.5-8B - GGUF - Model creator: https://huggingface.co/USTC-KnowledgeComputingLab/ - Original model: https://huggingface.co/USTC-KnowledgeComputingLab/Llama3-KALE-LM-Chem-1.5-8B/ | Name | Quant method | Size | | ---- | ---- | ---- | | [Llama3-KALE-LM-Chem-1.5-8B.Q2_K.gguf](https://huggingface.co/RichardErkhov/USTC-KnowledgeComputingLab_-_Llama3-KALE-LM-Chem-1.5-8B-gguf/blob/main/Llama3-KALE-LM-Chem-1.5-8B.Q2_K.gguf) | Q2_K | 2.96GB | | [Llama3-KALE-LM-Chem-1.5-8B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/USTC-KnowledgeComputingLab_-_Llama3-KALE-LM-Chem-1.5-8B-gguf/blob/main/Llama3-KALE-LM-Chem-1.5-8B.IQ3_XS.gguf) | IQ3_XS | 3.28GB | | [Llama3-KALE-LM-Chem-1.5-8B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/USTC-KnowledgeComputingLab_-_Llama3-KALE-LM-Chem-1.5-8B-gguf/blob/main/Llama3-KALE-LM-Chem-1.5-8B.IQ3_S.gguf) | IQ3_S | 3.43GB | | [Llama3-KALE-LM-Chem-1.5-8B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/USTC-KnowledgeComputingLab_-_Llama3-KALE-LM-Chem-1.5-8B-gguf/blob/main/Llama3-KALE-LM-Chem-1.5-8B.Q3_K_S.gguf) | Q3_K_S | 3.41GB | | [Llama3-KALE-LM-Chem-1.5-8B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/USTC-KnowledgeComputingLab_-_Llama3-KALE-LM-Chem-1.5-8B-gguf/blob/main/Llama3-KALE-LM-Chem-1.5-8B.IQ3_M.gguf) | IQ3_M | 3.52GB | | [Llama3-KALE-LM-Chem-1.5-8B.Q3_K.gguf](https://huggingface.co/RichardErkhov/USTC-KnowledgeComputingLab_-_Llama3-KALE-LM-Chem-1.5-8B-gguf/blob/main/Llama3-KALE-LM-Chem-1.5-8B.Q3_K.gguf) | Q3_K | 3.74GB | | [Llama3-KALE-LM-Chem-1.5-8B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/USTC-KnowledgeComputingLab_-_Llama3-KALE-LM-Chem-1.5-8B-gguf/blob/main/Llama3-KALE-LM-Chem-1.5-8B.Q3_K_M.gguf) | Q3_K_M | 3.74GB | | 
[Llama3-KALE-LM-Chem-1.5-8B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/USTC-KnowledgeComputingLab_-_Llama3-KALE-LM-Chem-1.5-8B-gguf/blob/main/Llama3-KALE-LM-Chem-1.5-8B.Q3_K_L.gguf) | Q3_K_L | 4.03GB | | [Llama3-KALE-LM-Chem-1.5-8B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/USTC-KnowledgeComputingLab_-_Llama3-KALE-LM-Chem-1.5-8B-gguf/blob/main/Llama3-KALE-LM-Chem-1.5-8B.IQ4_XS.gguf) | IQ4_XS | 4.18GB | | [Llama3-KALE-LM-Chem-1.5-8B.Q4_0.gguf](https://huggingface.co/RichardErkhov/USTC-KnowledgeComputingLab_-_Llama3-KALE-LM-Chem-1.5-8B-gguf/blob/main/Llama3-KALE-LM-Chem-1.5-8B.Q4_0.gguf) | Q4_0 | 4.34GB | | [Llama3-KALE-LM-Chem-1.5-8B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/USTC-KnowledgeComputingLab_-_Llama3-KALE-LM-Chem-1.5-8B-gguf/blob/main/Llama3-KALE-LM-Chem-1.5-8B.IQ4_NL.gguf) | IQ4_NL | 1.49GB | | [Llama3-KALE-LM-Chem-1.5-8B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/USTC-KnowledgeComputingLab_-_Llama3-KALE-LM-Chem-1.5-8B-gguf/blob/main/Llama3-KALE-LM-Chem-1.5-8B.Q4_K_S.gguf) | Q4_K_S | 4.37GB | | [Llama3-KALE-LM-Chem-1.5-8B.Q4_K.gguf](https://huggingface.co/RichardErkhov/USTC-KnowledgeComputingLab_-_Llama3-KALE-LM-Chem-1.5-8B-gguf/blob/main/Llama3-KALE-LM-Chem-1.5-8B.Q4_K.gguf) | Q4_K | 4.58GB | | [Llama3-KALE-LM-Chem-1.5-8B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/USTC-KnowledgeComputingLab_-_Llama3-KALE-LM-Chem-1.5-8B-gguf/blob/main/Llama3-KALE-LM-Chem-1.5-8B.Q4_K_M.gguf) | Q4_K_M | 4.58GB | | [Llama3-KALE-LM-Chem-1.5-8B.Q4_1.gguf](https://huggingface.co/RichardErkhov/USTC-KnowledgeComputingLab_-_Llama3-KALE-LM-Chem-1.5-8B-gguf/blob/main/Llama3-KALE-LM-Chem-1.5-8B.Q4_1.gguf) | Q4_1 | 4.78GB | | [Llama3-KALE-LM-Chem-1.5-8B.Q5_0.gguf](https://huggingface.co/RichardErkhov/USTC-KnowledgeComputingLab_-_Llama3-KALE-LM-Chem-1.5-8B-gguf/blob/main/Llama3-KALE-LM-Chem-1.5-8B.Q5_0.gguf) | Q5_0 | 5.21GB | | 
[Llama3-KALE-LM-Chem-1.5-8B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/USTC-KnowledgeComputingLab_-_Llama3-KALE-LM-Chem-1.5-8B-gguf/blob/main/Llama3-KALE-LM-Chem-1.5-8B.Q5_K_S.gguf) | Q5_K_S | 5.21GB | | [Llama3-KALE-LM-Chem-1.5-8B.Q5_K.gguf](https://huggingface.co/RichardErkhov/USTC-KnowledgeComputingLab_-_Llama3-KALE-LM-Chem-1.5-8B-gguf/blob/main/Llama3-KALE-LM-Chem-1.5-8B.Q5_K.gguf) | Q5_K | 5.34GB | | [Llama3-KALE-LM-Chem-1.5-8B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/USTC-KnowledgeComputingLab_-_Llama3-KALE-LM-Chem-1.5-8B-gguf/blob/main/Llama3-KALE-LM-Chem-1.5-8B.Q5_K_M.gguf) | Q5_K_M | 5.34GB | | [Llama3-KALE-LM-Chem-1.5-8B.Q5_1.gguf](https://huggingface.co/RichardErkhov/USTC-KnowledgeComputingLab_-_Llama3-KALE-LM-Chem-1.5-8B-gguf/blob/main/Llama3-KALE-LM-Chem-1.5-8B.Q5_1.gguf) | Q5_1 | 5.65GB | | [Llama3-KALE-LM-Chem-1.5-8B.Q6_K.gguf](https://huggingface.co/RichardErkhov/USTC-KnowledgeComputingLab_-_Llama3-KALE-LM-Chem-1.5-8B-gguf/blob/main/Llama3-KALE-LM-Chem-1.5-8B.Q6_K.gguf) | Q6_K | 6.14GB | | [Llama3-KALE-LM-Chem-1.5-8B.Q8_0.gguf](https://huggingface.co/RichardErkhov/USTC-KnowledgeComputingLab_-_Llama3-KALE-LM-Chem-1.5-8B-gguf/blob/main/Llama3-KALE-LM-Chem-1.5-8B.Q8_0.gguf) | Q8_0 | 7.95GB | Original model description: --- license: llama3 language: - en base_model: - meta-llama/Meta-Llama-3-8B-Instruct tags: - KALE-LM - science - chemistry pipeline_tag: text-generation --- # Llama3-KALE-LM-Chem-1.5-8B ## Introduction We are thrilled to present Llama3-KALE-LM-Chem-1.5-8B, a new version of our open-source KALE-LM for science, which specializes in chemistry. We have trained our model with a larger amount of data. 
## Benchmarks ### Open Benchmarks | Models | ChemBench | MMLU | MMLU-Chem | SciQ | IE(Acc) | IE(LS) | | ---- | ---- | ---- | ---- | ---- | ---- | ---- | | GPT-3.5 | 47.15 | 69.75 | 53.32 | 89.6 | 52.98 | 68.28 | | GPT-4 | 53.72 | 78.67 | 63.70 | 94.10 | 54.20 | 69.74 | | Llama3-8B-Instruct | 46.02 | 68.3 | 51.10 | 93.30 | 45.83 | 61.22 | | LlaSMol | 28.47 | 54.47 | 33.24 | 72.30 | 2.16 | 3.23 | | ChemDFM | 44.44 | 58.11 | 45.60 | 86.70 | 7.61 | 11.49 | | ChemLLM-7B-Chat | 34.16 | 61.79 | 48.39 | 94.00 | 29.66 | 39.17 | | ChemLLM-7B-Chat-1.5-SFT | 42.75 | 63.56 | 49.63 | **95.10** | 14.96 | 19.61 | | **Llama3-KALE-LM-Chem-1.5-8B** | **57.01** | 68.06 | **54.83** | 91.60 | **57.53** | **64.16** | #### ChemBench Details (Evaluated By OpenCompass) | Models | NC | PP | M2C | C2M | PP | RS | YP | TP | SP | Average | | ------ | ------ | ------ | ------ | ------ | ------ | ------ | ------ | ------ | ------ | ------ | | GPT-3.5 | 46.93 | 56.98 | 85.28 | 38.25 | 43.67 | 42.33 | 30.33 | 42.57 | 38 | 47.15 | | GPT-4 | 54.82 | 65.02 | 92.64 | 52.88 | 62.67 | 52.67 | 42.33 | 24.75 | 35.67 | 53.72 | | Llama3-8B-Instruct | 51.31 | 27.79 | 90.30 | 40.88 | 34.00 | 30.00 | 45.33 | 60.89 | 33.67 | 46.02 | | LlaSMol | 27.78 | 29.34 | 31.44 | 23.38 | 25.67 | 24.00 | 37.33 | 34.65 | 22.67 | 28.47 | | ChemDFM | 36.92 | 55.57 | 83.95 | 42.00 | 40.00 | 37.33 | 39.00 | 33.17 | 32.00 | 44.44 | | ChemLLM-7B-Chat | 41.05 | 29.76 | 85.28 | 26.12 | 26.00 | 24.00 | 20.00 | 24.26 | 31.00 | 34.16 | | ChemLLM-7B-Chat-1.5-SFT | 50.06 | 49.51 | 85.28 | 38.75 | 38.00 | 26.67 | 28.33 | 31.68 | 33.67 | 42.44 | | Llama3-KALE-LM-Chem-1.5-8B | 61.33 | 43.44 | 90.30 | 53.62 | 72.67 | 53.67 | 46.00 | 47.03 | 45.00 | 57.01 | ## Cite This Work ``` @article{dai2024kale, title={KALE-LM: Unleash The Power Of AI For Science Via Knowledge And Logic Enhanced Large Model}, author={Dai, Weichen and Chen, Yezeng and Dai, Zijie and Huang, Zhijie and Liu, Yubo and Pan, Yixuan and Song, Baiyang and Zhong, Chengli and Li, 
Xinhe and Wang, Zeyu and others}, journal={arXiv preprint arXiv:2409.18695}, year={2024} } ```
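As a sanity check on the benchmark tables above, the headline ChemBench score is the unweighted mean of the nine per-task scores. A quick verification for Llama3-KALE-LM-Chem-1.5-8B (keys follow the task abbreviations from the details table; the two PP columns are disambiguated here as PP1/PP2):

```python
# Per-task ChemBench scores for Llama3-KALE-LM-Chem-1.5-8B,
# copied from the details table above.
scores = {
    "NC": 61.33, "PP1": 43.44, "M2C": 90.30, "C2M": 53.62,
    "PP2": 72.67, "RS": 53.67, "YP": 46.00, "TP": 47.03, "SP": 45.00,
}

# Unweighted mean over the nine tasks.
average = sum(scores.values()) / len(scores)
print(round(average, 2))  # 57.01, matching the reported ChemBench average
```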
[ "SCIQ" ]
nomic-ai/nomic-embed-text-v1-ablated
nomic-ai
sentence-similarity
[ "sentence-transformers", "pytorch", "onnx", "nomic_bert", "feature-extraction", "sentence-similarity", "mteb", "custom_code", "arxiv:2402.01613", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
2024-01-15T21:26:38Z
2024-08-02T02:24:29+00:00
335
4
--- library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - feature-extraction - sentence-similarity - mteb model-index: - name: epoch_0_model results: - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en) type: mteb/amazon_counterfactual config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 78.67164179104476 - type: ap value: 42.7379383648841 - type: f1 value: 72.79997373883408 - task: type: Classification dataset: name: MTEB AmazonPolarityClassification type: mteb/amazon_polarity config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 90.413775 - type: ap value: 87.08812293673202 - type: f1 value: 90.39246586225426 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (en) type: mteb/amazon_reviews_multi config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 47.80799999999999 - type: f1 value: 47.25679462673503 - task: type: Retrieval dataset: name: MTEB ArguAna type: arguana config: default split: test revision: None metrics: - type: map_at_1 value: 30.37 - type: map_at_10 value: 45.748 - type: map_at_100 value: 46.617 - type: map_at_1000 value: 46.622 - type: map_at_3 value: 40.564 - type: map_at_5 value: 43.69 - type: mrr_at_1 value: 30.868000000000002 - type: mrr_at_10 value: 45.905 - type: mrr_at_100 value: 46.787 - type: mrr_at_1000 value: 46.792 - type: mrr_at_3 value: 40.717999999999996 - type: mrr_at_5 value: 43.851 - type: ndcg_at_1 value: 30.37 - type: ndcg_at_10 value: 54.662 - type: ndcg_at_100 value: 58.23700000000001 - type: ndcg_at_1000 value: 58.373 - type: ndcg_at_3 value: 44.069 - type: ndcg_at_5 value: 49.728 - type: precision_at_1 value: 30.37 - type: precision_at_10 value: 8.321000000000002 - type: precision_at_100 value: 0.985 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 18.089 - 
type: precision_at_5 value: 13.613 - type: recall_at_1 value: 30.37 - type: recall_at_10 value: 83.21499999999999 - type: recall_at_100 value: 98.506 - type: recall_at_1000 value: 99.57300000000001 - type: recall_at_3 value: 54.266999999999996 - type: recall_at_5 value: 68.065 - task: type: Clustering dataset: name: MTEB ArxivClusteringP2P type: mteb/arxiv-clustering-p2p config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 45.85329429748079 - task: type: Clustering dataset: name: MTEB ArxivClusteringS2S type: mteb/arxiv-clustering-s2s config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 36.12666783330692 - task: type: Reranking dataset: name: MTEB AskUbuntuDupQuestions type: mteb/askubuntudupquestions-reranking config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 57.58783867794241 - type: mrr value: 71.84078617596622 - task: type: STS dataset: name: MTEB BIOSSES type: mteb/biosses-sts config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 87.92453139507079 - type: cos_sim_spearman value: 85.37122234964886 - type: euclidean_pearson value: 86.19345621799168 - type: euclidean_spearman value: 85.37122234964886 - type: manhattan_pearson value: 86.4685290616604 - type: manhattan_spearman value: 85.91400580167537 - task: type: Classification dataset: name: MTEB Banking77Classification type: mteb/banking77 config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 83.81818181818181 - type: f1 value: 83.76155217378863 - task: type: Clustering dataset: name: MTEB BiorxivClusteringP2P type: mteb/biorxiv-clustering-p2p config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 38.46362764203256 - task: type: Clustering dataset: name: MTEB 
BiorxivClusteringS2S type: mteb/biorxiv-clustering-s2s config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 33.13807021168658 - task: type: Retrieval dataset: name: MTEB CQADupstackAndroidRetrieval type: BeIR/cqadupstack config: default split: test revision: None metrics: - type: map_at_1 value: 29.725 - type: map_at_10 value: 39.654 - type: map_at_100 value: 41.022 - type: map_at_1000 value: 41.144999999999996 - type: map_at_3 value: 36.819 - type: map_at_5 value: 38.376 - type: mrr_at_1 value: 36.195 - type: mrr_at_10 value: 45.171 - type: mrr_at_100 value: 45.987 - type: mrr_at_1000 value: 46.033 - type: mrr_at_3 value: 43.038 - type: mrr_at_5 value: 44.196000000000005 - type: ndcg_at_1 value: 36.195 - type: ndcg_at_10 value: 45.194 - type: ndcg_at_100 value: 50.516000000000005 - type: ndcg_at_1000 value: 52.739000000000004 - type: ndcg_at_3 value: 41.142 - type: ndcg_at_5 value: 42.973 - type: precision_at_1 value: 36.195 - type: precision_at_10 value: 8.312 - type: precision_at_100 value: 1.346 - type: precision_at_1000 value: 0.182 - type: precision_at_3 value: 19.599 - type: precision_at_5 value: 13.847999999999999 - type: recall_at_1 value: 29.725 - type: recall_at_10 value: 55.51199999999999 - type: recall_at_100 value: 78.182 - type: recall_at_1000 value: 92.727 - type: recall_at_3 value: 43.287 - type: recall_at_5 value: 48.732 - type: map_at_1 value: 30.23 - type: map_at_10 value: 40.091 - type: map_at_100 value: 41.251 - type: map_at_1000 value: 41.384 - type: map_at_3 value: 37.247 - type: map_at_5 value: 38.865 - type: mrr_at_1 value: 38.279999999999994 - type: mrr_at_10 value: 46.288000000000004 - type: mrr_at_100 value: 47.022999999999996 - type: mrr_at_1000 value: 47.068 - type: mrr_at_3 value: 44.395 - type: mrr_at_5 value: 45.446 - type: ndcg_at_1 value: 38.279999999999994 - type: ndcg_at_10 value: 45.647 - type: ndcg_at_100 value: 49.851 - type: ndcg_at_1000 value: 51.991 - type: 
ndcg_at_3 value: 41.795 - type: ndcg_at_5 value: 43.578 - type: precision_at_1 value: 38.279999999999994 - type: precision_at_10 value: 8.522 - type: precision_at_100 value: 1.361 - type: precision_at_1000 value: 0.185 - type: precision_at_3 value: 20.297 - type: precision_at_5 value: 14.255 - type: recall_at_1 value: 30.23 - type: recall_at_10 value: 55.094 - type: recall_at_100 value: 72.887 - type: recall_at_1000 value: 86.295 - type: recall_at_3 value: 43.244 - type: recall_at_5 value: 48.507 - type: map_at_1 value: 40.854 - type: map_at_10 value: 52.232 - type: map_at_100 value: 53.129000000000005 - type: map_at_1000 value: 53.185 - type: map_at_3 value: 49.094 - type: map_at_5 value: 50.834999999999994 - type: mrr_at_1 value: 46.708 - type: mrr_at_10 value: 56.021 - type: mrr_at_100 value: 56.584 - type: mrr_at_1000 value: 56.611999999999995 - type: mrr_at_3 value: 53.657 - type: mrr_at_5 value: 55.027 - type: ndcg_at_1 value: 46.708 - type: ndcg_at_10 value: 57.89 - type: ndcg_at_100 value: 61.541999999999994 - type: ndcg_at_1000 value: 62.754 - type: ndcg_at_3 value: 52.632 - type: ndcg_at_5 value: 55.104 - type: precision_at_1 value: 46.708 - type: precision_at_10 value: 9.122 - type: precision_at_100 value: 1.187 - type: precision_at_1000 value: 0.134 - type: precision_at_3 value: 23.072 - type: precision_at_5 value: 15.661 - type: recall_at_1 value: 40.854 - type: recall_at_10 value: 70.98 - type: recall_at_100 value: 86.947 - type: recall_at_1000 value: 95.62 - type: recall_at_3 value: 56.782999999999994 - type: recall_at_5 value: 62.980000000000004 - type: map_at_1 value: 26.366 - type: map_at_10 value: 33.674 - type: map_at_100 value: 34.58 - type: map_at_1000 value: 34.662 - type: map_at_3 value: 31.596999999999998 - type: map_at_5 value: 32.596000000000004 - type: mrr_at_1 value: 28.588 - type: mrr_at_10 value: 35.912 - type: mrr_at_100 value: 36.696 - type: mrr_at_1000 value: 36.760999999999996 - type: mrr_at_3 value: 33.823 - type: mrr_at_5 value: 
34.829 - type: ndcg_at_1 value: 28.588 - type: ndcg_at_10 value: 38.031 - type: ndcg_at_100 value: 42.678 - type: ndcg_at_1000 value: 44.871 - type: ndcg_at_3 value: 33.815 - type: ndcg_at_5 value: 35.531 - type: precision_at_1 value: 28.588 - type: precision_at_10 value: 5.638 - type: precision_at_100 value: 0.8380000000000001 - type: precision_at_1000 value: 0.106 - type: precision_at_3 value: 13.974 - type: precision_at_5 value: 9.401 - type: recall_at_1 value: 26.366 - type: recall_at_10 value: 49.353 - type: recall_at_100 value: 71.194 - type: recall_at_1000 value: 87.842 - type: recall_at_3 value: 37.829 - type: recall_at_5 value: 41.976 - type: map_at_1 value: 16.634 - type: map_at_10 value: 23.271 - type: map_at_100 value: 24.366 - type: map_at_1000 value: 24.484 - type: map_at_3 value: 21.075 - type: map_at_5 value: 22.364 - type: mrr_at_1 value: 20.522000000000002 - type: mrr_at_10 value: 27.735 - type: mrr_at_100 value: 28.691 - type: mrr_at_1000 value: 28.762999999999998 - type: mrr_at_3 value: 25.518 - type: mrr_at_5 value: 26.762000000000004 - type: ndcg_at_1 value: 20.522000000000002 - type: ndcg_at_10 value: 27.791 - type: ndcg_at_100 value: 33.101 - type: ndcg_at_1000 value: 36.075 - type: ndcg_at_3 value: 23.74 - type: ndcg_at_5 value: 25.691000000000003 - type: precision_at_1 value: 20.522000000000002 - type: precision_at_10 value: 4.963 - type: precision_at_100 value: 0.873 - type: precision_at_1000 value: 0.128 - type: precision_at_3 value: 11.111 - type: precision_at_5 value: 8.01 - type: recall_at_1 value: 16.634 - type: recall_at_10 value: 37.498 - type: recall_at_100 value: 60.598 - type: recall_at_1000 value: 81.828 - type: recall_at_3 value: 26.136 - type: recall_at_5 value: 31.211 - type: map_at_1 value: 28.200999999999997 - type: map_at_10 value: 37.619 - type: map_at_100 value: 38.834999999999994 - type: map_at_1000 value: 38.951 - type: map_at_3 value: 35.119 - type: map_at_5 value: 36.559999999999995 - type: mrr_at_1 value: 
33.782000000000004 - type: mrr_at_10 value: 43.033 - type: mrr_at_100 value: 43.761 - type: mrr_at_1000 value: 43.818 - type: mrr_at_3 value: 40.727999999999994 - type: mrr_at_5 value: 42.129 - type: ndcg_at_1 value: 33.782000000000004 - type: ndcg_at_10 value: 43.178 - type: ndcg_at_100 value: 48.27 - type: ndcg_at_1000 value: 50.559 - type: ndcg_at_3 value: 38.974 - type: ndcg_at_5 value: 41.019 - type: precision_at_1 value: 33.782000000000004 - type: precision_at_10 value: 7.575 - type: precision_at_100 value: 1.1820000000000002 - type: precision_at_1000 value: 0.154 - type: precision_at_3 value: 18.223 - type: precision_at_5 value: 12.742999999999999 - type: recall_at_1 value: 28.200999999999997 - type: recall_at_10 value: 54.089 - type: recall_at_100 value: 75.57000000000001 - type: recall_at_1000 value: 90.827 - type: recall_at_3 value: 42.435 - type: recall_at_5 value: 47.652 - type: map_at_1 value: 25.313000000000002 - type: map_at_10 value: 34.329 - type: map_at_100 value: 35.445 - type: map_at_1000 value: 35.556 - type: map_at_3 value: 31.659 - type: map_at_5 value: 32.981 - type: mrr_at_1 value: 30.822 - type: mrr_at_10 value: 39.084 - type: mrr_at_100 value: 39.97 - type: mrr_at_1000 value: 40.025 - type: mrr_at_3 value: 36.815 - type: mrr_at_5 value: 38.002 - type: ndcg_at_1 value: 30.822 - type: ndcg_at_10 value: 39.512 - type: ndcg_at_100 value: 44.925 - type: ndcg_at_1000 value: 47.274 - type: ndcg_at_3 value: 35.055 - type: ndcg_at_5 value: 36.788 - type: precision_at_1 value: 30.822 - type: precision_at_10 value: 7.1 - type: precision_at_100 value: 1.15 - type: precision_at_1000 value: 0.151 - type: precision_at_3 value: 16.476 - type: precision_at_5 value: 11.461 - type: recall_at_1 value: 25.313000000000002 - type: recall_at_10 value: 50.178 - type: recall_at_100 value: 74.312 - type: recall_at_1000 value: 90.50200000000001 - type: recall_at_3 value: 37.626 - type: recall_at_5 value: 42.34 - type: map_at_1 value: 25.502250000000004 - type: 
map_at_10 value: 33.655166666666666 - type: map_at_100 value: 34.72833333333333 - type: map_at_1000 value: 34.84375 - type: map_at_3 value: 31.253999999999998 - type: map_at_5 value: 32.55075 - type: mrr_at_1 value: 29.91975 - type: mrr_at_10 value: 37.65441666666667 - type: mrr_at_100 value: 38.464416666666665 - type: mrr_at_1000 value: 38.52591666666667 - type: mrr_at_3 value: 35.57858333333333 - type: mrr_at_5 value: 36.71083333333333 - type: ndcg_at_1 value: 29.91975 - type: ndcg_at_10 value: 38.47316666666667 - type: ndcg_at_100 value: 43.256416666666674 - type: ndcg_at_1000 value: 45.70658333333333 - type: ndcg_at_3 value: 34.350833333333334 - type: ndcg_at_5 value: 36.184583333333336 - type: precision_at_1 value: 29.91975 - type: precision_at_10 value: 6.5489999999999995 - type: precision_at_100 value: 1.0553333333333332 - type: precision_at_1000 value: 0.14516666666666667 - type: precision_at_3 value: 15.579083333333333 - type: precision_at_5 value: 10.851083333333332 - type: recall_at_1 value: 25.502250000000004 - type: recall_at_10 value: 48.7965 - type: recall_at_100 value: 69.93500000000002 - type: recall_at_1000 value: 87.17049999999999 - type: recall_at_3 value: 37.20433333333333 - type: recall_at_5 value: 42.00783333333333 - type: map_at_1 value: 23.777 - type: map_at_10 value: 29.932 - type: map_at_100 value: 30.778 - type: map_at_1000 value: 30.879 - type: map_at_3 value: 27.898 - type: map_at_5 value: 29.086000000000002 - type: mrr_at_1 value: 26.227 - type: mrr_at_10 value: 32.443 - type: mrr_at_100 value: 33.212 - type: mrr_at_1000 value: 33.29 - type: mrr_at_3 value: 30.419 - type: mrr_at_5 value: 31.616 - type: ndcg_at_1 value: 26.227 - type: ndcg_at_10 value: 33.774 - type: ndcg_at_100 value: 37.917 - type: ndcg_at_1000 value: 40.557 - type: ndcg_at_3 value: 29.875 - type: ndcg_at_5 value: 31.845000000000002 - type: precision_at_1 value: 26.227 - type: precision_at_10 value: 5.153 - type: precision_at_100 value: 0.784 - type: 
precision_at_1000 value: 0.108 - type: precision_at_3 value: 12.423 - type: precision_at_5 value: 8.773 - type: recall_at_1 value: 23.777 - type: recall_at_10 value: 43.142 - type: recall_at_100 value: 61.68900000000001 - type: recall_at_1000 value: 81.37100000000001 - type: recall_at_3 value: 32.582 - type: recall_at_5 value: 37.403 - type: map_at_1 value: 16.659 - type: map_at_10 value: 22.926 - type: map_at_100 value: 23.837 - type: map_at_1000 value: 23.953 - type: map_at_3 value: 21.029999999999998 - type: map_at_5 value: 22.019 - type: mrr_at_1 value: 19.649 - type: mrr_at_10 value: 26.32 - type: mrr_at_100 value: 27.143 - type: mrr_at_1000 value: 27.222 - type: mrr_at_3 value: 24.484 - type: mrr_at_5 value: 25.468000000000004 - type: ndcg_at_1 value: 19.649 - type: ndcg_at_10 value: 26.941 - type: ndcg_at_100 value: 31.522 - type: ndcg_at_1000 value: 34.538999999999994 - type: ndcg_at_3 value: 23.419999999999998 - type: ndcg_at_5 value: 24.927 - type: precision_at_1 value: 19.649 - type: precision_at_10 value: 4.7010000000000005 - type: precision_at_100 value: 0.8130000000000001 - type: precision_at_1000 value: 0.124 - type: precision_at_3 value: 10.735999999999999 - type: precision_at_5 value: 7.591 - type: recall_at_1 value: 16.659 - type: recall_at_10 value: 35.721000000000004 - type: recall_at_100 value: 56.43 - type: recall_at_1000 value: 78.464 - type: recall_at_3 value: 25.878 - type: recall_at_5 value: 29.731999999999996 - type: map_at_1 value: 24.309 - type: map_at_10 value: 31.990000000000002 - type: map_at_100 value: 32.895 - type: map_at_1000 value: 33.0 - type: map_at_3 value: 29.848999999999997 - type: map_at_5 value: 30.942999999999998 - type: mrr_at_1 value: 28.638 - type: mrr_at_10 value: 36.036 - type: mrr_at_100 value: 36.787 - type: mrr_at_1000 value: 36.855 - type: mrr_at_3 value: 34.08 - type: mrr_at_5 value: 35.073 - type: ndcg_at_1 value: 28.638 - type: ndcg_at_10 value: 36.588 - type: ndcg_at_100 value: 41.152 - type: ndcg_at_1000 
value: 43.769999999999996 - type: ndcg_at_3 value: 32.632 - type: ndcg_at_5 value: 34.249 - type: precision_at_1 value: 28.638 - type: precision_at_10 value: 5.942 - type: precision_at_100 value: 0.9249999999999999 - type: precision_at_1000 value: 0.127 - type: precision_at_3 value: 14.582999999999998 - type: precision_at_5 value: 9.944 - type: recall_at_1 value: 24.309 - type: recall_at_10 value: 46.725 - type: recall_at_100 value: 67.11 - type: recall_at_1000 value: 85.91499999999999 - type: recall_at_3 value: 35.72 - type: recall_at_5 value: 39.854 - type: map_at_1 value: 22.997999999999998 - type: map_at_10 value: 30.564000000000004 - type: map_at_100 value: 32.06 - type: map_at_1000 value: 32.282 - type: map_at_3 value: 28.12 - type: map_at_5 value: 29.395 - type: mrr_at_1 value: 27.075 - type: mrr_at_10 value: 34.510999999999996 - type: mrr_at_100 value: 35.549 - type: mrr_at_1000 value: 35.616 - type: mrr_at_3 value: 32.444 - type: mrr_at_5 value: 33.589999999999996 - type: ndcg_at_1 value: 27.075 - type: ndcg_at_10 value: 35.582 - type: ndcg_at_100 value: 41.308 - type: ndcg_at_1000 value: 44.385999999999996 - type: ndcg_at_3 value: 31.467 - type: ndcg_at_5 value: 33.189 - type: precision_at_1 value: 27.075 - type: precision_at_10 value: 6.68 - type: precision_at_100 value: 1.427 - type: precision_at_1000 value: 0.231 - type: precision_at_3 value: 14.625 - type: precision_at_5 value: 10.356 - type: recall_at_1 value: 22.997999999999998 - type: recall_at_10 value: 45.196 - type: recall_at_100 value: 70.319 - type: recall_at_1000 value: 90.766 - type: recall_at_3 value: 33.487 - type: recall_at_5 value: 38.297 - type: map_at_1 value: 20.961 - type: map_at_10 value: 27.58 - type: map_at_100 value: 28.542 - type: map_at_1000 value: 28.644 - type: map_at_3 value: 25.541000000000004 - type: map_at_5 value: 26.589000000000002 - type: mrr_at_1 value: 22.551 - type: mrr_at_10 value: 29.298999999999996 - type: mrr_at_100 value: 30.17 - type: mrr_at_1000 value: 30.248 
- type: mrr_at_3 value: 27.542 - type: mrr_at_5 value: 28.392 - type: ndcg_at_1 value: 22.551 - type: ndcg_at_10 value: 31.55 - type: ndcg_at_100 value: 36.295 - type: ndcg_at_1000 value: 38.964 - type: ndcg_at_3 value: 27.663 - type: ndcg_at_5 value: 29.321 - type: precision_at_1 value: 22.551 - type: precision_at_10 value: 4.88 - type: precision_at_100 value: 0.7779999999999999 - type: precision_at_1000 value: 0.11199999999999999 - type: precision_at_3 value: 11.83 - type: precision_at_5 value: 8.17 - type: recall_at_1 value: 20.961 - type: recall_at_10 value: 42.07 - type: recall_at_100 value: 63.982000000000006 - type: recall_at_1000 value: 83.889 - type: recall_at_3 value: 31.445 - type: recall_at_5 value: 35.410000000000004 - task: type: Retrieval dataset: name: MTEB ClimateFEVER type: climate-fever config: default split: test revision: None metrics: - type: map_at_1 value: 11.314 - type: map_at_10 value: 18.983 - type: map_at_100 value: 20.851 - type: map_at_1000 value: 21.066 - type: map_at_3 value: 16.014 - type: map_at_5 value: 17.569000000000003 - type: mrr_at_1 value: 25.277 - type: mrr_at_10 value: 36.657000000000004 - type: mrr_at_100 value: 37.646 - type: mrr_at_1000 value: 37.686 - type: mrr_at_3 value: 33.17 - type: mrr_at_5 value: 35.232 - type: ndcg_at_1 value: 25.277 - type: ndcg_at_10 value: 27.011000000000003 - type: ndcg_at_100 value: 34.418 - type: ndcg_at_1000 value: 38.089 - type: ndcg_at_3 value: 22.026 - type: ndcg_at_5 value: 23.866 - type: precision_at_1 value: 25.277 - type: precision_at_10 value: 8.397 - type: precision_at_100 value: 1.6320000000000001 - type: precision_at_1000 value: 0.22999999999999998 - type: precision_at_3 value: 16.156000000000002 - type: precision_at_5 value: 12.612000000000002 - type: recall_at_1 value: 11.314 - type: recall_at_10 value: 32.474 - type: recall_at_100 value: 57.926 - type: recall_at_1000 value: 78.387 - type: recall_at_3 value: 20.415 - type: recall_at_5 value: 25.407999999999998 - task: type: 
Retrieval dataset: name: MTEB DBPedia type: dbpedia-entity config: default split: test revision: None metrics: - type: map_at_1 value: 8.835999999999999 - type: map_at_10 value: 19.73 - type: map_at_100 value: 28.011000000000003 - type: map_at_1000 value: 29.519000000000002 - type: map_at_3 value: 14.249 - type: map_at_5 value: 16.472 - type: mrr_at_1 value: 67.0 - type: mrr_at_10 value: 74.632 - type: mrr_at_100 value: 74.97200000000001 - type: mrr_at_1000 value: 74.97500000000001 - type: mrr_at_3 value: 72.958 - type: mrr_at_5 value: 73.908 - type: ndcg_at_1 value: 55.875 - type: ndcg_at_10 value: 42.071999999999996 - type: ndcg_at_100 value: 46.091 - type: ndcg_at_1000 value: 52.737 - type: ndcg_at_3 value: 47.079 - type: ndcg_at_5 value: 43.788 - type: precision_at_1 value: 67.0 - type: precision_at_10 value: 33.45 - type: precision_at_100 value: 10.633 - type: precision_at_1000 value: 2.067 - type: precision_at_3 value: 49.583 - type: precision_at_5 value: 41.25 - type: recall_at_1 value: 8.835999999999999 - type: recall_at_10 value: 24.872 - type: recall_at_100 value: 51.427 - type: recall_at_1000 value: 72.17099999999999 - type: recall_at_3 value: 15.631999999999998 - type: recall_at_5 value: 18.956 - task: type: Classification dataset: name: MTEB EmotionClassification type: mteb/emotion config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 48.80500000000001 - type: f1 value: 43.91955883597831 - task: type: Retrieval dataset: name: MTEB FEVER type: fever config: default split: test revision: None metrics: - type: map_at_1 value: 61.480999999999995 - type: map_at_10 value: 72.162 - type: map_at_100 value: 72.487 - type: map_at_1000 value: 72.504 - type: map_at_3 value: 70.354 - type: map_at_5 value: 71.509 - type: mrr_at_1 value: 66.262 - type: mrr_at_10 value: 76.605 - type: mrr_at_100 value: 76.833 - type: mrr_at_1000 value: 76.839 - type: mrr_at_3 value: 74.977 - type: mrr_at_5 value: 76.06 - type: 
ndcg_at_1 value: 66.262 - type: ndcg_at_10 value: 77.323 - type: ndcg_at_100 value: 78.685 - type: ndcg_at_1000 value: 79.032 - type: ndcg_at_3 value: 74.015 - type: ndcg_at_5 value: 75.916 - type: precision_at_1 value: 66.262 - type: precision_at_10 value: 9.757 - type: precision_at_100 value: 1.059 - type: precision_at_1000 value: 0.11100000000000002 - type: precision_at_3 value: 29.032999999999998 - type: precision_at_5 value: 18.5 - type: recall_at_1 value: 61.480999999999995 - type: recall_at_10 value: 88.878 - type: recall_at_100 value: 94.719 - type: recall_at_1000 value: 97.066 - type: recall_at_3 value: 79.95100000000001 - type: recall_at_5 value: 84.691 - task: type: Retrieval dataset: name: MTEB FiQA2018 type: fiqa config: default split: test revision: None metrics: - type: map_at_1 value: 19.925 - type: map_at_10 value: 31.621 - type: map_at_100 value: 33.282000000000004 - type: map_at_1000 value: 33.455 - type: map_at_3 value: 27.504 - type: map_at_5 value: 29.921999999999997 - type: mrr_at_1 value: 39.660000000000004 - type: mrr_at_10 value: 47.366 - type: mrr_at_100 value: 48.179 - type: mrr_at_1000 value: 48.219 - type: mrr_at_3 value: 45.062000000000005 - type: mrr_at_5 value: 46.404 - type: ndcg_at_1 value: 39.660000000000004 - type: ndcg_at_10 value: 39.019 - type: ndcg_at_100 value: 45.286 - type: ndcg_at_1000 value: 48.370000000000005 - type: ndcg_at_3 value: 35.421 - type: ndcg_at_5 value: 36.767 - type: precision_at_1 value: 39.660000000000004 - type: precision_at_10 value: 10.494 - type: precision_at_100 value: 1.7069999999999999 - type: precision_at_1000 value: 0.22599999999999998 - type: precision_at_3 value: 23.200000000000003 - type: precision_at_5 value: 17.253 - type: recall_at_1 value: 19.925 - type: recall_at_10 value: 45.48 - type: recall_at_100 value: 68.585 - type: recall_at_1000 value: 87.128 - type: recall_at_3 value: 31.913000000000004 - type: recall_at_5 value: 38.107 - task: type: Retrieval dataset: name: MTEB HotpotQA type: 
hotpotqa config: default split: test revision: None metrics: - type: map_at_1 value: 37.961 - type: map_at_10 value: 55.010000000000005 - type: map_at_100 value: 55.896 - type: map_at_1000 value: 55.962 - type: map_at_3 value: 52.03 - type: map_at_5 value: 53.866 - type: mrr_at_1 value: 75.922 - type: mrr_at_10 value: 81.655 - type: mrr_at_100 value: 81.879 - type: mrr_at_1000 value: 81.889 - type: mrr_at_3 value: 80.657 - type: mrr_at_5 value: 81.291 - type: ndcg_at_1 value: 75.922 - type: ndcg_at_10 value: 64.119 - type: ndcg_at_100 value: 67.25 - type: ndcg_at_1000 value: 68.55499999999999 - type: ndcg_at_3 value: 59.792 - type: ndcg_at_5 value: 62.165000000000006 - type: precision_at_1 value: 75.922 - type: precision_at_10 value: 13.155 - type: precision_at_100 value: 1.5599999999999998 - type: precision_at_1000 value: 0.173 - type: precision_at_3 value: 37.461 - type: precision_at_5 value: 24.351 - type: recall_at_1 value: 37.961 - type: recall_at_10 value: 65.77300000000001 - type: recall_at_100 value: 78.015 - type: recall_at_1000 value: 86.685 - type: recall_at_3 value: 56.192 - type: recall_at_5 value: 60.878 - task: type: Classification dataset: name: MTEB ImdbClassification type: mteb/imdb config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 83.7804 - type: ap value: 78.89508987851809 - type: f1 value: 83.72392373438922 - task: type: Retrieval dataset: name: MTEB MSMARCO type: msmarco config: default split: dev revision: None metrics: - type: map_at_1 value: 23.807000000000002 - type: map_at_10 value: 36.411 - type: map_at_100 value: 37.574000000000005 - type: map_at_1000 value: 37.618 - type: map_at_3 value: 32.653 - type: map_at_5 value: 34.902 - type: mrr_at_1 value: 24.499000000000002 - type: mrr_at_10 value: 37.045 - type: mrr_at_100 value: 38.135999999999996 - type: mrr_at_1000 value: 38.175 - type: mrr_at_3 value: 33.326 - type: mrr_at_5 value: 35.561 - type: ndcg_at_1 value: 
24.512999999999998 - type: ndcg_at_10 value: 43.328 - type: ndcg_at_100 value: 48.779 - type: ndcg_at_1000 value: 49.897999999999996 - type: ndcg_at_3 value: 35.713 - type: ndcg_at_5 value: 39.729 - type: precision_at_1 value: 24.512999999999998 - type: precision_at_10 value: 6.7379999999999995 - type: precision_at_100 value: 0.9450000000000001 - type: precision_at_1000 value: 0.104 - type: precision_at_3 value: 15.196000000000002 - type: precision_at_5 value: 11.158 - type: recall_at_1 value: 23.807000000000002 - type: recall_at_10 value: 64.488 - type: recall_at_100 value: 89.386 - type: recall_at_1000 value: 97.968 - type: recall_at_3 value: 43.891000000000005 - type: recall_at_5 value: 53.535 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (en) type: mteb/mtop_domain config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 93.47013223894209 - type: f1 value: 93.15020887152107 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (en) type: mteb/mtop_intent config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 75.27131782945737 - type: f1 value: 58.45703758149779 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (en) type: mteb/amazon_massive_intent config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 72.76395427034298 - type: f1 value: 70.6084399610629 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (en) type: mteb/amazon_massive_scenario config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 76.69804976462676 - type: f1 value: 76.61599181962723 - task: type: Clustering dataset: name: MTEB MedrxivClusteringP2P type: mteb/medrxiv-clustering-p2p config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 
32.7253797676744 - task: type: Clustering dataset: name: MTEB MedrxivClusteringS2S type: mteb/medrxiv-clustering-s2s config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 30.547731924629424 - task: type: Reranking dataset: name: MTEB MindSmallReranking type: mteb/mind_small config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 31.286918745183772 - type: mrr value: 32.47449315230336 - task: type: Retrieval dataset: name: MTEB NFCorpus type: nfcorpus config: default split: test revision: None metrics: - type: map_at_1 value: 5.894 - type: map_at_10 value: 13.405000000000001 - type: map_at_100 value: 16.586000000000002 - type: map_at_1000 value: 17.919 - type: map_at_3 value: 10.066 - type: map_at_5 value: 11.679 - type: mrr_at_1 value: 45.201 - type: mrr_at_10 value: 54.018 - type: mrr_at_100 value: 54.581999999999994 - type: mrr_at_1000 value: 54.623 - type: mrr_at_3 value: 51.6 - type: mrr_at_5 value: 53.473000000000006 - type: ndcg_at_1 value: 43.189 - type: ndcg_at_10 value: 35.306 - type: ndcg_at_100 value: 31.505 - type: ndcg_at_1000 value: 39.991 - type: ndcg_at_3 value: 41.108 - type: ndcg_at_5 value: 39.039 - type: precision_at_1 value: 44.582 - type: precision_at_10 value: 26.161 - type: precision_at_100 value: 7.867 - type: precision_at_1000 value: 2.043 - type: precision_at_3 value: 39.112 - type: precision_at_5 value: 34.18 - type: recall_at_1 value: 5.894 - type: recall_at_10 value: 16.88 - type: recall_at_100 value: 30.671 - type: recall_at_1000 value: 61.42999999999999 - type: recall_at_3 value: 11.022 - type: recall_at_5 value: 13.697999999999999 - task: type: Retrieval dataset: name: MTEB NQ type: nq config: default split: test revision: None metrics: - type: map_at_1 value: 38.440999999999995 - type: map_at_10 value: 54.187 - type: map_at_100 value: 55.022000000000006 - type: map_at_1000 value: 55.044000000000004 - type: map_at_3 value: 
50.174 - type: map_at_5 value: 52.61 - type: mrr_at_1 value: 42.903000000000006 - type: mrr_at_10 value: 56.699 - type: mrr_at_100 value: 57.31 - type: mrr_at_1000 value: 57.325 - type: mrr_at_3 value: 53.63099999999999 - type: mrr_at_5 value: 55.596000000000004 - type: ndcg_at_1 value: 42.903000000000006 - type: ndcg_at_10 value: 61.434 - type: ndcg_at_100 value: 64.852 - type: ndcg_at_1000 value: 65.36 - type: ndcg_at_3 value: 54.193000000000005 - type: ndcg_at_5 value: 58.15 - type: precision_at_1 value: 42.903000000000006 - type: precision_at_10 value: 9.623 - type: precision_at_100 value: 1.1560000000000001 - type: precision_at_1000 value: 0.12 - type: precision_at_3 value: 24.034 - type: precision_at_5 value: 16.779 - type: recall_at_1 value: 38.440999999999995 - type: recall_at_10 value: 80.72399999999999 - type: recall_at_100 value: 95.329 - type: recall_at_1000 value: 99.059 - type: recall_at_3 value: 62.343 - type: recall_at_5 value: 71.304 - task: type: Retrieval dataset: name: MTEB QuoraRetrieval type: quora config: default split: test revision: None metrics: - type: map_at_1 value: 70.85000000000001 - type: map_at_10 value: 84.54 - type: map_at_100 value: 85.148 - type: map_at_1000 value: 85.168 - type: map_at_3 value: 81.631 - type: map_at_5 value: 83.45700000000001 - type: mrr_at_1 value: 81.58 - type: mrr_at_10 value: 87.732 - type: mrr_at_100 value: 87.825 - type: mrr_at_1000 value: 87.82600000000001 - type: mrr_at_3 value: 86.783 - type: mrr_at_5 value: 87.437 - type: ndcg_at_1 value: 81.56 - type: ndcg_at_10 value: 88.32900000000001 - type: ndcg_at_100 value: 89.513 - type: ndcg_at_1000 value: 89.63799999999999 - type: ndcg_at_3 value: 85.51100000000001 - type: ndcg_at_5 value: 87.062 - type: precision_at_1 value: 81.56 - type: precision_at_10 value: 13.349 - type: precision_at_100 value: 1.518 - type: precision_at_1000 value: 0.156 - type: precision_at_3 value: 37.293 - type: precision_at_5 value: 24.502 - type: recall_at_1 value: 
70.85000000000001 - type: recall_at_10 value: 95.351 - type: recall_at_100 value: 99.405 - type: recall_at_1000 value: 99.958 - type: recall_at_3 value: 87.184 - type: recall_at_5 value: 91.625 - task: type: Clustering dataset: name: MTEB RedditClustering type: mteb/reddit-clustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 56.81818576893834 - task: type: Clustering dataset: name: MTEB RedditClusteringP2P type: mteb/reddit-clustering-p2p config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 61.57033658868022 - task: type: Retrieval dataset: name: MTEB SCIDOCS type: scidocs config: default split: test revision: None metrics: - type: map_at_1 value: 4.468 - type: map_at_10 value: 11.109 - type: map_at_100 value: 12.921 - type: map_at_1000 value: 13.187999999999999 - type: map_at_3 value: 8.094999999999999 - type: map_at_5 value: 9.664 - type: mrr_at_1 value: 22.1 - type: mrr_at_10 value: 32.482 - type: mrr_at_100 value: 33.558 - type: mrr_at_1000 value: 33.623999999999995 - type: mrr_at_3 value: 29.25 - type: mrr_at_5 value: 31.080000000000002 - type: ndcg_at_1 value: 22.1 - type: ndcg_at_10 value: 18.695999999999998 - type: ndcg_at_100 value: 25.749 - type: ndcg_at_1000 value: 30.711 - type: ndcg_at_3 value: 17.974 - type: ndcg_at_5 value: 15.684000000000001 - type: precision_at_1 value: 22.1 - type: precision_at_10 value: 9.56 - type: precision_at_100 value: 1.966 - type: precision_at_1000 value: 0.316 - type: precision_at_3 value: 16.667 - type: precision_at_5 value: 13.68 - type: recall_at_1 value: 4.468 - type: recall_at_10 value: 19.373 - type: recall_at_100 value: 39.853 - type: recall_at_1000 value: 64.118 - type: recall_at_3 value: 10.133000000000001 - type: recall_at_5 value: 13.877999999999998 - task: type: STS dataset: name: MTEB SICK-R type: mteb/sickr-sts config: default split: test revision: 
a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_pearson value: 80.11452150923512 - type: cos_sim_spearman value: 77.3007421887329 - type: euclidean_pearson value: 78.2493681078981 - type: euclidean_spearman value: 77.3007432741821 - type: manhattan_pearson value: 78.19716818242554 - type: manhattan_spearman value: 77.26439033199102 - task: type: STS dataset: name: MTEB STS12 type: mteb/sts12-sts config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 82.70293570563516 - type: cos_sim_spearman value: 77.97040896962338 - type: euclidean_pearson value: 77.98827330337348 - type: euclidean_spearman value: 77.9704358930525 - type: manhattan_pearson value: 78.06991702207395 - type: manhattan_spearman value: 78.03857843100195 - task: type: STS dataset: name: MTEB STS13 type: mteb/sts13-sts config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 77.81236960157503 - type: cos_sim_spearman value: 79.38801416063187 - type: euclidean_pearson value: 79.35003045476847 - type: euclidean_spearman value: 79.38797289536578 - type: manhattan_pearson value: 79.33155563344724 - type: manhattan_spearman value: 79.3858955436803 - task: type: STS dataset: name: MTEB STS14 type: mteb/sts14-sts config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 77.35604880089507 - type: cos_sim_spearman value: 78.17327332594571 - type: euclidean_pearson value: 77.30302038209295 - type: euclidean_spearman value: 78.17327332594571 - type: manhattan_pearson value: 77.31323781935417 - type: manhattan_spearman value: 78.20141256686921 - task: type: STS dataset: name: MTEB STS15 type: mteb/sts15-sts config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 84.29348597583 - type: cos_sim_spearman value: 85.50877410088334 - type: euclidean_pearson 
value: 85.22367284169081 - type: euclidean_spearman value: 85.50877410088334 - type: manhattan_pearson value: 85.17979979737612 - type: manhattan_spearman value: 85.46459282596254 - task: type: STS dataset: name: MTEB STS16 type: mteb/sts16-sts config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 83.16190794761513 - type: cos_sim_spearman value: 84.94610605287254 - type: euclidean_pearson value: 83.95587174131369 - type: euclidean_spearman value: 84.94610605287254 - type: manhattan_pearson value: 83.99025745366798 - type: manhattan_spearman value: 84.98123107148953 - task: type: STS dataset: name: MTEB STS17 (en-en) type: mteb/sts17-crosslingual-sts config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 85.3047190687711 - type: cos_sim_spearman value: 85.86642469958113 - type: euclidean_pearson value: 86.74377658528041 - type: euclidean_spearman value: 85.86642469958113 - type: manhattan_pearson value: 86.56967885987439 - type: manhattan_spearman value: 85.63613272583275 - task: type: STS dataset: name: MTEB STS22 (en) type: mteb/sts22-crosslingual-sts config: en split: test revision: eea2b4fe26a775864c896887d910b76a8098ad3f metrics: - type: cos_sim_pearson value: 64.8298932792099 - type: cos_sim_spearman value: 64.27626667878636 - type: euclidean_pearson value: 66.01603861201576 - type: euclidean_spearman value: 64.27626667878636 - type: manhattan_pearson value: 66.31232809448106 - type: manhattan_spearman value: 64.46190921631559 - task: type: STS dataset: name: MTEB STSBenchmark type: mteb/stsbenchmark-sts config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 82.73696291316243 - type: cos_sim_spearman value: 83.41508337893958 - type: euclidean_pearson value: 82.8827053024064 - type: euclidean_spearman value: 83.41508337893958 - type: manhattan_pearson value: 
82.85613329045803 - type: manhattan_spearman value: 83.40522047443645 - task: type: Reranking dataset: name: MTEB SciDocsRR type: mteb/scidocs-reranking config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 75.51490079179645 - type: mrr value: 92.6809655486126 - task: type: Retrieval dataset: name: MTEB SciFact type: scifact config: default split: test revision: None metrics: - type: map_at_1 value: 58.594 - type: map_at_10 value: 67.208 - type: map_at_100 value: 67.702 - type: map_at_1000 value: 67.73 - type: map_at_3 value: 64.815 - type: map_at_5 value: 65.946 - type: mrr_at_1 value: 61.667 - type: mrr_at_10 value: 68.52000000000001 - type: mrr_at_100 value: 68.888 - type: mrr_at_1000 value: 68.911 - type: mrr_at_3 value: 66.833 - type: mrr_at_5 value: 67.617 - type: ndcg_at_1 value: 61.667 - type: ndcg_at_10 value: 71.511 - type: ndcg_at_100 value: 73.765 - type: ndcg_at_1000 value: 74.40299999999999 - type: ndcg_at_3 value: 67.411 - type: ndcg_at_5 value: 68.88 - type: precision_at_1 value: 61.667 - type: precision_at_10 value: 9.433 - type: precision_at_100 value: 1.0670000000000002 - type: precision_at_1000 value: 0.11199999999999999 - type: precision_at_3 value: 26.222 - type: precision_at_5 value: 16.866999999999997 - type: recall_at_1 value: 58.594 - type: recall_at_10 value: 83.439 - type: recall_at_100 value: 94.1 - type: recall_at_1000 value: 99.0 - type: recall_at_3 value: 71.922 - type: recall_at_5 value: 75.678 - task: type: PairClassification dataset: name: MTEB SprintDuplicateQuestions type: mteb/sprintduplicatequestions-pairclassification config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.7990099009901 - type: cos_sim_ap value: 94.8316184070519 - type: cos_sim_f1 value: 89.75265017667844 - type: cos_sim_precision value: 90.62181447502549 - type: cos_sim_recall value: 88.9 - type: dot_accuracy value: 99.7990099009901 - type: 
dot_ap value: 94.831611518794 - type: dot_f1 value: 89.75265017667844 - type: dot_precision value: 90.62181447502549 - type: dot_recall value: 88.9 - type: euclidean_accuracy value: 99.7990099009901 - type: euclidean_ap value: 94.83161335144017 - type: euclidean_f1 value: 89.75265017667844 - type: euclidean_precision value: 90.62181447502549 - type: euclidean_recall value: 88.9 - type: manhattan_accuracy value: 99.8 - type: manhattan_ap value: 94.84210829841739 - type: manhattan_f1 value: 89.60905349794238 - type: manhattan_precision value: 92.26694915254238 - type: manhattan_recall value: 87.1 - type: max_accuracy value: 99.8 - type: max_ap value: 94.84210829841739 - type: max_f1 value: 89.75265017667844 - task: type: Clustering dataset: name: MTEB StackExchangeClustering type: mteb/stackexchange-clustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 63.18343792633894 - task: type: Clustering dataset: name: MTEB StackExchangeClusteringP2P type: mteb/stackexchange-clustering-p2p config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 33.50944549814364 - task: type: Reranking dataset: name: MTEB StackOverflowDupQuestions type: mteb/stackoverflowdupquestions-reranking config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 48.89100016028111 - type: mrr value: 49.607630931160344 - task: type: Summarization dataset: name: MTEB SummEval type: mteb/summeval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 30.628145384101522 - type: cos_sim_spearman value: 31.275306930726675 - type: dot_pearson value: 30.62814883550051 - type: dot_spearman value: 31.275306930726675 - task: type: Retrieval dataset: name: MTEB TRECCOVID type: trec-covid config: default split: test revision: None metrics: - type: map_at_1 value: 0.26 - type: 
map_at_10 value: 2.163 - type: map_at_100 value: 12.29 - type: map_at_1000 value: 29.221999999999998 - type: map_at_3 value: 0.729 - type: map_at_5 value: 1.161 - type: mrr_at_1 value: 96.0 - type: mrr_at_10 value: 98.0 - type: mrr_at_100 value: 98.0 - type: mrr_at_1000 value: 98.0 - type: mrr_at_3 value: 98.0 - type: mrr_at_5 value: 98.0 - type: ndcg_at_1 value: 89.0 - type: ndcg_at_10 value: 82.312 - type: ndcg_at_100 value: 61.971 - type: ndcg_at_1000 value: 54.065 - type: ndcg_at_3 value: 87.87700000000001 - type: ndcg_at_5 value: 85.475 - type: precision_at_1 value: 96.0 - type: precision_at_10 value: 87.4 - type: precision_at_100 value: 64.02 - type: precision_at_1000 value: 24.093999999999998 - type: precision_at_3 value: 94.0 - type: precision_at_5 value: 90.8 - type: recall_at_1 value: 0.26 - type: recall_at_10 value: 2.302 - type: recall_at_100 value: 15.148 - type: recall_at_1000 value: 50.55 - type: recall_at_3 value: 0.744 - type: recall_at_5 value: 1.198 - task: type: Retrieval dataset: name: MTEB Touche2020 type: webis-touche2020 config: default split: test revision: None metrics: - type: map_at_1 value: 2.217 - type: map_at_10 value: 11.378 - type: map_at_100 value: 18.022 - type: map_at_1000 value: 19.544 - type: map_at_3 value: 6.079 - type: map_at_5 value: 8.559 - type: mrr_at_1 value: 28.571 - type: mrr_at_10 value: 48.423 - type: mrr_at_100 value: 49.028 - type: mrr_at_1000 value: 49.028 - type: mrr_at_3 value: 44.897999999999996 - type: mrr_at_5 value: 46.531 - type: ndcg_at_1 value: 25.509999999999998 - type: ndcg_at_10 value: 27.860000000000003 - type: ndcg_at_100 value: 39.34 - type: ndcg_at_1000 value: 50.21 - type: ndcg_at_3 value: 30.968 - type: ndcg_at_5 value: 29.541 - type: precision_at_1 value: 28.571 - type: precision_at_10 value: 25.918000000000003 - type: precision_at_100 value: 8.184 - type: precision_at_1000 value: 1.545 - type: precision_at_3 value: 35.374 - type: precision_at_5 value: 31.837 - type: recall_at_1 value: 2.217 - 
type: recall_at_10 value: 18.511 - type: recall_at_100 value: 50.178 - type: recall_at_1000 value: 83.07600000000001 - type: recall_at_3 value: 7.811999999999999 - type: recall_at_5 value: 11.684 - task: type: Classification dataset: name: MTEB ToxicConversationsClassification type: mteb/toxic_conversations_50k config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 71.386 - type: ap value: 14.58573366644018 - type: f1 value: 55.0170316975105 - task: type: Classification dataset: name: MTEB TweetSentimentExtractionClassification type: mteb/tweet_sentiment_extraction config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 60.868704018109796 - type: f1 value: 61.175908652496624 - task: type: Clustering dataset: name: MTEB TwentyNewsgroupsClustering type: mteb/twentynewsgroups-clustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 48.72082824812323 - task: type: PairClassification dataset: name: MTEB TwitterSemEval2015 type: mteb/twittersemeval2015-pairclassification config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 85.43839780652083 - type: cos_sim_ap value: 72.55258980537292 - type: cos_sim_f1 value: 66.4145419055752 - type: cos_sim_precision value: 61.765373269798054 - type: cos_sim_recall value: 71.82058047493403 - type: dot_accuracy value: 85.43839780652083 - type: dot_ap value: 72.55256370197756 - type: dot_f1 value: 66.4145419055752 - type: dot_precision value: 61.765373269798054 - type: dot_recall value: 71.82058047493403 - type: euclidean_accuracy value: 85.43839780652083 - type: euclidean_ap value: 72.55259011957311 - type: euclidean_f1 value: 66.4145419055752 - type: euclidean_precision value: 61.765373269798054 - type: euclidean_recall value: 71.82058047493403 - type: manhattan_accuracy value: 
85.40263455921799 - type: manhattan_ap value: 72.47856062032 - type: manhattan_f1 value: 66.39413249969942 - type: manhattan_precision value: 60.989617848464775 - type: manhattan_recall value: 72.84960422163589 - type: max_accuracy value: 85.43839780652083 - type: max_ap value: 72.55259011957311 - type: max_f1 value: 66.4145419055752 - task: type: PairClassification dataset: name: MTEB TwitterURLCorpus type: mteb/twitterurlcorpus-pairclassification config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 89.24981565568363 - type: cos_sim_ap value: 86.38437585690401 - type: cos_sim_f1 value: 78.79039565086076 - type: cos_sim_precision value: 77.29629629629629 - type: cos_sim_recall value: 80.34339390206344 - type: dot_accuracy value: 89.24981565568363 - type: dot_ap value: 86.38437587564587 - type: dot_f1 value: 78.79039565086076 - type: dot_precision value: 77.29629629629629 - type: dot_recall value: 80.34339390206344 - type: euclidean_accuracy value: 89.24981565568363 - type: euclidean_ap value: 86.38437691024106 - type: euclidean_f1 value: 78.79039565086076 - type: euclidean_precision value: 77.29629629629629 - type: euclidean_recall value: 80.34339390206344 - type: manhattan_accuracy value: 89.25563705514806 - type: manhattan_ap value: 86.35729146774388 - type: manhattan_f1 value: 78.7238059278837 - type: manhattan_precision value: 77.23938653034007 - type: manhattan_recall value: 80.26639975361873 - type: max_accuracy value: 89.25563705514806 - type: max_ap value: 86.38437691024106 - type: max_f1 value: 78.79039565086076 --- # nomic-embed-text-v1-ablated: A Reproducible Long Context (8192) Text Embedder `nomic-embed-text-v1-ablated` is an 8192 context length text encoder. This is a checkpoint trained after modifying the training dataset to be different from the dataset used to train our [final model](https://huggingface.co/nomic-ai/nomic-embed-text-v1).
The purpose of releasing this checkpoint is to understand the impact that subsets of our training data had on model outcomes. This release is part of our commitment to open-source training artifacts from our Nomic Embed Text tech report, available [here](https://arxiv.org/pdf/2402.01613). If you want to use a model to extract embeddings, we suggest using [nomic-embed-text-v1](https://huggingface.co/nomic-ai/nomic-embed-text-v1). # Join the Nomic Community - Nomic: [https://nomic.ai](https://nomic.ai) - Discord: [https://discord.gg/myY5YDR8z8](https://discord.gg/myY5YDR8z8) - Twitter: [https://twitter.com/nomic_ai](https://twitter.com/nomic_ai)
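As a brief illustrative sketch (not part of the original card): nomic-embed models expect a task instruction prefix on every input text, following the nomic-embed-text-v1 usage. The prefix helper below is pure string handling; the actual encode call is shown commented out because it downloads the model weights.

```python
# Sketch: prepend the task prefixes nomic-embed models expect.
# Prefixes follow the nomic-embed-text-v1 card; the model call below is
# commented out since it requires downloading the weights.
TASK_PREFIXES = {
    "query": "search_query: ",
    "document": "search_document: ",
}

def with_prefix(texts, task="document"):
    """Prepend the task prefix that nomic-embed models expect."""
    return [TASK_PREFIXES[task] + t for t in texts]

docs = with_prefix(["Nomic Embed supports an 8192 token context."])
print(docs[0])  # → search_document: Nomic Embed supports an 8192 token context.

# from sentence_transformers import SentenceTransformer
# model = SentenceTransformer("nomic-ai/nomic-embed-text-v1", trust_remote_code=True)
# embeddings = model.encode(docs)
```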
[ "BIOSSES", "SCIFACT" ]
qwp4w3hyb/Phi-3-mini-4k-instruct-iMat-GGUF
qwp4w3hyb
text-generation
[ "transformers", "gguf", "phi3", "text-generation", "nlp", "code", "microsoft", "phi", "phi-3", "imatrix", "importance matrix", "conversational", "custom_code", "en", "base_model:microsoft/Phi-3-mini-4k-instruct", "base_model:quantized:microsoft/Phi-3-mini-4k-instruct", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2024-04-23T21:46:03Z
2024-05-22T20:18:08+00:00
335
1
--- base_model: microsoft/Phi-3-mini-4k-instruct language: - en license: mit license_link: LICENSE pipeline_tag: text-generation tags: - nlp - code - microsoft - phi - phi-3 - gguf - imatrix - importance matrix --- # Quant Infos ## Includes latest bpe tokenizer fixes 🎉 - Updated for latest bpe pre-tokenizer fixes https://github.com/ggerganov/llama.cpp/pull/6920 - quants done with an importance matrix for reduced quantization loss - K & IQ quants in basically all variants from Q6_K down to IQ1_S - fixed end token for instruct mode (<|eot_id|>[128009]) - Quantized with [llama.cpp](https://github.com/ggerganov/llama.cpp) commit [f4ab2a41476600a98067a9474ea8f9e6db41bcfa](https://github.com/ggerganov/llama.cpp/commit/f4ab2a41476600a98067a9474ea8f9e6db41bcfa) (master from 2024-04-29) - Imatrix generated with [this](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384) dataset. ``` ./imatrix -c 512 -m $model_name-f16.gguf -f $llama_cpp_path/groups_merged.txt -o $out_path/imat-f16-gmerged.dat ``` # Original Model Card ## Model Summary The Phi-3-Mini-4K-Instruct is a 3.8B parameter, lightweight, state-of-the-art open model trained with the Phi-3 datasets, which include both synthetic data and filtered publicly available website data, with a focus on high-quality and reasoning-dense properties. The model belongs to the Phi-3 family, Mini version, in two variants, [4K](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) and [128K](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct), which is the context length (in tokens) that each can support. The model has undergone a post-training process that incorporates both supervised fine-tuning and direct preference optimization for instruction following and safety measures.
When assessed against benchmarks testing common sense, language understanding, math, code, long context and logical reasoning, Phi-3 Mini-4K-Instruct demonstrated robust, state-of-the-art performance among models with less than 13 billion parameters. Resources and Technical Documentation: + [Phi-3 Microsoft Blog](https://aka.ms/phi3blog-april) + [Phi-3 Technical Report](https://aka.ms/phi3-tech-report) + [Phi-3 on Azure AI Studio](https://aka.ms/phi3-azure-ai) + Phi-3 GGUF: [4K](https://aka.ms/Phi3-mini-4k-instruct-gguf) + Phi-3 ONNX: [4K](https://aka.ms/Phi3-mini-4k-instruct-onnx) ## Intended Uses **Primary use cases** The model is intended for commercial and research use in English. It is suited for applications which require: 1) Memory/compute constrained environments 2) Latency bound scenarios 3) Strong reasoning (especially code, math and logic) Our model is designed to accelerate research on language and multimodal models, for use as a building block for generative AI powered features. **Use case considerations** Our models are not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fairness before using within a specific downstream use case, particularly for high risk scenarios. Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case. Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under. ## How to Use Phi-3 Mini-4K-Instruct has been integrated in the development version (4.40.0) of `transformers`.
Until the official version is released through `pip`, ensure that you are doing one of the following: * When loading the model, ensure that `trust_remote_code=True` is passed as an argument of the `from_pretrained()` function. * Update your local `transformers` to the development version: `pip uninstall -y transformers && pip install git+https://github.com/huggingface/transformers`. The previous command is an alternative to cloning and installing from the source. The current `transformers` version can be verified with: `pip list | grep transformers`. Phi-3 Mini-4K-Instruct is also available in [HuggingChat](https://aka.ms/try-phi3-hf-chat). ### Chat Format Given the nature of the training data, the Phi-3 Mini-4K-Instruct model is best suited for prompts using the chat format as follows. You can provide the prompt as a question with a generic template as follows: ```markdown <|user|>\nQuestion <|end|>\n<|assistant|> ``` For example: ```markdown <|system|> You are a helpful AI assistant.<|end|> <|user|> How to explain Internet for a medieval knight?<|end|> <|assistant|> ``` where the model generates the text after `<|assistant|>`. In case of a few-shot prompt, the prompt can be formatted as follows: ```markdown <|system|> You are a helpful AI assistant.<|end|> <|user|> I am going to Paris, what should I see?<|end|> <|assistant|> Paris, the capital of France, is known for its stunning architecture, art museums, historical landmarks, and romantic atmosphere. Here are some of the top attractions to see in Paris:\n\n1. The Eiffel Tower: The iconic Eiffel Tower is one of the most recognizable landmarks in the world and offers breathtaking views of the city.\n2. The Louvre Museum: The Louvre is one of the world's largest and most famous museums, housing an impressive collection of art and artifacts, including the Mona Lisa.\n3.
Notre-Dame Cathedral: This beautiful cathedral is one of the most famous landmarks in Paris and is known for its Gothic architecture and stunning stained glass windows.\n\nThese are just a few of the many attractions that Paris has to offer. With so much to see and do, it's no wonder that Paris is one of the most popular tourist destinations in the world."<|end|> <|user|> What is so great about #1?<|end|> <|assistant|> ``` ### Sample inference code This code snippet shows how to quickly get started with running the model on a GPU: ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline torch.random.manual_seed(0) model = AutoModelForCausalLM.from_pretrained( "microsoft/Phi-3-mini-4k-instruct", device_map="cuda", torch_dtype="auto", trust_remote_code=True, ) tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-4k-instruct") messages = [ {"role": "system", "content": "You are a helpful digital assistant. Please provide safe, ethical and accurate information to the user."}, {"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"}, {"role": "assistant", "content": "Sure! Here are some ways to eat bananas and dragonfruits together: 1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey. 2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey."}, {"role": "user", "content": "What about solving a 2x + 3 = 7 equation?"}, ] pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, ) generation_args = { "max_new_tokens": 500, "return_full_text": False, "temperature": 0.0, "do_sample": False, } output = pipe(messages, **generation_args) print(output[0]['generated_text']) ``` ## Responsible AI Considerations Like other language models, the Phi series models can potentially behave in ways that are unfair, unreliable, or offensive.
Some of the limiting behaviors to be aware of include: + Quality of Service: The Phi models are trained primarily on English text. Languages other than English will experience worse performance. English language varieties with less representation in the training data might experience worse performance than standard American English. + Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases. + Inappropriate or Offensive Content: These models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the use case. + Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated. + Limited Scope for Code: The majority of Phi-3 training data is based on Python and uses common packages such as `typing`, `math`, `random`, `collections`, `datetime`, and `itertools`. If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses. Developers should apply responsible AI best practices and are responsible for ensuring that a specific use case complies with relevant laws and regulations (e.g. privacy, trade, etc.). Important areas for consideration include: + Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.)
without further assessments and additional debiasing techniques. + High-Risk Scenarios: Developers should assess the suitability of using models in high-risk scenarios where unfair, unreliable, or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context. + Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG). + Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case. + Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations. ## Training ### Model * Architecture: Phi-3 Mini-4K-Instruct has 3.8B parameters and is a dense decoder-only Transformer model. The model is fine-tuned with supervised fine-tuning (SFT) and Direct Preference Optimization (DPO) to ensure alignment with human preferences and safety guidelines. * Inputs: Text. It is best suited for prompts using chat format. * Context length: 4K tokens * GPUs: 512 H100-80G * Training time: 7 days * Training data: 3.3T tokens * Outputs: Generated text in response to the input * Dates: Our models were trained between February and April 2024 * Status: This is a static model trained on an offline dataset with a cutoff date of October 2023. Future versions of the tuned models may be released as we improve models.
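Since the model is best suited to chat-format input, it may help to see the template from the Chat Format section assembled programmatically. The sketch below is illustrative only — in practice, passing a list of role/content messages to the pipeline (or calling the tokenizer's chat template) produces this format for you:

```python
def build_chat_prompt(messages):
    """Assemble a Phi-3-style chat prompt from role/content messages.

    Illustrative sketch of the format only; the tokenizer's chat template
    handles this automatically when messages are passed to the pipeline.
    """
    parts = [f"<|{m['role']}|>\n{m['content']}<|end|>\n" for m in messages]
    parts.append("<|assistant|>\n")  # generation continues from this tag
    return "".join(parts)

prompt = build_chat_prompt([
    {"role": "system", "content": "You are a helpful AI assistant."},
    {"role": "user", "content": "How to explain Internet for a medieval knight?"},
])
print(prompt)
```

The resulting string matches the layout of the Chat Format example above, with the `<|assistant|>` tag left open for the model to complete.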
### Datasets Our training data includes a wide variety of sources, totaling 3.3 trillion tokens, and is a combination of 1) Publicly available documents filtered rigorously for quality, selected high-quality educational data, and code; 2) Newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (science, daily activities, theory of mind, etc.); 3) High quality chat format supervised data covering various topics to reflect human preferences on different aspects such as instruction-following, truthfulness, honesty, and helpfulness. ### Fine-tuning A basic example of multi-GPU supervised fine-tuning (SFT) with TRL and Accelerate modules is provided [here](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/resolve/main/sample_finetune.py). ## Benchmarks We report the results for Phi-3-Mini-4K-Instruct on standard open-source benchmarks measuring the model's reasoning ability (both common sense reasoning and logical reasoning). We compare to Phi-2, Mistral-7b-v0.1, Mixtral-8x7b, Gemma 7B, Llama-3-8B-Instruct, and GPT-3.5. All the reported numbers are produced with the exact same pipeline to ensure that the numbers are comparable. These numbers might differ from other published numbers due to slightly different choices in the evaluation. As is now standard, we use few-shot prompts to evaluate the models, at temperature 0. The prompts and number of shots are part of a Microsoft internal tool to evaluate language models, and in particular we did no optimization to the pipeline for Phi-3. More specifically, we do not change prompts, pick different few-shot examples, change prompt format, or do any other form of optimization for the model. The number of k-shot examples is listed per-benchmark.
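The exact prompts and shot selections live in that internal tool and are not reproduced here. Purely to illustrate the mechanics of few-shot evaluation, a generic k-shot prompt could be assembled along these lines — the `Q:`/`A:` layout is a hypothetical format for illustration, not the one behind the reported numbers:

```python
def build_kshot_prompt(exemplars, question, k):
    """Build a generic k-shot prompt from (question, answer) exemplar pairs.

    Hypothetical layout for illustration only; the benchmark results below
    were produced with Microsoft's internal prompt formats, not this one.
    """
    shots = [f"Q: {q}\nA: {a}" for q, a in exemplars[:k]]
    shots.append(f"Q: {question}\nA:")  # the model completes the final answer
    return "\n\n".join(shots)

demo = build_kshot_prompt(
    [("2 + 2?", "4"), ("Capital of France?", "Paris")],
    "5 * 3?",
    k=2,
)
print(demo)
```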
| | Phi-3-Mini-4K-In<br>3.8b | Phi-3-Small<br>7b (preview) | Phi-3-Medium<br>14b (preview) | Phi-2<br>2.7b | Mistral<br>7b | Gemma<br>7b | Llama-3-In<br>8b | Mixtral<br>8x7b | GPT-3.5<br>version 1106 | |---|---|---|---|---|---|---|---|---|---| | MMLU <br>5-Shot | 68.8 | 75.3 | 78.2 | 56.3 | 61.7 | 63.6 | 66.5 | 68.4 | 71.4 | | HellaSwag <br> 5-Shot | 76.7 | 78.7 | 83.2 | 53.6 | 58.5 | 49.8 | 71.1 | 70.4 | 78.8 | | ANLI <br> 7-Shot | 52.8 | 55.0 | 58.7 | 42.5 | 47.1 | 48.7 | 57.3 | 55.2 | 58.1 | | GSM-8K <br> 0-Shot; CoT | 82.5 | 86.4 | 90.8 | 61.1 | 46.4 | 59.8 | 77.4 | 64.7 | 78.1 | | MedQA <br> 2-Shot | 53.8 | 58.2 | 69.8 | 40.9 | 49.6 | 50.0 | 60.5 | 62.2 | 63.4 | | AGIEval <br> 0-Shot | 37.5 | 45.0 | 49.7 | 29.8 | 35.1 | 42.1 | 42.0 | 45.2 | 48.4 | | TriviaQA <br> 5-Shot | 64.0 | 59.1 | 73.3 | 45.2 | 72.3 | 75.2 | 67.7 | 82.2 | 85.8 | | Arc-C <br> 10-Shot | 84.9 | 90.7 | 91.9 | 75.9 | 78.6 | 78.3 | 82.8 | 87.3 | 87.4 | | Arc-E <br> 10-Shot | 94.6 | 97.1 | 98.0 | 88.5 | 90.6 | 91.4 | 93.4 | 95.6 | 96.3 | | PIQA <br> 5-Shot | 84.2 | 87.8 | 88.2 | 60.2 | 77.7 | 78.1 | 75.7 | 86.0 | 86.6 | | SociQA <br> 5-Shot | 76.6 | 79.0 | 79.4 | 68.3 | 74.6 | 65.5 | 73.9 | 75.9 | 68.3 | | BigBench-Hard <br> 0-Shot | 71.7 | 75.0 | 82.5 | 59.4 | 57.3 | 59.6 | 51.5 | 69.7 | 68.32 | | WinoGrande <br> 5-Shot | 70.8 | 82.5 | 81.2 | 54.7 | 54.2 | 55.6 | 65 | 62.0 | 68.8 | | OpenBookQA <br> 10-Shot | 83.2 | 88.4 | 86.6 | 73.6 | 79.8 | 78.6 | 82.6 | 85.8 | 86.0 | | BoolQ <br> 0-Shot | 77.6 | 82.9 | 86.5 | -- | 72.2 | 66.0 | 80.9 | 77.6 | 79.1 | | CommonSenseQA <br> 10-Shot | 80.2 | 80.3 | 82.6 | 69.3 | 72.6 | 76.2 | 79 | 78.1 | 79.6 | | TruthfulQA <br> 10-Shot | 65.0 | 68.1 | 74.8 | -- | 52.1 | 53.0 | 63.2 | 60.1 | 85.8 | | HumanEval <br> 0-Shot | 59.1 | 59.1 | 54.7 | 59.0 | 28.0 | 34.1 | 60.4 | 37.8 | 62.2 | | MBPP <br> 3-Shot | 53.8 | 71.4 | 73.7 | 60.6 | 50.8 | 51.5 | 67.7 | 60.2 | 77.8 | ## Software * [PyTorch](https://github.com/pytorch/pytorch) * 
[DeepSpeed](https://github.com/microsoft/DeepSpeed) * [Transformers](https://github.com/huggingface/transformers) * [Flash-Attention](https://github.com/HazyResearch/flash-attention) ## Hardware Note that by default, the Phi-3-mini model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types: * NVIDIA A100 * NVIDIA A6000 * NVIDIA H100 If you want to run the model on: * NVIDIA V100 or earlier generation GPUs: call `AutoModelForCausalLM.from_pretrained()` with `attn_implementation="eager"` * CPU: use the **GGUF** quantized models [4K](https://aka.ms/Phi3-mini-4k-instruct-gguf) * Optimized inference on GPU, CPU, and mobile: use the **ONNX** models [4K](https://aka.ms/Phi3-mini-4k-instruct-onnx) ## Cross Platform Support The ONNX Runtime ecosystem now supports Phi-3 Mini models across platforms and hardware. You can find the optimized Phi-3 Mini-4K-Instruct ONNX model [here](https://aka.ms/phi3-mini-4k-instruct-onnx). Optimized Phi-3 models are also published here in ONNX format, to run with ONNX Runtime on CPU and GPU across devices, including server platforms, Windows, Linux and Mac desktops, and mobile CPUs, with the precision best suited to each of these targets. DirectML support lets developers bring hardware acceleration to Windows devices at scale across AMD, Intel, and NVIDIA GPUs. Along with DirectML, ONNX Runtime provides cross-platform support for Phi-3 across a range of devices: CPU, GPU, and mobile. Here are some of the optimized configurations we have added: 1. ONNX models for int4 DML: Quantized to int4 via AWQ 2. ONNX model for fp16 CUDA 3. ONNX model for int4 CUDA: Quantized to int4 via RTN 4. ONNX model for int4 CPU and Mobile: Quantized to int4 via RTN ## License The model is licensed under the [MIT license](https://huggingface.co/microsoft/Phi-3-mini-4k/resolve/main/LICENSE). ## Trademarks This project may contain trademarks or logos for projects, products, or services.
Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties’ policies.
[ "MEDQA" ]
ntc-ai/SDXL-LoRA-slider.very-aesthetic
ntc-ai
text-to-image
[ "diffusers", "text-to-image", "stable-diffusion-xl", "lora", "template:sd-lora", "template:sdxl-lora", "sdxl-sliders", "ntcai.xyz-sliders", "concept", "en", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:mit", "region:us" ]
2024-01-04T05:04:01Z
2024-01-04T05:04:04+00:00
330
1
--- base_model: stabilityai/stable-diffusion-xl-base-1.0 language: - en license: mit tags: - text-to-image - stable-diffusion-xl - lora - template:sd-lora - template:sdxl-lora - sdxl-sliders - ntcai.xyz-sliders - concept - diffusers thumbnail: images/evaluate/very aesthetic.../very aesthetic_17_3.0.png widget: - text: very aesthetic output: url: images/very aesthetic_17_3.0.png - text: very aesthetic output: url: images/very aesthetic_19_3.0.png - text: very aesthetic output: url: images/very aesthetic_20_3.0.png - text: very aesthetic output: url: images/very aesthetic_21_3.0.png - text: very aesthetic output: url: images/very aesthetic_22_3.0.png inference: false instance_prompt: very aesthetic --- # ntcai.xyz slider - very aesthetic (SDXL LoRA) | Strength: -3 | Strength: 0 | Strength: 3 | | --- | --- | --- | | <img src="images/very aesthetic_17_-3.0.png" width=256 height=256 /> | <img src="images/very aesthetic_17_0.0.png" width=256 height=256 /> | <img src="images/very aesthetic_17_3.0.png" width=256 height=256 /> | | <img src="images/very aesthetic_19_-3.0.png" width=256 height=256 /> | <img src="images/very aesthetic_19_0.0.png" width=256 height=256 /> | <img src="images/very aesthetic_19_3.0.png" width=256 height=256 /> | | <img src="images/very aesthetic_20_-3.0.png" width=256 height=256 /> | <img src="images/very aesthetic_20_0.0.png" width=256 height=256 /> | <img src="images/very aesthetic_20_3.0.png" width=256 height=256 /> | ## Download Weights for this model are available in Safetensors format. 
## Trigger words You can apply this LoRA with trigger words for additional effect: ``` very aesthetic ``` ## Use in diffusers ```python from diffusers import StableDiffusionXLPipeline from diffusers import EulerAncestralDiscreteScheduler import torch pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors") pipe.to("cuda") pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config) # Load the LoRA pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.very-aesthetic', weight_name='very aesthetic.safetensors', adapter_name="very aesthetic") # Activate the LoRA pipe.set_adapters(["very aesthetic"], adapter_weights=[2.0]) prompt = "medieval rich kingpin sitting in a tavern, very aesthetic" negative_prompt = "nsfw" width = 512 height = 512 num_inference_steps = 10 guidance_scale = 2 image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0] image.save('result.png') ``` ## Support the Patreon If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI). By joining our Patreon, you'll gain access to an ever-growing library of over 850+ unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities. Your support on Patreon will allow us to continue developing and refining new models. ## Other resources - [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs - [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
[ "CRAFT" ]
facebook/mms-1b-l1107
facebook
automatic-speech-recognition
[ "transformers", "pytorch", "safetensors", "wav2vec2", "automatic-speech-recognition", "mms", "ab", "af", "ak", "am", "ar", "as", "av", "ay", "az", "ba", "bm", "be", "bn", "bi", "bo", "sh", "br", "bg", "ca", "cs", "ce", "cv", "ku", "cy", "da", "de", "dv", "dz", "el", "en", "eo", "et", "eu", "ee", "fo", "fa", "fj", "fi", "fr", "fy", "ff", "ga", "gl", "gn", "gu", "zh", "ht", "ha", "he", "hi", "hu", "hy", "ig", "ia", "ms", "is", "it", "jv", "ja", "kn", "ka", "kk", "kr", "km", "ki", "rw", "ky", "ko", "kv", "lo", "la", "lv", "ln", "lt", "lb", "lg", "mh", "ml", "mr", "mk", "mg", "mt", "mn", "mi", "my", "nl", "no", "ne", "ny", "oc", "om", "or", "os", "pa", "pl", "pt", "ps", "qu", "ro", "rn", "ru", "sg", "sk", "sl", "sm", "sn", "sd", "so", "es", "sq", "su", "sv", "sw", "ta", "tt", "te", "tg", "tl", "th", "ti", "ts", "tr", "uk", "vi", "wo", "xh", "yo", "zu", "za", "dataset:google/fleurs", "arxiv:2305.13516", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
2023-05-27T12:19:55Z
2023-06-05T10:23:13+00:00
329
10
--- datasets: - google/fleurs language: - ab - af - ak - am - ar - as - av - ay - az - ba - bm - be - bn - bi - bo - sh - br - bg - ca - cs - ce - cv - ku - cy - da - de - dv - dz - el - en - eo - et - eu - ee - fo - fa - fj - fi - fr - fy - ff - ga - gl - gn - gu - zh - ht - ha - he - hi - sh - hu - hy - ig - ia - ms - is - it - jv - ja - kn - ka - kk - kr - km - ki - rw - ky - ko - kv - lo - la - lv - ln - lt - lb - lg - mh - ml - mr - ms - mk - mg - mt - mn - mi - my - zh - nl - 'no' - 'no' - ne - ny - oc - om - or - os - pa - pl - pt - ms - ps - qu - qu - qu - qu - qu - qu - qu - qu - qu - qu - qu - qu - qu - qu - qu - qu - qu - qu - qu - qu - qu - qu - ro - rn - ru - sg - sk - sl - sm - sn - sd - so - es - sq - su - sv - sw - ta - tt - te - tg - tl - th - ti - ts - tr - uk - ms - vi - wo - xh - ms - yo - ms - zu - za license: cc-by-nc-4.0 metrics: - wer tags: - mms --- # Massively Multilingual Speech (MMS) - Finetuned ASR - L1107 This checkpoint is a model fine-tuned for multi-lingual ASR and part of Facebook's [Massive Multilingual Speech project](https://research.facebook.com/publications/scaling-speech-technology-to-1000-languages/). This checkpoint is based on the [Wav2Vec2 architecture](https://huggingface.co/docs/transformers/model_doc/wav2vec2) and makes use of adapter models to transcribe 1000+ languages. The checkpoint consists of **1 billion parameters** and has been fine-tuned from [facebook/mms-1b](https://huggingface.co/facebook/mms-1b) on 1107 languages. ## Table Of Content - [Example](#example) - [Supported Languages](#supported-languages) - [Model details](#model-details) - [Additional links](#additional-links) ## Example This MMS checkpoint can be used with [Transformers](https://github.com/huggingface/transformers) to transcribe audio of 1107 different languages. Let's look at a simple example. 
First, we install `transformers` and some other libraries ``` pip install torch accelerate torchaudio datasets pip install --upgrade transformers ``` **Note**: In order to use MMS you need to have at least `transformers >= 4.30` installed. If the `4.30` version is not yet available [on PyPI](https://pypi.org/project/transformers/), make sure to install `transformers` from source: ``` pip install git+https://github.com/huggingface/transformers.git ``` Next, we load a couple of audio samples via `datasets`. Make sure that the audio data is sampled at 16,000 Hz (16 kHz). ```py from datasets import load_dataset, Audio # English stream_data = load_dataset("mozilla-foundation/common_voice_13_0", "en", split="test", streaming=True) stream_data = stream_data.cast_column("audio", Audio(sampling_rate=16000)) en_sample = next(iter(stream_data))["audio"]["array"] # French stream_data = load_dataset("mozilla-foundation/common_voice_13_0", "fr", split="test", streaming=True) stream_data = stream_data.cast_column("audio", Audio(sampling_rate=16000)) fr_sample = next(iter(stream_data))["audio"]["array"] ``` Next, we load the model and processor: ```py from transformers import Wav2Vec2ForCTC, AutoProcessor import torch model_id = "facebook/mms-1b-l1107" processor = AutoProcessor.from_pretrained(model_id) model = Wav2Vec2ForCTC.from_pretrained(model_id) ``` Now we process the audio data, pass the processed audio data to the model, and transcribe the model output, just like we usually do for Wav2Vec2 models such as [facebook/wav2vec2-base-960h](https://huggingface.co/facebook/wav2vec2-base-960h): ```py inputs = processor(en_sample, sampling_rate=16_000, return_tensors="pt") with torch.no_grad(): outputs = model(**inputs).logits ids = torch.argmax(outputs, dim=-1)[0] transcription = processor.decode(ids) # 'joe keton disapproved of films and buster also had reservations about the media' ``` We can now keep the same model in memory and simply switch out the language adapters by calling the
convenient `load_adapter()` function for the model and `set_target_lang()` for the tokenizer. We pass the target language as an input: "fra" for French. ```py processor.tokenizer.set_target_lang("fra") model.load_adapter("fra") inputs = processor(fr_sample, sampling_rate=16_000, return_tensors="pt") with torch.no_grad(): outputs = model(**inputs).logits ids = torch.argmax(outputs, dim=-1)[0] transcription = processor.decode(ids) # "ce dernier est volé tout au long de l'histoire romaine" ``` In the same way, the language can be switched out for any other supported language. To see all supported language codes, please have a look at: ```py processor.tokenizer.vocab.keys() ``` For more details, please have a look at [the official docs](https://huggingface.co/docs/transformers/main/en/model_doc/mms). ## Supported Languages This model supports 1107 languages. Click below to toggle the list of all supported languages of this checkpoint in [ISO 639-3 code](https://en.wikipedia.org/wiki/ISO_639-3). You can find more details about the languages and their ISO 639-3 codes in the [MMS Language Coverage Overview](https://dl.fbaipublicfiles.com/mms/misc/language_coverage_mms.html).
<details> <summary>Click to toggle</summary> - abi - abp - aca - acd - ace - acf - ach - acn - acr - acu - ade - adh - adj - adx - aeu - agd - agg - agn - agr - agu - agx - aha - ahk - aia - aka - akb - ake - akp - alj - alp - alt - alz - ame - amf - amh - ami - amk - ann - any - aoz - apb - apr - ara - arl - asa - asg - asm - ata - atb - atg - ati - atq - ava - avn - avu - awa - awb - ayo - ayr - ayz - azb - azg - azj-script_cyrillic - azj-script_latin - azz - bak - bam - ban - bao - bav - bba - bbb - bbc - bbo - bcc-script_arabic - bcc-script_latin - bcl - bcw - bdg - bdh - bdq - bdu - bdv - beh - bem - ben - bep - bex - bfa - bfo - bfy - bfz - bgc - bgq - bgr - bgt - bgw - bha - bht - bhz - bib - bim - bis - biv - bjr - bjv - bjw - bjz - bkd - bkv - blh - blt - blx - blz - bmq - bmr - bmu - bmv - bng - bno - bnp - boa - bod - boj - bom - bor - bov - box - bpr - bps - bqc - bqi - bqj - bqp - bru - bsc - bsq - bss - btd - bts - btt - btx - bud - bul - bus - bvc - bvz - bwq - bwu - byr - bzh - bzi - bzj - caa - cab - cac-dialect_sanmateoixtatan - cac-dialect_sansebastiancoatan - cak-dialect_central - cak-dialect_santamariadejesus - cak-dialect_santodomingoxenacoj - cak-dialect_southcentral - cak-dialect_western - cak-dialect_yepocapa - cap - car - cas - cat - cax - cbc - cbi - cbr - cbs - cbt - cbu - cbv - cce - cco - cdj - ceb - ceg - cek - cfm - cgc - chf - chv - chz - cjo - cjp - cjs - cko - ckt - cla - cle - cly - cme - cmo-script_khmer - cmo-script_latin - cmr - cnh - cni - cnl - cnt - coe - cof - cok - con - cot - cou - cpa - cpb - cpu - crh - crk-script_latin - crk-script_syllabics - crn - crq - crs - crt - csk - cso - ctd - ctg - cto - ctu - cuc - cui - cuk - cul - cwa - cwe - cwt - cya - cym - daa - dah - dar - dbj - dbq - ddn - ded - des - deu - dga - dgi - dgk - dgo - dgr - dhi - did - dig - dik - dip - div - djk - dnj-dialect_blowowest - dnj-dialect_gweetaawueast - dnt - dnw - dop - dos - dsh - dso - dtp - dts - dug - dwr - dyi - dyo - dyu - dzo - eip - 
eka - ell - emp - enb - eng - enx - ese - ess - eus - evn - ewe - eza - fal - fao - far - fas - fij - fin - flr - fmu - fon - fra - frd - ful - gag-script_cyrillic - gag-script_latin - gai - gam - gau - gbi - gbk - gbm - gbo - gde - geb - gej - gil - gjn - gkn - gld - glk - gmv - gna - gnd - gng - gof-script_latin - gog - gor - gqr - grc - gri - grn - grt - gso - gub - guc - gud - guh - guj - guk - gum - guo - guq - guu - gux - gvc - gvl - gwi - gwr - gym - gyr - had - hag - hak - hap - hat - hau - hay - heb - heh - hif - hig - hil - hin - hlb - hlt - hne - hnn - hns - hoc - hoy - hto - hub - hui - hun - hus-dialect_centralveracruz - hus-dialect_westernpotosino - huu - huv - hvn - hwc - hyw - iba - icr - idd - ifa - ifb - ife - ifk - ifu - ify - ign - ikk - ilb - ilo - imo - inb - ind - iou - ipi - iqw - iri - irk - isl - itl - itv - ixl-dialect_sangasparchajul - ixl-dialect_sanjuancotzal - ixl-dialect_santamarianebaj - izr - izz - jac - jam - jav - jbu - jen - jic - jiv - jmc - jmd - jun - juy - jvn - kaa - kab - kac - kak - kan - kao - kaq - kay - kaz - kbo - kbp - kbq - kbr - kby - kca - kcg - kdc - kde - kdh - kdi - kdj - kdl - kdn - kdt - kek - ken - keo - ker - key - kez - kfb - kff-script_telugu - kfw - kfx - khg - khm - khq - kia - kij - kik - kin - kir - kjb - kje - kjg - kjh - kki - kkj - kle - klu - klv - klw - kma - kmd - kml - kmr-script_arabic - kmr-script_cyrillic - kmr-script_latin - kmu - knb - kne - knf - knj - knk - kno - kog - kor - kpq - kps - kpv - kpy - kpz - kqe - kqp - kqr - kqy - krc - kri - krj - krl - krr - krs - kru - ksb - ksr - kss - ktb - ktj - kub - kue - kum - kus - kvn - kvw - kwd - kwf - kwi - kxc - kxf - kxm - kxv - kyb - kyc - kyf - kyg - kyo - kyq - kyu - kyz - kzf - lac - laj - lam - lao - las - lat - lav - law - lbj - lbw - lcp - lee - lef - lem - lew - lex - lgg - lgl - lhu - lia - lid - lif - lip - lis - lje - ljp - llg - lln - lme - lnd - lns - lob - lok - lom - lon - loq - lsi - lsm - luc - lug - lwo - lww - lzz - 
maa-dialect_sanantonio - maa-dialect_sanjeronimo - mad - mag - mah - mai - maj - mak - mal - mam-dialect_central - mam-dialect_northern - mam-dialect_southern - mam-dialect_western - maq - mar - maw - maz - mbb - mbc - mbh - mbj - mbt - mbu - mbz - mca - mcb - mcd - mco - mcp - mcq - mcu - mda - mdv - mdy - med - mee - mej - men - meq - met - mev - mfe - mfh - mfi - mfk - mfq - mfy - mfz - mgd - mge - mgh - mgo - mhi - mhr - mhu - mhx - mhy - mib - mie - mif - mih - mil - mim - min - mio - mip - miq - mit - miy - miz - mjl - mjv - mkl - mkn - mlg - mmg - mnb - mnf - mnk - mnw - mnx - moa - mog - mon - mop - mor - mos - mox - moz - mpg - mpm - mpp - mpx - mqb - mqf - mqj - mqn - mrw - msy - mtd - mtj - mto - muh - mup - mur - muv - muy - mvp - mwq - mwv - mxb - mxq - mxt - mxv - mya - myb - myk - myl - myv - myx - myy - mza - mzi - mzj - mzk - mzm - mzw - nab - nag - nan - nas - naw - nca - nch - ncj - ncl - ncu - ndj - ndp - ndv - ndy - ndz - neb - new - nfa - nfr - nga - ngl - ngp - ngu - nhe - nhi - nhu - nhw - nhx - nhy - nia - nij - nim - nin - nko - nlc - nld - nlg - nlk - nmz - nnb - nnq - nnw - noa - nod - nog - not - npl - npy - nst - nsu - ntm - ntr - nuj - nus - nuz - nwb - nxq - nya - nyf - nyn - nyo - nyy - nzi - obo - ojb-script_latin - ojb-script_syllabics - oku - old - omw - onb - ood - orm - ory - oss - ote - otq - ozm - pab - pad - pag - pam - pan - pao - pap - pau - pbb - pbc - pbi - pce - pcm - peg - pez - pib - pil - pir - pis - pjt - pkb - pls - plw - pmf - pny - poh-dialect_eastern - poh-dialect_western - poi - pol - por - poy - ppk - pps - prf - prk - prt - pse - pss - ptu - pui - pwg - pww - pxm - qub - quc-dialect_central - quc-dialect_east - quc-dialect_north - quf - quh - qul - quw - quy - quz - qvc - qve - qvh - qvm - qvn - qvo - qvs - qvw - qvz - qwh - qxh - qxl - qxn - qxo - qxr - rah - rai - rap - rav - raw - rej - rel - rgu - rhg - rif-script_arabic - rif-script_latin - ril - rim - rjs - rkt - rmc-script_cyrillic - rmc-script_latin - 
rmo - rmy-script_cyrillic - rmy-script_latin - rng - rnl - rol - ron - rop - rro - rub - ruf - rug - run - rus - sab - sag - sah - saj - saq - sas - sba - sbd - sbl - sbp - sch - sck - sda - sea - seh - ses - sey - sgb - sgj - sgw - shi - shk - shn - sho - shp - sid - sig - sil - sja - sjm - sld - slu - sml - smo - sna - sne - snn - snp - snw - som - soy - spa - spp - spy - sqi - sri - srm - srn - srx - stn - stp - suc - suk - sun - sur - sus - suv - suz - swe - swh - sxb - sxn - sya - syl - sza - tac - taj - tam - tao - tap - taq - tat - tav - tbc - tbg - tbk - tbl - tby - tbz - tca - tcc - tcs - tcz - tdj - ted - tee - tel - tem - teo - ter - tes - tew - tex - tfr - tgj - tgk - tgl - tgo - tgp - tha - thk - thl - tih - tik - tir - tkr - tlb - tlj - tly - tmc - tmf - tna - tng - tnk - tnn - tnp - tnr - tnt - tob - toc - toh - tom - tos - tpi - tpm - tpp - tpt - trc - tri - trn - trs - tso - tsz - ttc - tte - ttq-script_tifinagh - tue - tuf - tuk-script_arabic - tuk-script_latin - tuo - tur - tvw - twb - twe - twu - txa - txq - txu - tye - tzh-dialect_bachajon - tzh-dialect_tenejapa - tzj-dialect_eastern - tzj-dialect_western - tzo-dialect_chamula - tzo-dialect_chenalho - ubl - ubu - udm - udu - uig-script_arabic - uig-script_cyrillic - ukr - unr - upv - ura - urb - urd-script_arabic - urd-script_devanagari - urd-script_latin - urk - urt - ury - usp - uzb-script_cyrillic - vag - vid - vie - vif - vmw - vmy - vun - vut - wal-script_ethiopic - wal-script_latin - wap - war - waw - way - wba - wlo - wlx - wmw - wob - wsg - wwa - xal - xdy - xed - xer - xmm - xnj - xnr - xog - xon - xrb - xsb - xsm - xsr - xsu - xta - xtd - xte - xtm - xtn - xua - xuo - yaa - yad - yal - yam - yao - yas - yat - yaz - yba - ybb - ycl - ycn - yea - yka - yli - yor - yre - yua - yuz - yva - zaa - zab - zac - zad - zae - zai - zam - zao - zaq - zar - zas - zav - zaw - zca - zga - zim - ziw - zlm - zmz - zne - zos - zpc - zpg - zpi - zpl - zpm - zpo - zpt - zpu - zpz - ztq - zty - zyb - zyp 
- zza </details> ## Model details - **Developed by:** Vineel Pratap et al. - **Model type:** Multi-Lingual Automatic Speech Recognition model - **Language(s):** 1000+ languages, see [supported languages](#supported-languages) - **License:** CC-BY-NC 4.0 license - **Num parameters**: 1 billion - **Audio sampling rate**: 16,000 Hz (16 kHz) - **Cite as:** @article{pratap2023mms, title={Scaling Speech Technology to 1,000+ Languages}, author={Vineel Pratap and Andros Tjandra and Bowen Shi and Paden Tomasello and Arun Babu and Sayani Kundu and Ali Elkahky and Zhaoheng Ni and Apoorv Vyas and Maryam Fazel-Zarandi and Alexei Baevski and Yossi Adi and Xiaohui Zhang and Wei-Ning Hsu and Alexis Conneau and Michael Auli}, journal={arXiv}, year={2023} } ## Additional Links - [Blog post](https://ai.facebook.com/blog/multilingual-model-speech-recognition/) - [Transformers documentation](https://huggingface.co/docs/transformers/main/en/model_doc/mms) - [Paper](https://arxiv.org/abs/2305.13516) - [GitHub Repository](https://github.com/facebookresearch/fairseq/tree/main/examples/mms#asr) - [Other **MMS** checkpoints](https://huggingface.co/models?other=mms) - MMS base checkpoints: - [facebook/mms-1b](https://huggingface.co/facebook/mms-1b) - [facebook/mms-300m](https://huggingface.co/facebook/mms-300m) - [Official Space](https://huggingface.co/spaces/facebook/MMS)
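As an aside on the decoding step in the Example section: taking `torch.argmax` over the logits and then calling `processor.decode(ids)` amounts to greedy CTC decoding, which collapses consecutive repeated ids and drops the blank token. A minimal sketch with a toy vocabulary (the blank id of 0 and the vocabulary here are illustrative assumptions; the real ones come from the per-language tokenizer):

```python
def greedy_ctc_decode(ids, vocab, blank_id=0):
    """Greedy CTC decoding: collapse consecutive repeats, then drop blanks.

    Toy sketch of what `processor.decode()` does under the hood; the real
    MMS vocabulary and blank id come from the per-language tokenizer.
    """
    out = []
    prev = None
    for i in ids:
        if i != prev and i != blank_id:
            out.append(vocab[i])
        prev = i
    return "".join(out)

vocab = {0: "<blank>", 1: "h", 2: "e", 3: "l", 4: "o"}
text = greedy_ctc_decode([1, 1, 0, 2, 3, 3, 0, 3, 4, 4], vocab)
print(text)  # -> hello
```

Note how the blank between the two `l` runs is what lets the doubled letter survive the repeat-collapsing step.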
[ "CAS" ]
Nextcloud-AI/multilingual-e5-large-instruct
Nextcloud-AI
feature-extraction
[ "sentence-transformers", "onnx", "safetensors", "xlm-roberta", "feature-extraction", "mteb", "transformers", "multilingual", "af", "am", "ar", "as", "az", "be", "bg", "bn", "br", "bs", "ca", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "he", "hi", "hr", "hu", "hy", "id", "is", "it", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lo", "lt", "lv", "mg", "mk", "ml", "mn", "mr", "ms", "my", "ne", "nl", "no", "om", "or", "pa", "pl", "ps", "pt", "ro", "ru", "sa", "sd", "si", "sk", "sl", "so", "sq", "sr", "su", "sv", "sw", "ta", "te", "th", "tl", "tr", "ug", "uk", "ur", "uz", "vi", "xh", "yi", "zh", "arxiv:2402.05672", "arxiv:2401.00368", "arxiv:2104.08663", "arxiv:2210.07316", "license:mit", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
2024-09-27T06:44:12Z
2024-09-26T06:33:15+00:00
329
5
--- language: - multilingual - af - am - ar - as - az - be - bg - bn - br - bs - ca - cs - cy - da - de - el - en - eo - es - et - eu - fa - fi - fr - fy - ga - gd - gl - gu - ha - he - hi - hr - hu - hy - id - is - it - ja - jv - ka - kk - km - kn - ko - ku - ky - la - lo - lt - lv - mg - mk - ml - mn - mr - ms - my - ne - nl - 'no' - om - or - pa - pl - ps - pt - ro - ru - sa - sd - si - sk - sl - so - sq - sr - su - sv - sw - ta - te - th - tl - tr - ug - uk - ur - uz - vi - xh - yi - zh license: mit tags: - mteb - sentence-transformers - transformers model-index: - name: multilingual-e5-large-instruct results: - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en) type: mteb/amazon_counterfactual config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 76.23880597014924 - type: ap value: 39.07351965022687 - type: f1 value: 70.04836733862683 - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (de) type: mteb/amazon_counterfactual config: de split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 66.71306209850107 - type: ap value: 79.01499914759529 - type: f1 value: 64.81951817560703 - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en-ext) type: mteb/amazon_counterfactual config: en-ext split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 73.85307346326837 - type: ap value: 22.447519885878737 - type: f1 value: 61.0162730745633 - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (ja) type: mteb/amazon_counterfactual config: ja split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 76.04925053533191 - type: ap value: 23.44983217128922 - type: f1 value: 62.5723230907759 - task: type: Classification dataset: name: MTEB AmazonPolarityClassification type: 
mteb/amazon_polarity config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 96.28742500000001 - type: ap value: 94.8449918887462 - type: f1 value: 96.28680923610432 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (en) type: mteb/amazon_reviews_multi config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 56.716 - type: f1 value: 55.76510398266401 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (de) type: mteb/amazon_reviews_multi config: de split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 52.99999999999999 - type: f1 value: 52.00829994765178 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (es) type: mteb/amazon_reviews_multi config: es split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 48.806000000000004 - type: f1 value: 48.082345914983634 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (fr) type: mteb/amazon_reviews_multi config: fr split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 48.507999999999996 - type: f1 value: 47.68752844642045 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (ja) type: mteb/amazon_reviews_multi config: ja split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 47.709999999999994 - type: f1 value: 47.05870376637181 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (zh) type: mteb/amazon_reviews_multi config: zh split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 44.662000000000006 - type: f1 value: 43.42371965372771 - task: type: Retrieval dataset: name: MTEB ArguAna type: arguana config: default split: test revision: None metrics: - type: 
map_at_1 value: 31.721 - type: map_at_10 value: 49.221 - type: map_at_100 value: 49.884 - type: map_at_1000 value: 49.888 - type: map_at_3 value: 44.31 - type: map_at_5 value: 47.276 - type: mrr_at_1 value: 32.432 - type: mrr_at_10 value: 49.5 - type: mrr_at_100 value: 50.163000000000004 - type: mrr_at_1000 value: 50.166 - type: mrr_at_3 value: 44.618 - type: mrr_at_5 value: 47.541 - type: ndcg_at_1 value: 31.721 - type: ndcg_at_10 value: 58.384 - type: ndcg_at_100 value: 61.111000000000004 - type: ndcg_at_1000 value: 61.187999999999995 - type: ndcg_at_3 value: 48.386 - type: ndcg_at_5 value: 53.708999999999996 - type: precision_at_1 value: 31.721 - type: precision_at_10 value: 8.741 - type: precision_at_100 value: 0.991 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 20.057 - type: precision_at_5 value: 14.609 - type: recall_at_1 value: 31.721 - type: recall_at_10 value: 87.411 - type: recall_at_100 value: 99.075 - type: recall_at_1000 value: 99.644 - type: recall_at_3 value: 60.171 - type: recall_at_5 value: 73.044 - task: type: Clustering dataset: name: MTEB ArxivClusteringP2P type: mteb/arxiv-clustering-p2p config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 46.40419580759799 - task: type: Clustering dataset: name: MTEB ArxivClusteringS2S type: mteb/arxiv-clustering-s2s config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 40.48593255007969 - task: type: Reranking dataset: name: MTEB AskUbuntuDupQuestions type: mteb/askubuntudupquestions-reranking config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 63.889179122289995 - type: mrr value: 77.61146286769556 - task: type: STS dataset: name: MTEB BIOSSES type: mteb/biosses-sts config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 88.15075203727929 - type: 
cos_sim_spearman value: 86.9622224570873 - type: euclidean_pearson value: 86.70473853624121 - type: euclidean_spearman value: 86.9622224570873 - type: manhattan_pearson value: 86.21089380980065 - type: manhattan_spearman value: 86.75318154937008 - task: type: BitextMining dataset: name: MTEB BUCC (de-en) type: mteb/bucc-bitext-mining config: de-en split: test revision: d51519689f32196a32af33b075a01d0e7c51e252 metrics: - type: accuracy value: 99.65553235908142 - type: f1 value: 99.60681976339595 - type: precision value: 99.58246346555325 - type: recall value: 99.65553235908142 - task: type: BitextMining dataset: name: MTEB BUCC (fr-en) type: mteb/bucc-bitext-mining config: fr-en split: test revision: d51519689f32196a32af33b075a01d0e7c51e252 metrics: - type: accuracy value: 99.26260180497468 - type: f1 value: 99.14520507740848 - type: precision value: 99.08650671362535 - type: recall value: 99.26260180497468 - task: type: BitextMining dataset: name: MTEB BUCC (ru-en) type: mteb/bucc-bitext-mining config: ru-en split: test revision: d51519689f32196a32af33b075a01d0e7c51e252 metrics: - type: accuracy value: 98.07412538967787 - type: f1 value: 97.86629719431936 - type: precision value: 97.76238309664012 - type: recall value: 98.07412538967787 - task: type: BitextMining dataset: name: MTEB BUCC (zh-en) type: mteb/bucc-bitext-mining config: zh-en split: test revision: d51519689f32196a32af33b075a01d0e7c51e252 metrics: - type: accuracy value: 99.42074776197998 - type: f1 value: 99.38564156573635 - type: precision value: 99.36808846761454 - type: recall value: 99.42074776197998 - task: type: Classification dataset: name: MTEB Banking77Classification type: mteb/banking77 config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 85.73376623376623 - type: f1 value: 85.68480707214599 - task: type: Clustering dataset: name: MTEB BiorxivClusteringP2P type: mteb/biorxiv-clustering-p2p config: default split: test revision: 
65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 40.935218072113855 - task: type: Clustering dataset: name: MTEB BiorxivClusteringS2S type: mteb/biorxiv-clustering-s2s config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 36.276389017675264 - task: type: Retrieval dataset: name: MTEB CQADupstackRetrieval type: BeIR/cqadupstack config: default split: test revision: None metrics: - type: map_at_1 value: 27.764166666666668 - type: map_at_10 value: 37.298166666666674 - type: map_at_100 value: 38.530166666666666 - type: map_at_1000 value: 38.64416666666667 - type: map_at_3 value: 34.484833333333334 - type: map_at_5 value: 36.0385 - type: mrr_at_1 value: 32.93558333333333 - type: mrr_at_10 value: 41.589749999999995 - type: mrr_at_100 value: 42.425333333333334 - type: mrr_at_1000 value: 42.476333333333336 - type: mrr_at_3 value: 39.26825 - type: mrr_at_5 value: 40.567083333333336 - type: ndcg_at_1 value: 32.93558333333333 - type: ndcg_at_10 value: 42.706583333333334 - type: ndcg_at_100 value: 47.82483333333333 - type: ndcg_at_1000 value: 49.95733333333334 - type: ndcg_at_3 value: 38.064750000000004 - type: ndcg_at_5 value: 40.18158333333333 - type: precision_at_1 value: 32.93558333333333 - type: precision_at_10 value: 7.459833333333334 - type: precision_at_100 value: 1.1830833333333335 - type: precision_at_1000 value: 0.15608333333333332 - type: precision_at_3 value: 17.5235 - type: precision_at_5 value: 12.349833333333333 - type: recall_at_1 value: 27.764166666666668 - type: recall_at_10 value: 54.31775 - type: recall_at_100 value: 76.74350000000001 - type: recall_at_1000 value: 91.45208333333332 - type: recall_at_3 value: 41.23425 - type: recall_at_5 value: 46.73983333333334 - task: type: Retrieval dataset: name: MTEB ClimateFEVER type: climate-fever config: default split: test revision: None metrics: - type: map_at_1 value: 12.969 - type: map_at_10 value: 21.584999999999997 - type: 
map_at_100 value: 23.3 - type: map_at_1000 value: 23.5 - type: map_at_3 value: 18.218999999999998 - type: map_at_5 value: 19.983 - type: mrr_at_1 value: 29.316 - type: mrr_at_10 value: 40.033 - type: mrr_at_100 value: 40.96 - type: mrr_at_1000 value: 41.001 - type: mrr_at_3 value: 37.123 - type: mrr_at_5 value: 38.757999999999996 - type: ndcg_at_1 value: 29.316 - type: ndcg_at_10 value: 29.858 - type: ndcg_at_100 value: 36.756 - type: ndcg_at_1000 value: 40.245999999999995 - type: ndcg_at_3 value: 24.822 - type: ndcg_at_5 value: 26.565 - type: precision_at_1 value: 29.316 - type: precision_at_10 value: 9.186 - type: precision_at_100 value: 1.6549999999999998 - type: precision_at_1000 value: 0.22999999999999998 - type: precision_at_3 value: 18.436 - type: precision_at_5 value: 13.876 - type: recall_at_1 value: 12.969 - type: recall_at_10 value: 35.142 - type: recall_at_100 value: 59.143 - type: recall_at_1000 value: 78.594 - type: recall_at_3 value: 22.604 - type: recall_at_5 value: 27.883000000000003 - task: type: Retrieval dataset: name: MTEB DBPedia type: dbpedia-entity config: default split: test revision: None metrics: - type: map_at_1 value: 8.527999999999999 - type: map_at_10 value: 17.974999999999998 - type: map_at_100 value: 25.665 - type: map_at_1000 value: 27.406000000000002 - type: map_at_3 value: 13.017999999999999 - type: map_at_5 value: 15.137 - type: mrr_at_1 value: 62.5 - type: mrr_at_10 value: 71.891 - type: mrr_at_100 value: 72.294 - type: mrr_at_1000 value: 72.296 - type: mrr_at_3 value: 69.958 - type: mrr_at_5 value: 71.121 - type: ndcg_at_1 value: 50.875 - type: ndcg_at_10 value: 38.36 - type: ndcg_at_100 value: 44.235 - type: ndcg_at_1000 value: 52.154 - type: ndcg_at_3 value: 43.008 - type: ndcg_at_5 value: 40.083999999999996 - type: precision_at_1 value: 62.5 - type: precision_at_10 value: 30.0 - type: precision_at_100 value: 10.038 - type: precision_at_1000 value: 2.0869999999999997 - type: precision_at_3 value: 46.833000000000006 - type: 
precision_at_5 value: 38.800000000000004 - type: recall_at_1 value: 8.527999999999999 - type: recall_at_10 value: 23.828 - type: recall_at_100 value: 52.322 - type: recall_at_1000 value: 77.143 - type: recall_at_3 value: 14.136000000000001 - type: recall_at_5 value: 17.761 - task: type: Classification dataset: name: MTEB EmotionClassification type: mteb/emotion config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 51.51 - type: f1 value: 47.632159862049896 - task: type: Retrieval dataset: name: MTEB FEVER type: fever config: default split: test revision: None metrics: - type: map_at_1 value: 60.734 - type: map_at_10 value: 72.442 - type: map_at_100 value: 72.735 - type: map_at_1000 value: 72.75 - type: map_at_3 value: 70.41199999999999 - type: map_at_5 value: 71.80499999999999 - type: mrr_at_1 value: 65.212 - type: mrr_at_10 value: 76.613 - type: mrr_at_100 value: 76.79899999999999 - type: mrr_at_1000 value: 76.801 - type: mrr_at_3 value: 74.8 - type: mrr_at_5 value: 76.12400000000001 - type: ndcg_at_1 value: 65.212 - type: ndcg_at_10 value: 77.988 - type: ndcg_at_100 value: 79.167 - type: ndcg_at_1000 value: 79.452 - type: ndcg_at_3 value: 74.362 - type: ndcg_at_5 value: 76.666 - type: precision_at_1 value: 65.212 - type: precision_at_10 value: 10.003 - type: precision_at_100 value: 1.077 - type: precision_at_1000 value: 0.11199999999999999 - type: precision_at_3 value: 29.518 - type: precision_at_5 value: 19.016 - type: recall_at_1 value: 60.734 - type: recall_at_10 value: 90.824 - type: recall_at_100 value: 95.71600000000001 - type: recall_at_1000 value: 97.577 - type: recall_at_3 value: 81.243 - type: recall_at_5 value: 86.90299999999999 - task: type: Retrieval dataset: name: MTEB FiQA2018 type: fiqa config: default split: test revision: None metrics: - type: map_at_1 value: 23.845 - type: map_at_10 value: 39.281 - type: map_at_100 value: 41.422 - type: map_at_1000 value: 41.593 - type: map_at_3 value: 
34.467 - type: map_at_5 value: 37.017 - type: mrr_at_1 value: 47.531 - type: mrr_at_10 value: 56.204 - type: mrr_at_100 value: 56.928999999999995 - type: mrr_at_1000 value: 56.962999999999994 - type: mrr_at_3 value: 54.115 - type: mrr_at_5 value: 55.373000000000005 - type: ndcg_at_1 value: 47.531 - type: ndcg_at_10 value: 47.711999999999996 - type: ndcg_at_100 value: 54.510999999999996 - type: ndcg_at_1000 value: 57.103 - type: ndcg_at_3 value: 44.145 - type: ndcg_at_5 value: 45.032 - type: precision_at_1 value: 47.531 - type: precision_at_10 value: 13.194 - type: precision_at_100 value: 2.045 - type: precision_at_1000 value: 0.249 - type: precision_at_3 value: 29.424 - type: precision_at_5 value: 21.451 - type: recall_at_1 value: 23.845 - type: recall_at_10 value: 54.967 - type: recall_at_100 value: 79.11399999999999 - type: recall_at_1000 value: 94.56700000000001 - type: recall_at_3 value: 40.256 - type: recall_at_5 value: 46.215 - task: type: Retrieval dataset: name: MTEB HotpotQA type: hotpotqa config: default split: test revision: None metrics: - type: map_at_1 value: 37.819 - type: map_at_10 value: 60.889 - type: map_at_100 value: 61.717999999999996 - type: map_at_1000 value: 61.778 - type: map_at_3 value: 57.254000000000005 - type: map_at_5 value: 59.541 - type: mrr_at_1 value: 75.638 - type: mrr_at_10 value: 82.173 - type: mrr_at_100 value: 82.362 - type: mrr_at_1000 value: 82.37 - type: mrr_at_3 value: 81.089 - type: mrr_at_5 value: 81.827 - type: ndcg_at_1 value: 75.638 - type: ndcg_at_10 value: 69.317 - type: ndcg_at_100 value: 72.221 - type: ndcg_at_1000 value: 73.382 - type: ndcg_at_3 value: 64.14 - type: ndcg_at_5 value: 67.07600000000001 - type: precision_at_1 value: 75.638 - type: precision_at_10 value: 14.704999999999998 - type: precision_at_100 value: 1.698 - type: precision_at_1000 value: 0.185 - type: precision_at_3 value: 41.394999999999996 - type: precision_at_5 value: 27.162999999999997 - type: recall_at_1 value: 37.819 - type: recall_at_10 
value: 73.52499999999999 - type: recall_at_100 value: 84.875 - type: recall_at_1000 value: 92.559 - type: recall_at_3 value: 62.092999999999996 - type: recall_at_5 value: 67.907 - task: type: Classification dataset: name: MTEB ImdbClassification type: mteb/imdb config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 94.60079999999999 - type: ap value: 92.67396345347356 - type: f1 value: 94.5988098167121 - task: type: Retrieval dataset: name: MTEB MSMARCO type: msmarco config: default split: dev revision: None metrics: - type: map_at_1 value: 21.285 - type: map_at_10 value: 33.436 - type: map_at_100 value: 34.63 - type: map_at_1000 value: 34.681 - type: map_at_3 value: 29.412 - type: map_at_5 value: 31.715 - type: mrr_at_1 value: 21.848 - type: mrr_at_10 value: 33.979 - type: mrr_at_100 value: 35.118 - type: mrr_at_1000 value: 35.162 - type: mrr_at_3 value: 30.036 - type: mrr_at_5 value: 32.298 - type: ndcg_at_1 value: 21.862000000000002 - type: ndcg_at_10 value: 40.43 - type: ndcg_at_100 value: 46.17 - type: ndcg_at_1000 value: 47.412 - type: ndcg_at_3 value: 32.221 - type: ndcg_at_5 value: 36.332 - type: precision_at_1 value: 21.862000000000002 - type: precision_at_10 value: 6.491 - type: precision_at_100 value: 0.935 - type: precision_at_1000 value: 0.104 - type: precision_at_3 value: 13.744 - type: precision_at_5 value: 10.331999999999999 - type: recall_at_1 value: 21.285 - type: recall_at_10 value: 62.083 - type: recall_at_100 value: 88.576 - type: recall_at_1000 value: 98.006 - type: recall_at_3 value: 39.729 - type: recall_at_5 value: 49.608000000000004 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (en) type: mteb/mtop_domain config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 93.92612859097127 - type: f1 value: 93.82370333372853 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (de) type: 
mteb/mtop_domain config: de split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 92.67681036911807 - type: f1 value: 92.14191382411472 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (es) type: mteb/mtop_domain config: es split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 92.26817878585723 - type: f1 value: 91.92824250337878 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (fr) type: mteb/mtop_domain config: fr split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 89.96554963983714 - type: f1 value: 90.02859329630792 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (hi) type: mteb/mtop_domain config: hi split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 90.02509860164935 - type: f1 value: 89.30665159182062 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (th) type: mteb/mtop_domain config: th split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 87.55515370705244 - type: f1 value: 87.94449232331907 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (en) type: mteb/mtop_intent config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 82.4623803009576 - type: f1 value: 66.06738378772725 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (de) type: mteb/mtop_intent config: de split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 79.3716539870386 - type: f1 value: 60.37614033396853 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (es) type: mteb/mtop_intent config: es split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 80.34022681787857 - type: f1 value: 
58.302008026952 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (fr) type: mteb/mtop_intent config: fr split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 76.72095208268087 - type: f1 value: 59.64524724009049 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (hi) type: mteb/mtop_intent config: hi split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 77.87020437432773 - type: f1 value: 57.80202694670567 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (th) type: mteb/mtop_intent config: th split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 77.73598553345387 - type: f1 value: 58.19628250675031 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (af) type: mteb/amazon_massive_intent config: af split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 67.6630800268998 - type: f1 value: 65.00996668051691 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (am) type: mteb/amazon_massive_intent config: am split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 60.7128446536651 - type: f1 value: 57.95860594874963 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ar) type: mteb/amazon_massive_intent config: ar split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 63.61129791526563 - type: f1 value: 59.75328290206483 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (az) type: mteb/amazon_massive_intent config: az split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.00134498991257 - type: f1 value: 67.0230483991802 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (bn) type: 
mteb/amazon_massive_intent config: bn split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 68.54068594485541 - type: f1 value: 65.54604628946976 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (cy) type: mteb/amazon_massive_intent config: cy split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 63.032952252858095 - type: f1 value: 58.715741857057104 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (da) type: mteb/amazon_massive_intent config: da split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 71.80901143241427 - type: f1 value: 68.33963989243877 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (de) type: mteb/amazon_massive_intent config: de split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 72.47141896435777 - type: f1 value: 69.56765020308262 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (el) type: mteb/amazon_massive_intent config: el split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 71.2373907195696 - type: f1 value: 69.04529836036467 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (en) type: mteb/amazon_massive_intent config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 77.05783456624076 - type: f1 value: 74.69430584708174 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (es) type: mteb/amazon_massive_intent config: es split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 72.82111634162744 - type: f1 value: 70.77228952803762 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (fa) type: mteb/amazon_massive_intent config: fa split: test revision: 
31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 74.25353059852051 - type: f1 value: 71.05310103416411 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (fi) type: mteb/amazon_massive_intent config: fi split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 72.28648285137861 - type: f1 value: 69.08020473732226 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (fr) type: mteb/amazon_massive_intent config: fr split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 73.31540013449899 - type: f1 value: 70.9426355465791 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (he) type: mteb/amazon_massive_intent config: he split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 70.2151983860121 - type: f1 value: 67.52541755908858 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (hi) type: mteb/amazon_massive_intent config: hi split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 71.58372562205784 - type: f1 value: 69.49769064229827 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (hu) type: mteb/amazon_massive_intent config: hu split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 71.9233355749832 - type: f1 value: 69.36311548259593 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (hy) type: mteb/amazon_massive_intent config: hy split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 68.07330195023538 - type: f1 value: 64.99882022345572 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (id) type: mteb/amazon_massive_intent config: id split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy 
value: 72.62273032952253 - type: f1 value: 70.6394885471001 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (is) type: mteb/amazon_massive_intent config: is split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 65.77000672494957 - type: f1 value: 62.9368944815065 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (it) type: mteb/amazon_massive_intent config: it split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 73.453261600538 - type: f1 value: 70.85069934666681 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ja) type: mteb/amazon_massive_intent config: ja split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 74.6906523201076 - type: f1 value: 72.03249740074217 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (jv) type: mteb/amazon_massive_intent config: jv split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 63.03631472763953 - type: f1 value: 59.3165215571852 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ka) type: mteb/amazon_massive_intent config: ka split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 58.913920645595155 - type: f1 value: 57.367337711611285 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (km) type: mteb/amazon_massive_intent config: km split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 54.42837928715535 - type: f1 value: 52.60527294970906 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (kn) type: mteb/amazon_massive_intent config: kn split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 66.33490248823135 - type: f1 value: 63.213340969404065 - task: 
type: Classification dataset: name: MTEB MassiveIntentClassification (ko) type: mteb/amazon_massive_intent config: ko split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 70.58507061197041 - type: f1 value: 68.40256628040486 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (lv) type: mteb/amazon_massive_intent config: lv split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.11230665770006 - type: f1 value: 66.44863577842305 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ml) type: mteb/amazon_massive_intent config: ml split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.70073974445192 - type: f1 value: 67.21291337273702 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (mn) type: mteb/amazon_massive_intent config: mn split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 66.43913920645595 - type: f1 value: 64.09838087422806 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ms) type: mteb/amazon_massive_intent config: ms split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 70.80026899798251 - type: f1 value: 68.76986742962444 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (my) type: mteb/amazon_massive_intent config: my split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 64.78816408876934 - type: f1 value: 62.18781873428972 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (nb) type: mteb/amazon_massive_intent config: nb split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 71.6577000672495 - type: f1 value: 68.75171511133003 - task: type: Classification dataset: name: MTEB MassiveIntentClassification 
(nl) type: mteb/amazon_massive_intent config: nl split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 74.42501681237391 - type: f1 value: 71.18434963451544 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (pl) type: mteb/amazon_massive_intent config: pl split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 73.64828513786146 - type: f1 value: 70.67741914007422 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (pt) type: mteb/amazon_massive_intent config: pt split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 73.62811028917284 - type: f1 value: 71.36402039740959 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ro) type: mteb/amazon_massive_intent config: ro split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 71.88634835238736 - type: f1 value: 69.23701923480677 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ru) type: mteb/amazon_massive_intent config: ru split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 74.15938130464022 - type: f1 value: 71.87792218993388 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (sl) type: mteb/amazon_massive_intent config: sl split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.96301277740416 - type: f1 value: 67.29584200202983 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (sq) type: mteb/amazon_massive_intent config: sq split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.49562878278412 - type: f1 value: 66.91716685679431 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (sv) type: mteb/amazon_massive_intent config: sv split: test 
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 74.6805648957633 - type: f1 value: 72.02723592594374 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (sw) type: mteb/amazon_massive_intent config: sw split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 63.00605245460659 - type: f1 value: 60.16716669482932 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ta) type: mteb/amazon_massive_intent config: ta split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 66.90988567585742 - type: f1 value: 63.99405488777784 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (te) type: mteb/amazon_massive_intent config: te split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 67.62273032952253 - type: f1 value: 65.17213906909481 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (th) type: mteb/amazon_massive_intent config: th split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.50907868190988 - type: f1 value: 69.15165697194853 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (tl) type: mteb/amazon_massive_intent config: tl split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.30733019502352 - type: f1 value: 66.69024007380474 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (tr) type: mteb/amazon_massive_intent config: tr split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 72.24277067921989 - type: f1 value: 68.80515408492947 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ur) type: mteb/amazon_massive_intent config: ur split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: 
accuracy value: 67.49831876260929 - type: f1 value: 64.83778567111116 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (vi) type: mteb/amazon_massive_intent config: vi split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 71.28782784129119 - type: f1 value: 69.3294186700733 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (zh-CN) type: mteb/amazon_massive_intent config: zh-CN split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 73.315400134499 - type: f1 value: 71.22674385243207 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (zh-TW) type: mteb/amazon_massive_intent config: zh-TW split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.37794216543377 - type: f1 value: 68.96962492838232 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (af) type: mteb/amazon_massive_scenario config: af split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 73.33557498318764 - type: f1 value: 72.28949738478356 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (am) type: mteb/amazon_massive_scenario config: am split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 65.84398117014123 - type: f1 value: 64.71026362091463 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ar) type: mteb/amazon_massive_scenario config: ar split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 69.76462676529925 - type: f1 value: 69.8229667407667 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (az) type: mteb/amazon_massive_scenario config: az split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 72.02420981842636 - type: 
f1 value: 71.76576384895898 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (bn) type: mteb/amazon_massive_scenario config: bn split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 72.7572293207801 - type: f1 value: 72.76840765295256 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (cy) type: mteb/amazon_massive_scenario config: cy split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 68.02286482851379 - type: f1 value: 66.17237947327872 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (da) type: mteb/amazon_massive_scenario config: da split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 77.60928043039678 - type: f1 value: 77.27094731234773 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (de) type: mteb/amazon_massive_scenario config: de split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 77.68325487558843 - type: f1 value: 77.97530399082261 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (el) type: mteb/amazon_massive_scenario config: el split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 76.13315400134498 - type: f1 value: 75.97558584796424 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (en) type: mteb/amazon_massive_scenario config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 80.47410894418292 - type: f1 value: 80.52244841473792 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (es) type: mteb/amazon_massive_scenario config: es split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 76.9670477471419 - type: f1 value: 77.37318805793146 - task: type: 
Classification dataset: name: MTEB MassiveScenarioClassification (fa) type: mteb/amazon_massive_scenario config: fa split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 78.09683927370544 - type: f1 value: 77.69773737430847 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (fi) type: mteb/amazon_massive_scenario config: fi split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 75.20847343644922 - type: f1 value: 75.17071738727348 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (fr) type: mteb/amazon_massive_scenario config: fr split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 77.07464694014796 - type: f1 value: 77.16136207698571 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (he) type: mteb/amazon_massive_scenario config: he split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 73.53396099529255 - type: f1 value: 73.58296404484122 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (hi) type: mteb/amazon_massive_scenario config: hi split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 75.75319435104237 - type: f1 value: 75.24674707850833 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (hu) type: mteb/amazon_massive_scenario config: hu split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 77.0948217888366 - type: f1 value: 76.47559490205028 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (hy) type: mteb/amazon_massive_scenario config: hy split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 71.07599193006052 - type: f1 value: 70.76028043093511 - task: type: Classification dataset: name: MTEB 
MassiveScenarioClassification (id) type: mteb/amazon_massive_scenario config: id split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 77.10490921318089 - type: f1 value: 77.01215275283272 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (is) type: mteb/amazon_massive_scenario config: is split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 71.25756556825824 - type: f1 value: 70.20605314648762 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (it) type: mteb/amazon_massive_scenario config: it split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 77.08137188971082 - type: f1 value: 77.3899269057439 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ja) type: mteb/amazon_massive_scenario config: ja split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 79.35440484196369 - type: f1 value: 79.58964690002772 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (jv) type: mteb/amazon_massive_scenario config: jv split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 68.42299932750504 - type: f1 value: 68.07844356925413 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ka) type: mteb/amazon_massive_scenario config: ka split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 66.15669132481507 - type: f1 value: 65.89383352608513 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (km) type: mteb/amazon_massive_scenario config: km split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 60.11432414256894 - type: f1 value: 57.69910594559806 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (kn) type: 
mteb/amazon_massive_scenario config: kn split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 71.24747814391392 - type: f1 value: 70.42455553830918 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ko) type: mteb/amazon_massive_scenario config: ko split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 76.46267652992603 - type: f1 value: 76.8854559308316 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (lv) type: mteb/amazon_massive_scenario config: lv split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 73.24815063887021 - type: f1 value: 72.77805034658074 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ml) type: mteb/amazon_massive_scenario config: ml split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 74.11566913248151 - type: f1 value: 73.86147988001356 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (mn) type: mteb/amazon_massive_scenario config: mn split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 70.0168123739072 - type: f1 value: 69.38515920054571 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ms) type: mteb/amazon_massive_scenario config: ms split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 74.41156691324814 - type: f1 value: 73.43474953408237 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (my) type: mteb/amazon_massive_scenario config: my split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 68.39609952925353 - type: f1 value: 67.29731681109291 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (nb) type: mteb/amazon_massive_scenario config: nb 
split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 77.20914593140552 - type: f1 value: 77.07066497935367 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (nl) type: mteb/amazon_massive_scenario config: nl split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 78.52387357094821 - type: f1 value: 78.5259569473291 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (pl) type: mteb/amazon_massive_scenario config: pl split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 76.6913248150639 - type: f1 value: 76.91201656350455 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (pt) type: mteb/amazon_massive_scenario config: pt split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 77.1217215870881 - type: f1 value: 77.41179937912504 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ro) type: mteb/amazon_massive_scenario config: ro split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 75.25891055817083 - type: f1 value: 75.8089244542887 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ru) type: mteb/amazon_massive_scenario config: ru split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 77.70679219905851 - type: f1 value: 78.21459594517711 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (sl) type: mteb/amazon_massive_scenario config: sl split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 74.83523873570948 - type: f1 value: 74.86847028401978 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (sq) type: mteb/amazon_massive_scenario config: sq split: test revision: 
7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 74.71755211835911 - type: f1 value: 74.0214326485662 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (sv) type: mteb/amazon_massive_scenario config: sv split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 79.06523201075991 - type: f1 value: 79.10545620325138 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (sw) type: mteb/amazon_massive_scenario config: sw split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 67.91862811028918 - type: f1 value: 66.50386121217983 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ta) type: mteb/amazon_massive_scenario config: ta split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 70.93140551445865 - type: f1 value: 70.755435928495 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (te) type: mteb/amazon_massive_scenario config: te split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 72.40753194351042 - type: f1 value: 71.61816115782923 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (th) type: mteb/amazon_massive_scenario config: th split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 75.1815736381977 - type: f1 value: 75.08016717887205 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (tl) type: mteb/amazon_massive_scenario config: tl split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 72.86482851378614 - type: f1 value: 72.39521180006291 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (tr) type: mteb/amazon_massive_scenario config: tr split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 
metrics: - type: accuracy value: 76.46940147948891 - type: f1 value: 76.70044085362349 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ur) type: mteb/amazon_massive_scenario config: ur split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 71.89307330195024 - type: f1 value: 71.5721825332298 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (vi) type: mteb/amazon_massive_scenario config: vi split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 74.7511768661735 - type: f1 value: 75.17918654541515 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (zh-CN) type: mteb/amazon_massive_scenario config: zh-CN split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 78.69535978480162 - type: f1 value: 78.90019070153316 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (zh-TW) type: mteb/amazon_massive_scenario config: zh-TW split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 75.45729657027572 - type: f1 value: 76.19578371794672 - task: type: Clustering dataset: name: MTEB MedrxivClusteringP2P type: mteb/medrxiv-clustering-p2p config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 36.92715354123554 - task: type: Clustering dataset: name: MTEB MedrxivClusteringS2S type: mteb/medrxiv-clustering-s2s config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 35.53536244162518 - task: type: Reranking dataset: name: MTEB MindSmallReranking type: mteb/mind_small config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 33.08507884504006 - type: mrr value: 34.32436977159129 - task: type: Retrieval dataset: name: MTEB NFCorpus type: nfcorpus 
config: default split: test revision: None metrics: - type: map_at_1 value: 5.935 - type: map_at_10 value: 13.297 - type: map_at_100 value: 16.907 - type: map_at_1000 value: 18.391 - type: map_at_3 value: 9.626999999999999 - type: map_at_5 value: 11.190999999999999 - type: mrr_at_1 value: 46.129999999999995 - type: mrr_at_10 value: 54.346000000000004 - type: mrr_at_100 value: 55.067 - type: mrr_at_1000 value: 55.1 - type: mrr_at_3 value: 51.961 - type: mrr_at_5 value: 53.246 - type: ndcg_at_1 value: 44.118 - type: ndcg_at_10 value: 35.534 - type: ndcg_at_100 value: 32.946999999999996 - type: ndcg_at_1000 value: 41.599000000000004 - type: ndcg_at_3 value: 40.25 - type: ndcg_at_5 value: 37.978 - type: precision_at_1 value: 46.129999999999995 - type: precision_at_10 value: 26.842 - type: precision_at_100 value: 8.427 - type: precision_at_1000 value: 2.128 - type: precision_at_3 value: 37.977 - type: precision_at_5 value: 32.879000000000005 - type: recall_at_1 value: 5.935 - type: recall_at_10 value: 17.211000000000002 - type: recall_at_100 value: 34.33 - type: recall_at_1000 value: 65.551 - type: recall_at_3 value: 10.483 - type: recall_at_5 value: 13.078999999999999 - task: type: Retrieval dataset: name: MTEB NQ type: nq config: default split: test revision: None metrics: - type: map_at_1 value: 35.231 - type: map_at_10 value: 50.202000000000005 - type: map_at_100 value: 51.154999999999994 - type: map_at_1000 value: 51.181 - type: map_at_3 value: 45.774 - type: map_at_5 value: 48.522 - type: mrr_at_1 value: 39.687 - type: mrr_at_10 value: 52.88 - type: mrr_at_100 value: 53.569 - type: mrr_at_1000 value: 53.58500000000001 - type: mrr_at_3 value: 49.228 - type: mrr_at_5 value: 51.525 - type: ndcg_at_1 value: 39.687 - type: ndcg_at_10 value: 57.754000000000005 - type: ndcg_at_100 value: 61.597 - type: ndcg_at_1000 value: 62.18900000000001 - type: ndcg_at_3 value: 49.55 - type: ndcg_at_5 value: 54.11899999999999 - type: precision_at_1 value: 39.687 - type: 
precision_at_10 value: 9.313 - type: precision_at_100 value: 1.146 - type: precision_at_1000 value: 0.12 - type: precision_at_3 value: 22.229 - type: precision_at_5 value: 15.939 - type: recall_at_1 value: 35.231 - type: recall_at_10 value: 78.083 - type: recall_at_100 value: 94.42099999999999 - type: recall_at_1000 value: 98.81 - type: recall_at_3 value: 57.047000000000004 - type: recall_at_5 value: 67.637 - task: type: Retrieval dataset: name: MTEB QuoraRetrieval type: quora config: default split: test revision: None metrics: - type: map_at_1 value: 71.241 - type: map_at_10 value: 85.462 - type: map_at_100 value: 86.083 - type: map_at_1000 value: 86.09700000000001 - type: map_at_3 value: 82.49499999999999 - type: map_at_5 value: 84.392 - type: mrr_at_1 value: 82.09 - type: mrr_at_10 value: 88.301 - type: mrr_at_100 value: 88.383 - type: mrr_at_1000 value: 88.384 - type: mrr_at_3 value: 87.37 - type: mrr_at_5 value: 88.035 - type: ndcg_at_1 value: 82.12 - type: ndcg_at_10 value: 89.149 - type: ndcg_at_100 value: 90.235 - type: ndcg_at_1000 value: 90.307 - type: ndcg_at_3 value: 86.37599999999999 - type: ndcg_at_5 value: 87.964 - type: precision_at_1 value: 82.12 - type: precision_at_10 value: 13.56 - type: precision_at_100 value: 1.539 - type: precision_at_1000 value: 0.157 - type: precision_at_3 value: 37.88 - type: precision_at_5 value: 24.92 - type: recall_at_1 value: 71.241 - type: recall_at_10 value: 96.128 - type: recall_at_100 value: 99.696 - type: recall_at_1000 value: 99.994 - type: recall_at_3 value: 88.181 - type: recall_at_5 value: 92.694 - task: type: Clustering dataset: name: MTEB RedditClustering type: mteb/reddit-clustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 56.59757799655151 - task: type: Clustering dataset: name: MTEB RedditClusteringP2P type: mteb/reddit-clustering-p2p config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: 
v_measure value: 64.27391998854624 - task: type: Retrieval dataset: name: MTEB SCIDOCS type: scidocs config: default split: test revision: None metrics: - type: map_at_1 value: 4.243 - type: map_at_10 value: 10.965 - type: map_at_100 value: 12.934999999999999 - type: map_at_1000 value: 13.256 - type: map_at_3 value: 7.907 - type: map_at_5 value: 9.435 - type: mrr_at_1 value: 20.9 - type: mrr_at_10 value: 31.849 - type: mrr_at_100 value: 32.964 - type: mrr_at_1000 value: 33.024 - type: mrr_at_3 value: 28.517 - type: mrr_at_5 value: 30.381999999999998 - type: ndcg_at_1 value: 20.9 - type: ndcg_at_10 value: 18.723 - type: ndcg_at_100 value: 26.384999999999998 - type: ndcg_at_1000 value: 32.114 - type: ndcg_at_3 value: 17.753 - type: ndcg_at_5 value: 15.558 - type: precision_at_1 value: 20.9 - type: precision_at_10 value: 9.8 - type: precision_at_100 value: 2.078 - type: precision_at_1000 value: 0.345 - type: precision_at_3 value: 16.900000000000002 - type: precision_at_5 value: 13.88 - type: recall_at_1 value: 4.243 - type: recall_at_10 value: 19.885 - type: recall_at_100 value: 42.17 - type: recall_at_1000 value: 70.12 - type: recall_at_3 value: 10.288 - type: recall_at_5 value: 14.072000000000001 - task: type: STS dataset: name: MTEB SICK-R type: mteb/sickr-sts config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_pearson value: 85.84209174935282 - type: cos_sim_spearman value: 81.73248048438833 - type: euclidean_pearson value: 83.02810070308149 - type: euclidean_spearman value: 81.73248295679514 - type: manhattan_pearson value: 82.95368060376002 - type: manhattan_spearman value: 81.60277910998718 - task: type: STS dataset: name: MTEB STS12 type: mteb/sts12-sts config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 88.52628804556943 - type: cos_sim_spearman value: 82.5713913555672 - type: euclidean_pearson value: 85.8796774746988 - type: 
euclidean_spearman value: 82.57137506803424 - type: manhattan_pearson value: 85.79671002960058 - type: manhattan_spearman value: 82.49445981618027 - task: type: STS dataset: name: MTEB STS13 type: mteb/sts13-sts config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 86.23682503505542 - type: cos_sim_spearman value: 87.15008956711806 - type: euclidean_pearson value: 86.79805401524959 - type: euclidean_spearman value: 87.15008956711806 - type: manhattan_pearson value: 86.65298502699244 - type: manhattan_spearman value: 86.97677821948562 - task: type: STS dataset: name: MTEB STS14 type: mteb/sts14-sts config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 85.63370304677802 - type: cos_sim_spearman value: 84.97105553540318 - type: euclidean_pearson value: 85.28896108687721 - type: euclidean_spearman value: 84.97105553540318 - type: manhattan_pearson value: 85.09663190337331 - type: manhattan_spearman value: 84.79126831644619 - task: type: STS dataset: name: MTEB STS15 type: mteb/sts15-sts config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 90.2614838800733 - type: cos_sim_spearman value: 91.0509162991835 - type: euclidean_pearson value: 90.33098317533373 - type: euclidean_spearman value: 91.05091625871644 - type: manhattan_pearson value: 90.26250435151107 - type: manhattan_spearman value: 90.97999594417519 - task: type: STS dataset: name: MTEB STS16 type: mteb/sts16-sts config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 85.80480973335091 - type: cos_sim_spearman value: 87.313695492969 - type: euclidean_pearson value: 86.49267251576939 - type: euclidean_spearman value: 87.313695492969 - type: manhattan_pearson value: 86.44019901831935 - type: manhattan_spearman value: 87.24205395460392 - task: type: STS dataset: 
name: MTEB STS17 (en-en) type: mteb/sts17-crosslingual-sts config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 90.05662789380672 - type: cos_sim_spearman value: 90.02759424426651 - type: euclidean_pearson value: 90.4042483422981 - type: euclidean_spearman value: 90.02759424426651 - type: manhattan_pearson value: 90.51446975000226 - type: manhattan_spearman value: 90.08832889933616 - task: type: STS dataset: name: MTEB STS22 (en) type: mteb/sts22-crosslingual-sts config: en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 67.5975528273532 - type: cos_sim_spearman value: 67.62969861411354 - type: euclidean_pearson value: 69.224275734323 - type: euclidean_spearman value: 67.62969861411354 - type: manhattan_pearson value: 69.3761447059927 - type: manhattan_spearman value: 67.90921005611467 - task: type: STS dataset: name: MTEB STSBenchmark type: mteb/stsbenchmark-sts config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 87.11244327231684 - type: cos_sim_spearman value: 88.37902438979035 - type: euclidean_pearson value: 87.86054279847336 - type: euclidean_spearman value: 88.37902438979035 - type: manhattan_pearson value: 87.77257757320378 - type: manhattan_spearman value: 88.25208966098123 - task: type: Reranking dataset: name: MTEB SciDocsRR type: mteb/scidocs-reranking config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 85.87174608143563 - type: mrr value: 96.12836872640794 - task: type: Retrieval dataset: name: MTEB SciFact type: scifact config: default split: test revision: None metrics: - type: map_at_1 value: 57.760999999999996 - type: map_at_10 value: 67.258 - type: map_at_100 value: 67.757 - type: map_at_1000 value: 67.78800000000001 - type: map_at_3 value: 64.602 - type: map_at_5 value: 65.64 - type: mrr_at_1 value: 60.667 - 
type: mrr_at_10 value: 68.441 - type: mrr_at_100 value: 68.825 - type: mrr_at_1000 value: 68.853 - type: mrr_at_3 value: 66.444 - type: mrr_at_5 value: 67.26100000000001 - type: ndcg_at_1 value: 60.667 - type: ndcg_at_10 value: 71.852 - type: ndcg_at_100 value: 73.9 - type: ndcg_at_1000 value: 74.628 - type: ndcg_at_3 value: 67.093 - type: ndcg_at_5 value: 68.58 - type: precision_at_1 value: 60.667 - type: precision_at_10 value: 9.6 - type: precision_at_100 value: 1.0670000000000002 - type: precision_at_1000 value: 0.11199999999999999 - type: precision_at_3 value: 26.111 - type: precision_at_5 value: 16.733 - type: recall_at_1 value: 57.760999999999996 - type: recall_at_10 value: 84.967 - type: recall_at_100 value: 93.833 - type: recall_at_1000 value: 99.333 - type: recall_at_3 value: 71.589 - type: recall_at_5 value: 75.483 - task: type: PairClassification dataset: name: MTEB SprintDuplicateQuestions type: mteb/sprintduplicatequestions-pairclassification config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.66633663366336 - type: cos_sim_ap value: 91.17685358899108 - type: cos_sim_f1 value: 82.16818642350559 - type: cos_sim_precision value: 83.26488706365504 - type: cos_sim_recall value: 81.10000000000001 - type: dot_accuracy value: 99.66633663366336 - type: dot_ap value: 91.17663411119032 - type: dot_f1 value: 82.16818642350559 - type: dot_precision value: 83.26488706365504 - type: dot_recall value: 81.10000000000001 - type: euclidean_accuracy value: 99.66633663366336 - type: euclidean_ap value: 91.17685189882275 - type: euclidean_f1 value: 82.16818642350559 - type: euclidean_precision value: 83.26488706365504 - type: euclidean_recall value: 81.10000000000001 - type: manhattan_accuracy value: 99.66633663366336 - type: manhattan_ap value: 91.2241619496737 - type: manhattan_f1 value: 82.20472440944883 - type: manhattan_precision value: 86.51933701657458 - type: manhattan_recall value: 78.3 - type: 
max_accuracy value: 99.66633663366336 - type: max_ap value: 91.2241619496737 - type: max_f1 value: 82.20472440944883 - task: type: Clustering dataset: name: MTEB StackExchangeClustering type: mteb/stackexchange-clustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 66.85101268897951 - task: type: Clustering dataset: name: MTEB StackExchangeClusteringP2P type: mteb/stackexchange-clustering-p2p config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 42.461184054706905 - task: type: Reranking dataset: name: MTEB StackOverflowDupQuestions type: mteb/stackoverflowdupquestions-reranking config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 51.44542568873886 - type: mrr value: 52.33656151854681 - task: type: Summarization dataset: name: MTEB SummEval type: mteb/summeval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 30.75982974997539 - type: cos_sim_spearman value: 30.385405026539914 - type: dot_pearson value: 30.75982433546523 - type: dot_spearman value: 30.385405026539914 - task: type: Retrieval dataset: name: MTEB TRECCOVID type: trec-covid config: default split: test revision: None metrics: - type: map_at_1 value: 0.22799999999999998 - type: map_at_10 value: 2.064 - type: map_at_100 value: 13.056000000000001 - type: map_at_1000 value: 31.747999999999998 - type: map_at_3 value: 0.67 - type: map_at_5 value: 1.097 - type: mrr_at_1 value: 90.0 - type: mrr_at_10 value: 94.667 - type: mrr_at_100 value: 94.667 - type: mrr_at_1000 value: 94.667 - type: mrr_at_3 value: 94.667 - type: mrr_at_5 value: 94.667 - type: ndcg_at_1 value: 86.0 - type: ndcg_at_10 value: 82.0 - type: ndcg_at_100 value: 64.307 - type: ndcg_at_1000 value: 57.023999999999994 - type: ndcg_at_3 value: 85.816 - type: ndcg_at_5 value: 84.904 - type: precision_at_1 
value: 90.0 - type: precision_at_10 value: 85.8 - type: precision_at_100 value: 66.46 - type: precision_at_1000 value: 25.202 - type: precision_at_3 value: 90.0 - type: precision_at_5 value: 89.2 - type: recall_at_1 value: 0.22799999999999998 - type: recall_at_10 value: 2.235 - type: recall_at_100 value: 16.185 - type: recall_at_1000 value: 53.620999999999995 - type: recall_at_3 value: 0.7040000000000001 - type: recall_at_5 value: 1.172 - task: type: BitextMining dataset: name: MTEB Tatoeba (sqi-eng) type: mteb/tatoeba-bitext-mining config: sqi-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.39999999999999 - type: f1 value: 96.75 - type: precision value: 96.45 - type: recall value: 97.39999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (fry-eng) type: mteb/tatoeba-bitext-mining config: fry-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 85.54913294797689 - type: f1 value: 82.46628131021194 - type: precision value: 81.1175337186898 - type: recall value: 85.54913294797689 - task: type: BitextMining dataset: name: MTEB Tatoeba (kur-eng) type: mteb/tatoeba-bitext-mining config: kur-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 81.21951219512195 - type: f1 value: 77.33333333333334 - type: precision value: 75.54878048780488 - type: recall value: 81.21951219512195 - task: type: BitextMining dataset: name: MTEB Tatoeba (tur-eng) type: mteb/tatoeba-bitext-mining config: tur-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 98.6 - type: f1 value: 98.26666666666665 - type: precision value: 98.1 - type: recall value: 98.6 - task: type: BitextMining dataset: name: MTEB Tatoeba (deu-eng) type: mteb/tatoeba-bitext-mining config: deu-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 99.5 - type: f1 value: 
99.33333333333333 - type: precision value: 99.25 - type: recall value: 99.5 - task: type: BitextMining dataset: name: MTEB Tatoeba (nld-eng) type: mteb/tatoeba-bitext-mining config: nld-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.8 - type: f1 value: 97.2 - type: precision value: 96.89999999999999 - type: recall value: 97.8 - task: type: BitextMining dataset: name: MTEB Tatoeba (ron-eng) type: mteb/tatoeba-bitext-mining config: ron-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.8 - type: f1 value: 97.18333333333334 - type: precision value: 96.88333333333333 - type: recall value: 97.8 - task: type: BitextMining dataset: name: MTEB Tatoeba (ang-eng) type: mteb/tatoeba-bitext-mining config: ang-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 77.61194029850746 - type: f1 value: 72.81094527363183 - type: precision value: 70.83333333333333 - type: recall value: 77.61194029850746 - task: type: BitextMining dataset: name: MTEB Tatoeba (ido-eng) type: mteb/tatoeba-bitext-mining config: ido-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 93.7 - type: f1 value: 91.91666666666667 - type: precision value: 91.08333333333334 - type: recall value: 93.7 - task: type: BitextMining dataset: name: MTEB Tatoeba (jav-eng) type: mteb/tatoeba-bitext-mining config: jav-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 88.29268292682927 - type: f1 value: 85.27642276422765 - type: precision value: 84.01277584204414 - type: recall value: 88.29268292682927 - task: type: BitextMining dataset: name: MTEB Tatoeba (isl-eng) type: mteb/tatoeba-bitext-mining config: isl-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.1 - type: f1 value: 95.0 - type: precision value: 
94.46666666666668 - type: recall value: 96.1 - task: type: BitextMining dataset: name: MTEB Tatoeba (slv-eng) type: mteb/tatoeba-bitext-mining config: slv-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 93.681652490887 - type: f1 value: 91.90765492102065 - type: precision value: 91.05913325232888 - type: recall value: 93.681652490887 - task: type: BitextMining dataset: name: MTEB Tatoeba (cym-eng) type: mteb/tatoeba-bitext-mining config: cym-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 92.17391304347827 - type: f1 value: 89.97101449275361 - type: precision value: 88.96811594202899 - type: recall value: 92.17391304347827 - task: type: BitextMining dataset: name: MTEB Tatoeba (kaz-eng) type: mteb/tatoeba-bitext-mining config: kaz-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 90.43478260869566 - type: f1 value: 87.72173913043478 - type: precision value: 86.42028985507245 - type: recall value: 90.43478260869566 - task: type: BitextMining dataset: name: MTEB Tatoeba (est-eng) type: mteb/tatoeba-bitext-mining config: est-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 90.4 - type: f1 value: 88.03 - type: precision value: 86.95 - type: recall value: 90.4 - task: type: BitextMining dataset: name: MTEB Tatoeba (heb-eng) type: mteb/tatoeba-bitext-mining config: heb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 93.4 - type: f1 value: 91.45666666666666 - type: precision value: 90.525 - type: recall value: 93.4 - task: type: BitextMining dataset: name: MTEB Tatoeba (gla-eng) type: mteb/tatoeba-bitext-mining config: gla-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 81.9059107358263 - type: f1 value: 78.32557872364869 - type: precision value: 76.78260286824823 - 
type: recall value: 81.9059107358263 - task: type: BitextMining dataset: name: MTEB Tatoeba (mar-eng) type: mteb/tatoeba-bitext-mining config: mar-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.3 - type: f1 value: 92.58333333333333 - type: precision value: 91.73333333333332 - type: recall value: 94.3 - task: type: BitextMining dataset: name: MTEB Tatoeba (lat-eng) type: mteb/tatoeba-bitext-mining config: lat-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 79.10000000000001 - type: f1 value: 74.50500000000001 - type: precision value: 72.58928571428571 - type: recall value: 79.10000000000001 - task: type: BitextMining dataset: name: MTEB Tatoeba (bel-eng) type: mteb/tatoeba-bitext-mining config: bel-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.6 - type: f1 value: 95.55 - type: precision value: 95.05 - type: recall value: 96.6 - task: type: BitextMining dataset: name: MTEB Tatoeba (pms-eng) type: mteb/tatoeba-bitext-mining config: pms-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 82.0952380952381 - type: f1 value: 77.98458049886621 - type: precision value: 76.1968253968254 - type: recall value: 82.0952380952381 - task: type: BitextMining dataset: name: MTEB Tatoeba (gle-eng) type: mteb/tatoeba-bitext-mining config: gle-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 87.9 - type: f1 value: 84.99190476190476 - type: precision value: 83.65 - type: recall value: 87.9 - task: type: BitextMining dataset: name: MTEB Tatoeba (pes-eng) type: mteb/tatoeba-bitext-mining config: pes-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.7 - type: f1 value: 94.56666666666666 - type: precision value: 94.01666666666667 - type: recall value: 95.7 - task: type: 
BitextMining dataset: name: MTEB Tatoeba (nob-eng) type: mteb/tatoeba-bitext-mining config: nob-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 98.6 - type: f1 value: 98.2 - type: precision value: 98.0 - type: recall value: 98.6 - task: type: BitextMining dataset: name: MTEB Tatoeba (bul-eng) type: mteb/tatoeba-bitext-mining config: bul-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.6 - type: f1 value: 94.38333333333334 - type: precision value: 93.78333333333335 - type: recall value: 95.6 - task: type: BitextMining dataset: name: MTEB Tatoeba (cbk-eng) type: mteb/tatoeba-bitext-mining config: cbk-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 87.4 - type: f1 value: 84.10380952380952 - type: precision value: 82.67 - type: recall value: 87.4 - task: type: BitextMining dataset: name: MTEB Tatoeba (hun-eng) type: mteb/tatoeba-bitext-mining config: hun-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.5 - type: f1 value: 94.33333333333334 - type: precision value: 93.78333333333333 - type: recall value: 95.5 - task: type: BitextMining dataset: name: MTEB Tatoeba (uig-eng) type: mteb/tatoeba-bitext-mining config: uig-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 89.4 - type: f1 value: 86.82000000000001 - type: precision value: 85.64500000000001 - type: recall value: 89.4 - task: type: BitextMining dataset: name: MTEB Tatoeba (rus-eng) type: mteb/tatoeba-bitext-mining config: rus-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.1 - type: f1 value: 93.56666666666668 - type: precision value: 92.81666666666666 - type: recall value: 95.1 - task: type: BitextMining dataset: name: MTEB Tatoeba (spa-eng) type: mteb/tatoeba-bitext-mining config: spa-eng split: test 
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 98.9 - type: f1 value: 98.6 - type: precision value: 98.45 - type: recall value: 98.9 - task: type: BitextMining dataset: name: MTEB Tatoeba (hye-eng) type: mteb/tatoeba-bitext-mining config: hye-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.01347708894879 - type: f1 value: 93.51752021563343 - type: precision value: 92.82794249775381 - type: recall value: 95.01347708894879 - task: type: BitextMining dataset: name: MTEB Tatoeba (tel-eng) type: mteb/tatoeba-bitext-mining config: tel-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.00854700854701 - type: f1 value: 96.08262108262107 - type: precision value: 95.65527065527067 - type: recall value: 97.00854700854701 - task: type: BitextMining dataset: name: MTEB Tatoeba (afr-eng) type: mteb/tatoeba-bitext-mining config: afr-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.5 - type: f1 value: 95.39999999999999 - type: precision value: 94.88333333333333 - type: recall value: 96.5 - task: type: BitextMining dataset: name: MTEB Tatoeba (mon-eng) type: mteb/tatoeba-bitext-mining config: mon-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.5909090909091 - type: f1 value: 95.49242424242425 - type: precision value: 94.9621212121212 - type: recall value: 96.5909090909091 - task: type: BitextMining dataset: name: MTEB Tatoeba (arz-eng) type: mteb/tatoeba-bitext-mining config: arz-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 84.90566037735849 - type: f1 value: 81.85883997204752 - type: precision value: 80.54507337526205 - type: recall value: 84.90566037735849 - task: type: BitextMining dataset: name: MTEB Tatoeba (hrv-eng) type: mteb/tatoeba-bitext-mining config: hrv-eng split: 
test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.5 - type: f1 value: 96.75 - type: precision value: 96.38333333333333 - type: recall value: 97.5 - task: type: BitextMining dataset: name: MTEB Tatoeba (nov-eng) type: mteb/tatoeba-bitext-mining config: nov-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 86.7704280155642 - type: f1 value: 82.99610894941635 - type: precision value: 81.32295719844358 - type: recall value: 86.7704280155642 - task: type: BitextMining dataset: name: MTEB Tatoeba (gsw-eng) type: mteb/tatoeba-bitext-mining config: gsw-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 67.52136752136752 - type: f1 value: 61.89662189662191 - type: precision value: 59.68660968660969 - type: recall value: 67.52136752136752 - task: type: BitextMining dataset: name: MTEB Tatoeba (nds-eng) type: mteb/tatoeba-bitext-mining config: nds-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 89.2 - type: f1 value: 86.32 - type: precision value: 85.015 - type: recall value: 89.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (ukr-eng) type: mteb/tatoeba-bitext-mining config: ukr-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.0 - type: f1 value: 94.78333333333333 - type: precision value: 94.18333333333334 - type: recall value: 96.0 - task: type: BitextMining dataset: name: MTEB Tatoeba (uzb-eng) type: mteb/tatoeba-bitext-mining config: uzb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 83.8785046728972 - type: f1 value: 80.54517133956385 - type: precision value: 79.154984423676 - type: recall value: 83.8785046728972 - task: type: BitextMining dataset: name: MTEB Tatoeba (lit-eng) type: mteb/tatoeba-bitext-mining config: lit-eng split: test revision: 
9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 93.60000000000001 - type: f1 value: 92.01333333333334 - type: precision value: 91.28333333333333 - type: recall value: 93.60000000000001 - task: type: BitextMining dataset: name: MTEB Tatoeba (ina-eng) type: mteb/tatoeba-bitext-mining config: ina-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.1 - type: f1 value: 96.26666666666667 - type: precision value: 95.85000000000001 - type: recall value: 97.1 - task: type: BitextMining dataset: name: MTEB Tatoeba (lfn-eng) type: mteb/tatoeba-bitext-mining config: lfn-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 84.3 - type: f1 value: 80.67833333333333 - type: precision value: 79.03928571428571 - type: recall value: 84.3 - task: type: BitextMining dataset: name: MTEB Tatoeba (zsm-eng) type: mteb/tatoeba-bitext-mining config: zsm-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.3 - type: f1 value: 96.48333333333332 - type: precision value: 96.08333333333331 - type: recall value: 97.3 - task: type: BitextMining dataset: name: MTEB Tatoeba (ita-eng) type: mteb/tatoeba-bitext-mining config: ita-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.7 - type: f1 value: 94.66666666666667 - type: precision value: 94.16666666666667 - type: recall value: 95.7 - task: type: BitextMining dataset: name: MTEB Tatoeba (cmn-eng) type: mteb/tatoeba-bitext-mining config: cmn-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.2 - type: f1 value: 96.36666666666667 - type: precision value: 95.96666666666668 - type: recall value: 97.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (lvs-eng) type: mteb/tatoeba-bitext-mining config: lvs-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 
metrics: - type: accuracy value: 94.3 - type: f1 value: 92.80666666666667 - type: precision value: 92.12833333333333 - type: recall value: 94.3 - task: type: BitextMining dataset: name: MTEB Tatoeba (glg-eng) type: mteb/tatoeba-bitext-mining config: glg-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.0 - type: f1 value: 96.22333333333334 - type: precision value: 95.875 - type: recall value: 97.0 - task: type: BitextMining dataset: name: MTEB Tatoeba (ceb-eng) type: mteb/tatoeba-bitext-mining config: ceb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 74.33333333333333 - type: f1 value: 70.78174603174602 - type: precision value: 69.28333333333332 - type: recall value: 74.33333333333333 - task: type: BitextMining dataset: name: MTEB Tatoeba (bre-eng) type: mteb/tatoeba-bitext-mining config: bre-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 37.6 - type: f1 value: 32.938348952090365 - type: precision value: 31.2811038961039 - type: recall value: 37.6 - task: type: BitextMining dataset: name: MTEB Tatoeba (ben-eng) type: mteb/tatoeba-bitext-mining config: ben-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 91.5 - type: f1 value: 89.13333333333333 - type: precision value: 88.03333333333333 - type: recall value: 91.5 - task: type: BitextMining dataset: name: MTEB Tatoeba (swg-eng) type: mteb/tatoeba-bitext-mining config: swg-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 82.14285714285714 - type: f1 value: 77.67857142857143 - type: precision value: 75.59523809523809 - type: recall value: 82.14285714285714 - task: type: BitextMining dataset: name: MTEB Tatoeba (arq-eng) type: mteb/tatoeba-bitext-mining config: arq-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 
69.0450054884742 - type: f1 value: 63.070409283362075 - type: precision value: 60.58992781824835 - type: recall value: 69.0450054884742 - task: type: BitextMining dataset: name: MTEB Tatoeba (kab-eng) type: mteb/tatoeba-bitext-mining config: kab-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 63.1 - type: f1 value: 57.848333333333336 - type: precision value: 55.69500000000001 - type: recall value: 63.1 - task: type: BitextMining dataset: name: MTEB Tatoeba (fra-eng) type: mteb/tatoeba-bitext-mining config: fra-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.1 - type: f1 value: 95.01666666666667 - type: precision value: 94.5 - type: recall value: 96.1 - task: type: BitextMining dataset: name: MTEB Tatoeba (por-eng) type: mteb/tatoeba-bitext-mining config: por-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.89999999999999 - type: f1 value: 94.90666666666667 - type: precision value: 94.425 - type: recall value: 95.89999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (tat-eng) type: mteb/tatoeba-bitext-mining config: tat-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 87.6 - type: f1 value: 84.61333333333333 - type: precision value: 83.27 - type: recall value: 87.6 - task: type: BitextMining dataset: name: MTEB Tatoeba (oci-eng) type: mteb/tatoeba-bitext-mining config: oci-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 76.4 - type: f1 value: 71.90746031746032 - type: precision value: 70.07027777777778 - type: recall value: 76.4 - task: type: BitextMining dataset: name: MTEB Tatoeba (pol-eng) type: mteb/tatoeba-bitext-mining config: pol-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.89999999999999 - type: f1 value: 97.26666666666667 - 
type: precision value: 96.95 - type: recall value: 97.89999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (war-eng) type: mteb/tatoeba-bitext-mining config: war-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 78.8 - type: f1 value: 74.39555555555555 - type: precision value: 72.59416666666667 - type: recall value: 78.8 - task: type: BitextMining dataset: name: MTEB Tatoeba (aze-eng) type: mteb/tatoeba-bitext-mining config: aze-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.19999999999999 - type: f1 value: 93.78999999999999 - type: precision value: 93.125 - type: recall value: 95.19999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (vie-eng) type: mteb/tatoeba-bitext-mining config: vie-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.8 - type: f1 value: 97.1 - type: precision value: 96.75 - type: recall value: 97.8 - task: type: BitextMining dataset: name: MTEB Tatoeba (nno-eng) type: mteb/tatoeba-bitext-mining config: nno-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.6 - type: f1 value: 94.25666666666666 - type: precision value: 93.64166666666668 - type: recall value: 95.6 - task: type: BitextMining dataset: name: MTEB Tatoeba (cha-eng) type: mteb/tatoeba-bitext-mining config: cha-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 56.934306569343065 - type: f1 value: 51.461591936044485 - type: precision value: 49.37434827945776 - type: recall value: 56.934306569343065 - task: type: BitextMining dataset: name: MTEB Tatoeba (mhr-eng) type: mteb/tatoeba-bitext-mining config: mhr-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 20.200000000000003 - type: f1 value: 16.91799284049284 - type: precision value: 
15.791855158730158 - type: recall value: 20.200000000000003 - task: type: BitextMining dataset: name: MTEB Tatoeba (dan-eng) type: mteb/tatoeba-bitext-mining config: dan-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.2 - type: f1 value: 95.3 - type: precision value: 94.85 - type: recall value: 96.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (ell-eng) type: mteb/tatoeba-bitext-mining config: ell-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.3 - type: f1 value: 95.11666666666667 - type: precision value: 94.53333333333333 - type: recall value: 96.3 - task: type: BitextMining dataset: name: MTEB Tatoeba (amh-eng) type: mteb/tatoeba-bitext-mining config: amh-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 89.88095238095238 - type: f1 value: 87.14285714285714 - type: precision value: 85.96230158730161 - type: recall value: 89.88095238095238 - task: type: BitextMining dataset: name: MTEB Tatoeba (pam-eng) type: mteb/tatoeba-bitext-mining config: pam-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 24.099999999999998 - type: f1 value: 19.630969083349783 - type: precision value: 18.275094905094907 - type: recall value: 24.099999999999998 - task: type: BitextMining dataset: name: MTEB Tatoeba (hsb-eng) type: mteb/tatoeba-bitext-mining config: hsb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 83.4368530020704 - type: f1 value: 79.45183870649709 - type: precision value: 77.7432712215321 - type: recall value: 83.4368530020704 - task: type: BitextMining dataset: name: MTEB Tatoeba (srp-eng) type: mteb/tatoeba-bitext-mining config: srp-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.8 - type: f1 value: 94.53333333333333 - type: precision value: 
93.91666666666666 - type: recall value: 95.8 - task: type: BitextMining dataset: name: MTEB Tatoeba (epo-eng) type: mteb/tatoeba-bitext-mining config: epo-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 98.8 - type: f1 value: 98.48333333333332 - type: precision value: 98.33333333333334 - type: recall value: 98.8 - task: type: BitextMining dataset: name: MTEB Tatoeba (kzj-eng) type: mteb/tatoeba-bitext-mining config: kzj-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 17.5 - type: f1 value: 14.979285714285714 - type: precision value: 14.23235060690943 - type: recall value: 17.5 - task: type: BitextMining dataset: name: MTEB Tatoeba (awa-eng) type: mteb/tatoeba-bitext-mining config: awa-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 93.93939393939394 - type: f1 value: 91.991341991342 - type: precision value: 91.05339105339105 - type: recall value: 93.93939393939394 - task: type: BitextMining dataset: name: MTEB Tatoeba (fao-eng) type: mteb/tatoeba-bitext-mining config: fao-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 89.31297709923665 - type: f1 value: 86.76844783715012 - type: precision value: 85.63613231552164 - type: recall value: 89.31297709923665 - task: type: BitextMining dataset: name: MTEB Tatoeba (mal-eng) type: mteb/tatoeba-bitext-mining config: mal-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 99.12663755458514 - type: f1 value: 98.93255701115964 - type: precision value: 98.83551673944687 - type: recall value: 99.12663755458514 - task: type: BitextMining dataset: name: MTEB Tatoeba (ile-eng) type: mteb/tatoeba-bitext-mining config: ile-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 92.0 - type: f1 value: 89.77999999999999 - type: precision 
value: 88.78333333333333 - type: recall value: 92.0 - task: type: BitextMining dataset: name: MTEB Tatoeba (bos-eng) type: mteb/tatoeba-bitext-mining config: bos-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.89265536723164 - type: f1 value: 95.85687382297553 - type: precision value: 95.33898305084746 - type: recall value: 96.89265536723164 - task: type: BitextMining dataset: name: MTEB Tatoeba (cor-eng) type: mteb/tatoeba-bitext-mining config: cor-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 14.6 - type: f1 value: 11.820611790170615 - type: precision value: 11.022616224355355 - type: recall value: 14.6 - task: type: BitextMining dataset: name: MTEB Tatoeba (cat-eng) type: mteb/tatoeba-bitext-mining config: cat-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.89999999999999 - type: f1 value: 94.93333333333334 - type: precision value: 94.48666666666666 - type: recall value: 95.89999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (eus-eng) type: mteb/tatoeba-bitext-mining config: eus-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 87.6 - type: f1 value: 84.72333333333334 - type: precision value: 83.44166666666666 - type: recall value: 87.6 - task: type: BitextMining dataset: name: MTEB Tatoeba (yue-eng) type: mteb/tatoeba-bitext-mining config: yue-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.8 - type: f1 value: 93.47333333333333 - type: precision value: 92.875 - type: recall value: 94.8 - task: type: BitextMining dataset: name: MTEB Tatoeba (swe-eng) type: mteb/tatoeba-bitext-mining config: swe-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.6 - type: f1 value: 95.71666666666665 - type: precision value: 95.28333333333335 - 
type: recall value: 96.6 - task: type: BitextMining dataset: name: MTEB Tatoeba (dtp-eng) type: mteb/tatoeba-bitext-mining config: dtp-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 17.8 - type: f1 value: 14.511074040901628 - type: precision value: 13.503791000666002 - type: recall value: 17.8 - task: type: BitextMining dataset: name: MTEB Tatoeba (kat-eng) type: mteb/tatoeba-bitext-mining config: kat-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.10187667560321 - type: f1 value: 92.46648793565683 - type: precision value: 91.71134941912423 - type: recall value: 94.10187667560321 - task: type: BitextMining dataset: name: MTEB Tatoeba (jpn-eng) type: mteb/tatoeba-bitext-mining config: jpn-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.0 - type: f1 value: 96.11666666666666 - type: precision value: 95.68333333333334 - type: recall value: 97.0 - task: type: BitextMining dataset: name: MTEB Tatoeba (csb-eng) type: mteb/tatoeba-bitext-mining config: csb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 72.72727272727273 - type: f1 value: 66.58949745906267 - type: precision value: 63.86693017127799 - type: recall value: 72.72727272727273 - task: type: BitextMining dataset: name: MTEB Tatoeba (xho-eng) type: mteb/tatoeba-bitext-mining config: xho-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 90.14084507042254 - type: f1 value: 88.26291079812206 - type: precision value: 87.32394366197182 - type: recall value: 90.14084507042254 - task: type: BitextMining dataset: name: MTEB Tatoeba (orv-eng) type: mteb/tatoeba-bitext-mining config: orv-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 64.67065868263472 - type: f1 value: 58.2876627696987 - type: precision value: 
55.79255774165953 - type: recall value: 64.67065868263472 - task: type: BitextMining dataset: name: MTEB Tatoeba (ind-eng) type: mteb/tatoeba-bitext-mining config: ind-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.6 - type: f1 value: 94.41666666666667 - type: precision value: 93.85 - type: recall value: 95.6 - task: type: BitextMining dataset: name: MTEB Tatoeba (tuk-eng) type: mteb/tatoeba-bitext-mining config: tuk-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 55.172413793103445 - type: f1 value: 49.63992493549144 - type: precision value: 47.71405113769646 - type: recall value: 55.172413793103445 - task: type: BitextMining dataset: name: MTEB Tatoeba (max-eng) type: mteb/tatoeba-bitext-mining config: max-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 77.46478873239437 - type: f1 value: 73.4417616811983 - type: precision value: 71.91607981220658 - type: recall value: 77.46478873239437 - task: type: BitextMining dataset: name: MTEB Tatoeba (swh-eng) type: mteb/tatoeba-bitext-mining config: swh-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 84.61538461538461 - type: f1 value: 80.91452991452994 - type: precision value: 79.33760683760683 - type: recall value: 84.61538461538461 - task: type: BitextMining dataset: name: MTEB Tatoeba (hin-eng) type: mteb/tatoeba-bitext-mining config: hin-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 98.2 - type: f1 value: 97.6 - type: precision value: 97.3 - type: recall value: 98.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (dsb-eng) type: mteb/tatoeba-bitext-mining config: dsb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 75.5741127348643 - type: f1 value: 72.00417536534445 - type: precision value: 
70.53467872883321 - type: recall value: 75.5741127348643 - task: type: BitextMining dataset: name: MTEB Tatoeba (ber-eng) type: mteb/tatoeba-bitext-mining config: ber-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 62.2 - type: f1 value: 55.577460317460314 - type: precision value: 52.98583333333333 - type: recall value: 62.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (tam-eng) type: mteb/tatoeba-bitext-mining config: tam-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 92.18241042345277 - type: f1 value: 90.6468124709167 - type: precision value: 89.95656894679696 - type: recall value: 92.18241042345277 - task: type: BitextMining dataset: name: MTEB Tatoeba (slk-eng) type: mteb/tatoeba-bitext-mining config: slk-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.1 - type: f1 value: 95.13333333333333 - type: precision value: 94.66666666666667 - type: recall value: 96.1 - task: type: BitextMining dataset: name: MTEB Tatoeba (tgl-eng) type: mteb/tatoeba-bitext-mining config: tgl-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.8 - type: f1 value: 95.85000000000001 - type: precision value: 95.39999999999999 - type: recall value: 96.8 - task: type: BitextMining dataset: name: MTEB Tatoeba (ast-eng) type: mteb/tatoeba-bitext-mining config: ast-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 92.1259842519685 - type: f1 value: 89.76377952755905 - type: precision value: 88.71391076115485 - type: recall value: 92.1259842519685 - task: type: BitextMining dataset: name: MTEB Tatoeba (mkd-eng) type: mteb/tatoeba-bitext-mining config: mkd-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.1 - type: f1 value: 92.49 - type: precision value: 91.725 - type: recall 
value: 94.1 - task: type: BitextMining dataset: name: MTEB Tatoeba (khm-eng) type: mteb/tatoeba-bitext-mining config: khm-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 77.5623268698061 - type: f1 value: 73.27364463791058 - type: precision value: 71.51947852086357 - type: recall value: 77.5623268698061 - task: type: BitextMining dataset: name: MTEB Tatoeba (ces-eng) type: mteb/tatoeba-bitext-mining config: ces-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.39999999999999 - type: f1 value: 96.56666666666666 - type: precision value: 96.16666666666667 - type: recall value: 97.39999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (tzl-eng) type: mteb/tatoeba-bitext-mining config: tzl-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 66.34615384615384 - type: f1 value: 61.092032967032964 - type: precision value: 59.27197802197802 - type: recall value: 66.34615384615384 - task: type: BitextMining dataset: name: MTEB Tatoeba (urd-eng) type: mteb/tatoeba-bitext-mining config: urd-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.89999999999999 - type: f1 value: 93.41190476190476 - type: precision value: 92.7 - type: recall value: 94.89999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (ara-eng) type: mteb/tatoeba-bitext-mining config: ara-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 93.10000000000001 - type: f1 value: 91.10000000000001 - type: precision value: 90.13333333333333 - type: recall value: 93.10000000000001 - task: type: BitextMining dataset: name: MTEB Tatoeba (kor-eng) type: mteb/tatoeba-bitext-mining config: kor-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 93.7 - type: f1 value: 91.97333333333334 - type: 
precision value: 91.14166666666667 - type: recall value: 93.7 - task: type: BitextMining dataset: name: MTEB Tatoeba (yid-eng) type: mteb/tatoeba-bitext-mining config: yid-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 92.21698113207547 - type: f1 value: 90.3796046720575 - type: precision value: 89.56367924528303 - type: recall value: 92.21698113207547 - task: type: BitextMining dataset: name: MTEB Tatoeba (fin-eng) type: mteb/tatoeba-bitext-mining config: fin-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.6 - type: f1 value: 96.91666666666667 - type: precision value: 96.6 - type: recall value: 97.6 - task: type: BitextMining dataset: name: MTEB Tatoeba (tha-eng) type: mteb/tatoeba-bitext-mining config: tha-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.44525547445255 - type: f1 value: 96.71532846715328 - type: precision value: 96.35036496350365 - type: recall value: 97.44525547445255 - task: type: BitextMining dataset: name: MTEB Tatoeba (wuu-eng) type: mteb/tatoeba-bitext-mining config: wuu-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.1 - type: f1 value: 92.34000000000002 - type: precision value: 91.49166666666667 - type: recall value: 94.1 - task: type: Retrieval dataset: name: MTEB Touche2020 type: webis-touche2020 config: default split: test revision: None metrics: - type: map_at_1 value: 3.2910000000000004 - type: map_at_10 value: 10.373000000000001 - type: map_at_100 value: 15.612 - type: map_at_1000 value: 17.06 - type: map_at_3 value: 6.119 - type: map_at_5 value: 7.917000000000001 - type: mrr_at_1 value: 44.897999999999996 - type: mrr_at_10 value: 56.054 - type: mrr_at_100 value: 56.82000000000001 - type: mrr_at_1000 value: 56.82000000000001 - type: mrr_at_3 value: 52.381 - type: mrr_at_5 value: 53.81 - type: ndcg_at_1 value: 42.857 - 
type: ndcg_at_10 value: 27.249000000000002 - type: ndcg_at_100 value: 36.529 - type: ndcg_at_1000 value: 48.136 - type: ndcg_at_3 value: 33.938 - type: ndcg_at_5 value: 29.951 - type: precision_at_1 value: 44.897999999999996 - type: precision_at_10 value: 22.653000000000002 - type: precision_at_100 value: 7.000000000000001 - type: precision_at_1000 value: 1.48 - type: precision_at_3 value: 32.653 - type: precision_at_5 value: 27.755000000000003 - type: recall_at_1 value: 3.2910000000000004 - type: recall_at_10 value: 16.16 - type: recall_at_100 value: 43.908 - type: recall_at_1000 value: 79.823 - type: recall_at_3 value: 7.156 - type: recall_at_5 value: 10.204 - task: type: Classification dataset: name: MTEB ToxicConversationsClassification type: mteb/toxic_conversations_50k config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 71.05879999999999 - type: ap value: 14.609748142799111 - type: f1 value: 54.878956295843096 - task: type: Classification dataset: name: MTEB TweetSentimentExtractionClassification type: mteb/tweet_sentiment_extraction config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 64.61799660441426 - type: f1 value: 64.8698191961434 - task: type: Clustering dataset: name: MTEB TwentyNewsgroupsClustering type: mteb/twentynewsgroups-clustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 51.32860036611885 - task: type: PairClassification dataset: name: MTEB TwitterSemEval2015 type: mteb/twittersemeval2015-pairclassification config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 88.34714192048638 - type: cos_sim_ap value: 80.26732975975634 - type: cos_sim_f1 value: 73.53415148134374 - type: cos_sim_precision value: 69.34767360299276 - type: cos_sim_recall value: 78.25857519788919 - type: dot_accuracy value: 
88.34714192048638 - type: dot_ap value: 80.26733698491206 - type: dot_f1 value: 73.53415148134374 - type: dot_precision value: 69.34767360299276 - type: dot_recall value: 78.25857519788919 - type: euclidean_accuracy value: 88.34714192048638 - type: euclidean_ap value: 80.26734337771738 - type: euclidean_f1 value: 73.53415148134374 - type: euclidean_precision value: 69.34767360299276 - type: euclidean_recall value: 78.25857519788919 - type: manhattan_accuracy value: 88.30541813196639 - type: manhattan_ap value: 80.19415808104145 - type: manhattan_f1 value: 73.55143870713441 - type: manhattan_precision value: 73.25307511122743 - type: manhattan_recall value: 73.85224274406332 - type: max_accuracy value: 88.34714192048638 - type: max_ap value: 80.26734337771738 - type: max_f1 value: 73.55143870713441 - task: type: PairClassification dataset: name: MTEB TwitterURLCorpus type: mteb/twitterurlcorpus-pairclassification config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 89.81061047075717 - type: cos_sim_ap value: 87.11747055081017 - type: cos_sim_f1 value: 80.04355498817256 - type: cos_sim_precision value: 78.1165262000733 - type: cos_sim_recall value: 82.06806282722513 - type: dot_accuracy value: 89.81061047075717 - type: dot_ap value: 87.11746902745236 - type: dot_f1 value: 80.04355498817256 - type: dot_precision value: 78.1165262000733 - type: dot_recall value: 82.06806282722513 - type: euclidean_accuracy value: 89.81061047075717 - type: euclidean_ap value: 87.11746919324248 - type: euclidean_f1 value: 80.04355498817256 - type: euclidean_precision value: 78.1165262000733 - type: euclidean_recall value: 82.06806282722513 - type: manhattan_accuracy value: 89.79508673885202 - type: manhattan_ap value: 87.11074390832218 - type: manhattan_f1 value: 80.13002540726349 - type: manhattan_precision value: 77.83826945412311 - type: manhattan_recall value: 82.56082537727133 - type: max_accuracy value: 
89.81061047075717 - type: max_ap value: 87.11747055081017 - type: max_f1 value: 80.13002540726349 ---

## Multilingual-E5-large-instruct

[Multilingual E5 Text Embeddings: A Technical Report](https://arxiv.org/pdf/2402.05672). Liang Wang, Nan Yang, Xiaolong Huang, Linjun Yang, Rangan Majumder, Furu Wei, arXiv 2024

This model has 24 layers and an embedding size of 1024.

## Usage

Below are examples of encoding queries and passages from the MS-MARCO passage ranking dataset.

### Transformers

```python
import torch.nn.functional as F

from torch import Tensor
from transformers import AutoTokenizer, AutoModel


def average_pool(last_hidden_states: Tensor,
                 attention_mask: Tensor) -> Tensor:
    last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0)
    return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]


def get_detailed_instruct(task_description: str, query: str) -> str:
    return f'Instruct: {task_description}\nQuery: {query}'


# Each query must come with a one-sentence instruction that describes the task
task = 'Given a web search query, retrieve relevant passages that answer the query'
queries = [
    get_detailed_instruct(task, 'how much protein should a female eat'),
    get_detailed_instruct(task, '南瓜的家常做法')
]
# No need to add instructions for retrieval documents
documents = [
    "As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
    "1.清炒南瓜丝 原料:嫩南瓜半个 调料:葱、盐、白糖、鸡精 做法: 1、南瓜用刀薄薄的削去表面一层皮,用勺子刮去瓤 2、擦成细丝(没有擦菜板就用刀慢慢切成细丝) 3、锅烧热放油,入葱花煸出香味 4、入南瓜丝快速翻炒一分钟左右,放盐、一点白糖和鸡精调味出锅 2.香葱炒南瓜 原料:南瓜1只 调料:香葱、蒜末、橄榄油、盐 做法: 1、将南瓜去皮,切成片 2、油锅8成热后,将蒜末放入爆香 3、爆香后,将南瓜片放入,翻炒 4、在翻炒的同时,可以不时地往锅里加水,但不要太多 5、放入盐,炒匀 6、南瓜差不多软和绵了之后,就可以关火 7、撒入香葱,即可出锅"
]
input_texts = queries + documents

tokenizer = AutoTokenizer.from_pretrained('intfloat/multilingual-e5-large-instruct')
model = AutoModel.from_pretrained('intfloat/multilingual-e5-large-instruct')

# Tokenize the input texts
batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt')

outputs = model(**batch_dict)
embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask'])

# normalize embeddings
embeddings = F.normalize(embeddings, p=2, dim=1)
scores = (embeddings[:2] @ embeddings[2:].T) * 100
print(scores.tolist())
# => [[91.92852783203125, 67.580322265625], [70.3814468383789, 92.1330795288086]]
```

### Sentence Transformers

```python
from sentence_transformers import SentenceTransformer


def get_detailed_instruct(task_description: str, query: str) -> str:
    return f'Instruct: {task_description}\nQuery: {query}'


# Each query must come with a one-sentence instruction that describes the task
task = 'Given a web search query, retrieve relevant passages that answer the query'
queries = [
    get_detailed_instruct(task, 'how much protein should a female eat'),
    get_detailed_instruct(task, '南瓜的家常做法')
]
# No need to add instructions for retrieval documents
documents = [
    "As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
    "1.清炒南瓜丝 原料:嫩南瓜半个 调料:葱、盐、白糖、鸡精 做法: 1、南瓜用刀薄薄的削去表面一层皮,用勺子刮去瓤 2、擦成细丝(没有擦菜板就用刀慢慢切成细丝) 3、锅烧热放油,入葱花煸出香味 4、入南瓜丝快速翻炒一分钟左右,放盐、一点白糖和鸡精调味出锅 2.香葱炒南瓜 原料:南瓜1只 调料:香葱、蒜末、橄榄油、盐 做法: 1、将南瓜去皮,切成片 2、油锅8成热后,将蒜末放入爆香 3、爆香后,将南瓜片放入,翻炒 4、在翻炒的同时,可以不时地往锅里加水,但不要太多 5、放入盐,炒匀 6、南瓜差不多软和绵了之后,就可以关火 7、撒入香葱,即可出锅"
]
input_texts = queries + documents

model = SentenceTransformer('intfloat/multilingual-e5-large-instruct')

embeddings = model.encode(input_texts, convert_to_tensor=True, normalize_embeddings=True)
scores = (embeddings[:2] @ embeddings[2:].T) * 100
print(scores.tolist())
# [[91.92853546142578, 67.5802993774414], [70.38143157958984, 92.13307189941406]]
```

## Supported Languages

This model is initialized from [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) and continually trained on a mixture of multilingual datasets. It supports the 100 languages of xlm-roberta, but low-resource languages may see performance degradation.

## Training Details

**Initialization**: [xlm-roberta-large](https://huggingface.co/xlm-roberta-large)

**First stage**: contrastive pre-training with 1 billion weakly supervised text pairs.

**Second stage**: fine-tuning on datasets from the [E5-mistral](https://arxiv.org/abs/2401.00368) paper.

## MTEB Benchmark Evaluation

Check out [unilm/e5](https://github.com/microsoft/unilm/tree/master/e5) to reproduce the evaluation results on the [BEIR](https://arxiv.org/abs/2104.08663) and [MTEB](https://arxiv.org/abs/2210.07316) benchmarks.

## FAQ

**1. Do I need to add instructions to the query?**

Yes, this is how the model is trained; otherwise you will see a performance degradation. The task definition should be a one-sentence instruction that describes the task. This is a way to customize text embeddings for different scenarios through natural language instructions.

Please check out [unilm/e5/utils.py](https://github.com/microsoft/unilm/blob/9c0f1ff7ca53431fe47d2637dfe253643d94185b/e5/utils.py#L106) for the instructions we used for evaluation.

On the other hand, there is no need to add instructions to the document side.

**2. Why are my reproduced results slightly different from those reported in the model card?**

Different versions of `transformers` and `pytorch` can cause negligible but non-zero performance differences.

**3. Why do the cosine similarity scores fall mostly between 0.7 and 1.0?**

This is known and expected behavior, since we use a low temperature of 0.01 for the InfoNCE contrastive loss. For text embedding tasks such as text retrieval or semantic similarity, what matters is the relative order of the scores rather than their absolute values, so this should not be an issue.

## Citation

If you find our paper or models helpful, please consider citing as follows:

```
@article{wang2024multilingual,
  title={Multilingual E5 Text Embeddings: A Technical Report},
  author={Wang, Liang and Yang, Nan and Huang, Xiaolong and Yang, Linjun and Majumder, Rangan and Wei, Furu},
  journal={arXiv preprint arXiv:2402.05672},
  year={2024}
}
```

## Limitations

Long texts will be truncated to at most 512 tokens.
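The usage snippets above L2-normalize the embeddings and then score query–document pairs with a plain matrix product. A minimal sketch of why that works, using random unit vectors as stand-ins for real model embeddings — the matrix product equals cosine similarity once the vectors are unit-length:

```python
import numpy as np

rng = np.random.default_rng(0)
emb = rng.normal(size=(4, 8))                            # stand-in for model embeddings
emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)   # L2-normalize, as in the snippets

scores = emb[:2] @ emb[2:].T                             # what the usage code computes (before * 100)
cosine = np.array([[np.dot(q, d) / (np.linalg.norm(q) * np.linalg.norm(d))
                    for d in emb[2:]]
                   for q in emb[:2]])
assert np.allclose(scores, cosine)                       # identical for unit-length vectors
```

This is why the snippets can skip an explicit cosine-similarity call: normalization folds the denominator into the embeddings themselves.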
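On FAQ 3: the effect of the 0.01 temperature can be seen in a toy InfoNCE-style softmax. This is only an illustrative sketch with made-up similarity scores, not the actual training code:

```python
import numpy as np

def softmax_with_temperature(sims: np.ndarray, temperature: float) -> np.ndarray:
    """Softmax over similarity scores divided by a temperature."""
    logits = sims / temperature
    logits = logits - logits.max()   # subtract max for numerical stability
    p = np.exp(logits)
    return p / p.sum()

# Made-up cosine similarities: one positive pair followed by two negatives
sims = np.array([0.95, 0.80, 0.75])

sharp = softmax_with_temperature(sims, 0.01)   # the training temperature
flat = softmax_with_temperature(sims, 1.0)

# At temperature 0.01, tiny cosine gaps become a near one-hot distribution,
# so training pushes all scores toward the high end of the range.
assert sharp[0] > 0.99
assert flat[0] < 0.5
```

With the low temperature, the loss is already near its minimum once the positive pair scores only slightly above the negatives, which is why absolute similarity values cluster high while relative order stays informative.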
[ "BIOSSES", "SCIFACT" ]
openbmb/MiniCPM-Embedding-Light
openbmb
feature-extraction
[ "transformers", "safetensors", "minicpm", "feature-extraction", "mteb", "sentence-transformers", "custom_code", "arxiv:2202.08904", "model-index", "region:us" ]
2025-01-17T08:19:39Z
2025-02-05T03:14:44+00:00
329
11
--- library_name: transformers pipeline_tag: feature-extraction tags: - mteb - sentence-transformers model-index: - name: no_model_name_available results: - task: type: STS dataset: name: MTEB AFQMC (default) type: C-MTEB/AFQMC config: default split: validation revision: b44c3b011063adb25877c13823db83bb193913c4 metrics: - type: cosine_pearson value: 31.60219205269865 - type: cosine_spearman value: 32.26566089398552 - type: euclidean_pearson value: 31.38659295608159 - type: euclidean_spearman value: 32.265680997074284 - type: main_score value: 32.26566089398552 - type: manhattan_pearson value: 31.012318343485934 - type: manhattan_spearman value: 31.881347232593882 - type: pearson value: 31.60219205269865 - type: spearman value: 32.26566089398552 - task: type: STS dataset: name: MTEB ATEC (default) type: C-MTEB/ATEC config: default split: test revision: 0f319b1142f28d00e055a6770f3f726ae9b7d865 metrics: - type: cosine_pearson value: 40.89963324512739 - type: cosine_spearman value: 40.342262626966686 - type: euclidean_pearson value: 43.26579075620696 - type: euclidean_spearman value: 40.34226375259283 - type: main_score value: 40.342262626966686 - type: manhattan_pearson value: 43.09428997760782 - type: manhattan_spearman value: 40.132604575720485 - type: pearson value: 40.89963324512739 - type: spearman value: 40.342262626966686 - task: type: STS dataset: name: MTEB ATEC (default) type: C-MTEB/ATEC config: default split: validation revision: 0f319b1142f28d00e055a6770f3f726ae9b7d865 metrics: - type: cosine_pearson value: 40.97674579633659 - type: cosine_spearman value: 41.15073385665892 - type: euclidean_pearson value: 43.12674145119401 - type: euclidean_spearman value: 41.15073497290901 - type: main_score value: 41.15073385665892 - type: manhattan_pearson value: 43.016332350517416 - type: manhattan_spearman value: 40.99128368771293 - type: pearson value: 40.97674579633659 - type: spearman value: 41.15073385665892 - task: type: Classification dataset: name: MTEB 
AmazonCounterfactualClassification (en) type: mteb/amazon_counterfactual config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 74.1492537313433 - type: ap value: 36.58820102143676 - type: ap_weighted value: 36.58820102143676 - type: f1 value: 67.93641050300623 - type: f1_weighted value: 76.47946936836382 - type: main_score value: 74.1492537313433 - task: type: Classification dataset: name: MTEB AmazonPolarityClassification (default) type: mteb/amazon_polarity config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 92.57937499999998 - type: ap value: 89.09881932276382 - type: ap_weighted value: 89.09881932276382 - type: f1 value: 92.57389464257594 - type: f1_weighted value: 92.57389464257594 - type: main_score value: 92.57937499999998 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (en) type: mteb/amazon_reviews_multi config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 47.95399999999999 - type: f1 value: 45.23480325168402 - type: f1_weighted value: 45.23480325168403 - type: main_score value: 47.95399999999999 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (zh) type: mteb/amazon_reviews_multi config: zh split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 43.916000000000004 - type: f1 value: 40.79038102586015 - type: f1_weighted value: 40.79038102586015 - type: main_score value: 43.916000000000004 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (en) type: mteb/amazon_reviews_multi config: en split: validation revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 47.716 - type: f1 value: 44.97469896514136 - type: f1_weighted value: 44.97469896514136 - type: main_score value: 47.716 - task: type: Classification dataset: name: MTEB 
AmazonReviewsClassification (zh) type: mteb/amazon_reviews_multi config: zh split: validation revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 43.016000000000005 - type: f1 value: 39.88062282479835 - type: f1_weighted value: 39.88062282479835 - type: main_score value: 43.016000000000005 - task: type: Retrieval dataset: name: MTEB ArguAna (default) type: mteb/arguana config: default split: test revision: c22ab2a51041ffd869aaddef7af8d8215647e41a metrics: - type: main_score value: 55.97299999999999 - type: map_at_1 value: 31.009999999999998 - type: map_at_10 value: 46.951 - type: map_at_100 value: 47.788000000000004 - type: map_at_1000 value: 47.794 - type: map_at_20 value: 47.656 - type: map_at_3 value: 41.69 - type: map_at_5 value: 44.795 - type: mrr_at_1 value: 31.57894736842105 - type: mrr_at_10 value: 47.150336426652245 - type: mrr_at_100 value: 48.00003421265431 - type: mrr_at_1000 value: 48.006517491673485 - type: mrr_at_20 value: 47.86823495425013 - type: mrr_at_3 value: 41.90374585111427 - type: mrr_at_5 value: 45.00474158368897 - type: nauc_map_at_1000_diff1 value: 14.400156277962079 - type: nauc_map_at_1000_max value: -6.074701279893042 - type: nauc_map_at_1000_std value: -12.047730490841793 - type: nauc_map_at_100_diff1 value: 14.400167976253817 - type: nauc_map_at_100_max value: -6.0697710559623825 - type: nauc_map_at_100_std value: -12.03623231778573 - type: nauc_map_at_10_diff1 value: 14.39390977335818 - type: nauc_map_at_10_max value: -5.937292882369333 - type: nauc_map_at_10_std value: -11.955448521986341 - type: nauc_map_at_1_diff1 value: 18.2188090059407 - type: nauc_map_at_1_max value: -6.90680836409332 - type: nauc_map_at_1_std value: -11.42044016086847 - type: nauc_map_at_20_diff1 value: 14.25797265657041 - type: nauc_map_at_20_max value: -6.136254023725178 - type: nauc_map_at_20_std value: -12.095812481204513 - type: nauc_map_at_3_diff1 value: 14.694055542759067 - type: nauc_map_at_3_max value: 
-5.922208526639951 - type: nauc_map_at_3_std value: -12.637146606706324 - type: nauc_map_at_5_diff1 value: 14.034909746881796 - type: nauc_map_at_5_max value: -6.037648673220035 - type: nauc_map_at_5_std value: -12.488119466760367 - type: nauc_mrr_at_1000_diff1 value: 12.907349893032888 - type: nauc_mrr_at_1000_max value: -6.476631933744489 - type: nauc_mrr_at_1000_std value: -12.135655638319898 - type: nauc_mrr_at_100_diff1 value: 12.90767904668398 - type: nauc_mrr_at_100_max value: -6.471625560815013 - type: nauc_mrr_at_100_std value: -12.124160525865376 - type: nauc_mrr_at_10_diff1 value: 12.898084989549307 - type: nauc_mrr_at_10_max value: -6.371999485392878 - type: nauc_mrr_at_10_std value: -12.060712822104344 - type: nauc_mrr_at_1_diff1 value: 16.534028417854632 - type: nauc_mrr_at_1_max value: -6.531221880816804 - type: nauc_mrr_at_1_std value: -11.427032725801363 - type: nauc_mrr_at_20_diff1 value: 12.772149932536516 - type: nauc_mrr_at_20_max value: -6.536237532046593 - type: nauc_mrr_at_20_std value: -12.18322445801735 - type: nauc_mrr_at_3_diff1 value: 13.294722540439723 - type: nauc_mrr_at_3_max value: -6.270285589254632 - type: nauc_mrr_at_3_std value: -12.590739373950477 - type: nauc_mrr_at_5_diff1 value: 12.701572066028916 - type: nauc_mrr_at_5_max value: -6.35025779804965 - type: nauc_mrr_at_5_std value: -12.567997847961006 - type: nauc_ndcg_at_1000_diff1 value: 14.04477346308097 - type: nauc_ndcg_at_1000_max value: -5.805803656284627 - type: nauc_ndcg_at_1000_std value: -11.903389341799974 - type: nauc_ndcg_at_100_diff1 value: 14.046024694124535 - type: nauc_ndcg_at_100_max value: -5.638595406841976 - type: nauc_ndcg_at_100_std value: -11.563718937605266 - type: nauc_ndcg_at_10_diff1 value: 13.774482728152659 - type: nauc_ndcg_at_10_max value: -5.112671934691593 - type: nauc_ndcg_at_10_std value: -11.45598979914733 - type: nauc_ndcg_at_1_diff1 value: 18.2188090059407 - type: nauc_ndcg_at_1_max value: -6.90680836409332 - type: nauc_ndcg_at_1_std 
value: -11.42044016086847 - type: nauc_ndcg_at_20_diff1 value: 13.19308743032763 - type: nauc_ndcg_at_20_max value: -5.925869069550241 - type: nauc_ndcg_at_20_std value: -12.002174058926709 - type: nauc_ndcg_at_3_diff1 value: 14.098445595476438 - type: nauc_ndcg_at_3_max value: -5.438990657735945 - type: nauc_ndcg_at_3_std value: -13.026198448199588 - type: nauc_ndcg_at_5_diff1 value: 12.887695825204021 - type: nauc_ndcg_at_5_max value: -5.527892954283733 - type: nauc_ndcg_at_5_std value: -12.79674424315614 - type: nauc_precision_at_1000_diff1 value: 15.720975272424962 - type: nauc_precision_at_1000_max value: -9.434922353859656 - type: nauc_precision_at_1000_std value: -12.201774463835351 - type: nauc_precision_at_100_diff1 value: 14.822568320368415 - type: nauc_precision_at_100_max value: 16.970591395955335 - type: nauc_precision_at_100_std value: 34.44303415297543 - type: nauc_precision_at_10_diff1 value: 10.924572747165758 - type: nauc_precision_at_10_max value: 0.7245336905113386 - type: nauc_precision_at_10_std value: -7.246984906362029 - type: nauc_precision_at_1_diff1 value: 18.2188090059407 - type: nauc_precision_at_1_max value: -6.90680836409332 - type: nauc_precision_at_1_std value: -11.42044016086847 - type: nauc_precision_at_20_diff1 value: -3.338584460694707 - type: nauc_precision_at_20_max value: -4.566280243136391 - type: nauc_precision_at_20_std value: -10.006136097038183 - type: nauc_precision_at_3_diff1 value: 12.491306916226456 - type: nauc_precision_at_3_max value: -3.939014391748743 - type: nauc_precision_at_3_std value: -14.18952698929006 - type: nauc_precision_at_5_diff1 value: 8.856000600248196 - type: nauc_precision_at_5_max value: -3.5855091847389 - type: nauc_precision_at_5_std value: -13.869699312071923 - type: nauc_recall_at_1000_diff1 value: 15.720975272417975 - type: nauc_recall_at_1000_max value: -9.434922353860903 - type: nauc_recall_at_1000_std value: -12.201774463832038 - type: nauc_recall_at_100_diff1 value: 14.822568320369559 - 
type: nauc_recall_at_100_max value: 16.970591395954745 - type: nauc_recall_at_100_std value: 34.443034152975024 - type: nauc_recall_at_10_diff1 value: 10.924572747165762 - type: nauc_recall_at_10_max value: 0.724533690511315 - type: nauc_recall_at_10_std value: -7.246984906362018 - type: nauc_recall_at_1_diff1 value: 18.2188090059407 - type: nauc_recall_at_1_max value: -6.90680836409332 - type: nauc_recall_at_1_std value: -11.42044016086847 - type: nauc_recall_at_20_diff1 value: -3.3385844606947677 - type: nauc_recall_at_20_max value: -4.566280243136629 - type: nauc_recall_at_20_std value: -10.006136097038366 - type: nauc_recall_at_3_diff1 value: 12.491306916226472 - type: nauc_recall_at_3_max value: -3.939014391748735 - type: nauc_recall_at_3_std value: -14.189526989290059 - type: nauc_recall_at_5_diff1 value: 8.856000600248263 - type: nauc_recall_at_5_max value: -3.5855091847388603 - type: nauc_recall_at_5_std value: -13.869699312071909 - type: ndcg_at_1 value: 31.009999999999998 - type: ndcg_at_10 value: 55.97299999999999 - type: ndcg_at_100 value: 59.272000000000006 - type: ndcg_at_1000 value: 59.407 - type: ndcg_at_20 value: 58.449 - type: ndcg_at_3 value: 45.227000000000004 - type: ndcg_at_5 value: 50.792 - type: precision_at_1 value: 31.009999999999998 - type: precision_at_10 value: 8.485 - type: precision_at_100 value: 0.9860000000000001 - type: precision_at_1000 value: 0.1 - type: precision_at_20 value: 4.723 - type: precision_at_3 value: 18.492 - type: precision_at_5 value: 13.783999999999999 - type: recall_at_1 value: 31.009999999999998 - type: recall_at_10 value: 84.851 - type: recall_at_100 value: 98.649 - type: recall_at_1000 value: 99.644 - type: recall_at_20 value: 94.452 - type: recall_at_3 value: 55.477 - type: recall_at_5 value: 68.919 - task: type: Clustering dataset: name: MTEB ArxivClusteringP2P (default) type: mteb/arxiv-clustering-p2p config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: main_score 
value: 48.31683216128774 - type: v_measure value: 48.31683216128774 - type: v_measure_std value: 13.795207109799703 - task: type: Clustering dataset: name: MTEB ArxivClusteringS2S (default) type: mteb/arxiv-clustering-s2s config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: main_score value: 40.2951016935384 - type: v_measure value: 40.2951016935384 - type: v_measure_std value: 14.193710444297869 - task: type: Reranking dataset: name: MTEB AskUbuntuDupQuestions (default) type: mteb/askubuntudupquestions-reranking config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: main_score value: 60.45095169935259 - type: map value: 60.45095169935259 - type: mrr value: 73.43567251461988 - type: nAUC_map_diff1 value: 15.357222913791704 - type: nAUC_map_max value: 24.301239659848346 - type: nAUC_map_std value: 18.26732583044278 - type: nAUC_mrr_diff1 value: 24.108010981589057 - type: nAUC_mrr_max value: 34.90261214387396 - type: nAUC_mrr_std value: 20.350034497982126 - task: type: STS dataset: name: MTEB BIOSSES (default) type: mteb/biosses-sts config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cosine_pearson value: 90.16604991710759 - type: cosine_spearman value: 88.4670760233051 - type: euclidean_pearson value: 89.02378164860428 - type: euclidean_spearman value: 88.4670760233051 - type: main_score value: 88.4670760233051 - type: manhattan_pearson value: 88.8866912507422 - type: manhattan_spearman value: 88.2755053931781 - type: pearson value: 90.16604991710759 - type: spearman value: 88.4670760233051 - task: type: STS dataset: name: MTEB BQ (default) type: C-MTEB/BQ config: default split: test revision: e3dda5e115e487b39ec7e618c0c6a29137052a55 metrics: - type: cosine_pearson value: 49.45233904713201 - type: cosine_spearman value: 49.77342815602789 - type: euclidean_pearson value: 49.13579036236359 - type: euclidean_spearman value: 49.77342122767529 - 
type: main_score value: 49.77342815602789 - type: manhattan_pearson value: 49.01322677955527 - type: manhattan_spearman value: 49.702538779772226 - type: pearson value: 49.45233904713201 - type: spearman value: 49.77342815602789 - task: type: STS dataset: name: MTEB BQ (default) type: C-MTEB/BQ config: default split: validation revision: e3dda5e115e487b39ec7e618c0c6a29137052a55 metrics: - type: cosine_pearson value: 53.43473222697715 - type: cosine_spearman value: 54.24325202324013 - type: euclidean_pearson value: 53.4053341221681 - type: euclidean_spearman value: 54.2432485591385 - type: main_score value: 54.24325202324013 - type: manhattan_pearson value: 53.31602762068146 - type: manhattan_spearman value: 54.180811590825925 - type: pearson value: 53.43473222697715 - type: spearman value: 54.24325202324013 - task: type: Classification dataset: name: MTEB Banking77Classification (default) type: mteb/banking77 config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 82.11038961038962 - type: f1 value: 81.50275371635729 - type: f1_weighted value: 81.50275371635732 - type: main_score value: 82.11038961038962 - task: type: Clustering dataset: name: MTEB BiorxivClusteringP2P (default) type: mteb/biorxiv-clustering-p2p config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: main_score value: 39.85718105201471 - type: v_measure value: 39.85718105201471 - type: v_measure_std value: 0.9098592525717781 - task: type: Clustering dataset: name: MTEB BiorxivClusteringS2S (default) type: mteb/biorxiv-clustering-s2s config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: main_score value: 34.901371726743854 - type: v_measure value: 34.901371726743854 - type: v_measure_std value: 0.49131958662099773 - task: type: Clustering dataset: name: MTEB CLSClusteringP2P (default) type: C-MTEB/CLSClusteringP2P config: default split: test revision: 
4b6227591c6c1a73bc76b1055f3b7f3588e72476 metrics: - type: main_score value: 42.580911514601844 - type: v_measure value: 42.580911514601844 - type: v_measure_std value: 1.3262494874619402 - task: type: Clustering dataset: name: MTEB CLSClusteringS2S (default) type: C-MTEB/CLSClusteringS2S config: default split: test revision: e458b3f5414b62b7f9f83499ac1f5497ae2e869f metrics: - type: main_score value: 38.36369670561906 - type: v_measure value: 38.36369670561906 - type: v_measure_std value: 1.3030031287521193 - task: type: Reranking dataset: name: MTEB CMedQAv1-reranking (default) type: C-MTEB/CMedQAv1-reranking config: default split: test revision: 8d7f1e942507dac42dc58017c1a001c3717da7df metrics: - type: main_score value: 82.23318409776884 - type: map value: 82.23318409776884 - type: mrr value: 85.05289682539681 - type: nAUC_map_diff1 value: 53.922817335441664 - type: nAUC_map_max value: 63.38587877583035 - type: nAUC_map_std value: 26.58945323149115 - type: nAUC_mrr_diff1 value: 61.2457871312172 - type: nAUC_mrr_max value: 71.77558608272952 - type: nAUC_mrr_std value: 35.945961549335976 - task: type: Reranking dataset: name: MTEB CMedQAv2-reranking (default) type: C-MTEB/CMedQAv2-reranking config: default split: test revision: 23d186750531a14a0357ca22cd92d712fd512ea0 metrics: - type: main_score value: 83.28208766373744 - type: map value: 83.28208766373744 - type: mrr value: 85.81444444444443 - type: nAUC_map_diff1 value: 59.23043241198723 - type: nAUC_map_max value: 63.96198552688328 - type: nAUC_map_std value: 17.563221080927807 - type: nAUC_mrr_diff1 value: 66.27403933527562 - type: nAUC_mrr_max value: 74.24319995478142 - type: nAUC_mrr_std value: 26.84913877864022 - task: type: Retrieval dataset: name: MTEB CQADupstackAndroidRetrieval (default) type: mteb/cqadupstack-android config: default split: test revision: f46a197baaae43b4f621051089b82a364682dfeb metrics: - type: main_score value: 51.791 - type: map_at_1 value: 33.489000000000004 - type: map_at_10 value: 
45.362 - type: map_at_100 value: 46.847 - type: map_at_1000 value: 46.963 - type: map_at_20 value: 46.167 - type: map_at_3 value: 41.737 - type: map_at_5 value: 43.747 - type: mrr_at_1 value: 40.486409155937054 - type: mrr_at_10 value: 51.12570111497148 - type: mrr_at_100 value: 51.86187493461626 - type: mrr_at_1000 value: 51.89536424646558 - type: mrr_at_20 value: 51.54190377431117 - type: mrr_at_3 value: 48.56938483547925 - type: mrr_at_5 value: 50.171673819742466 - type: nauc_map_at_1000_diff1 value: 45.83742367768875 - type: nauc_map_at_1000_max value: 36.666030418631365 - type: nauc_map_at_1000_std value: -3.0749754490409598 - type: nauc_map_at_100_diff1 value: 45.81723006290297 - type: nauc_map_at_100_max value: 36.669471954500835 - type: nauc_map_at_100_std value: -3.0711605055120037 - type: nauc_map_at_10_diff1 value: 46.11671975824962 - type: nauc_map_at_10_max value: 36.41961760572779 - type: nauc_map_at_10_std value: -3.5676307490322294 - type: nauc_map_at_1_diff1 value: 48.99600869130432 - type: nauc_map_at_1_max value: 30.72533190025592 - type: nauc_map_at_1_std value: -7.210226805142472 - type: nauc_map_at_20_diff1 value: 45.730620597411416 - type: nauc_map_at_20_max value: 36.67067673690639 - type: nauc_map_at_20_std value: -3.0616760792842874 - type: nauc_map_at_3_diff1 value: 46.3900637210476 - type: nauc_map_at_3_max value: 35.04691686861482 - type: nauc_map_at_3_std value: -4.855804907542516 - type: nauc_map_at_5_diff1 value: 46.30354693063511 - type: nauc_map_at_5_max value: 36.160207495289946 - type: nauc_map_at_5_std value: -3.7612546075044024 - type: nauc_mrr_at_1000_diff1 value: 44.94342955084924 - type: nauc_mrr_at_1000_max value: 36.5868635648845 - type: nauc_mrr_at_1000_std value: -3.7279540299450598 - type: nauc_mrr_at_100_diff1 value: 44.9241145632844 - type: nauc_mrr_at_100_max value: 36.58379839831864 - type: nauc_mrr_at_100_std value: -3.7418032288649385 - type: nauc_mrr_at_10_diff1 value: 45.00805694123448 - type: nauc_mrr_at_10_max 
value: 36.705567574937454 - type: nauc_mrr_at_10_std value: -3.602116114964355 - type: nauc_mrr_at_1_diff1 value: 47.14298489978003 - type: nauc_mrr_at_1_max value: 33.38843521905287 - type: nauc_mrr_at_1_std value: -8.505210257231145 - type: nauc_mrr_at_20_diff1 value: 44.83329863262661 - type: nauc_mrr_at_20_max value: 36.589698139628496 - type: nauc_mrr_at_20_std value: -3.620200313971379 - type: nauc_mrr_at_3_diff1 value: 44.95899691734053 - type: nauc_mrr_at_3_max value: 36.61014661536669 - type: nauc_mrr_at_3_std value: -4.235751267084451 - type: nauc_mrr_at_5_diff1 value: 45.43301143912572 - type: nauc_mrr_at_5_max value: 37.016764711532716 - type: nauc_mrr_at_5_std value: -3.7811565499003232 - type: nauc_ndcg_at_1000_diff1 value: 44.56347509930279 - type: nauc_ndcg_at_1000_max value: 37.58231608565612 - type: nauc_ndcg_at_1000_std value: -1.0148805105229683 - type: nauc_ndcg_at_100_diff1 value: 44.21798254097979 - type: nauc_ndcg_at_100_max value: 37.55836639241636 - type: nauc_ndcg_at_100_std value: -1.119038291236023 - type: nauc_ndcg_at_10_diff1 value: 44.77884245032202 - type: nauc_ndcg_at_10_max value: 37.800051548342246 - type: nauc_ndcg_at_10_std value: -1.48841695838196 - type: nauc_ndcg_at_1_diff1 value: 47.14298489978003 - type: nauc_ndcg_at_1_max value: 33.38843521905287 - type: nauc_ndcg_at_1_std value: -8.505210257231145 - type: nauc_ndcg_at_20_diff1 value: 43.65031596123121 - type: nauc_ndcg_at_20_max value: 37.69836062122585 - type: nauc_ndcg_at_20_std value: -0.8253052163035528 - type: nauc_ndcg_at_3_diff1 value: 45.00478060029277 - type: nauc_ndcg_at_3_max value: 36.75297532264166 - type: nauc_ndcg_at_3_std value: -3.0054585641131655 - type: nauc_ndcg_at_5_diff1 value: 45.24437062894877 - type: nauc_ndcg_at_5_max value: 37.88266316994465 - type: nauc_ndcg_at_5_std value: -1.701786097430671 - type: nauc_precision_at_1000_diff1 value: -11.911798432587343 - type: nauc_precision_at_1000_max value: -10.189977280120303 - type: 
nauc_precision_at_1000_std value: -5.213316467405967 - type: nauc_precision_at_100_diff1 value: -6.795008520695643 - type: nauc_precision_at_100_max value: 1.308872758510908 - type: nauc_precision_at_100_std value: 3.1390422505657627 - type: nauc_precision_at_10_diff1 value: 12.648590902867074 - type: nauc_precision_at_10_max value: 24.68660171555869 - type: nauc_precision_at_10_std value: 7.893487447107204 - type: nauc_precision_at_1_diff1 value: 47.14298489978003 - type: nauc_precision_at_1_max value: 33.38843521905287 - type: nauc_precision_at_1_std value: -8.505210257231145 - type: nauc_precision_at_20_diff1 value: 2.7434758735468048 - type: nauc_precision_at_20_max value: 17.55565926646876 - type: nauc_precision_at_20_std value: 10.321439048951452 - type: nauc_precision_at_3_diff1 value: 29.566919929400875 - type: nauc_precision_at_3_max value: 33.95479571575024 - type: nauc_precision_at_3_std value: 1.7592238216915597 - type: nauc_precision_at_5_diff1 value: 22.428208270307856 - type: nauc_precision_at_5_max value: 31.004215116158413 - type: nauc_precision_at_5_std value: 5.279489297223801 - type: nauc_recall_at_1000_diff1 value: 31.890454093099407 - type: nauc_recall_at_1000_max value: 51.376825921063386 - type: nauc_recall_at_1000_std value: 59.90888686683735 - type: nauc_recall_at_100_diff1 value: 31.697335059128505 - type: nauc_recall_at_100_max value: 38.760900054389786 - type: nauc_recall_at_100_std value: 14.477418407176682 - type: nauc_recall_at_10_diff1 value: 37.593976107308166 - type: nauc_recall_at_10_max value: 37.120867787083576 - type: nauc_recall_at_10_std value: 4.0458731062140165 - type: nauc_recall_at_1_diff1 value: 48.99600869130432 - type: nauc_recall_at_1_max value: 30.72533190025592 - type: nauc_recall_at_1_std value: -7.210226805142472 - type: nauc_recall_at_20_diff1 value: 31.75084814121109 - type: nauc_recall_at_20_max value: 36.78465637755701 - type: nauc_recall_at_20_std value: 7.600404385507733 - type: nauc_recall_at_3_diff1 
value: 40.91244393504077 - type: nauc_recall_at_3_max value: 35.611100064289175 - type: nauc_recall_at_3_std value: -1.7314625087631257 - type: nauc_recall_at_5_diff1 value: 40.48529204446073 - type: nauc_recall_at_5_max value: 37.96938179146327 - type: nauc_recall_at_5_std value: 2.243463426136501 - type: ndcg_at_1 value: 40.486 - type: ndcg_at_10 value: 51.791 - type: ndcg_at_100 value: 57.218999999999994 - type: ndcg_at_1000 value: 58.846 - type: ndcg_at_20 value: 53.82900000000001 - type: ndcg_at_3 value: 46.727999999999994 - type: ndcg_at_5 value: 49.126 - type: precision_at_1 value: 40.486 - type: precision_at_10 value: 9.771 - type: precision_at_100 value: 1.562 - type: precision_at_1000 value: 0.202 - type: precision_at_20 value: 5.7509999999999994 - type: precision_at_3 value: 22.556 - type: precision_at_5 value: 16.052 - type: recall_at_1 value: 33.489000000000004 - type: recall_at_10 value: 64.071 - type: recall_at_100 value: 86.47500000000001 - type: recall_at_1000 value: 96.408 - type: recall_at_20 value: 71.273 - type: recall_at_3 value: 49.547999999999995 - type: recall_at_5 value: 56.393 - task: type: Retrieval dataset: name: MTEB CQADupstackEnglishRetrieval (default) type: mteb/cqadupstack-english config: default split: test revision: ad9991cb51e31e31e430383c75ffb2885547b5f0 metrics: - type: main_score value: 49.274 - type: map_at_1 value: 33.019 - type: map_at_10 value: 43.469 - type: map_at_100 value: 44.818999999999996 - type: map_at_1000 value: 44.944 - type: map_at_20 value: 44.204 - type: map_at_3 value: 40.215 - type: map_at_5 value: 42.138999999999996 - type: mrr_at_1 value: 41.082802547770704 - type: mrr_at_10 value: 49.50763320190077 - type: mrr_at_100 value: 50.15386440914099 - type: mrr_at_1000 value: 50.1948078078438 - type: mrr_at_20 value: 49.86890003378296 - type: mrr_at_3 value: 47.250530785562646 - type: mrr_at_5 value: 48.65817409766459 - type: nauc_map_at_1000_diff1 value: 53.82535875039235 - type: nauc_map_at_1000_max value: 
45.453250348612215 - type: nauc_map_at_1000_std value: -1.9559612984873571 - type: nauc_map_at_100_diff1 value: 53.81013847448271 - type: nauc_map_at_100_max value: 45.392209330066066 - type: nauc_map_at_100_std value: -2.0524451381485234 - type: nauc_map_at_10_diff1 value: 54.209459779949384 - type: nauc_map_at_10_max value: 44.883275752243065 - type: nauc_map_at_10_std value: -3.6109937791207094 - type: nauc_map_at_1_diff1 value: 58.94514805782117 - type: nauc_map_at_1_max value: 39.37520774150509 - type: nauc_map_at_1_std value: -8.720964154916928 - type: nauc_map_at_20_diff1 value: 53.8348887034513 - type: nauc_map_at_20_max value: 44.99782089147465 - type: nauc_map_at_20_std value: -2.718742980010167 - type: nauc_map_at_3_diff1 value: 56.02884388647345 - type: nauc_map_at_3_max value: 43.415666030670124 - type: nauc_map_at_3_std value: -6.731028873830273 - type: nauc_map_at_5_diff1 value: 54.723746443656566 - type: nauc_map_at_5_max value: 44.58690708846215 - type: nauc_map_at_5_std value: -5.030535383171446 - type: nauc_mrr_at_1000_diff1 value: 53.153007923698894 - type: nauc_mrr_at_1000_max value: 47.498466648364534 - type: nauc_mrr_at_1000_std value: 1.2882577043538435 - type: nauc_mrr_at_100_diff1 value: 53.135489251238056 - type: nauc_mrr_at_100_max value: 47.48916134974268 - type: nauc_mrr_at_100_std value: 1.2889395420272438 - type: nauc_mrr_at_10_diff1 value: 53.1220415513986 - type: nauc_mrr_at_10_max value: 47.490791997767964 - type: nauc_mrr_at_10_std value: 1.1444407350516157 - type: nauc_mrr_at_1_diff1 value: 57.559058682171504 - type: nauc_mrr_at_1_max value: 46.89026874220749 - type: nauc_mrr_at_1_std value: -1.9116043469494446 - type: nauc_mrr_at_20_diff1 value: 53.034500689960275 - type: nauc_mrr_at_20_max value: 47.41450821815849 - type: nauc_mrr_at_20_std value: 1.240765437252736 - type: nauc_mrr_at_3_diff1 value: 54.25315882717826 - type: nauc_mrr_at_3_max value: 47.428006007217235 - type: nauc_mrr_at_3_std value: -0.12495431309209105 - 
type: nauc_mrr_at_5_diff1 value: 53.5054857141475 - type: nauc_mrr_at_5_max value: 47.83146647409837 - type: nauc_mrr_at_5_std value: 0.5629970448268111 - type: nauc_ndcg_at_1000_diff1 value: 51.261194449319504 - type: nauc_ndcg_at_1000_max value: 46.994312489862835 - type: nauc_ndcg_at_1000_std value: 3.2428209322165067 - type: nauc_ndcg_at_100_diff1 value: 50.84368410402597 - type: nauc_ndcg_at_100_max value: 46.73298393365377 - type: nauc_ndcg_at_100_std value: 2.904073356585609 - type: nauc_ndcg_at_10_diff1 value: 51.72255521298621 - type: nauc_ndcg_at_10_max value: 46.31005929924904 - type: nauc_ndcg_at_10_std value: 0.2715351422503746 - type: nauc_ndcg_at_1_diff1 value: 57.559058682171504 - type: nauc_ndcg_at_1_max value: 46.89026874220749 - type: nauc_ndcg_at_1_std value: -1.9116043469494446 - type: nauc_ndcg_at_20_diff1 value: 50.8506271301813 - type: nauc_ndcg_at_20_max value: 46.0583706384306 - type: nauc_ndcg_at_20_std value: 1.6396894489539218 - type: nauc_ndcg_at_3_diff1 value: 54.00038574913631 - type: nauc_ndcg_at_3_max value: 46.076178038905404 - type: nauc_ndcg_at_3_std value: -2.211424037505318 - type: nauc_ndcg_at_5_diff1 value: 52.628195775092316 - type: nauc_ndcg_at_5_max value: 46.78093894422556 - type: nauc_ndcg_at_5_std value: -1.3380283106634656 - type: nauc_precision_at_1000_diff1 value: -12.938958862510566 - type: nauc_precision_at_1000_max value: 8.556158319175314 - type: nauc_precision_at_1000_std value: 28.485389071197346 - type: nauc_precision_at_100_diff1 value: -8.770372899573491 - type: nauc_precision_at_100_max value: 18.05611676926777 - type: nauc_precision_at_100_std value: 33.603692427049545 - type: nauc_precision_at_10_diff1 value: 10.17936772396029 - type: nauc_precision_at_10_max value: 33.28847244292926 - type: nauc_precision_at_10_std value: 24.05529615188066 - type: nauc_precision_at_1_diff1 value: 57.559058682171504 - type: nauc_precision_at_1_max value: 46.89026874220749 - type: nauc_precision_at_1_std value: 
-1.9116043469494446 - type: nauc_precision_at_20_diff1 value: 0.46596639548970015 - type: nauc_precision_at_20_max value: 26.34396955936117 - type: nauc_precision_at_20_std value: 29.960110998616308 - type: nauc_precision_at_3_diff1 value: 32.1884032130926 - type: nauc_precision_at_3_max value: 42.9623864532112 - type: nauc_precision_at_3_std value: 9.406319207236965 - type: nauc_precision_at_5_diff1 value: 20.663922808040514 - type: nauc_precision_at_5_max value: 40.23784932763058 - type: nauc_precision_at_5_std value: 16.15485535812318 - type: nauc_recall_at_1000_diff1 value: 34.02276539506821 - type: nauc_recall_at_1000_max value: 51.78898549190249 - type: nauc_recall_at_1000_std value: 38.51821109938462 - type: nauc_recall_at_100_diff1 value: 35.11970287568031 - type: nauc_recall_at_100_max value: 45.26179169180922 - type: nauc_recall_at_100_std value: 19.468341893615374 - type: nauc_recall_at_10_diff1 value: 42.731604441196666 - type: nauc_recall_at_10_max value: 42.89410379930046 - type: nauc_recall_at_10_std value: 3.5259768753999587 - type: nauc_recall_at_1_diff1 value: 58.94514805782117 - type: nauc_recall_at_1_max value: 39.37520774150509 - type: nauc_recall_at_1_std value: -8.720964154916928 - type: nauc_recall_at_20_diff1 value: 38.6527326827719 - type: nauc_recall_at_20_max value: 41.81381796149285 - type: nauc_recall_at_20_std value: 9.447128423015046 - type: nauc_recall_at_3_diff1 value: 51.06019004682993 - type: nauc_recall_at_3_max value: 42.099338080420274 - type: nauc_recall_at_3_std value: -6.020642288695232 - type: nauc_recall_at_5_diff1 value: 46.14582217531629 - type: nauc_recall_at_5_max value: 43.94158387704093 - type: nauc_recall_at_5_std value: -2.0041618732754696 - type: ndcg_at_1 value: 41.083 - type: ndcg_at_10 value: 49.274 - type: ndcg_at_100 value: 53.835 - type: ndcg_at_1000 value: 55.69499999999999 - type: ndcg_at_20 value: 50.983000000000004 - type: ndcg_at_3 value: 44.912 - type: ndcg_at_5 value: 47.121 - type: precision_at_1 
value: 41.083 - type: precision_at_10 value: 9.274000000000001 - type: precision_at_100 value: 1.488 - type: precision_at_1000 value: 0.193 - type: precision_at_20 value: 5.449 - type: precision_at_3 value: 21.741 - type: precision_at_5 value: 15.439 - type: recall_at_1 value: 33.019 - type: recall_at_10 value: 59.294999999999995 - type: recall_at_100 value: 78.545 - type: recall_at_1000 value: 90.12400000000001 - type: recall_at_20 value: 65.443 - type: recall_at_3 value: 46.21 - type: recall_at_5 value: 52.575 - task: type: Retrieval dataset: name: MTEB CQADupstackGamingRetrieval (default) type: mteb/cqadupstack-gaming config: default split: test revision: 4885aa143210c98657558c04aaf3dc47cfb54340 metrics: - type: main_score value: 59.83500000000001 - type: map_at_1 value: 41.743 - type: map_at_10 value: 54.081999999999994 - type: map_at_100 value: 55.135999999999996 - type: map_at_1000 value: 55.184 - type: map_at_20 value: 54.767999999999994 - type: map_at_3 value: 50.89 - type: map_at_5 value: 52.636 - type: mrr_at_1 value: 47.39811912225706 - type: mrr_at_10 value: 57.38179827835008 - type: mrr_at_100 value: 58.01643316296891 - type: mrr_at_1000 value: 58.04110233372705 - type: mrr_at_20 value: 57.82176911544285 - type: mrr_at_3 value: 54.98432601880885 - type: mrr_at_5 value: 56.33542319749226 - type: nauc_map_at_1000_diff1 value: 56.38274182942337 - type: nauc_map_at_1000_max value: 39.63215709105948 - type: nauc_map_at_1000_std value: -6.245907717300131 - type: nauc_map_at_100_diff1 value: 56.36311874132528 - type: nauc_map_at_100_max value: 39.62470000319664 - type: nauc_map_at_100_std value: -6.271622755681494 - type: nauc_map_at_10_diff1 value: 56.410565627073225 - type: nauc_map_at_10_max value: 39.16425951389524 - type: nauc_map_at_10_std value: -7.206521474602716 - type: nauc_map_at_1_diff1 value: 58.34604316308072 - type: nauc_map_at_1_max value: 31.305799393516853 - type: nauc_map_at_1_std value: -9.67195266691713 - type: nauc_map_at_20_diff1 value: 
56.38143625487464 - type: nauc_map_at_20_max value: 39.462438789562455 - type: nauc_map_at_20_std value: -6.599407894095691 - type: nauc_map_at_3_diff1 value: 56.90332449245052 - type: nauc_map_at_3_max value: 37.454195451703995 - type: nauc_map_at_3_std value: -9.382786205944821 - type: nauc_map_at_5_diff1 value: 56.538604915661004 - type: nauc_map_at_5_max value: 38.6588144327087 - type: nauc_map_at_5_std value: -7.932442776531816 - type: nauc_mrr_at_1000_diff1 value: 56.1537707758201 - type: nauc_mrr_at_1000_max value: 40.87392514538646 - type: nauc_mrr_at_1000_std value: -5.108268246986718 - type: nauc_mrr_at_100_diff1 value: 56.14434800759561 - type: nauc_mrr_at_100_max value: 40.88497861437684 - type: nauc_mrr_at_100_std value: -5.100160912125043 - type: nauc_mrr_at_10_diff1 value: 56.091546352822434 - type: nauc_mrr_at_10_max value: 41.04917579584731 - type: nauc_mrr_at_10_std value: -5.096011574407418 - type: nauc_mrr_at_1_diff1 value: 58.89486283556674 - type: nauc_mrr_at_1_max value: 36.877138420765164 - type: nauc_mrr_at_1_std value: -8.010727906497483 - type: nauc_mrr_at_20_diff1 value: 56.15532215594925 - type: nauc_mrr_at_20_max value: 40.91911784659166 - type: nauc_mrr_at_20_std value: -5.159856708038148 - type: nauc_mrr_at_3_diff1 value: 56.41304554774757 - type: nauc_mrr_at_3_max value: 40.599408683012975 - type: nauc_mrr_at_3_std value: -5.966503192813791 - type: nauc_mrr_at_5_diff1 value: 56.178462641991004 - type: nauc_mrr_at_5_max value: 40.88639915714814 - type: nauc_mrr_at_5_std value: -5.4712972818244205 - type: nauc_ndcg_at_1000_diff1 value: 55.46084562015493 - type: nauc_ndcg_at_1000_max value: 42.11339231750283 - type: nauc_ndcg_at_1000_std value: -2.933574308921646 - type: nauc_ndcg_at_100_diff1 value: 55.244408030279644 - type: nauc_ndcg_at_100_max value: 42.51902459556891 - type: nauc_ndcg_at_100_std value: -2.681903058600699 - type: nauc_ndcg_at_10_diff1 value: 55.07975132155747 - type: nauc_ndcg_at_10_max value: 41.86638367277626 - 
type: nauc_ndcg_at_10_std value: -4.574212407886393 - type: nauc_ndcg_at_1_diff1 value: 58.89486283556674 - type: nauc_ndcg_at_1_max value: 36.877138420765164 - type: nauc_ndcg_at_1_std value: -8.010727906497483 - type: nauc_ndcg_at_20_diff1 value: 55.239108306400865 - type: nauc_ndcg_at_20_max value: 42.19784330055704 - type: nauc_ndcg_at_20_std value: -3.690456034599944 - type: nauc_ndcg_at_3_diff1 value: 56.094939697467325 - type: nauc_ndcg_at_3_max value: 39.75116550436197 - type: nauc_ndcg_at_3_std value: -7.375673693822571 - type: nauc_ndcg_at_5_diff1 value: 55.377651199567794 - type: nauc_ndcg_at_5_max value: 41.20722954879245 - type: nauc_ndcg_at_5_std value: -5.679020392514973 - type: nauc_precision_at_1000_diff1 value: -10.756112623603697 - type: nauc_precision_at_1000_max value: 17.64732842181831 - type: nauc_precision_at_1000_std value: 32.742279334654306 - type: nauc_precision_at_100_diff1 value: -4.896852655342983 - type: nauc_precision_at_100_max value: 24.707372714988725 - type: nauc_precision_at_100_std value: 32.19414457350063 - type: nauc_precision_at_10_diff1 value: 16.228966073160773 - type: nauc_precision_at_10_max value: 35.39971659325401 - type: nauc_precision_at_10_std value: 15.975657844520837 - type: nauc_precision_at_1_diff1 value: 58.89486283556674 - type: nauc_precision_at_1_max value: 36.877138420765164 - type: nauc_precision_at_1_std value: -8.010727906497483 - type: nauc_precision_at_20_diff1 value: 6.765510087471395 - type: nauc_precision_at_20_max value: 31.77369794420453 - type: nauc_precision_at_20_std value: 24.487726333260845 - type: nauc_precision_at_3_diff1 value: 37.01533500883528 - type: nauc_precision_at_3_max value: 40.28829957277282 - type: nauc_precision_at_3_std value: 0.15790828521244832 - type: nauc_precision_at_5_diff1 value: 27.325187065547695 - type: nauc_precision_at_5_max value: 39.67710773459586 - type: nauc_precision_at_5_std value: 8.307845112173677 - type: nauc_recall_at_1000_diff1 value: 34.97259871293003 
- type: nauc_recall_at_1000_max value: 73.36153616209499 - type: nauc_recall_at_1000_std value: 63.52466639318273 - type: nauc_recall_at_100_diff1 value: 43.84585939706463 - type: nauc_recall_at_100_max value: 58.75253788214712 - type: nauc_recall_at_100_std value: 23.779812502563956 - type: nauc_recall_at_10_diff1 value: 47.80161773501786 - type: nauc_recall_at_10_max value: 46.2174264798925 - type: nauc_recall_at_10_std value: 0.8663876046028921 - type: nauc_recall_at_1_diff1 value: 58.34604316308072 - type: nauc_recall_at_1_max value: 31.305799393516853 - type: nauc_recall_at_1_std value: -9.67195266691713 - type: nauc_recall_at_20_diff1 value: 46.90388293555046 - type: nauc_recall_at_20_max value: 49.28144135226787 - type: nauc_recall_at_20_std value: 7.537105099790044 - type: nauc_recall_at_3_diff1 value: 52.97073509767745 - type: nauc_recall_at_3_max value: 40.42098227210626 - type: nauc_recall_at_3_std value: -8.28013314935897 - type: nauc_recall_at_5_diff1 value: 50.35991406369175 - type: nauc_recall_at_5_max value: 43.442736162816395 - type: nauc_recall_at_5_std value: -3.893478526464003 - type: ndcg_at_1 value: 47.398 - type: ndcg_at_10 value: 59.83500000000001 - type: ndcg_at_100 value: 63.743 - type: ndcg_at_1000 value: 64.75800000000001 - type: ndcg_at_20 value: 61.78399999999999 - type: ndcg_at_3 value: 54.481 - type: ndcg_at_5 value: 57.034 - type: precision_at_1 value: 47.398 - type: precision_at_10 value: 9.504999999999999 - type: precision_at_100 value: 1.2449999999999999 - type: precision_at_1000 value: 0.13699999999999998 - type: precision_at_20 value: 5.357 - type: precision_at_3 value: 24.18 - type: precision_at_5 value: 16.439 - type: recall_at_1 value: 41.743 - type: recall_at_10 value: 73.476 - type: recall_at_100 value: 89.875 - type: recall_at_1000 value: 97.311 - type: recall_at_20 value: 80.61500000000001 - type: recall_at_3 value: 59.192 - type: recall_at_5 value: 65.43299999999999 - task: type: Retrieval dataset: name: MTEB 
CQADupstackGisRetrieval (default) type: mteb/cqadupstack-gis config: default split: test revision: 5003b3064772da1887988e05400cf3806fe491f2 metrics: - type: main_score value: 42.451 - type: map_at_1 value: 28.996 - type: map_at_10 value: 37.616 - type: map_at_100 value: 38.702999999999996 - type: map_at_1000 value: 38.785 - type: map_at_20 value: 38.248 - type: map_at_3 value: 34.906 - type: map_at_5 value: 36.313 - type: mrr_at_1 value: 30.847457627118647 - type: mrr_at_10 value: 39.38054882970136 - type: mrr_at_100 value: 40.366915853040304 - type: mrr_at_1000 value: 40.422138866370375 - type: mrr_at_20 value: 39.963305509876314 - type: mrr_at_3 value: 36.81732580037664 - type: mrr_at_5 value: 38.28060263653482 - type: nauc_map_at_1000_diff1 value: 46.412845971748965 - type: nauc_map_at_1000_max value: 35.71820582656466 - type: nauc_map_at_1000_std value: -3.4396952487244543 - type: nauc_map_at_100_diff1 value: 46.395516885783515 - type: nauc_map_at_100_max value: 35.699005377624786 - type: nauc_map_at_100_std value: -3.4295307929848815 - type: nauc_map_at_10_diff1 value: 46.60661423872333 - type: nauc_map_at_10_max value: 35.76865437824633 - type: nauc_map_at_10_std value: -3.7286516914981194 - type: nauc_map_at_1_diff1 value: 50.67584728744242 - type: nauc_map_at_1_max value: 33.31838096723387 - type: nauc_map_at_1_std value: -7.017496210052664 - type: nauc_map_at_20_diff1 value: 46.336180127932245 - type: nauc_map_at_20_max value: 35.67863259884862 - type: nauc_map_at_20_std value: -3.532643797779482 - type: nauc_map_at_3_diff1 value: 47.60693220558914 - type: nauc_map_at_3_max value: 34.51587922644232 - type: nauc_map_at_3_std value: -5.094395358598097 - type: nauc_map_at_5_diff1 value: 47.06590116277457 - type: nauc_map_at_5_max value: 35.09758567281723 - type: nauc_map_at_5_std value: -4.594804514448893 - type: nauc_mrr_at_1000_diff1 value: 45.22776158670323 - type: nauc_mrr_at_1000_max value: 36.86081533470028 - type: nauc_mrr_at_1000_std value: 
-2.033205148222453 - type: nauc_mrr_at_100_diff1 value: 45.204577420420954 - type: nauc_mrr_at_100_max value: 36.849578433404155 - type: nauc_mrr_at_100_std value: -2.016257960786726 - type: nauc_mrr_at_10_diff1 value: 45.2359210975849 - type: nauc_mrr_at_10_max value: 37.01690402885584 - type: nauc_mrr_at_10_std value: -2.2602767431608597 - type: nauc_mrr_at_1_diff1 value: 48.87088666432611 - type: nauc_mrr_at_1_max value: 35.58051752132078 - type: nauc_mrr_at_1_std value: -4.731264758679752 - type: nauc_mrr_at_20_diff1 value: 45.107901559758574 - type: nauc_mrr_at_20_max value: 36.871010473007566 - type: nauc_mrr_at_20_std value: -2.09198313309596 - type: nauc_mrr_at_3_diff1 value: 46.532099561607964 - type: nauc_mrr_at_3_max value: 36.533535412036436 - type: nauc_mrr_at_3_std value: -3.1250129413210814 - type: nauc_mrr_at_5_diff1 value: 45.57186948675289 - type: nauc_mrr_at_5_max value: 36.46221116432317 - type: nauc_mrr_at_5_std value: -2.8206584854678916 - type: nauc_ndcg_at_1000_diff1 value: 44.329221962893975 - type: nauc_ndcg_at_1000_max value: 36.91867297213294 - type: nauc_ndcg_at_1000_std value: -0.4934939008290994 - type: nauc_ndcg_at_100_diff1 value: 44.02704131900571 - type: nauc_ndcg_at_100_max value: 36.73741523697531 - type: nauc_ndcg_at_100_std value: 0.056585087009301434 - type: nauc_ndcg_at_10_diff1 value: 44.46275070065777 - type: nauc_ndcg_at_10_max value: 37.08165048296797 - type: nauc_ndcg_at_10_std value: -1.4504178730008903 - type: nauc_ndcg_at_1_diff1 value: 48.87088666432611 - type: nauc_ndcg_at_1_max value: 35.58051752132078 - type: nauc_ndcg_at_1_std value: -4.731264758679752 - type: nauc_ndcg_at_20_diff1 value: 43.715351338600854 - type: nauc_ndcg_at_20_max value: 36.597558579484286 - type: nauc_ndcg_at_20_std value: -0.7442166823850342 - type: nauc_ndcg_at_3_diff1 value: 46.6559452141376 - type: nauc_ndcg_at_3_max value: 35.303431090059576 - type: nauc_ndcg_at_3_std value: -4.245048423792951 - type: nauc_ndcg_at_5_diff1 value: 
45.46364843701738 - type: nauc_ndcg_at_5_max value: 35.786069703721715 - type: nauc_ndcg_at_5_std value: -3.225507760537463 - type: nauc_precision_at_1000_diff1 value: -8.813657843193829 - type: nauc_precision_at_1000_max value: 19.341916147889847 - type: nauc_precision_at_1000_std value: 11.83125844170699 - type: nauc_precision_at_100_diff1 value: 8.781439905664739 - type: nauc_precision_at_100_max value: 29.44860083085914 - type: nauc_precision_at_100_std value: 13.776934250429376 - type: nauc_precision_at_10_diff1 value: 28.889666145944 - type: nauc_precision_at_10_max value: 41.11966477643234 - type: nauc_precision_at_10_std value: 6.963197458201788 - type: nauc_precision_at_1_diff1 value: 48.87088666432611 - type: nauc_precision_at_1_max value: 35.58051752132078 - type: nauc_precision_at_1_std value: -4.731264758679752 - type: nauc_precision_at_20_diff1 value: 21.46418782701143 - type: nauc_precision_at_20_max value: 37.04050243855216 - type: nauc_precision_at_20_std value: 8.967545775130677 - type: nauc_precision_at_3_diff1 value: 39.977903525162525 - type: nauc_precision_at_3_max value: 37.8324727688519 - type: nauc_precision_at_3_std value: -0.09362980766141979 - type: nauc_precision_at_5_diff1 value: 36.05449702608607 - type: nauc_precision_at_5_max value: 39.31263152685144 - type: nauc_precision_at_5_std value: 1.4853599728966675 - type: nauc_recall_at_1000_diff1 value: 23.131220881305328 - type: nauc_recall_at_1000_max value: 43.09488375414571 - type: nauc_recall_at_1000_std value: 34.32484643072848 - type: nauc_recall_at_100_diff1 value: 32.509347146711775 - type: nauc_recall_at_100_max value: 38.31293004210284 - type: nauc_recall_at_100_std value: 20.31295020880922 - type: nauc_recall_at_10_diff1 value: 38.162435666945825 - type: nauc_recall_at_10_max value: 39.05783231051994 - type: nauc_recall_at_10_std value: 4.737164462571157 - type: nauc_recall_at_1_diff1 value: 50.67584728744242 - type: nauc_recall_at_1_max value: 33.31838096723387 - type: 
nauc_recall_at_1_std value: -7.017496210052664 - type: nauc_recall_at_20_diff1 value: 34.36040334628013 - type: nauc_recall_at_20_max value: 36.688387172616835 - type: nauc_recall_at_20_std value: 8.670145039799666 - type: nauc_recall_at_3_diff1 value: 44.33263333615946 - type: nauc_recall_at_3_max value: 34.21104932799129 - type: nauc_recall_at_3_std value: -3.4348954541060057 - type: nauc_recall_at_5_diff1 value: 41.3941366549961 - type: nauc_recall_at_5_max value: 35.61498401814357 - type: nauc_recall_at_5_std value: -0.5242808474696788 - type: ndcg_at_1 value: 30.847 - type: ndcg_at_10 value: 42.451 - type: ndcg_at_100 value: 47.666 - type: ndcg_at_1000 value: 49.559 - type: ndcg_at_20 value: 44.564 - type: ndcg_at_3 value: 37.141000000000005 - type: ndcg_at_5 value: 39.615 - type: precision_at_1 value: 30.847 - type: precision_at_10 value: 6.361999999999999 - type: precision_at_100 value: 0.9440000000000001 - type: precision_at_1000 value: 0.11399999999999999 - type: precision_at_20 value: 3.695 - type: precision_at_3 value: 15.292 - type: precision_at_5 value: 10.644 - type: recall_at_1 value: 28.996 - type: recall_at_10 value: 55.584 - type: recall_at_100 value: 79.137 - type: recall_at_1000 value: 93.133 - type: recall_at_20 value: 63.344 - type: recall_at_3 value: 41.388999999999996 - type: recall_at_5 value: 47.302 - task: type: Retrieval dataset: name: MTEB CQADupstackMathematicaRetrieval (default) type: mteb/cqadupstack-mathematica config: default split: test revision: 90fceea13679c63fe563ded68f3b6f06e50061de metrics: - type: main_score value: 34.095 - type: map_at_1 value: 19.73 - type: map_at_10 value: 28.621999999999996 - type: map_at_100 value: 29.951 - type: map_at_1000 value: 30.063000000000002 - type: map_at_20 value: 29.309 - type: map_at_3 value: 25.667 - type: map_at_5 value: 27.594 - type: mrr_at_1 value: 24.502487562189053 - type: mrr_at_10 value: 33.665255073837145 - type: mrr_at_100 value: 34.59932347722826 - type: mrr_at_1000 value: 
34.66003643326513 - type: mrr_at_20 value: 34.11376652638897 - type: mrr_at_3 value: 31.05306799336651 - type: mrr_at_5 value: 32.76326699834162 - type: nauc_map_at_1000_diff1 value: 34.72907872454501 - type: nauc_map_at_1000_max value: 28.254188806716968 - type: nauc_map_at_1000_std value: 1.766585437449934 - type: nauc_map_at_100_diff1 value: 34.72932621462264 - type: nauc_map_at_100_max value: 28.27419759099569 - type: nauc_map_at_100_std value: 1.7699849561943597 - type: nauc_map_at_10_diff1 value: 34.78565974033627 - type: nauc_map_at_10_max value: 27.986939554161456 - type: nauc_map_at_10_std value: 1.167749138251006 - type: nauc_map_at_1_diff1 value: 38.91003571707319 - type: nauc_map_at_1_max value: 26.48670439569984 - type: nauc_map_at_1_std value: -0.6581147831046584 - type: nauc_map_at_20_diff1 value: 34.930356018900085 - type: nauc_map_at_20_max value: 28.11826713770072 - type: nauc_map_at_20_std value: 1.4222869706417194 - type: nauc_map_at_3_diff1 value: 36.0762128105621 - type: nauc_map_at_3_max value: 28.565191344891815 - type: nauc_map_at_3_std value: 0.7825139863346278 - type: nauc_map_at_5_diff1 value: 35.51997355447966 - type: nauc_map_at_5_max value: 27.79640533393062 - type: nauc_map_at_5_std value: 0.4033822753367694 - type: nauc_mrr_at_1000_diff1 value: 35.086631245748286 - type: nauc_mrr_at_1000_max value: 28.00090704456733 - type: nauc_mrr_at_1000_std value: 2.7443538042856495 - type: nauc_mrr_at_100_diff1 value: 35.08022882692694 - type: nauc_mrr_at_100_max value: 28.02518055725871 - type: nauc_mrr_at_100_std value: 2.756913025485739 - type: nauc_mrr_at_10_diff1 value: 35.189138304228955 - type: nauc_mrr_at_10_max value: 27.893789610020132 - type: nauc_mrr_at_10_std value: 2.5277514271816273 - type: nauc_mrr_at_1_diff1 value: 38.49246887300505 - type: nauc_mrr_at_1_max value: 25.42106416145382 - type: nauc_mrr_at_1_std value: -0.3166610087713868 - type: nauc_mrr_at_20_diff1 value: 35.27168804507115 - type: nauc_mrr_at_20_max value: 
28.012190140962623 - type: nauc_mrr_at_20_std value: 2.6699643794051733 - type: nauc_mrr_at_3_diff1 value: 35.244407269705356 - type: nauc_mrr_at_3_max value: 27.901137842346667 - type: nauc_mrr_at_3_std value: 1.536344232061536 - type: nauc_mrr_at_5_diff1 value: 35.60496636899887 - type: nauc_mrr_at_5_max value: 27.646092417250294 - type: nauc_mrr_at_5_std value: 1.7849129602744565 - type: nauc_ndcg_at_1000_diff1 value: 33.00641553083242 - type: nauc_ndcg_at_1000_max value: 29.281184042576324 - type: nauc_ndcg_at_1000_std value: 4.705354777869887 - type: nauc_ndcg_at_100_diff1 value: 32.73299739191785 - type: nauc_ndcg_at_100_max value: 29.733498550725486 - type: nauc_ndcg_at_100_std value: 5.051380591295473 - type: nauc_ndcg_at_10_diff1 value: 33.42778333197981 - type: nauc_ndcg_at_10_max value: 28.500230808790462 - type: nauc_ndcg_at_10_std value: 2.6279521120828426 - type: nauc_ndcg_at_1_diff1 value: 38.49246887300505 - type: nauc_ndcg_at_1_max value: 25.42106416145382 - type: nauc_ndcg_at_1_std value: -0.3166610087713868 - type: nauc_ndcg_at_20_diff1 value: 33.932374714340305 - type: nauc_ndcg_at_20_max value: 28.97338117740232 - type: nauc_ndcg_at_20_std value: 3.382234056656039 - type: nauc_ndcg_at_3_diff1 value: 35.06726185470219 - type: nauc_ndcg_at_3_max value: 28.769824175873655 - type: nauc_ndcg_at_3_std value: 0.9778290393744915 - type: nauc_ndcg_at_5_diff1 value: 34.73183576563172 - type: nauc_ndcg_at_5_max value: 27.92235378893707 - type: nauc_ndcg_at_5_std value: 0.931888346245052 - type: nauc_precision_at_1000_diff1 value: -4.969051807978748 - type: nauc_precision_at_1000_max value: 0.14144278477866445 - type: nauc_precision_at_1000_std value: 4.867244664069488 - type: nauc_precision_at_100_diff1 value: 3.4485901120482914 - type: nauc_precision_at_100_max value: 12.881970758272205 - type: nauc_precision_at_100_std value: 11.70053444498138 - type: nauc_precision_at_10_diff1 value: 19.652560943517372 - type: nauc_precision_at_10_max value: 
22.721397508432503 - type: nauc_precision_at_10_std value: 6.4517755635275025 - type: nauc_precision_at_1_diff1 value: 38.49246887300505 - type: nauc_precision_at_1_max value: 25.42106416145382 - type: nauc_precision_at_1_std value: -0.3166610087713868 - type: nauc_precision_at_20_diff1 value: 17.228427222424315 - type: nauc_precision_at_20_max value: 20.728777641636476 - type: nauc_precision_at_20_std value: 7.817118735958645 - type: nauc_precision_at_3_diff1 value: 30.223066194086307 - type: nauc_precision_at_3_max value: 27.412166459133786 - type: nauc_precision_at_3_std value: 1.698402524212445 - type: nauc_precision_at_5_diff1 value: 26.619771134350295 - type: nauc_precision_at_5_max value: 23.208486114756507 - type: nauc_precision_at_5_std value: 1.214970586733223 - type: nauc_recall_at_1000_diff1 value: 11.623462125104215 - type: nauc_recall_at_1000_max value: 36.091211213022106 - type: nauc_recall_at_1000_std value: 32.23113490590334 - type: nauc_recall_at_100_diff1 value: 21.087538105716423 - type: nauc_recall_at_100_max value: 34.78408730230787 - type: nauc_recall_at_100_std value: 18.502764053088498 - type: nauc_recall_at_10_diff1 value: 28.025850341314616 - type: nauc_recall_at_10_max value: 28.278332371196424 - type: nauc_recall_at_10_std value: 5.215906443076799 - type: nauc_recall_at_1_diff1 value: 38.91003571707319 - type: nauc_recall_at_1_max value: 26.48670439569984 - type: nauc_recall_at_1_std value: -0.6581147831046584 - type: nauc_recall_at_20_diff1 value: 29.473435609654423 - type: nauc_recall_at_20_max value: 29.49664949068959 - type: nauc_recall_at_20_std value: 7.462607204613657 - type: nauc_recall_at_3_diff1 value: 32.75851316103734 - type: nauc_recall_at_3_max value: 30.05729454718181 - type: nauc_recall_at_3_std value: 1.9302697513077123 - type: nauc_recall_at_5_diff1 value: 31.4784165636263 - type: nauc_recall_at_5_max value: 27.066581269469197 - type: nauc_recall_at_5_std value: 1.3182034896545982 - type: ndcg_at_1 value: 24.502 - 
type: ndcg_at_10 value: 34.095 - type: ndcg_at_100 value: 40.278000000000006 - type: ndcg_at_1000 value: 42.845 - type: ndcg_at_20 value: 36.158 - type: ndcg_at_3 value: 29.002 - type: ndcg_at_5 value: 31.952 - type: precision_at_1 value: 24.502 - type: precision_at_10 value: 6.219 - type: precision_at_100 value: 1.082 - type: precision_at_1000 value: 0.14300000000000002 - type: precision_at_20 value: 3.7560000000000002 - type: precision_at_3 value: 13.764999999999999 - type: precision_at_5 value: 10.323 - type: recall_at_1 value: 19.73 - type: recall_at_10 value: 45.832 - type: recall_at_100 value: 72.90299999999999 - type: recall_at_1000 value: 91.12400000000001 - type: recall_at_20 value: 52.941 - type: recall_at_3 value: 32.147999999999996 - type: recall_at_5 value: 39.572 - task: type: Retrieval dataset: name: MTEB CQADupstackPhysicsRetrieval (default) type: mteb/cqadupstack-physics config: default split: test revision: 79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4 metrics: - type: main_score value: 48.891 - type: map_at_1 value: 31.075999999999997 - type: map_at_10 value: 42.577999999999996 - type: map_at_100 value: 43.998 - type: map_at_1000 value: 44.107 - type: map_at_20 value: 43.394 - type: map_at_3 value: 39.237 - type: map_at_5 value: 41.213 - type: mrr_at_1 value: 37.632338787295474 - type: mrr_at_10 value: 48.028705867974345 - type: mrr_at_100 value: 48.89056715596661 - type: mrr_at_1000 value: 48.92751183933152 - type: mrr_at_20 value: 48.572019107385856 - type: mrr_at_3 value: 45.58870709015074 - type: mrr_at_5 value: 47.090150786012124 - type: nauc_map_at_1000_diff1 value: 50.78783234215205 - type: nauc_map_at_1000_max value: 33.542131730312164 - type: nauc_map_at_1000_std value: -0.3678105032092534 - type: nauc_map_at_100_diff1 value: 50.801030214261935 - type: nauc_map_at_100_max value: 33.49117253773047 - type: nauc_map_at_100_std value: -0.424437332181341 - type: nauc_map_at_10_diff1 value: 50.665593124786014 - type: nauc_map_at_10_max value: 
32.785196057455686 - type: nauc_map_at_10_std value: -1.1779549158534983 - type: nauc_map_at_1_diff1 value: 55.868642241264645 - type: nauc_map_at_1_max value: 30.544699698856615 - type: nauc_map_at_1_std value: -3.824717473245085 - type: nauc_map_at_20_diff1 value: 50.77114941389146 - type: nauc_map_at_20_max value: 33.26827708180765 - type: nauc_map_at_20_std value: -0.734677624886567 - type: nauc_map_at_3_diff1 value: 51.03832030578005 - type: nauc_map_at_3_max value: 32.39458212663325 - type: nauc_map_at_3_std value: -1.6494237804803646 - type: nauc_map_at_5_diff1 value: 50.97104795265703 - type: nauc_map_at_5_max value: 32.963257618296986 - type: nauc_map_at_5_std value: -1.2954427188265398 - type: nauc_mrr_at_1000_diff1 value: 50.087825368297565 - type: nauc_mrr_at_1000_max value: 35.696912235935315 - type: nauc_mrr_at_1000_std value: 0.9517029361871309 - type: nauc_mrr_at_100_diff1 value: 50.091410892116386 - type: nauc_mrr_at_100_max value: 35.701167670781956 - type: nauc_mrr_at_100_std value: 0.9492584917140756 - type: nauc_mrr_at_10_diff1 value: 49.88389091064117 - type: nauc_mrr_at_10_max value: 35.6067947110772 - type: nauc_mrr_at_10_std value: 0.7626165780679156 - type: nauc_mrr_at_1_diff1 value: 55.01931926385987 - type: nauc_mrr_at_1_max value: 35.731630359671044 - type: nauc_mrr_at_1_std value: 0.4765227639052635 - type: nauc_mrr_at_20_diff1 value: 50.04232795868649 - type: nauc_mrr_at_20_max value: 35.64757803934064 - type: nauc_mrr_at_20_std value: 0.8038895849793868 - type: nauc_mrr_at_3_diff1 value: 49.29102858426895 - type: nauc_mrr_at_3_max value: 35.511749287022596 - type: nauc_mrr_at_3_std value: 0.9607913501181212 - type: nauc_mrr_at_5_diff1 value: 49.90634335653725 - type: nauc_mrr_at_5_max value: 35.57725666069228 - type: nauc_mrr_at_5_std value: 0.5886034889984604 - type: nauc_ndcg_at_1000_diff1 value: 49.227101169579974 - type: nauc_ndcg_at_1000_max value: 35.304422697207904 - type: nauc_ndcg_at_1000_std value: 2.3564962090430357 - 
type: nauc_ndcg_at_100_diff1 value: 49.33636342826304 - type: nauc_ndcg_at_100_max value: 34.93271239347418 - type: nauc_ndcg_at_100_std value: 2.304638273222096 - type: nauc_ndcg_at_10_diff1 value: 48.62225183717284 - type: nauc_ndcg_at_10_max value: 33.013586201737816 - type: nauc_ndcg_at_10_std value: -0.3811388147797492 - type: nauc_ndcg_at_1_diff1 value: 55.01931926385987 - type: nauc_ndcg_at_1_max value: 35.731630359671044 - type: nauc_ndcg_at_1_std value: 0.4765227639052635 - type: nauc_ndcg_at_20_diff1 value: 49.02938009186652 - type: nauc_ndcg_at_20_max value: 34.07537935061685 - type: nauc_ndcg_at_20_std value: 0.7596556118589683 - type: nauc_ndcg_at_3_diff1 value: 48.53275134328913 - type: nauc_ndcg_at_3_max value: 33.72246853040944 - type: nauc_ndcg_at_3_std value: 0.07148157187994036 - type: nauc_ndcg_at_5_diff1 value: 49.125387965082595 - type: nauc_ndcg_at_5_max value: 33.89755823168926 - type: nauc_ndcg_at_5_std value: -0.23484468412288975 - type: nauc_precision_at_1000_diff1 value: -16.388718759022847 - type: nauc_precision_at_1000_max value: 5.237181961139354 - type: nauc_precision_at_1000_std value: 12.481420642405105 - type: nauc_precision_at_100_diff1 value: -5.613297466581972 - type: nauc_precision_at_100_max value: 13.871852332913598 - type: nauc_precision_at_100_std value: 15.784270811182186 - type: nauc_precision_at_10_diff1 value: 14.380456681659199 - type: nauc_precision_at_10_max value: 24.28938422113675 - type: nauc_precision_at_10_std value: 9.104016210929833 - type: nauc_precision_at_1_diff1 value: 55.01931926385987 - type: nauc_precision_at_1_max value: 35.731630359671044 - type: nauc_precision_at_1_std value: 0.4765227639052635 - type: nauc_precision_at_20_diff1 value: 6.997723624231359 - type: nauc_precision_at_20_max value: 22.242975253242793 - type: nauc_precision_at_20_std value: 12.460553518097337 - type: nauc_precision_at_3_diff1 value: 31.93565478138394 - type: nauc_precision_at_3_max value: 32.245381961758554 - type: 
nauc_precision_at_3_std value: 6.3778575720255635 - type: nauc_precision_at_5_diff1 value: 25.360806939232344 - type: nauc_precision_at_5_max value: 29.95777944809185 - type: nauc_precision_at_5_std value: 8.192950259472545 - type: nauc_recall_at_1000_diff1 value: 33.00760032876783 - type: nauc_recall_at_1000_max value: 52.825856604033994 - type: nauc_recall_at_1000_std value: 45.239442029547384 - type: nauc_recall_at_100_diff1 value: 40.32600076465021 - type: nauc_recall_at_100_max value: 35.20651551017542 - type: nauc_recall_at_100_std value: 18.2866715724604 - type: nauc_recall_at_10_diff1 value: 40.19090180531315 - type: nauc_recall_at_10_max value: 27.727160089866675 - type: nauc_recall_at_10_std value: -0.34152382508922086 - type: nauc_recall_at_1_diff1 value: 55.868642241264645 - type: nauc_recall_at_1_max value: 30.544699698856615 - type: nauc_recall_at_1_std value: -3.824717473245085 - type: nauc_recall_at_20_diff1 value: 40.53509773756395 - type: nauc_recall_at_20_max value: 30.879328024854107 - type: nauc_recall_at_20_std value: 4.5165469550975255 - type: nauc_recall_at_3_diff1 value: 43.27936784610322 - type: nauc_recall_at_3_max value: 30.443511585383586 - type: nauc_recall_at_3_std value: -0.4500440621385532 - type: nauc_recall_at_5_diff1 value: 42.84235237573527 - type: nauc_recall_at_5_max value: 30.6861143937192 - type: nauc_recall_at_5_std value: -0.6079883050754419 - type: ndcg_at_1 value: 37.632 - type: ndcg_at_10 value: 48.891 - type: ndcg_at_100 value: 54.44 - type: ndcg_at_1000 value: 56.218 - type: ndcg_at_20 value: 51.242 - type: ndcg_at_3 value: 43.618 - type: ndcg_at_5 value: 46.321 - type: precision_at_1 value: 37.632 - type: precision_at_10 value: 8.884 - type: precision_at_100 value: 1.362 - type: precision_at_1000 value: 0.169 - type: precision_at_20 value: 5.221 - type: precision_at_3 value: 20.788999999999998 - type: precision_at_5 value: 14.802999999999999 - type: recall_at_1 value: 31.075999999999997 - type: recall_at_10 value: 
62.087 - type: recall_at_100 value: 84.615 - type: recall_at_1000 value: 95.809 - type: recall_at_20 value: 70.092 - type: recall_at_3 value: 47.49 - type: recall_at_5 value: 54.359 - task: type: Retrieval dataset: name: MTEB CQADupstackProgrammersRetrieval (default) type: mteb/cqadupstack-programmers config: default split: test revision: 6184bc1440d2dbc7612be22b50686b8826d22b32 metrics: - type: main_score value: 45.334 - type: map_at_1 value: 27.275 - type: map_at_10 value: 38.818000000000005 - type: map_at_100 value: 40.245999999999995 - type: map_at_1000 value: 40.348 - type: map_at_20 value: 39.641 - type: map_at_3 value: 35.115 - type: map_at_5 value: 37.232 - type: mrr_at_1 value: 33.789954337899545 - type: mrr_at_10 value: 44.11923787779952 - type: mrr_at_100 value: 45.028806593226676 - type: mrr_at_1000 value: 45.0740604466411 - type: mrr_at_20 value: 44.67688886801231 - type: mrr_at_3 value: 41.248097412480966 - type: mrr_at_5 value: 42.97754946727547 - type: nauc_map_at_1000_diff1 value: 43.013762963519305 - type: nauc_map_at_1000_max value: 39.10459163352522 - type: nauc_map_at_1000_std value: 0.5686019526390734 - type: nauc_map_at_100_diff1 value: 43.01344839983274 - type: nauc_map_at_100_max value: 39.12155667806109 - type: nauc_map_at_100_std value: 0.5850312262411536 - type: nauc_map_at_10_diff1 value: 43.40889409121977 - type: nauc_map_at_10_max value: 38.86352881392226 - type: nauc_map_at_10_std value: -0.09138731580615166 - type: nauc_map_at_1_diff1 value: 49.580098743143 - type: nauc_map_at_1_max value: 33.624185967920326 - type: nauc_map_at_1_std value: -7.896295502496881 - type: nauc_map_at_20_diff1 value: 43.11936673331683 - type: nauc_map_at_20_max value: 39.07709188651765 - type: nauc_map_at_20_std value: 0.4602382023590104 - type: nauc_map_at_3_diff1 value: 43.67107257453258 - type: nauc_map_at_3_max value: 36.84244693065489 - type: nauc_map_at_3_std value: -3.289874933863321 - type: nauc_map_at_5_diff1 value: 43.758122467637826 - type: 
nauc_map_at_5_max value: 38.294511650248126 - type: nauc_map_at_5_std value: -1.4279289313215355 - type: nauc_mrr_at_1000_diff1 value: 41.19785571847013 - type: nauc_mrr_at_1000_max value: 38.55497179205239 - type: nauc_mrr_at_1000_std value: 1.7188770740469619 - type: nauc_mrr_at_100_diff1 value: 41.177608254142875 - type: nauc_mrr_at_100_max value: 38.55707450419509 - type: nauc_mrr_at_100_std value: 1.742333253511747 - type: nauc_mrr_at_10_diff1 value: 41.16178606855569 - type: nauc_mrr_at_10_max value: 38.53198828945776 - type: nauc_mrr_at_10_std value: 1.4657516877125125 - type: nauc_mrr_at_1_diff1 value: 47.42346510865722 - type: nauc_mrr_at_1_max value: 36.48815188158201 - type: nauc_mrr_at_1_std value: -2.34134882449636 - type: nauc_mrr_at_20_diff1 value: 41.246202514418584 - type: nauc_mrr_at_20_max value: 38.69180784192216 - type: nauc_mrr_at_20_std value: 1.8205983742560619 - type: nauc_mrr_at_3_diff1 value: 41.09603949294592 - type: nauc_mrr_at_3_max value: 37.95896498227977 - type: nauc_mrr_at_3_std value: 0.2874075190886481 - type: nauc_mrr_at_5_diff1 value: 41.18455834868946 - type: nauc_mrr_at_5_max value: 38.456998347163065 - type: nauc_mrr_at_5_std value: 0.9867811075887676 - type: nauc_ndcg_at_1000_diff1 value: 40.54615364663546 - type: nauc_ndcg_at_1000_max value: 40.42616803864886 - type: nauc_ndcg_at_1000_std value: 4.363693436984652 - type: nauc_ndcg_at_100_diff1 value: 40.44224861178897 - type: nauc_ndcg_at_100_max value: 40.94806712564172 - type: nauc_ndcg_at_100_std value: 5.196573771400126 - type: nauc_ndcg_at_10_diff1 value: 40.92593737099367 - type: nauc_ndcg_at_10_max value: 40.26823363364135 - type: nauc_ndcg_at_10_std value: 3.192020901707987 - type: nauc_ndcg_at_1_diff1 value: 47.42346510865722 - type: nauc_ndcg_at_1_max value: 36.48815188158201 - type: nauc_ndcg_at_1_std value: -2.34134882449636 - type: nauc_ndcg_at_20_diff1 value: 40.70844796238177 - type: nauc_ndcg_at_20_max value: 41.066915934122356 - type: nauc_ndcg_at_20_std 
value: 4.941739690696084 - type: nauc_ndcg_at_3_diff1 value: 40.22388347943839 - type: nauc_ndcg_at_3_max value: 37.97075355659086 - type: nauc_ndcg_at_3_std value: -0.3952100142870558 - type: nauc_ndcg_at_5_diff1 value: 40.95268317695563 - type: nauc_ndcg_at_5_max value: 39.3554650798222 - type: nauc_ndcg_at_5_std value: 1.02690752358091 - type: nauc_precision_at_1000_diff1 value: -21.508622681866868 - type: nauc_precision_at_1000_max value: -5.391055753734811 - type: nauc_precision_at_1000_std value: 7.148967890675029 - type: nauc_precision_at_100_diff1 value: -9.555610415584772 - type: nauc_precision_at_100_max value: 12.841520305380632 - type: nauc_precision_at_100_std value: 19.88687702744806 - type: nauc_precision_at_10_diff1 value: 11.710375921485369 - type: nauc_precision_at_10_max value: 34.61710718960949 - type: nauc_precision_at_10_std value: 21.07494229065057 - type: nauc_precision_at_1_diff1 value: 47.42346510865722 - type: nauc_precision_at_1_max value: 36.48815188158201 - type: nauc_precision_at_1_std value: -2.34134882449636 - type: nauc_precision_at_20_diff1 value: 4.261943900088042 - type: nauc_precision_at_20_max value: 29.277336528563648 - type: nauc_precision_at_20_std value: 23.809798696946697 - type: nauc_precision_at_3_diff1 value: 24.180190068545883 - type: nauc_precision_at_3_max value: 37.86395654258292 - type: nauc_precision_at_3_std value: 9.925473230392306 - type: nauc_precision_at_5_diff1 value: 18.51298619639024 - type: nauc_precision_at_5_max value: 36.483902995937235 - type: nauc_precision_at_5_std value: 15.45543901748184 - type: nauc_recall_at_1000_diff1 value: 19.402855614334317 - type: nauc_recall_at_1000_max value: 54.58840809219886 - type: nauc_recall_at_1000_std value: 53.59980637963878 - type: nauc_recall_at_100_diff1 value: 27.63391689753813 - type: nauc_recall_at_100_max value: 48.11832053014399 - type: nauc_recall_at_100_std value: 30.476790377619945 - type: nauc_recall_at_10_diff1 value: 34.00655805236221 - type: 
nauc_recall_at_10_max value: 41.78819015238207 - type: nauc_recall_at_10_std value: 11.709621782547302 - type: nauc_recall_at_1_diff1 value: 49.580098743143 - type: nauc_recall_at_1_max value: 33.624185967920326 - type: nauc_recall_at_1_std value: -7.896295502496881 - type: nauc_recall_at_20_diff1 value: 32.58237251319437 - type: nauc_recall_at_20_max value: 45.64540237392343 - type: nauc_recall_at_20_std value: 20.49216050873925 - type: nauc_recall_at_3_diff1 value: 35.68042917162092 - type: nauc_recall_at_3_max value: 36.41986013001979 - type: nauc_recall_at_3_std value: -0.24966469870022118 - type: nauc_recall_at_5_diff1 value: 35.53479753080461 - type: nauc_recall_at_5_max value: 39.57047856279735 - type: nauc_recall_at_5_std value: 3.999123969896682 - type: ndcg_at_1 value: 33.79 - type: ndcg_at_10 value: 45.334 - type: ndcg_at_100 value: 51.06 - type: ndcg_at_1000 value: 52.908 - type: ndcg_at_20 value: 47.776 - type: ndcg_at_3 value: 39.503 - type: ndcg_at_5 value: 42.308 - type: precision_at_1 value: 33.79 - type: precision_at_10 value: 8.505 - type: precision_at_100 value: 1.307 - type: precision_at_1000 value: 0.165 - type: precision_at_20 value: 4.994 - type: precision_at_3 value: 19.33 - type: precision_at_5 value: 14.063999999999998 - type: recall_at_1 value: 27.275 - type: recall_at_10 value: 59.453 - type: recall_at_100 value: 83.417 - type: recall_at_1000 value: 95.174 - type: recall_at_20 value: 68.195 - type: recall_at_3 value: 43.206 - type: recall_at_5 value: 50.397000000000006 - task: type: Retrieval dataset: name: MTEB CQADupstackRetrieval (default) type: CQADupstackRetrieval_is_a_combined_dataset config: default split: test revision: CQADupstackRetrieval_is_a_combined_dataset metrics: - type: main_score value: 43.816 - type: ndcg_at_10 value: 43.816 - task: type: Retrieval dataset: name: MTEB CQADupstackStatsRetrieval (default) type: mteb/cqadupstack-stats config: default split: test revision: 65ac3a16b8e91f9cee4c9828cc7c335575432a2a metrics: 
- type: main_score value: 38.894 - type: map_at_1 value: 26.427 - type: map_at_10 value: 34.255 - type: map_at_100 value: 35.303000000000004 - type: map_at_1000 value: 35.404 - type: map_at_20 value: 34.774 - type: map_at_3 value: 31.695 - type: map_at_5 value: 33.202999999999996 - type: mrr_at_1 value: 29.601226993865033 - type: mrr_at_10 value: 37.08016846820527 - type: mrr_at_100 value: 37.98796986670183 - type: mrr_at_1000 value: 38.05125035681772 - type: mrr_at_20 value: 37.55764205112128 - type: mrr_at_3 value: 34.81595092024541 - type: mrr_at_5 value: 36.019938650306756 - type: nauc_map_at_1000_diff1 value: 56.60072218558477 - type: nauc_map_at_1000_max value: 38.11726050477455 - type: nauc_map_at_1000_std value: 1.0589013948647812 - type: nauc_map_at_100_diff1 value: 56.592212691088264 - type: nauc_map_at_100_max value: 38.09568149860661 - type: nauc_map_at_100_std value: 1.0392153444561998 - type: nauc_map_at_10_diff1 value: 56.86378015345323 - type: nauc_map_at_10_max value: 37.875244017016946 - type: nauc_map_at_10_std value: 0.6492500472958144 - type: nauc_map_at_1_diff1 value: 61.06511889290507 - type: nauc_map_at_1_max value: 37.14377732406466 - type: nauc_map_at_1_std value: -3.0410115573638064 - type: nauc_map_at_20_diff1 value: 56.65960048389336 - type: nauc_map_at_20_max value: 38.01063372743232 - type: nauc_map_at_20_std value: 0.7887134640638815 - type: nauc_map_at_3_diff1 value: 58.07356810929091 - type: nauc_map_at_3_max value: 37.49068261785256 - type: nauc_map_at_3_std value: -1.1929095993889525 - type: nauc_map_at_5_diff1 value: 57.50901814735278 - type: nauc_map_at_5_max value: 37.85923289090272 - type: nauc_map_at_5_std value: 0.059903065225492776 - type: nauc_mrr_at_1000_diff1 value: 54.58824792518784 - type: nauc_mrr_at_1000_max value: 38.86931059709252 - type: nauc_mrr_at_1000_std value: 2.9986997791166368 - type: nauc_mrr_at_100_diff1 value: 54.57585597713184 - type: nauc_mrr_at_100_max value: 38.87313557690555 - type: 
nauc_mrr_at_100_std value: 3.004154480090834 - type: nauc_mrr_at_10_diff1 value: 54.750538678542725 - type: nauc_mrr_at_10_max value: 38.91736870335598 - type: nauc_mrr_at_10_std value: 2.827831779250098 - type: nauc_mrr_at_1_diff1 value: 58.42689852509982 - type: nauc_mrr_at_1_max value: 38.738304414401156 - type: nauc_mrr_at_1_std value: 0.20380762325184898 - type: nauc_mrr_at_20_diff1 value: 54.571333128033274 - type: nauc_mrr_at_20_max value: 38.82683538226168 - type: nauc_mrr_at_20_std value: 2.81272631376222 - type: nauc_mrr_at_3_diff1 value: 55.402618824410055 - type: nauc_mrr_at_3_max value: 38.770457076566686 - type: nauc_mrr_at_3_std value: 2.053522695739241 - type: nauc_mrr_at_5_diff1 value: 55.247338994146354 - type: nauc_mrr_at_5_max value: 39.03504319610805 - type: nauc_mrr_at_5_std value: 2.625757410773132 - type: nauc_ndcg_at_1000_diff1 value: 53.96113307294218 - type: nauc_ndcg_at_1000_max value: 39.50706897713246 - type: nauc_ndcg_at_1000_std value: 4.9387998806714934 - type: nauc_ndcg_at_100_diff1 value: 53.85402259839868 - type: nauc_ndcg_at_100_max value: 39.56983171505153 - type: nauc_ndcg_at_100_std value: 4.972045278289709 - type: nauc_ndcg_at_10_diff1 value: 54.71242559860603 - type: nauc_ndcg_at_10_max value: 38.581472160487685 - type: nauc_ndcg_at_10_std value: 2.839169333745226 - type: nauc_ndcg_at_1_diff1 value: 58.42689852509982 - type: nauc_ndcg_at_1_max value: 38.738304414401156 - type: nauc_ndcg_at_1_std value: 0.20380762325184898 - type: nauc_ndcg_at_20_diff1 value: 53.978219129570896 - type: nauc_ndcg_at_20_max value: 38.862218171161544 - type: nauc_ndcg_at_20_std value: 3.239351254035964 - type: nauc_ndcg_at_3_diff1 value: 56.19488839726825 - type: nauc_ndcg_at_3_max value: 38.43663271574053 - type: nauc_ndcg_at_3_std value: 0.963285267513604 - type: nauc_ndcg_at_5_diff1 value: 55.92862198714638 - type: nauc_ndcg_at_5_max value: 38.680176574203585 - type: nauc_ndcg_at_5_std value: 2.0517484488591657 - type: 
nauc_precision_at_1000_diff1 value: -10.093484727725837 - type: nauc_precision_at_1000_max value: 11.599506756878041 - type: nauc_precision_at_1000_std value: 16.104303375916956 - type: nauc_precision_at_100_diff1 value: 8.969090844678053 - type: nauc_precision_at_100_max value: 27.083136012889142 - type: nauc_precision_at_100_std value: 21.583675042204572 - type: nauc_precision_at_10_diff1 value: 33.02398417235467 - type: nauc_precision_at_10_max value: 36.19574777774318 - type: nauc_precision_at_10_std value: 15.536283231055586 - type: nauc_precision_at_1_diff1 value: 58.42689852509982 - type: nauc_precision_at_1_max value: 38.738304414401156 - type: nauc_precision_at_1_std value: 0.20380762325184898 - type: nauc_precision_at_20_diff1 value: 25.782064865016064 - type: nauc_precision_at_20_max value: 34.40259494180231 - type: nauc_precision_at_20_std value: 16.217527374266183 - type: nauc_precision_at_3_diff1 value: 47.01043944309824 - type: nauc_precision_at_3_max value: 38.470771808417766 - type: nauc_precision_at_3_std value: 7.132839594950563 - type: nauc_precision_at_5_diff1 value: 41.11616429779191 - type: nauc_precision_at_5_max value: 37.09283603644687 - type: nauc_precision_at_5_std value: 11.627051542109017 - type: nauc_recall_at_1000_diff1 value: 29.344095205506555 - type: nauc_recall_at_1000_max value: 46.58735252578747 - type: nauc_recall_at_1000_std value: 43.34763296426759 - type: nauc_recall_at_100_diff1 value: 40.43843747747295 - type: nauc_recall_at_100_max value: 42.50706821532735 - type: nauc_recall_at_100_std value: 21.22093617475044 - type: nauc_recall_at_10_diff1 value: 48.26433832406352 - type: nauc_recall_at_10_max value: 37.79745160062501 - type: nauc_recall_at_10_std value: 6.695186585419338 - type: nauc_recall_at_1_diff1 value: 61.06511889290507 - type: nauc_recall_at_1_max value: 37.14377732406466 - type: nauc_recall_at_1_std value: -3.0410115573638064 - type: nauc_recall_at_20_diff1 value: 44.50773149894022 - type: 
nauc_recall_at_20_max value: 38.219843285381856 - type: nauc_recall_at_20_std value: 8.199016503969196 - type: nauc_recall_at_3_diff1 value: 54.15714160224081 - type: nauc_recall_at_3_max value: 37.840736226935725 - type: nauc_recall_at_3_std value: 1.4933386616317446 - type: nauc_recall_at_5_diff1 value: 52.58026028311702 - type: nauc_recall_at_5_max value: 38.484030122838305 - type: nauc_recall_at_5_std value: 4.460832900300881 - type: ndcg_at_1 value: 29.601 - type: ndcg_at_10 value: 38.894 - type: ndcg_at_100 value: 44.04 - type: ndcg_at_1000 value: 46.382 - type: ndcg_at_20 value: 40.663 - type: ndcg_at_3 value: 34.236 - type: ndcg_at_5 value: 36.52 - type: precision_at_1 value: 29.601 - type: precision_at_10 value: 6.181 - type: precision_at_100 value: 0.9570000000000001 - type: precision_at_1000 value: 0.124 - type: precision_at_20 value: 3.566 - type: precision_at_3 value: 14.571000000000002 - type: precision_at_5 value: 10.337 - type: recall_at_1 value: 26.427 - type: recall_at_10 value: 50.214000000000006 - type: recall_at_100 value: 73.598 - type: recall_at_1000 value: 90.659 - type: recall_at_20 value: 56.842000000000006 - type: recall_at_3 value: 37.509 - type: recall_at_5 value: 43.061 - task: type: Retrieval dataset: name: MTEB CQADupstackTexRetrieval (default) type: mteb/cqadupstack-tex config: default split: test revision: 46989137a86843e03a6195de44b09deda022eec7 metrics: - type: main_score value: 31.662000000000003 - type: map_at_1 value: 18.723 - type: map_at_10 value: 26.701000000000004 - type: map_at_100 value: 27.828000000000003 - type: map_at_1000 value: 27.954 - type: map_at_20 value: 27.278000000000002 - type: map_at_3 value: 24.154 - type: map_at_5 value: 25.55 - type: mrr_at_1 value: 22.40192704748796 - type: mrr_at_10 value: 30.449357650837396 - type: mrr_at_100 value: 31.366622972747187 - type: mrr_at_1000 value: 31.435964919479986 - type: mrr_at_20 value: 30.927227547273077 - type: mrr_at_3 value: 28.05115852259698 - type: mrr_at_5 
value: 29.43106217022262 - type: nauc_map_at_1000_diff1 value: 40.29294330784634 - type: nauc_map_at_1000_max value: 30.36951944693726 - type: nauc_map_at_1000_std value: -0.3414834335787859 - type: nauc_map_at_100_diff1 value: 40.28120265458076 - type: nauc_map_at_100_max value: 30.36471186375651 - type: nauc_map_at_100_std value: -0.3335024521355652 - type: nauc_map_at_10_diff1 value: 40.54922279010274 - type: nauc_map_at_10_max value: 30.06425681128433 - type: nauc_map_at_10_std value: -0.9498753795017445 - type: nauc_map_at_1_diff1 value: 46.531783062841534 - type: nauc_map_at_1_max value: 27.458325853105315 - type: nauc_map_at_1_std value: -4.597119334637891 - type: nauc_map_at_20_diff1 value: 40.382854954927524 - type: nauc_map_at_20_max value: 30.250152473037033 - type: nauc_map_at_20_std value: -0.612621247842456 - type: nauc_map_at_3_diff1 value: 41.805903548458296 - type: nauc_map_at_3_max value: 29.902476093359216 - type: nauc_map_at_3_std value: -1.7418548848229358 - type: nauc_map_at_5_diff1 value: 40.971548027716956 - type: nauc_map_at_5_max value: 30.02180838754201 - type: nauc_map_at_5_std value: -1.341795240943666 - type: nauc_mrr_at_1000_diff1 value: 39.95697123995655 - type: nauc_mrr_at_1000_max value: 31.997575460481613 - type: nauc_mrr_at_1000_std value: 0.4064232742934565 - type: nauc_mrr_at_100_diff1 value: 39.93042399360589 - type: nauc_mrr_at_100_max value: 31.996106010277902 - type: nauc_mrr_at_100_std value: 0.42019195064055487 - type: nauc_mrr_at_10_diff1 value: 40.07298006475225 - type: nauc_mrr_at_10_max value: 31.919593394912855 - type: nauc_mrr_at_10_std value: 0.02350450115938819 - type: nauc_mrr_at_1_diff1 value: 45.92456630155256 - type: nauc_mrr_at_1_max value: 30.624291065723035 - type: nauc_mrr_at_1_std value: -3.080621621197733 - type: nauc_mrr_at_20_diff1 value: 39.961455846237456 - type: nauc_mrr_at_20_max value: 32.006415548052416 - type: nauc_mrr_at_20_std value: 0.3198094423486476 - type: nauc_mrr_at_3_diff1 value: 
41.32816822053059 - type: nauc_mrr_at_3_max value: 32.41066911321068 - type: nauc_mrr_at_3_std value: -0.6529950528921229 - type: nauc_mrr_at_5_diff1 value: 40.34219346934063 - type: nauc_mrr_at_5_max value: 32.04615580231512 - type: nauc_mrr_at_5_std value: -0.2914250580085147 - type: nauc_ndcg_at_1000_diff1 value: 37.474994751920576 - type: nauc_ndcg_at_1000_max value: 31.41222657464391 - type: nauc_ndcg_at_1000_std value: 3.240693443312849 - type: nauc_ndcg_at_100_diff1 value: 37.03474261474229 - type: nauc_ndcg_at_100_max value: 31.497431680733584 - type: nauc_ndcg_at_100_std value: 3.700027399857788 - type: nauc_ndcg_at_10_diff1 value: 38.03921533314436 - type: nauc_ndcg_at_10_max value: 30.78682453138251 - type: nauc_ndcg_at_10_std value: 0.8769594573808579 - type: nauc_ndcg_at_1_diff1 value: 45.92456630155256 - type: nauc_ndcg_at_1_max value: 30.624291065723035 - type: nauc_ndcg_at_1_std value: -3.080621621197733 - type: nauc_ndcg_at_20_diff1 value: 37.62104689563685 - type: nauc_ndcg_at_20_max value: 31.221003974077853 - type: nauc_ndcg_at_20_std value: 1.9883412769611548 - type: nauc_ndcg_at_3_diff1 value: 40.17572316262669 - type: nauc_ndcg_at_3_max value: 31.203439927044585 - type: nauc_ndcg_at_3_std value: -0.712868414940749 - type: nauc_ndcg_at_5_diff1 value: 38.848965800200695 - type: nauc_ndcg_at_5_max value: 30.90409092278334 - type: nauc_ndcg_at_5_std value: -0.07380105331601196 - type: nauc_precision_at_1000_diff1 value: 0.3488459536942616 - type: nauc_precision_at_1000_max value: 11.974221111911714 - type: nauc_precision_at_1000_std value: 5.545029664089995 - type: nauc_precision_at_100_diff1 value: 8.558130903347076 - type: nauc_precision_at_100_max value: 23.313159347579884 - type: nauc_precision_at_100_std value: 12.667615203365548 - type: nauc_precision_at_10_diff1 value: 23.1055686548991 - type: nauc_precision_at_10_max value: 30.62764918957524 - type: nauc_precision_at_10_std value: 5.655860099998371 - type: nauc_precision_at_1_diff1 value: 
45.92456630155256 - type: nauc_precision_at_1_max value: 30.624291065723035 - type: nauc_precision_at_1_std value: -3.080621621197733 - type: nauc_precision_at_20_diff1 value: 19.02845795878823 - type: nauc_precision_at_20_max value: 29.986698288034308 - type: nauc_precision_at_20_std value: 8.65839413322005 - type: nauc_precision_at_3_diff1 value: 33.537119810716284 - type: nauc_precision_at_3_max value: 33.88768604457864 - type: nauc_precision_at_3_std value: 2.2581668899844054 - type: nauc_precision_at_5_diff1 value: 28.667111448412143 - type: nauc_precision_at_5_max value: 32.707947446614234 - type: nauc_precision_at_5_std value: 3.633065285428966 - type: nauc_recall_at_1000_diff1 value: 19.260215950407126 - type: nauc_recall_at_1000_max value: 29.880298126037186 - type: nauc_recall_at_1000_std value: 29.313220294243376 - type: nauc_recall_at_100_diff1 value: 22.618647334080375 - type: nauc_recall_at_100_max value: 30.06708168274523 - type: nauc_recall_at_100_std value: 19.578709404274342 - type: nauc_recall_at_10_diff1 value: 29.745906783751813 - type: nauc_recall_at_10_max value: 28.613864193571125 - type: nauc_recall_at_10_std value: 4.836841344636072 - type: nauc_recall_at_1_diff1 value: 46.531783062841534 - type: nauc_recall_at_1_max value: 27.458325853105315 - type: nauc_recall_at_1_std value: -4.597119334637891 - type: nauc_recall_at_20_diff1 value: 28.092320196353327 - type: nauc_recall_at_20_max value: 29.617127996080235 - type: nauc_recall_at_20_std value: 8.59271280643495 - type: nauc_recall_at_3_diff1 value: 35.81724087499039 - type: nauc_recall_at_3_max value: 30.1701581709378 - type: nauc_recall_at_3_std value: 1.038654228057759 - type: nauc_recall_at_5_diff1 value: 32.38568644423286 - type: nauc_recall_at_5_max value: 29.263454173692914 - type: nauc_recall_at_5_std value: 2.1188458895997964 - type: ndcg_at_1 value: 22.402 - type: ndcg_at_10 value: 31.662000000000003 - type: ndcg_at_100 value: 37.065 - type: ndcg_at_1000 value: 39.864 - type: 
ndcg_at_20 value: 33.533 - type: ndcg_at_3 value: 27.131 - type: ndcg_at_5 value: 29.223 - type: precision_at_1 value: 22.402 - type: precision_at_10 value: 5.7669999999999995 - type: precision_at_100 value: 0.992 - type: precision_at_1000 value: 0.14200000000000002 - type: precision_at_20 value: 3.45 - type: precision_at_3 value: 12.801000000000002 - type: precision_at_5 value: 9.277000000000001 - type: recall_at_1 value: 18.723 - type: recall_at_10 value: 42.738 - type: recall_at_100 value: 67.066 - type: recall_at_1000 value: 86.825 - type: recall_at_20 value: 49.641999999999996 - type: recall_at_3 value: 30.176 - type: recall_at_5 value: 35.5 - task: type: Retrieval dataset: name: MTEB CQADupstackUnixRetrieval (default) type: mteb/cqadupstack-unix config: default split: test revision: 6c6430d3a6d36f8d2a829195bc5dc94d7e063e53 metrics: - type: main_score value: 44.449 - type: map_at_1 value: 28.502 - type: map_at_10 value: 38.763 - type: map_at_100 value: 39.904 - type: map_at_1000 value: 40.003 - type: map_at_20 value: 39.379 - type: map_at_3 value: 35.367 - type: map_at_5 value: 37.480000000000004 - type: mrr_at_1 value: 33.2089552238806 - type: mrr_at_10 value: 42.56833392561001 - type: mrr_at_100 value: 43.42198225922794 - type: mrr_at_1000 value: 43.47454573766307 - type: mrr_at_20 value: 43.07361302641885 - type: mrr_at_3 value: 39.69216417910445 - type: mrr_at_5 value: 41.55783582089548 - type: nauc_map_at_1000_diff1 value: 51.09337515742835 - type: nauc_map_at_1000_max value: 45.11970808684597 - type: nauc_map_at_1000_std value: -0.3050907542347147 - type: nauc_map_at_100_diff1 value: 51.06721661390107 - type: nauc_map_at_100_max value: 45.10647782134187 - type: nauc_map_at_100_std value: -0.3098521100683701 - type: nauc_map_at_10_diff1 value: 51.09634307647701 - type: nauc_map_at_10_max value: 44.88824042512123 - type: nauc_map_at_10_std value: -0.738023844952336 - type: nauc_map_at_1_diff1 value: 58.91703951287665 - type: nauc_map_at_1_max value: 
45.72426414838986 - type: nauc_map_at_1_std value: -4.450728836265055 - type: nauc_map_at_20_diff1 value: 51.07388859373564 - type: nauc_map_at_20_max value: 45.00318357068444 - type: nauc_map_at_20_std value: -0.4592556029173754 - type: nauc_map_at_3_diff1 value: 52.42891770025886 - type: nauc_map_at_3_max value: 44.64071416768749 - type: nauc_map_at_3_std value: -1.973140517009083 - type: nauc_map_at_5_diff1 value: 51.46402142721789 - type: nauc_map_at_5_max value: 44.626241564092766 - type: nauc_map_at_5_std value: -1.3987944859200176 - type: nauc_mrr_at_1000_diff1 value: 49.56702747138606 - type: nauc_mrr_at_1000_max value: 44.979023748989455 - type: nauc_mrr_at_1000_std value: 0.25357932059734145 - type: nauc_mrr_at_100_diff1 value: 49.55224379363242 - type: nauc_mrr_at_100_max value: 44.97552508561541 - type: nauc_mrr_at_100_std value: 0.2748073187838927 - type: nauc_mrr_at_10_diff1 value: 49.39262295091568 - type: nauc_mrr_at_10_max value: 44.86831322043138 - type: nauc_mrr_at_10_std value: -0.04250684838053287 - type: nauc_mrr_at_1_diff1 value: 56.601138443656374 - type: nauc_mrr_at_1_max value: 46.155192599962 - type: nauc_mrr_at_1_std value: -3.841997988555605 - type: nauc_mrr_at_20_diff1 value: 49.48965201514485 - type: nauc_mrr_at_20_max value: 44.95960437502683 - type: nauc_mrr_at_20_std value: 0.26731422621033557 - type: nauc_mrr_at_3_diff1 value: 50.29192393046979 - type: nauc_mrr_at_3_max value: 45.211752965469316 - type: nauc_mrr_at_3_std value: -0.815057190995277 - type: nauc_mrr_at_5_diff1 value: 49.351603311309944 - type: nauc_mrr_at_5_max value: 44.88983601960641 - type: nauc_mrr_at_5_std value: -0.20982880810105417 - type: nauc_ndcg_at_1000_diff1 value: 48.58354462551937 - type: nauc_ndcg_at_1000_max value: 45.35705584395072 - type: nauc_ndcg_at_1000_std value: 2.54888435337591 - type: nauc_ndcg_at_100_diff1 value: 47.83163408000412 - type: nauc_ndcg_at_100_max value: 45.0343949365134 - type: nauc_ndcg_at_100_std value: 2.980663545406531 - 
type: nauc_ndcg_at_10_diff1 value: 47.7815366065242 - type: nauc_ndcg_at_10_max value: 44.36773394568082 - type: nauc_ndcg_at_10_std value: 1.02609790224527 - type: nauc_ndcg_at_1_diff1 value: 56.601138443656374 - type: nauc_ndcg_at_1_max value: 46.155192599962 - type: nauc_ndcg_at_1_std value: -3.841997988555605 - type: nauc_ndcg_at_20_diff1 value: 47.811909658082875 - type: nauc_ndcg_at_20_max value: 44.75137852464418 - type: nauc_ndcg_at_20_std value: 2.134275377210533 - type: nauc_ndcg_at_3_diff1 value: 49.47165833829449 - type: nauc_ndcg_at_3_max value: 44.262246595483504 - type: nauc_ndcg_at_3_std value: -0.7284730096045571 - type: nauc_ndcg_at_5_diff1 value: 48.32213730788881 - type: nauc_ndcg_at_5_max value: 44.132802200940915 - type: nauc_ndcg_at_5_std value: -0.08748854908072565 - type: nauc_precision_at_1000_diff1 value: -12.118988897199308 - type: nauc_precision_at_1000_max value: -0.7874363151972603 - type: nauc_precision_at_1000_std value: 8.882438027481804 - type: nauc_precision_at_100_diff1 value: -1.5152805469221087 - type: nauc_precision_at_100_max value: 14.090477325838059 - type: nauc_precision_at_100_std value: 14.149937999086665 - type: nauc_precision_at_10_diff1 value: 17.801742598469346 - type: nauc_precision_at_10_max value: 30.090739958907363 - type: nauc_precision_at_10_std value: 8.436791801910433 - type: nauc_precision_at_1_diff1 value: 56.601138443656374 - type: nauc_precision_at_1_max value: 46.155192599962 - type: nauc_precision_at_1_std value: -3.841997988555605 - type: nauc_precision_at_20_diff1 value: 12.84761699215353 - type: nauc_precision_at_20_max value: 26.67211391302849 - type: nauc_precision_at_20_std value: 11.133320866028658 - type: nauc_precision_at_3_diff1 value: 34.16116836040259 - type: nauc_precision_at_3_max value: 38.22148520643311 - type: nauc_precision_at_3_std value: 2.5818944979518905 - type: nauc_precision_at_5_diff1 value: 26.530376251979483 - type: nauc_precision_at_5_max value: 34.69034452388472 - type: 
nauc_precision_at_5_std value: 4.676074349833495 - type: nauc_recall_at_1000_diff1 value: 28.911934383429955 - type: nauc_recall_at_1000_max value: 50.212785017522506 - type: nauc_recall_at_1000_std value: 42.3629198766138 - type: nauc_recall_at_100_diff1 value: 31.381571692996857 - type: nauc_recall_at_100_max value: 41.01191885765792 - type: nauc_recall_at_100_std value: 20.857143634593037 - type: nauc_recall_at_10_diff1 value: 37.163994333372706 - type: nauc_recall_at_10_max value: 39.94892539019631 - type: nauc_recall_at_10_std value: 5.290418976361259 - type: nauc_recall_at_1_diff1 value: 58.91703951287665 - type: nauc_recall_at_1_max value: 45.72426414838986 - type: nauc_recall_at_1_std value: -4.450728836265055 - type: nauc_recall_at_20_diff1 value: 35.99281443407049 - type: nauc_recall_at_20_max value: 40.83481293624789 - type: nauc_recall_at_20_std value: 10.3889242981396 - type: nauc_recall_at_3_diff1 value: 44.15971877810932 - type: nauc_recall_at_3_max value: 41.75661191827119 - type: nauc_recall_at_3_std value: 0.22409370715719445 - type: nauc_recall_at_5_diff1 value: 39.79497306179428 - type: nauc_recall_at_5_max value: 40.39551747161536 - type: nauc_recall_at_5_std value: 2.3509968624532975 - type: ndcg_at_1 value: 33.209 - type: ndcg_at_10 value: 44.449 - type: ndcg_at_100 value: 49.541000000000004 - type: ndcg_at_1000 value: 51.66 - type: ndcg_at_20 value: 46.361000000000004 - type: ndcg_at_3 value: 38.61 - type: ndcg_at_5 value: 41.802 - type: precision_at_1 value: 33.209 - type: precision_at_10 value: 7.556 - type: precision_at_100 value: 1.124 - type: precision_at_1000 value: 0.14200000000000002 - type: precision_at_20 value: 4.314 - type: precision_at_3 value: 17.506 - type: precision_at_5 value: 12.668 - type: recall_at_1 value: 28.502 - type: recall_at_10 value: 57.781000000000006 - type: recall_at_100 value: 79.831 - type: recall_at_1000 value: 94.462 - type: recall_at_20 value: 64.565 - type: recall_at_3 value: 42.229 - type: recall_at_5 
value: 50.144 - task: type: Retrieval dataset: name: MTEB CQADupstackWebmastersRetrieval (default) type: mteb/cqadupstack-webmasters config: default split: test revision: 160c094312a0e1facb97e55eeddb698c0abe3571 metrics: - type: main_score value: 43.342000000000006 - type: map_at_1 value: 26.924999999999997 - type: map_at_10 value: 37.29 - type: map_at_100 value: 38.906 - type: map_at_1000 value: 39.129999999999995 - type: map_at_20 value: 38.1 - type: map_at_3 value: 34.35 - type: map_at_5 value: 35.955999999999996 - type: mrr_at_1 value: 32.21343873517787 - type: mrr_at_10 value: 41.73168015559322 - type: mrr_at_100 value: 42.81722926727115 - type: mrr_at_1000 value: 42.8578246510941 - type: mrr_at_20 value: 42.40869869880568 - type: mrr_at_3 value: 39.32806324110674 - type: mrr_at_5 value: 40.632411067193694 - type: nauc_map_at_1000_diff1 value: 39.56475855970276 - type: nauc_map_at_1000_max value: 38.177848390247235 - type: nauc_map_at_1000_std value: 6.9335353798460675 - type: nauc_map_at_100_diff1 value: 39.516419907405805 - type: nauc_map_at_100_max value: 38.36036331899402 - type: nauc_map_at_100_std value: 6.831480671192796 - type: nauc_map_at_10_diff1 value: 39.48256807532493 - type: nauc_map_at_10_max value: 38.197849516463194 - type: nauc_map_at_10_std value: 5.253983146776727 - type: nauc_map_at_1_diff1 value: 46.567254266614846 - type: nauc_map_at_1_max value: 37.732540483896635 - type: nauc_map_at_1_std value: 2.2489282023963955 - type: nauc_map_at_20_diff1 value: 39.378259059028046 - type: nauc_map_at_20_max value: 38.2189463642111 - type: nauc_map_at_20_std value: 6.056542688093049 - type: nauc_map_at_3_diff1 value: 40.40449060760161 - type: nauc_map_at_3_max value: 37.99871952048906 - type: nauc_map_at_3_std value: 3.4100661197624476 - type: nauc_map_at_5_diff1 value: 40.1519126124995 - type: nauc_map_at_5_max value: 37.95919343694378 - type: nauc_map_at_5_std value: 4.571457569129526 - type: nauc_mrr_at_1000_diff1 value: 38.64403309308046 - type: 
nauc_mrr_at_1000_max value: 37.17525534091487 - type: nauc_mrr_at_1000_std value: 8.438248626531607 - type: nauc_mrr_at_100_diff1 value: 38.62191111052577 - type: nauc_mrr_at_100_max value: 37.16381346460307 - type: nauc_mrr_at_100_std value: 8.473494626840806 - type: nauc_mrr_at_10_diff1 value: 38.66598566082418 - type: nauc_mrr_at_10_max value: 37.356872781907384 - type: nauc_mrr_at_10_std value: 8.494041634436822 - type: nauc_mrr_at_1_diff1 value: 43.817215062943916 - type: nauc_mrr_at_1_max value: 37.39185593941398 - type: nauc_mrr_at_1_std value: 7.28642602050739 - type: nauc_mrr_at_20_diff1 value: 38.44686926468191 - type: nauc_mrr_at_20_max value: 37.09242707803003 - type: nauc_mrr_at_20_std value: 8.336904051478186 - type: nauc_mrr_at_3_diff1 value: 38.15060994005348 - type: nauc_mrr_at_3_max value: 36.815987651583306 - type: nauc_mrr_at_3_std value: 6.854787905916098 - type: nauc_mrr_at_5_diff1 value: 38.89757601751886 - type: nauc_mrr_at_5_max value: 37.19178420763993 - type: nauc_mrr_at_5_std value: 7.704930194711135 - type: nauc_ndcg_at_1000_diff1 value: 37.52136803315935 - type: nauc_ndcg_at_1000_max value: 38.92408944416557 - type: nauc_ndcg_at_1000_std value: 10.871928230197692 - type: nauc_ndcg_at_100_diff1 value: 37.13360414141896 - type: nauc_ndcg_at_100_max value: 39.0053807375677 - type: nauc_ndcg_at_100_std value: 11.489300764908352 - type: nauc_ndcg_at_10_diff1 value: 36.90485505709437 - type: nauc_ndcg_at_10_max value: 37.617894869105406 - type: nauc_ndcg_at_10_std value: 8.905497675458868 - type: nauc_ndcg_at_1_diff1 value: 43.817215062943916 - type: nauc_ndcg_at_1_max value: 37.39185593941398 - type: nauc_ndcg_at_1_std value: 7.28642602050739 - type: nauc_ndcg_at_20_diff1 value: 36.48691469681143 - type: nauc_ndcg_at_20_max value: 37.621472858058546 - type: nauc_ndcg_at_20_std value: 9.632107687173814 - type: nauc_ndcg_at_3_diff1 value: 37.5454366452348 - type: nauc_ndcg_at_3_max value: 37.26941098955138 - type: nauc_ndcg_at_3_std value: 
6.299967228476719 - type: nauc_ndcg_at_5_diff1 value: 38.11812602665602 - type: nauc_ndcg_at_5_max value: 37.1666041307787 - type: nauc_ndcg_at_5_std value: 7.918994950799998 - type: nauc_precision_at_1000_diff1 value: -2.2969824543205806 - type: nauc_precision_at_1000_max value: -15.419366952284975 - type: nauc_precision_at_1000_std value: 19.12966399374656 - type: nauc_precision_at_100_diff1 value: -1.021770567948756 - type: nauc_precision_at_100_max value: -1.8775299175206996 - type: nauc_precision_at_100_std value: 27.24690244968834 - type: nauc_precision_at_10_diff1 value: 10.980118436692694 - type: nauc_precision_at_10_max value: 22.43559969209056 - type: nauc_precision_at_10_std value: 23.820891112348573 - type: nauc_precision_at_1_diff1 value: 43.817215062943916 - type: nauc_precision_at_1_max value: 37.39185593941398 - type: nauc_precision_at_1_std value: 7.28642602050739 - type: nauc_precision_at_20_diff1 value: 4.804175264538657 - type: nauc_precision_at_20_max value: 15.499790519728988 - type: nauc_precision_at_20_std value: 29.509462091568256 - type: nauc_precision_at_3_diff1 value: 21.43695233004016 - type: nauc_precision_at_3_max value: 31.880319956722815 - type: nauc_precision_at_3_std value: 13.059502909551176 - type: nauc_precision_at_5_diff1 value: 18.363478978651912 - type: nauc_precision_at_5_max value: 27.088121521248816 - type: nauc_precision_at_5_std value: 18.341614521330147 - type: nauc_recall_at_1000_diff1 value: 31.26577486561114 - type: nauc_recall_at_1000_max value: 64.08514957152025 - type: nauc_recall_at_1000_std value: 59.55425703698939 - type: nauc_recall_at_100_diff1 value: 26.66049572028577 - type: nauc_recall_at_100_max value: 43.36087610846491 - type: nauc_recall_at_100_std value: 35.593597922216865 - type: nauc_recall_at_10_diff1 value: 27.7772025462008 - type: nauc_recall_at_10_max value: 35.99035214574843 - type: nauc_recall_at_10_std value: 12.180058133691604 - type: nauc_recall_at_1_diff1 value: 46.567254266614846 - type: 
nauc_recall_at_1_max value: 37.732540483896635 - type: nauc_recall_at_1_std value: 2.2489282023963955 - type: nauc_recall_at_20_diff1 value: 25.280727909671363 - type: nauc_recall_at_20_max value: 34.24681065861685 - type: nauc_recall_at_20_std value: 16.674472276356063 - type: nauc_recall_at_3_diff1 value: 32.639943281033354 - type: nauc_recall_at_3_max value: 35.48634586230576 - type: nauc_recall_at_3_std value: 2.7588471369487557 - type: nauc_recall_at_5_diff1 value: 32.46681634072349 - type: nauc_recall_at_5_max value: 35.526045994502745 - type: nauc_recall_at_5_std value: 6.660060598477094 - type: ndcg_at_1 value: 32.213 - type: ndcg_at_10 value: 43.342000000000006 - type: ndcg_at_100 value: 49.484 - type: ndcg_at_1000 value: 51.507999999999996 - type: ndcg_at_20 value: 45.614 - type: ndcg_at_3 value: 38.84 - type: ndcg_at_5 value: 40.894999999999996 - type: precision_at_1 value: 32.213 - type: precision_at_10 value: 8.103 - type: precision_at_100 value: 1.625 - type: precision_at_1000 value: 0.246 - type: precision_at_20 value: 5.0889999999999995 - type: precision_at_3 value: 18.379 - type: precision_at_5 value: 13.123000000000001 - type: recall_at_1 value: 26.924999999999997 - type: recall_at_10 value: 55.249 - type: recall_at_100 value: 82.34 - type: recall_at_1000 value: 94.368 - type: recall_at_20 value: 63.757 - type: recall_at_3 value: 42.062 - type: recall_at_5 value: 47.615 - task: type: Retrieval dataset: name: MTEB CQADupstackWordpressRetrieval (default) type: mteb/cqadupstack-wordpress config: default split: test revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4 metrics: - type: main_score value: 35.774 - type: map_at_1 value: 22.779 - type: map_at_10 value: 30.747000000000003 - type: map_at_100 value: 31.783 - type: map_at_1000 value: 31.872 - type: map_at_20 value: 31.274 - type: map_at_3 value: 27.96 - type: map_at_5 value: 29.537999999999997 - type: mrr_at_1 value: 25.13863216266174 - type: mrr_at_10 value: 33.15311152187305 - type: mrr_at_100 
value: 34.0591979023387 - type: mrr_at_1000 value: 34.120150414093445 - type: mrr_at_20 value: 33.62239000549977 - type: mrr_at_3 value: 30.499075785582253 - type: mrr_at_5 value: 32.01478743068391 - type: nauc_map_at_1000_diff1 value: 40.9666267178634 - type: nauc_map_at_1000_max value: 35.512382177489464 - type: nauc_map_at_1000_std value: 0.6224247525822328 - type: nauc_map_at_100_diff1 value: 40.935071530613016 - type: nauc_map_at_100_max value: 35.4689334665505 - type: nauc_map_at_100_std value: 0.5881898556397818 - type: nauc_map_at_10_diff1 value: 41.09587027828798 - type: nauc_map_at_10_max value: 35.57960780251561 - type: nauc_map_at_10_std value: 0.21238247793288179 - type: nauc_map_at_1_diff1 value: 46.79723740072981 - type: nauc_map_at_1_max value: 37.68968438458517 - type: nauc_map_at_1_std value: -2.785325878591901 - type: nauc_map_at_20_diff1 value: 41.02661398711254 - type: nauc_map_at_20_max value: 35.61017903374831 - type: nauc_map_at_20_std value: 0.45478618525492803 - type: nauc_map_at_3_diff1 value: 41.89999642256378 - type: nauc_map_at_3_max value: 35.97333460925634 - type: nauc_map_at_3_std value: -1.0669866710282385 - type: nauc_map_at_5_diff1 value: 41.18936334778094 - type: nauc_map_at_5_max value: 35.651730615108626 - type: nauc_map_at_5_std value: 0.011285859606637189 - type: nauc_mrr_at_1000_diff1 value: 41.39497842969287 - type: nauc_mrr_at_1000_max value: 36.819933081607815 - type: nauc_mrr_at_1000_std value: 1.8448453831538831 - type: nauc_mrr_at_100_diff1 value: 41.37543086993865 - type: nauc_mrr_at_100_max value: 36.79132840589643 - type: nauc_mrr_at_100_std value: 1.838173324273364 - type: nauc_mrr_at_10_diff1 value: 41.470486495069444 - type: nauc_mrr_at_10_max value: 36.94185193360758 - type: nauc_mrr_at_10_std value: 1.594158944434542 - type: nauc_mrr_at_1_diff1 value: 46.9714558985573 - type: nauc_mrr_at_1_max value: 39.30041031657009 - type: nauc_mrr_at_1_std value: -1.4033670246089662 - type: nauc_mrr_at_20_diff1 value: 
41.43921225771939 - type: nauc_mrr_at_20_max value: 36.940835903316156 - type: nauc_mrr_at_20_std value: 1.8059880944253215 - type: nauc_mrr_at_3_diff1 value: 42.56076877140861 - type: nauc_mrr_at_3_max value: 37.4774293466681 - type: nauc_mrr_at_3_std value: 0.38144918993605603 - type: nauc_mrr_at_5_diff1 value: 41.6764265802116 - type: nauc_mrr_at_5_max value: 37.16536369010265 - type: nauc_mrr_at_5_std value: 1.5570583318968474 - type: nauc_ndcg_at_1000_diff1 value: 38.41316402205857 - type: nauc_ndcg_at_1000_max value: 34.85354630049824 - type: nauc_ndcg_at_1000_std value: 4.1987917490658795 - type: nauc_ndcg_at_100_diff1 value: 37.86931389576125 - type: nauc_ndcg_at_100_max value: 33.82378543079163 - type: nauc_ndcg_at_100_std value: 3.71084103573832 - type: nauc_ndcg_at_10_diff1 value: 38.69891586370789 - type: nauc_ndcg_at_10_max value: 34.69158263560064 - type: nauc_ndcg_at_10_std value: 2.1218981018673686 - type: nauc_ndcg_at_1_diff1 value: 46.9714558985573 - type: nauc_ndcg_at_1_max value: 39.30041031657009 - type: nauc_ndcg_at_1_std value: -1.4033670246089662 - type: nauc_ndcg_at_20_diff1 value: 38.363883413392486 - type: nauc_ndcg_at_20_max value: 34.667105813813535 - type: nauc_ndcg_at_20_std value: 2.8624626654781267 - type: nauc_ndcg_at_3_diff1 value: 40.5184686636588 - type: nauc_ndcg_at_3_max value: 36.186749852210276 - type: nauc_ndcg_at_3_std value: 0.09474904645558901 - type: nauc_ndcg_at_5_diff1 value: 39.24674105485247 - type: nauc_ndcg_at_5_max value: 35.322707726631805 - type: nauc_ndcg_at_5_std value: 1.7788731747792517 - type: nauc_precision_at_1000_diff1 value: 0.840976692854083 - type: nauc_precision_at_1000_max value: 3.261240112540733 - type: nauc_precision_at_1000_std value: 13.248030938023359 - type: nauc_precision_at_100_diff1 value: 10.072671120295702 - type: nauc_precision_at_100_max value: 17.240545350712175 - type: nauc_precision_at_100_std value: 20.314577652155904 - type: nauc_precision_at_10_diff1 value: 27.23270077955099 - 
type: nauc_precision_at_10_max value: 31.79041137664875 - type: nauc_precision_at_10_std value: 12.36209307812828 - type: nauc_precision_at_1_diff1 value: 46.9714558985573 - type: nauc_precision_at_1_max value: 39.30041031657009 - type: nauc_precision_at_1_std value: -1.4033670246089662 - type: nauc_precision_at_20_diff1 value: 23.795751404068003 - type: nauc_precision_at_20_max value: 29.82598945857867 - type: nauc_precision_at_20_std value: 14.92149587103534 - type: nauc_precision_at_3_diff1 value: 35.61737074241893 - type: nauc_precision_at_3_max value: 36.40376544125899 - type: nauc_precision_at_3_std value: 3.957970514402529 - type: nauc_precision_at_5_diff1 value: 30.87385523346844 - type: nauc_precision_at_5_max value: 34.27637004357153 - type: nauc_precision_at_5_std value: 9.030928793088314 - type: nauc_recall_at_1000_diff1 value: 11.601671012375652 - type: nauc_recall_at_1000_max value: 26.78951022752499 - type: nauc_recall_at_1000_std value: 40.83415411964083 - type: nauc_recall_at_100_diff1 value: 21.74556181581925 - type: nauc_recall_at_100_max value: 20.184610136900506 - type: nauc_recall_at_100_std value: 14.965834834698247 - type: nauc_recall_at_10_diff1 value: 30.115838102082716 - type: nauc_recall_at_10_max value: 29.06496783929028 - type: nauc_recall_at_10_std value: 5.597874206979672 - type: nauc_recall_at_1_diff1 value: 46.79723740072981 - type: nauc_recall_at_1_max value: 37.68968438458517 - type: nauc_recall_at_1_std value: -2.785325878591901 - type: nauc_recall_at_20_diff1 value: 28.02766014573457 - type: nauc_recall_at_20_max value: 28.239856197087665 - type: nauc_recall_at_20_std value: 8.29181316012612 - type: nauc_recall_at_3_diff1 value: 35.432867785333514 - type: nauc_recall_at_3_max value: 34.103779675298135 - type: nauc_recall_at_3_std value: 0.7732759979316737 - type: nauc_recall_at_5_diff1 value: 32.427691466534284 - type: nauc_recall_at_5_max value: 31.865805435351113 - type: nauc_recall_at_5_std value: 4.798978447571004 - type: 
ndcg_at_1 value: 25.139 - type: ndcg_at_10 value: 35.774 - type: ndcg_at_100 value: 40.96 - type: ndcg_at_1000 value: 43.246 - type: ndcg_at_20 value: 37.556 - type: ndcg_at_3 value: 30.312 - type: ndcg_at_5 value: 32.99 - type: precision_at_1 value: 25.139 - type: precision_at_10 value: 5.638 - type: precision_at_100 value: 0.889 - type: precision_at_1000 value: 0.11900000000000001 - type: precision_at_20 value: 3.235 - type: precision_at_3 value: 12.815999999999999 - type: precision_at_5 value: 9.316 - type: recall_at_1 value: 22.779 - type: recall_at_10 value: 49.199 - type: recall_at_100 value: 73.063 - type: recall_at_1000 value: 90.239 - type: recall_at_20 value: 55.92700000000001 - type: recall_at_3 value: 34.187 - type: recall_at_5 value: 40.792 - task: type: Retrieval dataset: name: MTEB ClimateFEVER (default) type: mteb/climate-fever config: default split: test revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380 metrics: - type: main_score value: 38.916000000000004 - type: map_at_1 value: 17.651 - type: map_at_10 value: 29.168 - type: map_at_100 value: 31.227 - type: map_at_1000 value: 31.408 - type: map_at_20 value: 30.307000000000002 - type: map_at_3 value: 24.647 - type: map_at_5 value: 26.951999999999998 - type: mrr_at_1 value: 40.78175895765472 - type: mrr_at_10 value: 51.68646915878195 - type: mrr_at_100 value: 52.40689532965702 - type: mrr_at_1000 value: 52.43209269167376 - type: mrr_at_20 value: 52.154150109013976 - type: mrr_at_3 value: 48.88165038002178 - type: mrr_at_5 value: 50.595005428881755 - type: nauc_map_at_1000_diff1 value: 25.616803628427053 - type: nauc_map_at_1000_max value: 38.197304271991534 - type: nauc_map_at_1000_std value: 19.5031903830227 - type: nauc_map_at_100_diff1 value: 25.57833406545184 - type: nauc_map_at_100_max value: 38.14502692959517 - type: nauc_map_at_100_std value: 19.44077348129036 - type: nauc_map_at_10_diff1 value: 25.95091383785147 - type: nauc_map_at_10_max value: 37.4399376489563 - type: nauc_map_at_10_std 
value: 17.947548047679657 - type: nauc_map_at_1_diff1 value: 34.7369005540742 - type: nauc_map_at_1_max value: 31.66881962992226 - type: nauc_map_at_1_std value: 8.246764177334914 - type: nauc_map_at_20_diff1 value: 25.495779918015018 - type: nauc_map_at_20_max value: 37.929125739632724 - type: nauc_map_at_20_std value: 18.793855321849914 - type: nauc_map_at_3_diff1 value: 27.187355367399856 - type: nauc_map_at_3_max value: 34.81150705282639 - type: nauc_map_at_3_std value: 14.014713401966459 - type: nauc_map_at_5_diff1 value: 26.08862309808681 - type: nauc_map_at_5_max value: 36.53941111535009 - type: nauc_map_at_5_std value: 16.116511338225646 - type: nauc_mrr_at_1000_diff1 value: 26.59855134120458 - type: nauc_mrr_at_1000_max value: 35.77488039055326 - type: nauc_mrr_at_1000_std value: 20.7389528120806 - type: nauc_mrr_at_100_diff1 value: 26.590778113868673 - type: nauc_mrr_at_100_max value: 35.78358772121562 - type: nauc_mrr_at_100_std value: 20.744288172404808 - type: nauc_mrr_at_10_diff1 value: 26.5870707300715 - type: nauc_mrr_at_10_max value: 35.913843868573636 - type: nauc_mrr_at_10_std value: 20.976090226892623 - type: nauc_mrr_at_1_diff1 value: 29.56564017983464 - type: nauc_mrr_at_1_max value: 31.301768011417288 - type: nauc_mrr_at_1_std value: 14.75858762264703 - type: nauc_mrr_at_20_diff1 value: 26.56231125681433 - type: nauc_mrr_at_20_max value: 35.86261857417216 - type: nauc_mrr_at_20_std value: 20.800435951726282 - type: nauc_mrr_at_3_diff1 value: 25.559942762135485 - type: nauc_mrr_at_3_max value: 33.97715426818164 - type: nauc_mrr_at_3_std value: 19.351416325209865 - type: nauc_mrr_at_5_diff1 value: 26.141041525037817 - type: nauc_mrr_at_5_max value: 35.71438745282619 - type: nauc_mrr_at_5_std value: 20.74875586641808 - type: nauc_ndcg_at_1000_diff1 value: 25.041788617432932 - type: nauc_ndcg_at_1000_max value: 41.54576923132739 - type: nauc_ndcg_at_1000_std value: 26.51151915620546 - type: nauc_ndcg_at_100_diff1 value: 24.43191211493594 - type: 
nauc_ndcg_at_100_max value: 40.847650283984564 - type: nauc_ndcg_at_100_std value: 25.75277697297615 - type: nauc_ndcg_at_10_diff1 value: 25.233390869628174 - type: nauc_ndcg_at_10_max value: 39.62949324017721 - type: nauc_ndcg_at_10_std value: 22.2244036894323 - type: nauc_ndcg_at_1_diff1 value: 29.56564017983464 - type: nauc_ndcg_at_1_max value: 31.301768011417288 - type: nauc_ndcg_at_1_std value: 14.75858762264703 - type: nauc_ndcg_at_20_diff1 value: 24.27597965113978 - type: nauc_ndcg_at_20_max value: 40.393728924358356 - type: nauc_ndcg_at_20_std value: 23.674954170697884 - type: nauc_ndcg_at_3_diff1 value: 24.922976501121497 - type: nauc_ndcg_at_3_max value: 35.03015688782362 - type: nauc_ndcg_at_3_std value: 17.155078928887757 - type: nauc_ndcg_at_5_diff1 value: 24.781977823206624 - type: nauc_ndcg_at_5_max value: 38.07227204290295 - type: nauc_ndcg_at_5_std value: 19.693694672125837 - type: nauc_precision_at_1000_diff1 value: -4.115704930962564 - type: nauc_precision_at_1000_max value: 11.647989646622849 - type: nauc_precision_at_1000_std value: 25.566852614568838 - type: nauc_precision_at_100_diff1 value: 0.5157774949932177 - type: nauc_precision_at_100_max value: 21.45532828240429 - type: nauc_precision_at_100_std value: 30.553114749973965 - type: nauc_precision_at_10_diff1 value: 9.34584765889552 - type: nauc_precision_at_10_max value: 32.16000278371526 - type: nauc_precision_at_10_std value: 29.35892375659281 - type: nauc_precision_at_1_diff1 value: 29.56564017983464 - type: nauc_precision_at_1_max value: 31.301768011417288 - type: nauc_precision_at_1_std value: 14.75858762264703 - type: nauc_precision_at_20_diff1 value: 4.9990736206660396 - type: nauc_precision_at_20_max value: 29.872088450680923 - type: nauc_precision_at_20_std value: 30.216489116173488 - type: nauc_precision_at_3_diff1 value: 12.798858008292857 - type: nauc_precision_at_3_max value: 32.78603926269799 - type: nauc_precision_at_3_std value: 23.721222519146444 - type: 
nauc_precision_at_5_diff1 value: 10.229001376896228 - type: nauc_precision_at_5_max value: 34.26562428041649 - type: nauc_precision_at_5_std value: 27.123249202755378 - type: nauc_recall_at_1000_diff1 value: 15.410365830541176 - type: nauc_recall_at_1000_max value: 49.072553664240615 - type: nauc_recall_at_1000_std value: 49.891439906063205 - type: nauc_recall_at_100_diff1 value: 14.456552580056254 - type: nauc_recall_at_100_max value: 39.36987722093516 - type: nauc_recall_at_100_std value: 34.38344967422128 - type: nauc_recall_at_10_diff1 value: 20.23507425095135 - type: nauc_recall_at_10_max value: 39.51589936709692 - type: nauc_recall_at_10_std value: 24.500141888364887 - type: nauc_recall_at_1_diff1 value: 34.7369005540742 - type: nauc_recall_at_1_max value: 31.66881962992226 - type: nauc_recall_at_1_std value: 8.246764177334914 - type: nauc_recall_at_20_diff1 value: 16.101425670461474 - type: nauc_recall_at_20_max value: 39.169188223543586 - type: nauc_recall_at_20_std value: 26.926527703712676 - type: nauc_recall_at_3_diff1 value: 22.562156632821342 - type: nauc_recall_at_3_max value: 35.43366423709469 - type: nauc_recall_at_3_std value: 17.267094045670074 - type: nauc_recall_at_5_diff1 value: 20.62436789996695 - type: nauc_recall_at_5_max value: 38.89822406895274 - type: nauc_recall_at_5_std value: 21.051016860426518 - type: ndcg_at_1 value: 40.782000000000004 - type: ndcg_at_10 value: 38.916000000000004 - type: ndcg_at_100 value: 46.146 - type: ndcg_at_1000 value: 49.107 - type: ndcg_at_20 value: 41.888999999999996 - type: ndcg_at_3 value: 32.963 - type: ndcg_at_5 value: 34.872 - type: precision_at_1 value: 40.782000000000004 - type: precision_at_10 value: 11.87 - type: precision_at_100 value: 1.967 - type: precision_at_1000 value: 0.252 - type: precision_at_20 value: 7.234999999999999 - type: precision_at_3 value: 24.343 - type: precision_at_5 value: 18.279999999999998 - type: recall_at_1 value: 17.651 - type: recall_at_10 value: 44.321 - type: 
recall_at_100 value: 68.74 - type: recall_at_1000 value: 85.052 - type: recall_at_20 value: 52.693999999999996 - type: recall_at_3 value: 29.206 - type: recall_at_5 value: 35.363 - task: type: Retrieval dataset: name: MTEB CmedqaRetrieval (default) type: C-MTEB/CmedqaRetrieval config: default split: test revision: cd540c506dae1cf9e9a59c3e06f42030d54e7301 metrics: - type: main_score value: 39.815 - type: map_at_1 value: 22.969 - type: map_at_10 value: 33.717000000000006 - type: map_at_100 value: 35.527 - type: map_at_1000 value: 35.669000000000004 - type: map_at_20 value: 34.701 - type: map_at_3 value: 30.145 - type: map_at_5 value: 32.101 - type: mrr_at_1 value: 35.408852213053265 - type: mrr_at_10 value: 42.79636377348293 - type: mrr_at_100 value: 43.799505469431466 - type: mrr_at_1000 value: 43.86305911334322 - type: mrr_at_20 value: 43.37854533926947 - type: mrr_at_3 value: 40.49762440610144 - type: mrr_at_5 value: 41.77294323580891 - type: nauc_map_at_1000_diff1 value: 48.1112957243525 - type: nauc_map_at_1000_max value: 45.10562447159609 - type: nauc_map_at_1000_std value: -3.612447606339434 - type: nauc_map_at_100_diff1 value: 48.070912563729344 - type: nauc_map_at_100_max value: 45.07885363880964 - type: nauc_map_at_100_std value: -3.632303318030044 - type: nauc_map_at_10_diff1 value: 48.018021707958944 - type: nauc_map_at_10_max value: 44.065132802797564 - type: nauc_map_at_10_std value: -4.847442985423416 - type: nauc_map_at_1_diff1 value: 52.57342490380559 - type: nauc_map_at_1_max value: 36.33892790190369 - type: nauc_map_at_1_std value: -7.554830062634192 - type: nauc_map_at_20_diff1 value: 48.000558518340675 - type: nauc_map_at_20_max value: 44.67149150982179 - type: nauc_map_at_20_std value: -4.281837169639415 - type: nauc_map_at_3_diff1 value: 48.907586769875444 - type: nauc_map_at_3_max value: 41.84368493765325 - type: nauc_map_at_3_std value: -6.132808996468529 - type: nauc_map_at_5_diff1 value: 48.41430811303262 - type: nauc_map_at_5_max value: 
43.275175700700395 - type: nauc_map_at_5_std value: -5.606090404923237 - type: nauc_mrr_at_1000_diff1 value: 54.23928137454549 - type: nauc_mrr_at_1000_max value: 51.52141320015717 - type: nauc_mrr_at_1000_std value: 0.45535832049480274 - type: nauc_mrr_at_100_diff1 value: 54.21443316125873 - type: nauc_mrr_at_100_max value: 51.51643391109282 - type: nauc_mrr_at_100_std value: 0.4736988328304099 - type: nauc_mrr_at_10_diff1 value: 54.21743294084247 - type: nauc_mrr_at_10_max value: 51.30368757017215 - type: nauc_mrr_at_10_std value: 0.15181063835569913 - type: nauc_mrr_at_1_diff1 value: 58.86488915793501 - type: nauc_mrr_at_1_max value: 52.31620000332108 - type: nauc_mrr_at_1_std value: -1.2963965803823345 - type: nauc_mrr_at_20_diff1 value: 54.17613850147242 - type: nauc_mrr_at_20_max value: 51.44963601888931 - type: nauc_mrr_at_20_std value: 0.33917399702518963 - type: nauc_mrr_at_3_diff1 value: 55.31466096640199 - type: nauc_mrr_at_3_max value: 52.070134962817136 - type: nauc_mrr_at_3_std value: -0.08530830198238608 - type: nauc_mrr_at_5_diff1 value: 54.57731938989671 - type: nauc_mrr_at_5_max value: 51.64739472086174 - type: nauc_mrr_at_5_std value: 0.18268948774575638 - type: nauc_ndcg_at_1000_diff1 value: 48.29500205761512 - type: nauc_ndcg_at_1000_max value: 47.88171483119307 - type: nauc_ndcg_at_1000_std value: 0.21403741411733387 - type: nauc_ndcg_at_100_diff1 value: 47.37366257013038 - type: nauc_ndcg_at_100_max value: 47.76387826206963 - type: nauc_ndcg_at_100_std value: 0.826427463545558 - type: nauc_ndcg_at_10_diff1 value: 47.34506821207949 - type: nauc_ndcg_at_10_max value: 45.488029750609286 - type: nauc_ndcg_at_10_std value: -2.944846404404074 - type: nauc_ndcg_at_1_diff1 value: 58.86488915793501 - type: nauc_ndcg_at_1_max value: 52.31620000332108 - type: nauc_ndcg_at_1_std value: -1.2963965803823345 - type: nauc_ndcg_at_20_diff1 value: 47.12752770930654 - type: nauc_ndcg_at_20_max value: 46.47247388716809 - type: nauc_ndcg_at_20_std value: 
-1.7736602031427529 - type: nauc_ndcg_at_3_diff1 value: 49.34262730364437 - type: nauc_ndcg_at_3_max value: 47.22347634095395 - type: nauc_ndcg_at_3_std value: -2.563363733347789 - type: nauc_ndcg_at_5_diff1 value: 48.284555734671144 - type: nauc_ndcg_at_5_max value: 46.07891305494883 - type: nauc_ndcg_at_5_std value: -3.107232535187627 - type: nauc_precision_at_1000_diff1 value: 5.125101705233774 - type: nauc_precision_at_1000_max value: 26.065307083522 - type: nauc_precision_at_1000_std value: 20.610223746634322 - type: nauc_precision_at_100_diff1 value: 11.866045880453454 - type: nauc_precision_at_100_max value: 36.42189620035723 - type: nauc_precision_at_100_std value: 22.521956326496763 - type: nauc_precision_at_10_diff1 value: 26.505548355872428 - type: nauc_precision_at_10_max value: 47.29524117494792 - type: nauc_precision_at_10_std value: 10.1116614235421 - type: nauc_precision_at_1_diff1 value: 58.86488915793501 - type: nauc_precision_at_1_max value: 52.31620000332108 - type: nauc_precision_at_1_std value: -1.2963965803823345 - type: nauc_precision_at_20_diff1 value: 20.886976295880487 - type: nauc_precision_at_20_max value: 44.30883416965209 - type: nauc_precision_at_20_std value: 14.011145517217743 - type: nauc_precision_at_3_diff1 value: 38.031908830120805 - type: nauc_precision_at_3_max value: 51.6114119909547 - type: nauc_precision_at_3_std value: 4.32752822701211 - type: nauc_precision_at_5_diff1 value: 32.521121686482275 - type: nauc_precision_at_5_max value: 50.65631029971074 - type: nauc_precision_at_5_std value: 6.649966273827001 - type: nauc_recall_at_1000_diff1 value: 22.845413165121183 - type: nauc_recall_at_1000_max value: 48.373939794348146 - type: nauc_recall_at_1000_std value: 40.974710828793064 - type: nauc_recall_at_100_diff1 value: 28.38725602654593 - type: nauc_recall_at_100_max value: 42.48910788250242 - type: nauc_recall_at_100_std value: 16.35920233861213 - type: nauc_recall_at_10_diff1 value: 35.870608122691 - type: 
nauc_recall_at_10_max value: 37.03672822253722 - type: nauc_recall_at_10_std value: -3.0810688213417867 - type: nauc_recall_at_1_diff1 value: 52.57342490380559 - type: nauc_recall_at_1_max value: 36.33892790190369 - type: nauc_recall_at_1_std value: -7.554830062634192 - type: nauc_recall_at_20_diff1 value: 33.42775765293211 - type: nauc_recall_at_20_max value: 38.55476461511651 - type: nauc_recall_at_20_std value: 0.4517674601589859 - type: nauc_recall_at_3_diff1 value: 43.48789481619157 - type: nauc_recall_at_3_max value: 39.17833917043277 - type: nauc_recall_at_3_std value: -5.279192048237245 - type: nauc_recall_at_5_diff1 value: 39.99694394881568 - type: nauc_recall_at_5_max value: 38.8498445921524 - type: nauc_recall_at_5_std value: -4.480536508614665 - type: ndcg_at_1 value: 35.409 - type: ndcg_at_10 value: 39.815 - type: ndcg_at_100 value: 47.034 - type: ndcg_at_1000 value: 49.697 - type: ndcg_at_20 value: 42.565 - type: ndcg_at_3 value: 35.249 - type: ndcg_at_5 value: 37.074 - type: precision_at_1 value: 35.409 - type: precision_at_10 value: 8.85 - type: precision_at_100 value: 1.469 - type: precision_at_1000 value: 0.18 - type: precision_at_20 value: 5.335 - type: precision_at_3 value: 19.947 - type: precision_at_5 value: 14.404 - type: recall_at_1 value: 22.969 - type: recall_at_10 value: 48.884 - type: recall_at_100 value: 78.777 - type: recall_at_1000 value: 96.914 - type: recall_at_20 value: 58.208000000000006 - type: recall_at_3 value: 34.929 - type: recall_at_5 value: 40.772000000000006 - task: type: PairClassification dataset: name: MTEB Cmnli (default) type: C-MTEB/CMNLI config: default split: validation revision: 41bc36f332156f7adc9e38f53777c959b2ae9766 metrics: - type: cosine_accuracy value: 63.042693926638606 - type: cosine_accuracy_threshold value: 76.46265029907227 - type: cosine_ap value: 69.87113533991577 - type: cosine_f1 value: 68.94233255753814 - type: cosine_f1_threshold value: 56.28046989440918 - type: cosine_precision value: 
54.60131075914801
    - type: cosine_recall
      value: 93.50011690437222
    - type: dot_accuracy
      value: 63.042693926638606
    - type: dot_accuracy_threshold
      value: 76.46265029907227
    - type: dot_ap
      value: 69.8933256876927
    - type: dot_f1
      value: 68.94233255753814
    - type: dot_f1_threshold
      value: 56.28047585487366
    - type: dot_precision
      value: 54.60131075914801
    - type: dot_recall
      value: 93.50011690437222
    - type: euclidean_accuracy
      value: 63.042693926638606
    - type: euclidean_accuracy_threshold
      value: 68.61100196838379
    - type: euclidean_ap
      value: 69.87119389082798
    - type: euclidean_f1
      value: 68.94233255753814
    - type: euclidean_f1_threshold
      value: 93.50885152816772
    - type: euclidean_precision
      value: 54.60131075914801
    - type: euclidean_recall
      value: 93.50011690437222
    - type: main_score
      value: 63.042693926638606
    - type: manhattan_accuracy
      value: 62.657847263980756
    - type: manhattan_accuracy_threshold
      value: 1850.9063720703125
    - type: manhattan_ap
      value: 69.61681898409992
    - type: manhattan_f1
      value: 68.82984159427696
    - type: manhattan_f1_threshold
      value: 2378.0029296875
    - type: manhattan_precision
      value: 54.13261888814468
    - type: manhattan_recall
      value: 94.4821136310498
    - type: max_accuracy
      value: 63.042693926638606
    - type: max_ap
      value: 69.8933256876927
    - type: max_f1
      value: 68.94233255753814
    - type: max_precision
      value: 54.60131075914801
    - type: max_recall
      value: 94.4821136310498
    - type: similarity_accuracy
      value: 63.042693926638606
    - type: similarity_accuracy_threshold
      value: 76.46265029907227
    - type: similarity_ap
      value: 69.87113533991577
    - type: similarity_f1
      value: 68.94233255753814
    - type: similarity_f1_threshold
      value: 56.28046989440918
    - type: similarity_precision
      value: 54.60131075914801
    - type: similarity_recall
      value: 93.50011690437222
  - task:
      type: Retrieval
    dataset:
      name: MTEB CovidRetrieval (default)
      type: C-MTEB/CovidRetrieval
      config: default
      split: test
      revision: 1271c7809071a13532e05f25fb53511ffce77117
    metrics:
    - type: main_score
      value: 84.379
    - type: map_at_1
      value: 73.604
    - type: map_at_10
      value: 81.03099999999999
    - type: map_at_100
      value: 81.274
    - type: map_at_1000
      value: 81.279
    - type: map_at_20
      value: 81.24
    - type: map_at_3
      value: 79.47
    - type: map_at_5
      value: 80.351
    - type: mrr_at_1
      value: 73.76185458377239
    - type: mrr_at_10
      value: 81.04311974174986
    - type: mrr_at_100
      value: 81.28215454043445
    - type: mrr_at_1000
      value: 81.28770523087694
    - type: mrr_at_20
      value: 81.24925687445058
    - type: mrr_at_3
      value: 79.53986652616791
    - type: mrr_at_5
      value: 80.4039339655778
    - type: nauc_map_at_1000_diff1
      value: 85.46427375215013
    - type: nauc_map_at_1000_max
      value: 40.21490219185455
    - type: nauc_map_at_1000_std
      value: -48.96793139327797
    - type: nauc_map_at_100_diff1
      value: 85.4644446768987
    - type: nauc_map_at_100_max
      value: 40.23035336715615
    - type: nauc_map_at_100_std
      value: -48.95308048383819
    - type: nauc_map_at_10_diff1
      value: 85.26224356683778
    - type: nauc_map_at_10_max
      value: 40.25661447376166
    - type: nauc_map_at_10_std
      value: -49.29368521251329
    - type: nauc_map_at_1_diff1
      value: 88.21629957013796
    - type: nauc_map_at_1_max
      value: 37.83080773532884
    - type: nauc_map_at_1_std
      value: -46.99042808069899
    - type: nauc_map_at_20_diff1
      value: 85.4572363555803
    - type: nauc_map_at_20_max
      value: 40.30192541144375
    - type: nauc_map_at_20_std
      value: -48.91376074777295
    - type: nauc_map_at_3_diff1
      value: 85.28229181056648
    - type: nauc_map_at_3_max
      value: 39.60815875036543
    - type: nauc_map_at_3_std
      value: -50.63770740326208
    - type: nauc_map_at_5_diff1
      value: 85.09928505788696
    - type: nauc_map_at_5_max
      value: 39.896858679730634
    - type: nauc_map_at_5_std
      value: -49.88022568110867
    - type: nauc_mrr_at_1000_diff1
      value: 85.45890746495375
    - type: nauc_mrr_at_1000_max
      value: 40.05596978582016
    - type: nauc_mrr_at_1000_std
      value: -48.90941955475384
    - type: nauc_mrr_at_100_diff1
      value: 85.45908015824078
    - type: nauc_mrr_at_100_max
      value: 40.07147434757591
    - type: nauc_mrr_at_100_std
      value: -48.89458630706267
    - type: nauc_mrr_at_10_diff1
      value: 85.25764726917897
    - type: nauc_mrr_at_10_max
      value: 40.099603315928206
    - type: nauc_mrr_at_10_std
      value: -49.22980069856444
    - type: nauc_mrr_at_1_diff1
      value: 88.06183536385691
    - type: nauc_mrr_at_1_max
      value: 37.83536214932872
    - type: nauc_mrr_at_1_std
      value: -46.5713429052071
    - type: nauc_mrr_at_20_diff1
      value: 85.45090467444484
    - type: nauc_mrr_at_20_max
      value: 40.13935028363459
    - type: nauc_mrr_at_20_std
      value: -48.86537813167268
    - type: nauc_mrr_at_3_diff1
      value: 85.24167310516863
    - type: nauc_mrr_at_3_max
      value: 39.682497832837186
    - type: nauc_mrr_at_3_std
      value: -50.3559548925
    - type: nauc_mrr_at_5_diff1
      value: 85.09268565431421
    - type: nauc_mrr_at_5_max
      value: 39.89031371475337
    - type: nauc_mrr_at_5_std
      value: -49.655551830291884
    - type: nauc_ndcg_at_1000_diff1
      value: 85.12411679630183
    - type: nauc_ndcg_at_1000_max
      value: 40.899838982860544
    - type: nauc_ndcg_at_1000_std
      value: -48.61715011026588
    - type: nauc_ndcg_at_100_diff1
      value: 85.12154652091637
    - type: nauc_ndcg_at_100_max
      value: 41.32723015786323
    - type: nauc_ndcg_at_100_std
      value: -48.16002090072688
    - type: nauc_ndcg_at_10_diff1
      value: 84.25132198886159
    - type: nauc_ndcg_at_10_max
      value: 41.78100587782578
    - type: nauc_ndcg_at_10_std
      value: -49.16607901207903
    - type: nauc_ndcg_at_1_diff1
      value: 88.06183536385691
    - type: nauc_ndcg_at_1_max
      value: 37.83536214932872
    - type: nauc_ndcg_at_1_std
      value: -46.5713429052071
    - type: nauc_ndcg_at_20_diff1
      value: 85.0475628940421
    - type: nauc_ndcg_at_20_max
      value: 41.96174817137773
    - type: nauc_ndcg_at_20_std
      value: -47.58844892697574
    - type: nauc_ndcg_at_3_diff1
      value: 84.27960098412159
    - type: nauc_ndcg_at_3_max
      value: 40.33786907741922
    - type: nauc_ndcg_at_3_std
      value: -52.004340720165864
    - type: nauc_ndcg_at_5_diff1
      value: 83.84602758477916
    - type: nauc_ndcg_at_5_max
      value: 40.85719695462724
    - type: nauc_ndcg_at_5_std
      value: -50.58889323761097
    - type: nauc_precision_at_1000_diff1
      value: -48.76476929204374
    - type: nauc_precision_at_1000_max
      value: -0.1666337888458482
    - type: nauc_precision_at_1000_std
      value: 53.81780659847315
    - type: nauc_precision_at_100_diff1
      value: -3.2951420875662727
    - type: nauc_precision_at_100_max
      value: 29.2186017772724
    - type: nauc_precision_at_100_std
      value: 35.55529557180356
    - type: nauc_precision_at_10_diff1
      value: 48.4879108305862
    - type: nauc_precision_at_10_max
      value: 46.36995270449671
    - type: nauc_precision_at_10_std
      value: -22.72318400911746
    - type: nauc_precision_at_1_diff1
      value: 88.06183536385691
    - type: nauc_precision_at_1_max
      value: 37.83536214932872
    - type: nauc_precision_at_1_std
      value: -46.5713429052071
    - type: nauc_precision_at_20_diff1
      value: 29.69237300240173
    - type: nauc_precision_at_20_max
      value: 48.71484019554503
    - type: nauc_precision_at_20_std
      value: 20.65240722122367
    - type: nauc_precision_at_3_diff1
      value: 75.05731813510343
    - type: nauc_precision_at_3_max
      value: 41.16734979850893
    - type: nauc_precision_at_3_std
      value: -52.92424557581844
    - type: nauc_precision_at_5_diff1
      value: 66.39813557698707
    - type: nauc_precision_at_5_max
      value: 42.52370016987382
    - type: nauc_precision_at_5_std
      value: -44.1213251901674
    - type: nauc_recall_at_1000_diff1
      value: .nan
    - type: nauc_recall_at_1000_max
      value: .nan
    - type: nauc_recall_at_1000_std
      value: .nan
    - type: nauc_recall_at_100_diff1
      value: 84.51287278881901
    - type: nauc_recall_at_100_max
      value: 85.96547568790173
    - type: nauc_recall_at_100_std
      value: 0.28791075910543246
    - type: nauc_recall_at_10_diff1
      value: 75.47811216865622
    - type: nauc_recall_at_10_max
      value: 57.077070588344036
    - type: nauc_recall_at_10_std
      value: -46.27338964199063
    - type: nauc_recall_at_1_diff1
      value: 88.21629957013796
    - type: nauc_recall_at_1_max
      value: 37.83080773532884
    - type: nauc_recall_at_1_std
      value: -46.99042808069899
    - type: nauc_recall_at_20_diff1
      value: 82.38786919086273
    - type: nauc_recall_at_20_max
      value: 77.62368007507331
    - type: nauc_recall_at_20_std
      value: -10.317865962550622
    - type: nauc_recall_at_3_diff1
      value: 80.3430059663806
    - type: nauc_recall_at_3_max
      value: 43.14918120883788
    - type: nauc_recall_at_3_std
      value: -57.95792751840148
    - type: nauc_recall_at_5_diff1
      value: 77.33264062978374
    - type: nauc_recall_at_5_max
      value: 45.54667757514487
    - type: nauc_recall_at_5_std
      value: -54.44876978926033
    - type: ndcg_at_1
      value: 73.762
    - type: ndcg_at_10
      value: 84.379
    - type: ndcg_at_100
      value: 85.383
    - type: ndcg_at_1000
      value: 85.508
    - type: ndcg_at_20
      value: 85.114
    - type: ndcg_at_3
      value: 81.255
    - type: ndcg_at_5
      value: 82.83
    - type: precision_at_1
      value: 73.762
    - type: precision_at_10
      value: 9.557
    - type: precision_at_100
      value: 1.001
    - type: precision_at_1000
      value: 0.101
    - type: precision_at_20
      value: 4.926
    - type: precision_at_3
      value: 28.908
    - type: precision_at_5
      value: 18.145
    - type: recall_at_1
      value: 73.604
    - type: recall_at_10
      value: 94.731
    - type: recall_at_100
      value: 99.05199999999999
    - type: recall_at_1000
      value: 100.0
    - type: recall_at_20
      value: 97.576
    - type: recall_at_3
      value: 86.301
    - type: recall_at_5
      value: 90.095
  - task:
      type: Retrieval
    dataset:
      name: MTEB DBPedia (default)
      type: mteb/dbpedia
      config: default
      split: test
      revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659
    metrics:
    - type: main_score
      value: 45.153
    - type: map_at_1
      value: 9.046999999999999
    - type: map_at_10
      value: 20.629
    - type: map_at_100
      value: 29.964000000000002
    - type: map_at_1000
      value: 31.912000000000003
    - type: map_at_20
      value: 24.342
    - type: map_at_3
      value: 14.399999999999999
    - type: map_at_5
      value: 16.933999999999997
    - type: mrr_at_1
      value: 70.5
    - type: mrr_at_10
      value: 77.9640873015873
    - type: mrr_at_100
      value: 78.30766270365284
    - type: mrr_at_1000
      value: 78.31022444762601
    - type: mrr_at_20
      value: 78.22993596960315
    - type: mrr_at_3
      value: 76.75
    - type: mrr_at_5
      value: 77.36250000000001
    - type: nauc_map_at_1000_diff1
      value: 28.045576396300202
    - type: nauc_map_at_1000_max
      value: 26.153471210607186
    - type: nauc_map_at_1000_std
      value: 22.391104663325024
    - type: nauc_map_at_100_diff1
      value: 27.708397480512936
    - type: nauc_map_at_100_max
      value: 23.341985202750255
    - type: nauc_map_at_100_std
      value: 19.14393027622429
    - type: nauc_map_at_10_diff1
      value: 29.604187778342343
    - type: nauc_map_at_10_max
      value: 9.607565758548718
    - type: nauc_map_at_10_std
      value: -6.812067636191434
    - type: nauc_map_at_1_diff1
      value: 33.78958074380661
    - type: nauc_map_at_1_max
      value: -4.668230310194967
    - type: nauc_map_at_1_std
      value: -23.529326539297614
    - type: nauc_map_at_20_diff1
      value: 28.67331151769839
    - type: nauc_map_at_20_max
      value: 14.895897875509375
    - type: nauc_map_at_20_std
      value: 3.1963949053883187
    - type: nauc_map_at_3_diff1
      value: 29.722385417980412
    - type: nauc_map_at_3_max
      value: -0.2811142783569912
    - type: nauc_map_at_3_std
      value: -19.299586690821332
    - type: nauc_map_at_5_diff1
      value: 28.66624240695108
    - type: nauc_map_at_5_max
      value: 3.360052191036737
    - type: nauc_map_at_5_std
      value: -14.851723430211013
    - type: nauc_mrr_at_1000_diff1
      value: 56.423221856530894
    - type: nauc_mrr_at_1000_max
      value: 60.317065501501
    - type: nauc_mrr_at_1000_std
      value: 33.698591024900175
    - type: nauc_mrr_at_100_diff1
      value: 56.425561584153606
    - type: nauc_mrr_at_100_max
      value: 60.31984977402958
    - type: nauc_mrr_at_100_std
      value: 33.70006799711308
    - type: nauc_mrr_at_10_diff1
      value: 56.43259894878052
    - type: nauc_mrr_at_10_max
      value: 60.374499288909945
    - type: nauc_mrr_at_10_std
      value: 33.4294830409633
    - type: nauc_mrr_at_1_diff1
      value: 55.75393019283295
    - type: nauc_mrr_at_1_max
      value: 56.564175641482315
    - type: nauc_mrr_at_1_std
      value: 28.295104729019933
    - type: nauc_mrr_at_20_diff1
      value: 56.33520950522991
    - type: nauc_mrr_at_20_max
      value: 60.28399700243428
    - type: nauc_mrr_at_20_std
      value: 33.63189278260014
    - type: nauc_mrr_at_3_diff1
      value: 56.7275475900834
    - type: nauc_mrr_at_3_max
      value: 60.933993343835155
    - type: nauc_mrr_at_3_std
      value: 35.25440863470142
    - type: nauc_mrr_at_5_diff1
      value: 56.627733469260036
    - type: nauc_mrr_at_5_max
      value: 60.52601047103946
    - type: nauc_mrr_at_5_std
      value: 34.06919416028891
    - type: nauc_ndcg_at_1000_diff1
      value: 35.51891017935117
    - type: nauc_ndcg_at_1000_max
      value: 43.63290111887676
    - type: nauc_ndcg_at_1000_std
      value: 38.27645609360528
    - type: nauc_ndcg_at_100_diff1
      value: 34.95565666939815
    - type: nauc_ndcg_at_100_max
      value: 35.603879842392054
    - type: nauc_ndcg_at_100_std
      value: 29.535182565117623
    - type: nauc_ndcg_at_10_diff1
      value: 34.25164503584335
    - type: nauc_ndcg_at_10_max
      value: 36.161839357245015
    - type: nauc_ndcg_at_10_std
      value: 22.057343756689214
    - type: nauc_ndcg_at_1_diff1
      value: 46.86872053620517
    - type: nauc_ndcg_at_1_max
      value: 39.03060882424493
    - type: nauc_ndcg_at_1_std
      value: 20.21898747028476
    - type: nauc_ndcg_at_20_diff1
      value: 34.39534638745961
    - type: nauc_ndcg_at_20_max
      value: 33.42062258555372
    - type: nauc_ndcg_at_20_std
      value: 21.677461411920135
    - type: nauc_ndcg_at_3_diff1
      value: 35.54249517020183
    - type: nauc_ndcg_at_3_max
      value: 38.5502021300953
    - type: nauc_ndcg_at_3_std
      value: 20.87941638879022
    - type: nauc_ndcg_at_5_diff1
      value: 33.139218138659594
    - type: nauc_ndcg_at_5_max
      value: 37.74145771932368
    - type: nauc_ndcg_at_5_std
      value: 21.60307300259375
    - type: nauc_precision_at_1000_diff1
      value: -3.4926442688521444
    - type: nauc_precision_at_1000_max
      value: 9.33810183416714
    - type: nauc_precision_at_1000_std
      value: 9.091298908761424
    - type: nauc_precision_at_100_diff1
      value: -0.8681013692695503
    - type: nauc_precision_at_100_max
      value: 29.92488145588432
    - type: nauc_precision_at_100_std
      value: 43.243564317268365
    - type: nauc_precision_at_10_diff1
      value: 8.354685886799782
    - type: nauc_precision_at_10_max
      value: 40.88350345790237
    - type: nauc_precision_at_10_std
      value: 43.53467360875934
    - type: nauc_precision_at_1_diff1
      value: 55.75393019283295
    - type: nauc_precision_at_1_max
      value: 56.564175641482315
    - type: nauc_precision_at_1_std
      value: 28.295104729019933
    - type: nauc_precision_at_20_diff1
      value: 3.7269285981427953
    - type: nauc_precision_at_20_max
      value: 36.999904619801605
    - type: nauc_precision_at_20_std
      value: 47.03245724966235
    - type: nauc_precision_at_3_diff1
      value: 19.58602295951204
    - type: nauc_precision_at_3_max
      value: 40.774756975430684
    - type: nauc_precision_at_3_std
      value: 30.313382731334386
    - type: nauc_precision_at_5_diff1
      value: 11.501462854603371
    - type: nauc_precision_at_5_max
      value: 41.11491741352496
    - type: nauc_precision_at_5_std
      value: 36.306292126509184
    - type: nauc_recall_at_1000_diff1
      value: 21.965267428294624
    - type: nauc_recall_at_1000_max
      value: 37.73121016970661
    - type: nauc_recall_at_1000_std
      value: 49.67514738459122
    - type: nauc_recall_at_100_diff1
      value: 23.36978996552894
    - type: nauc_recall_at_100_max
      value: 25.803297478273763
    - type: nauc_recall_at_100_std
      value: 28.323018838882152
    - type: nauc_recall_at_10_diff1
      value: 25.191581940489176
    - type: nauc_recall_at_10_max
      value: 5.481367733091858
    - type: nauc_recall_at_10_std
      value: -9.302647109645827
    - type: nauc_recall_at_1_diff1
      value: 33.78958074380661
    - type: nauc_recall_at_1_max
      value: -4.668230310194967
    - type: nauc_recall_at_1_std
      value: -23.529326539297614
    - type: nauc_recall_at_20_diff1
      value: 22.78382683996787
    - type: nauc_recall_at_20_max
      value: 10.59760940055021
    - type: nauc_recall_at_20_std
      value: 0.5482029877052178
    - type: nauc_recall_at_3_diff1
      value: 26.517579502576506
    - type: nauc_recall_at_3_max
      value: -1.9201875876437906
    - type: nauc_recall_at_3_std
      value: -19.530894582297815
    - type: nauc_recall_at_5_diff1
      value: 24.999441835514016
    - type: nauc_recall_at_5_max
      value: 0.5801717047366033
    - type: nauc_recall_at_5_std
      value: -16.443290167984774
    - type: ndcg_at_1
      value: 58.875
    - type: ndcg_at_10
      value: 45.153
    - type: ndcg_at_100
      value: 49.58
    - type: ndcg_at_1000
      value: 56.667
    - type: ndcg_at_20
      value: 44.497
    - type: ndcg_at_3
      value: 49.856
    - type: ndcg_at_5
      value: 47.043
    - type: precision_at_1
      value: 70.5
    - type: precision_at_10
      value: 36.65
    - type: precision_at_100
      value: 11.975
    - type: precision_at_1000
      value: 2.375
    - type: precision_at_20
      value: 28.337
    - type: precision_at_3
      value: 52.917
    - type: precision_at_5
      value: 45.25
    - type: recall_at_1
      value: 9.046999999999999
    - type: recall_at_10
      value: 26.662999999999997
    - type: recall_at_100
      value: 55.293000000000006
    - type: recall_at_1000
      value: 78.224
    - type: recall_at_20
      value: 35.278999999999996
    - type: recall_at_3
      value: 15.549
    - type: recall_at_5
      value: 19.657
  - task:
      type: Retrieval
    dataset:
      name: MTEB DuRetrieval (default)
      type: C-MTEB/DuRetrieval
      config: default
      split: test
      revision: a1a333e290fe30b10f3f56498e3a0d911a693ced
    metrics:
    - type: main_score
      value: 89.229
    - type: map_at_1
      value: 27.054000000000002
    - type: map_at_10
      value: 82.759
    - type: map_at_100
      value: 85.296
    - type: map_at_1000
      value: 85.33699999999999
    - type: map_at_20
      value: 84.75399999999999
    - type: map_at_3
      value: 57.766
    - type: map_at_5
      value: 72.907
    - type: mrr_at_1
      value: 92.15
    - type: mrr_at_10
      value: 94.4686309523809
    - type: mrr_at_100
      value: 94.534648802311
    - type: mrr_at_1000
      value: 94.53710776939944
    - type: mrr_at_20
      value: 94.52034224031897
    - type: mrr_at_3
      value: 94.24999999999997
    - type: mrr_at_5
      value: 94.37999999999997
    - type: nauc_map_at_1000_diff1
      value: -0.02796039048781859
    - type: nauc_map_at_1000_max
      value: 49.7576338418727
    - type: nauc_map_at_1000_std
      value: 23.05361731789902
    - type: nauc_map_at_100_diff1
      value: 0.008194529679268515
    - type: nauc_map_at_100_max
      value: 49.74922545472186
    - type: nauc_map_at_100_std
      value: 22.9835777977635
    - type: nauc_map_at_10_diff1
      value: 4.783611792072549
    - type: nauc_map_at_10_max
      value: 45.78791767479075
    - type: nauc_map_at_10_std
      value: 9.771217776003757
    - type: nauc_map_at_1_diff1
      value: 45.840411752765256
    - type: nauc_map_at_1_max
      value: -12.570873671746416
    - type: nauc_map_at_1_std
      value: -36.84331524265176
    - type: nauc_map_at_20_diff1
      value: 0.8823743065735505
    - type: nauc_map_at_20_max
      value: 49.26910137511861
    - type: nauc_map_at_20_std
      value: 20.3235967552421
    - type: nauc_map_at_3_diff1
      value: 27.914571236903722
    - type: nauc_map_at_3_max
      value: 4.499387162458829
    - type: nauc_map_at_3_std
      value: -30.155523730943536
    - type: nauc_map_at_5_diff1
      value: 16.7111462880928
    - type: nauc_map_at_5_max
      value: 23.207688391011054
    - type: nauc_map_at_5_std
      value: -16.489281202052332
    - type: nauc_mrr_at_1000_diff1
      value: 31.30664037709492
    - type: nauc_mrr_at_1000_max
      value: 84.27608202330632
    - type: nauc_mrr_at_1000_std
      value: 51.23558134732731
    - type: nauc_mrr_at_100_diff1
      value: 31.306386401837504
    - type: nauc_mrr_at_100_max
      value: 84.2804599579358
    - type: nauc_mrr_at_100_std
      value: 51.22749075580445
    - type: nauc_mrr_at_10_diff1
      value: 31.437829554947523
    - type: nauc_mrr_at_10_max
      value: 84.43324944772354
    - type: nauc_mrr_at_10_std
      value: 51.46000653619227
    - type: nauc_mrr_at_1_diff1
      value: 31.36273617727332
    - type: nauc_mrr_at_1_max
      value: 80.81143285339608
    - type: nauc_mrr_at_1_std
      value: 46.5075202055344
    - type: nauc_mrr_at_20_diff1
      value: 31.158185456044674
    - type: nauc_mrr_at_20_max
      value: 84.3105159105071
    - type: nauc_mrr_at_20_std
      value: 51.295225958764725
    - type: nauc_mrr_at_3_diff1
      value: 31.774042950513493
    - type: nauc_mrr_at_3_max
      value: 84.4039838157486
    - type: nauc_mrr_at_3_std
      value: 51.11063749171155
    - type: nauc_mrr_at_5_diff1
      value: 31.96778711484582
    - type: nauc_mrr_at_5_max
      value: 84.57777733473773
    - type: nauc_mrr_at_5_std
      value: 51.642536215751576
    - type: nauc_ndcg_at_1000_diff1
      value: 3.0995160786463867
    - type: nauc_ndcg_at_1000_max
      value: 60.44661643509836
    - type: nauc_ndcg_at_1000_std
      value: 36.19905274044387
    - type: nauc_ndcg_at_100_diff1
      value: 2.7767404830386506
    - type: nauc_ndcg_at_100_max
      value: 60.1441312933469
    - type: nauc_ndcg_at_100_std
      value: 36.17340932069341
    - type: nauc_ndcg_at_10_diff1
      value: 3.494723781736116
    - type: nauc_ndcg_at_10_max
      value: 55.33863484422592
    - type: nauc_ndcg_at_10_std
      value: 27.925059533697226
    - type: nauc_ndcg_at_1_diff1
      value: 31.36273617727332
    - type: nauc_ndcg_at_1_max
      value: 80.81143285339608
    - type: nauc_ndcg_at_1_std
      value: 46.5075202055344
    - type: nauc_ndcg_at_20_diff1
      value: 3.1373926565607406
    - type: nauc_ndcg_at_20_max
      value: 58.49392402871737
    - type: nauc_ndcg_at_20_std
      value: 32.094872831601
    - type: nauc_ndcg_at_3_diff1
      value: -3.5044344394018196
    - type: nauc_ndcg_at_3_max
      value: 56.21333251222252
    - type: nauc_ndcg_at_3_std
      value: 33.93829033390993
    - type: nauc_ndcg_at_5_diff1
      value: 1.658983298277881
    - type: nauc_ndcg_at_5_max
      value: 48.94373808616266
    - type: nauc_ndcg_at_5_std
      value: 23.803470422940855
    - type: nauc_precision_at_1000_diff1
      value: -32.902170215030935
    - type: nauc_precision_at_1000_max
      value: 17.300907716587215
    - type: nauc_precision_at_1000_std
      value: 52.51296253560843
    - type: nauc_precision_at_100_diff1
      value: -33.573728937434666
    - type: nauc_precision_at_100_max
      value: 19.513743085739247
    - type: nauc_precision_at_100_std
      value: 54.49616149633364
    - type: nauc_precision_at_10_diff1
      value: -35.16204430421235
    - type: nauc_precision_at_10_max
      value: 32.18559538422582
    - type: nauc_precision_at_10_std
      value: 54.167767973795286
    - type: nauc_precision_at_1_diff1
      value: 31.36273617727332
    - type: nauc_precision_at_1_max
      value: 80.81143285339608
    - type: nauc_precision_at_1_std
      value: 46.5075202055344
    - type: nauc_precision_at_20_diff1
      value: -34.2922112064245
    - type: nauc_precision_at_20_max
      value: 24.67123112050235
    - type: nauc_precision_at_20_std
      value: 55.38739984439128
    - type: nauc_precision_at_3_diff1
      value: -38.89657014112433
    - type: nauc_precision_at_3_max
      value: 51.718392836961435
    - type: nauc_precision_at_3_std
      value: 51.76733613564855
    - type: nauc_precision_at_5_diff1
      value: -38.297772070172165
    - type: nauc_precision_at_5_max
      value: 41.64917637118582
    - type: nauc_precision_at_5_std
      value: 51.161765176162476
    - type: nauc_recall_at_1000_diff1
      value: -0.17513872888091864
    - type: nauc_recall_at_1000_max
      value: 72.38708288076182
    - type: nauc_recall_at_1000_std
      value: 73.03296451601094
    - type: nauc_recall_at_100_diff1
      value: -7.289991660683619
    - type: nauc_recall_at_100_max
      value: 60.30163206236221
    - type: nauc_recall_at_100_std
      value: 52.57173609584166
    - type: nauc_recall_at_10_diff1
      value: 6.444643365227589
    - type: nauc_recall_at_10_max
      value: 44.23969322390307
    - type: nauc_recall_at_10_std
      value: 6.639619762390987
    - type: nauc_recall_at_1_diff1
      value: 45.840411752765256
    - type: nauc_recall_at_1_max
      value: -12.570873671746416
    - type: nauc_recall_at_1_std
      value: -36.84331524265176
    - type: nauc_recall_at_20_diff1
      value: -1.6465114691507572
    - type: nauc_recall_at_20_max
      value: 52.55212477208588
    - type: nauc_recall_at_20_std
      value: 29.282880316927745
    - type: nauc_recall_at_3_diff1
      value: 27.984237671618846
    - type: nauc_recall_at_3_max
      value: -0.9271310366095001
    - type: nauc_recall_at_3_std
      value: -34.16035939832247
    - type: nauc_recall_at_5_diff1
      value: 19.547943686458623
    - type: nauc_recall_at_5_max
      value: 15.20222704175238
    - type: nauc_recall_at_5_std
      value: -24.074202172178282
    - type: ndcg_at_1
      value: 92.15
    - type: ndcg_at_10
      value: 89.229
    - type: ndcg_at_100
      value: 91.515
    - type: ndcg_at_1000
      value: 91.872
    - type: ndcg_at_20
      value: 90.51
    - type: ndcg_at_3
      value: 88.765
    - type: ndcg_at_5
      value: 87.543
    - type: precision_at_1
      value: 92.15
    - type: precision_at_10
      value: 42.375
    - type: precision_at_100
      value: 4.798
    - type: precision_at_1000
      value: 0.48900000000000005
    - type: precision_at_20
      value: 22.888
    - type: precision_at_3
      value: 79.617
    - type: precision_at_5
      value: 67.02
    - type: recall_at_1
      value: 27.054000000000002
    - type: recall_at_10
      value: 89.815
    - type: recall_at_100
      value: 97.592
    - type: recall_at_1000
      value: 99.41799999999999
    - type: recall_at_20
      value: 94.293
    - type: recall_at_3
      value: 59.602
    - type: recall_at_5
      value: 76.706
  - task:
      type: Retrieval
    dataset:
      name: MTEB EcomRetrieval (default)
      type: C-MTEB/EcomRetrieval
      config: default
      split: test
      revision: 687de13dc7294d6fd9be10c6945f9e8fec8166b9
    metrics:
    - type: main_score
      value: 65.84100000000001
    - type: map_at_1
      value: 52.5
    - type: map_at_10
      value: 61.192
    - type: map_at_100
      value: 61.83
    - type: map_at_1000
      value: 61.844
    - type: map_at_20
      value: 61.617
    - type: map_at_3
      value: 58.75
    - type: map_at_5
      value: 60.245000000000005
    - type: mrr_at_1
      value: 52.5
    - type: mrr_at_10
      value: 61.192380952380944
    - type: mrr_at_100
      value: 61.829977663371
    - type: mrr_at_1000
      value: 61.84414119129694
    - type: mrr_at_20
      value: 61.6173393067201
    - type: mrr_at_3
      value: 58.74999999999999
    - type: mrr_at_5
      value: 60.24499999999999
    - type: nauc_map_at_1000_diff1
      value: 68.63381898499689
    - type: nauc_map_at_1000_max
      value: 29.906802700314362
    - type: nauc_map_at_1000_std
      value: -13.043778448362591
    - type: nauc_map_at_100_diff1
      value: 68.61922846433136
    - type: nauc_map_at_100_max
      value: 29.911914135083894
    - type: nauc_map_at_100_std
      value: -13.03583241535797
    - type: nauc_map_at_10_diff1
      value: 68.42254379324686
    - type: nauc_map_at_10_max
      value: 30.11120757393897
    - type: nauc_map_at_10_std
      value: -13.140082134227866
    - type: nauc_map_at_1_diff1
      value: 72.79827709978464
    - type: nauc_map_at_1_max
      value: 26.28489385704028
    - type: nauc_map_at_1_std
      value: -15.273100194851812
    - type: nauc_map_at_20_diff1
      value: 68.5543044735401
    - type: nauc_map_at_20_max
      value: 29.880811074859288
    - type: nauc_map_at_20_std
      value: -13.119173248800491
    - type: nauc_map_at_3_diff1
      value: 68.78227539274147
    - type: nauc_map_at_3_max
      value: 28.160898614800654
    - type: nauc_map_at_3_std
      value: -15.797291523471626
    - type: nauc_map_at_5_diff1
      value: 68.3493580349966
    - type: nauc_map_at_5_max
      value: 29.462226781090628
    - type: nauc_map_at_5_std
      value: -13.823334723010062
    - type: nauc_mrr_at_1000_diff1
      value: 68.63381898499689
    - type: nauc_mrr_at_1000_max
      value: 29.906802700314362
    - type: nauc_mrr_at_1000_std
      value: -13.043778448362591
    - type: nauc_mrr_at_100_diff1
      value: 68.61922846433136
    - type: nauc_mrr_at_100_max
      value: 29.911914135083894
    - type: nauc_mrr_at_100_std
      value: -13.03583241535797
    - type: nauc_mrr_at_10_diff1
      value: 68.42254379324686
    - type: nauc_mrr_at_10_max
      value: 30.11120757393897
    - type: nauc_mrr_at_10_std
      value: -13.140082134227866
    - type: nauc_mrr_at_1_diff1
      value: 72.79827709978464
    - type: nauc_mrr_at_1_max
      value: 26.28489385704028
    - type: nauc_mrr_at_1_std
      value: -15.273100194851812
    - type: nauc_mrr_at_20_diff1
      value: 68.5543044735401
    - type: nauc_mrr_at_20_max
      value: 29.880811074859288
    - type: nauc_mrr_at_20_std
      value: -13.119173248800491
    - type: nauc_mrr_at_3_diff1
      value: 68.78227539274147
    - type: nauc_mrr_at_3_max
      value: 28.160898614800654
    - type: nauc_mrr_at_3_std
      value: -15.797291523471626
    - type: nauc_mrr_at_5_diff1
      value: 68.3493580349966
    - type: nauc_mrr_at_5_max
      value: 29.462226781090628
    - type: nauc_mrr_at_5_std
      value: -13.823334723010062
    - type: nauc_ndcg_at_1000_diff1
      value: 67.90196996106812
    - type: nauc_ndcg_at_1000_max
      value: 32.36822400000294
    - type: nauc_ndcg_at_1000_std
      value: -9.824494007845882
    - type: nauc_ndcg_at_100_diff1
      value: 67.54486587995649
    - type: nauc_ndcg_at_100_max
      value: 32.7718926705024
    - type: nauc_ndcg_at_100_std
      value: -9.26575359100604
    - type: nauc_ndcg_at_10_diff1
      value: 66.64847353850341
    - type: nauc_ndcg_at_10_max
      value: 33.223671665163614
    - type: nauc_ndcg_at_10_std
      value: -10.27829867720837
    - type: nauc_ndcg_at_1_diff1
      value: 72.79827709978464
    - type: nauc_ndcg_at_1_max
      value: 26.28489385704028
    - type: nauc_ndcg_at_1_std
      value: -15.273100194851812
    - type: nauc_ndcg_at_20_diff1
      value: 67.0754334299387
    - type: nauc_ndcg_at_20_max
      value: 32.456199571793995
    - type: nauc_ndcg_at_20_std
      value: -9.931114874548891
    - type: nauc_ndcg_at_3_diff1
      value: 67.44808597891617
    - type: nauc_ndcg_at_3_max
      value: 28.81312271324233
    - type: nauc_ndcg_at_3_std
      value: -15.900590447226456
    - type: nauc_ndcg_at_5_diff1
      value: 66.57742283926243
    - type: nauc_ndcg_at_5_max
      value: 31.40058618065593
    - type: nauc_ndcg_at_5_std
      value: -12.091744743636507
    - type: nauc_precision_at_1000_diff1
      value: 69.24992219109824
    - type: nauc_precision_at_1000_max
      value: 92.99097416744418
    - type: nauc_precision_at_1000_std
      value: 80.41083099906578
    - type: nauc_precision_at_100_diff1
      value: 58.40336134453783
    - type: nauc_precision_at_100_max
      value: 75.6873119618219
    - type: nauc_precision_at_100_std
      value: 53.056037229706746
    - type: nauc_precision_at_10_diff1
      value: 58.16256795502892
    - type: nauc_precision_at_10_max
      value: 49.778279054949756
    - type: nauc_precision_at_10_std
      value: 5.275571130545513
    - type: nauc_precision_at_1_diff1
      value: 72.79827709978464
    - type: nauc_precision_at_1_max
      value: 26.28489385704028
    - type: nauc_precision_at_1_std
      value: -15.273100194851812
    - type: nauc_precision_at_20_diff1
      value: 58.04381286465974
    - type: nauc_precision_at_20_max
      value: 50.42452661345172
    - type: nauc_precision_at_20_std
      value: 14.55202777678338
    - type: nauc_precision_at_3_diff1
      value: 63.10303988700744
    - type: nauc_precision_at_3_max
      value: 30.947952093016234
    - type: nauc_precision_at_3_std
      value: -16.210759642302556
    - type: nauc_precision_at_5_diff1
      value: 59.82134123823496
    - type: nauc_precision_at_5_max
      value: 39.221486959906464
    - type: nauc_precision_at_5_std
      value: -4.794642233334347
    - type: nauc_recall_at_1000_diff1
      value: 69.24992219109895
    - type: nauc_recall_at_1000_max
      value: 92.99097416744485
    - type: nauc_recall_at_1000_std
      value: 80.41083099906643
    - type: nauc_recall_at_100_diff1
      value: 58.40336134453772
    - type: nauc_recall_at_100_max
      value: 75.6873119618217
    - type: nauc_recall_at_100_std
      value: 53.056037229706696
    - type: nauc_recall_at_10_diff1
      value: 58.16256795502893
    - type: nauc_recall_at_10_max
      value: 49.77827905494972
    - type: nauc_recall_at_10_std
      value: 5.275571130545528
    - type: nauc_recall_at_1_diff1
      value: 72.79827709978464
    - type: nauc_recall_at_1_max
      value: 26.28489385704028
    - type: nauc_recall_at_1_std
      value: -15.273100194851812
    - type: nauc_recall_at_20_diff1
      value: 58.043812864659714
    - type: nauc_recall_at_20_max
      value: 50.42452661345165
    - type: nauc_recall_at_20_std
      value: 14.552027776783477
    - type: nauc_recall_at_3_diff1
      value: 63.10303988700737
    - type: nauc_recall_at_3_max
      value: 30.9479520930162
    - type: nauc_recall_at_3_std
      value: -16.21075964230267
    - type: nauc_recall_at_5_diff1
      value: 59.82134123823499
    - type: nauc_recall_at_5_max
      value: 39.221486959906535
    - type: nauc_recall_at_5_std
      value: -4.79464223333429
    - type: ndcg_at_1
      value: 52.5
    - type: ndcg_at_10
      value: 65.84100000000001
    - type: ndcg_at_100
      value: 68.738
    - type: ndcg_at_1000
      value: 69.148
    - type: ndcg_at_20
      value: 67.352
    - type: ndcg_at_3
      value: 60.839
    - type: ndcg_at_5
      value: 63.546
    - type: precision_at_1
      value: 52.5
    - type: precision_at_10
      value: 8.06
    - type: precision_at_100
      value: 0.9369999999999999
    - type: precision_at_1000
      value: 0.097
    - type: precision_at_20
      value: 4.324999999999999
    - type: precision_at_3
      value: 22.3
    - type: precision_at_5
      value: 14.7
    - type: recall_at_1
      value: 52.5
    - type: recall_at_10
      value: 80.60000000000001
    - type: recall_at_100
      value: 93.7
    - type: recall_at_1000
      value: 97.0
    - type: recall_at_20
      value: 86.5
    - type: recall_at_3
      value: 66.9
    - type: recall_at_5
      value: 73.5
  - task:
      type: Classification
    dataset:
      name: MTEB EmotionClassification (default)
      type: mteb/emotion
      config: default
      split: test
      revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
    metrics:
    - type: accuracy
      value: 52.195
    - type: f1
      value: 46.11956692776424
    - type: f1_weighted
      value: 53.928609352293456
    - type: main_score
      value: 52.195
  - task:
      type: Retrieval
    dataset:
      name: MTEB FEVER (default)
      type: mteb/fever
      config: default
      split: test
      revision: bea83ef9e8fb933d90a2f1d5515737465d613e12
    metrics:
    - type: main_score
      value: 88.48700000000001
    - type: map_at_1
      value: 76.75699999999999
    - type: map_at_10
      value: 85.026
    - type: map_at_100
      value: 85.222
    - type: map_at_1000
      value: 85.233
    - type: map_at_20
      value: 85.153
    - type: map_at_3
      value: 83.995
    - type: map_at_5
      value: 84.72
    - type: mrr_at_1
      value: 82.74827482748275
    - type: mrr_at_10
      value: 89.36737721391175
    - type: mrr_at_100
      value: 89.42281791031236
    - type: mrr_at_1000
      value: 89.42338403651277
    - type: mrr_at_20
      value: 89.41000444854201
    - type: mrr_at_3
      value: 88.77887788778868
    - type: mrr_at_5
      value: 89.22892289228906
    - type: nauc_map_at_1000_diff1
      value: 54.25105398911303
    - type: nauc_map_at_1000_max
      value: 23.429223523468487
    - type: nauc_map_at_1000_std
      value: -1.7334687095777817
    - type: nauc_map_at_100_diff1
      value: 54.21667352719557
    - type: nauc_map_at_100_max
      value: 23.41584289756298
    - type: nauc_map_at_100_std
      value: -1.7268147416914834
    - type: nauc_map_at_10_diff1
      value: 53.891546320220954
    - type: nauc_map_at_10_max
      value: 23.331511148674064
    - type: nauc_map_at_10_std
      value: -1.7196488084463741
    - type: nauc_map_at_1_diff1
      value: 58.25983460638938
    - type: nauc_map_at_1_max
      value: 16.201489550987855
    - type: nauc_map_at_1_std
      value: -6.62306733738586
    - type: nauc_map_at_20_diff1
      value: 54.09715298373297
    - type: nauc_map_at_20_max
      value: 23.404623777099566
    - type: nauc_map_at_20_std
      value: -1.69489694977684
    - type: nauc_map_at_3_diff1
      value: 53.99844560192416
    - type: nauc_map_at_3_max
      value: 23.394893454546985
    - type: nauc_map_at_3_std
      value: -2.5898966262447085
    - type: nauc_map_at_5_diff1
      value: 53.71863187650913
    - type: nauc_map_at_5_max
      value: 23.301072622171013
    - type: nauc_map_at_5_std
      value: -2.0469972205599007
    - type: nauc_mrr_at_1000_diff1
      value: 71.17515639341956
    - type: nauc_mrr_at_1000_max
      value: 25.0708046486158
    - type: nauc_mrr_at_1000_std
      value: -9.416121374883016
    - type: nauc_mrr_at_100_diff1
      value: 71.1753164464445
    - type: nauc_mrr_at_100_max
      value: 25.07419346909715
    - type: nauc_mrr_at_100_std
      value: -9.412863113733295
    - type: nauc_mrr_at_10_diff1
      value: 71.09405552697164
    - type: nauc_mrr_at_10_max
      value: 25.26590564954804
    - type: nauc_mrr_at_10_std
      value: -9.298704438200227
    - type: nauc_mrr_at_1_diff1
      value: 72.03214906017645
    - type: nauc_mrr_at_1_max
      value: 19.686864438615697
    - type: nauc_mrr_at_1_std
      value: -11.46718406152579
    - type: nauc_mrr_at_20_diff1
      value: 71.16991459665647
    - type: nauc_mrr_at_20_max
      value: 25.145335855346197
    - type: nauc_mrr_at_20_std
      value: -9.37386189687834
    - type: nauc_mrr_at_3_diff1
      value: 70.94963146032211
    - type: nauc_mrr_at_3_max
      value: 26.231208166551717
    - type: nauc_mrr_at_3_std
      value: -9.356935646618718
    - type: nauc_mrr_at_5_diff1
      value: 71.19526205241235
    - type: nauc_mrr_at_5_max
      value: 25.695147330404268
    - type: nauc_mrr_at_5_std
      value: -9.3075800123897
    - type: nauc_ndcg_at_1000_diff1
      value: 56.06544294033199
    - type: nauc_ndcg_at_1000_max
      value: 25.610054392303667
    - type: nauc_ndcg_at_1000_std
      value: -0.6867283406842567
    - type: nauc_ndcg_at_100_diff1
      value: 55.25185300646874
    - type: nauc_ndcg_at_100_max
      value: 25.399170365704
    - type: nauc_ndcg_at_100_std
      value: -0.46547332840855044
    - type: nauc_ndcg_at_10_diff1
      value: 54.016224221161245
    - type: nauc_ndcg_at_10_max
      value: 25.442317454780277
    - type: nauc_ndcg_at_10_std
      value: -0.28391008237610216
    - type: nauc_ndcg_at_1_diff1
      value: 72.03214906017645
    - type: nauc_ndcg_at_1_max
      value: 19.686864438615697
    - type: nauc_ndcg_at_1_std
      value: -11.46718406152579
    - type: nauc_ndcg_at_20_diff1
      value: 54.658404506399464
    - type: nauc_ndcg_at_20_max
      value: 25.495663741198126
    - type: nauc_ndcg_at_20_std
      value: -0.26800797289758815
    - type: nauc_ndcg_at_3_diff1
      value: 55.5763557237763
    - type: nauc_ndcg_at_3_max
      value: 26.4988763664988
    - type: nauc_ndcg_at_3_std
      value: -2.3981007097238343
    - type: nauc_ndcg_at_5_diff1
      value: 54.27240486490372
    - type: nauc_ndcg_at_5_max
      value: 25.82259059583224
    - type: nauc_ndcg_at_5_std
      value: -1.1890812042559784
    - type: nauc_precision_at_1000_diff1
      value: -7.60072118353888
    - type: nauc_precision_at_1000_max
      value: 4.620559244156039
    - type: nauc_precision_at_1000_std
      value: 3.5812750588401463
    - type: nauc_precision_at_100_diff1
      value: -9.45455804679522
    - type: nauc_precision_at_100_max
      value: 6.631695936980273
    - type: nauc_precision_at_100_std
      value: 6.451478574268801
    - type: nauc_precision_at_10_diff1
      value: -6.726955843425629
    - type: nauc_precision_at_10_max
      value: 14.546007414428736
    - type: nauc_precision_at_10_std
      value: 10.825230172002183
    - type: nauc_precision_at_1_diff1
      value: 72.03214906017645
    - type: nauc_precision_at_1_max
      value: 19.686864438615697
    - type: nauc_precision_at_1_std
      value: -11.46718406152579
    - type: nauc_precision_at_20_diff1
      value: -8.734846281871329
    - type: nauc_precision_at_20_max
      value: 10.730418617259506
    - type: nauc_precision_at_20_std
      value: 8.801245191066164
    - type: nauc_precision_at_3_diff1
      value: 17.69525577378896
    - type: nauc_precision_at_3_max
      value: 29.697514372659484
    - type: nauc_precision_at_3_std
      value: 6.020200289148097
    - type: nauc_precision_at_5_diff1
      value: 0.24533250177984312
    - type: nauc_precision_at_5_max
      value: 20.531345723824952
    - type: nauc_precision_at_5_std
      value: 9.158699123344162
    - type: nauc_recall_at_1000_diff1
      value: 10.77222496473275
    - type: nauc_recall_at_1000_max
      value: 50.734546876280554
    - type: nauc_recall_at_1000_std
      value: 55.206779538792986
    - type: nauc_recall_at_100_diff1
      value: 10.839466570515647
    - type: nauc_recall_at_100_max
      value: 36.14527764646136
    - type: nauc_recall_at_100_std
      value: 36.76142693997371
    - type: nauc_recall_at_10_diff1
      value: 20.82687898494084
    - type: nauc_recall_at_10_max
      value: 32.686402220501726
    - type: nauc_recall_at_10_std
      value: 21.652679688979624
    - type: nauc_recall_at_1_diff1
      value: 58.25983460638938
    - type: nauc_recall_at_1_max
      value: 16.201489550987855
    - type: nauc_recall_at_1_std
      value: -6.62306733738586
    - type: nauc_recall_at_20_diff1
      value: 18.19772056092292
    - type: nauc_recall_at_20_max
      value: 34.47222550318253
    - type: nauc_recall_at_20_std
      value: 27.38829232665364
    - type: nauc_recall_at_3_diff1
      value: 37.124181479070806
    - type: nauc_recall_at_3_max
      value: 32.43172426208055
    - type: nauc_recall_at_3_std
      value: 6.335659080755107
    - type: nauc_recall_at_5_diff1
      value: 28.13275823420512
    - type: nauc_recall_at_5_max
      value: 32.34074997818357
    - type: nauc_recall_at_5_std
      value: 12.824487132524897
    - type: ndcg_at_1
      value: 82.748
    - type: ndcg_at_10
      value: 88.48700000000001
    - type: ndcg_at_100
      value: 89.121
    - type: ndcg_at_1000
      value: 89.31700000000001
    - type: ndcg_at_20
      value: 88.809
    - type: ndcg_at_3
      value: 87.01299999999999
    - type: ndcg_at_5
      value: 87.96300000000001
    - type: precision_at_1
      value: 82.748
    - type: precision_at_10
      value: 10.546
    - type: precision_at_100
      value: 1.107
    - type: precision_at_1000
      value: 0.11399999999999999
    - type: precision_at_20
      value: 5.379
    - type: precision_at_3
      value: 33.173
    - type: precision_at_5
      value: 20.588
    - type: recall_at_1
      value: 76.75699999999999
    - type: recall_at_10
      value: 94.796
    - type: recall_at_100
      value: 97.174
    - type: recall_at_1000
      value: 98.349
    - type: recall_at_20
      value: 95.86
    - type: recall_at_3
      value: 90.814
    - type: recall_at_5
      value: 93.235
  - task:
      type: Retrieval
    dataset:
      name: MTEB FiQA2018 (default)
      type: mteb/fiqa
      config: default
      split: test
      revision: 27a168819829fe9bcd655c2df245fb19452e8e06
    metrics:
    - type: main_score
      value: 47.035
    - type: map_at_1
      value: 24.295
    - type: map_at_10
      value: 39.029
    - type: map_at_100
      value: 41.016999999999996
    - type: map_at_1000
      value: 41.182
    - type: map_at_20
      value: 40.182
    - type: map_at_3
      value: 34.128
    - type: map_at_5
      value: 36.771
    - type: mrr_at_1
      value: 47.0679012345679
    - type: mrr_at_10
      value: 55.334852047815005
    - type: mrr_at_100
      value: 56.06915046612819
    - type: mrr_at_1000
      value: 56.10322909085006
    - type: mrr_at_20
      value: 55.82851614255695
    - type: mrr_at_3
      value: 52.98353909465019
    - type: mrr_at_5
      value: 54.21039094650203
    - type: nauc_map_at_1000_diff1
      value: 43.511658309461076
    - type: nauc_map_at_1000_max
      value: 33.909990774712945
    - type: nauc_map_at_1000_std
      value: -2.204768114675481
    - type: nauc_map_at_100_diff1
      value: 43.47352725497821
    - type: nauc_map_at_100_max
      value: 33.831208204956994
    - type: nauc_map_at_100_std
      value: -2.2426244577314565
    - type: nauc_map_at_10_diff1
      value: 43.44192893607367
    - type: nauc_map_at_10_max
      value: 32.46143380861397
    - type: nauc_map_at_10_std
      value: -3.424608611118555
    - type: nauc_map_at_1_diff1
      value: 48.36982230823535
    - type: nauc_map_at_1_max
      value: 20.538023574672817
    - type: nauc_map_at_1_std
      value: -6.313140012964799
    - type: nauc_map_at_20_diff1
      value: 43.3916601464767
    - type: nauc_map_at_20_max
      value: 33.1621245847151
    - type: nauc_map_at_20_std
      value: -2.4792616401386303
    - type: nauc_map_at_3_diff1
      value: 44.39544319440273
    - type: nauc_map_at_3_max
      value: 28.173138602900078
    - type: nauc_map_at_3_std
      value: -4.827558609939407
    - type: nauc_map_at_5_diff1
      value: 43.74632166484276
    - type: nauc_map_at_5_max
      value: 30.962682241438205
    - type: nauc_map_at_5_std
      value: -4.0383602707482975
    - type: nauc_mrr_at_1000_diff1
      value: 49.72736651560055
    - type: nauc_mrr_at_1000_max
      value: 43.775987123828216
    - type: nauc_mrr_at_1000_std
      value: 0.2605801030626549
    - type: nauc_mrr_at_100_diff1
      value: 49.698356271296944
    - type: nauc_mrr_at_100_max
      value: 43.79303269950675
    - type: nauc_mrr_at_100_std
      value: 0.27383247751044537
    - type: nauc_mrr_at_10_diff1
      value: 49.74695781871661
    - type: nauc_mrr_at_10_max
      value: 43.70095639468644
    - type: nauc_mrr_at_10_std
      value: -0.0910007265618897
    - type: nauc_mrr_at_1_diff1
      value: 52.72283694395142
    - type: nauc_mrr_at_1_max
      value: 42.44702827453944
    - type: nauc_mrr_at_1_std
      value: -2.8273823855670255
    - type: nauc_mrr_at_20_diff1
      value: 49.66790615633498
    - type: nauc_mrr_at_20_max
      value: 43.758962529366194
    - type: nauc_mrr_at_20_std
      value: 0.3426322120672393
    - type: nauc_mrr_at_3_diff1
      value: 50.24600816852405
    - type: nauc_mrr_at_3_max
      value: 44.05231137252421
    - type: nauc_mrr_at_3_std
      value: 0.3241339755957089
    - type: nauc_mrr_at_5_diff1
      value: 49.5975151012115
    - type: nauc_mrr_at_5_max
      value: 43.68322913701036
    - type: nauc_mrr_at_5_std
      value: -0.006452533848350892
    - type: nauc_ndcg_at_1000_diff1
      value: 44.375754408381894
    - type: nauc_ndcg_at_1000_max
      value: 39.06222884248439
    - type: nauc_ndcg_at_1000_std
      value: 1.2790165406784537
    - type: nauc_ndcg_at_100_diff1
      value: 43.55596660750431
    - type:
nauc_ndcg_at_100_max value: 38.58416939185971 - type: nauc_ndcg_at_100_std value: 1.3982431563388764 - type: nauc_ndcg_at_10_diff1 value: 43.42342985549579 - type: nauc_ndcg_at_10_max value: 35.39654814350948 - type: nauc_ndcg_at_10_std value: -1.9691263385874018 - type: nauc_ndcg_at_1_diff1 value: 52.72283694395142 - type: nauc_ndcg_at_1_max value: 42.44702827453944 - type: nauc_ndcg_at_1_std value: -2.8273823855670255 - type: nauc_ndcg_at_20_diff1 value: 43.18638092853598 - type: nauc_ndcg_at_20_max value: 36.12317468609796 - type: nauc_ndcg_at_20_std value: 0.25078096107927306 - type: nauc_ndcg_at_3_diff1 value: 44.586398632399366 - type: nauc_ndcg_at_3_max value: 37.89220961256707 - type: nauc_ndcg_at_3_std value: -2.448074667259283 - type: nauc_ndcg_at_5_diff1 value: 43.64088923894009 - type: nauc_ndcg_at_5_max value: 35.94499252340929 - type: nauc_ndcg_at_5_std value: -2.4540364610254857 - type: nauc_precision_at_1000_diff1 value: -1.6609012856010976 - type: nauc_precision_at_1000_max value: 30.951360889282455 - type: nauc_precision_at_1000_std value: 10.832115521132394 - type: nauc_precision_at_100_diff1 value: 3.8635753172116454 - type: nauc_precision_at_100_max value: 37.50549346606815 - type: nauc_precision_at_100_std value: 12.984264349425006 - type: nauc_precision_at_10_diff1 value: 15.096155551489035 - type: nauc_precision_at_10_max value: 41.157377147091935 - type: nauc_precision_at_10_std value: 6.541970514670327 - type: nauc_precision_at_1_diff1 value: 52.72283694395142 - type: nauc_precision_at_1_max value: 42.44702827453944 - type: nauc_precision_at_1_std value: -2.8273823855670255 - type: nauc_precision_at_20_diff1 value: 10.77837369063063 - type: nauc_precision_at_20_max value: 39.02870175375101 - type: nauc_precision_at_20_std value: 11.493998523134003 - type: nauc_precision_at_3_diff1 value: 27.719913494785082 - type: nauc_precision_at_3_max value: 42.32147757624575 - type: nauc_precision_at_3_std value: 1.675159078162856 - type: 
nauc_precision_at_5_diff1 value: 21.13559680138858 - type: nauc_precision_at_5_max value: 42.94690948385399 - type: nauc_precision_at_5_std value: 4.269082271873189 - type: nauc_recall_at_1000_diff1 value: 29.211629532377664 - type: nauc_recall_at_1000_max value: 38.27913905193411 - type: nauc_recall_at_1000_std value: 33.777853794495186 - type: nauc_recall_at_100_diff1 value: 27.540258851819743 - type: nauc_recall_at_100_max value: 34.93970481970824 - type: nauc_recall_at_100_std value: 14.696131816776942 - type: nauc_recall_at_10_diff1 value: 33.429209402623314 - type: nauc_recall_at_10_max value: 26.83870557170468 - type: nauc_recall_at_10_std value: -1.7141062811893624 - type: nauc_recall_at_1_diff1 value: 48.36982230823535 - type: nauc_recall_at_1_max value: 20.538023574672817 - type: nauc_recall_at_1_std value: -6.313140012964799 - type: nauc_recall_at_20_diff1 value: 30.122935323793588 - type: nauc_recall_at_20_max value: 26.510122532461565 - type: nauc_recall_at_20_std value: 4.836919434308895 - type: nauc_recall_at_3_diff1 value: 38.95587878059384 - type: nauc_recall_at_3_max value: 25.25220801695804 - type: nauc_recall_at_3_std value: -3.7202422156547095 - type: nauc_recall_at_5_diff1 value: 35.913508616203146 - type: nauc_recall_at_5_max value: 26.70575052525446 - type: nauc_recall_at_5_std value: -3.047557854303276 - type: ndcg_at_1 value: 47.068 - type: ndcg_at_10 value: 47.035 - type: ndcg_at_100 value: 53.72 - type: ndcg_at_1000 value: 56.35 - type: ndcg_at_20 value: 49.830999999999996 - type: ndcg_at_3 value: 43.327 - type: ndcg_at_5 value: 44.18 - type: precision_at_1 value: 47.068 - type: precision_at_10 value: 12.948 - type: precision_at_100 value: 1.992 - type: precision_at_1000 value: 0.244 - type: precision_at_20 value: 7.670000000000001 - type: precision_at_3 value: 28.601 - type: precision_at_5 value: 20.772 - type: recall_at_1 value: 24.295 - type: recall_at_10 value: 53.681999999999995 - type: recall_at_100 value: 78.072 - type: 
recall_at_1000 value: 93.866 - type: recall_at_20 value: 62.18900000000001 - type: recall_at_3 value: 38.836 - type: recall_at_5 value: 44.779 - task: type: Retrieval dataset: name: MTEB HotpotQA (default) type: mteb/hotpotqa config: default split: test revision: ab518f4d6fcca38d87c25209f94beba119d02014 metrics: - type: main_score value: 71.324 - type: map_at_1 value: 39.73 - type: map_at_10 value: 63.046 - type: map_at_100 value: 63.885999999999996 - type: map_at_1000 value: 63.94499999999999 - type: map_at_20 value: 63.548 - type: map_at_3 value: 59.655 - type: map_at_5 value: 61.795 - type: mrr_at_1 value: 79.45982444294395 - type: mrr_at_10 value: 85.08343247269632 - type: mrr_at_100 value: 85.24432818683675 - type: mrr_at_1000 value: 85.25040798349796 - type: mrr_at_20 value: 85.18723931309329 - type: mrr_at_3 value: 84.18861129867187 - type: mrr_at_5 value: 84.75647085302693 - type: nauc_map_at_1000_diff1 value: 18.712841235300363 - type: nauc_map_at_1000_max value: 17.872061255736362 - type: nauc_map_at_1000_std value: 5.493306103578567 - type: nauc_map_at_100_diff1 value: 18.67843452691231 - type: nauc_map_at_100_max value: 17.854433817054684 - type: nauc_map_at_100_std value: 5.505106020385131 - type: nauc_map_at_10_diff1 value: 18.431763547762873 - type: nauc_map_at_10_max value: 17.57826680597676 - type: nauc_map_at_10_std value: 4.849651427482733 - type: nauc_map_at_1_diff1 value: 71.41720657694039 - type: nauc_map_at_1_max value: 40.52361010802207 - type: nauc_map_at_1_std value: -3.4966985764484835 - type: nauc_map_at_20_diff1 value: 18.6008341594555 - type: nauc_map_at_20_max value: 17.7648824369578 - type: nauc_map_at_20_std value: 5.297255330344646 - type: nauc_map_at_3_diff1 value: 19.195615691261565 - type: nauc_map_at_3_max value: 17.466626379813636 - type: nauc_map_at_3_std value: 2.6243834953938427 - type: nauc_map_at_5_diff1 value: 18.55653332525488 - type: nauc_map_at_5_max value: 17.461088221922633 - type: nauc_map_at_5_std value: 
3.853024189868032 - type: nauc_mrr_at_1000_diff1 value: 71.1366541327153 - type: nauc_mrr_at_1000_max value: 43.32142719320606 - type: nauc_mrr_at_1000_std value: -1.5540399660833892 - type: nauc_mrr_at_100_diff1 value: 71.13849060952187 - type: nauc_mrr_at_100_max value: 43.32989371868815 - type: nauc_mrr_at_100_std value: -1.545670176348025 - type: nauc_mrr_at_10_diff1 value: 71.14457402293097 - type: nauc_mrr_at_10_max value: 43.372769272903284 - type: nauc_mrr_at_10_std value: -1.5393348801875264 - type: nauc_mrr_at_1_diff1 value: 71.41720657694039 - type: nauc_mrr_at_1_max value: 40.52361010802207 - type: nauc_mrr_at_1_std value: -3.4966985764484835 - type: nauc_mrr_at_20_diff1 value: 71.15310136746406 - type: nauc_mrr_at_20_max value: 43.368918600166595 - type: nauc_mrr_at_20_std value: -1.5098290368260359 - type: nauc_mrr_at_3_diff1 value: 70.8374180772855 - type: nauc_mrr_at_3_max value: 43.496540465503756 - type: nauc_mrr_at_3_std value: -1.8531058023308264 - type: nauc_mrr_at_5_diff1 value: 71.0445313174241 - type: nauc_mrr_at_5_max value: 43.48491122151075 - type: nauc_mrr_at_5_std value: -1.7318092266342708 - type: nauc_ndcg_at_1000_diff1 value: 25.295005668245402 - type: nauc_ndcg_at_1000_max value: 22.505081411406884 - type: nauc_ndcg_at_1000_std value: 8.4229330023338 - type: nauc_ndcg_at_100_diff1 value: 24.384314427252505 - type: nauc_ndcg_at_100_max value: 22.07684629115631 - type: nauc_ndcg_at_100_std value: 8.829711507432597 - type: nauc_ndcg_at_10_diff1 value: 23.485927495206663 - type: nauc_ndcg_at_10_max value: 20.98700547109215 - type: nauc_ndcg_at_10_std value: 6.32811700835107 - type: nauc_ndcg_at_1_diff1 value: 71.41720657694039 - type: nauc_ndcg_at_1_max value: 40.52361010802207 - type: nauc_ndcg_at_1_std value: -3.4966985764484835 - type: nauc_ndcg_at_20_diff1 value: 23.756288073742923 - type: nauc_ndcg_at_20_max value: 21.413410660850303 - type: nauc_ndcg_at_20_std value: 7.650727686015016 - type: nauc_ndcg_at_3_diff1 value: 
25.177960762972727 - type: nauc_ndcg_at_3_max value: 21.14305293776629 - type: nauc_ndcg_at_3_std value: 2.6923518302461122 - type: nauc_ndcg_at_5_diff1 value: 23.990749939332094 - type: nauc_ndcg_at_5_max value: 20.956913803574537 - type: nauc_ndcg_at_5_std value: 4.404215868616478 - type: nauc_precision_at_1000_diff1 value: -2.8791075901918743 - type: nauc_precision_at_1000_max value: 19.15239727544697 - type: nauc_precision_at_1000_std value: 42.77060377512116 - type: nauc_precision_at_100_diff1 value: 1.3378406367413058 - type: nauc_precision_at_100_max value: 16.182541384634163 - type: nauc_precision_at_100_std value: 30.335004023396756 - type: nauc_precision_at_10_diff1 value: 5.54285946463118 - type: nauc_precision_at_10_max value: 13.895094021698995 - type: nauc_precision_at_10_std value: 13.626255441544547 - type: nauc_precision_at_1_diff1 value: 71.41720657694039 - type: nauc_precision_at_1_max value: 40.52361010802207 - type: nauc_precision_at_1_std value: -3.4966985764484835 - type: nauc_precision_at_20_diff1 value: 3.999435996481638 - type: nauc_precision_at_20_max value: 14.198331269969081 - type: nauc_precision_at_20_std value: 19.01442245585053 - type: nauc_precision_at_3_diff1 value: 12.534698984735535 - type: nauc_precision_at_3_max value: 15.937838211499326 - type: nauc_precision_at_3_std value: 4.941150608267901 - type: nauc_precision_at_5_diff1 value: 8.924874254304342 - type: nauc_precision_at_5_max value: 14.839503680284109 - type: nauc_precision_at_5_std value: 8.354174458200886 - type: nauc_recall_at_1000_diff1 value: -2.8791075901918277 - type: nauc_recall_at_1000_max value: 19.15239727544736 - type: nauc_recall_at_1000_std value: 42.77060377512101 - type: nauc_recall_at_100_diff1 value: 1.3378406367413018 - type: nauc_recall_at_100_max value: 16.18254138463397 - type: nauc_recall_at_100_std value: 30.335004023396756 - type: nauc_recall_at_10_diff1 value: 5.542859464631227 - type: nauc_recall_at_10_max value: 13.89509402169902 - type: 
nauc_recall_at_10_std value: 13.62625544154447 - type: nauc_recall_at_1_diff1 value: 71.41720657694039 - type: nauc_recall_at_1_max value: 40.52361010802207 - type: nauc_recall_at_1_std value: -3.4966985764484835 - type: nauc_recall_at_20_diff1 value: 3.999435996481615 - type: nauc_recall_at_20_max value: 14.198331269969106 - type: nauc_recall_at_20_std value: 19.014422455850642 - type: nauc_recall_at_3_diff1 value: 12.534698984735476 - type: nauc_recall_at_3_max value: 15.937838211499253 - type: nauc_recall_at_3_std value: 4.941150608267872 - type: nauc_recall_at_5_diff1 value: 8.924874254304386 - type: nauc_recall_at_5_max value: 14.839503680284134 - type: nauc_recall_at_5_std value: 8.35417445820087 - type: ndcg_at_1 value: 79.46 - type: ndcg_at_10 value: 71.324 - type: ndcg_at_100 value: 74.18 - type: ndcg_at_1000 value: 75.316 - type: ndcg_at_20 value: 72.551 - type: ndcg_at_3 value: 66.57300000000001 - type: ndcg_at_5 value: 69.241 - type: precision_at_1 value: 79.46 - type: precision_at_10 value: 14.915999999999999 - type: precision_at_100 value: 1.714 - type: precision_at_1000 value: 0.186 - type: precision_at_20 value: 7.852 - type: precision_at_3 value: 42.732 - type: precision_at_5 value: 27.743000000000002 - type: recall_at_1 value: 39.73 - type: recall_at_10 value: 74.578 - type: recall_at_100 value: 85.69200000000001 - type: recall_at_1000 value: 93.194 - type: recall_at_20 value: 78.521 - type: recall_at_3 value: 64.09899999999999 - type: recall_at_5 value: 69.35900000000001 - task: type: Classification dataset: name: MTEB IFlyTek (default) type: C-MTEB/IFlyTek-classification config: default split: test revision: 421605374b29664c5fc098418fe20ada9bd55f8a metrics: - type: accuracy value: 0.0 - type: f1 value: 0.0 - type: f1_weighted value: 0.0 - type: main_score value: 0.0 - task: type: Classification dataset: name: MTEB IFlyTek (default) type: C-MTEB/IFlyTek-classification config: default split: validation revision: 
421605374b29664c5fc098418fe20ada9bd55f8a metrics: - type: accuracy value: 47.97999230473259 - type: f1 value: 35.99868153324778 - type: f1_weighted value: 45.93902403943046 - type: main_score value: 47.97999230473259 - task: type: Classification dataset: name: MTEB ImdbClassification (default) type: mteb/imdb config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 87.066 - type: ap value: 81.39504087177659 - type: ap_weighted value: 81.39504087177659 - type: f1 value: 87.03207693979114 - type: f1_weighted value: 87.03207693979114 - type: main_score value: 87.066 - task: type: Classification dataset: name: MTEB JDReview (default) type: C-MTEB/JDReview-classification config: default split: test revision: b7c64bd89eb87f8ded463478346f76731f07bf8b metrics: - type: accuracy value: 80.50656660412757 - type: ap value: 44.39524359482253 - type: ap_weighted value: 44.39524359482253 - type: f1 value: 74.47089881755461 - type: f1_weighted value: 82.26720272194022 - type: main_score value: 80.50656660412757 - task: type: STS dataset: name: MTEB LCQMC (default) type: C-MTEB/LCQMC config: default split: test revision: 17f9b096f80380fce5ed12a9be8be7784b337daf metrics: - type: cosine_pearson value: 67.51896934235906 - type: cosine_spearman value: 73.71926625669903 - type: euclidean_pearson value: 71.95413794810199 - type: euclidean_spearman value: 73.7192706889374 - type: main_score value: 73.71926625669903 - type: manhattan_pearson value: 71.99442345245122 - type: manhattan_spearman value: 73.70096693054006 - type: pearson value: 67.51896934235906 - type: spearman value: 73.71926625669903 - task: type: Reranking dataset: name: MTEB MMarcoReranking (default) type: C-MTEB/Mmarco-reranking config: default split: dev revision: 8e0c766dbe9e16e1d221116a3f36795fbade07f6 metrics: - type: main_score value: 32.47827175843312 - type: map value: 32.47827175843312 - type: mrr value: 31.68015873015873 - type: nAUC_map_diff1 value: 
28.44752902999802 - type: nAUC_map_max value: -1.2720002819461194 - type: nAUC_map_std value: -17.183634811974066 - type: nAUC_mrr_diff1 value: 28.98249515778471 - type: nAUC_mrr_max value: -2.2626950880487264 - type: nAUC_mrr_std value: -18.15422633230884 - task: type: Retrieval dataset: name: MTEB MMarcoRetrieval (default) type: C-MTEB/MMarcoRetrieval config: default split: test revision: 539bbde593d947e2a124ba72651aafc09eb33fc2 metrics: - type: main_score value: 81.44200000000001 - type: map_at_1 value: 69.057 - type: map_at_10 value: 77.928 - type: map_at_100 value: 78.215 - type: map_at_1000 value: 78.223 - type: map_at_20 value: 78.125 - type: map_at_3 value: 76.149 - type: map_at_5 value: 77.279 - type: mrr_at_1 value: 71.30372492836676 - type: mrr_at_10 value: 78.42867944694576 - type: mrr_at_100 value: 78.68026411713835 - type: mrr_at_1000 value: 78.68715003278672 - type: mrr_at_20 value: 78.59844616726932 - type: mrr_at_3 value: 76.88634192932187 - type: mrr_at_5 value: 77.88061127029584 - type: nauc_map_at_1000_diff1 value: 79.16332823556652 - type: nauc_map_at_1000_max value: 42.136182569734615 - type: nauc_map_at_1000_std value: -14.021601911949988 - type: nauc_map_at_100_diff1 value: 79.1613187755636 - type: nauc_map_at_100_max value: 42.14855231595681 - type: nauc_map_at_100_std value: -14.000665861408082 - type: nauc_map_at_10_diff1 value: 79.06482011907491 - type: nauc_map_at_10_max value: 42.24609232573728 - type: nauc_map_at_10_std value: -14.077085057137879 - type: nauc_map_at_1_diff1 value: 81.21875382508544 - type: nauc_map_at_1_max value: 33.86760525968747 - type: nauc_map_at_1_std value: -20.298879297830517 - type: nauc_map_at_20_diff1 value: 79.11748052836347 - type: nauc_map_at_20_max value: 42.19786763146322 - type: nauc_map_at_20_std value: -13.95931426053525 - type: nauc_map_at_3_diff1 value: 79.04607858094613 - type: nauc_map_at_3_max value: 41.0549681711687 - type: nauc_map_at_3_std value: -15.79648875556265 - type: 
nauc_map_at_5_diff1 value: 78.98637879660431 - type: nauc_map_at_5_max value: 41.99578332353169 - type: nauc_map_at_5_std value: -14.858893068584978 - type: nauc_mrr_at_1000_diff1 value: 79.65531133349177 - type: nauc_mrr_at_1000_max value: 43.06959597550469 - type: nauc_mrr_at_1000_std value: -12.827067228472465 - type: nauc_mrr_at_100_diff1 value: 79.65273047660082 - type: nauc_mrr_at_100_max value: 43.081843491541136 - type: nauc_mrr_at_100_std value: -12.805057022346073 - type: nauc_mrr_at_10_diff1 value: 79.55218639018759 - type: nauc_mrr_at_10_max value: 43.22658416132472 - type: nauc_mrr_at_10_std value: -12.733112379030551 - type: nauc_mrr_at_1_diff1 value: 81.93745763388229 - type: nauc_mrr_at_1_max value: 38.419758204798775 - type: nauc_mrr_at_1_std value: -18.398257441907354 - type: nauc_mrr_at_20_diff1 value: 79.60762520867635 - type: nauc_mrr_at_20_max value: 43.1410526852498 - type: nauc_mrr_at_20_std value: -12.72941439930238 - type: nauc_mrr_at_3_diff1 value: 79.62059579843309 - type: nauc_mrr_at_3_max value: 42.449859611207934 - type: nauc_mrr_at_3_std value: -14.078192368091425 - type: nauc_mrr_at_5_diff1 value: 79.47461249319058 - type: nauc_mrr_at_5_max value: 42.990249727432364 - type: nauc_mrr_at_5_std value: -13.4004037752339 - type: nauc_ndcg_at_1000_diff1 value: 78.85521456806612 - type: nauc_ndcg_at_1000_max value: 44.20182195097601 - type: nauc_ndcg_at_1000_std value: -10.78863328246398 - type: nauc_ndcg_at_100_diff1 value: 78.78612088837218 - type: nauc_ndcg_at_100_max value: 44.67483956772654 - type: nauc_ndcg_at_100_std value: -9.952417799203559 - type: nauc_ndcg_at_10_diff1 value: 78.27575531032375 - type: nauc_ndcg_at_10_max value: 45.3567853674585 - type: nauc_ndcg_at_10_std value: -9.843451827530492 - type: nauc_ndcg_at_1_diff1 value: 81.93745763388229 - type: nauc_ndcg_at_1_max value: 38.419758204798775 - type: nauc_ndcg_at_1_std value: -18.398257441907354 - type: nauc_ndcg_at_20_diff1 value: 78.46005207162577 - type: 
nauc_ndcg_at_20_max value: 45.11954152664807 - type: nauc_ndcg_at_20_std value: -9.544486301913391 - type: nauc_ndcg_at_3_diff1 value: 78.36667983674094 - type: nauc_ndcg_at_3_max value: 42.9311716520143 - type: nauc_ndcg_at_3_std value: -13.742138987703386 - type: nauc_ndcg_at_5_diff1 value: 78.11043344351806 - type: nauc_ndcg_at_5_max value: 44.569017736822126 - type: nauc_ndcg_at_5_std value: -12.018408823200332 - type: nauc_precision_at_1000_diff1 value: -20.925206348791804 - type: nauc_precision_at_1000_max value: 16.508882531465748 - type: nauc_precision_at_1000_std value: 28.256279950018325 - type: nauc_precision_at_100_diff1 value: -9.112393184096334 - type: nauc_precision_at_100_max value: 25.493617547909665 - type: nauc_precision_at_100_std value: 32.40172394495665 - type: nauc_precision_at_10_diff1 value: 17.208835586727638 - type: nauc_precision_at_10_max value: 37.7546142144074 - type: nauc_precision_at_10_std value: 21.54342493539188 - type: nauc_precision_at_1_diff1 value: 81.93745763388229 - type: nauc_precision_at_1_max value: 38.419758204798775 - type: nauc_precision_at_1_std value: -18.398257441907354 - type: nauc_precision_at_20_diff1 value: 5.981358362224364 - type: nauc_precision_at_20_max value: 32.9389498605972 - type: nauc_precision_at_20_std value: 27.379135010444607 - type: nauc_precision_at_3_diff1 value: 44.160977179222705 - type: nauc_precision_at_3_max value: 40.772105564552746 - type: nauc_precision_at_3_std value: 1.4884707594160764 - type: nauc_precision_at_5_diff1 value: 32.05559296191691 - type: nauc_precision_at_5_max value: 41.200449688782385 - type: nauc_precision_at_5_std value: 9.780866426114939 - type: nauc_recall_at_1000_diff1 value: 71.8206413337552 - type: nauc_recall_at_1000_max value: 90.63640978316079 - type: nauc_recall_at_1000_std value: 77.30548952215615 - type: nauc_recall_at_100_diff1 value: 72.73716194333042 - type: nauc_recall_at_100_max value: 84.00954968630633 - type: nauc_recall_at_100_std value: 
62.53425171186474 - type: nauc_recall_at_10_diff1 value: 71.50695852530222 - type: nauc_recall_at_10_max value: 64.81575522599766 - type: nauc_recall_at_10_std value: 17.67037787116186 - type: nauc_recall_at_1_diff1 value: 81.21875382508544 - type: nauc_recall_at_1_max value: 33.86760525968747 - type: nauc_recall_at_1_std value: -20.298879297830517 - type: nauc_recall_at_20_diff1 value: 70.85924807023792 - type: nauc_recall_at_20_max value: 70.62057405576428 - type: nauc_recall_at_20_std value: 31.790496314992726 - type: nauc_recall_at_3_diff1 value: 74.86279950554011 - type: nauc_recall_at_3_max value: 47.821889540251064 - type: nauc_recall_at_3_std value: -8.318141348316889 - type: nauc_recall_at_5_diff1 value: 72.88547370934779 - type: nauc_recall_at_5_max value: 55.21595143637733 - type: nauc_recall_at_5_std value: -0.5790325911766804 - type: ndcg_at_1 value: 71.304 - type: ndcg_at_10 value: 81.44200000000001 - type: ndcg_at_100 value: 82.69 - type: ndcg_at_1000 value: 82.901 - type: ndcg_at_20 value: 82.114 - type: ndcg_at_3 value: 78.091 - type: ndcg_at_5 value: 80.00500000000001 - type: precision_at_1 value: 71.304 - type: precision_at_10 value: 9.764000000000001 - type: precision_at_100 value: 1.0370000000000001 - type: precision_at_1000 value: 0.105 - type: precision_at_20 value: 5.024 - type: precision_at_3 value: 29.244999999999997 - type: precision_at_5 value: 18.567 - type: recall_at_1 value: 69.057 - type: recall_at_10 value: 91.742 - type: recall_at_100 value: 97.295 - type: recall_at_1000 value: 98.97399999999999 - type: recall_at_20 value: 94.328 - type: recall_at_3 value: 82.918 - type: recall_at_5 value: 87.477 - task: type: Retrieval dataset: name: MTEB MSMARCO (default) type: mteb/msmarco config: default split: dev revision: c5a29a104738b98a9e76336939199e264163d4a0 metrics: - type: main_score value: 41.827999999999996 - type: map_at_1 value: 22.245 - type: map_at_10 value: 34.741 - type: map_at_100 value: 35.958 - type: map_at_1000 value: 36.0 
- type: map_at_20 value: 35.503 - type: map_at_3 value: 30.676 - type: map_at_5 value: 33.047 - type: mrr_at_1 value: 22.836676217765042 - type: mrr_at_10 value: 35.339734843316506 - type: mrr_at_100 value: 36.48494031661305 - type: mrr_at_1000 value: 36.5220826466321 - type: mrr_at_20 value: 36.061448870738566 - type: mrr_at_3 value: 31.363419293218588 - type: mrr_at_5 value: 33.6778892072588 - type: nauc_map_at_1000_diff1 value: 33.70269918652719 - type: nauc_map_at_1000_max value: -0.9666983449376146 - type: nauc_map_at_1000_std value: -24.106835117162635 - type: nauc_map_at_100_diff1 value: 33.69582567164444 - type: nauc_map_at_100_max value: -0.9713399181710164 - type: nauc_map_at_100_std value: -24.09732526417952 - type: nauc_map_at_10_diff1 value: 33.54760088792205 - type: nauc_map_at_10_max value: -1.108864145592058 - type: nauc_map_at_10_std value: -24.714000593926635 - type: nauc_map_at_1_diff1 value: 37.83176482911279 - type: nauc_map_at_1_max value: -1.2803428780118231 - type: nauc_map_at_1_std value: -21.43672521847787 - type: nauc_map_at_20_diff1 value: 33.56330277702434 - type: nauc_map_at_20_max value: -1.0453224620903316 - type: nauc_map_at_20_std value: -24.397377217635892 - type: nauc_map_at_3_diff1 value: 33.836449090455694 - type: nauc_map_at_3_max value: -1.4151762945866553 - type: nauc_map_at_3_std value: -24.53920025162081 - type: nauc_map_at_5_diff1 value: 33.540475708611254 - type: nauc_map_at_5_max value: -1.2270133827372984 - type: nauc_map_at_5_std value: -24.898031963382653 - type: nauc_mrr_at_1000_diff1 value: 33.546755355679295 - type: nauc_mrr_at_1000_max value: -0.9999621583376623 - type: nauc_mrr_at_1000_std value: -23.885688782118415 - type: nauc_mrr_at_100_diff1 value: 33.540941435457846 - type: nauc_mrr_at_100_max value: -1.001220466565939 - type: nauc_mrr_at_100_std value: -23.875083905633048 - type: nauc_mrr_at_10_diff1 value: 33.37588602067944 - type: nauc_mrr_at_10_max value: -1.0813769231834895 - type: nauc_mrr_at_10_std 
value: -24.438406987527287 - type: nauc_mrr_at_1_diff1 value: 37.70984030766279 - type: nauc_mrr_at_1_max value: -1.3745841550868614 - type: nauc_mrr_at_1_std value: -21.46461137322961 - type: nauc_mrr_at_20_diff1 value: 33.40614386892839 - type: nauc_mrr_at_20_max value: -1.0449149378336973 - type: nauc_mrr_at_20_std value: -24.13679244294705 - type: nauc_mrr_at_3_diff1 value: 33.644563276200735 - type: nauc_mrr_at_3_max value: -1.4969606485922458 - type: nauc_mrr_at_3_std value: -24.348111206749714 - type: nauc_mrr_at_5_diff1 value: 33.36640792187642 - type: nauc_mrr_at_5_max value: -1.2313355299819755 - type: nauc_mrr_at_5_std value: -24.630079858307177 - type: nauc_ndcg_at_1000_diff1 value: 33.0122331192661 - type: nauc_ndcg_at_1000_max value: -0.24697428352372258 - type: nauc_ndcg_at_1000_std value: -23.013467138693887 - type: nauc_ndcg_at_100_diff1 value: 32.86665293711552 - type: nauc_ndcg_at_100_max value: -0.276416624031757 - type: nauc_ndcg_at_100_std value: -22.45097004537971 - type: nauc_ndcg_at_10_diff1 value: 32.06009904567439 - type: nauc_ndcg_at_10_max value: -0.9105345903791483 - type: nauc_ndcg_at_10_std value: -25.661880461901248 - type: nauc_ndcg_at_1_diff1 value: 37.70984030766279 - type: nauc_ndcg_at_1_max value: -1.3745841550868614 - type: nauc_ndcg_at_1_std value: -21.46461137322961 - type: nauc_ndcg_at_20_diff1 value: 32.067609578292775 - type: nauc_ndcg_at_20_max value: -0.732282304094851 - type: nauc_ndcg_at_20_std value: -24.550324249058423 - type: nauc_ndcg_at_3_diff1 value: 32.60074846100642 - type: nauc_ndcg_at_3_max value: -1.5329621325967313 - type: nauc_ndcg_at_3_std value: -25.410306390920322 - type: nauc_ndcg_at_5_diff1 value: 32.05683625760298 - type: nauc_ndcg_at_5_max value: -1.155409896292399 - type: nauc_ndcg_at_5_std value: -25.997867512038702 - type: nauc_precision_at_1000_diff1 value: -0.582363922011796 - type: nauc_precision_at_1000_max value: 15.367854085208096 - type: nauc_precision_at_1000_std value: 16.62922885462353 
- type: nauc_precision_at_100_diff1 value: 13.413869212944443 - type: nauc_precision_at_100_max value: 9.540599900741062 - type: nauc_precision_at_100_std value: 8.598685767883458 - type: nauc_precision_at_10_diff1 value: 24.607692201117835 - type: nauc_precision_at_10_max value: 0.4073275292029154 - type: nauc_precision_at_10_std value: -26.55809497339693 - type: nauc_precision_at_1_diff1 value: 37.70984030766279 - type: nauc_precision_at_1_max value: -1.3745841550868614 - type: nauc_precision_at_1_std value: -21.46461137322961 - type: nauc_precision_at_20_diff1 value: 20.76545064732853 - type: nauc_precision_at_20_max value: 2.1323836200645387 - type: nauc_precision_at_20_std value: -19.423536825556933 - type: nauc_precision_at_3_diff1 value: 28.62040804487786 - type: nauc_precision_at_3_max value: -1.7875552566437067 - type: nauc_precision_at_3_std value: -27.4938024637869 - type: nauc_precision_at_5_diff1 value: 26.57961892416209 - type: nauc_precision_at_5_max value: -0.821025657887804 - type: nauc_precision_at_5_std value: -28.4053588476215 - type: nauc_recall_at_1000_diff1 value: 29.957070547516786 - type: nauc_recall_at_1000_max value: 37.51269513653321 - type: nauc_recall_at_1000_std value: 50.832935513386445 - type: nauc_recall_at_100_diff1 value: 29.124873637284093 - type: nauc_recall_at_100_max value: 7.456855039971972 - type: nauc_recall_at_100_std value: 5.513183800655616 - type: nauc_recall_at_10_diff1 value: 27.2239066879356 - type: nauc_recall_at_10_max value: -0.40501611552803435 - type: nauc_recall_at_10_std value: -28.66151209173145 - type: nauc_recall_at_1_diff1 value: 37.83176482911279 - type: nauc_recall_at_1_max value: -1.2803428780118231 - type: nauc_recall_at_1_std value: -21.43672521847787 - type: nauc_recall_at_20_diff1 value: 25.996575895775436 - type: nauc_recall_at_20_max value: 0.371917541705145 - type: nauc_recall_at_20_std value: -24.05013745494552 - type: nauc_recall_at_3_diff1 value: 29.23774283371172 - type: nauc_recall_at_3_max 
value: -1.792638771577912 - type: nauc_recall_at_3_std value: -27.680214935573588 - type: nauc_recall_at_5_diff1 value: 27.78539931643594 - type: nauc_recall_at_5_max value: -0.9461596361216702 - type: nauc_recall_at_5_std value: -29.02852975571309 - type: ndcg_at_1 value: 22.837 - type: ndcg_at_10 value: 41.827999999999996 - type: ndcg_at_100 value: 47.602 - type: ndcg_at_1000 value: 48.638999999999996 - type: ndcg_at_20 value: 44.506 - type: ndcg_at_3 value: 33.594 - type: ndcg_at_5 value: 37.81 - type: precision_at_1 value: 22.837 - type: precision_at_10 value: 6.65 - type: precision_at_100 value: 0.954 - type: precision_at_1000 value: 0.104 - type: precision_at_20 value: 3.8859999999999997 - type: precision_at_3 value: 14.302999999999999 - type: precision_at_5 value: 10.719 - type: recall_at_1 value: 22.245 - type: recall_at_10 value: 63.660000000000004 - type: recall_at_100 value: 90.187 - type: recall_at_1000 value: 98.095 - type: recall_at_20 value: 74.008 - type: recall_at_3 value: 41.349999999999994 - type: recall_at_5 value: 51.480000000000004 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (en) type: mteb/mtop_domain config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 93.86000911992704 - type: f1 value: 93.36462701030769 - type: f1_weighted value: 93.87166235541487 - type: main_score value: 93.86000911992704 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (en) type: mteb/mtop_intent config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 71.77610579115368 - type: f1 value: 52.094627301273746 - type: f1_weighted value: 74.31447677132623 - type: main_score value: 71.77610579115368 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (en) type: mteb/amazon_massive_intent config: en split: test revision: 4672e20407010da34463acc759c162ca9734bca6 metrics: - type: accuracy value: 
72.14525891055817 - type: f1 value: 70.01668873115348 - type: f1_weighted value: 71.0196932891963 - type: main_score value: 72.14525891055817 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (zh-CN) type: mteb/amazon_massive_intent config: zh-CN split: test revision: 4672e20407010da34463acc759c162ca9734bca6 metrics: - type: accuracy value: 70.3866845998655 - type: f1 value: 68.0106461866208 - type: f1_weighted value: 69.47183715090725 - type: main_score value: 70.3866845998655 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (en) type: mteb/amazon_massive_intent config: en split: validation revision: 4672e20407010da34463acc759c162ca9734bca6 metrics: - type: accuracy value: 72.56763403836695 - type: f1 value: 68.74137086779079 - type: f1_weighted value: 71.17832082465809 - type: main_score value: 72.56763403836695 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (zh-CN) type: mteb/amazon_massive_intent config: zh-CN split: validation revision: 4672e20407010da34463acc759c162ca9734bca6 metrics: - type: accuracy value: 71.17560255779637 - type: f1 value: 67.53436094771642 - type: f1_weighted value: 69.85911870240461 - type: main_score value: 71.17560255779637 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (en) type: mteb/amazon_massive_scenario config: en split: test revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 metrics: - type: accuracy value: 75.83725622057834 - type: f1 value: 74.81652741027294 - type: f1_weighted value: 75.64384667945804 - type: main_score value: 75.83725622057834 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (zh-CN) type: mteb/amazon_massive_scenario config: zh-CN split: test revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 metrics: - type: accuracy value: 74.77471418964359 - type: f1 value: 74.50834305419674 - type: f1_weighted value: 74.51089478391411 - type: main_score value: 
74.77471418964359 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (en) type: mteb/amazon_massive_scenario config: en split: validation revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 metrics: - type: accuracy value: 75.2680767338908 - type: f1 value: 73.84891408751763 - type: f1_weighted value: 75.0958616975504 - type: main_score value: 75.2680767338908 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (zh-CN) type: mteb/amazon_massive_scenario config: zh-CN split: validation revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 metrics: - type: accuracy value: 74.20068863748155 - type: f1 value: 73.56517145836091 - type: f1_weighted value: 74.02483580359413 - type: main_score value: 74.20068863748155 - task: type: Retrieval dataset: name: MTEB MedicalRetrieval (default) type: C-MTEB/MedicalRetrieval config: default split: test revision: 2039188fb5800a9803ba5048df7b76e6fb151fc6 metrics: - type: main_score value: 59.93000000000001 - type: map_at_1 value: 52.5 - type: map_at_10 value: 57.489000000000004 - type: map_at_100 value: 58.006 - type: map_at_1000 value: 58.06 - type: map_at_20 value: 57.757999999999996 - type: map_at_3 value: 56.2 - type: map_at_5 value: 56.974999999999994 - type: mrr_at_1 value: 52.7 - type: mrr_at_10 value: 57.588531746031755 - type: mrr_at_100 value: 58.10652803307724 - type: mrr_at_1000 value: 58.160443460868684 - type: mrr_at_20 value: 57.858160152540975 - type: mrr_at_3 value: 56.3 - type: mrr_at_5 value: 57.074999999999996 - type: nauc_map_at_1000_diff1 value: 79.08319909758636 - type: nauc_map_at_1000_max value: 66.40358430901192 - type: nauc_map_at_1000_std value: 24.962984166768837 - type: nauc_map_at_100_diff1 value: 79.06038136198957 - type: nauc_map_at_100_max value: 66.39726845981066 - type: nauc_map_at_100_std value: 24.949716444423807 - type: nauc_map_at_10_diff1 value: 79.17675719820811 - type: nauc_map_at_10_max value: 66.51678197592413 - type: nauc_map_at_10_std 
value: 24.85733388904244 - type: nauc_map_at_1_diff1 value: 82.65982976105012 - type: nauc_map_at_1_max value: 66.0229036338153 - type: nauc_map_at_1_std value: 22.09004204696952 - type: nauc_map_at_20_diff1 value: 79.0706949814673 - type: nauc_map_at_20_max value: 66.41921898804029 - type: nauc_map_at_20_std value: 24.9286448686172 - type: nauc_map_at_3_diff1 value: 79.7198001441378 - type: nauc_map_at_3_max value: 67.00063808028989 - type: nauc_map_at_3_std value: 24.074213865142884 - type: nauc_map_at_5_diff1 value: 79.35098048907732 - type: nauc_map_at_5_max value: 66.80815275648563 - type: nauc_map_at_5_std value: 24.54538796165573 - type: nauc_mrr_at_1000_diff1 value: 78.85646848963292 - type: nauc_mrr_at_1000_max value: 66.74961594120661 - type: nauc_mrr_at_1000_std value: 25.261834568256063 - type: nauc_mrr_at_100_diff1 value: 78.83397065160052 - type: nauc_mrr_at_100_max value: 66.74276613157386 - type: nauc_mrr_at_100_std value: 25.24809500309785 - type: nauc_mrr_at_10_diff1 value: 78.9530523067497 - type: nauc_mrr_at_10_max value: 66.85891502850205 - type: nauc_mrr_at_10_std value: 25.152891516847536 - type: nauc_mrr_at_1_diff1 value: 82.2465610017115 - type: nauc_mrr_at_1_max value: 66.66511371063773 - type: nauc_mrr_at_1_std value: 22.639906493776998 - type: nauc_mrr_at_20_diff1 value: 78.84558284790198 - type: nauc_mrr_at_20_max value: 66.76277766324108 - type: nauc_mrr_at_20_std value: 25.225303624662814 - type: nauc_mrr_at_3_diff1 value: 79.50156771547003 - type: nauc_mrr_at_3_max value: 67.33583901650987 - type: nauc_mrr_at_3_std value: 24.362469504761627 - type: nauc_mrr_at_5_diff1 value: 79.12943881636619 - type: nauc_mrr_at_5_max value: 67.14759555422152 - type: nauc_mrr_at_5_std value: 24.83799695076027 - type: nauc_ndcg_at_1000_diff1 value: 77.52019660815121 - type: nauc_ndcg_at_1000_max value: 65.83717552187926 - type: nauc_ndcg_at_1000_std value: 27.034582867678804 - type: nauc_ndcg_at_100_diff1 value: 76.93747758970423 - type: 
nauc_ndcg_at_100_max value: 65.60810295420501 - type: nauc_ndcg_at_100_std value: 26.941487810863034 - type: nauc_ndcg_at_10_diff1 value: 77.57948103065401 - type: nauc_ndcg_at_10_max value: 66.07222651913443 - type: nauc_ndcg_at_10_std value: 26.35911536261543 - type: nauc_ndcg_at_1_diff1 value: 82.65982976105012 - type: nauc_ndcg_at_1_max value: 66.0229036338153 - type: nauc_ndcg_at_1_std value: 22.09004204696952 - type: nauc_ndcg_at_20_diff1 value: 77.12409727019678 - type: nauc_ndcg_at_20_max value: 65.71984870176335 - type: nauc_ndcg_at_20_std value: 26.673365148606948 - type: nauc_ndcg_at_3_diff1 value: 78.75978575557033 - type: nauc_ndcg_at_3_max value: 67.13135093269904 - type: nauc_ndcg_at_3_std value: 24.706967615687816 - type: nauc_ndcg_at_5_diff1 value: 78.05104990867088 - type: nauc_ndcg_at_5_max value: 66.79111424562637 - type: nauc_ndcg_at_5_std value: 25.615575237732614 - type: nauc_precision_at_1000_diff1 value: 60.2983269810654 - type: nauc_precision_at_1000_max value: 58.90618542498941 - type: nauc_precision_at_1000_std value: 62.82775405244051 - type: nauc_precision_at_100_diff1 value: 64.11646723766715 - type: nauc_precision_at_100_max value: 60.282482009275654 - type: nauc_precision_at_100_std value: 39.473517969667135 - type: nauc_precision_at_10_diff1 value: 71.82001686121029 - type: nauc_precision_at_10_max value: 64.12657559785326 - type: nauc_precision_at_10_std value: 31.89716032543505 - type: nauc_precision_at_1_diff1 value: 82.65982976105012 - type: nauc_precision_at_1_max value: 66.0229036338153 - type: nauc_precision_at_1_std value: 22.09004204696952 - type: nauc_precision_at_20_diff1 value: 69.01813327818459 - type: nauc_precision_at_20_max value: 62.31511858543514 - type: nauc_precision_at_20_std value: 33.98133090177575 - type: nauc_precision_at_3_diff1 value: 75.85071053792088 - type: nauc_precision_at_3_max value: 67.4643531059972 - type: nauc_precision_at_3_std value: 26.61929747194295 - type: nauc_precision_at_5_diff1 value: 
73.80236395769283 - type: nauc_precision_at_5_max value: 66.62363925820746 - type: nauc_precision_at_5_std value: 29.175770150771204 - type: nauc_recall_at_1000_diff1 value: 60.29832698106552 - type: nauc_recall_at_1000_max value: 58.9061854249895 - type: nauc_recall_at_1000_std value: 62.82775405244069 - type: nauc_recall_at_100_diff1 value: 64.11646723766702 - type: nauc_recall_at_100_max value: 60.282482009275654 - type: nauc_recall_at_100_std value: 39.47351796966711 - type: nauc_recall_at_10_diff1 value: 71.82001686121032 - type: nauc_recall_at_10_max value: 64.12657559785328 - type: nauc_recall_at_10_std value: 31.897160325435102 - type: nauc_recall_at_1_diff1 value: 82.65982976105012 - type: nauc_recall_at_1_max value: 66.0229036338153 - type: nauc_recall_at_1_std value: 22.09004204696952 - type: nauc_recall_at_20_diff1 value: 69.01813327818459 - type: nauc_recall_at_20_max value: 62.3151185854351 - type: nauc_recall_at_20_std value: 33.981330901775735 - type: nauc_recall_at_3_diff1 value: 75.85071053792085 - type: nauc_recall_at_3_max value: 67.4643531059972 - type: nauc_recall_at_3_std value: 26.619297471942975 - type: nauc_recall_at_5_diff1 value: 73.80236395769293 - type: nauc_recall_at_5_max value: 66.62363925820748 - type: nauc_recall_at_5_std value: 29.175770150771186 - type: ndcg_at_1 value: 52.5 - type: ndcg_at_10 value: 59.93000000000001 - type: ndcg_at_100 value: 62.697 - type: ndcg_at_1000 value: 64.28399999999999 - type: ndcg_at_20 value: 60.914 - type: ndcg_at_3 value: 57.336 - type: ndcg_at_5 value: 58.713 - type: precision_at_1 value: 52.5 - type: precision_at_10 value: 6.76 - type: precision_at_100 value: 0.8109999999999999 - type: precision_at_1000 value: 0.094 - type: precision_at_20 value: 3.5749999999999997 - type: precision_at_3 value: 20.200000000000003 - type: precision_at_5 value: 12.78 - type: recall_at_1 value: 52.5 - type: recall_at_10 value: 67.60000000000001 - type: recall_at_100 value: 81.10000000000001 - type: recall_at_1000 
value: 93.89999999999999 - type: recall_at_20 value: 71.5 - type: recall_at_3 value: 60.6 - type: recall_at_5 value: 63.9 - task: type: Clustering dataset: name: MTEB MedrxivClusteringP2P (default) type: mteb/medrxiv-clustering-p2p config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: main_score value: 34.731017731166105 - type: v_measure value: 34.731017731166105 - type: v_measure_std value: 1.5618103916501433 - task: type: Clustering dataset: name: MTEB MedrxivClusteringS2S (default) type: mteb/medrxiv-clustering-s2s config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: main_score value: 31.545874099031675 - type: v_measure value: 31.545874099031675 - type: v_measure_std value: 1.4489482273302008 - task: type: Reranking dataset: name: MTEB MindSmallReranking (default) type: mteb/mind_small config: default split: test revision: 59042f120c80e8afa9cdbb224f67076cec0fc9a7 metrics: - type: main_score value: 32.68299163296308 - type: map value: 32.68299163296308 - type: mrr value: 33.94301395316366 - type: nAUC_map_diff1 value: 12.502931744458973 - type: nAUC_map_max value: -21.63110475017275 - type: nAUC_map_std value: 0.6459544098312916 - type: nAUC_mrr_diff1 value: 11.816048638685693 - type: nAUC_mrr_max value: -15.973240530490395 - type: nAUC_mrr_std value: 1.9732078672552686 - task: type: Classification dataset: name: MTEB MultilingualSentiment (default) type: C-MTEB/MultilingualSentiment-classification config: default split: test revision: 46958b007a63fdbf239b7672c25d0bea67b5ea1a metrics: - type: accuracy value: 72.95333333333333 - type: f1 value: 72.36132389042342 - type: f1_weighted value: 72.3613238904234 - type: main_score value: 72.95333333333333 - task: type: Classification dataset: name: MTEB MultilingualSentiment (default) type: C-MTEB/MultilingualSentiment-classification config: default split: validation revision: 46958b007a63fdbf239b7672c25d0bea67b5ea1a metrics: - type: 
accuracy value: 71.95333333333333 - type: f1 value: 71.30311389484186 - type: f1_weighted value: 71.30311389484187 - type: main_score value: 71.95333333333333 - task: type: Retrieval dataset: name: MTEB NFCorpus (default) type: mteb/nfcorpus config: default split: test revision: ec0fa4fe99da2ff19ca1214b7966684033a58814 metrics: - type: main_score value: 37.749 - type: map_at_1 value: 6.4159999999999995 - type: map_at_10 value: 14.491000000000001 - type: map_at_100 value: 18.33 - type: map_at_1000 value: 19.953000000000003 - type: map_at_20 value: 15.973 - type: map_at_3 value: 10.417 - type: map_at_5 value: 12.303 - type: mrr_at_1 value: 48.91640866873065 - type: mrr_at_10 value: 57.561673792323944 - type: mrr_at_100 value: 58.053397955445895 - type: mrr_at_1000 value: 58.08848482119531 - type: mrr_at_20 value: 57.871279571741574 - type: mrr_at_3 value: 55.26315789473685 - type: mrr_at_5 value: 56.671826625387 - type: nauc_map_at_1000_diff1 value: 32.624665660376444 - type: nauc_map_at_1000_max value: 28.128385961803097 - type: nauc_map_at_1000_std value: 11.87883353166736 - type: nauc_map_at_100_diff1 value: 34.57600972779998 - type: nauc_map_at_100_max value: 27.32046439964767 - type: nauc_map_at_100_std value: 8.254050946905554 - type: nauc_map_at_10_diff1 value: 38.352776312240316 - type: nauc_map_at_10_max value: 20.71383460865022 - type: nauc_map_at_10_std value: -3.6182278698175008 - type: nauc_map_at_1_diff1 value: 51.364207086510284 - type: nauc_map_at_1_max value: 8.650050628738809 - type: nauc_map_at_1_std value: -17.30631242512481 - type: nauc_map_at_20_diff1 value: 36.6488106109285 - type: nauc_map_at_20_max value: 23.646170774047608 - type: nauc_map_at_20_std value: 0.9988156075379873 - type: nauc_map_at_3_diff1 value: 42.77938198571511 - type: nauc_map_at_3_max value: 13.04870823631715 - type: nauc_map_at_3_std value: -12.642790311189103 - type: nauc_map_at_5_diff1 value: 40.14997508237488 - type: nauc_map_at_5_max value: 16.841994515634102 - type: 
nauc_map_at_5_std value: -8.827177258110211 - type: nauc_mrr_at_1000_diff1 value: 33.19158658938379 - type: nauc_mrr_at_1000_max value: 50.78735289388637 - type: nauc_mrr_at_1000_std value: 29.483390453949475 - type: nauc_mrr_at_100_diff1 value: 33.181857073965006 - type: nauc_mrr_at_100_max value: 50.82192854045011 - type: nauc_mrr_at_100_std value: 29.530433535813316 - type: nauc_mrr_at_10_diff1 value: 33.13575237370853 - type: nauc_mrr_at_10_max value: 50.840928702265245 - type: nauc_mrr_at_10_std value: 29.393982460321617 - type: nauc_mrr_at_1_diff1 value: 35.38635024440146 - type: nauc_mrr_at_1_max value: 45.58280413169544 - type: nauc_mrr_at_1_std value: 20.118650543521753 - type: nauc_mrr_at_20_diff1 value: 33.32009076370569 - type: nauc_mrr_at_20_max value: 50.67114851216221 - type: nauc_mrr_at_20_std value: 29.421770858743024 - type: nauc_mrr_at_3_diff1 value: 33.17533789218473 - type: nauc_mrr_at_3_max value: 50.00421069382273 - type: nauc_mrr_at_3_std value: 28.784501459911233 - type: nauc_mrr_at_5_diff1 value: 32.66135736896744 - type: nauc_mrr_at_5_max value: 50.401707427923505 - type: nauc_mrr_at_5_std value: 29.357909892487232 - type: nauc_ndcg_at_1000_diff1 value: 29.02160255181641 - type: nauc_ndcg_at_1000_max value: 44.98065565601714 - type: nauc_ndcg_at_1000_std value: 31.652110733336887 - type: nauc_ndcg_at_100_diff1 value: 28.851190536083593 - type: nauc_ndcg_at_100_max value: 39.26997767014831 - type: nauc_ndcg_at_100_std value: 25.574099100530827 - type: nauc_ndcg_at_10_diff1 value: 24.733756826812474 - type: nauc_ndcg_at_10_max value: 39.51573298713868 - type: nauc_ndcg_at_10_std value: 26.10826723752759 - type: nauc_ndcg_at_1_diff1 value: 35.86747711483557 - type: nauc_ndcg_at_1_max value: 43.60593203885657 - type: nauc_ndcg_at_1_std value: 16.90139427357944 - type: nauc_ndcg_at_20_diff1 value: 24.701717110335373 - type: nauc_ndcg_at_20_max value: 37.56137361178106 - type: nauc_ndcg_at_20_std value: 25.65000140011744 - type: 
nauc_ndcg_at_3_diff1 value: 27.58703963162813 - type: nauc_ndcg_at_3_max value: 42.377949191047975 - type: nauc_ndcg_at_3_std value: 22.006636261926808 - type: nauc_ndcg_at_5_diff1 value: 25.323540164365394 - type: nauc_ndcg_at_5_max value: 42.077483541800355 - type: nauc_ndcg_at_5_std value: 24.38614012402223 - type: nauc_precision_at_1000_diff1 value: -18.554231105026798 - type: nauc_precision_at_1000_max value: 8.600104573044353 - type: nauc_precision_at_1000_std value: 35.24043924606992 - type: nauc_precision_at_100_diff1 value: -12.366332039473939 - type: nauc_precision_at_100_max value: 21.684056644697822 - type: nauc_precision_at_100_std value: 44.19851905373012 - type: nauc_precision_at_10_diff1 value: 4.981890145850079 - type: nauc_precision_at_10_max value: 39.26695926876921 - type: nauc_precision_at_10_std value: 39.6193781427142 - type: nauc_precision_at_1_diff1 value: 36.226882155693254 - type: nauc_precision_at_1_max value: 45.64116702800358 - type: nauc_precision_at_1_std value: 18.56622209173858 - type: nauc_precision_at_20_diff1 value: -1.3537073154842467 - type: nauc_precision_at_20_max value: 34.211750289968315 - type: nauc_precision_at_20_std value: 42.88840705138113 - type: nauc_precision_at_3_diff1 value: 16.785704006680017 - type: nauc_precision_at_3_max value: 43.41048951027902 - type: nauc_precision_at_3_std value: 29.20950983049612 - type: nauc_precision_at_5_diff1 value: 10.141401144736987 - type: nauc_precision_at_5_max value: 42.61295259785708 - type: nauc_precision_at_5_std value: 34.36808976552582 - type: nauc_recall_at_1000_diff1 value: 13.070490227131154 - type: nauc_recall_at_1000_max value: 23.16211600933428 - type: nauc_recall_at_1000_std value: 20.18400228183049 - type: nauc_recall_at_100_diff1 value: 21.791873990847225 - type: nauc_recall_at_100_max value: 24.534035934410444 - type: nauc_recall_at_100_std value: 15.02352427792638 - type: nauc_recall_at_10_diff1 value: 31.695078384281018 - type: nauc_recall_at_10_max value: 
17.87955239768676 - type: nauc_recall_at_10_std value: -1.8766363765059346 - type: nauc_recall_at_1_diff1 value: 51.364207086510284 - type: nauc_recall_at_1_max value: 8.650050628738809 - type: nauc_recall_at_1_std value: -17.30631242512481 - type: nauc_recall_at_20_diff1 value: 27.518789645287413 - type: nauc_recall_at_20_max value: 19.248306687993665 - type: nauc_recall_at_20_std value: 1.8973437807943836 - type: nauc_recall_at_3_diff1 value: 40.176896668779975 - type: nauc_recall_at_3_max value: 12.609773638086294 - type: nauc_recall_at_3_std value: -11.078650386618978 - type: nauc_recall_at_5_diff1 value: 34.52328172005921 - type: nauc_recall_at_5_max value: 15.927267077298449 - type: nauc_recall_at_5_std value: -6.882800988990083 - type: ndcg_at_1 value: 47.214 - type: ndcg_at_10 value: 37.749 - type: ndcg_at_100 value: 34.941 - type: ndcg_at_1000 value: 43.763000000000005 - type: ndcg_at_20 value: 35.096 - type: ndcg_at_3 value: 42.778 - type: ndcg_at_5 value: 40.916999999999994 - type: precision_at_1 value: 48.607 - type: precision_at_10 value: 27.771 - type: precision_at_100 value: 8.873000000000001 - type: precision_at_1000 value: 2.205 - type: precision_at_20 value: 20.294 - type: precision_at_3 value: 39.732 - type: precision_at_5 value: 35.294 - type: recall_at_1 value: 6.4159999999999995 - type: recall_at_10 value: 18.912000000000003 - type: recall_at_100 value: 35.716 - type: recall_at_1000 value: 67.38199999999999 - type: recall_at_20 value: 22.902 - type: recall_at_3 value: 11.331 - type: recall_at_5 value: 14.488000000000001 - task: type: Retrieval dataset: name: MTEB NQ (default) type: mteb/nq config: default split: test revision: b774495ed302d8c44a3a7ea25c90dbce03968f31 metrics: - type: main_score value: 64.57300000000001 - type: map_at_1 value: 40.624 - type: map_at_10 value: 57.27100000000001 - type: map_at_100 value: 58.025000000000006 - type: map_at_1000 value: 58.042 - type: map_at_20 value: 57.797 - type: map_at_3 value: 53.198 - type: 
map_at_5 value: 55.894999999999996 - type: mrr_at_1 value: 45.48088064889919 - type: mrr_at_10 value: 59.63950045062427 - type: mrr_at_100 value: 60.16941078517045 - type: mrr_at_1000 value: 60.180798143896816 - type: mrr_at_20 value: 60.028840407404324 - type: mrr_at_3 value: 56.542101197373306 - type: mrr_at_5 value: 58.66550791811501 - type: nauc_map_at_1000_diff1 value: 40.75719420870384 - type: nauc_map_at_1000_max value: 26.619453317408276 - type: nauc_map_at_1000_std value: -1.3661680695508878 - type: nauc_map_at_100_diff1 value: 40.74618842719167 - type: nauc_map_at_100_max value: 26.63090879971882 - type: nauc_map_at_100_std value: -1.3472110757216529 - type: nauc_map_at_10_diff1 value: 40.567377605059974 - type: nauc_map_at_10_max value: 26.643510588231212 - type: nauc_map_at_10_std value: -1.5369955980308176 - type: nauc_map_at_1_diff1 value: 44.266492092577074 - type: nauc_map_at_1_max value: 22.46429746791908 - type: nauc_map_at_1_std value: -4.60330121276408 - type: nauc_map_at_20_diff1 value: 40.723395327361416 - type: nauc_map_at_20_max value: 26.66921091235507 - type: nauc_map_at_20_std value: -1.3548217510569764 - type: nauc_map_at_3_diff1 value: 40.70669851540585 - type: nauc_map_at_3_max value: 24.92583855784867 - type: nauc_map_at_3_std value: -3.3252674375061337 - type: nauc_map_at_5_diff1 value: 40.71452737114635 - type: nauc_map_at_5_max value: 26.334549142409934 - type: nauc_map_at_5_std value: -2.2994574589513563 - type: nauc_mrr_at_1000_diff1 value: 40.90181907233357 - type: nauc_mrr_at_1000_max value: 27.23972709490355 - type: nauc_mrr_at_1000_std value: 0.14293203689890255 - type: nauc_mrr_at_100_diff1 value: 40.894823740399 - type: nauc_mrr_at_100_max value: 27.2543152925248 - type: nauc_mrr_at_100_std value: 0.16145332462727016 - type: nauc_mrr_at_10_diff1 value: 40.70236566768764 - type: nauc_mrr_at_10_max value: 27.376281421880815 - type: nauc_mrr_at_10_std value: 0.2560670735926158 - type: nauc_mrr_at_1_diff1 value: 
44.45758887813465 - type: nauc_mrr_at_1_max value: 24.93321743794642 - type: nauc_mrr_at_1_std value: -1.7773297601048152 - type: nauc_mrr_at_20_diff1 value: 40.880239024353024 - type: nauc_mrr_at_20_max value: 27.32513012463313 - type: nauc_mrr_at_20_std value: 0.22430042378671203 - type: nauc_mrr_at_3_diff1 value: 40.57781683564487 - type: nauc_mrr_at_3_max value: 26.35515394643849 - type: nauc_mrr_at_3_std value: -0.8399275552803087 - type: nauc_mrr_at_5_diff1 value: 40.58618030692413 - type: nauc_mrr_at_5_max value: 27.176058971300332 - type: nauc_mrr_at_5_std value: -0.290492953635725 - type: nauc_ndcg_at_1000_diff1 value: 40.19287643985469 - type: nauc_ndcg_at_1000_max value: 28.0748177881881 - type: nauc_ndcg_at_1000_std value: 0.5479989880947034 - type: nauc_ndcg_at_100_diff1 value: 39.985152217871736 - type: nauc_ndcg_at_100_max value: 28.549199334840196 - type: nauc_ndcg_at_100_std value: 1.1976149852470748 - type: nauc_ndcg_at_10_diff1 value: 39.258653400976094 - type: nauc_ndcg_at_10_max value: 28.78995376227268 - type: nauc_ndcg_at_10_std value: 0.7776396395610561 - type: nauc_ndcg_at_1_diff1 value: 44.53452792113141 - type: nauc_ndcg_at_1_max value: 24.959865554376062 - type: nauc_ndcg_at_1_std value: -1.7064088232497185 - type: nauc_ndcg_at_20_diff1 value: 39.86087719429661 - type: nauc_ndcg_at_20_max value: 28.83245623742156 - type: nauc_ndcg_at_20_std value: 1.214377441117911 - type: nauc_ndcg_at_3_diff1 value: 39.559374584504305 - type: nauc_ndcg_at_3_max value: 25.79384722635462 - type: nauc_ndcg_at_3_std value: -2.6232036598581128 - type: nauc_ndcg_at_5_diff1 value: 39.4735486654252 - type: nauc_ndcg_at_5_max value: 28.016157443317592 - type: nauc_ndcg_at_5_std value: -1.1037353009714006 - type: nauc_precision_at_1000_diff1 value: -11.339155238778668 - type: nauc_precision_at_1000_max value: 6.254052000341758 - type: nauc_precision_at_1000_std value: 15.950222098594327 - type: nauc_precision_at_100_diff1 value: -8.486482545268508 - type: 
nauc_precision_at_100_max value: 11.576264017770066 - type: nauc_precision_at_100_std value: 19.466754760840587 - type: nauc_precision_at_10_diff1 value: 5.8385831139291575 - type: nauc_precision_at_10_max value: 22.06247370356374 - type: nauc_precision_at_10_std value: 15.537018617227929 - type: nauc_precision_at_1_diff1 value: 44.53452792113141 - type: nauc_precision_at_1_max value: 24.959865554376062 - type: nauc_precision_at_1_std value: -1.7064088232497185 - type: nauc_precision_at_20_diff1 value: 0.46004341386248976 - type: nauc_precision_at_20_max value: 18.62298396479716 - type: nauc_precision_at_20_std value: 18.57322798830194 - type: nauc_precision_at_3_diff1 value: 23.61278777236713 - type: nauc_precision_at_3_max value: 24.430940659086023 - type: nauc_precision_at_3_std value: 3.654630451987505 - type: nauc_precision_at_5_diff1 value: 15.044899921606575 - type: nauc_precision_at_5_max value: 24.832870409550832 - type: nauc_precision_at_5_std value: 9.157167085230308 - type: nauc_recall_at_1000_diff1 value: 20.011280923509826 - type: nauc_recall_at_1000_max value: 77.0124542065231 - type: nauc_recall_at_1000_std value: 77.168446634178 - type: nauc_recall_at_100_diff1 value: 26.20020816542189 - type: nauc_recall_at_100_max value: 63.51438156454956 - type: nauc_recall_at_100_std value: 50.6515798452802 - type: nauc_recall_at_10_diff1 value: 30.672533975609245 - type: nauc_recall_at_10_max value: 38.04655658762951 - type: nauc_recall_at_10_std value: 10.401521020182201 - type: nauc_recall_at_1_diff1 value: 44.266492092577074 - type: nauc_recall_at_1_max value: 22.46429746791908 - type: nauc_recall_at_1_std value: -4.60330121276408 - type: nauc_recall_at_20_diff1 value: 32.410730671544556 - type: nauc_recall_at_20_max value: 43.56842328558742 - type: nauc_recall_at_20_std value: 18.786877985653163 - type: nauc_recall_at_3_diff1 value: 34.930964358124406 - type: nauc_recall_at_3_max value: 26.12903272130525 - type: nauc_recall_at_3_std value: 
-2.985516316701988 - type: nauc_recall_at_5_diff1 value: 33.47824880356667 - type: nauc_recall_at_5_max value: 32.449042774855855 - type: nauc_recall_at_5_std value: 0.6573399404508043 - type: ndcg_at_1 value: 45.452 - type: ndcg_at_10 value: 64.57300000000001 - type: ndcg_at_100 value: 67.56400000000001 - type: ndcg_at_1000 value: 67.927 - type: ndcg_at_20 value: 66.247 - type: ndcg_at_3 value: 57.32899999999999 - type: ndcg_at_5 value: 61.693 - type: precision_at_1 value: 45.452 - type: precision_at_10 value: 10.067 - type: precision_at_100 value: 1.176 - type: precision_at_1000 value: 0.121 - type: precision_at_20 value: 5.436 - type: precision_at_3 value: 25.628 - type: precision_at_5 value: 17.965999999999998 - type: recall_at_1 value: 40.624 - type: recall_at_10 value: 84.096 - type: recall_at_100 value: 96.734 - type: recall_at_1000 value: 99.401 - type: recall_at_20 value: 90.276 - type: recall_at_3 value: 65.892 - type: recall_at_5 value: 75.847 - task: type: PairClassification dataset: name: MTEB Ocnli (default) type: C-MTEB/OCNLI config: default split: validation revision: 66e76a618a34d6d565d5538088562851e6daa7ec metrics: - type: cosine_accuracy value: 57.715213860314016 - type: cosine_accuracy_threshold value: 70.3215479850769 - type: cosine_ap value: 58.8326699963807 - type: cosine_f1 value: 67.816091954023 - type: cosine_f1_threshold value: 42.0912504196167 - type: cosine_precision value: 51.38813282525857 - type: cosine_recall value: 99.68321013727561 - type: dot_accuracy value: 57.715213860314016 - type: dot_accuracy_threshold value: 70.3215479850769 - type: dot_ap value: 58.8326699963807 - type: dot_f1 value: 67.816091954023 - type: dot_f1_threshold value: 42.0912504196167 - type: dot_precision value: 51.38813282525857 - type: dot_recall value: 99.68321013727561 - type: euclidean_accuracy value: 57.715213860314016 - type: euclidean_accuracy_threshold value: 77.04342603683472 - type: euclidean_ap value: 58.8326699963807 - type: euclidean_f1 value: 
67.816091954023 - type: euclidean_f1_threshold value: 107.61815309524536 - type: euclidean_precision value: 51.38813282525857 - type: euclidean_recall value: 99.68321013727561 - type: main_score value: 57.877639415268 - type: manhattan_accuracy value: 57.877639415268 - type: manhattan_accuracy_threshold value: 1952.0273208618164 - type: manhattan_ap value: 58.70263102269272 - type: manhattan_f1 value: 67.84172661870504 - type: manhattan_f1_threshold value: 2661.929702758789 - type: manhattan_precision value: 51.44571740316422 - type: manhattan_recall value: 99.57761351636748 - type: max_accuracy value: 57.877639415268 - type: max_ap value: 58.8326699963807 - type: max_f1 value: 67.84172661870504 - type: max_precision value: 51.44571740316422 - type: max_recall value: 99.68321013727561 - type: similarity_accuracy value: 57.715213860314016 - type: similarity_accuracy_threshold value: 70.3215479850769 - type: similarity_ap value: 58.8326699963807 - type: similarity_f1 value: 67.816091954023 - type: similarity_f1_threshold value: 42.0912504196167 - type: similarity_precision value: 51.38813282525857 - type: similarity_recall value: 99.68321013727561 - task: type: Classification dataset: name: MTEB OnlineShopping (default) type: C-MTEB/OnlineShopping-classification config: default split: test revision: e610f2ebd179a8fda30ae534c3878750a96db120 metrics: - type: accuracy value: 92.24 - type: ap value: 91.00427236651306 - type: ap_weighted value: 91.00427236651306 - type: f1 value: 92.23673939008314 - type: f1_weighted value: 92.24091264330853 - type: main_score value: 92.24 - task: type: STS dataset: name: MTEB PAWSX (default) type: C-MTEB/PAWSX config: default split: test revision: 9c6a90e430ac22b5779fb019a23e820b11a8b5e1 metrics: - type: cosine_pearson value: 13.62339858857132 - type: cosine_spearman value: 14.886062653753804 - type: euclidean_pearson value: 16.711204002219453 - type: euclidean_spearman value: 14.8864092068256 - type: main_score value: 14.886062653753804 
- type: manhattan_pearson value: 16.658236019215405 - type: manhattan_spearman value: 14.868816375702131 - type: pearson value: 13.62339858857132 - type: spearman value: 14.886062653753804 - task: type: STS dataset: name: MTEB QBQTC (default) type: C-MTEB/QBQTC config: default split: test revision: 790b0510dc52b1553e8c49f3d2afb48c0e5c48b7 metrics: - type: cosine_pearson value: 37.88649935702812 - type: cosine_spearman value: 38.59331260288544 - type: euclidean_pearson value: 37.492219553708004 - type: euclidean_spearman value: 38.59333707659388 - type: main_score value: 38.59331260288544 - type: manhattan_pearson value: 37.59659518440478 - type: manhattan_spearman value: 38.70529977801903 - type: pearson value: 37.88649935702812 - type: spearman value: 38.59331260288544 - task: type: Retrieval dataset: name: MTEB QuoraRetrieval (default) type: mteb/quora config: default split: test revision: e4e08e0b7dbe3c8700f0daef558ff32256715259 metrics: - type: main_score value: 88.828 - type: map_at_1 value: 70.86200000000001 - type: map_at_10 value: 85.146 - type: map_at_100 value: 85.76599999999999 - type: map_at_1000 value: 85.78 - type: map_at_20 value: 85.56200000000001 - type: map_at_3 value: 82.15299999999999 - type: map_at_5 value: 84.06400000000001 - type: mrr_at_1 value: 81.55 - type: mrr_at_10 value: 87.83961904761877 - type: mrr_at_100 value: 87.9277183426686 - type: mrr_at_1000 value: 87.92847303870963 - type: mrr_at_20 value: 87.90789174563797 - type: mrr_at_3 value: 86.88999999999969 - type: mrr_at_5 value: 87.54949999999965 - type: nauc_map_at_1000_diff1 value: 75.67356164383055 - type: nauc_map_at_1000_max value: 18.854168889852875 - type: nauc_map_at_1000_std value: -36.828725949066445 - type: nauc_map_at_100_diff1 value: 75.6814052182551 - type: nauc_map_at_100_max value: 18.80911976095899 - type: nauc_map_at_100_std value: -36.859464706791876 - type: nauc_map_at_10_diff1 value: 75.97345767589778 - type: nauc_map_at_10_max value: 18.011922065772556 - type: 
nauc_map_at_10_std value: -38.58646259978914 - type: nauc_map_at_1_diff1 value: 79.64088789104214 - type: nauc_map_at_1_max value: 13.492832118158285 - type: nauc_map_at_1_std value: -35.069831511434984 - type: nauc_map_at_20_diff1 value: 75.80068897926556 - type: nauc_map_at_20_max value: 18.45100586838551 - type: nauc_map_at_20_std value: -37.58585203895838 - type: nauc_map_at_3_diff1 value: 76.57306543713516 - type: nauc_map_at_3_max value: 16.00971160952194 - type: nauc_map_at_3_std value: -40.49530239296166 - type: nauc_map_at_5_diff1 value: 76.30622902785689 - type: nauc_map_at_5_max value: 16.939904072532965 - type: nauc_map_at_5_std value: -40.231106124451166 - type: nauc_mrr_at_1000_diff1 value: 75.55280562909647 - type: nauc_mrr_at_1000_max value: 21.290797318017223 - type: nauc_mrr_at_1000_std value: -33.16189158862257 - type: nauc_mrr_at_100_diff1 value: 75.55304375022514 - type: nauc_mrr_at_100_max value: 21.29147005221132 - type: nauc_mrr_at_100_std value: -33.16308467728784 - type: nauc_mrr_at_10_diff1 value: 75.52812757986803 - type: nauc_mrr_at_10_max value: 21.324213275115707 - type: nauc_mrr_at_10_std value: -33.19426301054038 - type: nauc_mrr_at_1_diff1 value: 76.3028645714178 - type: nauc_mrr_at_1_max value: 21.43266251995086 - type: nauc_mrr_at_1_std value: -31.52048640923299 - type: nauc_mrr_at_20_diff1 value: 75.55919114256191 - type: nauc_mrr_at_20_max value: 21.295549776313212 - type: nauc_mrr_at_20_std value: -33.17019625161653 - type: nauc_mrr_at_3_diff1 value: 75.22527177974185 - type: nauc_mrr_at_3_max value: 20.97300902425478 - type: nauc_mrr_at_3_std value: -33.77499787473604 - type: nauc_mrr_at_5_diff1 value: 75.48235066264493 - type: nauc_mrr_at_5_max value: 21.267657410423936 - type: nauc_mrr_at_5_std value: -33.541730991271024 - type: nauc_ndcg_at_1000_diff1 value: 75.2765114539078 - type: nauc_ndcg_at_1000_max value: 20.20040796867573 - type: nauc_ndcg_at_1000_std value: -34.83796103543814 - type: nauc_ndcg_at_100_diff1 value: 
75.29839170813271 - type: nauc_ndcg_at_100_max value: 20.049897014114226 - type: nauc_ndcg_at_100_std value: -34.88744910962141 - type: nauc_ndcg_at_10_diff1 value: 75.50031549242551 - type: nauc_ndcg_at_10_max value: 19.530680246918088 - type: nauc_ndcg_at_10_std value: -37.2132581016259 - type: nauc_ndcg_at_1_diff1 value: 76.2640011251013 - type: nauc_ndcg_at_1_max value: 21.5044133820231 - type: nauc_ndcg_at_1_std value: -31.441810766378154 - type: nauc_ndcg_at_20_diff1 value: 75.48825777295262 - type: nauc_ndcg_at_20_max value: 19.54316358183612 - type: nauc_ndcg_at_20_std value: -36.40213923640784 - type: nauc_ndcg_at_3_diff1 value: 74.95053482032748 - type: nauc_ndcg_at_3_max value: 18.3619895361753 - type: nauc_ndcg_at_3_std value: -37.89625025926027 - type: nauc_ndcg_at_5_diff1 value: 75.4723557202212 - type: nauc_ndcg_at_5_max value: 18.774895447817002 - type: nauc_ndcg_at_5_std value: -38.63359887929082 - type: nauc_precision_at_1000_diff1 value: -44.66021797376074 - type: nauc_precision_at_1000_max value: 4.986714020703288 - type: nauc_precision_at_1000_std value: 33.87174403678706 - type: nauc_precision_at_100_diff1 value: -44.37083805843052 - type: nauc_precision_at_100_max value: 4.205704372136468 - type: nauc_precision_at_100_std value: 33.751385069291466 - type: nauc_precision_at_10_diff1 value: -39.594958618164924 - type: nauc_precision_at_10_max value: 3.3213764067887017 - type: nauc_precision_at_10_std value: 24.514139443922584 - type: nauc_precision_at_1_diff1 value: 76.2640011251013 - type: nauc_precision_at_1_max value: 21.5044133820231 - type: nauc_precision_at_1_std value: -31.441810766378154 - type: nauc_precision_at_20_diff1 value: -42.515141197525665 - type: nauc_precision_at_20_max value: 3.270570048173852 - type: nauc_precision_at_20_std value: 29.464821564987304 - type: nauc_precision_at_3_diff1 value: -20.183805197256348 - type: nauc_precision_at_3_max value: 5.587068650916888 - type: nauc_precision_at_3_std value: 6.698987832594483 - 
type: nauc_precision_at_5_diff1 value: -32.29844211123163 - type: nauc_precision_at_5_max value: 3.678521295215363 - type: nauc_precision_at_5_std value: 15.83463108178084 - type: nauc_recall_at_1000_diff1 value: 57.09220215387219 - type: nauc_recall_at_1000_max value: 55.92562330520271 - type: nauc_recall_at_1000_std value: 55.55766930325453 - type: nauc_recall_at_100_diff1 value: 70.73870980404703 - type: nauc_recall_at_100_max value: 19.254114660154237 - type: nauc_recall_at_100_std value: -23.244583670383882 - type: nauc_recall_at_10_diff1 value: 72.37078223746651 - type: nauc_recall_at_10_max value: 14.81844588849609 - type: nauc_recall_at_10_std value: -49.52099798289318 - type: nauc_recall_at_1_diff1 value: 79.64088789104214 - type: nauc_recall_at_1_max value: 13.492832118158285 - type: nauc_recall_at_1_std value: -35.069831511434984 - type: nauc_recall_at_20_diff1 value: 73.92034634556907 - type: nauc_recall_at_20_max value: 12.42942772030806 - type: nauc_recall_at_20_std value: -49.565011337521874 - type: nauc_recall_at_3_diff1 value: 72.89507863914096 - type: nauc_recall_at_3_max value: 12.262751224508929 - type: nauc_recall_at_3_std value: -46.57376666697539 - type: nauc_recall_at_5_diff1 value: 72.22900202817202 - type: nauc_recall_at_5_max value: 12.163990495227434 - type: nauc_recall_at_5_std value: -51.071117062577656 - type: ndcg_at_1 value: 81.57 - type: ndcg_at_10 value: 88.828 - type: ndcg_at_100 value: 89.936 - type: ndcg_at_1000 value: 90.01599999999999 - type: ndcg_at_20 value: 89.459 - type: ndcg_at_3 value: 86.014 - type: ndcg_at_5 value: 87.619 - type: precision_at_1 value: 81.57 - type: precision_at_10 value: 13.535 - type: precision_at_100 value: 1.537 - type: precision_at_1000 value: 0.157 - type: precision_at_20 value: 7.1739999999999995 - type: precision_at_3 value: 37.763000000000005 - type: precision_at_5 value: 24.878 - type: recall_at_1 value: 70.86200000000001 - type: recall_at_10 value: 95.977 - type: recall_at_100 value: 99.644 
- type: recall_at_1000 value: 99.984 - type: recall_at_20 value: 97.98 - type: recall_at_3 value: 87.878 - type: recall_at_5 value: 92.419 - task: type: Clustering dataset: name: MTEB RedditClustering (default) type: mteb/reddit-clustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: main_score value: 53.22151362416875 - type: v_measure value: 53.22151362416875 - type: v_measure_std value: 4.187568171093669 - task: type: Clustering dataset: name: MTEB RedditClusteringP2P (default) type: mteb/reddit-clustering-p2p config: default split: test revision: 385e3cb46b4cfa89021f56c4380204149d0efe33 metrics: - type: main_score value: 58.38112463653693 - type: v_measure value: 58.38112463653693 - type: v_measure_std value: 12.221880566455676 - task: type: Retrieval dataset: name: MTEB SCIDOCS (default) type: mteb/scidocs config: default split: test revision: f8c2fcf00f625baaa80f62ec5bd9e1fff3b8ae88 metrics: - type: main_score value: 21.714 - type: map_at_1 value: 5.208 - type: map_at_10 value: 13.075999999999999 - type: map_at_100 value: 15.334 - type: map_at_1000 value: 15.671 - type: map_at_20 value: 14.276 - type: map_at_3 value: 9.289 - type: map_at_5 value: 11.068 - type: mrr_at_1 value: 25.7 - type: mrr_at_10 value: 36.53083333333331 - type: mrr_at_100 value: 37.55081585500827 - type: mrr_at_1000 value: 37.59430456909269 - type: mrr_at_20 value: 37.13279102089857 - type: mrr_at_3 value: 33.08333333333333 - type: mrr_at_5 value: 35.09333333333329 - type: nauc_map_at_1000_diff1 value: 12.787713123133509 - type: nauc_map_at_1000_max value: 25.009736848507657 - type: nauc_map_at_1000_std value: 15.236663910283454 - type: nauc_map_at_100_diff1 value: 12.790709042725402 - type: nauc_map_at_100_max value: 24.925098118006847 - type: nauc_map_at_100_std value: 15.015816602174784 - type: nauc_map_at_10_diff1 value: 13.224119223857949 - type: nauc_map_at_10_max value: 23.147122480526107 - type: nauc_map_at_10_std value: 
11.68007541947038 - type: nauc_map_at_1_diff1 value: 24.060065186932423 - type: nauc_map_at_1_max value: 15.813056289861308 - type: nauc_map_at_1_std value: 4.409040570335129 - type: nauc_map_at_20_diff1 value: 12.709789339255236 - type: nauc_map_at_20_max value: 24.131374778364897 - type: nauc_map_at_20_std value: 13.263463057764785 - type: nauc_map_at_3_diff1 value: 14.638035668154492 - type: nauc_map_at_3_max value: 20.171660202923068 - type: nauc_map_at_3_std value: 5.870864246185647 - type: nauc_map_at_5_diff1 value: 13.68973589831676 - type: nauc_map_at_5_max value: 20.87352332566476 - type: nauc_map_at_5_std value: 8.196922206894563 - type: nauc_mrr_at_1000_diff1 value: 20.27923928575517 - type: nauc_mrr_at_1000_max value: 20.43476539310186 - type: nauc_mrr_at_1000_std value: 9.086257898498179 - type: nauc_mrr_at_100_diff1 value: 20.249544877396524 - type: nauc_mrr_at_100_max value: 20.4491314629493 - type: nauc_mrr_at_100_std value: 9.109279519370439 - type: nauc_mrr_at_10_diff1 value: 20.41175715492326 - type: nauc_mrr_at_10_max value: 20.355827731272182 - type: nauc_mrr_at_10_std value: 9.050079285224115 - type: nauc_mrr_at_1_diff1 value: 23.915902231767276 - type: nauc_mrr_at_1_max value: 15.52850265693499 - type: nauc_mrr_at_1_std value: 4.29701292794671 - type: nauc_mrr_at_20_diff1 value: 20.32272040046418 - type: nauc_mrr_at_20_max value: 20.531506937718476 - type: nauc_mrr_at_20_std value: 9.12290760511003 - type: nauc_mrr_at_3_diff1 value: 19.595989955386596 - type: nauc_mrr_at_3_max value: 20.380814283632976 - type: nauc_mrr_at_3_std value: 7.6948438186508845 - type: nauc_mrr_at_5_diff1 value: 20.549316997450543 - type: nauc_mrr_at_5_max value: 20.10743609189009 - type: nauc_mrr_at_5_std value: 8.321337612314704 - type: nauc_ndcg_at_1000_diff1 value: 13.562157258877319 - type: nauc_ndcg_at_1000_max value: 28.044569762293936 - type: nauc_ndcg_at_1000_std value: 21.629002656029655 - type: nauc_ndcg_at_100_diff1 value: 13.565552060975996 - type: 
nauc_ndcg_at_100_max value: 28.243040397340337 - type: nauc_ndcg_at_100_std value: 21.195028071943252 - type: nauc_ndcg_at_10_diff1 value: 14.626206452933701 - type: nauc_ndcg_at_10_max value: 24.62354235467142 - type: nauc_ndcg_at_10_std value: 13.630591725420302 - type: nauc_ndcg_at_1_diff1 value: 23.915902231767276 - type: nauc_ndcg_at_1_max value: 15.52850265693499 - type: nauc_ndcg_at_1_std value: 4.29701292794671 - type: nauc_ndcg_at_20_diff1 value: 13.709922324110547 - type: nauc_ndcg_at_20_max value: 26.279960777638273 - type: nauc_ndcg_at_20_std value: 16.28928883290933 - type: nauc_ndcg_at_3_diff1 value: 15.040880440725592 - type: nauc_ndcg_at_3_max value: 21.22196654075134 - type: nauc_ndcg_at_3_std value: 7.300499957239256 - type: nauc_ndcg_at_5_diff1 value: 14.9765729251872 - type: nauc_ndcg_at_5_max value: 22.033371543291263 - type: nauc_ndcg_at_5_std value: 9.965196118819666 - type: nauc_precision_at_1000_diff1 value: 2.8533310358503177 - type: nauc_precision_at_1000_max value: 25.73187660681489 - type: nauc_precision_at_1000_std value: 34.02524614249728 - type: nauc_precision_at_100_diff1 value: 6.776893109391378 - type: nauc_precision_at_100_max value: 29.618164171587452 - type: nauc_precision_at_100_std value: 32.075686109744275 - type: nauc_precision_at_10_diff1 value: 10.271876054657762 - type: nauc_precision_at_10_max value: 25.868155025861185 - type: nauc_precision_at_10_std value: 17.773239751669788 - type: nauc_precision_at_1_diff1 value: 23.915902231767276 - type: nauc_precision_at_1_max value: 15.52850265693499 - type: nauc_precision_at_1_std value: 4.29701292794671 - type: nauc_precision_at_20_diff1 value: 7.936691341508646 - type: nauc_precision_at_20_max value: 27.402771907150463 - type: nauc_precision_at_20_std value: 21.84488210613182 - type: nauc_precision_at_3_diff1 value: 11.385560757276366 - type: nauc_precision_at_3_max value: 23.303172357044453 - type: nauc_precision_at_3_std value: 8.52130696871279 - type: 
nauc_precision_at_5_diff1 value: 11.148869134691138 - type: nauc_precision_at_5_max value: 22.810159658927525 - type: nauc_precision_at_5_std value: 12.185276871335153 - type: nauc_recall_at_1000_diff1 value: 2.945669773991137 - type: nauc_recall_at_1000_max value: 26.026287068033522 - type: nauc_recall_at_1000_std value: 35.19319242132944 - type: nauc_recall_at_100_diff1 value: 6.656374129402131 - type: nauc_recall_at_100_max value: 29.382006365887197 - type: nauc_recall_at_100_std value: 32.0759161910764 - type: nauc_recall_at_10_diff1 value: 10.2223319715612 - type: nauc_recall_at_10_max value: 25.809267124090663 - type: nauc_recall_at_10_std value: 17.763663395998343 - type: nauc_recall_at_1_diff1 value: 24.060065186932423 - type: nauc_recall_at_1_max value: 15.813056289861308 - type: nauc_recall_at_1_std value: 4.409040570335129 - type: nauc_recall_at_20_diff1 value: 7.869155882388014 - type: nauc_recall_at_20_max value: 27.397709947819465 - type: nauc_recall_at_20_std value: 21.82500756071004 - type: nauc_recall_at_3_diff1 value: 11.524772275159814 - type: nauc_recall_at_3_max value: 23.47243651501392 - type: nauc_recall_at_3_std value: 8.536277281164557 - type: nauc_recall_at_5_diff1 value: 11.115700398648645 - type: nauc_recall_at_5_max value: 22.942164352144758 - type: nauc_recall_at_5_std value: 12.219902826983432 - type: ndcg_at_1 value: 25.7 - type: ndcg_at_10 value: 21.714 - type: ndcg_at_100 value: 30.103 - type: ndcg_at_1000 value: 35.658 - type: ndcg_at_20 value: 24.808 - type: ndcg_at_3 value: 20.575 - type: ndcg_at_5 value: 17.887 - type: precision_at_1 value: 25.7 - type: precision_at_10 value: 11.32 - type: precision_at_100 value: 2.338 - type: precision_at_1000 value: 0.366 - type: precision_at_20 value: 7.449999999999999 - type: precision_at_3 value: 19.267 - type: precision_at_5 value: 15.72 - type: recall_at_1 value: 5.208 - type: recall_at_10 value: 22.936999999999998 - type: recall_at_100 value: 47.503 - type: recall_at_1000 value: 74.413 
- type: recall_at_20 value: 30.205 - type: recall_at_3 value: 11.693000000000001 - type: recall_at_5 value: 15.898000000000001 - task: type: STS dataset: name: MTEB SICK-R (default) type: mteb/sickr-sts config: default split: test revision: 20a6d6f312dd54037fe07a32d58e5e168867909d metrics: - type: cosine_pearson value: 80.05738628240671 - type: cosine_spearman value: 74.67900925815495 - type: euclidean_pearson value: 78.96070377803025 - type: euclidean_spearman value: 74.67900210067978 - type: main_score value: 74.67900925815495 - type: manhattan_pearson value: 78.48511276416917 - type: manhattan_spearman value: 74.39838905100096 - type: pearson value: 80.05738628240671 - type: spearman value: 74.67900925815495 - task: type: STS dataset: name: MTEB STS12 (default) type: mteb/sts12-sts config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cosine_pearson value: 59.8887701896054 - type: cosine_spearman value: 61.76924747645064 - type: euclidean_pearson value: 59.33178599145238 - type: euclidean_spearman value: 61.76932878335521 - type: main_score value: 61.76924747645064 - type: manhattan_pearson value: 59.07929980423876 - type: manhattan_spearman value: 61.729703658805704 - type: pearson value: 59.8887701896054 - type: spearman value: 61.76924747645064 - task: type: STS dataset: name: MTEB STS13 (default) type: mteb/sts13-sts config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cosine_pearson value: 73.46830128106028 - type: cosine_spearman value: 75.65482017135808 - type: euclidean_pearson value: 75.32146398293357 - type: euclidean_spearman value: 75.65482017135808 - type: main_score value: 75.65482017135808 - type: manhattan_pearson value: 75.11839624254772 - type: manhattan_spearman value: 75.52002809163668 - type: pearson value: 73.46830128106028 - type: spearman value: 75.65482017135808 - task: type: STS dataset: name: MTEB STS14 (default) type: mteb/sts14-sts config: default 
split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cosine_pearson value: 71.49707687668383 - type: cosine_spearman value: 68.16858223090989 - type: euclidean_pearson value: 71.22162204420484 - type: euclidean_spearman value: 68.16860152985392 - type: main_score value: 68.16858223090989 - type: manhattan_pearson value: 70.91495914707767 - type: manhattan_spearman value: 67.98861350196948 - type: pearson value: 71.49707687668383 - type: spearman value: 68.16858223090989 - task: type: STS dataset: name: MTEB STS15 (default) type: mteb/sts15-sts config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cosine_pearson value: 76.4178409313179 - type: cosine_spearman value: 78.88504631843803 - type: euclidean_pearson value: 78.71618727706142 - type: euclidean_spearman value: 78.88501053846501 - type: main_score value: 78.88504631843803 - type: manhattan_pearson value: 78.89331900480339 - type: manhattan_spearman value: 79.04826734191282 - type: pearson value: 76.4178409313179 - type: spearman value: 78.88504631843803 - task: type: STS dataset: name: MTEB STS16 (default) type: mteb/sts16-sts config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cosine_pearson value: 77.55495044464278 - type: cosine_spearman value: 80.3363642808641 - type: euclidean_pearson value: 79.96424489786347 - type: euclidean_spearman value: 80.3363642808641 - type: main_score value: 80.3363642808641 - type: manhattan_pearson value: 80.05244658476923 - type: manhattan_spearman value: 80.42606943747235 - type: pearson value: 77.55495044464278 - type: spearman value: 80.3363642808641 - task: type: STS dataset: name: MTEB STS17 (en-en) type: mteb/sts17-crosslingual-sts config: en-en split: test revision: faeb762787bd10488a50c8b5be4a3b82e411949c metrics: - type: cosine_pearson value: 79.38935754316142 - type: cosine_spearman value: 85.46849738990943 - type: euclidean_pearson value: 
83.64718180060812 - type: euclidean_spearman value: 85.46849738990943 - type: main_score value: 85.46849738990943 - type: manhattan_pearson value: 83.67702948761875 - type: manhattan_spearman value: 85.34710495908027 - type: pearson value: 79.38935754316142 - type: spearman value: 85.46849738990943 - task: type: STS dataset: name: MTEB STS22 (zh-en) type: mteb/sts22-crosslingual-sts config: zh-en split: test revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3 metrics: - type: cosine_pearson value: 73.89485634957391 - type: cosine_spearman value: 73.90825698961848 - type: euclidean_pearson value: 75.25124910546262 - type: euclidean_spearman value: 73.90825698961848 - type: main_score value: 73.90825698961848 - type: manhattan_pearson value: 75.11860084263171 - type: manhattan_spearman value: 73.69593141677598 - type: pearson value: 73.89485634957391 - type: spearman value: 73.90825698961848 - task: type: STS dataset: name: MTEB STS22 (en) type: mteb/sts22-crosslingual-sts config: en split: test revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3 metrics: - type: cosine_pearson value: 67.81059898950983 - type: cosine_spearman value: 68.30115315444206 - type: euclidean_pearson value: 69.27103790429173 - type: euclidean_spearman value: 68.30115315444206 - type: main_score value: 68.30115315444206 - type: manhattan_pearson value: 69.46849620900602 - type: manhattan_spearman value: 68.45651992521948 - type: pearson value: 67.81059898950983 - type: spearman value: 68.30115315444206 - task: type: STS dataset: name: MTEB STS22 (zh) type: mteb/sts22-crosslingual-sts config: zh split: test revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3 metrics: - type: cosine_pearson value: 74.44033998959976 - type: cosine_spearman value: 73.58772060484971 - type: euclidean_pearson value: 73.06488074477468 - type: euclidean_spearman value: 73.58772060484971 - type: main_score value: 73.58772060484971 - type: manhattan_pearson value: 73.00608049548906 - type: manhattan_spearman value: 
73.55105762622729 - type: pearson value: 74.44033998959976 - type: spearman value: 73.58772060484971 - task: type: STS dataset: name: MTEB STSB (default) type: C-MTEB/STSB config: default split: test revision: 0cde68302b3541bb8b3c340dc0644b0b745b3dc0 metrics: - type: cosine_pearson value: 68.09021885283748 - type: cosine_spearman value: 69.45189378034146 - type: euclidean_pearson value: 69.51961611366887 - type: euclidean_spearman value: 69.45189378034146 - type: main_score value: 69.45189378034146 - type: manhattan_pearson value: 69.30192429794056 - type: manhattan_spearman value: 69.22518486689475 - type: pearson value: 68.09021885283748 - type: spearman value: 69.45189378034146 - task: type: STS dataset: name: MTEB STSB (default) type: C-MTEB/STSB config: default split: validation revision: 0cde68302b3541bb8b3c340dc0644b0b745b3dc0 metrics: - type: cosine_pearson value: 69.85149506856348 - type: cosine_spearman value: 72.05319943168429 - type: euclidean_pearson value: 72.41468551288946 - type: euclidean_spearman value: 72.05319943168429 - type: main_score value: 72.05319943168429 - type: manhattan_pearson value: 72.08871687183135 - type: manhattan_spearman value: 71.63960073768047 - type: pearson value: 69.85149506856348 - type: spearman value: 72.05319943168429 - task: type: STS dataset: name: MTEB STSBenchmark (default) type: mteb/stsbenchmark-sts config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cosine_pearson value: 71.64554885631986 - type: cosine_spearman value: 74.23136916844818 - type: euclidean_pearson value: 74.06782419242319 - type: euclidean_spearman value: 74.23136916844818 - type: main_score value: 74.23136916844818 - type: manhattan_pearson value: 74.0008422515175 - type: manhattan_spearman value: 74.10730250032161 - type: pearson value: 71.64554885631986 - type: spearman value: 74.23136916844818 - task: type: Reranking dataset: name: MTEB SciDocsRR (default) type: mteb/scidocs-reranking config: default 
split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: main_score value: 84.06925933025143 - type: map value: 84.06925933025143 - type: mrr value: 95.48704382037715 - type: nAUC_map_diff1 value: -1.2531960147273353 - type: nAUC_map_max value: 53.63794852890932 - type: nAUC_map_std value: 66.72236818008908 - type: nAUC_mrr_diff1 value: 44.74963455842425 - type: nAUC_mrr_max value: 85.95161801239465 - type: nAUC_mrr_std value: 81.36111675398224 - task: type: Retrieval dataset: name: MTEB SciFact (default) type: mteb/scifact config: default split: test revision: 0228b52cf27578f30900b9e5271d331663a030d7 metrics: - type: main_score value: 76.10900000000001 - type: map_at_1 value: 60.760999999999996 - type: map_at_10 value: 71.654 - type: map_at_100 value: 72.137 - type: map_at_1000 value: 72.149 - type: map_at_20 value: 72.021 - type: map_at_3 value: 68.772 - type: map_at_5 value: 70.36200000000001 - type: mrr_at_1 value: 63.66666666666667 - type: mrr_at_10 value: 72.70595238095237 - type: mrr_at_100 value: 73.04863520816613 - type: mrr_at_1000 value: 73.06075018033764 - type: mrr_at_20 value: 72.93638474099001 - type: mrr_at_3 value: 70.66666666666667 - type: mrr_at_5 value: 71.78333333333333 - type: nauc_map_at_1000_diff1 value: 71.35087668762627 - type: nauc_map_at_1000_max value: 54.011748108009186 - type: nauc_map_at_1000_std value: 7.678845781955053 - type: nauc_map_at_100_diff1 value: 71.34472898717338 - type: nauc_map_at_100_max value: 54.0145296319552 - type: nauc_map_at_100_std value: 7.676258294947637 - type: nauc_map_at_10_diff1 value: 71.37574967668972 - type: nauc_map_at_10_max value: 53.92702576795545 - type: nauc_map_at_10_std value: 7.758848033974705 - type: nauc_map_at_1_diff1 value: 73.89906137509777 - type: nauc_map_at_1_max value: 44.9910089561678 - type: nauc_map_at_1_std value: -3.560024114726528 - type: nauc_map_at_20_diff1 value: 71.3016202130441 - type: nauc_map_at_20_max value: 54.0858492905278 - type: 
nauc_map_at_20_std value: 7.758638712425257 - type: nauc_map_at_3_diff1 value: 71.0655101710049 - type: nauc_map_at_3_max value: 51.54952135715274 - type: nauc_map_at_3_std value: 3.191945160660174 - type: nauc_map_at_5_diff1 value: 71.71790408466262 - type: nauc_map_at_5_max value: 53.98746745769737 - type: nauc_map_at_5_std value: 6.607248876321941 - type: nauc_mrr_at_1000_diff1 value: 71.35164281601696 - type: nauc_mrr_at_1000_max value: 55.476178340437 - type: nauc_mrr_at_1000_std value: 10.970185730462788 - type: nauc_mrr_at_100_diff1 value: 71.34535141418533 - type: nauc_mrr_at_100_max value: 55.47824801045241 - type: nauc_mrr_at_100_std value: 10.965792309322826 - type: nauc_mrr_at_10_diff1 value: 71.25095755664336 - type: nauc_mrr_at_10_max value: 55.697524040234235 - type: nauc_mrr_at_10_std value: 11.470516804375386 - type: nauc_mrr_at_1_diff1 value: 73.2821036264498 - type: nauc_mrr_at_1_max value: 50.30171935076129 - type: nauc_mrr_at_1_std value: 5.150119259795942 - type: nauc_mrr_at_20_diff1 value: 71.29800338727547 - type: nauc_mrr_at_20_max value: 55.54151075144868 - type: nauc_mrr_at_20_std value: 11.02928131756253 - type: nauc_mrr_at_3_diff1 value: 71.00310802253107 - type: nauc_mrr_at_3_max value: 55.197709395727045 - type: nauc_mrr_at_3_std value: 10.04566210045661 - type: nauc_mrr_at_5_diff1 value: 71.20451267727627 - type: nauc_mrr_at_5_max value: 56.42653941908357 - type: nauc_mrr_at_5_std value: 12.134736985103611 - type: nauc_ndcg_at_1000_diff1 value: 70.82229943036683 - type: nauc_ndcg_at_1000_max value: 55.754118035528776 - type: nauc_ndcg_at_1000_std value: 10.535060943270949 - type: nauc_ndcg_at_100_diff1 value: 70.56950271178324 - type: nauc_ndcg_at_100_max value: 55.8594687697972 - type: nauc_ndcg_at_100_std value: 10.666914593212478 - type: nauc_ndcg_at_10_diff1 value: 70.38141205486814 - type: nauc_ndcg_at_10_max value: 56.37560065613112 - type: nauc_ndcg_at_10_std value: 12.026555946404496 - type: nauc_ndcg_at_1_diff1 value: 
73.2821036264498 - type: nauc_ndcg_at_1_max value: 50.30171935076129 - type: nauc_ndcg_at_1_std value: 5.150119259795942 - type: nauc_ndcg_at_20_diff1 value: 70.3390226823462 - type: nauc_ndcg_at_20_max value: 56.500553855618605 - type: nauc_ndcg_at_20_std value: 11.29004765829262 - type: nauc_ndcg_at_3_diff1 value: 69.49806863319228 - type: nauc_ndcg_at_3_max value: 54.71563247265625 - type: nauc_ndcg_at_3_std value: 7.436156809946794 - type: nauc_ndcg_at_5_diff1 value: 70.92542004817086 - type: nauc_ndcg_at_5_max value: 57.28530843114872 - type: nauc_ndcg_at_5_std value: 11.33887216009956 - type: nauc_precision_at_1000_diff1 value: -35.28994929585216 - type: nauc_precision_at_1000_max value: 14.893397453096902 - type: nauc_precision_at_1000_std value: 51.396256011227734 - type: nauc_precision_at_100_diff1 value: -23.15302836790124 - type: nauc_precision_at_100_max value: 20.885275194425965 - type: nauc_precision_at_100_std value: 47.38237004790941 - type: nauc_precision_at_10_diff1 value: 0.6508705327922056 - type: nauc_precision_at_10_max value: 35.86490378321761 - type: nauc_precision_at_10_std value: 47.767508836235066 - type: nauc_precision_at_1_diff1 value: 73.2821036264498 - type: nauc_precision_at_1_max value: 50.30171935076129 - type: nauc_precision_at_1_std value: 5.150119259795942 - type: nauc_precision_at_20_diff1 value: -10.7729801324503 - type: nauc_precision_at_20_max value: 29.079735099337757 - type: nauc_precision_at_20_std value: 45.845298013245014 - type: nauc_precision_at_3_diff1 value: 34.75492967022111 - type: nauc_precision_at_3_max value: 50.28675231734285 - type: nauc_precision_at_3_std value: 28.248258905786056 - type: nauc_precision_at_5_diff1 value: 20.985818128495684 - type: nauc_precision_at_5_max value: 49.058382386827766 - type: nauc_precision_at_5_std value: 42.45972725558781 - type: nauc_recall_at_1000_diff1 value: .nan - type: nauc_recall_at_1000_max value: .nan - type: nauc_recall_at_1000_std value: .nan - type: 
nauc_recall_at_100_diff1 value: 59.20634920634924 - type: nauc_recall_at_100_max value: 61.545284780578854 - type: nauc_recall_at_100_std value: 20.144724556489304 - type: nauc_recall_at_10_diff1 value: 64.66373744374164 - type: nauc_recall_at_10_max value: 63.97713233823764 - type: nauc_recall_at_10_std value: 26.664192167250576 - type: nauc_recall_at_1_diff1 value: 73.89906137509777 - type: nauc_recall_at_1_max value: 44.9910089561678 - type: nauc_recall_at_1_std value: -3.560024114726528 - type: nauc_recall_at_20_diff1 value: 62.63616557734212 - type: nauc_recall_at_20_max value: 67.73835460109973 - type: nauc_recall_at_20_std value: 23.744164332399645 - type: nauc_recall_at_3_diff1 value: 66.22059143579988 - type: nauc_recall_at_3_max value: 56.70083839895786 - type: nauc_recall_at_3_std value: 8.353413350691936 - type: nauc_recall_at_5_diff1 value: 67.74494960835959 - type: nauc_recall_at_5_max value: 65.60091243576522 - type: nauc_recall_at_5_std value: 21.75399137900112 - type: ndcg_at_1 value: 63.666999999999994 - type: ndcg_at_10 value: 76.10900000000001 - type: ndcg_at_100 value: 77.989 - type: ndcg_at_1000 value: 78.391 - type: ndcg_at_20 value: 77.199 - type: ndcg_at_3 value: 71.53699999999999 - type: ndcg_at_5 value: 73.662 - type: precision_at_1 value: 63.666999999999994 - type: precision_at_10 value: 10.033 - type: precision_at_100 value: 1.097 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_20 value: 5.267 - type: precision_at_3 value: 27.889000000000003 - type: precision_at_5 value: 18.267 - type: recall_at_1 value: 60.760999999999996 - type: recall_at_10 value: 88.43299999999999 - type: recall_at_100 value: 96.667 - type: recall_at_1000 value: 100.0 - type: recall_at_20 value: 92.5 - type: recall_at_3 value: 76.461 - type: recall_at_5 value: 81.678 - task: type: PairClassification dataset: name: MTEB SprintDuplicateQuestions (default) type: mteb/sprintduplicatequestions-pairclassification config: default split: test 
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cosine_accuracy value: 99.76831683168317 - type: cosine_accuracy_threshold value: 75.98594427108765 - type: cosine_ap value: 94.10650199330435 - type: cosine_f1 value: 87.77429467084639 - type: cosine_f1_threshold value: 75.98594427108765 - type: cosine_precision value: 91.90371991247265 - type: cosine_recall value: 84.0 - type: dot_accuracy value: 99.76831683168317 - type: dot_accuracy_threshold value: 75.98594427108765 - type: dot_ap value: 94.10650199330435 - type: dot_f1 value: 87.77429467084639 - type: dot_f1_threshold value: 75.98594427108765 - type: dot_precision value: 91.90371991247265 - type: dot_recall value: 84.0 - type: euclidean_accuracy value: 99.76831683168317 - type: euclidean_accuracy_threshold value: 69.30232048034668 - type: euclidean_ap value: 94.10650199330436 - type: euclidean_f1 value: 87.77429467084639 - type: euclidean_f1_threshold value: 69.30232048034668 - type: euclidean_precision value: 91.90371991247265 - type: euclidean_recall value: 84.0 - type: main_score value: 94.31118902382526 - type: manhattan_accuracy value: 99.77227722772277 - type: manhattan_accuracy_threshold value: 1752.2960662841797 - type: manhattan_ap value: 94.31118902382526 - type: manhattan_f1 value: 87.98328108672936 - type: manhattan_f1_threshold value: 1752.2960662841797 - type: manhattan_precision value: 92.12253829321662 - type: manhattan_recall value: 84.2 - type: max_accuracy value: 99.77227722772277 - type: max_ap value: 94.31118902382526 - type: max_f1 value: 87.98328108672936 - type: max_precision value: 92.12253829321662 - type: max_recall value: 84.2 - type: similarity_accuracy value: 99.76831683168317 - type: similarity_accuracy_threshold value: 75.98594427108765 - type: similarity_ap value: 94.10650199330435 - type: similarity_f1 value: 87.77429467084639 - type: similarity_f1_threshold value: 75.98594427108765 - type: similarity_precision value: 91.90371991247265 - type: 
similarity_recall value: 84.0 - task: type: Clustering dataset: name: MTEB StackExchangeClustering (default) type: mteb/stackexchange-clustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: main_score value: 65.28699161008417 - type: v_measure value: 65.28699161008417 - type: v_measure_std value: 5.01676559317753 - task: type: Clustering dataset: name: MTEB StackExchangeClusteringP2P (default) type: mteb/stackexchange-clustering-p2p config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: main_score value: 32.18986400821423 - type: v_measure value: 32.18986400821423 - type: v_measure_std value: 1.7607695643068701 - task: type: Reranking dataset: name: MTEB StackOverflowDupQuestions (default) type: mteb/stackoverflowdupquestions-reranking config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: main_score value: 51.373735094128726 - type: map value: 51.373735094128726 - type: mrr value: 52.03188661828367 - type: nAUC_map_diff1 value: 37.47429390492891 - type: nAUC_map_max value: 11.050764243820572 - type: nAUC_map_std value: 8.32183046644254 - type: nAUC_mrr_diff1 value: 38.215874831509836 - type: nAUC_mrr_max value: 12.326444149252634 - type: nAUC_mrr_std value: 9.234015034873362 - task: type: Summarization dataset: name: MTEB SummEval (default) type: mteb/summeval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cosine_pearson value: 30.04547230414472 - type: cosine_spearman value: 30.62051882468504 - type: dot_pearson value: 30.04547442370404 - type: dot_spearman value: 30.62051882468504 - type: main_score value: 30.62051882468504 - type: pearson value: 30.04547230414472 - type: spearman value: 30.62051882468504 - task: type: Reranking dataset: name: MTEB T2Reranking (default) type: C-MTEB/T2Reranking config: default split: dev revision: 76631901a18387f85eaa53e5450019b87ad58ef9 metrics: - 
type: main_score value: 67.491940360769 - type: map value: 67.491940360769 - type: mrr value: 77.88394939334343 - type: nAUC_map_diff1 value: -9.62446631785462 - type: nAUC_map_max value: 36.249702987605744 - type: nAUC_map_std value: -2.805167498766831 - type: nAUC_mrr_diff1 value: -6.20324917287488 - type: nAUC_mrr_max value: 31.812094369246875 - type: nAUC_mrr_std value: -4.075688771938606 - task: type: Retrieval dataset: name: MTEB T2Retrieval (default) type: C-MTEB/T2Retrieval config: default split: test revision: 8731a845f1bf500a4f111cf1070785c793d10e64 metrics: - type: main_score value: 84.787 - type: map_at_1 value: 27.672 - type: map_at_10 value: 77.376 - type: map_at_100 value: 80.938 - type: map_at_1000 value: 81.003 - type: map_at_20 value: 80.10300000000001 - type: map_at_3 value: 54.591 - type: map_at_5 value: 66.989 - type: mrr_at_1 value: 90.43485884622129 - type: mrr_at_10 value: 92.71924029124197 - type: mrr_at_100 value: 92.80671884902306 - type: mrr_at_1000 value: 92.80992943051154 - type: mrr_at_20 value: 92.7778595796147 - type: mrr_at_3 value: 92.34394178502532 - type: mrr_at_5 value: 92.58197439943898 - type: nauc_map_at_1000_diff1 value: 14.187138773935896 - type: nauc_map_at_1000_max value: 47.31682799510659 - type: nauc_map_at_1000_std value: 21.741024677228047 - type: nauc_map_at_100_diff1 value: 14.184796108320008 - type: nauc_map_at_100_max value: 47.20099045909537 - type: nauc_map_at_100_std value: 21.64510249356838 - type: nauc_map_at_10_diff1 value: 18.09013734024722 - type: nauc_map_at_10_max value: 33.957693353315754 - type: nauc_map_at_10_std value: 4.632860952606258 - type: nauc_map_at_1_diff1 value: 52.11313588305665 - type: nauc_map_at_1_max value: -26.469624300417475 - type: nauc_map_at_1_std value: -39.53640793241912 - type: nauc_map_at_20_diff1 value: 14.76789274084956 - type: nauc_map_at_20_max value: 44.60301273979322 - type: nauc_map_at_20_std value: 18.04392206281267 - type: nauc_map_at_3_diff1 value: 37.91974561569971 
- type: nauc_map_at_3_max value: -13.879081458775136 - type: nauc_map_at_3_std value: -37.05923027567862 - type: nauc_map_at_5_diff1 value: 29.93643633903826 - type: nauc_map_at_5_max value: 3.3751192173497846 - type: nauc_map_at_5_std value: -25.135169956144026 - type: nauc_mrr_at_1000_diff1 value: 50.63468735448507 - type: nauc_mrr_at_1000_max value: 80.9387975384782 - type: nauc_mrr_at_1000_std value: 47.45679490810955 - type: nauc_mrr_at_100_diff1 value: 50.63382425620111 - type: nauc_mrr_at_100_max value: 80.94557071660084 - type: nauc_mrr_at_100_std value: 47.4706521424307 - type: nauc_mrr_at_10_diff1 value: 50.65290897062318 - type: nauc_mrr_at_10_max value: 81.04383562705446 - type: nauc_mrr_at_10_std value: 47.512964315818635 - type: nauc_mrr_at_1_diff1 value: 50.784542961223664 - type: nauc_mrr_at_1_max value: 77.12020918334333 - type: nauc_mrr_at_1_std value: 41.62060992521132 - type: nauc_mrr_at_20_diff1 value: 50.642843237883554 - type: nauc_mrr_at_20_max value: 80.98753633072556 - type: nauc_mrr_at_20_std value: 47.51611228243667 - type: nauc_mrr_at_3_diff1 value: 50.55208949999196 - type: nauc_mrr_at_3_max value: 80.90088916202212 - type: nauc_mrr_at_3_std value: 47.108801496033706 - type: nauc_mrr_at_5_diff1 value: 50.575025944200426 - type: nauc_mrr_at_5_max value: 81.1682576960758 - type: nauc_mrr_at_5_std value: 47.66180863029945 - type: nauc_ndcg_at_1000_diff1 value: 19.248626358613777 - type: nauc_ndcg_at_1000_max value: 60.28098612146239 - type: nauc_ndcg_at_1000_std value: 35.566884443831256 - type: nauc_ndcg_at_100_diff1 value: 18.724475810243103 - type: nauc_ndcg_at_100_max value: 59.05126018911714 - type: nauc_ndcg_at_100_std value: 34.989411651595724 - type: nauc_ndcg_at_10_diff1 value: 18.365968693164454 - type: nauc_ndcg_at_10_max value: 49.3849363402422 - type: nauc_ndcg_at_10_std value: 22.725576324478418 - type: nauc_ndcg_at_1_diff1 value: 50.784542961223664 - type: nauc_ndcg_at_1_max value: 77.12020918334333 - type: 
nauc_ndcg_at_1_std value: 41.62060992521132 - type: nauc_ndcg_at_20_diff1 value: 18.609251604582088 - type: nauc_ndcg_at_20_max value: 53.4399532074586 - type: nauc_ndcg_at_20_std value: 27.89488334480925 - type: nauc_ndcg_at_3_diff1 value: 14.778198776216014 - type: nauc_ndcg_at_3_max value: 66.3906486945579 - type: nauc_ndcg_at_3_std value: 37.77884123555143 - type: nauc_ndcg_at_5_diff1 value: 14.738943082799125 - type: nauc_ndcg_at_5_max value: 58.68056564574427 - type: nauc_ndcg_at_5_std value: 31.79077827650123 - type: nauc_precision_at_1000_diff1 value: -32.96252557855019 - type: nauc_precision_at_1000_max value: 50.89186882409747 - type: nauc_precision_at_1000_std value: 65.12914817350996 - type: nauc_precision_at_100_diff1 value: -32.967363137612 - type: nauc_precision_at_100_max value: 52.371028986911206 - type: nauc_precision_at_100_std value: 66.22460187169995 - type: nauc_precision_at_10_diff1 value: -33.1994311022115 - type: nauc_precision_at_10_max value: 56.765578648101936 - type: nauc_precision_at_10_std value: 61.720110059524245 - type: nauc_precision_at_1_diff1 value: 50.784542961223664 - type: nauc_precision_at_1_max value: 77.12020918334333 - type: nauc_precision_at_1_std value: 41.62060992521132 - type: nauc_precision_at_20_diff1 value: -33.03547004009963 - type: nauc_precision_at_20_max value: 54.80165712635334 - type: nauc_precision_at_20_std value: 65.22155557747443 - type: nauc_precision_at_3_diff1 value: -29.686327824889048 - type: nauc_precision_at_3_max value: 67.05323085900143 - type: nauc_precision_at_3_std value: 54.17159434030293 - type: nauc_precision_at_5_diff1 value: -33.83722712457918 - type: nauc_precision_at_5_max value: 62.05827208111495 - type: nauc_precision_at_5_std value: 57.81072889200247 - type: nauc_recall_at_1000_diff1 value: 8.711894161388097 - type: nauc_recall_at_1000_max value: 65.25175068548329 - type: nauc_recall_at_1000_std value: 66.12749628458647 - type: nauc_recall_at_100_diff1 value: 8.66880971384389 - type: 
nauc_recall_at_100_max value: 52.0738003480146 - type: nauc_recall_at_100_std value: 46.06733146130884 - type: nauc_recall_at_10_diff1 value: 17.07505618926457 - type: nauc_recall_at_10_max value: 24.10644474612695 - type: nauc_recall_at_10_std value: -1.8168787700162745 - type: nauc_recall_at_1_diff1 value: 52.11313588305665 - type: nauc_recall_at_1_max value: -26.469624300417475 - type: nauc_recall_at_1_std value: -39.53640793241912 - type: nauc_recall_at_20_diff1 value: 11.613589798104606 - type: nauc_recall_at_20_max value: 38.97176582712362 - type: nauc_recall_at_20_std value: 20.84092179197353 - type: nauc_recall_at_3_diff1 value: 36.41927821015105 - type: nauc_recall_at_3_max value: -18.226003965381963 - type: nauc_recall_at_3_std value: -39.73849383747519 - type: nauc_recall_at_5_diff1 value: 29.151020828995026 - type: nauc_recall_at_5_max value: -4.815421889490071 - type: nauc_recall_at_5_std value: -30.957293364793664 - type: ndcg_at_1 value: 90.435 - type: ndcg_at_10 value: 84.787 - type: ndcg_at_100 value: 88.276 - type: ndcg_at_1000 value: 88.913 - type: ndcg_at_20 value: 86.502 - type: ndcg_at_3 value: 86.452 - type: ndcg_at_5 value: 84.951 - type: precision_at_1 value: 90.435 - type: precision_at_10 value: 42.034 - type: precision_at_100 value: 4.997 - type: precision_at_1000 value: 0.515 - type: precision_at_20 value: 23.299 - type: precision_at_3 value: 75.58399999999999 - type: precision_at_5 value: 63.243 - type: recall_at_1 value: 27.672 - type: recall_at_10 value: 83.404 - type: recall_at_100 value: 94.883 - type: recall_at_1000 value: 98.123 - type: recall_at_20 value: 89.252 - type: recall_at_3 value: 56.16 - type: recall_at_5 value: 70.13000000000001 - task: type: Classification dataset: name: MTEB TNews (default)
type: C-MTEB/TNews-classification config: default split: validation revision: 317f262bf1e6126357bbe89e875451e4b0938fe4 metrics: - type: accuracy value: 50.82700000000001 - type: f1 value: 48.96846676189542 - type: f1_weighted value: 50.893856756125246 - type: main_score value: 50.82700000000001 - task: type: Retrieval dataset: name: MTEB TRECCOVID (default) type: mteb/trec-covid config: default split: test revision: bb9466bac8153a0349341eb1b22e06409e78ef4e metrics: - type: main_score value: 77.628 - type: map_at_1 value: 0.22399999999999998 - type: map_at_10 value: 1.9449999999999998 - type: map_at_100 value: 12.856000000000002 - type: map_at_1000 value: 30.894 - type: map_at_20 value: 3.688 - type: map_at_3 value: 0.658 - type: map_at_5 value: 1.018 - type: mrr_at_1 value: 86.0 - type: mrr_at_10 value: 92.16666666666667 - type: mrr_at_100 value: 92.16666666666667 - type: mrr_at_1000 value: 92.16666666666667 - type: mrr_at_20 value: 92.16666666666667 - type: mrr_at_3 value: 91.66666666666667 - type: mrr_at_5 value: 92.16666666666667 - type: nauc_map_at_1000_diff1 value: 3.874275710501894 - type: nauc_map_at_1000_max value: 42.30871182822161 - type: nauc_map_at_1000_std value: 73.1469042142082 - type: nauc_map_at_100_diff1 value: 10.741316614996673 - type: nauc_map_at_100_max value: 20.209806325112854 - type: nauc_map_at_100_std value: 54.852351862227444 - type: nauc_map_at_10_diff1 value: 5.126853472907795 - type: nauc_map_at_10_max value: -1.0189783202285525 - type: nauc_map_at_10_std value: 14.740708060291375 - type: nauc_map_at_1_diff1 value: 15.257728237315499 - type: nauc_map_at_1_max value: -16.76524076910754 - type: nauc_map_at_1_std value: 2.467734630732928 - type: nauc_map_at_20_diff1 value: 5.09579525723425 - type: nauc_map_at_20_max value: 0.49956021541095297 - type: nauc_map_at_20_std value: 21.83633208703783 - type: nauc_map_at_3_diff1 value: 17.747855564334554 - type:
nauc_map_at_3_max value: -16.856658500402272 - type: nauc_map_at_3_std value: 0.7848912375580602 - type: nauc_map_at_5_diff1 value: 6.113580740023873 - type: nauc_map_at_5_max value: -10.302723521258908 - type: nauc_map_at_5_std value: 4.462343264994648 - type: nauc_mrr_at_1000_diff1 value: -0.30725281048945086 - type: nauc_mrr_at_1000_max value: -5.665166368661917 - type: nauc_mrr_at_1000_std value: 55.91692870501676 - type: nauc_mrr_at_100_diff1 value: -0.30725281048945086 - type: nauc_mrr_at_100_max value: -5.665166368661917 - type: nauc_mrr_at_100_std value: 55.91692870501676 - type: nauc_mrr_at_10_diff1 value: -0.30725281048945086 - type: nauc_mrr_at_10_max value: -5.665166368661917 - type: nauc_mrr_at_10_std value: 55.91692870501676 - type: nauc_mrr_at_1_diff1 value: -5.556325342940264 - type: nauc_mrr_at_1_max value: -2.792018844395178 - type: nauc_mrr_at_1_std value: 56.948870721906545 - type: nauc_mrr_at_20_diff1 value: -0.30725281048945086 - type: nauc_mrr_at_20_max value: -5.665166368661917 - type: nauc_mrr_at_20_std value: 55.91692870501676 - type: nauc_mrr_at_3_diff1 value: 0.47233468286134095 - type: nauc_mrr_at_3_max value: -8.804704067861577 - type: nauc_mrr_at_3_std value: 55.88586851744749 - type: nauc_mrr_at_5_diff1 value: -0.30725281048945086 - type: nauc_mrr_at_5_max value: -5.665166368661917 - type: nauc_mrr_at_5_std value: 55.91692870501676 - type: nauc_ndcg_at_1000_diff1 value: -0.1345042550425933 - type: nauc_ndcg_at_1000_max value: 42.39024935806373 - type: nauc_ndcg_at_1000_std value: 72.7798720975461 - type: nauc_ndcg_at_100_diff1 value: -9.747555787088007 - type: nauc_ndcg_at_100_max value: 42.766831181803084 - type: nauc_ndcg_at_100_std value: 76.86015244416944 - type: nauc_ndcg_at_10_diff1 value: -26.27362680987509 - type: nauc_ndcg_at_10_max value: 30.431944046507574 - type: nauc_ndcg_at_10_std value: 66.53781705282887 - type: nauc_ndcg_at_1_diff1 value: -18.212689382945317 - type: nauc_ndcg_at_1_max value: -4.713531084924769 - type: 
nauc_ndcg_at_1_std value: 60.648981250362844 - type: nauc_ndcg_at_20_diff1 value: -20.086737440726655 - type: nauc_ndcg_at_20_max value: 34.37729157545477 - type: nauc_ndcg_at_20_std value: 72.61918470988022 - type: nauc_ndcg_at_3_diff1 value: -15.61833953537465 - type: nauc_ndcg_at_3_max value: -4.926385117627094 - type: nauc_ndcg_at_3_std value: 49.062914801546064 - type: nauc_ndcg_at_5_diff1 value: -26.24770979820179 - type: nauc_ndcg_at_5_max value: 10.694823304966544 - type: nauc_ndcg_at_5_std value: 55.162048508134575 - type: nauc_precision_at_1000_diff1 value: -7.8666941738399405 - type: nauc_precision_at_1000_max value: 47.7715994933915 - type: nauc_precision_at_1000_std value: 40.74870410349625 - type: nauc_precision_at_100_diff1 value: -8.388284738609048 - type: nauc_precision_at_100_max value: 48.91820270459412 - type: nauc_precision_at_100_std value: 79.23047106059042 - type: nauc_precision_at_10_diff1 value: -28.150110132914243 - type: nauc_precision_at_10_max value: 47.657016598848614 - type: nauc_precision_at_10_std value: 70.37939245057737 - type: nauc_precision_at_1_diff1 value: -5.556325342940264 - type: nauc_precision_at_1_max value: -2.792018844395178 - type: nauc_precision_at_1_std value: 56.948870721906545 - type: nauc_precision_at_20_diff1 value: -20.975405607247254 - type: nauc_precision_at_20_max value: 42.210933807639144 - type: nauc_precision_at_20_std value: 77.97688841426242 - type: nauc_precision_at_3_diff1 value: -8.477643393806025 - type: nauc_precision_at_3_max value: -1.736301997434395 - type: nauc_precision_at_3_std value: 48.117097306212315 - type: nauc_precision_at_5_diff1 value: -33.46599030296231 - type: nauc_precision_at_5_max value: 25.59065048625327 - type: nauc_precision_at_5_std value: 51.54564053698053 - type: nauc_recall_at_1000_diff1 value: 5.564800637005415 - type: nauc_recall_at_1000_max value: 38.79986663004598 - type: nauc_recall_at_1000_std value: 60.80970482914177 - type: nauc_recall_at_100_diff1 value: 
12.714172839997923 - type: nauc_recall_at_100_max value: 11.122107061916715 - type: nauc_recall_at_100_std value: 40.48872128875498 - type: nauc_recall_at_10_diff1 value: 7.356834320854047 - type: nauc_recall_at_10_max value: -5.043558320134648 - type: nauc_recall_at_10_std value: 6.095900853363236 - type: nauc_recall_at_1_diff1 value: 15.257728237315499 - type: nauc_recall_at_1_max value: -16.76524076910754 - type: nauc_recall_at_1_std value: 2.467734630732928 - type: nauc_recall_at_20_diff1 value: 8.751020653150428 - type: nauc_recall_at_20_max value: -6.816867803112397 - type: nauc_recall_at_20_std value: 9.618881030590314 - type: nauc_recall_at_3_diff1 value: 16.423206556657334 - type: nauc_recall_at_3_max value: -19.342599897313622 - type: nauc_recall_at_3_std value: -4.494478463215018 - type: nauc_recall_at_5_diff1 value: 4.384028056599589 - type: nauc_recall_at_5_max value: -11.901838023947736 - type: nauc_recall_at_5_std value: -2.4395241014266964 - type: ndcg_at_1 value: 83.0 - type: ndcg_at_10 value: 77.628 - type: ndcg_at_100 value: 63.298 - type: ndcg_at_1000 value: 56.521 - type: ndcg_at_20 value: 76.32900000000001 - type: ndcg_at_3 value: 80.35799999999999 - type: ndcg_at_5 value: 79.266 - type: precision_at_1 value: 86.0 - type: precision_at_10 value: 82.6 - type: precision_at_100 value: 65.38000000000001 - type: precision_at_1000 value: 24.834 - type: precision_at_20 value: 81.10000000000001 - type: precision_at_3 value: 84.667 - type: precision_at_5 value: 83.6 - type: recall_at_1 value: 0.22399999999999998 - type: recall_at_10 value: 2.156 - type: recall_at_100 value: 15.928999999999998 - type: recall_at_1000 value: 53.191 - type: recall_at_20 value: 4.204 - type: recall_at_3 value: 0.6930000000000001 - type: recall_at_5 value: 1.097 - task: type: Clustering dataset: name: MTEB ThuNewsClusteringP2P (default) type: C-MTEB/ThuNewsClusteringP2P config: default split: test revision: 5798586b105c0434e4f0fe5e767abe619442cf93 metrics: - type: main_score 
value: 60.545793641097134 - type: v_measure value: 60.545793641097134 - type: v_measure_std value: 2.352957317776474 - task: type: Clustering dataset: name: MTEB ThuNewsClusteringS2S (default) type: C-MTEB/ThuNewsClusteringS2S config: default split: test revision: 8a8b2caeda43f39e13c4bc5bea0f8a667896e10d metrics: - type: main_score value: 57.01398421152894 - type: v_measure value: 57.01398421152894 - type: v_measure_std value: 1.2833070880511654 - task: type: Retrieval dataset: name: MTEB Touche2020 (default) type: mteb/touche2020 config: default split: test revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f metrics: - type: main_score value: 29.907 - type: map_at_1 value: 2.8000000000000003 - type: map_at_10 value: 11.801 - type: map_at_100 value: 18.684 - type: map_at_1000 value: 20.324 - type: map_at_20 value: 14.863000000000001 - type: map_at_3 value: 5.976 - type: map_at_5 value: 8.017000000000001 - type: mrr_at_1 value: 42.857142857142854 - type: mrr_at_10 value: 58.21914480077746 - type: mrr_at_100 value: 58.86865834193187 - type: mrr_at_1000 value: 58.86865834193187 - type: mrr_at_20 value: 58.80220098976022 - type: mrr_at_3 value: 55.10204081632652 - type: mrr_at_5 value: 57.04081632653061 - type: nauc_map_at_1000_diff1 value: 6.6751169504360055 - type: nauc_map_at_1000_max value: -3.7567600769254965 - type: nauc_map_at_1000_std value: -1.1124604009386834 - type: nauc_map_at_100_diff1 value: 6.651926585741258 - type: nauc_map_at_100_max value: -5.864307686139932 - type: nauc_map_at_100_std value: -4.0284486690638674 - type: nauc_map_at_10_diff1 value: 9.029404549337796 - type: nauc_map_at_10_max value: -3.0948376595412075 - type: nauc_map_at_10_std value: -16.028908061051876 - type: nauc_map_at_1_diff1 value: 17.548361776107765 - type: nauc_map_at_1_max value: 0.45804783229218043 - type: nauc_map_at_1_std value: -5.479184148805003 - type: nauc_map_at_20_diff1 value: 4.328228113463324 - type: nauc_map_at_20_max value: -9.231540897130337 - type: 
nauc_map_at_20_std value: -13.57340782183861 - type: nauc_map_at_3_diff1 value: 14.867523300835748 - type: nauc_map_at_3_max value: 0.07164381913812982 - type: nauc_map_at_3_std value: -14.682908765951844 - type: nauc_map_at_5_diff1 value: 10.234632572678201 - type: nauc_map_at_5_max value: -3.275900458361846 - type: nauc_map_at_5_std value: -16.435789249899617 - type: nauc_mrr_at_1000_diff1 value: 20.695606512580003 - type: nauc_mrr_at_1000_max value: 4.156101549595945 - type: nauc_mrr_at_1000_std value: 8.649527259525529 - type: nauc_mrr_at_100_diff1 value: 20.695606512580003 - type: nauc_mrr_at_100_max value: 4.156101549595945 - type: nauc_mrr_at_100_std value: 8.649527259525529 - type: nauc_mrr_at_10_diff1 value: 20.698427366086086 - type: nauc_mrr_at_10_max value: 3.954042031248478 - type: nauc_mrr_at_10_std value: 9.39591288641001 - type: nauc_mrr_at_1_diff1 value: 16.54599380455322 - type: nauc_mrr_at_1_max value: 0.24848203752485892 - type: nauc_mrr_at_1_std value: 1.2530112824133868 - type: nauc_mrr_at_20_diff1 value: 20.85307910624566 - type: nauc_mrr_at_20_max value: 4.245739796463719 - type: nauc_mrr_at_20_std value: 8.861423973155963 - type: nauc_mrr_at_3_diff1 value: 16.903075214698525 - type: nauc_mrr_at_3_max value: 2.093666156251405 - type: nauc_mrr_at_3_std value: 4.126025928366229 - type: nauc_mrr_at_5_diff1 value: 20.24944648545998 - type: nauc_mrr_at_5_max value: 6.445821430696172 - type: nauc_mrr_at_5_std value: 9.596892048537528 - type: nauc_ndcg_at_1000_diff1 value: 20.415433650593336 - type: nauc_ndcg_at_1000_max value: 12.366922931280424 - type: nauc_ndcg_at_1000_std value: 27.300631515965605 - type: nauc_ndcg_at_100_diff1 value: 20.856515124531192 - type: nauc_ndcg_at_100_max value: 0.4578830622365307 - type: nauc_ndcg_at_100_std value: 19.22066872166263 - type: nauc_ndcg_at_10_diff1 value: 18.56708265272367 - type: nauc_ndcg_at_10_max value: 3.4531275220348503 - type: nauc_ndcg_at_10_std value: 2.264480721925588 - type: 
nauc_ndcg_at_1_diff1 value: 13.704789776500043 - type: nauc_ndcg_at_1_max value: -3.7049166878413837 - type: nauc_ndcg_at_1_std value: 4.422031642782982 - type: nauc_ndcg_at_20_diff1 value: 14.90643434193072 - type: nauc_ndcg_at_20_max value: -8.621048644057323 - type: nauc_ndcg_at_20_std value: -0.4555067324883121 - type: nauc_ndcg_at_3_diff1 value: 19.361812396724805 - type: nauc_ndcg_at_3_max value: 5.817962867013526 - type: nauc_ndcg_at_3_std value: -0.8753252050514689 - type: nauc_ndcg_at_5_diff1 value: 19.512803989022927 - type: nauc_ndcg_at_5_max value: 4.4466800234390655 - type: nauc_ndcg_at_5_std value: 3.1909856882261396 - type: nauc_precision_at_1000_diff1 value: -13.554283710669521 - type: nauc_precision_at_1000_max value: 47.14610434487349 - type: nauc_precision_at_1000_std value: 36.39173956289614 - type: nauc_precision_at_100_diff1 value: 10.856084336362487 - type: nauc_precision_at_100_max value: 25.678326608203704 - type: nauc_precision_at_100_std value: 59.36676183382602 - type: nauc_precision_at_10_diff1 value: 14.37198412094934 - type: nauc_precision_at_10_max value: 9.104503069700481 - type: nauc_precision_at_10_std value: 10.71451519713279 - type: nauc_precision_at_1_diff1 value: 16.54599380455322 - type: nauc_precision_at_1_max value: 0.24848203752485892 - type: nauc_precision_at_1_std value: 1.2530112824133868 - type: nauc_precision_at_20_diff1 value: 3.2143260768611457 - type: nauc_precision_at_20_max value: -7.636401715682449 - type: nauc_precision_at_20_std value: 14.782100594161287 - type: nauc_precision_at_3_diff1 value: 21.50971938731398 - type: nauc_precision_at_3_max value: 9.186546888214185 - type: nauc_precision_at_3_std value: -6.357345547475153 - type: nauc_precision_at_5_diff1 value: 20.90086453668611 - type: nauc_precision_at_5_max value: 7.176558075115805 - type: nauc_precision_at_5_std value: 1.3728705241884456 - type: nauc_recall_at_1000_diff1 value: 28.074510232752107 - type: nauc_recall_at_1000_max value: 38.34389209274438 
- type: nauc_recall_at_1000_std value: 63.33063604431367 - type: nauc_recall_at_100_diff1 value: 19.850551726108478 - type: nauc_recall_at_100_max value: -5.164677831273209 - type: nauc_recall_at_100_std value: 23.866106801134855 - type: nauc_recall_at_10_diff1 value: 11.964369616221267 - type: nauc_recall_at_10_max value: -7.987814146906532 - type: nauc_recall_at_10_std value: -12.980265883915317 - type: nauc_recall_at_1_diff1 value: 17.548361776107765 - type: nauc_recall_at_1_max value: 0.45804783229218043 - type: nauc_recall_at_1_std value: -5.479184148805003 - type: nauc_recall_at_20_diff1 value: 5.681314516223625 - type: nauc_recall_at_20_max value: -18.663653531510523 - type: nauc_recall_at_20_std value: -8.289499326600785 - type: nauc_recall_at_3_diff1 value: 14.990419827341613 - type: nauc_recall_at_3_max value: -3.66710023682901 - type: nauc_recall_at_3_std value: -15.496276012407181 - type: nauc_recall_at_5_diff1 value: 11.548731015024577 - type: nauc_recall_at_5_max value: -4.628411094603573 - type: nauc_recall_at_5_std value: -13.165660161388459 - type: ndcg_at_1 value: 39.796 - type: ndcg_at_10 value: 29.907 - type: ndcg_at_100 value: 41.347 - type: ndcg_at_1000 value: 52.688 - type: ndcg_at_20 value: 30.651 - type: ndcg_at_3 value: 35.419 - type: ndcg_at_5 value: 31.715 - type: precision_at_1 value: 42.857 - type: precision_at_10 value: 25.918000000000003 - type: precision_at_100 value: 8.469 - type: precision_at_1000 value: 1.614 - type: precision_at_20 value: 20.305999999999997 - type: precision_at_3 value: 36.735 - type: precision_at_5 value: 30.612000000000002 - type: recall_at_1 value: 2.8000000000000003 - type: recall_at_10 value: 18.722 - type: recall_at_100 value: 52.001 - type: recall_at_1000 value: 86.88 - type: recall_at_20 value: 27.805000000000003 - type: recall_at_3 value: 7.420999999999999 - type: recall_at_5 value: 10.663 - task: type: Classification dataset: name: MTEB ToxicConversationsClassification (default) type: 
mteb/toxic_conversations_50k config: default split: test revision: edfaf9da55d3dd50d43143d90c1ac476895ae6de metrics: - type: accuracy value: 64.833984375 - type: ap value: 11.670668141786788 - type: ap_weighted value: 11.670668141786788 - type: f1 value: 49.77377634658719 - type: f1_weighted value: 72.52437665595998 - type: main_score value: 64.833984375 - task: type: Classification dataset: name: MTEB TweetSentimentExtractionClassification (default) type: mteb/tweet_sentiment_extraction config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 64.88398415393323 - type: f1 value: 64.84361328633659 - type: f1_weighted value: 63.59840236775296 - type: main_score value: 64.88398415393323 - task: type: Clustering dataset: name: MTEB TwentyNewsgroupsClustering (default) type: mteb/twentynewsgroups-clustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: main_score value: 46.12488681435932 - type: v_measure value: 46.12488681435932 - type: v_measure_std value: 1.5095413626412524 - task: type: PairClassification dataset: name: MTEB TwitterSemEval2015 (default) type: mteb/twittersemeval2015-pairclassification config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cosine_accuracy value: 85.17017345174942 - type: cosine_accuracy_threshold value: 73.24392199516296 - type: cosine_ap value: 71.5689998301689 - type: cosine_f1 value: 66.13515565679575 - type: cosine_f1_threshold value: 69.43247318267822 - type: cosine_precision value: 63.545719844357976 - type: cosine_recall value: 68.94459102902375 - type: dot_accuracy value: 85.17017345174942 - type: dot_accuracy_threshold value: 73.24392199516296 - type: dot_ap value: 71.56899897384942 - type: dot_f1 value: 66.13515565679575 - type: dot_f1_threshold value: 69.43247318267822 - type: dot_precision value: 63.545719844357976 - type: dot_recall value: 68.94459102902375 - type: 
euclidean_accuracy value: 85.17017345174942 - type: euclidean_accuracy_threshold value: 73.15199971199036 - type: euclidean_ap value: 71.56900157150807 - type: euclidean_f1 value: 66.13515565679575 - type: euclidean_f1_threshold value: 78.18889617919922 - type: euclidean_precision value: 63.545719844357976 - type: euclidean_recall value: 68.94459102902375 - type: main_score value: 71.56900157150807 - type: manhattan_accuracy value: 85.03904154497228 - type: manhattan_accuracy_threshold value: 1829.3399810791016 - type: manhattan_ap value: 71.17145701434644 - type: manhattan_f1 value: 65.9017661467062 - type: manhattan_f1_threshold value: 1959.2126846313477 - type: manhattan_precision value: 64.43156037307789 - type: manhattan_recall value: 67.44063324538259 - type: max_accuracy value: 85.17017345174942 - type: max_ap value: 71.56900157150807 - type: max_f1 value: 66.13515565679575 - type: max_precision value: 64.43156037307789 - type: max_recall value: 68.94459102902375 - type: similarity_accuracy value: 85.17017345174942 - type: similarity_accuracy_threshold value: 73.24392199516296 - type: similarity_ap value: 71.5689998301689 - type: similarity_f1 value: 66.13515565679575 - type: similarity_f1_threshold value: 69.43247318267822 - type: similarity_precision value: 63.545719844357976 - type: similarity_recall value: 68.94459102902375 - task: type: PairClassification dataset: name: MTEB TwitterURLCorpus (default) type: mteb/twitterurlcorpus-pairclassification config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cosine_accuracy value: 89.17995886211045 - type: cosine_accuracy_threshold value: 70.09445428848267 - type: cosine_ap value: 86.05864466686238 - type: cosine_f1 value: 78.36959806588094 - type: cosine_f1_threshold value: 67.36087203025818 - type: cosine_precision value: 76.92821121328983 - type: cosine_recall value: 79.8660301817062 - type: dot_accuracy value: 89.17995886211045 - type: dot_accuracy_threshold value: 
70.09446620941162 - type: dot_ap value: 86.05863925325839 - type: dot_f1 value: 78.36959806588094 - type: dot_f1_threshold value: 67.36087799072266 - type: dot_precision value: 76.92821121328983 - type: dot_recall value: 79.8660301817062 - type: euclidean_accuracy value: 89.17995886211045 - type: euclidean_accuracy_threshold value: 77.33762264251709 - type: euclidean_ap value: 86.05864473129277 - type: euclidean_f1 value: 78.36959806588094 - type: euclidean_f1_threshold value: 80.79496026039124 - type: euclidean_precision value: 76.92821121328983 - type: euclidean_recall value: 79.8660301817062 - type: main_score value: 86.05864473129277 - type: manhattan_accuracy value: 89.15279233127644 - type: manhattan_accuracy_threshold value: 1960.135269165039 - type: manhattan_ap value: 86.00803071652211 - type: manhattan_f1 value: 78.28386279840602 - type: manhattan_f1_threshold value: 2062.195587158203 - type: manhattan_precision value: 75.81331602106326 - type: manhattan_recall value: 80.92085001539883 - type: max_accuracy value: 89.17995886211045 - type: max_ap value: 86.05864473129277 - type: max_f1 value: 78.36959806588094 - type: max_precision value: 76.92821121328983 - type: max_recall value: 80.92085001539883 - type: similarity_accuracy value: 89.17995886211045 - type: similarity_accuracy_threshold value: 70.09445428848267 - type: similarity_ap value: 86.05864466686238 - type: similarity_f1 value: 78.36959806588094 - type: similarity_f1_threshold value: 67.36087203025818 - type: similarity_precision value: 76.92821121328983 - type: similarity_recall value: 79.8660301817062 - task: type: Retrieval dataset: name: MTEB VideoRetrieval (default) type: C-MTEB/VideoRetrieval config: default split: test revision: 58c2597a5943a2ba48f4668c3b90d796283c5639 metrics: - type: main_score value: 76.259 - type: map_at_1 value: 63.1 - type: map_at_10 value: 72.214 - type: map_at_100 value: 72.595 - type: map_at_1000 value: 72.604 - type: map_at_20 value: 72.482 - type: map_at_3 
value: 70.45 - type: map_at_5 value: 71.565 - type: mrr_at_1 value: 63.1 - type: mrr_at_10 value: 72.2140873015873 - type: mrr_at_100 value: 72.59450024817937 - type: mrr_at_1000 value: 72.60392898169485 - type: mrr_at_20 value: 72.48216831972637 - type: mrr_at_3 value: 70.45000000000005 - type: mrr_at_5 value: 71.56500000000004 - type: nauc_map_at_1000_diff1 value: 74.28035434647866 - type: nauc_map_at_1000_max value: 24.9986630946608 - type: nauc_map_at_1000_std value: -39.39693868573278 - type: nauc_map_at_100_diff1 value: 74.26844900002635 - type: nauc_map_at_100_max value: 25.018503837474537 - type: nauc_map_at_100_std value: -39.37583943184342 - type: nauc_map_at_10_diff1 value: 74.26680601603067 - type: nauc_map_at_10_max value: 25.09926459399667 - type: nauc_map_at_10_std value: -39.89610049620442 - type: nauc_map_at_1_diff1 value: 77.10780012025063 - type: nauc_map_at_1_max value: 19.806657111504965 - type: nauc_map_at_1_std value: -37.2015516410441 - type: nauc_map_at_20_diff1 value: 74.21838781014907 - type: nauc_map_at_20_max value: 25.063007078847527 - type: nauc_map_at_20_std value: -39.43384966895805 - type: nauc_map_at_3_diff1 value: 74.18933637722046 - type: nauc_map_at_3_max value: 24.353710639698495 - type: nauc_map_at_3_std value: -40.981597611432065 - type: nauc_map_at_5_diff1 value: 74.10850958842639 - type: nauc_map_at_5_max value: 24.97711616082726 - type: nauc_map_at_5_std value: -40.679823827809464 - type: nauc_mrr_at_1000_diff1 value: 74.28035434647866 - type: nauc_mrr_at_1000_max value: 24.9986630946608 - type: nauc_mrr_at_1000_std value: -39.39693868573278 - type: nauc_mrr_at_100_diff1 value: 74.26844900002635 - type: nauc_mrr_at_100_max value: 25.018503837474537 - type: nauc_mrr_at_100_std value: -39.37583943184342 - type: nauc_mrr_at_10_diff1 value: 74.26680601603067 - type: nauc_mrr_at_10_max value: 25.09926459399667 - type: nauc_mrr_at_10_std value: -39.89610049620442 - type: nauc_mrr_at_1_diff1 value: 77.10780012025063 - type: 
nauc_mrr_at_1_max value: 19.806657111504965 - type: nauc_mrr_at_1_std value: -37.2015516410441 - type: nauc_mrr_at_20_diff1 value: 74.21838781014907 - type: nauc_mrr_at_20_max value: 25.063007078847527 - type: nauc_mrr_at_20_std value: -39.43384966895805 - type: nauc_mrr_at_3_diff1 value: 74.18933637722046 - type: nauc_mrr_at_3_max value: 24.353710639698495 - type: nauc_mrr_at_3_std value: -40.981597611432065 - type: nauc_mrr_at_5_diff1 value: 74.10850958842639 - type: nauc_mrr_at_5_max value: 24.97711616082726 - type: nauc_mrr_at_5_std value: -40.679823827809464 - type: nauc_ndcg_at_1000_diff1 value: 73.61936333916759 - type: nauc_ndcg_at_1000_max value: 27.116854943289013 - type: nauc_ndcg_at_1000_std value: -37.54463044764084 - type: nauc_ndcg_at_100_diff1 value: 73.34360970001946 - type: nauc_ndcg_at_100_max value: 27.730908397954856 - type: nauc_ndcg_at_100_std value: -36.847816086959746 - type: nauc_ndcg_at_10_diff1 value: 73.26740567929617 - type: nauc_ndcg_at_10_max value: 28.07969523618901 - type: nauc_ndcg_at_10_std value: -39.5613267347007 - type: nauc_ndcg_at_1_diff1 value: 77.10780012025063 - type: nauc_ndcg_at_1_max value: 19.806657111504965 - type: nauc_ndcg_at_1_std value: -37.2015516410441 - type: nauc_ndcg_at_20_diff1 value: 73.03206507305698 - type: nauc_ndcg_at_20_max value: 27.998040243323953 - type: nauc_ndcg_at_20_std value: -37.708810040181056 - type: nauc_ndcg_at_3_diff1 value: 73.14698395698991 - type: nauc_ndcg_at_3_max value: 26.27353390787337 - type: nauc_ndcg_at_3_std value: -42.26161377498877 - type: nauc_ndcg_at_5_diff1 value: 72.95385082418426 - type: nauc_ndcg_at_5_max value: 27.616141938964926 - type: nauc_ndcg_at_5_std value: -41.73620432852272 - type: nauc_precision_at_1000_diff1 value: 62.222222222221426 - type: nauc_precision_at_1000_max value: 84.86150015561716 - type: nauc_precision_at_1000_std value: 70.37659508247653 - type: nauc_precision_at_100_diff1 value: 57.83489866534799 - type: nauc_precision_at_100_max value: 
79.34448289119527 - type: nauc_precision_at_100_std value: 41.03229527104971 - type: nauc_precision_at_10_diff1 value: 67.38577178030296 - type: nauc_precision_at_10_max value: 47.06185741341976 - type: nauc_precision_at_10_std value: -35.28392180735959 - type: nauc_precision_at_1_diff1 value: 77.10780012025063 - type: nauc_precision_at_1_max value: 19.806657111504965 - type: nauc_precision_at_1_std value: -37.2015516410441 - type: nauc_precision_at_20_diff1 value: 62.29131652661053 - type: nauc_precision_at_20_max value: 54.85838779956423 - type: nauc_precision_at_20_std value: -14.987239340181135 - type: nauc_precision_at_3_diff1 value: 69.30752094380478 - type: nauc_precision_at_3_max value: 33.47195304146769 - type: nauc_precision_at_3_std value: -46.94646447336974 - type: nauc_precision_at_5_diff1 value: 67.75036818851238 - type: nauc_precision_at_5_max value: 39.96410162002936 - type: nauc_precision_at_5_std value: -46.30185321551299 - type: nauc_recall_at_1000_diff1 value: 62.22222222222214 - type: nauc_recall_at_1000_max value: 84.86150015561832 - type: nauc_recall_at_1000_std value: 70.37659508247839 - type: nauc_recall_at_100_diff1 value: 57.83489866534869 - type: nauc_recall_at_100_max value: 79.34448289119563 - type: nauc_recall_at_100_std value: 41.03229527105021 - type: nauc_recall_at_10_diff1 value: 67.38577178030309 - type: nauc_recall_at_10_max value: 47.061857413419816 - type: nauc_recall_at_10_std value: -35.283921807359164 - type: nauc_recall_at_1_diff1 value: 77.10780012025063 - type: nauc_recall_at_1_max value: 19.806657111504965 - type: nauc_recall_at_1_std value: -37.2015516410441 - type: nauc_recall_at_20_diff1 value: 62.29131652661064 - type: nauc_recall_at_20_max value: 54.858387799564234 - type: nauc_recall_at_20_std value: -14.987239340180611 - type: nauc_recall_at_3_diff1 value: 69.30752094380468 - type: nauc_recall_at_3_max value: 33.47195304146769 - type: nauc_recall_at_3_std value: -46.94646447336982 - type: nauc_recall_at_5_diff1 
value: 67.75036818851241 - type: nauc_recall_at_5_max value: 39.96410162002941 - type: nauc_recall_at_5_std value: -46.30185321551307 - type: ndcg_at_1 value: 63.1 - type: ndcg_at_10 value: 76.259 - type: ndcg_at_100 value: 77.985 - type: ndcg_at_1000 value: 78.227 - type: ndcg_at_20 value: 77.208 - type: ndcg_at_3 value: 72.684 - type: ndcg_at_5 value: 74.698 - type: precision_at_1 value: 63.1 - type: precision_at_10 value: 8.88 - type: precision_at_100 value: 0.966 - type: precision_at_1000 value: 0.099 - type: precision_at_20 value: 4.625 - type: precision_at_3 value: 26.367 - type: precision_at_5 value: 16.8 - type: recall_at_1 value: 63.1 - type: recall_at_10 value: 88.8 - type: recall_at_100 value: 96.6 - type: recall_at_1000 value: 98.5 - type: recall_at_20 value: 92.5 - type: recall_at_3 value: 79.10000000000001 - type: recall_at_5 value: 84.0 - task: type: Classification dataset: name: MTEB Waimai (default) type: C-MTEB/waimai-classification config: default split: test revision: 339287def212450dcaa9df8c22bf93e9980c7023 metrics: - type: accuracy value: 85.77999999999999 - type: ap value: 68.2216414015039 - type: ap_weighted value: 68.2216414015039 - type: f1 value: 83.82399258910729 - type: f1_weighted value: 85.78923735869199 - type: main_score value: 85.77999999999999 ---

## MiniCPM-Embedding-Light

**MiniCPM-Embedding-Light** is a bilingual & cross-lingual text embedding model developed by ModelBest Inc.
, THUNLP (the Natural Language Processing Lab at Tsinghua University) and NEUIR (the Information Retrieval Group at Northeastern University), featuring:

- Exceptional Chinese and English retrieval capabilities.
- Outstanding cross-lingual retrieval capabilities between Chinese and English.
- Long-text support (up to 8192 tokens).
- Dense vectors and token-level sparse vectors.
- Variable dense vector dimensions (Matryoshka representation [2]).

MiniCPM-Embedding-Light incorporates bidirectional attention and Weighted Mean Pooling [1] in its architecture. The model underwent multi-stage training using approximately 260 million training examples, including open-source, synthetic, and proprietary data.

We also invite you to explore the UltraRAG series:

- Retrieval Model: [MiniCPM-Embedding-Light](https://huggingface.co/openbmb/MiniCPM-Embedding-Light)
- Re-ranking Model: [MiniCPM-Reranker-Light](https://huggingface.co/openbmb/MiniCPM-Reranker-Light)
- Domain-Adaptive RAG Framework: [UltraRAG](https://github.com/openbmb/UltraRAG)

[1] Muennighoff, N. (2022). SGPT: GPT sentence embeddings for semantic search. arXiv preprint arXiv:2202.08904.

[2] Kusupati, Aditya, et al. "Matryoshka representation learning." Advances in Neural Information Processing Systems 35 (2022): 30233-30249.

## Model Information

- Model Size: 440M
- Embedding Dimension: 1024
- Max Input Tokens: 8192

## Usage

### Input Format

MiniCPM-Embedding-Light supports query-side instructions in the following format:

```
Instruction: {{ instruction }} Query: {{ query }}
```

For example:

```
Instruction: 为这个医学问题检索相关回答。Query: 咽喉癌的成因是什么?
```

```
Instruction: Given a claim about climate change, retrieve documents that support or refute the claim. Query: However the warming trend is slower than most climate models have forecast.
```

MiniCPM-Embedding-Light also works in instruction-free mode, in the following format:

```
Query: {{ query }}
```

### Requirements

```
transformers==4.37.2
```

### Demo

#### Huggingface Transformers

```python
from transformers import AutoModel
import torch

model_name = "openbmb/MiniCPM-Embedding-Light"
model = AutoModel.from_pretrained(model_name, trust_remote_code=True, torch_dtype=torch.float16).to("cuda")
# you can use flash_attention_2 for faster inference
# model = AutoModel.from_pretrained(model_name, trust_remote_code=True, attn_implementation="flash_attention_2", torch_dtype=torch.float16).to("cuda")
model.eval()

queries = ["MiniCPM-o 2.6 A GPT-4o Level MLLM for Vision, Speech and Multimodal Live Streaming on Your Phone"]
passages = ["MiniCPM-o 2.6 is the latest and most capable model in the MiniCPM-o series. The model is built in an end-to-end fashion based on SigLip-400M, Whisper-medium-300M, ChatTTS-200M, and Qwen2.5-7B with a total of 8B parameters. It exhibits a significant performance improvement over MiniCPM-V 2.6, and introduces new features for real-time speech conversation and multimodal live streaming."]

embeddings_query_dense, embeddings_query_sparse = model.encode_query(queries, return_sparse_vectors=True)
embeddings_doc_dense, embeddings_doc_sparse = model.encode_corpus(passages, return_sparse_vectors=True)

dense_scores = (embeddings_query_dense @ embeddings_doc_dense.T)
print(dense_scores.tolist())  # [[0.6512398719787598]]
print(model.compute_sparse_score_dicts(embeddings_query_sparse, embeddings_doc_sparse))  # [[0.27202296]]

dense_scores, sparse_scores, mixed_scores = model.compute_score(queries, passages)
print(dense_scores)  # [[0.65123993]]
print(sparse_scores)  # [[0.27202296]]
print(mixed_scores)  # [[0.73284686]]
```

#### Sentence Transformers

```python
import torch
from sentence_transformers import SentenceTransformer

model_name = "openbmb/MiniCPM-Embedding-Light"
model = SentenceTransformer(model_name, trust_remote_code=True, model_kwargs={"torch_dtype": torch.float16})
# you can use flash_attention_2 for faster inference
# model = SentenceTransformer(model_name, trust_remote_code=True, model_kwargs={"attn_implementation": "flash_attention_2", "torch_dtype": torch.float16})

queries = ["中国的首都是哪里?"]  # "What is the capital of China?"
passages = ["beijing", "shanghai"]  # "北京", "上海"

INSTRUCTION = "Query: "
embeddings_query = model.encode(queries, prompt=INSTRUCTION)
embeddings_doc = model.encode(passages)

scores = (embeddings_query @ embeddings_doc.T)
print(scores.tolist())  # [[0.40356746315956116, 0.36183440685272217]]
```

#### Infinity

```python
import asyncio
from infinity_emb import AsyncEngineArray, EngineArgs, AsyncEmbeddingEngine
import numpy as np

array = AsyncEngineArray.from_args([
    EngineArgs(model_name_or_path="openbmb/MiniCPM-Embedding-Light", engine="torch", dtype="float16", bettertransformer=False, pooling_method="mean", trust_remote_code=True),
])

queries = ["中国的首都是哪里?"]  # "What is the capital of China?"
passages = ["beijing", "shanghai"]  # "北京", "上海"

INSTRUCTION = "Query:"
queries = [f"{INSTRUCTION} {query}" for query in queries]

async def embed_text(engine: AsyncEmbeddingEngine, sentences):
    async with engine:
        embeddings, usage = await engine.embed(sentences=sentences)
    return embeddings

queries_embedding = asyncio.run(embed_text(array[0], queries))
passages_embedding = asyncio.run(embed_text(array[0], passages))

scores = (np.array(queries_embedding) @ np.array(passages_embedding).T)
print(scores.tolist())  # [[0.40356746315956116, 0.36183443665504456]]
```

#### FlagEmbedding

```python
from FlagEmbedding import FlagModel

model = FlagModel("openbmb/MiniCPM-Embedding-Light",
                  query_instruction_for_retrieval="Query: ",
                  pooling_method="mean",
                  trust_remote_code=True,
                  normalize_embeddings=True,
                  use_fp16=True)
# You can hack the __init__() method of the FlagEmbedding BaseEmbedder class to use flash_attention_2 for faster inference
# self.model = AutoModel.from_pretrained(
#     model_name_or_path,
#     trust_remote_code=trust_remote_code,
#     cache_dir=cache_dir,
#     # torch_dtype=torch.float16,  # we need to add this line to use fp16
#     # attn_implementation="flash_attention_2",  # we need to add this line to use flash_attention_2
# )

queries = ["中国的首都是哪里?"]  # "What is the capital of China?"
passages = ["beijing", "shanghai"]  # "北京", "上海"

embeddings_query = model.encode_queries(queries)
embeddings_doc = model.encode_corpus(passages)

scores = (embeddings_query @ embeddings_doc.T)
print(scores.tolist())  # [[0.40356746315956116, 0.36183440685272217]]
```

## Evaluation Results

### CN/EN Retrieval Results

| Model | C-MTEB/Retrieval (NDCG@10) | BEIR (NDCG@10) |
|---|---|---|
| bge-large-zh-v1.5 | 70.46 | - |
| gte-large-zh | 72.49 | - |
| Conan-embedding-v1 | 76.67 | - |
| bge-large-en-v1.5 | - | 54.29 |
| modernbert-embed-large | - | 54.36 |
| snowflake-arctic-embed-l | - | 55.98 |
| gte-en-large-v1.5 | - | 57.91 |
| me5-large | 63.66 | 51.43 |
| bge-m3 (Dense) | 65.43 | 48.82 |
| gte-multilingual-base (Dense) | 71.95 | 51.08 |
| jina-embeddings-v3 | 68.60 | 53.88 |
| gte-Qwen2-1.5B-instruct | 71.86 | 58.29 |
| MiniCPM-Embedding | 76.76 | 58.56 |
| MiniCPM-Embedding-Light (Dense) | 72.71 | 55.27 |
| MiniCPM-Embedding-Light (Dense+Sparse) | 73.13 | 56.31 |
| MiniCPM-Embedding-Light (Dense+Sparse) + MiniCPM-Reranker-Light | 76.34 | 61.49 |

### CN-EN Cross-lingual Retrieval Results

| Model | MKQA En-Zh_CN (Recall@20) | NeuCLIR22 (NDCG@10) | NeuCLIR23 (NDCG@10) |
|---|---|---|---|
| me5-large | 44.3 | 9.01 | 25.33 |
| bge-m3 (Dense) | 66.4 | 30.49 | 41.09 |
| gte-multilingual-base (Dense) | 68.2 | 39.46 | 45.86 |
| MiniCPM-Embedding | 72.95 | 52.65 | 49.95 |
| MiniCPM-Embedding-Light (Dense) | 68.29 | 41.17 | 45.83 |
| MiniCPM-Embedding-Light (Dense) + MiniCPM-Reranker-Light | 71.86 | 54.32 | 56.50 |

## License

* The code in this repo is released under the [Apache-2.0 License](https://github.com/openbmb/MiniCPM/blob/main/LICENSE).
* The usage of MiniCPM-Embedding-Light model weights must strictly follow the [MiniCPM Model License](https://github.com/openbmb/MiniCPM/blob/main/MiniCPM%20Model%20License.md).
* The weights of MiniCPM-Embedding-Light are completely free for academic research. After filling out a [questionnaire](https://modelbest.feishu.cn/share/base/form/shrcnpV5ZT9EJ6xYjh3Kx0J6v8g) for registration, they are also available for free commercial use.
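The variable dense-vector dimension (Matryoshka representation) feature mentioned above means a full 1024-dim embedding can be truncated to a shorter prefix and re-normalized before computing similarities. The model card does not document an official API for this, so the following is a minimal numpy sketch of that post-processing step: `truncate_matryoshka` is a hypothetical helper, and the random vectors stand in for real model output.

```python
import numpy as np

def truncate_matryoshka(emb, dim):
    """Keep the first `dim` components and L2-renormalize (hypothetical helper)."""
    out = np.asarray(emb)[..., :dim]
    norms = np.linalg.norm(out, axis=-1, keepdims=True)
    return out / np.clip(norms, 1e-12, None)

# Random stand-ins for 1024-dim, unit-norm embeddings from the model.
rng = np.random.default_rng(0)
full = rng.standard_normal((2, 1024)).astype(np.float32)
full /= np.linalg.norm(full, axis=-1, keepdims=True)

small = truncate_matryoshka(full, 256)
print(small.shape)  # (2, 256)
```

Cosine similarity then works on the truncated vectors exactly as in the demos above (`small @ small.T`), trading a little accuracy for a 4x smaller index.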
[ "BIOSSES", "SCIFACT" ]
Muennighoff/SGPT-125M-weightedmean-nli-bitfit
Muennighoff
sentence-similarity
[ "sentence-transformers", "pytorch", "gpt_neo", "feature-extraction", "sentence-similarity", "mteb", "arxiv:2202.08904", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04Z
2023-05-31T14:48:58+00:00
327
3
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - mteb model-index: - name: SGPT-125M-weightedmean-nli-bitfit results: - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en) type: mteb/amazon_counterfactual config: en split: test revision: 2d8a100785abf0ae21420d2a55b0c56e3e1ea996 metrics: - type: accuracy value: 65.88059701492537 - type: ap value: 28.685493163579785 - type: f1 value: 59.79951005816335 - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (de) type: mteb/amazon_counterfactual config: de split: test revision: 2d8a100785abf0ae21420d2a55b0c56e3e1ea996 metrics: - type: accuracy value: 59.07922912205568 - type: ap value: 73.91887421019034 - type: f1 value: 56.6316368658711 - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en-ext) type: mteb/amazon_counterfactual config: en-ext split: test revision: 2d8a100785abf0ae21420d2a55b0c56e3e1ea996 metrics: - type: accuracy value: 64.91754122938531 - type: ap value: 16.360681214864226 - type: f1 value: 53.126592061523766 - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (ja) type: mteb/amazon_counterfactual config: ja split: test revision: 2d8a100785abf0ae21420d2a55b0c56e3e1ea996 metrics: - type: accuracy value: 56.423982869378996 - type: ap value: 12.143003571907899 - type: f1 value: 45.76363777987471 - task: type: Classification dataset: name: MTEB AmazonPolarityClassification type: mteb/amazon_polarity config: default split: test revision: 80714f8dcf8cefc218ef4f8c5a966dd83f75a0e1 metrics: - type: accuracy value: 74.938225 - type: ap value: 69.58187110320567 - type: f1 value: 74.72744058439321 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (en) type: mteb/amazon_reviews_multi config: en split: test revision: c379a6705fec24a2493fa68e011692605f44e119 metrics: - type: accuracy value: 35.098 - 
type: f1 value: 34.73265651435726 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (de) type: mteb/amazon_reviews_multi config: de split: test revision: c379a6705fec24a2493fa68e011692605f44e119 metrics: - type: accuracy value: 24.516 - type: f1 value: 24.21748200448397 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (es) type: mteb/amazon_reviews_multi config: es split: test revision: c379a6705fec24a2493fa68e011692605f44e119 metrics: - type: accuracy value: 29.097999999999995 - type: f1 value: 28.620040162757093 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (fr) type: mteb/amazon_reviews_multi config: fr split: test revision: c379a6705fec24a2493fa68e011692605f44e119 metrics: - type: accuracy value: 27.395999999999997 - type: f1 value: 27.146888644986284 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (ja) type: mteb/amazon_reviews_multi config: ja split: test revision: c379a6705fec24a2493fa68e011692605f44e119 metrics: - type: accuracy value: 21.724 - type: f1 value: 21.37230564276654 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (zh) type: mteb/amazon_reviews_multi config: zh split: test revision: c379a6705fec24a2493fa68e011692605f44e119 metrics: - type: accuracy value: 23.976 - type: f1 value: 23.741137981755482 - task: type: Retrieval dataset: name: MTEB ArguAna type: arguana config: default split: test revision: 5b3e3697907184a9b77a3c99ee9ea1a9cbb1e4e3 metrics: - type: map_at_1 value: 13.442000000000002 - type: map_at_10 value: 24.275 - type: map_at_100 value: 25.588 - type: map_at_1000 value: 25.659 - type: map_at_3 value: 20.092 - type: map_at_5 value: 22.439999999999998 - type: ndcg_at_1 value: 13.442000000000002 - type: ndcg_at_10 value: 31.04 - type: ndcg_at_100 value: 37.529 - type: ndcg_at_1000 value: 39.348 - type: ndcg_at_3 value: 22.342000000000002 - type: ndcg_at_5 value: 26.595999999999997 - type: 
precision_at_1 value: 13.442000000000002 - type: precision_at_10 value: 5.299 - type: precision_at_100 value: 0.836 - type: precision_at_1000 value: 0.098 - type: precision_at_3 value: 9.625 - type: precision_at_5 value: 7.852 - type: recall_at_1 value: 13.442000000000002 - type: recall_at_10 value: 52.986999999999995 - type: recall_at_100 value: 83.64200000000001 - type: recall_at_1000 value: 97.795 - type: recall_at_3 value: 28.876 - type: recall_at_5 value: 39.26 - task: type: Clustering dataset: name: MTEB ArxivClusteringP2P type: mteb/arxiv-clustering-p2p config: default split: test revision: 0bbdb47bcbe3a90093699aefeed338a0f28a7ee8 metrics: - type: v_measure value: 34.742482477870766 - task: type: Clustering dataset: name: MTEB ArxivClusteringS2S type: mteb/arxiv-clustering-s2s config: default split: test revision: b73bd54100e5abfa6e3a23dcafb46fe4d2438dc3 metrics: - type: v_measure value: 24.67870651472156 - task: type: Clustering dataset: name: MTEB BlurbsClusteringS2S type: slvnwhrl/blurbs-clustering-s2s config: default split: test revision: 9bfff9a7f8f6dc6ffc9da71c48dd48b68696471d metrics: - type: v_measure value: 8.00311862863495 - task: type: Reranking dataset: name: MTEB AskUbuntuDupQuestions type: mteb/askubuntudupquestions-reranking config: default split: test revision: 4d853f94cd57d85ec13805aeeac3ae3e5eb4c49c metrics: - type: map value: 52.63439984994702 - type: mrr value: 65.75704612408214 - task: type: STS dataset: name: MTEB BIOSSES type: mteb/biosses-sts config: default split: test revision: 9ee918f184421b6bd48b78f6c714d86546106103 metrics: - type: cos_sim_pearson value: 72.78000135012542 - type: cos_sim_spearman value: 70.92812216947605 - type: euclidean_pearson value: 77.1169214949292 - type: euclidean_spearman value: 77.10175681583313 - type: manhattan_pearson value: 76.84527031837595 - type: manhattan_spearman value: 77.0704308008438 - task: type: BitextMining dataset: name: MTEB BUCC (de-en) type: mteb/bucc-bitext-mining config: de-en split: 
test revision: d51519689f32196a32af33b075a01d0e7c51e252 metrics: - type: accuracy value: 1.0960334029227559 - type: f1 value: 1.0925539318023658 - type: precision value: 1.0908141962421711 - type: recall value: 1.0960334029227559 - task: type: BitextMining dataset: name: MTEB BUCC (fr-en) type: mteb/bucc-bitext-mining config: fr-en split: test revision: d51519689f32196a32af33b075a01d0e7c51e252 metrics: - type: accuracy value: 0.02201188641866608 - type: f1 value: 0.02201188641866608 - type: precision value: 0.02201188641866608 - type: recall value: 0.02201188641866608 - task: type: BitextMining dataset: name: MTEB BUCC (ru-en) type: mteb/bucc-bitext-mining config: ru-en split: test revision: d51519689f32196a32af33b075a01d0e7c51e252 metrics: - type: accuracy value: 0.0 - type: f1 value: 0.0 - type: precision value: 0.0 - type: recall value: 0.0 - task: type: BitextMining dataset: name: MTEB BUCC (zh-en) type: mteb/bucc-bitext-mining config: zh-en split: test revision: d51519689f32196a32af33b075a01d0e7c51e252 metrics: - type: accuracy value: 0.0 - type: f1 value: 0.0 - type: precision value: 0.0 - type: recall value: 0.0 - task: type: Classification dataset: name: MTEB Banking77Classification type: mteb/banking77 config: default split: test revision: 44fa15921b4c889113cc5df03dd4901b49161ab7 metrics: - type: accuracy value: 74.67857142857142 - type: f1 value: 74.61743413995573 - task: type: Clustering dataset: name: MTEB BiorxivClusteringP2P type: mteb/biorxiv-clustering-p2p config: default split: test revision: 11d0121201d1f1f280e8cc8f3d98fb9c4d9f9c55 metrics: - type: v_measure value: 28.93427045246491 - task: type: Clustering dataset: name: MTEB BiorxivClusteringS2S type: mteb/biorxiv-clustering-s2s config: default split: test revision: c0fab014e1bcb8d3a5e31b2088972a1e01547dc1 metrics: - type: v_measure value: 23.080939123955474 - task: type: Retrieval dataset: name: MTEB CQADupstackAndroidRetrieval type: BeIR/cqadupstack config: default split: test revision: 
2b9f5791698b5be7bc5e10535c8690f20043c3db metrics: - type: map_at_1 value: 18.221999999999998 - type: map_at_10 value: 24.506 - type: map_at_100 value: 25.611 - type: map_at_1000 value: 25.758 - type: map_at_3 value: 22.264999999999997 - type: map_at_5 value: 23.698 - type: ndcg_at_1 value: 23.033 - type: ndcg_at_10 value: 28.719 - type: ndcg_at_100 value: 33.748 - type: ndcg_at_1000 value: 37.056 - type: ndcg_at_3 value: 25.240000000000002 - type: ndcg_at_5 value: 27.12 - type: precision_at_1 value: 23.033 - type: precision_at_10 value: 5.408 - type: precision_at_100 value: 1.004 - type: precision_at_1000 value: 0.158 - type: precision_at_3 value: 11.874 - type: precision_at_5 value: 8.927 - type: recall_at_1 value: 18.221999999999998 - type: recall_at_10 value: 36.355 - type: recall_at_100 value: 58.724 - type: recall_at_1000 value: 81.33500000000001 - type: recall_at_3 value: 26.334000000000003 - type: recall_at_5 value: 31.4 - type: map_at_1 value: 12.058 - type: map_at_10 value: 16.051000000000002 - type: map_at_100 value: 16.772000000000002 - type: map_at_1000 value: 16.871 - type: map_at_3 value: 14.78 - type: map_at_5 value: 15.5 - type: ndcg_at_1 value: 15.35 - type: ndcg_at_10 value: 18.804000000000002 - type: ndcg_at_100 value: 22.346 - type: ndcg_at_1000 value: 25.007 - type: ndcg_at_3 value: 16.768 - type: ndcg_at_5 value: 17.692 - type: precision_at_1 value: 15.35 - type: precision_at_10 value: 3.51 - type: precision_at_100 value: 0.664 - type: precision_at_1000 value: 0.11100000000000002 - type: precision_at_3 value: 7.983 - type: precision_at_5 value: 5.656 - type: recall_at_1 value: 12.058 - type: recall_at_10 value: 23.644000000000002 - type: recall_at_100 value: 39.76 - type: recall_at_1000 value: 58.56 - type: recall_at_3 value: 17.541999999999998 - type: recall_at_5 value: 20.232 - type: map_at_1 value: 21.183 - type: map_at_10 value: 28.9 - type: map_at_100 value: 29.858 - type: map_at_1000 value: 29.953999999999997 - type: map_at_3 value: 
26.58 - type: map_at_5 value: 27.912 - type: ndcg_at_1 value: 24.765 - type: ndcg_at_10 value: 33.339999999999996 - type: ndcg_at_100 value: 37.997 - type: ndcg_at_1000 value: 40.416000000000004 - type: ndcg_at_3 value: 29.044999999999998 - type: ndcg_at_5 value: 31.121 - type: precision_at_1 value: 24.765 - type: precision_at_10 value: 5.599 - type: precision_at_100 value: 0.8699999999999999 - type: precision_at_1000 value: 0.11499999999999999 - type: precision_at_3 value: 13.270999999999999 - type: precision_at_5 value: 9.367 - type: recall_at_1 value: 21.183 - type: recall_at_10 value: 43.875 - type: recall_at_100 value: 65.005 - type: recall_at_1000 value: 83.017 - type: recall_at_3 value: 32.232 - type: recall_at_5 value: 37.308 - type: map_at_1 value: 11.350999999999999 - type: map_at_10 value: 14.953 - type: map_at_100 value: 15.623000000000001 - type: map_at_1000 value: 15.716 - type: map_at_3 value: 13.603000000000002 - type: map_at_5 value: 14.343 - type: ndcg_at_1 value: 12.429 - type: ndcg_at_10 value: 17.319000000000003 - type: ndcg_at_100 value: 20.990000000000002 - type: ndcg_at_1000 value: 23.899 - type: ndcg_at_3 value: 14.605 - type: ndcg_at_5 value: 15.89 - type: precision_at_1 value: 12.429 - type: precision_at_10 value: 2.701 - type: precision_at_100 value: 0.48700000000000004 - type: precision_at_1000 value: 0.078 - type: precision_at_3 value: 6.026 - type: precision_at_5 value: 4.3839999999999995 - type: recall_at_1 value: 11.350999999999999 - type: recall_at_10 value: 23.536 - type: recall_at_100 value: 40.942 - type: recall_at_1000 value: 64.05 - type: recall_at_3 value: 16.195 - type: recall_at_5 value: 19.264 - type: map_at_1 value: 8.08 - type: map_at_10 value: 11.691 - type: map_at_100 value: 12.312 - type: map_at_1000 value: 12.439 - type: map_at_3 value: 10.344000000000001 - type: map_at_5 value: 10.996 - type: ndcg_at_1 value: 10.697 - type: ndcg_at_10 value: 14.48 - type: ndcg_at_100 value: 18.160999999999998 - type: ndcg_at_1000 
value: 21.886 - type: ndcg_at_3 value: 11.872 - type: ndcg_at_5 value: 12.834000000000001 - type: precision_at_1 value: 10.697 - type: precision_at_10 value: 2.811 - type: precision_at_100 value: 0.551 - type: precision_at_1000 value: 0.10200000000000001 - type: precision_at_3 value: 5.804 - type: precision_at_5 value: 4.154 - type: recall_at_1 value: 8.08 - type: recall_at_10 value: 20.235 - type: recall_at_100 value: 37.525999999999996 - type: recall_at_1000 value: 65.106 - type: recall_at_3 value: 12.803999999999998 - type: recall_at_5 value: 15.498999999999999 - type: map_at_1 value: 13.908999999999999 - type: map_at_10 value: 19.256 - type: map_at_100 value: 20.286 - type: map_at_1000 value: 20.429 - type: map_at_3 value: 17.399 - type: map_at_5 value: 18.398999999999997 - type: ndcg_at_1 value: 17.421 - type: ndcg_at_10 value: 23.105999999999998 - type: ndcg_at_100 value: 28.128999999999998 - type: ndcg_at_1000 value: 31.480999999999998 - type: ndcg_at_3 value: 19.789 - type: ndcg_at_5 value: 21.237000000000002 - type: precision_at_1 value: 17.421 - type: precision_at_10 value: 4.331 - type: precision_at_100 value: 0.839 - type: precision_at_1000 value: 0.131 - type: precision_at_3 value: 9.4 - type: precision_at_5 value: 6.776 - type: recall_at_1 value: 13.908999999999999 - type: recall_at_10 value: 31.086999999999996 - type: recall_at_100 value: 52.946000000000005 - type: recall_at_1000 value: 76.546 - type: recall_at_3 value: 21.351 - type: recall_at_5 value: 25.264999999999997 - type: map_at_1 value: 12.598 - type: map_at_10 value: 17.304 - type: map_at_100 value: 18.209 - type: map_at_1000 value: 18.328 - type: map_at_3 value: 15.784 - type: map_at_5 value: 16.669999999999998 - type: ndcg_at_1 value: 15.867999999999999 - type: ndcg_at_10 value: 20.623 - type: ndcg_at_100 value: 25.093 - type: ndcg_at_1000 value: 28.498 - type: ndcg_at_3 value: 17.912 - type: ndcg_at_5 value: 19.198 - type: precision_at_1 value: 15.867999999999999 - type: precision_at_10 
value: 3.7670000000000003 - type: precision_at_100 value: 0.716 - type: precision_at_1000 value: 0.11800000000000001 - type: precision_at_3 value: 8.638 - type: precision_at_5 value: 6.21 - type: recall_at_1 value: 12.598 - type: recall_at_10 value: 27.144000000000002 - type: recall_at_100 value: 46.817 - type: recall_at_1000 value: 71.86099999999999 - type: recall_at_3 value: 19.231 - type: recall_at_5 value: 22.716 - type: map_at_1 value: 12.738416666666666 - type: map_at_10 value: 17.235916666666668 - type: map_at_100 value: 18.063333333333333 - type: map_at_1000 value: 18.18433333333333 - type: map_at_3 value: 15.74775 - type: map_at_5 value: 16.57825 - type: ndcg_at_1 value: 15.487416666666665 - type: ndcg_at_10 value: 20.290166666666668 - type: ndcg_at_100 value: 24.41291666666666 - type: ndcg_at_1000 value: 27.586333333333336 - type: ndcg_at_3 value: 17.622083333333332 - type: ndcg_at_5 value: 18.859916666666667 - type: precision_at_1 value: 15.487416666666665 - type: precision_at_10 value: 3.6226666666666665 - type: precision_at_100 value: 0.6820833333333334 - type: precision_at_1000 value: 0.11216666666666666 - type: precision_at_3 value: 8.163749999999999 - type: precision_at_5 value: 5.865416666666667 - type: recall_at_1 value: 12.738416666666666 - type: recall_at_10 value: 26.599416666666663 - type: recall_at_100 value: 45.41258333333334 - type: recall_at_1000 value: 68.7565 - type: recall_at_3 value: 19.008166666666668 - type: recall_at_5 value: 22.24991666666667 - type: map_at_1 value: 12.307 - type: map_at_10 value: 15.440000000000001 - type: map_at_100 value: 16.033 - type: map_at_1000 value: 16.14 - type: map_at_3 value: 14.393 - type: map_at_5 value: 14.856 - type: ndcg_at_1 value: 14.571000000000002 - type: ndcg_at_10 value: 17.685000000000002 - type: ndcg_at_100 value: 20.882 - type: ndcg_at_1000 value: 23.888 - type: ndcg_at_3 value: 15.739 - type: ndcg_at_5 value: 16.391 - type: precision_at_1 value: 14.571000000000002 - type: precision_at_10 
value: 2.883 - type: precision_at_100 value: 0.49100000000000005 - type: precision_at_1000 value: 0.08 - type: precision_at_3 value: 7.0040000000000004 - type: precision_at_5 value: 4.693 - type: recall_at_1 value: 12.307 - type: recall_at_10 value: 22.566 - type: recall_at_100 value: 37.469 - type: recall_at_1000 value: 60.550000000000004 - type: recall_at_3 value: 16.742 - type: recall_at_5 value: 18.634 - type: map_at_1 value: 6.496 - type: map_at_10 value: 9.243 - type: map_at_100 value: 9.841 - type: map_at_1000 value: 9.946000000000002 - type: map_at_3 value: 8.395 - type: map_at_5 value: 8.872 - type: ndcg_at_1 value: 8.224 - type: ndcg_at_10 value: 11.24 - type: ndcg_at_100 value: 14.524999999999999 - type: ndcg_at_1000 value: 17.686 - type: ndcg_at_3 value: 9.617 - type: ndcg_at_5 value: 10.37 - type: precision_at_1 value: 8.224 - type: precision_at_10 value: 2.0820000000000003 - type: precision_at_100 value: 0.443 - type: precision_at_1000 value: 0.08499999999999999 - type: precision_at_3 value: 4.623 - type: precision_at_5 value: 3.331 - type: recall_at_1 value: 6.496 - type: recall_at_10 value: 15.310000000000002 - type: recall_at_100 value: 30.680000000000003 - type: recall_at_1000 value: 54.335 - type: recall_at_3 value: 10.691 - type: recall_at_5 value: 12.687999999999999 - type: map_at_1 value: 13.843 - type: map_at_10 value: 17.496000000000002 - type: map_at_100 value: 18.304000000000002 - type: map_at_1000 value: 18.426000000000002 - type: map_at_3 value: 16.225 - type: map_at_5 value: 16.830000000000002 - type: ndcg_at_1 value: 16.698 - type: ndcg_at_10 value: 20.301 - type: ndcg_at_100 value: 24.523 - type: ndcg_at_1000 value: 27.784 - type: ndcg_at_3 value: 17.822 - type: ndcg_at_5 value: 18.794 - type: precision_at_1 value: 16.698 - type: precision_at_10 value: 3.3579999999999997 - type: precision_at_100 value: 0.618 - type: precision_at_1000 value: 0.101 - type: precision_at_3 value: 7.898 - type: precision_at_5 value: 5.428999999999999 - 
type: recall_at_1 value: 13.843 - type: recall_at_10 value: 25.887999999999998 - type: recall_at_100 value: 45.028 - type: recall_at_1000 value: 68.991 - type: recall_at_3 value: 18.851000000000003 - type: recall_at_5 value: 21.462 - type: map_at_1 value: 13.757 - type: map_at_10 value: 19.27 - type: map_at_100 value: 20.461 - type: map_at_1000 value: 20.641000000000002 - type: map_at_3 value: 17.865000000000002 - type: map_at_5 value: 18.618000000000002 - type: ndcg_at_1 value: 16.996 - type: ndcg_at_10 value: 22.774 - type: ndcg_at_100 value: 27.675 - type: ndcg_at_1000 value: 31.145 - type: ndcg_at_3 value: 20.691000000000003 - type: ndcg_at_5 value: 21.741 - type: precision_at_1 value: 16.996 - type: precision_at_10 value: 4.545 - type: precision_at_100 value: 1.036 - type: precision_at_1000 value: 0.185 - type: precision_at_3 value: 10.145 - type: precision_at_5 value: 7.391 - type: recall_at_1 value: 13.757 - type: recall_at_10 value: 28.233999999999998 - type: recall_at_100 value: 51.05499999999999 - type: recall_at_1000 value: 75.35300000000001 - type: recall_at_3 value: 21.794 - type: recall_at_5 value: 24.614 - type: map_at_1 value: 9.057 - type: map_at_10 value: 12.720999999999998 - type: map_at_100 value: 13.450000000000001 - type: map_at_1000 value: 13.564000000000002 - type: map_at_3 value: 11.34 - type: map_at_5 value: 12.245000000000001 - type: ndcg_at_1 value: 9.797 - type: ndcg_at_10 value: 15.091 - type: ndcg_at_100 value: 18.886 - type: ndcg_at_1000 value: 22.29 - type: ndcg_at_3 value: 12.365 - type: ndcg_at_5 value: 13.931 - type: precision_at_1 value: 9.797 - type: precision_at_10 value: 2.477 - type: precision_at_100 value: 0.466 - type: precision_at_1000 value: 0.082 - type: precision_at_3 value: 5.299 - type: precision_at_5 value: 4.067 - type: recall_at_1 value: 9.057 - type: recall_at_10 value: 21.319 - type: recall_at_100 value: 38.999 - type: recall_at_1000 value: 65.374 - type: recall_at_3 value: 14.331 - type: recall_at_5 value: 
17.916999999999998 - task: type: Retrieval dataset: name: MTEB ClimateFEVER type: climate-fever config: default split: test revision: 392b78eb68c07badcd7c2cd8f39af108375dfcce metrics: - type: map_at_1 value: 3.714 - type: map_at_10 value: 6.926 - type: map_at_100 value: 7.879 - type: map_at_1000 value: 8.032 - type: map_at_3 value: 5.504 - type: map_at_5 value: 6.357 - type: ndcg_at_1 value: 8.86 - type: ndcg_at_10 value: 11.007 - type: ndcg_at_100 value: 16.154 - type: ndcg_at_1000 value: 19.668 - type: ndcg_at_3 value: 8.103 - type: ndcg_at_5 value: 9.456000000000001 - type: precision_at_1 value: 8.86 - type: precision_at_10 value: 3.7199999999999998 - type: precision_at_100 value: 0.9169999999999999 - type: precision_at_1000 value: 0.156 - type: precision_at_3 value: 6.254 - type: precision_at_5 value: 5.380999999999999 - type: recall_at_1 value: 3.714 - type: recall_at_10 value: 14.382 - type: recall_at_100 value: 33.166000000000004 - type: recall_at_1000 value: 53.444 - type: recall_at_3 value: 7.523000000000001 - type: recall_at_5 value: 10.91 - task: type: Retrieval dataset: name: MTEB DBPedia type: dbpedia-entity config: default split: test revision: f097057d03ed98220bc7309ddb10b71a54d667d6 metrics: - type: map_at_1 value: 1.764 - type: map_at_10 value: 3.8600000000000003 - type: map_at_100 value: 5.457 - type: map_at_1000 value: 5.938000000000001 - type: map_at_3 value: 2.667 - type: map_at_5 value: 3.2199999999999998 - type: ndcg_at_1 value: 14.000000000000002 - type: ndcg_at_10 value: 10.868 - type: ndcg_at_100 value: 12.866 - type: ndcg_at_1000 value: 17.43 - type: ndcg_at_3 value: 11.943 - type: ndcg_at_5 value: 11.66 - type: precision_at_1 value: 19.25 - type: precision_at_10 value: 10.274999999999999 - type: precision_at_100 value: 3.527 - type: precision_at_1000 value: 0.9119999999999999 - type: precision_at_3 value: 14.917 - type: precision_at_5 value: 13.5 - type: recall_at_1 value: 1.764 - type: recall_at_10 value: 6.609 - type: recall_at_100 
value: 17.616 - type: recall_at_1000 value: 33.085 - type: recall_at_3 value: 3.115 - type: recall_at_5 value: 4.605 - task: type: Classification dataset: name: MTEB EmotionClassification type: mteb/emotion config: default split: test revision: 829147f8f75a25f005913200eb5ed41fae320aa1 metrics: - type: accuracy value: 42.225 - type: f1 value: 37.563516542112104 - task: type: Retrieval dataset: name: MTEB FEVER type: fever config: default split: test revision: 1429cf27e393599b8b359b9b72c666f96b2525f9 metrics: - type: map_at_1 value: 11.497 - type: map_at_10 value: 15.744 - type: map_at_100 value: 16.3 - type: map_at_1000 value: 16.365 - type: map_at_3 value: 14.44 - type: map_at_5 value: 15.18 - type: ndcg_at_1 value: 12.346 - type: ndcg_at_10 value: 18.398999999999997 - type: ndcg_at_100 value: 21.399 - type: ndcg_at_1000 value: 23.442 - type: ndcg_at_3 value: 15.695 - type: ndcg_at_5 value: 17.027 - type: precision_at_1 value: 12.346 - type: precision_at_10 value: 2.798 - type: precision_at_100 value: 0.445 - type: precision_at_1000 value: 0.063 - type: precision_at_3 value: 6.586 - type: precision_at_5 value: 4.665 - type: recall_at_1 value: 11.497 - type: recall_at_10 value: 25.636 - type: recall_at_100 value: 39.894 - type: recall_at_1000 value: 56.181000000000004 - type: recall_at_3 value: 18.273 - type: recall_at_5 value: 21.474 - task: type: Retrieval dataset: name: MTEB FiQA2018 type: fiqa config: default split: test revision: 41b686a7f28c59bcaaa5791efd47c67c8ebe28be metrics: - type: map_at_1 value: 3.637 - type: map_at_10 value: 6.084 - type: map_at_100 value: 6.9190000000000005 - type: map_at_1000 value: 7.1080000000000005 - type: map_at_3 value: 5.071 - type: map_at_5 value: 5.5649999999999995 - type: ndcg_at_1 value: 7.407 - type: ndcg_at_10 value: 8.94 - type: ndcg_at_100 value: 13.594999999999999 - type: ndcg_at_1000 value: 18.29 - type: ndcg_at_3 value: 7.393 - type: ndcg_at_5 value: 7.854 - type: precision_at_1 value: 7.407 - type: precision_at_10 
value: 2.778 - type: precision_at_100 value: 0.75 - type: precision_at_1000 value: 0.154 - type: precision_at_3 value: 5.144 - type: precision_at_5 value: 3.981 - type: recall_at_1 value: 3.637 - type: recall_at_10 value: 11.821 - type: recall_at_100 value: 30.18 - type: recall_at_1000 value: 60.207 - type: recall_at_3 value: 6.839 - type: recall_at_5 value: 8.649 - task: type: Retrieval dataset: name: MTEB HotpotQA type: hotpotqa config: default split: test revision: 766870b35a1b9ca65e67a0d1913899973551fc6c metrics: - type: map_at_1 value: 9.676 - type: map_at_10 value: 13.350999999999999 - type: map_at_100 value: 13.919 - type: map_at_1000 value: 14.01 - type: map_at_3 value: 12.223 - type: map_at_5 value: 12.812000000000001 - type: ndcg_at_1 value: 19.352 - type: ndcg_at_10 value: 17.727 - type: ndcg_at_100 value: 20.837 - type: ndcg_at_1000 value: 23.412 - type: ndcg_at_3 value: 15.317 - type: ndcg_at_5 value: 16.436 - type: precision_at_1 value: 19.352 - type: precision_at_10 value: 3.993 - type: precision_at_100 value: 0.651 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 9.669 - type: precision_at_5 value: 6.69 - type: recall_at_1 value: 9.676 - type: recall_at_10 value: 19.966 - type: recall_at_100 value: 32.573 - type: recall_at_1000 value: 49.905 - type: recall_at_3 value: 14.504 - type: recall_at_5 value: 16.725 - task: type: Classification dataset: name: MTEB ImdbClassification type: mteb/imdb config: default split: test revision: 8d743909f834c38949e8323a8a6ce8721ea6c7f4 metrics: - type: accuracy value: 62.895999999999994 - type: ap value: 58.47769349850157 - type: f1 value: 62.67885149592086 - task: type: Retrieval dataset: name: MTEB MSMARCO type: msmarco config: default split: validation revision: e6838a846e2408f22cf5cc337ebc83e0bcf77849 metrics: - type: map_at_1 value: 2.88 - type: map_at_10 value: 4.914000000000001 - type: map_at_100 value: 5.459 - type: map_at_1000 value: 5.538 - type: map_at_3 value: 4.087 - type: map_at_5 
value: 4.518 - type: ndcg_at_1 value: 2.937 - type: ndcg_at_10 value: 6.273 - type: ndcg_at_100 value: 9.426 - type: ndcg_at_1000 value: 12.033000000000001 - type: ndcg_at_3 value: 4.513 - type: ndcg_at_5 value: 5.292 - type: precision_at_1 value: 2.937 - type: precision_at_10 value: 1.089 - type: precision_at_100 value: 0.27699999999999997 - type: precision_at_1000 value: 0.051000000000000004 - type: precision_at_3 value: 1.9290000000000003 - type: precision_at_5 value: 1.547 - type: recall_at_1 value: 2.88 - type: recall_at_10 value: 10.578 - type: recall_at_100 value: 26.267000000000003 - type: recall_at_1000 value: 47.589999999999996 - type: recall_at_3 value: 5.673 - type: recall_at_5 value: 7.545 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (en) type: mteb/mtop_domain config: en split: test revision: a7e2a951126a26fc8c6a69f835f33a346ba259e3 metrics: - type: accuracy value: 81.51846785225717 - type: f1 value: 81.648869152345 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (de) type: mteb/mtop_domain config: de split: test revision: a7e2a951126a26fc8c6a69f835f33a346ba259e3 metrics: - type: accuracy value: 60.37475345167653 - type: f1 value: 58.452649375517026 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (es) type: mteb/mtop_domain config: es split: test revision: a7e2a951126a26fc8c6a69f835f33a346ba259e3 metrics: - type: accuracy value: 67.36824549699799 - type: f1 value: 65.35927434998516 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (fr) type: mteb/mtop_domain config: fr split: test revision: a7e2a951126a26fc8c6a69f835f33a346ba259e3 metrics: - type: accuracy value: 63.12871907297212 - type: f1 value: 61.37620329272278 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (hi) type: mteb/mtop_domain config: hi split: test revision: a7e2a951126a26fc8c6a69f835f33a346ba259e3 metrics: - type: accuracy value: 47.04553603442094 - type: 
f1 value: 46.20389912644561 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (th) type: mteb/mtop_domain config: th split: test revision: a7e2a951126a26fc8c6a69f835f33a346ba259e3 metrics: - type: accuracy value: 52.282097649186255 - type: f1 value: 50.75489206473579 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (en) type: mteb/mtop_intent config: en split: test revision: 6299947a7777084cc2d4b64235bf7190381ce755 metrics: - type: accuracy value: 58.2421340629275 - type: f1 value: 40.11696046622642 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (de) type: mteb/mtop_intent config: de split: test revision: 6299947a7777084cc2d4b64235bf7190381ce755 metrics: - type: accuracy value: 45.069033530571986 - type: f1 value: 30.468468273374967 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (es) type: mteb/mtop_intent config: es split: test revision: 6299947a7777084cc2d4b64235bf7190381ce755 metrics: - type: accuracy value: 48.80920613742495 - type: f1 value: 32.65985375400447 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (fr) type: mteb/mtop_intent config: fr split: test revision: 6299947a7777084cc2d4b64235bf7190381ce755 metrics: - type: accuracy value: 44.337613529595984 - type: f1 value: 29.302047435606436 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (hi) type: mteb/mtop_intent config: hi split: test revision: 6299947a7777084cc2d4b64235bf7190381ce755 metrics: - type: accuracy value: 34.198637504481894 - type: f1 value: 22.063706032248408 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (th) type: mteb/mtop_intent config: th split: test revision: 6299947a7777084cc2d4b64235bf7190381ce755 metrics: - type: accuracy value: 43.11030741410488 - type: f1 value: 26.92408933648504 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (af) type: mteb/amazon_massive_intent config: 
af split: test revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea metrics: - type: accuracy value: 37.79421654337593 - type: f1 value: 36.81580701507746 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (am) type: mteb/amazon_massive_intent config: am split: test revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea metrics: - type: accuracy value: 23.722259583053127 - type: f1 value: 23.235269695764273 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ar) type: mteb/amazon_massive_intent config: ar split: test revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea metrics: - type: accuracy value: 29.64021519838601 - type: f1 value: 28.273175327650137 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (az) type: mteb/amazon_massive_intent config: az split: test revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea metrics: - type: accuracy value: 39.4754539340955 - type: f1 value: 39.25997361415121 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (bn) type: mteb/amazon_massive_intent config: bn split: test revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea metrics: - type: accuracy value: 26.550100874243444 - type: f1 value: 25.607924873522975 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (cy) type: mteb/amazon_massive_intent config: cy split: test revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea metrics: - type: accuracy value: 38.78278412911904 - type: f1 value: 37.64180582626517 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (da) type: mteb/amazon_massive_intent config: da split: test revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea metrics: - type: accuracy value: 43.557498318762605 - type: f1 value: 41.35305173800667 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (de) type: mteb/amazon_massive_intent config: de split: test revision: 
072a486a144adf7f4479a4a0dddb2152e161e1ea metrics: - type: accuracy value: 40.39340954942838 - type: f1 value: 38.33393219528934 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (el) type: mteb/amazon_massive_intent config: el split: test revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea metrics: - type: accuracy value: 37.28648285137861 - type: f1 value: 36.64005906680284 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (en) type: mteb/amazon_massive_intent config: en split: test revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea metrics: - type: accuracy value: 58.080026899798256 - type: f1 value: 56.49243881660991 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (es) type: mteb/amazon_massive_intent config: es split: test revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea metrics: - type: accuracy value: 41.176866173503704 - type: f1 value: 40.66779962225799 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (fa) type: mteb/amazon_massive_intent config: fa split: test revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea metrics: - type: accuracy value: 36.422326832548755 - type: f1 value: 34.6441738042885 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (fi) type: mteb/amazon_massive_intent config: fi split: test revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea metrics: - type: accuracy value: 38.75588433086752 - type: f1 value: 37.26725894668694 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (fr) type: mteb/amazon_massive_intent config: fr split: test revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea metrics: - type: accuracy value: 43.67182246133153 - type: f1 value: 42.351846624566605 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (he) type: mteb/amazon_massive_intent config: he split: test revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea metrics: - type: 
accuracy value: 31.980497646267658 - type: f1 value: 30.557928872809008 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (hi) type: mteb/amazon_massive_intent config: hi split: test revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea metrics: - type: accuracy value: 28.039677202420982 - type: f1 value: 28.428418145508306 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (hu) type: mteb/amazon_massive_intent config: hu split: test revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea metrics: - type: accuracy value: 38.13718897108272 - type: f1 value: 37.057406988196874 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (hy) type: mteb/amazon_massive_intent config: hy split: test revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea metrics: - type: accuracy value: 26.05245460659045 - type: f1 value: 25.25483953344816 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (id) type: mteb/amazon_massive_intent config: id split: test revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea metrics: - type: accuracy value: 41.156691324815064 - type: f1 value: 40.83715033247605 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (is) type: mteb/amazon_massive_intent config: is split: test revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea metrics: - type: accuracy value: 38.62811028917284 - type: f1 value: 37.67691901246032 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (it) type: mteb/amazon_massive_intent config: it split: test revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea metrics: - type: accuracy value: 44.0383322125084 - type: f1 value: 43.77259010877456 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ja) type: mteb/amazon_massive_intent config: ja split: test revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea metrics: - type: accuracy value: 46.20712844653666 - type: f1 value: 
44.66632875940824 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (jv) type: mteb/amazon_massive_intent config: jv split: test revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea metrics: - type: accuracy value: 37.60591795561533 - type: f1 value: 36.581071742378015 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ka) type: mteb/amazon_massive_intent config: ka split: test revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea metrics: - type: accuracy value: 24.47209145931405 - type: f1 value: 24.238209697895606 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (km) type: mteb/amazon_massive_intent config: km split: test revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea metrics: - type: accuracy value: 26.23739071956961 - type: f1 value: 25.378783150845052 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (kn) type: mteb/amazon_massive_intent config: kn split: test revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea metrics: - type: accuracy value: 17.831203765971754 - type: f1 value: 17.275078420466343 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ko) type: mteb/amazon_massive_intent config: ko split: test revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea metrics: - type: accuracy value: 37.266308002689975 - type: f1 value: 36.92473791708214 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (lv) type: mteb/amazon_massive_intent config: lv split: test revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea metrics: - type: accuracy value: 40.93140551445864 - type: f1 value: 40.825227889641965 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ml) type: mteb/amazon_massive_intent config: ml split: test revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea metrics: - type: accuracy value: 17.88500336247478 - type: f1 value: 17.621569082971817 - task: type: Classification dataset: 
name: MTEB MassiveIntentClassification (mn) type: mteb/amazon_massive_intent config: mn split: test revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea metrics: - type: accuracy value: 32.975790181573636 - type: f1 value: 33.402014633349665 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ms) type: mteb/amazon_massive_intent config: ms split: test revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea metrics: - type: accuracy value: 40.91123066577001 - type: f1 value: 40.09538559124075 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (my) type: mteb/amazon_massive_intent config: my split: test revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea metrics: - type: accuracy value: 17.834566240753194 - type: f1 value: 17.006381849454314 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (nb) type: mteb/amazon_massive_intent config: nb split: test revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea metrics: - type: accuracy value: 39.47881640887693 - type: f1 value: 37.819934317839305 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (nl) type: mteb/amazon_massive_intent config: nl split: test revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea metrics: - type: accuracy value: 41.76193678547412 - type: f1 value: 40.281991759509694 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (pl) type: mteb/amazon_massive_intent config: pl split: test revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea metrics: - type: accuracy value: 42.61936785474109 - type: f1 value: 40.83673914649905 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (pt) type: mteb/amazon_massive_intent config: pt split: test revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea metrics: - type: accuracy value: 44.54270342972427 - type: f1 value: 43.45243164278448 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ro) type: 
mteb/amazon_massive_intent config: ro split: test revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea metrics: - type: accuracy value: 39.96973772696705 - type: f1 value: 38.74209466530094 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ru) type: mteb/amazon_massive_intent config: ru split: test revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea metrics: - type: accuracy value: 37.461331540013454 - type: f1 value: 36.91132021821187 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (sl) type: mteb/amazon_massive_intent config: sl split: test revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea metrics: - type: accuracy value: 38.28850033624748 - type: f1 value: 37.37259394049676 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (sq) type: mteb/amazon_massive_intent config: sq split: test revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea metrics: - type: accuracy value: 40.95494283792872 - type: f1 value: 39.767707902869084 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (sv) type: mteb/amazon_massive_intent config: sv split: test revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea metrics: - type: accuracy value: 41.85272360457296 - type: f1 value: 40.42848260365438 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (sw) type: mteb/amazon_massive_intent config: sw split: test revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea metrics: - type: accuracy value: 38.328850033624754 - type: f1 value: 36.90334596675622 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ta) type: mteb/amazon_massive_intent config: ta split: test revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea metrics: - type: accuracy value: 19.031607262945528 - type: f1 value: 18.66510306325761 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (te) type: mteb/amazon_massive_intent config: te split: test revision: 
072a486a144adf7f4479a4a0dddb2152e161e1ea metrics: - type: accuracy value: 19.38466711499664 - type: f1 value: 19.186399376652535 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (th) type: mteb/amazon_massive_intent config: th split: test revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea metrics: - type: accuracy value: 34.088769334229994 - type: f1 value: 34.20383086009429 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (tl) type: mteb/amazon_massive_intent config: tl split: test revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea metrics: - type: accuracy value: 40.285810356422324 - type: f1 value: 39.361500249640414 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (tr) type: mteb/amazon_massive_intent config: tr split: test revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea metrics: - type: accuracy value: 38.860121049092136 - type: f1 value: 37.81916859627235 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ur) type: mteb/amazon_massive_intent config: ur split: test revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea metrics: - type: accuracy value: 27.834566240753194 - type: f1 value: 26.898389386106487 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (vi) type: mteb/amazon_massive_intent config: vi split: test revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea metrics: - type: accuracy value: 38.70544720914593 - type: f1 value: 38.280026442024415 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (zh-CN) type: mteb/amazon_massive_intent config: zh-CN split: test revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea metrics: - type: accuracy value: 45.78009414929387 - type: f1 value: 44.21526778674136 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (zh-TW) type: mteb/amazon_massive_intent config: zh-TW split: test revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea 
metrics: - type: accuracy value: 42.32010759919301 - type: f1 value: 42.25772977490916 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (af) type: mteb/amazon_massive_scenario config: af split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 40.24546065904506 - type: f1 value: 38.79924050989544 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (am) type: mteb/amazon_massive_scenario config: am split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 25.68930733019502 - type: f1 value: 25.488166279162712 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ar) type: mteb/amazon_massive_scenario config: ar split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 32.39744451916611 - type: f1 value: 31.863029579075775 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (az) type: mteb/amazon_massive_scenario config: az split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 40.53127101546738 - type: f1 value: 39.707079033948936 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (bn) type: mteb/amazon_massive_scenario config: bn split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 27.23268325487559 - type: f1 value: 26.443653281858793 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (cy) type: mteb/amazon_massive_scenario config: cy split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 38.69872225958305 - type: f1 value: 36.55930387892567 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (da) type: mteb/amazon_massive_scenario config: da split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 
44.75453934095494 - type: f1 value: 42.87356484024154 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (de) type: mteb/amazon_massive_scenario config: de split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 41.355077336919976 - type: f1 value: 39.82365179458047 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (el) type: mteb/amazon_massive_scenario config: el split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 38.43981170141224 - type: f1 value: 37.02538368296387 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (en) type: mteb/amazon_massive_scenario config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 66.33826496301278 - type: f1 value: 65.89634765029932 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (es) type: mteb/amazon_massive_scenario config: es split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 44.17955615332885 - type: f1 value: 43.10228811620319 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (fa) type: mteb/amazon_massive_scenario config: fa split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 34.82851378614661 - type: f1 value: 33.95952441502803 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (fi) type: mteb/amazon_massive_scenario config: fi split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 40.561533288500335 - type: f1 value: 38.04939011733627 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (fr) type: mteb/amazon_massive_scenario config: fr split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 45.917955615332886 - type: f1 value: 
44.65741971572902 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (he) type: mteb/amazon_massive_scenario config: he split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 32.08473436449227 - type: f1 value: 29.53932929808133 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (hi) type: mteb/amazon_massive_scenario config: hi split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 28.369199731002016 - type: f1 value: 27.52902837981212 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (hu) type: mteb/amazon_massive_scenario config: hu split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 39.49226630800269 - type: f1 value: 37.3272340470504 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (hy) type: mteb/amazon_massive_scenario config: hy split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 25.904505716207133 - type: f1 value: 24.547396574853444 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (id) type: mteb/amazon_massive_scenario config: id split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 40.95830531271016 - type: f1 value: 40.177843177422226 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (is) type: mteb/amazon_massive_scenario config: is split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 38.564223268325485 - type: f1 value: 37.35307758495248 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (it) type: mteb/amazon_massive_scenario config: it split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 46.58708809683928 - type: f1 value: 44.103900526804985 - task: type: 
Classification dataset: name: MTEB MassiveScenarioClassification (ja) type: mteb/amazon_massive_scenario config: ja split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 46.24747814391393 - type: f1 value: 45.4107101796664 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (jv) type: mteb/amazon_massive_scenario config: jv split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 39.6570275722932 - type: f1 value: 38.82737576832412 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ka) type: mteb/amazon_massive_scenario config: ka split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 25.279085406859448 - type: f1 value: 23.662661686788493 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (km) type: mteb/amazon_massive_scenario config: km split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 28.97108271687962 - type: f1 value: 27.195758324189246 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (kn) type: mteb/amazon_massive_scenario config: kn split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 19.27370544720915 - type: f1 value: 18.694271924323637 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ko) type: mteb/amazon_massive_scenario config: ko split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 35.729657027572294 - type: f1 value: 34.38287006177308 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (lv) type: mteb/amazon_massive_scenario config: lv split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 39.57296570275723 - type: f1 value: 38.074945140886925 - task: type: Classification dataset: name: MTEB 
MassiveScenarioClassification (ml) type: mteb/amazon_massive_scenario config: ml split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 19.895763281775388 - type: f1 value: 20.00931364846829 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (mn) type: mteb/amazon_massive_scenario config: mn split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 32.431069266980494 - type: f1 value: 31.395958664782576 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ms) type: mteb/amazon_massive_scenario config: ms split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 42.32347007397445 - type: f1 value: 40.81374026314701 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (my) type: mteb/amazon_massive_scenario config: my split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 20.864156018829856 - type: f1 value: 20.409870408935436 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (nb) type: mteb/amazon_massive_scenario config: nb split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 40.47074646940148 - type: f1 value: 39.19044149415904 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (nl) type: mteb/amazon_massive_scenario config: nl split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 43.591123066577 - type: f1 value: 41.43420363064241 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (pl) type: mteb/amazon_massive_scenario config: pl split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 41.876260928043045 - type: f1 value: 41.192117676667614 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (pt) 
type: mteb/amazon_massive_scenario config: pt split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 46.30800268997983 - type: f1 value: 45.25536730126799 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ro) type: mteb/amazon_massive_scenario config: ro split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 42.525218560860786 - type: f1 value: 41.02418109296485 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ru) type: mteb/amazon_massive_scenario config: ru split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 35.94821788836584 - type: f1 value: 35.08598314806566 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (sl) type: mteb/amazon_massive_scenario config: sl split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 38.69199731002017 - type: f1 value: 37.68119408674127 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (sq) type: mteb/amazon_massive_scenario config: sq split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 40.474108944182916 - type: f1 value: 39.480530387013594 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (sv) type: mteb/amazon_massive_scenario config: sv split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 41.523201075991935 - type: f1 value: 40.20097996024383 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (sw) type: mteb/amazon_massive_scenario config: sw split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 39.54942837928716 - type: f1 value: 38.185561243338064 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ta) type: mteb/amazon_massive_scenario 
config: ta split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 22.8782784129119 - type: f1 value: 22.239467186721456 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (te) type: mteb/amazon_massive_scenario config: te split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 20.51445864156019 - type: f1 value: 19.999047885530217 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (th) type: mteb/amazon_massive_scenario config: th split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 34.92602555480834 - type: f1 value: 33.24016717215723 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (tl) type: mteb/amazon_massive_scenario config: tl split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 40.74983187626093 - type: f1 value: 39.30274328728882 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (tr) type: mteb/amazon_massive_scenario config: tr split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 39.06859448554136 - type: f1 value: 39.21542039662971 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ur) type: mteb/amazon_massive_scenario config: ur split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 29.747814391392062 - type: f1 value: 28.261836892220447 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (vi) type: mteb/amazon_massive_scenario config: vi split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 38.02286482851379 - type: f1 value: 37.8742438608697 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (zh-CN) type: mteb/amazon_massive_scenario config: zh-CN split: test 
revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 48.550773369199725 - type: f1 value: 46.7399625882649 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (zh-TW) type: mteb/amazon_massive_scenario config: zh-TW split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 45.17821116341628 - type: f1 value: 44.84809741811729 - task: type: Clustering dataset: name: MTEB MedrxivClusteringP2P type: mteb/medrxiv-clustering-p2p config: default split: test revision: dcefc037ef84348e49b0d29109e891c01067226b metrics: - type: v_measure value: 28.301902023313875 - task: type: Clustering dataset: name: MTEB MedrxivClusteringS2S type: mteb/medrxiv-clustering-s2s config: default split: test revision: 3cd0e71dfbe09d4de0f9e5ecba43e7ce280959dc metrics: - type: v_measure value: 24.932123582259287 - task: type: Reranking dataset: name: MTEB MindSmallReranking type: mteb/mind_small config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 29.269341041468326 - type: mrr value: 30.132140876875717 - task: type: Retrieval dataset: name: MTEB NFCorpus type: nfcorpus config: default split: test revision: 7eb63cc0c1eb59324d709ebed25fcab851fa7610 metrics: - type: map_at_1 value: 1.2269999999999999 - type: map_at_10 value: 3.081 - type: map_at_100 value: 4.104 - type: map_at_1000 value: 4.989 - type: map_at_3 value: 2.221 - type: map_at_5 value: 2.535 - type: ndcg_at_1 value: 15.015 - type: ndcg_at_10 value: 11.805 - type: ndcg_at_100 value: 12.452 - type: ndcg_at_1000 value: 22.284000000000002 - type: ndcg_at_3 value: 13.257 - type: ndcg_at_5 value: 12.199 - type: precision_at_1 value: 16.409000000000002 - type: precision_at_10 value: 9.102 - type: precision_at_100 value: 3.678 - type: precision_at_1000 value: 1.609 - type: precision_at_3 value: 12.797 - type: precision_at_5 value: 10.464 - type: recall_at_1 value: 1.2269999999999999 - type: 
recall_at_10 value: 5.838 - type: recall_at_100 value: 15.716 - type: recall_at_1000 value: 48.837 - type: recall_at_3 value: 2.828 - type: recall_at_5 value: 3.697 - task: type: Retrieval dataset: name: MTEB NQ type: nq config: default split: test revision: 6062aefc120bfe8ece5897809fb2e53bfe0d128c metrics: - type: map_at_1 value: 3.515 - type: map_at_10 value: 5.884 - type: map_at_100 value: 6.510000000000001 - type: map_at_1000 value: 6.598999999999999 - type: map_at_3 value: 4.8919999999999995 - type: map_at_5 value: 5.391 - type: ndcg_at_1 value: 4.056 - type: ndcg_at_10 value: 7.6259999999999994 - type: ndcg_at_100 value: 11.08 - type: ndcg_at_1000 value: 13.793 - type: ndcg_at_3 value: 5.537 - type: ndcg_at_5 value: 6.45 - type: precision_at_1 value: 4.056 - type: precision_at_10 value: 1.4569999999999999 - type: precision_at_100 value: 0.347 - type: precision_at_1000 value: 0.061 - type: precision_at_3 value: 2.6069999999999998 - type: precision_at_5 value: 2.086 - type: recall_at_1 value: 3.515 - type: recall_at_10 value: 12.312 - type: recall_at_100 value: 28.713 - type: recall_at_1000 value: 50.027 - type: recall_at_3 value: 6.701 - type: recall_at_5 value: 8.816 - task: type: Retrieval dataset: name: MTEB QuoraRetrieval type: quora config: default split: test revision: 6205996560df11e3a3da9ab4f926788fc30a7db4 metrics: - type: map_at_1 value: 61.697 - type: map_at_10 value: 74.20400000000001 - type: map_at_100 value: 75.023 - type: map_at_1000 value: 75.059 - type: map_at_3 value: 71.265 - type: map_at_5 value: 73.001 - type: ndcg_at_1 value: 70.95 - type: ndcg_at_10 value: 78.96 - type: ndcg_at_100 value: 81.26 - type: ndcg_at_1000 value: 81.679 - type: ndcg_at_3 value: 75.246 - type: ndcg_at_5 value: 77.092 - type: precision_at_1 value: 70.95 - type: precision_at_10 value: 11.998000000000001 - type: precision_at_100 value: 1.451 - type: precision_at_1000 value: 0.154 - type: precision_at_3 value: 32.629999999999995 - type: precision_at_5 value: 
21.573999999999998 - type: recall_at_1 value: 61.697 - type: recall_at_10 value: 88.23299999999999 - type: recall_at_100 value: 96.961 - type: recall_at_1000 value: 99.401 - type: recall_at_3 value: 77.689 - type: recall_at_5 value: 82.745 - task: type: Clustering dataset: name: MTEB RedditClustering type: mteb/reddit-clustering config: default split: test revision: b2805658ae38990172679479369a78b86de8c390 metrics: - type: v_measure value: 33.75741018380938 - task: type: Clustering dataset: name: MTEB RedditClusteringP2P type: mteb/reddit-clustering-p2p config: default split: test revision: 385e3cb46b4cfa89021f56c4380204149d0efe33 metrics: - type: v_measure value: 41.00799910099266 - task: type: Retrieval dataset: name: MTEB SCIDOCS type: scidocs config: default split: test revision: 5c59ef3e437a0a9651c8fe6fde943e7dce59fba5 metrics: - type: map_at_1 value: 1.72 - type: map_at_10 value: 3.8240000000000003 - type: map_at_100 value: 4.727 - type: map_at_1000 value: 4.932 - type: map_at_3 value: 2.867 - type: map_at_5 value: 3.3230000000000004 - type: ndcg_at_1 value: 8.5 - type: ndcg_at_10 value: 7.133000000000001 - type: ndcg_at_100 value: 11.911 - type: ndcg_at_1000 value: 16.962 - type: ndcg_at_3 value: 6.763 - type: ndcg_at_5 value: 5.832 - type: precision_at_1 value: 8.5 - type: precision_at_10 value: 3.6799999999999997 - type: precision_at_100 value: 1.0670000000000002 - type: precision_at_1000 value: 0.22999999999999998 - type: precision_at_3 value: 6.2330000000000005 - type: precision_at_5 value: 5.0200000000000005 - type: recall_at_1 value: 1.72 - type: recall_at_10 value: 7.487000000000001 - type: recall_at_100 value: 21.683 - type: recall_at_1000 value: 46.688 - type: recall_at_3 value: 3.798 - type: recall_at_5 value: 5.113 - task: type: STS dataset: name: MTEB SICK-R type: mteb/sickr-sts config: default split: test revision: 20a6d6f312dd54037fe07a32d58e5e168867909d metrics: - type: cos_sim_pearson value: 80.96286245858941 - type: cos_sim_spearman value: 
74.57093488947429 - type: euclidean_pearson value: 75.50377970259402 - type: euclidean_spearman value: 71.7498004622999 - type: manhattan_pearson value: 75.3256836091382 - type: manhattan_spearman value: 71.80676733410375 - task: type: STS dataset: name: MTEB STS12 type: mteb/sts12-sts config: default split: test revision: fdf84275bb8ce4b49c971d02e84dd1abc677a50f metrics: - type: cos_sim_pearson value: 80.20938796088339 - type: cos_sim_spearman value: 69.16914010333394 - type: euclidean_pearson value: 79.33415250097545 - type: euclidean_spearman value: 71.46707320292745 - type: manhattan_pearson value: 79.73669837981976 - type: manhattan_spearman value: 71.87919511134902 - task: type: STS dataset: name: MTEB STS13 type: mteb/sts13-sts config: default split: test revision: 1591bfcbe8c69d4bf7fe2a16e2451017832cafb9 metrics: - type: cos_sim_pearson value: 76.401935081936 - type: cos_sim_spearman value: 77.23446219694267 - type: euclidean_pearson value: 74.61017160439877 - type: euclidean_spearman value: 75.85871531365609 - type: manhattan_pearson value: 74.83034779539724 - type: manhattan_spearman value: 75.95948993588429 - task: type: STS dataset: name: MTEB STS14 type: mteb/sts14-sts config: default split: test revision: e2125984e7df8b7871f6ae9949cf6b6795e7c54b metrics: - type: cos_sim_pearson value: 75.35551963935667 - type: cos_sim_spearman value: 70.98892671568665 - type: euclidean_pearson value: 73.24467338564628 - type: euclidean_spearman value: 71.97533151639425 - type: manhattan_pearson value: 73.2776559359938 - type: manhattan_spearman value: 72.2221421456084 - task: type: STS dataset: name: MTEB STS15 type: mteb/sts15-sts config: default split: test revision: 1cd7298cac12a96a373b6a2f18738bb3e739a9b6 metrics: - type: cos_sim_pearson value: 79.05293131911803 - type: cos_sim_spearman value: 79.7379478259805 - type: euclidean_pearson value: 78.17016171851057 - type: euclidean_spearman value: 78.76038607583105 - type: manhattan_pearson value: 78.4994607532332 - 
type: manhattan_spearman value: 79.13026720132872 - task: type: STS dataset: name: MTEB STS16 type: mteb/sts16-sts config: default split: test revision: 360a0b2dff98700d09e634a01e1cc1624d3e42cd metrics: - type: cos_sim_pearson value: 76.04750373932828 - type: cos_sim_spearman value: 77.93230986462234 - type: euclidean_pearson value: 75.8320302521164 - type: euclidean_spearman value: 76.83154481579385 - type: manhattan_pearson value: 75.98713517720608 - type: manhattan_spearman value: 76.95479705521507 - task: type: STS dataset: name: MTEB STS17 (ko-ko) type: mteb/sts17-crosslingual-sts config: ko-ko split: test revision: 9fc37e8c632af1c87a3d23e685d49552a02582a0 metrics: - type: cos_sim_pearson value: 43.0464619152799 - type: cos_sim_spearman value: 45.65606588928089 - type: euclidean_pearson value: 45.69437788355499 - type: euclidean_spearman value: 45.08552742346606 - type: manhattan_pearson value: 45.87166698903681 - type: manhattan_spearman value: 45.155963016434164 - task: type: STS dataset: name: MTEB STS17 (ar-ar) type: mteb/sts17-crosslingual-sts config: ar-ar split: test revision: 9fc37e8c632af1c87a3d23e685d49552a02582a0 metrics: - type: cos_sim_pearson value: 53.27469278912148 - type: cos_sim_spearman value: 54.16113207623789 - type: euclidean_pearson value: 55.97026429327157 - type: euclidean_spearman value: 54.71320909074608 - type: manhattan_pearson value: 56.12511774278802 - type: manhattan_spearman value: 55.22875659158676 - task: type: STS dataset: name: MTEB STS17 (en-ar) type: mteb/sts17-crosslingual-sts config: en-ar split: test revision: 9fc37e8c632af1c87a3d23e685d49552a02582a0 metrics: - type: cos_sim_pearson value: 1.5482997790039945 - type: cos_sim_spearman value: 1.7208386347363582 - type: euclidean_pearson value: 6.727915670345885 - type: euclidean_spearman value: 6.112826908474543 - type: manhattan_pearson value: 4.94386093060865 - type: manhattan_spearman value: 5.018174110623732 - task: type: STS dataset: name: MTEB STS17 (en-de) type: 
mteb/sts17-crosslingual-sts config: en-de split: test revision: 9fc37e8c632af1c87a3d23e685d49552a02582a0 metrics: - type: cos_sim_pearson value: 27.5420218362265 - type: cos_sim_spearman value: 25.483838431031007 - type: euclidean_pearson value: 6.268684143856358 - type: euclidean_spearman value: 5.877961421091679 - type: manhattan_pearson value: 2.667237739227861 - type: manhattan_spearman value: 2.5683839956554775 - task: type: STS dataset: name: MTEB STS17 (en-en) type: mteb/sts17-crosslingual-sts config: en-en split: test revision: 9fc37e8c632af1c87a3d23e685d49552a02582a0 metrics: - type: cos_sim_pearson value: 85.32029757646663 - type: cos_sim_spearman value: 87.32720847297225 - type: euclidean_pearson value: 81.12594485791254 - type: euclidean_spearman value: 81.1531079489332 - type: manhattan_pearson value: 81.32899414704019 - type: manhattan_spearman value: 81.3897040261192 - task: type: STS dataset: name: MTEB STS17 (en-tr) type: mteb/sts17-crosslingual-sts config: en-tr split: test revision: 9fc37e8c632af1c87a3d23e685d49552a02582a0 metrics: - type: cos_sim_pearson value: 4.37162299241808 - type: cos_sim_spearman value: 2.0879072561774543 - type: euclidean_pearson value: 3.0725243785454595 - type: euclidean_spearman value: 5.3721339279483535 - type: manhattan_pearson value: 4.867795293367359 - type: manhattan_spearman value: 7.9397069840018775 - task: type: STS dataset: name: MTEB STS17 (es-en) type: mteb/sts17-crosslingual-sts config: es-en split: test revision: 9fc37e8c632af1c87a3d23e685d49552a02582a0 metrics: - type: cos_sim_pearson value: 20.306030448858603 - type: cos_sim_spearman value: 21.93220782551375 - type: euclidean_pearson value: 3.878631934602361 - type: euclidean_spearman value: 5.171796902725965 - type: manhattan_pearson value: 7.13020644036815 - type: manhattan_spearman value: 7.707315591498748 - task: type: STS dataset: name: MTEB STS17 (es-es) type: mteb/sts17-crosslingual-sts config: es-es split: test revision: 
9fc37e8c632af1c87a3d23e685d49552a02582a0 metrics: - type: cos_sim_pearson value: 66.81873207478459 - type: cos_sim_spearman value: 67.80273445636502 - type: euclidean_pearson value: 70.60654682977268 - type: euclidean_spearman value: 69.4566208379486 - type: manhattan_pearson value: 70.9548461896642 - type: manhattan_spearman value: 69.78323323058773 - task: type: STS dataset: name: MTEB STS17 (fr-en) type: mteb/sts17-crosslingual-sts config: fr-en split: test revision: 9fc37e8c632af1c87a3d23e685d49552a02582a0 metrics: - type: cos_sim_pearson value: 21.366487281202602 - type: cos_sim_spearman value: 18.90627528698481 - type: euclidean_pearson value: 2.3390998579461995 - type: euclidean_spearman value: 4.151213674012541 - type: manhattan_pearson value: 2.234831868844863 - type: manhattan_spearman value: 4.555291328501442 - task: type: STS dataset: name: MTEB STS17 (it-en) type: mteb/sts17-crosslingual-sts config: it-en split: test revision: 9fc37e8c632af1c87a3d23e685d49552a02582a0 metrics: - type: cos_sim_pearson value: 20.73153177251085 - type: cos_sim_spearman value: 16.3855949033176 - type: euclidean_pearson value: 8.734648741714238 - type: euclidean_spearman value: 10.75672244732182 - type: manhattan_pearson value: 7.536654126608877 - type: manhattan_spearman value: 8.330065460047296 - task: type: STS dataset: name: MTEB STS17 (nl-en) type: mteb/sts17-crosslingual-sts config: nl-en split: test revision: 9fc37e8c632af1c87a3d23e685d49552a02582a0 metrics: - type: cos_sim_pearson value: 26.618435024084253 - type: cos_sim_spearman value: 23.488974089577816 - type: euclidean_pearson value: 3.1310350304707866 - type: euclidean_spearman value: 3.1242598481634665 - type: manhattan_pearson value: 1.1096752982707008 - type: manhattan_spearman value: 1.4591693078765848 - task: type: STS dataset: name: MTEB STS22 (en) type: mteb/sts22-crosslingual-sts config: en split: test revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906 metrics: - type: cos_sim_pearson value: 
59.17638344661753 - type: cos_sim_spearman value: 59.636760071130865 - type: euclidean_pearson value: 56.68753290255448 - type: euclidean_spearman value: 57.613280258574484 - type: manhattan_pearson value: 56.92312052723706 - type: manhattan_spearman value: 57.76774918418505 - task: type: STS dataset: name: MTEB STS22 (de) type: mteb/sts22-crosslingual-sts config: de split: test revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906 metrics: - type: cos_sim_pearson value: 10.322254716987457 - type: cos_sim_spearman value: 11.0033092996862 - type: euclidean_pearson value: 6.006926471684402 - type: euclidean_spearman value: 10.972140246688376 - type: manhattan_pearson value: 5.933298751861177 - type: manhattan_spearman value: 11.030111585680233 - task: type: STS dataset: name: MTEB STS22 (es) type: mteb/sts22-crosslingual-sts config: es split: test revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906 metrics: - type: cos_sim_pearson value: 43.38031880545056 - type: cos_sim_spearman value: 43.05358201410913 - type: euclidean_pearson value: 42.72327196362553 - type: euclidean_spearman value: 42.55163899944477 - type: manhattan_pearson value: 44.01557499780587 - type: manhattan_spearman value: 43.12473221615855 - task: type: STS dataset: name: MTEB STS22 (pl) type: mteb/sts22-crosslingual-sts config: pl split: test revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906 metrics: - type: cos_sim_pearson value: 4.291290504363136 - type: cos_sim_spearman value: 14.912727487893479 - type: euclidean_pearson value: 3.2855132112394485 - type: euclidean_spearman value: 16.575204463951025 - type: manhattan_pearson value: 3.2398776723465814 - type: manhattan_spearman value: 16.841985772913855 - task: type: STS dataset: name: MTEB STS22 (tr) type: mteb/sts22-crosslingual-sts config: tr split: test revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906 metrics: - type: cos_sim_pearson value: 4.102739498555817 - type: cos_sim_spearman value: 3.818238576547375 - type: euclidean_pearson value: 
2.3181033496453556 - type: euclidean_spearman value: 5.1826811802703565 - type: manhattan_pearson value: 4.8006179265256455 - type: manhattan_spearman value: 6.738401400306252 - task: type: STS dataset: name: MTEB STS22 (ar) type: mteb/sts22-crosslingual-sts config: ar split: test revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906 metrics: - type: cos_sim_pearson value: 2.38765395226737 - type: cos_sim_spearman value: 5.173899391162327 - type: euclidean_pearson value: 3.0710263954769825 - type: euclidean_spearman value: 5.04922290903982 - type: manhattan_pearson value: 3.7826314109861703 - type: manhattan_spearman value: 5.042238232170212 - task: type: STS dataset: name: MTEB STS22 (ru) type: mteb/sts22-crosslingual-sts config: ru split: test revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906 metrics: - type: cos_sim_pearson value: 7.6735490672676345 - type: cos_sim_spearman value: 3.3631215256878892 - type: euclidean_pearson value: 4.64331702652217 - type: euclidean_spearman value: 3.6129205171334324 - type: manhattan_pearson value: 4.011231736076196 - type: manhattan_spearman value: 3.233959766173701 - task: type: STS dataset: name: MTEB STS22 (zh) type: mteb/sts22-crosslingual-sts config: zh split: test revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906 metrics: - type: cos_sim_pearson value: 0.06167614416104335 - type: cos_sim_spearman value: 6.521685391703255 - type: euclidean_pearson value: 4.884572579069032 - type: euclidean_spearman value: 5.59058032900239 - type: manhattan_pearson value: 6.139838096573897 - type: manhattan_spearman value: 5.0060884837066215 - task: type: STS dataset: name: MTEB STS22 (fr) type: mteb/sts22-crosslingual-sts config: fr split: test revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906 metrics: - type: cos_sim_pearson value: 53.19490347682836 - type: cos_sim_spearman value: 54.56055727079527 - type: euclidean_pearson value: 52.55574442039842 - type: euclidean_spearman value: 52.94640154371587 - type: manhattan_pearson value: 
53.275993040454196 - type: manhattan_spearman value: 53.174561503510155 - task: type: STS dataset: name: MTEB STS22 (de-en) type: mteb/sts22-crosslingual-sts config: de-en split: test revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906 metrics: - type: cos_sim_pearson value: 51.151158530122146 - type: cos_sim_spearman value: 53.926925081736655 - type: euclidean_pearson value: 44.55629287737235 - type: euclidean_spearman value: 46.222372143731384 - type: manhattan_pearson value: 42.831322151459005 - type: manhattan_spearman value: 45.70991764985799 - task: type: STS dataset: name: MTEB STS22 (es-en) type: mteb/sts22-crosslingual-sts config: es-en split: test revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906 metrics: - type: cos_sim_pearson value: 30.36194885126792 - type: cos_sim_spearman value: 32.739632941633836 - type: euclidean_pearson value: 29.83135800843496 - type: euclidean_spearman value: 31.114406001326923 - type: manhattan_pearson value: 31.264502938148286 - type: manhattan_spearman value: 33.3112040753475 - task: type: STS dataset: name: MTEB STS22 (it) type: mteb/sts22-crosslingual-sts config: it split: test revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906 metrics: - type: cos_sim_pearson value: 35.23883630335275 - type: cos_sim_spearman value: 33.67797082086704 - type: euclidean_pearson value: 34.878640693874544 - type: euclidean_spearman value: 33.525189235133496 - type: manhattan_pearson value: 34.22761246389947 - type: manhattan_spearman value: 32.713218497609176 - task: type: STS dataset: name: MTEB STS22 (pl-en) type: mteb/sts22-crosslingual-sts config: pl-en split: test revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906 metrics: - type: cos_sim_pearson value: 19.809302548119547 - type: cos_sim_spearman value: 20.540370202115497 - type: euclidean_pearson value: 23.006803962133016 - type: euclidean_spearman value: 22.96270653079511 - type: manhattan_pearson value: 25.40168317585851 - type: manhattan_spearman value: 25.421508137540865 - task: 
type: STS dataset: name: MTEB STS22 (zh-en) type: mteb/sts22-crosslingual-sts config: zh-en split: test revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906 metrics: - type: cos_sim_pearson value: 20.393500955410488 - type: cos_sim_spearman value: 26.705713693011603 - type: euclidean_pearson value: 18.168376767724585 - type: euclidean_spearman value: 19.260826601517245 - type: manhattan_pearson value: 18.302619990671527 - type: manhattan_spearman value: 19.4691037846159 - task: type: STS dataset: name: MTEB STS22 (es-it) type: mteb/sts22-crosslingual-sts config: es-it split: test revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906 metrics: - type: cos_sim_pearson value: 36.58919983075148 - type: cos_sim_spearman value: 35.989722099974045 - type: euclidean_pearson value: 41.045112547574206 - type: euclidean_spearman value: 39.322301680629835 - type: manhattan_pearson value: 41.36802503205308 - type: manhattan_spearman value: 40.76270030293609 - task: type: STS dataset: name: MTEB STS22 (de-fr) type: mteb/sts22-crosslingual-sts config: de-fr split: test revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906 metrics: - type: cos_sim_pearson value: 26.350936227950083 - type: cos_sim_spearman value: 25.108218032460343 - type: euclidean_pearson value: 28.61681094744849 - type: euclidean_spearman value: 27.350990203943592 - type: manhattan_pearson value: 30.527977072984513 - type: manhattan_spearman value: 26.403339990640813 - task: type: STS dataset: name: MTEB STS22 (de-pl) type: mteb/sts22-crosslingual-sts config: de-pl split: test revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906 metrics: - type: cos_sim_pearson value: 20.056269198600322 - type: cos_sim_spearman value: 20.939990379746757 - type: euclidean_pearson value: 18.942765438962198 - type: euclidean_spearman value: 21.709842967237446 - type: manhattan_pearson value: 23.643909798655123 - type: manhattan_spearman value: 23.58828328071473 - task: type: STS dataset: name: MTEB STS22 (fr-pl) type: 
mteb/sts22-crosslingual-sts config: fr-pl split: test revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906 metrics: - type: cos_sim_pearson value: 19.563740271419395 - type: cos_sim_spearman value: 5.634361698190111 - type: euclidean_pearson value: 16.833522619239474 - type: euclidean_spearman value: 16.903085094570333 - type: manhattan_pearson value: 5.805392712660814 - type: manhattan_spearman value: 16.903085094570333 - task: type: STS dataset: name: MTEB STSBenchmark type: mteb/stsbenchmark-sts config: default split: test revision: 8913289635987208e6e7c72789e4be2fe94b6abd metrics: - type: cos_sim_pearson value: 80.00905671833966 - type: cos_sim_spearman value: 79.54269211027272 - type: euclidean_pearson value: 79.51954544247441 - type: euclidean_spearman value: 78.93670303434288 - type: manhattan_pearson value: 79.47610653340678 - type: manhattan_spearman value: 79.07344156719613 - task: type: Reranking dataset: name: MTEB SciDocsRR type: mteb/scidocs-reranking config: default split: test revision: 56a6d0140cf6356659e2a7c1413286a774468d44 metrics: - type: map value: 68.35710819755543 - type: mrr value: 88.05442832403617 - task: type: Retrieval dataset: name: MTEB SciFact type: scifact config: default split: test revision: a75ae049398addde9b70f6b268875f5cbce99089 metrics: - type: map_at_1 value: 21.556 - type: map_at_10 value: 27.982000000000003 - type: map_at_100 value: 28.937 - type: map_at_1000 value: 29.058 - type: map_at_3 value: 25.644 - type: map_at_5 value: 26.996 - type: ndcg_at_1 value: 23.333000000000002 - type: ndcg_at_10 value: 31.787 - type: ndcg_at_100 value: 36.647999999999996 - type: ndcg_at_1000 value: 39.936 - type: ndcg_at_3 value: 27.299 - type: ndcg_at_5 value: 29.659000000000002 - type: precision_at_1 value: 23.333000000000002 - type: precision_at_10 value: 4.867 - type: precision_at_100 value: 0.743 - type: precision_at_1000 value: 0.10200000000000001 - type: precision_at_3 value: 11.333 - type: precision_at_5 value: 8.133 - type: 
recall_at_1 value: 21.556 - type: recall_at_10 value: 42.333 - type: recall_at_100 value: 65.706 - type: recall_at_1000 value: 91.489 - type: recall_at_3 value: 30.361 - type: recall_at_5 value: 36.222 - task: type: PairClassification dataset: name: MTEB SprintDuplicateQuestions type: mteb/sprintduplicatequestions-pairclassification config: default split: test revision: 5a8256d0dff9c4bd3be3ba3e67e4e70173f802ea metrics: - type: cos_sim_accuracy value: 99.49306930693069 - type: cos_sim_ap value: 77.7308550291728 - type: cos_sim_f1 value: 71.78978681209718 - type: cos_sim_precision value: 71.1897738446411 - type: cos_sim_recall value: 72.39999999999999 - type: dot_accuracy value: 99.08118811881188 - type: dot_ap value: 30.267748833368234 - type: dot_f1 value: 34.335201222618444 - type: dot_precision value: 34.994807892004154 - type: dot_recall value: 33.7 - type: euclidean_accuracy value: 99.51683168316832 - type: euclidean_ap value: 78.64498778235628 - type: euclidean_f1 value: 73.09149972929075 - type: euclidean_precision value: 79.69303423848878 - type: euclidean_recall value: 67.5 - type: manhattan_accuracy value: 99.53168316831683 - type: manhattan_ap value: 79.45274878693958 - type: manhattan_f1 value: 74.19863373620599 - type: manhattan_precision value: 78.18383167220377 - type: manhattan_recall value: 70.6 - type: max_accuracy value: 99.53168316831683 - type: max_ap value: 79.45274878693958 - type: max_f1 value: 74.19863373620599 - task: type: Clustering dataset: name: MTEB StackExchangeClustering type: mteb/stackexchange-clustering config: default split: test revision: 70a89468f6dccacc6aa2b12a6eac54e74328f235 metrics: - type: v_measure value: 44.59127540530939 - task: type: Clustering dataset: name: MTEB StackExchangeClusteringP2P type: mteb/stackexchange-clustering-p2p config: default split: test revision: d88009ab563dd0b16cfaf4436abaf97fa3550cf0 metrics: - type: v_measure value: 28.230204578753636 - task: type: Reranking dataset: name: MTEB 
StackOverflowDupQuestions type: mteb/stackoverflowdupquestions-reranking config: default split: test revision: ef807ea29a75ec4f91b50fd4191cb4ee4589a9f9 metrics: - type: map value: 39.96520488022785 - type: mrr value: 40.189248047703934 - task: type: Summarization dataset: name: MTEB SummEval type: mteb/summeval config: default split: test revision: 8753c2788d36c01fc6f05d03fe3f7268d63f9122 metrics: - type: cos_sim_pearson value: 30.56303767714449 - type: cos_sim_spearman value: 30.256847004390487 - type: dot_pearson value: 29.453520030995005 - type: dot_spearman value: 29.561732550926777 - task: type: Retrieval dataset: name: MTEB TRECCOVID type: trec-covid config: default split: test revision: 2c8041b2c07a79b6f7ba8fe6acc72e5d9f92d217 metrics: - type: map_at_1 value: 0.11299999999999999 - type: map_at_10 value: 0.733 - type: map_at_100 value: 3.313 - type: map_at_1000 value: 7.355 - type: map_at_3 value: 0.28200000000000003 - type: map_at_5 value: 0.414 - type: ndcg_at_1 value: 42.0 - type: ndcg_at_10 value: 39.31 - type: ndcg_at_100 value: 26.904 - type: ndcg_at_1000 value: 23.778 - type: ndcg_at_3 value: 42.775999999999996 - type: ndcg_at_5 value: 41.554 - type: precision_at_1 value: 48.0 - type: precision_at_10 value: 43.0 - type: precision_at_100 value: 27.08 - type: precision_at_1000 value: 11.014 - type: precision_at_3 value: 48.0 - type: precision_at_5 value: 45.6 - type: recall_at_1 value: 0.11299999999999999 - type: recall_at_10 value: 0.976 - type: recall_at_100 value: 5.888 - type: recall_at_1000 value: 22.634999999999998 - type: recall_at_3 value: 0.329 - type: recall_at_5 value: 0.518 - task: type: Retrieval dataset: name: MTEB Touche2020 type: webis-touche2020 config: default split: test revision: 527b7d77e16e343303e68cb6af11d6e18b9f7b3b metrics: - type: map_at_1 value: 0.645 - type: map_at_10 value: 4.1160000000000005 - type: map_at_100 value: 7.527 - type: map_at_1000 value: 8.677999999999999 - type: map_at_3 value: 1.6019999999999999 - type: 
map_at_5 value: 2.6 - type: ndcg_at_1 value: 10.204 - type: ndcg_at_10 value: 12.27 - type: ndcg_at_100 value: 22.461000000000002 - type: ndcg_at_1000 value: 33.543 - type: ndcg_at_3 value: 9.982000000000001 - type: ndcg_at_5 value: 11.498 - type: precision_at_1 value: 10.204 - type: precision_at_10 value: 12.245000000000001 - type: precision_at_100 value: 5.286 - type: precision_at_1000 value: 1.2630000000000001 - type: precision_at_3 value: 10.884 - type: precision_at_5 value: 13.061 - type: recall_at_1 value: 0.645 - type: recall_at_10 value: 8.996 - type: recall_at_100 value: 33.666000000000004 - type: recall_at_1000 value: 67.704 - type: recall_at_3 value: 2.504 - type: recall_at_5 value: 4.95 - task: type: Classification dataset: name: MTEB ToxicConversationsClassification type: mteb/toxic_conversations_50k config: default split: test revision: edfaf9da55d3dd50d43143d90c1ac476895ae6de metrics: - type: accuracy value: 62.7862 - type: ap value: 10.958454618347831 - type: f1 value: 48.37243417046763 - task: type: Classification dataset: name: MTEB TweetSentimentExtractionClassification type: mteb/tweet_sentiment_extraction config: default split: test revision: 62146448f05be9e52a36b8ee9936447ea787eede metrics: - type: accuracy value: 54.821731748726656 - type: f1 value: 55.14729314789282 - task: type: Clustering dataset: name: MTEB TwentyNewsgroupsClustering type: mteb/twentynewsgroups-clustering config: default split: test revision: 091a54f9a36281ce7d6590ec8c75dd485e7e01d4 metrics: - type: v_measure value: 28.24295128553035 - task: type: PairClassification dataset: name: MTEB TwitterSemEval2015 type: mteb/twittersemeval2015-pairclassification config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 81.5640460153782 - type: cos_sim_ap value: 57.094095366921536 - type: cos_sim_f1 value: 55.29607083563918 - type: cos_sim_precision value: 47.62631077216397 - type: cos_sim_recall value: 65.91029023746702 
- type: dot_accuracy value: 78.81623651427549 - type: dot_ap value: 47.42989400382077 - type: dot_f1 value: 51.25944584382871 - type: dot_precision value: 42.55838271174625 - type: dot_recall value: 64.43271767810026 - type: euclidean_accuracy value: 80.29445073612685 - type: euclidean_ap value: 53.42012231336148 - type: euclidean_f1 value: 51.867783563504645 - type: euclidean_precision value: 45.4203013481364 - type: euclidean_recall value: 60.4485488126649 - type: manhattan_accuracy value: 80.2884901949097 - type: manhattan_ap value: 53.43205271323232 - type: manhattan_f1 value: 52.014165559982295 - type: manhattan_precision value: 44.796035074342356 - type: manhattan_recall value: 62.00527704485488 - type: max_accuracy value: 81.5640460153782 - type: max_ap value: 57.094095366921536 - type: max_f1 value: 55.29607083563918 - task: type: PairClassification dataset: name: MTEB TwitterURLCorpus type: mteb/twitterurlcorpus-pairclassification config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 86.63018589668955 - type: cos_sim_ap value: 80.51063771262909 - type: cos_sim_f1 value: 72.70810586950793 - type: cos_sim_precision value: 71.14123627790467 - type: cos_sim_recall value: 74.3455497382199 - type: dot_accuracy value: 82.41743315092948 - type: dot_ap value: 69.2393381283664 - type: dot_f1 value: 65.61346624814597 - type: dot_precision value: 59.43260638630257 - type: dot_recall value: 73.22913458577148 - type: euclidean_accuracy value: 86.49435324251951 - type: euclidean_ap value: 80.28100477250926 - type: euclidean_f1 value: 72.58242344489099 - type: euclidean_precision value: 67.44662568576906 - type: euclidean_recall value: 78.56482907299045 - type: manhattan_accuracy value: 86.59525749990297 - type: manhattan_ap value: 80.37850832566262 - type: manhattan_f1 value: 72.59435321233073 - type: manhattan_precision value: 68.19350473612991 - type: manhattan_recall value: 77.60240221743148 - type: 
max_accuracy value: 86.63018589668955 - type: max_ap value: 80.51063771262909 - type: max_f1 value: 72.70810586950793 --- # SGPT-125M-weightedmean-nli-bitfit ## Usage For usage instructions, refer to our codebase: https://github.com/Muennighoff/sgpt ## Evaluation Results For eval results, refer to the eval folder or our paper: https://arxiv.org/abs/2202.08904 ## Training The model was trained with the parameters: **DataLoader**: `sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 8807 with parameters: ``` {'batch_size': 64} ``` **Loss**: `sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters: ``` {'scale': 20.0, 'similarity_fct': 'cos_sim'} ``` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 880, "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 0.0002 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 881, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 75, 'do_lower_case': False}) with Transformer model: GPTNeoModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': True, 'pooling_mode_lasttoken': False}) ) ``` ## Citing & Authors ```bibtex @article{muennighoff2022sgpt, title={SGPT: GPT Sentence Embeddings for Semantic Search}, author={Muennighoff, Niklas}, journal={arXiv preprint arXiv:2202.08904}, year={2022} } ```
[ "BIOSSES", "SCIFACT" ]
fidukm34/biobert_v1.1_pubmed-finetuned-ner
fidukm34
token-classification
[ "transformers", "pytorch", "bert", "token-classification", "generated_from_trainer", "dataset:ncbi_disease", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:05Z
2021-09-16T17:09:50+00:00
327
1
--- datasets: - ncbi_disease metrics: - precision - recall - f1 - accuracy tags: - generated_from_trainer model_index: - name: biobert_v1.1_pubmed-finetuned-ner results: - task: name: Token Classification type: token-classification dataset: name: ncbi_disease type: ncbi_disease args: ncbi_disease metric: name: Accuracy type: accuracy value: 0.9827274990663513 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # biobert_v1.1_pubmed-finetuned-ner This model is a fine-tuned version of [monologg/biobert_v1.1_pubmed](https://huggingface.co/monologg/biobert_v1.1_pubmed) on the ncbi_disease dataset. It achieves the following results on the evaluation set: - Loss: 0.0657 - Precision: 0.8338 - Recall: 0.8933 - F1: 0.8625 - Accuracy: 0.9827 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 340 | 0.0612 | 0.8268 | 0.85 | 0.8382 | 0.9806 | | 0.0987 | 2.0 | 680 | 0.0604 | 0.8397 | 0.8848 | 0.8616 | 0.9829 | | 0.0272 | 3.0 | 1020 | 0.0657 | 0.8338 | 0.8933 | 0.8625 | 0.9827 | ### Framework versions - Transformers 4.8.1 - Pytorch 1.9.0 - Datasets 1.6.2 - Tokenizers 0.10.3
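As a quick sanity check on the table above, the reported F1 is simply the harmonic mean of the reported precision and recall (plain Python, no model required):

```python
def f1_score(precision: float, recall: float) -> float:
    # F1 is the harmonic mean of precision and recall
    return 2 * precision * recall / (precision + recall)

# final-epoch metrics from the table above: precision 0.8338, recall 0.8933
print(round(f1_score(0.8338, 0.8933), 4))  # 0.8625
```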
[ "NCBI DISEASE" ]
ntc-ai/SDXL-LoRA-slider.ideal
ntc-ai
text-to-image
[ "diffusers", "text-to-image", "stable-diffusion-xl", "lora", "template:sd-lora", "template:sdxl-lora", "sdxl-sliders", "ntcai.xyz-sliders", "concept", "en", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:mit", "region:us" ]
2024-01-20T07:22:35Z
2024-01-20T07:22:38+00:00
323
0
--- base_model: stabilityai/stable-diffusion-xl-base-1.0 language: - en license: mit tags: - text-to-image - stable-diffusion-xl - lora - template:sd-lora - template:sdxl-lora - sdxl-sliders - ntcai.xyz-sliders - concept - diffusers thumbnail: images/evaluate/ideal.../ideal_17_3.0.png widget: - text: ideal output: url: images/ideal_17_3.0.png - text: ideal output: url: images/ideal_19_3.0.png - text: ideal output: url: images/ideal_20_3.0.png - text: ideal output: url: images/ideal_21_3.0.png - text: ideal output: url: images/ideal_22_3.0.png inference: false instance_prompt: ideal --- # ntcai.xyz slider - ideal (SDXL LoRA) | Strength: -3 | Strength: 0 | Strength: 3 | | --- | --- | --- | | <img src="images/ideal_17_-3.0.png" width=256 height=256 /> | <img src="images/ideal_17_0.0.png" width=256 height=256 /> | <img src="images/ideal_17_3.0.png" width=256 height=256 /> | | <img src="images/ideal_19_-3.0.png" width=256 height=256 /> | <img src="images/ideal_19_0.0.png" width=256 height=256 /> | <img src="images/ideal_19_3.0.png" width=256 height=256 /> | | <img src="images/ideal_20_-3.0.png" width=256 height=256 /> | <img src="images/ideal_20_0.0.png" width=256 height=256 /> | <img src="images/ideal_20_3.0.png" width=256 height=256 /> | ## Download Weights for this model are available in Safetensors format. 
## Trigger words You can apply this LoRA with trigger words for additional effect: ``` ideal ``` ## Use in diffusers ```python from diffusers import StableDiffusionXLPipeline from diffusers import EulerAncestralDiscreteScheduler import torch pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors") pipe.to("cuda") pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config) # Load the LoRA pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.ideal', weight_name='ideal.safetensors', adapter_name="ideal") # Activate the LoRA pipe.set_adapters(["ideal"], adapter_weights=[2.0]) prompt = "medieval rich kingpin sitting in a tavern, ideal" negative_prompt = "nsfw" width = 512 height = 512 num_inference_steps = 10 guidance_scale = 2 image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0] image.save('result.png') ``` ## Support the Patreon If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI). By joining our Patreon, you'll gain access to an ever-growing library of over 1140+ unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities. Your support on Patreon will allow us to continue developing and refining new models. ## Other resources - [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs - [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
[ "CRAFT" ]
upskyy/ko-reranker
upskyy
feature-extraction
[ "sentence-transformers", "safetensors", "xlm-roberta", "text-classification", "transformers", "text-embeddings-inference", "feature-extraction", "ko", "en", "zh", "arxiv:2309.07597", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2024-08-16T15:25:36Z
2024-08-17T02:22:16+00:00
323
2
--- language: - ko - en - zh license: mit pipeline_tag: feature-extraction tags: - transformers - sentence-transformers - text-embeddings-inference --- # upskyy/ko-reranker **ko-reranker** is a model obtained by fine-tuning [BAAI/bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large) on [Korean data](https://huggingface.co/datasets/upskyy/ko-wiki-reranking). ## Usage ## Using FlagEmbedding ``` pip install -U FlagEmbedding ``` Get relevance scores (higher scores indicate more relevance): ```python from FlagEmbedding import FlagReranker reranker = FlagReranker('upskyy/ko-reranker', use_fp16=True) # Setting use_fp16 to True speeds up computation with a slight performance degradation score = reranker.compute_score(['query', 'passage']) print(score) # -1.861328125 # You can map the scores into 0-1 by setting "normalize=True", which will apply a sigmoid function to the score score = reranker.compute_score(['query', 'passage'], normalize=True) print(score) # 0.13454832326359276 scores = reranker.compute_score([['what is panda?', 'hi'], ['what is panda?', 'The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.']]) print(scores) # [-7.37109375, 8.5390625] # You can map the scores into 0-1 by setting "normalize=True", which will apply a sigmoid function to the score scores = reranker.compute_score([['what is panda?', 'hi'], ['what is panda?', 'The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.']], normalize=True) print(scores) # [0.0006287840192903181, 0.9998043646624727] ``` ## Using Sentence-Transformers ``` pip install -U sentence-transformers ``` Get relevance scores (higher scores indicate more relevance); since this is a cross-encoder reranker, use the `CrossEncoder` class rather than sentence embeddings: ```python from sentence_transformers import CrossEncoder model = CrossEncoder('upskyy/ko-reranker') pairs = [["경제 전문가가 금리 인하에 대한 예측을 하고 있다.", "한 투자자가 비트코인을 매수한다."], ["주식 시장에서 한 투자자가 주식을 매수한다.", "금융 거래소에서 새로운 디지털 자산이 상장된다."]] scores = model.predict(pairs) print(scores) ``` ## Using Huggingface transformers Get relevance scores (higher scores indicate more relevance): ```python import torch from transformers import AutoModelForSequenceClassification, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained('upskyy/ko-reranker') model = AutoModelForSequenceClassification.from_pretrained('upskyy/ko-reranker') model.eval() pairs = [['what is panda?', 'hi'], ['what is panda?', 'The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.']] with torch.no_grad(): inputs = tokenizer(pairs, padding=True, truncation=True, return_tensors='pt', max_length=512) scores = model(**inputs, return_dict=True).logits.view(-1, ).float() print(scores) ``` ## Citation ```bibtex @misc{bge_embedding, title={C-Pack: Packaged Resources To Advance General Chinese Embedding}, author={Shitao Xiao and Zheng Liu and Peitian Zhang and Niklas Muennighoff}, year={2023}, eprint={2309.07597}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ## License FlagEmbedding is licensed under the MIT License. The released models can be used for commercial purposes free of charge. ## Reference - [Dongjin-kr/ko-reranker](https://huggingface.co/Dongjin-kr/ko-reranker) - [reranker-kr](https://github.com/aws-samples/aws-ai-ml-workshop-kr/tree/master/genai/aws-gen-ai-kr/30_fine_tune/reranker-kr) - [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding)
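As noted in the FlagEmbedding snippet above, `normalize=True` simply maps the raw relevance logit through a sigmoid. A minimal standalone sketch of that mapping (plain Python, no model download required):

```python
import math

def normalize_score(raw_score: float) -> float:
    # normalize=True applies a sigmoid to the raw relevance logit,
    # mapping it into the 0-1 range
    return 1.0 / (1.0 + math.exp(-raw_score))

# the raw score from the example above, -1.861328125, becomes ~0.1345
print(normalize_score(-1.861328125))
```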
[ "BEAR" ]
PlanTL-GOB-ES/bsc-bio-ehr-es-pharmaconer
PlanTL-GOB-ES
token-classification
[ "transformers", "pytorch", "roberta", "token-classification", "biomedical", "clinical", "eHR", "spanish", "es", "dataset:PlanTL-GOB-ES/pharmaconer", "arxiv:1907.11692", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-04-06T13:43:19Z
2022-11-15T16:37:38+00:00
322
2
--- datasets: - PlanTL-GOB-ES/pharmaconer language: - es license: apache-2.0 metrics: - f1 tags: - biomedical - clinical - eHR - spanish widget: - text: Se realizó estudio analítico destacando incremento de niveles de PTH y vitamina D (103,7 pg/ml y 272 ng/ml, respectivamente), atribuidos al exceso de suplementación de vitamina D. - text: ' Por el hallazgo de múltiples fracturas por estrés, se procedió a estudio en nuestras consultas, realizándose análisis con función renal, calcio sérico y urinario, calcio iónico, magnesio y PTH, que fueron normales.' - text: Se solicitó una analítica que incluía hemograma, bioquímica, anticuerpos antinucleares (ANA) y serologías, examen de orina, así como biopsia de la lesión. Los resultados fueron normales, con ANA, anti-Sm, anti-RNP, anti-SSA, anti-SSB, anti-Jo1 y anti-Scl70 negativos. model-index: - name: PlanTL-GOB-ES/bsc-bio-ehr-es-pharmaconer results: - task: type: token-classification dataset: name: pharmaconer type: PlanTL-GOB-ES/pharmaconer metrics: - type: f1 value: 0.8913 name: f1 --- # Spanish RoBERTa-base biomedical model finetuned for the Named Entity Recognition (NER) task on the PharmaCoNER dataset. 
## Table of contents <details> <summary>Click to expand</summary> - [Model description](#model-description) - [Intended uses and limitations](#intended-use) - [How to use](#how-to-use) - [Limitations and bias](#limitations-and-bias) - [Training](#training) - [Evaluation](#evaluation) - [Additional information](#additional-information) - [Author](#author) - [Contact information](#contact-information) - [Copyright](#copyright) - [Licensing information](#licensing-information) - [Funding](#funding) - [Citing information](#citing-information) - [Disclaimer](#disclaimer) </details> ## Model description A fine-tuned version of the [bsc-bio-ehr-es](https://huggingface.co/PlanTL-GOB-ES/bsc-bio-ehr-es) model, a [RoBERTa](https://arxiv.org/abs/1907.11692) base model that has been pre-trained using the largest Spanish biomedical corpus known to date, composed of biomedical documents, clinical cases and EHR documents, for a total of 1.1B tokens of clean and deduplicated text. For more details about the corpora and training, check the _bsc-bio-ehr-es_ model card. ## Intended uses and limitations ## How to use ## Limitations and bias At the time of submission, no measures have been taken to estimate the bias embedded in the model. However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated. ## Training The dataset used is [PharmaCoNER](https://huggingface.co/datasets/PlanTL-GOB-ES/pharmaconer), a NER dataset annotated with substance, compound and protein entities. For further information, check the [official website](https://temu.bsc.es/pharmaconer/). ## Evaluation F1 Score: 0.8913 For evaluation details visit our [GitHub repository](https://github.com/PlanTL-GOB-ES/lm-biomedical-clinical-es). 
## Additional information ### Author Text Mining Unit (TeMU) at the Barcelona Supercomputing Center ([email protected]) ### Contact information For further information, send an email to <[email protected]> ### Copyright Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022) ### Licensing information [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0) ### Funding This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL. ## Citing information If you use these models, please cite our work: ```bibtext @inproceedings{carrino-etal-2022-pretrained, title = "Pretrained Biomedical Language Models for Clinical {NLP} in {S}panish", author = "Carrino, Casimiro Pio and Llop, Joan and P{\`a}mies, Marc and Guti{\'e}rrez-Fandi{\~n}o, Asier and Armengol-Estap{\'e}, Jordi and Silveira-Ocampo, Joaqu{\'\i}n and Valencia, Alfonso and Gonzalez-Agirre, Aitor and Villegas, Marta", booktitle = "Proceedings of the 21st Workshop on Biomedical Language Processing", month = may, year = "2022", address = "Dublin, Ireland", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2022.bionlp-1.19", doi = "10.18653/v1/2022.bionlp-1.19", pages = "193--199", abstract = "This work presents the first large-scale biomedical Spanish language models trained from scratch, using large biomedical corpora consisting of a total of 1.1B tokens and an EHR corpus of 95M tokens. We compared them against general-domain and other domain-specific models for Spanish on three clinical NER tasks. As main results, our models are superior across the NER tasks, rendering them more convenient for clinical NLP applications. 
Furthermore, our findings indicate that when enough data is available, pre-training from scratch is better than continual pre-training when tested on clinical tasks, raising an exciting research question about which approach is optimal. Our models and fine-tuning scripts are publicly available at HuggingFace and GitHub.", } ``` ### Disclaimer The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions. When third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence. In no event shall the owner of the models (SEDIA – State Secretariat for digitalization and artificial intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models. Los modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables. Cuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial. 
En ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos.
[ "PHARMACONER" ]
ntc-ai/SDXL-LoRA-slider.eye-popping
ntc-ai
text-to-image
[ "diffusers", "text-to-image", "stable-diffusion-xl", "lora", "template:sd-lora", "template:sdxl-lora", "sdxl-sliders", "ntcai.xyz-sliders", "concept", "en", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:mit", "region:us" ]
2023-12-19T01:35:06Z
2023-12-19T01:35:09+00:00
320
2
--- base_model: stabilityai/stable-diffusion-xl-base-1.0 language: - en license: mit tags: - text-to-image - stable-diffusion-xl - lora - template:sd-lora - template:sdxl-lora - sdxl-sliders - ntcai.xyz-sliders - concept - diffusers thumbnail: images/evaluate/eye-popping...drab/eye-popping_17_3.0.png widget: - text: eye-popping output: url: images/eye-popping_17_3.0.png - text: eye-popping output: url: images/eye-popping_19_3.0.png - text: eye-popping output: url: images/eye-popping_20_3.0.png - text: eye-popping output: url: images/eye-popping_21_3.0.png - text: eye-popping output: url: images/eye-popping_22_3.0.png inference: false instance_prompt: eye-popping --- # ntcai.xyz slider - eye-popping (SDXL LoRA) | Strength: -3 | Strength: 0 | Strength: 3 | | --- | --- | --- | | <img src="images/eye-popping_17_-3.0.png" width=256 height=256 /> | <img src="images/eye-popping_17_0.0.png" width=256 height=256 /> | <img src="images/eye-popping_17_3.0.png" width=256 height=256 /> | | <img src="images/eye-popping_19_-3.0.png" width=256 height=256 /> | <img src="images/eye-popping_19_0.0.png" width=256 height=256 /> | <img src="images/eye-popping_19_3.0.png" width=256 height=256 /> | | <img src="images/eye-popping_20_-3.0.png" width=256 height=256 /> | <img src="images/eye-popping_20_0.0.png" width=256 height=256 /> | <img src="images/eye-popping_20_3.0.png" width=256 height=256 /> | ## Download Weights for this model are available in Safetensors format. 
## Trigger words You can apply this LoRA with trigger words for additional effect: ``` eye-popping ``` ## Use in diffusers ```python from diffusers import StableDiffusionXLPipeline from diffusers import EulerAncestralDiscreteScheduler import torch pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors") pipe.to("cuda") pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config) # Load the LoRA pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.eye-popping', weight_name='eye-popping.safetensors', adapter_name="eye-popping") # Activate the LoRA pipe.set_adapters(["eye-popping"], adapter_weights=[2.0]) prompt = "medieval rich kingpin sitting in a tavern, eye-popping" negative_prompt = "nsfw" width = 512 height = 512 num_inference_steps = 10 guidance_scale = 2 image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0] image.save('result.png') ``` ## Support the Patreon If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI). By joining our Patreon, you'll gain access to an ever-growing library of over 470+ unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities. Your support on Patreon will allow us to continue developing and refining new models. ## Other resources - [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs - [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
[ "CRAFT" ]
Mahalingam/DistilBart-Med-Summary
Mahalingam
summarization
[ "transformers", "pytorch", "bart", "text2text-generation", "sagemaker", "summarization", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-12-21T16:09:24Z
2023-12-22T02:08:43+00:00
320
2
--- language: en tags: - sagemaker - bart - summarization widget: - text: "write the below JSON into normal text\n{\n \"Sex\": \"M\",\n \"ID\": 585248,\n\ \ \"DateOfBirth\": \"08/10/1995\",\n \"Age\": \"28 years\",\n \"VisitDate\"\ : \"09/25/2023\",\n \"LogNumber\": 6418481,\n \"Historian\": \"Self\",\n \"\ TriageNotes\": [\"fever\"],\n \"HistoryOfPresentIllness\": {\n \"Complaint\"\ : [\n \"The patient presents with a chief complaint of chills.\",\n \ \ \"The problem is made better by exercise and rest.\",\n \"The patient also\ \ reports change in appetite and chest pain/pressure as abnormal symptoms related\ \ to the complaint.\"\n ]\n }\n}" --- # Medical Summary Generation with BART This project involves a DistilBART model for generating medical summaries from input text. The model is trained to understand medical data and produce concise and informative summaries. ## Table of Contents - [Introduction](#introduction) - [Usage](#usage) - [Model Details](#model-details) - [Contact](#contact) ## Introduction The DistilBART-Med-Summary Generator is built using the Hugging Face Deep Learning Container and is designed to generate medical summaries from input text. This README provides information on how to use the model, details about the architecture, and where to find downloads. 
## Usage To use the model for medical summary generation, follow these steps: Install the required dependencies: - pip install transformers - pip install torch - pip install datasets ```python from transformers import pipeline summarizer = pipeline("summarization", model="Mahalingam/DistilBart-Med-Summary") conversation = '''write the below JSON into normal text { "Sex": "M", "ID": 585248, "DateOfBirth": "08/10/1995", "Age": "28 years", "VisitDate": "09/25/2023", "LogNumber": 6418481, "Historian": "Self", "TriageNotes": ["fever"], "HistoryOfPresentIllness": { "Complaint": [ "The patient presents with a chief complaint of chills.", "The problem is made better by exercise and rest.", "The patient also reports change in appetite and chest pain/pressure as abnormal symptoms related to the complaint." ] } } ''' summarizer(conversation) ``` ## Model-details Model Name: DistilBart-Med-Summary Task: Medical Summary Generation Architecture: DistilBART Training Data: Details about the medical dataset used for training Training Duration: Number of training steps, training time, etc. ## Contact For any inquiries or support related to this model, feel free to contact: Name : Mahalingam Balasubramanian Email : [email protected]
[ "MEDICAL DATA" ]
knowledgator/modern-gliner-bi-large-v1.0
knowledgator
token-classification
[ "gliner", "pytorch", "NER", "GLiNER", "information extraction", "encoder", "entity recognition", "modernbert", "token-classification", "en", "dataset:urchade/pile-mistral-v0.1", "dataset:numind/NuNER", "dataset:knowledgator/GLINER-multi-task-synthetic-data", "arxiv:2412.13663", "arxiv:2311.08526", "arxiv:2406.12925", "base_model:BAAI/bge-base-en-v1.5", "base_model:finetune:BAAI/bge-base-en-v1.5", "license:apache-2.0", "region:us" ]
2024-12-24T14:00:22Z
2025-02-26T07:21:32+00:00
319
40
--- base_model: - answerdotai/ModernBERT-large - BAAI/bge-base-en-v1.5 datasets: - urchade/pile-mistral-v0.1 - numind/NuNER - knowledgator/GLINER-multi-task-synthetic-data language: - en library_name: gliner license: apache-2.0 pipeline_tag: token-classification tags: - NER - GLiNER - information extraction - encoder - entity recognition - modernbert --- # About GLiNER is a Named Entity Recognition (NER) model capable of identifying any entity type using a bidirectional transformer encoder (BERT-like). It provides a practical alternative to traditional NER models, which are limited to predefined entities, and to Large Language Models (LLMs) that, despite their flexibility, are costly and large for resource-constrained scenarios. This particular version utilizes a bi-encoder architecture, where the textual encoder is [ModernBERT-large](https://huggingface.co/answerdotai/ModernBERT-large) and the entity label encoder is the sentence transformer [BGE-base-en](https://huggingface.co/BAAI/bge-base-en-v1.5). Such an architecture brings several advantages over uni-encoder GLiNER: * An unlimited number of entities can be recognized at a single time; * Faster inference if entity embeddings are preprocessed; * Better generalization to unseen entities; Utilization of ModernBERT delivers up to 4 times better efficiency in comparison to DeBERTa-based models and a context length of up to 8,192 tokens, while demonstrating comparable results. ![inference time comparison](modernbert_inference_time.png "Inference time comparison") However, the bi-encoder architecture has some drawbacks, such as a lack of inter-label interactions, which makes it hard for the model to disambiguate semantically similar but contextually different entities. 
### Installation & Usage Install or update the gliner package: ```bash pip install gliner -U ``` You need to install the latest version of transformers to use this model: ```bash pip install git+https://github.com/huggingface/transformers.git ``` Once you've downloaded the GLiNER library, you can import the GLiNER class. You can then load this model using `GLiNER.from_pretrained` and predict entities with `predict_entities`. ```python from gliner import GLiNER model = GLiNER.from_pretrained("knowledgator/modern-gliner-bi-large-v1.0") text = """ Cristiano Ronaldo dos Santos Aveiro (Portuguese pronunciation: [kɾiʃˈtjɐnu ʁɔˈnaldu]; born 5 February 1985) is a Portuguese professional footballer who plays as a forward for and captains both Saudi Pro League club Al Nassr and the Portugal national team. Widely regarded as one of the greatest players of all time, Ronaldo has won five Ballon d'Or awards,[note 3] a record three UEFA Men's Player of the Year Awards, and four European Golden Shoes, the most by a European player. He has won 33 trophies in his career, including seven league titles, five UEFA Champions Leagues, the UEFA European Championship and the UEFA Nations League. Ronaldo holds the records for most appearances (183), goals (140) and assists (42) in the Champions League, goals in the European Championship (14), international goals (128) and international appearances (205). He is one of the few players to have made over 1,200 professional career appearances, the most by an outfield player, and has scored over 850 official senior career goals for club and country, making him the top goalscorer of all time. 
""" labels = ["person", "award", "date", "competitions", "teams"] entities = model.predict_entities(text, labels, threshold=0.3) for entity in entities: print(entity["text"], "=>", entity["label"]) ``` ``` Cristiano Ronaldo dos Santos Aveiro => person 5 February 1985 => date Al Nassr => teams Portugal national team => teams Ballon d'Or => award UEFA Men's Player of the Year Awards => award European Golden Shoes => award UEFA Champions Leagues => competitions UEFA European Championship => competitions UEFA Nations League => competitions Champions League => competitions European Championship => competitions ``` If you want to use **flash attention** or increase sequence length, please, check the following code: Firstly, install Flash Attention and Triton packages: ```bash pip install flash-attn triton ``` ```python model = GLiNER.from_pretrained("knowledgator/modern-gliner-bi-large-v1.0", _attn_implementation = 'flash_attention_2', max_len = 2048).to('cuda:0') ``` If you have a large amount of entities and want to pre-embed them, please, refer to the following code snippet: ```python labels = ["your entities"] texts = ["your texts"] entity_embeddings = model.encode_labels(labels, batch_size = 8) outputs = model.batch_predict_with_embeds(texts, entity_embeddings, labels) ``` ### Benchmarks ![results on different datasets](modernbert_benchmarking.png "Results on different datasets") Below you can see the table with benchmarking results on various named entity recognition datasets: | Dataset | Score | |-------------------------|--------| | ACE 2004 | 30.5% | | ACE 2005 | 26.7% | | AnatEM | 37.2% | | Broad Tweet Corpus | 72.1% | | CoNLL 2003 | 69.3% | | FabNER | 22.0% | | FindVehicle | 40.3% | | GENIA_NER | 55.6% | | HarveyNER | 16.1% | | MultiNERD | 73.8% | | Ontonotes | 39.2% | | PolyglotNER | 49.1% | | TweetNER7 | 39.6% | | WikiANN en | 54.7% | | WikiNeural | 83.7% | | bc2gm | 53.7% | | bc4chemd | 52.1% | | bc5cdr | 67.0% | | ncbi | 61.7% | | **Average** | **49.7%** | 
| | | | CrossNER_AI | 58.1% | | CrossNER_literature | 60.0% | | CrossNER_music | 73.0% | | CrossNER_politics | 72.8% | | CrossNER_science | 66.5% | | mit-movie | 47.6% | | mit-restaurant | 40.6% | | **Average (zero-shot benchmark)** | **59.8%** | ### Join Our Discord Connect with our community on Discord for news, support, and discussion about our models. Join [Discord](https://discord.gg/dkyeAgs9DG). ## Citation If you use this model in your work, please cite: ```bibtex @misc{modernbert, title={Smarter, Better, Faster, Longer: A Modern Bidirectional Encoder for Fast, Memory Efficient, and Long Context Finetuning and Inference}, author={Benjamin Warner and Antoine Chaffin and Benjamin Clavié and Orion Weller and Oskar Hallström and Said Taghadouini and Alexis Gallagher and Raja Biswas and Faisal Ladhak and Tom Aarsen and Nathan Cooper and Griffin Adams and Jeremy Howard and Iacopo Poli}, year={2024}, eprint={2412.13663}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2412.13663}, } ``` ```bibtex @misc{zaratiana2023gliner, title={GLiNER: Generalist Model for Named Entity Recognition using Bidirectional Transformer}, author={Urchade Zaratiana and Nadi Tomeh and Pierre Holat and Thierry Charnois}, year={2023}, eprint={2311.08526}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ```bibtex @misc{stepanov2024gliner, title={GLiNER multi-task: Generalist Lightweight Model for Various Information Extraction Tasks}, author={Ihor Stepanov and Mykhailo Shtopko}, year={2024}, eprint={2406.12925}, archivePrefix={arXiv}, primaryClass={cs.LG} } ```
[ "ANATEM", "BC5CDR" ]
sinequa/passage-ranker-v1-XS-multilingual
sinequa
text-classification
[ "transformers", "pytorch", "bert", "text-classification", "de", "en", "es", "fr", "arxiv:1908.08962", "arxiv:1901.04085", "arxiv:2102.07033", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-07-06T09:32:35Z
2024-02-19T09:42:47+00:00
316
0
--- language: - de - en - es - fr --- # Model Card for `passage-ranker-v1-XS-multilingual` This model is a passage ranker developed by Sinequa. It produces a relevance score given a query-passage pair and is used to order search results. Model name: `passage-ranker-v1-XS-multilingual` ## Supported Languages The model was trained and tested in the following languages: - English - French - German - Spanish ## Scores | Metric | Value | |:--------------------|------:| | Relevance (NDCG@10) | 0.453 | Note that the relevance score is computed as an average over 14 retrieval datasets (see [details below](#evaluation-metrics)). ## Inference Times | GPU | Quantization type | Batch size 1 | Batch size 32 | |:------------------------------------------|:------------------|---------------:|---------------:| | NVIDIA A10 | FP16 | 1 ms | 2 ms | | NVIDIA A10 | FP32 | 1 ms | 7 ms | | NVIDIA T4 | FP16 | 1 ms | 6 ms | | NVIDIA T4 | FP32 | 1 ms | 20 ms | | NVIDIA L4 | FP16 | 1 ms | 3 ms | | NVIDIA L4 | FP32 | 2 ms | 8 ms | ## GPU Memory Usage | Quantization type | Memory | |:-------------------------------------------------|-----------:| | FP16 | 150 MiB | | FP32 | 300 MiB | Note that GPU memory usage only includes how much GPU memory the actual model consumes on an NVIDIA T4 GPU with a batch size of 32. It does not include the fixed amount of memory that is consumed by the ONNX Runtime upon initialization, which can be around 0.5 to 1 GiB depending on the used GPU. 
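The scores above are reported in NDCG@10. For reference, here is a minimal sketch of that metric (a standard formulation; Sinequa's exact evaluation code may differ):

```python
import math

def dcg_at_k(relevances, k=10):
    # Discounted cumulative gain over the top-k ranked results.
    return sum(rel / math.log2(rank + 2) for rank, rel in enumerate(relevances[:k]))

def ndcg_at_k(relevances, k=10):
    # Normalize by the DCG of an ideal (descending-relevance) ordering.
    ideal = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal if ideal > 0 else 0.0
```

A perfectly ordered result list scores 1.0; any mis-ranking of relevant passages lowers the score.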
## Requirements - Minimal Sinequa version: 11.10.0 - Minimal Sinequa version for using FP16 models and GPUs with CUDA compute capability of 8.9+ (like NVIDIA L4): 11.11.0 - [Cuda compute capability](https://developer.nvidia.com/cuda-gpus): above 5.0 (above 6.0 for FP16 use) ## Model Details ### Overview - Number of parameters: 16 million - Base language model: Homegrown Sinequa BERT-Mini ([Paper](https://arxiv.org/abs/1908.08962)) pretrained in the four supported languages - Insensitive to casing and accents - Training procedure: [MonoBERT](https://arxiv.org/abs/1901.04085) ### Training Data - Probably-Asked Questions ([Paper](https://arxiv.org/abs/2102.07033), [Official Page](https://github.com/facebookresearch/PAQ)) - Original English dataset - Translated datasets for the other three supported languages ### Evaluation Metrics To determine the relevance score, we averaged the results that we obtained when evaluating on the datasets of the [BEIR benchmark](https://github.com/beir-cellar/beir). Note that all these datasets are in English. | Dataset | NDCG@10 | |:------------------|--------:| | Average | 0.453 | | | | | Arguana | 0.516 | | CLIMATE-FEVER | 0.159 | | DBPedia Entity | 0.355 | | FEVER | 0.729 | | FiQA-2018 | 0.282 | | HotpotQA | 0.688 | | MS MARCO | 0.334 | | NFCorpus | 0.341 | | NQ | 0.438 | | Quora | 0.726 | | SCIDOCS | 0.143 | | SciFact | 0.630 | | TREC-COVID | 0.664 | | Webis-Touche-2020 | 0.337 | We evaluated the model on the datasets of the [MIRACL benchmark](https://github.com/project-miracl/miracl) to test its multilingual capacities. Note that not all training languages are part of the benchmark, so we only report the metrics for the existing languages. | Language | NDCG@10 | |:---------|--------:| | French | 0.346 | | German | 0.368 | | Spanish | 0.416 |
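As stated above, the relevance score is the plain mean of the 14 BEIR NDCG@10 values; this can be checked directly (dataset names and values copied from the table above):

```python
# Per-dataset NDCG@10 values from the BEIR table above.
beir_ndcg = {
    "Arguana": 0.516, "CLIMATE-FEVER": 0.159, "DBPedia Entity": 0.355,
    "FEVER": 0.729, "FiQA-2018": 0.282, "HotpotQA": 0.688,
    "MS MARCO": 0.334, "NFCorpus": 0.341, "NQ": 0.438,
    "Quora": 0.726, "SCIDOCS": 0.143, "SciFact": 0.630,
    "TREC-COVID": 0.664, "Webis-Touche-2020": 0.337,
}
average = round(sum(beir_ndcg.values()) / len(beir_ndcg), 3)
print(average)  # 0.453, the reported relevance score
```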
[ "SCIFACT" ]
RichardErkhov/EleutherAI_-_pythia-1b-gguf
RichardErkhov
null
[ "gguf", "arxiv:2304.01373", "arxiv:2101.00027", "arxiv:2201.07311", "endpoints_compatible", "region:us" ]
2024-10-31T17:51:19Z
2024-10-31T18:10:56+00:00
316
1
--- {} --- Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) pythia-1b - GGUF - Model creator: https://huggingface.co/EleutherAI/ - Original model: https://huggingface.co/EleutherAI/pythia-1b/ | Name | Quant method | Size | | ---- | ---- | ---- | | [pythia-1b.Q2_K.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1b-gguf/blob/main/pythia-1b.Q2_K.gguf) | Q2_K | 0.39GB | | [pythia-1b.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1b-gguf/blob/main/pythia-1b.Q3_K_S.gguf) | Q3_K_S | 0.45GB | | [pythia-1b.Q3_K.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1b-gguf/blob/main/pythia-1b.Q3_K.gguf) | Q3_K | 0.51GB | | [pythia-1b.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1b-gguf/blob/main/pythia-1b.Q3_K_M.gguf) | Q3_K_M | 0.51GB | | [pythia-1b.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1b-gguf/blob/main/pythia-1b.Q3_K_L.gguf) | Q3_K_L | 0.55GB | | [pythia-1b.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1b-gguf/blob/main/pythia-1b.IQ4_XS.gguf) | IQ4_XS | 0.54GB | | [pythia-1b.Q4_0.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1b-gguf/blob/main/pythia-1b.Q4_0.gguf) | Q4_0 | 0.56GB | | [pythia-1b.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1b-gguf/blob/main/pythia-1b.IQ4_NL.gguf) | IQ4_NL | 0.56GB | | [pythia-1b.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1b-gguf/blob/main/pythia-1b.Q4_K_S.gguf) | Q4_K_S | 0.56GB | | [pythia-1b.Q4_K.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1b-gguf/blob/main/pythia-1b.Q4_K.gguf) | Q4_K | 0.61GB | | [pythia-1b.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1b-gguf/blob/main/pythia-1b.Q4_K_M.gguf) | Q4_K_M | 0.61GB | | 
[pythia-1b.Q4_1.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1b-gguf/blob/main/pythia-1b.Q4_1.gguf) | Q4_1 | 0.61GB | | [pythia-1b.Q5_0.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1b-gguf/blob/main/pythia-1b.Q5_0.gguf) | Q5_0 | 0.66GB | | [pythia-1b.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1b-gguf/blob/main/pythia-1b.Q5_K_S.gguf) | Q5_K_S | 0.66GB | | [pythia-1b.Q5_K.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1b-gguf/blob/main/pythia-1b.Q5_K.gguf) | Q5_K | 0.71GB | | [pythia-1b.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1b-gguf/blob/main/pythia-1b.Q5_K_M.gguf) | Q5_K_M | 0.71GB | | [pythia-1b.Q5_1.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1b-gguf/blob/main/pythia-1b.Q5_1.gguf) | Q5_1 | 0.72GB | | [pythia-1b.Q6_K.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1b-gguf/blob/main/pythia-1b.Q6_K.gguf) | Q6_K | 0.78GB | | [pythia-1b.Q8_0.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1b-gguf/blob/main/pythia-1b.Q8_0.gguf) | Q8_0 | 1.0GB | Original model description: --- language: - en tags: - pytorch - causal-lm - pythia license: apache-2.0 datasets: - the_pile --- The *Pythia Scaling Suite* is a collection of models developed to facilitate interpretability research [(see paper)](https://arxiv.org/pdf/2304.01373.pdf). It contains two sets of eight models of sizes 70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two models: one trained on the Pile, and one trained on the Pile after the dataset has been globally deduplicated. All 8 model sizes are trained on the exact same data, in the exact same order. We also provide 154 intermediate checkpoints per model, hosted on Hugging Face as branches. The Pythia model suite was deliberately designed to promote scientific research on large language models, especially interpretability research. 
Despite not centering downstream performance as a design goal, we find the models <a href="#evaluations">match or exceed</a> the performance of similar and same-sized models, such as those in the OPT and GPT-Neo suites. <details> <summary style="font-weight:600">Details on previous early release and naming convention.</summary> Previously, we released an early version of the Pythia suite to the public. However, we decided to retrain the model suite to address a few hyperparameter discrepancies. This model card <a href="#changelog">lists the changes</a>; see appendix B in the Pythia paper for further discussion. We found no difference in benchmark performance between the two Pythia versions. The old models are [still available](https://huggingface.co/models?other=pythia_v0), but we suggest the retrained suite if you are just starting to use Pythia.<br> **This is the current release.** Please note that all models in the *Pythia* suite were renamed in January 2023. For clarity, a <a href="#naming-convention-and-parameter-count">table comparing the old and new names</a> is provided in this model card, together with exact parameter counts. </details> <br> # Pythia-1B ## Model Details - Developed by: [EleutherAI](http://eleuther.ai) - Model type: Transformer-based Language Model - Language: English - Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia) for training procedure, config files, and details on how to use. [See paper](https://arxiv.org/pdf/2304.01373.pdf) for more evals and implementation details. - Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox) - License: Apache 2.0 - Contact: to ask questions about this model, join the [EleutherAI Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`. Please read the existing *Pythia* documentation before asking about it in the EleutherAI Discord. For general correspondence: [[email protected]](mailto:[email protected]). 
<figure> | Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models | | -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: | | 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — | | 160M | 85,056,000 | 12 | 768 | 12 | 2M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M | | 410M | 302,311,424 | 24 | 1024 | 16 | 2M | 3.0 x 10<sup>-4</sup> | OPT-350M | | 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — | | 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 2M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B | | 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B | | 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B | | 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — | <figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and non-deduped models of a given size have the same hyperparameters. “Equivalent” models have <b>exactly</b> the same architecture, and the same number of non-embedding parameters.</figcaption> </figure> ## Uses and Limitations ### Intended Use The primary intended use of Pythia is research on the behavior, functionality, and limitations of large language models. This suite is intended to provide a controlled setting for performing scientific experiments. We also provide 154 checkpoints per model: initial `step0`, 10 log-spaced checkpoints `step{1,2,4...512}`, and 143 evenly-spaced checkpoints from `step1000` to `step143000`. These checkpoints are hosted on Hugging Face as branches. Note that branch `143000` corresponds exactly to the model checkpoint on the `main` branch of each model. You may also further fine-tune and adapt Pythia-1B for deployment, as long as your use is in accordance with the Apache 2.0 license. 
Pythia models work with the Hugging Face [Transformers Library](https://huggingface.co/docs/transformers/index). If you decide to use pre-trained Pythia-1B as a basis for your fine-tuned model, please conduct your own risk and bias assessment. ### Out-of-scope use The Pythia Suite is **not** intended for deployment. It is not in itself a product and cannot be used for human-facing interactions. For example, the model may generate harmful or offensive text. Please evaluate the risks associated with your particular use case. Pythia models are English-language only, and are not suitable for translation or generating text in other languages. Pythia-1B has not been fine-tuned for downstream contexts in which language models are commonly deployed, such as writing genre prose, or commercial chatbots. This means Pythia-1B will **not** respond to a given prompt the way a product like ChatGPT does. This is because, unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement Learning from Human Feedback (RLHF) to better “follow” human instructions. ### Limitations and biases The core functionality of a large language model is to take a string of text and predict the next token. The statistically most likely next token need not produce the most “accurate” text. Never rely on Pythia-1B to produce factually accurate output. This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset known to contain profanity and texts that are lewd or otherwise offensive. See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a discussion of documented biases with regards to gender, religion, and race. Pythia-1B may produce socially unacceptable or undesirable text, *even if* the prompt itself does not include anything explicitly offensive. If you plan on using text generated through, for example, the Hosted Inference API, we recommend having a human curate the outputs of this language model before presenting it to other people. 
Please inform your audience that the text was generated by Pythia-1B. ### Quickstart Pythia models can be loaded and used via the following code, demonstrated here for the third `pythia-70m-deduped` checkpoint: ```python from transformers import GPTNeoXForCausalLM, AutoTokenizer model = GPTNeoXForCausalLM.from_pretrained( "EleutherAI/pythia-70m-deduped", revision="step3000", cache_dir="./pythia-70m-deduped/step3000", ) tokenizer = AutoTokenizer.from_pretrained( "EleutherAI/pythia-70m-deduped", revision="step3000", cache_dir="./pythia-70m-deduped/step3000", ) inputs = tokenizer("Hello, I am", return_tensors="pt") tokens = model.generate(**inputs) tokenizer.decode(tokens[0]) ``` Revision/branch `step143000` corresponds exactly to the model checkpoint on the `main` branch of each model.<br> For more information on how to use all Pythia models, see [documentation on GitHub](https://github.com/EleutherAI/pythia). ## Training ### Training data [The Pile](https://pile.eleuther.ai/) is a 825GiB general-purpose dataset in English. It was created by EleutherAI specifically for training large language models. It contains texts from 22 diverse sources, roughly broken down into five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl), prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and miscellaneous (e.g. GitHub, Enron Emails). See [the Pile paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources, methodology, and a discussion of ethical implications. Consult [the datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation about the Pile and its component datasets. The Pile can be downloaded from the [official website](https://pile.eleuther.ai/), or from a [community mirror](https://the-eye.eu/public/AI/pile/).<br> The Pile was **not** deduplicated before being used to train Pythia-1B. ### Training procedure All models were trained on the exact same data, in the exact same order. 
Each model saw 299,892,736,000 tokens during training, and 143 checkpoints for each model are saved every 2,097,152,000 tokens, spaced evenly throughout training, from `step1000` to `step143000` (which is the same as `main`). In addition, we also provide frequent early checkpoints: `step0` and `step{1,2,4...512}`. This corresponds to training for just under 1 epoch on the Pile for non-deduplicated models, and about 1.5 epochs on the deduplicated Pile. All *Pythia* models were trained for 143000 steps at a batch size of 2M (2,097,152 tokens).<br> See [GitHub](https://github.com/EleutherAI/pythia) for more details on training procedure, including [how to reproduce it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br> Pythia uses the same tokenizer as [GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b). ## Evaluations All 16 *Pythia* models were evaluated using the [LM Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access the results by model and step at `results/json/*` in the [GitHub repository](https://github.com/EleutherAI/pythia/tree/main/results/json/).<br> Expand the sections below to see plots of evaluation results for all Pythia and Pythia-deduped models compared with OPT and BLOOM. 
<details> <summary>LAMBADA – OpenAI</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai_v1.png" style="width:auto"/> </details> <details> <summary>Physical Interaction: Question Answering (PIQA)</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa_v1.png" style="width:auto"/> </details> <details> <summary>WinoGrande</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande_v1.png" style="width:auto"/> </details> <details> <summary>AI2 Reasoning Challenge—Easy Set</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_easy_v1.png" style="width:auto"/> </details> <details> <summary>SciQ</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq_v1.png" style="width:auto"/> </details> ## Changelog This section compares differences between previously released [Pythia v0](https://huggingface.co/models?other=pythia_v0) and the current models. See Appendix B of the Pythia paper for further discussion of these changes and the motivation behind them. We found that retraining Pythia had no impact on benchmark performance. - All model sizes are now trained with uniform batch size of 2M tokens. Previously, the models of size 160M, 410M, and 1.4B parameters were trained with batch sizes of 4M tokens. - We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64, 128,256,512} in addition to every 1000 training steps. - Flash Attention was used in the new retrained suite. - We remedied a minor inconsistency that existed in the original suite: all models of size 2.8B parameters or smaller had a learning rate (LR) schedule which decayed to a minimum LR of 10% the starting LR rate, but the 6.9B and 12B models all used an LR schedule which decayed to a minimum LR of 0. In the redone training runs, we rectified this inconsistency: all models now were trained with LR decaying to a minimum of 0.1× their maximum LR. 
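The batch-size, step-count, and checkpoint figures quoted in this card are internally consistent; a quick arithmetic check (values copied from the text above):

```python
# Figures stated in this card, cross-checked against each other.
batch_tokens = 2_097_152               # batch size: 2M tokens per step
total_steps = 143_000                  # training steps through step143000
tokens_per_checkpoint = 2_097_152_000  # checkpoint interval in tokens

total_tokens = batch_tokens * total_steps
print(total_tokens)                           # 299892736000 tokens seen per model
print(total_tokens // tokens_per_checkpoint)  # 143 evenly spaced checkpoints
```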
### Naming convention and parameter count *Pythia* models were renamed in January 2023. It is possible that the old naming convention still persists in some documentation by accident. The current naming convention (70M, 160M, etc.) is based on total parameter count. <figure style="width:32em"> | current Pythia suffix | old suffix | total params | non-embedding params | | --------------------: | ---------: | -------------: | -------------------: | | 70M | 19M | 70,426,624 | 18,915,328 | | 160M | 125M | 162,322,944 | 85,056,000 | | 410M | 350M | 405,334,016 | 302,311,424 | | 1B | 800M | 1,011,781,632 | 805,736,448 | | 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 | | 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 | | 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 | | 12B | 13B | 11,846,072,320 | 11,327,027,200 | </figure>
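The 154 per-model checkpoint branches described in this card follow a simple naming scheme (`step0`, ten log-spaced early checkpoints `step{1,2,4...512}`, and 143 evenly spaced ones), which can be enumerated as a sketch:

```python
# Branch names for the 154 Pythia checkpoints hosted on Hugging Face.
branches = (
    ["step0"]
    + [f"step{2 ** i}" for i in range(10)]        # step1, step2, step4, ..., step512
    + [f"step{1000 * i}" for i in range(1, 144)]  # step1000 ... step143000
)
print(len(branches))  # 154
```

Any of these names can be passed as the `revision` argument of `from_pretrained`, as shown in the Quickstart above; `step143000` matches the `main` branch.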
[ "SCIQ" ]
draganjovanovich/prodigy-sm-base-v0.1-GGUF
draganjovanovich
null
[ "gguf", "en", "sr", "hr", "bs", "arxiv:2309.09530", "arxiv:2403.19522", "license:apache-2.0", "endpoints_compatible", "region:us" ]
2024-04-28T12:15:28Z
2024-08-14T07:41:42+00:00
313
3
--- language: - en - sr - hr - bs license: apache-2.0 --- # Prodigy SM Base v0.1 <img src="https://cdn-uploads.huggingface.co/production/uploads/617bbeec14572ebe9e6ea83f/4p2zaOWu6kTS3fcbevHef.png" width="70%" height="70%"> In our latest endeavour, we performed continued pre-training of a large language model (Mistral-7b-v0.1) to understand and generate text in new languages, including **Serbian**, **Bosnian** and **Croatian** using an innovative approach. Rather than depending only on extensive datasets in the target language, our method utilizes a more compact set of both synthetic and human-curated data along with some mixture of CC Web data, which is implemented in two strategic phases: 1. Establishing a comprehensive demonstration of all grammatical and orthographic rules pertinent to the language. 2. Supplying a diverse array of examples that not only reinforce these rules but also integrate a wide range of linguistic nuances. While our approach is uniquely tailored to our objectives, we have drawn some inspiration from recent advancements in language model training. Specifically, the conceptual strategies discussed in the paper [ADAPTING LARGE LANGUAGE MODELS VIA READING COMPREHENSION](https://arxiv.org/pdf/2309.09530.pdf) provided valuable insights, though our methods diverge significantly in practice. By adopting this inspired approach, we aim to efficiently teach the model new languages with a balanced blend of accuracy and linguistic diversity. So... Did it work?! # **Yes!** See the benchmark results, or even better, download the model and try it yourself. As you know by now, there's no better benchmark than a quick 'try it yourself' vibe check. 
:) <img src="https://cdn-uploads.huggingface.co/production/uploads/617bbeec14572ebe9e6ea83f/C9m_OjnYEpQo43VCrwz4A.png" width="100%" height="100%"> Here, we demonstrate results of a benchmark that is not frequently performed, yet equally important: how adapting the model for a new language impacted its original English-only performance. <img src="https://cdn-uploads.huggingface.co/production/uploads/617bbeec14572ebe9e6ea83f/IPY0myfQI-Ne5x6b11glz.png" width="100%" height="100%"> *All evals are performed in a zero-shot manner. *Also bear in mind that, unlike Prodigy SM Base, the llama-2-7b, llama-3-8b and mistral-7b models weren't trained on extensive Serbian-language datasets; these benchmarks demonstrate that primarily English models can be adapted to other languages. So, as you can see, we successfully improved the original model's performance for Serbian language use cases while retaining, or even slightly improving, its performance for English. ### Training results Training results of continued pre-training of [mistral-7b-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) <img src="https://cdn-uploads.huggingface.co/production/uploads/617bbeec14572ebe9e6ea83f/5xeJ-vfWk4RhJNC7t5I0g.png" width="70%" height="70%"> <img src="https://cdn-uploads.huggingface.co/production/uploads/617bbeec14572ebe9e6ea83f/R4R8ai8LaN3WlYCOenUyb.png" width="70%" height="70%"> As a last experimental step, we merged the produced model with **Mistral-7B-v0.1** and two earlier checkpoints from **prodigy-sm-base** using the [Model Stock](https://arxiv.org/abs/2403.19522) method. # Notes As this is a base model, there is no chat template or strict chat-following capability. This model is the best candidate for further pre-training on the Serbian language, as there is a lot more room for improvement (you can hit a sweet spot), or for the next step in the pipeline, such as some form of chat or instruct tuning. 
If you want a model that is already instruction-tuned, we did that too; check **Prodigy SM Instruct v0.1** # Prodigy SM Instruct v0.1 🚀[prodigy-sm-instruct]() **COMING SOON** And stay tuned for: [prodigy-sm-base (llama-3.1)]() **COMING SOON** [prodigy-sm-instruct (llama-3.1)]() **COMING SOON** 📢 Also, we are excited to announce that [iskon.ai](https://Iskon.ai) will soon launch an API platform featuring the advanced **Prodigy** series of models, advanced AI tools and much more! 🚀 # Thanks - [gordicaleksa/serbian-llm-eval](https://github.com/gordicaleksa/serbian-llm-eval) and his community for curating translations and adaptation of [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) that we used to perform benchmarks. - [jondurbin](https://huggingface.co/jondurbin) for the amazing airoboros framework - [teknium](https://huggingface.co/teknium) for various insights shared on Discord and Twitter aka x.com - [Eric](https://twitter.com/erhartford) for various insights shared on Discord and Twitter aka x.com - [mergekit](https://github.com/arcee-ai/mergekit) for model merging tools *Huge thanks to Redmond.ai for generous DGX cloud credits* [redmond.ai]( https://redmond.ai)
[ "BEAR" ]
McGill-NLP/LLM2Vec-Llama-2-7b-chat-hf-mntp-supervised
McGill-NLP
sentence-similarity
[ "peft", "safetensors", "text-embedding", "embeddings", "information-retrieval", "beir", "text-classification", "language-model", "text-clustering", "text-semantic-similarity", "text-evaluation", "text-reranking", "feature-extraction", "sentence-similarity", "Sentence Similarity", "natural_questions", "ms_marco", "fever", "hotpot_qa", "mteb", "en", "arxiv:2404.05961", "license:mit", "model-index", "region:us" ]
2024-04-04T05:48:51Z
2024-04-11T20:10:10+00:00
312
3
--- language: - en library_name: peft license: mit pipeline_tag: sentence-similarity tags: - text-embedding - embeddings - information-retrieval - beir - text-classification - language-model - text-clustering - text-semantic-similarity - text-evaluation - text-reranking - feature-extraction - sentence-similarity - Sentence Similarity - natural_questions - ms_marco - fever - hotpot_qa - mteb model-index: - name: LLM2Vec-Llama-2-supervised results: - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en) type: mteb/amazon_counterfactual config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 82.22388059701493 - type: ap value: 47.788307673555714 - type: f1 value: 76.49604943193079 - task: type: Classification dataset: name: MTEB AmazonPolarityClassification type: mteb/amazon_polarity config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 89.69365 - type: ap value: 86.10524801582373 - type: f1 value: 89.68072139277054 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (en) type: mteb/amazon_reviews_multi config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 48.472 - type: f1 value: 47.393562374719444 - task: type: Retrieval dataset: name: MTEB ArguAna type: arguana config: default split: test revision: None metrics: - type: map_at_1 value: 29.942999999999998 - type: map_at_10 value: 47.233999999999995 - type: map_at_100 value: 48.031 - type: map_at_1000 value: 48.033 - type: map_at_3 value: 42.307 - type: map_at_5 value: 45.269 - type: mrr_at_1 value: 30.797 - type: mrr_at_10 value: 47.53 - type: mrr_at_100 value: 48.327 - type: mrr_at_1000 value: 48.329 - type: mrr_at_3 value: 42.662 - type: mrr_at_5 value: 45.564 - type: ndcg_at_1 value: 29.942999999999998 - type: ndcg_at_10 value: 56.535000000000004 - type: ndcg_at_100 value: 59.699999999999996 - 
type: ndcg_at_1000 value: 59.731 - type: ndcg_at_3 value: 46.397 - type: ndcg_at_5 value: 51.747 - type: precision_at_1 value: 29.942999999999998 - type: precision_at_10 value: 8.613 - type: precision_at_100 value: 0.9939999999999999 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 19.417 - type: precision_at_5 value: 14.252999999999998 - type: recall_at_1 value: 29.942999999999998 - type: recall_at_10 value: 86.131 - type: recall_at_100 value: 99.431 - type: recall_at_1000 value: 99.644 - type: recall_at_3 value: 58.25 - type: recall_at_5 value: 71.26599999999999 - task: type: Clustering dataset: name: MTEB ArxivClusteringP2P type: mteb/arxiv-clustering-p2p config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 43.136536817000525 - task: type: Clustering dataset: name: MTEB ArxivClusteringS2S type: mteb/arxiv-clustering-s2s config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 42.37552764639677 - task: type: Reranking dataset: name: MTEB AskUbuntuDupQuestions type: mteb/askubuntudupquestions-reranking config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 63.13252095544898 - type: mrr value: 75.23721584663414 - task: type: STS dataset: name: MTEB BIOSSES type: mteb/biosses-sts config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_spearman value: 82.13259433844514 - task: type: Classification dataset: name: MTEB Banking77Classification type: mteb/banking77 config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 88.16558441558442 - type: f1 value: 88.1065214360906 - task: type: Clustering dataset: name: MTEB BiorxivClusteringP2P type: mteb/biorxiv-clustering-p2p config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 
35.88158182824787 - task: type: Clustering dataset: name: MTEB BiorxivClusteringS2S type: mteb/biorxiv-clustering-s2s config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 34.80880955757979 - task: type: Retrieval dataset: name: MTEB CQADupstackAndroidRetrieval type: cqadupstack/android config: default split: test revision: None metrics: - type: map_at_1 value: 35.793 - type: map_at_10 value: 48.413000000000004 - type: map_at_100 value: 50.112 - type: map_at_1000 value: 50.212999999999994 - type: map_at_3 value: 44.656 - type: map_at_5 value: 46.577 - type: mrr_at_1 value: 44.921 - type: mrr_at_10 value: 55.16 - type: mrr_at_100 value: 55.886 - type: mrr_at_1000 value: 55.915000000000006 - type: mrr_at_3 value: 52.861000000000004 - type: mrr_at_5 value: 54.113 - type: ndcg_at_1 value: 44.921 - type: ndcg_at_10 value: 55.205000000000005 - type: ndcg_at_100 value: 60.62800000000001 - type: ndcg_at_1000 value: 61.949 - type: ndcg_at_3 value: 50.597 - type: ndcg_at_5 value: 52.261 - type: precision_at_1 value: 44.921 - type: precision_at_10 value: 10.73 - type: precision_at_100 value: 1.6809999999999998 - type: precision_at_1000 value: 0.208 - type: precision_at_3 value: 24.701999999999998 - type: precision_at_5 value: 17.339 - type: recall_at_1 value: 35.793 - type: recall_at_10 value: 67.49300000000001 - type: recall_at_100 value: 89.74499999999999 - type: recall_at_1000 value: 97.855 - type: recall_at_3 value: 52.586 - type: recall_at_5 value: 58.267 - task: type: Retrieval dataset: name: MTEB CQADupstackEnglishRetrieval type: cqadupstack/english config: default split: test revision: None metrics: - type: map_at_1 value: 35.989 - type: map_at_10 value: 47.61 - type: map_at_100 value: 48.956 - type: map_at_1000 value: 49.074 - type: map_at_3 value: 44.563 - type: map_at_5 value: 46.181 - type: mrr_at_1 value: 45.096000000000004 - type: mrr_at_10 value: 53.583999999999996 - type: mrr_at_100 value: 
54.242000000000004 - type: mrr_at_1000 value: 54.277 - type: mrr_at_3 value: 51.73 - type: mrr_at_5 value: 52.759 - type: ndcg_at_1 value: 45.096000000000004 - type: ndcg_at_10 value: 53.318 - type: ndcg_at_100 value: 57.541 - type: ndcg_at_1000 value: 59.30800000000001 - type: ndcg_at_3 value: 49.725 - type: ndcg_at_5 value: 51.117000000000004 - type: precision_at_1 value: 45.096000000000004 - type: precision_at_10 value: 10.032 - type: precision_at_100 value: 1.559 - type: precision_at_1000 value: 0.201 - type: precision_at_3 value: 24.331 - type: precision_at_5 value: 16.777 - type: recall_at_1 value: 35.989 - type: recall_at_10 value: 62.759 - type: recall_at_100 value: 80.353 - type: recall_at_1000 value: 91.328 - type: recall_at_3 value: 51.127 - type: recall_at_5 value: 55.823 - task: type: Retrieval dataset: name: MTEB CQADupstackGamingRetrieval type: cqadupstack/gaming config: default split: test revision: None metrics: - type: map_at_1 value: 44.277 - type: map_at_10 value: 57.699 - type: map_at_100 value: 58.718 - type: map_at_1000 value: 58.754 - type: map_at_3 value: 54.04 - type: map_at_5 value: 56.184999999999995 - type: mrr_at_1 value: 50.658 - type: mrr_at_10 value: 61.245000000000005 - type: mrr_at_100 value: 61.839999999999996 - type: mrr_at_1000 value: 61.85699999999999 - type: mrr_at_3 value: 58.797999999999995 - type: mrr_at_5 value: 60.35 - type: ndcg_at_1 value: 50.658 - type: ndcg_at_10 value: 63.788 - type: ndcg_at_100 value: 67.52 - type: ndcg_at_1000 value: 68.12 - type: ndcg_at_3 value: 57.923 - type: ndcg_at_5 value: 60.976 - type: precision_at_1 value: 50.658 - type: precision_at_10 value: 10.257 - type: precision_at_100 value: 1.303 - type: precision_at_1000 value: 0.13799999999999998 - type: precision_at_3 value: 25.705 - type: precision_at_5 value: 17.718 - type: recall_at_1 value: 44.277 - type: recall_at_10 value: 78.056 - type: recall_at_100 value: 93.973 - type: recall_at_1000 value: 97.946 - type: recall_at_3 value: 62.578 - 
type: recall_at_5 value: 70.03 - task: type: Retrieval dataset: name: MTEB CQADupstackGisRetrieval type: cqadupstack/gis config: default split: test revision: None metrics: - type: map_at_1 value: 27.101 - type: map_at_10 value: 36.775000000000006 - type: map_at_100 value: 37.901 - type: map_at_1000 value: 37.97 - type: map_at_3 value: 33.721000000000004 - type: map_at_5 value: 35.641 - type: mrr_at_1 value: 29.153000000000002 - type: mrr_at_10 value: 38.951 - type: mrr_at_100 value: 39.896 - type: mrr_at_1000 value: 39.946 - type: mrr_at_3 value: 36.102000000000004 - type: mrr_at_5 value: 37.96 - type: ndcg_at_1 value: 29.153000000000002 - type: ndcg_at_10 value: 42.134 - type: ndcg_at_100 value: 47.499 - type: ndcg_at_1000 value: 49.169000000000004 - type: ndcg_at_3 value: 36.351 - type: ndcg_at_5 value: 39.596 - type: precision_at_1 value: 29.153000000000002 - type: precision_at_10 value: 6.508 - type: precision_at_100 value: 0.966 - type: precision_at_1000 value: 0.11499999999999999 - type: precision_at_3 value: 15.367 - type: precision_at_5 value: 11.096 - type: recall_at_1 value: 27.101 - type: recall_at_10 value: 56.447 - type: recall_at_100 value: 80.828 - type: recall_at_1000 value: 93.171 - type: recall_at_3 value: 41.087 - type: recall_at_5 value: 48.888999999999996 - task: type: Retrieval dataset: name: MTEB CQADupstackMathematicaRetrieval type: cqadupstack/mathematica config: default split: test revision: None metrics: - type: map_at_1 value: 19.227 - type: map_at_10 value: 28.965000000000003 - type: map_at_100 value: 30.383 - type: map_at_1000 value: 30.494 - type: map_at_3 value: 26.157999999999998 - type: map_at_5 value: 27.794 - type: mrr_at_1 value: 23.756 - type: mrr_at_10 value: 33.728 - type: mrr_at_100 value: 34.743 - type: mrr_at_1000 value: 34.799 - type: mrr_at_3 value: 31.074 - type: mrr_at_5 value: 32.803 - type: ndcg_at_1 value: 23.756 - type: ndcg_at_10 value: 34.772 - type: ndcg_at_100 value: 41.041 - type: ndcg_at_1000 value: 43.399 - 
type: ndcg_at_3 value: 29.776000000000003 - type: ndcg_at_5 value: 32.318999999999996 - type: precision_at_1 value: 23.756 - type: precision_at_10 value: 6.505 - type: precision_at_100 value: 1.107 - type: precision_at_1000 value: 0.14400000000000002 - type: precision_at_3 value: 14.594 - type: precision_at_5 value: 10.671999999999999 - type: recall_at_1 value: 19.227 - type: recall_at_10 value: 47.514 - type: recall_at_100 value: 74.378 - type: recall_at_1000 value: 90.615 - type: recall_at_3 value: 33.995 - type: recall_at_5 value: 40.361000000000004 - task: type: Retrieval dataset: name: MTEB CQADupstackPhysicsRetrieval type: cqadupstack/physics config: default split: test revision: None metrics: - type: map_at_1 value: 34.164 - type: map_at_10 value: 45.943 - type: map_at_100 value: 47.321999999999996 - type: map_at_1000 value: 47.426 - type: map_at_3 value: 42.485 - type: map_at_5 value: 44.440000000000005 - type: mrr_at_1 value: 41.577999999999996 - type: mrr_at_10 value: 51.373000000000005 - type: mrr_at_100 value: 52.176 - type: mrr_at_1000 value: 52.205999999999996 - type: mrr_at_3 value: 49.07 - type: mrr_at_5 value: 50.451 - type: ndcg_at_1 value: 41.577999999999996 - type: ndcg_at_10 value: 52.071 - type: ndcg_at_100 value: 57.467999999999996 - type: ndcg_at_1000 value: 59.068 - type: ndcg_at_3 value: 47.053 - type: ndcg_at_5 value: 49.508 - type: precision_at_1 value: 41.577999999999996 - type: precision_at_10 value: 9.461 - type: precision_at_100 value: 1.425 - type: precision_at_1000 value: 0.17500000000000002 - type: precision_at_3 value: 22.425 - type: precision_at_5 value: 15.823 - type: recall_at_1 value: 34.164 - type: recall_at_10 value: 64.446 - type: recall_at_100 value: 86.978 - type: recall_at_1000 value: 96.976 - type: recall_at_3 value: 50.358999999999995 - type: recall_at_5 value: 56.825 - task: type: Retrieval dataset: name: MTEB CQADupstackProgrammersRetrieval type: cqadupstack/programmers config: default split: test revision: None 
metrics: - type: map_at_1 value: 30.988 - type: map_at_10 value: 43.293 - type: map_at_100 value: 44.64 - type: map_at_1000 value: 44.735 - type: map_at_3 value: 39.041 - type: map_at_5 value: 41.461999999999996 - type: mrr_at_1 value: 39.498 - type: mrr_at_10 value: 49.763000000000005 - type: mrr_at_100 value: 50.517 - type: mrr_at_1000 value: 50.556 - type: mrr_at_3 value: 46.747 - type: mrr_at_5 value: 48.522 - type: ndcg_at_1 value: 39.498 - type: ndcg_at_10 value: 50.285000000000004 - type: ndcg_at_100 value: 55.457 - type: ndcg_at_1000 value: 57.062999999999995 - type: ndcg_at_3 value: 43.795 - type: ndcg_at_5 value: 46.813 - type: precision_at_1 value: 39.498 - type: precision_at_10 value: 9.486 - type: precision_at_100 value: 1.403 - type: precision_at_1000 value: 0.172 - type: precision_at_3 value: 21.081 - type: precision_at_5 value: 15.434000000000001 - type: recall_at_1 value: 30.988 - type: recall_at_10 value: 64.751 - type: recall_at_100 value: 86.496 - type: recall_at_1000 value: 96.86200000000001 - type: recall_at_3 value: 46.412 - type: recall_at_5 value: 54.381 - task: type: Retrieval dataset: name: MTEB CQADupstackRetrieval type: mteb/cqadupstack config: default split: test revision: None metrics: - type: map_at_1 value: 29.636000000000003 - type: map_at_10 value: 40.15091666666667 - type: map_at_100 value: 41.47933333333333 - type: map_at_1000 value: 41.58425 - type: map_at_3 value: 36.98025 - type: map_at_5 value: 38.76483333333333 - type: mrr_at_1 value: 35.3525 - type: mrr_at_10 value: 44.62258333333334 - type: mrr_at_100 value: 45.47491666666667 - type: mrr_at_1000 value: 45.52275 - type: mrr_at_3 value: 42.18574999999999 - type: mrr_at_5 value: 43.608333333333334 - type: ndcg_at_1 value: 35.3525 - type: ndcg_at_10 value: 45.935333333333325 - type: ndcg_at_100 value: 51.185249999999996 - type: ndcg_at_1000 value: 53.07075 - type: ndcg_at_3 value: 40.893416666666674 - type: ndcg_at_5 value: 43.272916666666674 - type: precision_at_1 value: 
35.3525 - type: precision_at_10 value: 8.118 - type: precision_at_100 value: 1.2704166666666667 - type: precision_at_1000 value: 0.16158333333333333 - type: precision_at_3 value: 18.987000000000002 - type: precision_at_5 value: 13.416083333333335 - type: recall_at_1 value: 29.636000000000003 - type: recall_at_10 value: 58.38899999999999 - type: recall_at_100 value: 81.08758333333334 - type: recall_at_1000 value: 93.93433333333333 - type: recall_at_3 value: 44.1485 - type: recall_at_5 value: 50.43808333333334 - task: type: Retrieval dataset: name: MTEB CQADupstackStatsRetrieval type: cqadupstack/stats config: default split: test revision: None metrics: - type: map_at_1 value: 25.102999999999998 - type: map_at_10 value: 33.822 - type: map_at_100 value: 34.77 - type: map_at_1000 value: 34.862 - type: map_at_3 value: 31.305 - type: map_at_5 value: 32.714999999999996 - type: mrr_at_1 value: 28.221 - type: mrr_at_10 value: 36.677 - type: mrr_at_100 value: 37.419999999999995 - type: mrr_at_1000 value: 37.49 - type: mrr_at_3 value: 34.407 - type: mrr_at_5 value: 35.510999999999996 - type: ndcg_at_1 value: 28.221 - type: ndcg_at_10 value: 38.739000000000004 - type: ndcg_at_100 value: 43.4 - type: ndcg_at_1000 value: 45.759 - type: ndcg_at_3 value: 34.076 - type: ndcg_at_5 value: 36.153999999999996 - type: precision_at_1 value: 28.221 - type: precision_at_10 value: 6.227 - type: precision_at_100 value: 0.9339999999999999 - type: precision_at_1000 value: 0.122 - type: precision_at_3 value: 14.979999999999999 - type: precision_at_5 value: 10.306999999999999 - type: recall_at_1 value: 25.102999999999998 - type: recall_at_10 value: 50.924 - type: recall_at_100 value: 72.507 - type: recall_at_1000 value: 89.869 - type: recall_at_3 value: 38.041000000000004 - type: recall_at_5 value: 43.139 - task: type: Retrieval dataset: name: MTEB CQADupstackTexRetrieval type: cqadupstack/tex config: default split: test revision: None metrics: - type: map_at_1 value: 19.284000000000002 - type: 
map_at_10 value: 27.632 - type: map_at_100 value: 28.811999999999998 - type: map_at_1000 value: 28.937 - type: map_at_3 value: 24.884 - type: map_at_5 value: 26.479999999999997 - type: mrr_at_1 value: 23.641000000000002 - type: mrr_at_10 value: 31.716 - type: mrr_at_100 value: 32.644 - type: mrr_at_1000 value: 32.717 - type: mrr_at_3 value: 29.284 - type: mrr_at_5 value: 30.697000000000003 - type: ndcg_at_1 value: 23.641000000000002 - type: ndcg_at_10 value: 32.805 - type: ndcg_at_100 value: 38.229 - type: ndcg_at_1000 value: 40.938 - type: ndcg_at_3 value: 28.116999999999997 - type: ndcg_at_5 value: 30.442999999999998 - type: precision_at_1 value: 23.641000000000002 - type: precision_at_10 value: 6.05 - type: precision_at_100 value: 1.0250000000000001 - type: precision_at_1000 value: 0.14400000000000002 - type: precision_at_3 value: 13.478000000000002 - type: precision_at_5 value: 9.876 - type: recall_at_1 value: 19.284000000000002 - type: recall_at_10 value: 44.257999999999996 - type: recall_at_100 value: 68.475 - type: recall_at_1000 value: 87.362 - type: recall_at_3 value: 31.09 - type: recall_at_5 value: 37.13 - task: type: Retrieval dataset: name: MTEB CQADupstackUnixRetrieval type: cqadupstack/unix config: default split: test revision: None metrics: - type: map_at_1 value: 30.301000000000002 - type: map_at_10 value: 40.65 - type: map_at_100 value: 41.934 - type: map_at_1000 value: 42.025 - type: map_at_3 value: 37.482 - type: map_at_5 value: 39.364 - type: mrr_at_1 value: 35.728 - type: mrr_at_10 value: 44.836999999999996 - type: mrr_at_100 value: 45.747 - type: mrr_at_1000 value: 45.800000000000004 - type: mrr_at_3 value: 42.335 - type: mrr_at_5 value: 43.818 - type: ndcg_at_1 value: 35.728 - type: ndcg_at_10 value: 46.199 - type: ndcg_at_100 value: 51.721 - type: ndcg_at_1000 value: 53.751000000000005 - type: ndcg_at_3 value: 41.053 - type: ndcg_at_5 value: 43.686 - type: precision_at_1 value: 35.728 - type: precision_at_10 value: 7.836 - type: 
precision_at_100 value: 1.179 - type: precision_at_1000 value: 0.146 - type: precision_at_3 value: 18.781 - type: precision_at_5 value: 13.245999999999999 - type: recall_at_1 value: 30.301000000000002 - type: recall_at_10 value: 58.626999999999995 - type: recall_at_100 value: 82.245 - type: recall_at_1000 value: 96.177 - type: recall_at_3 value: 44.533 - type: recall_at_5 value: 51.449 - task: type: Retrieval dataset: name: MTEB CQADupstackWebmastersRetrieval type: cqadupstack/webmasters config: default split: test revision: None metrics: - type: map_at_1 value: 29.203000000000003 - type: map_at_10 value: 38.988 - type: map_at_100 value: 40.986 - type: map_at_1000 value: 41.198 - type: map_at_3 value: 36.069 - type: map_at_5 value: 37.547000000000004 - type: mrr_at_1 value: 35.178 - type: mrr_at_10 value: 43.858999999999995 - type: mrr_at_100 value: 44.938 - type: mrr_at_1000 value: 44.986 - type: mrr_at_3 value: 41.535 - type: mrr_at_5 value: 42.809999999999995 - type: ndcg_at_1 value: 35.178 - type: ndcg_at_10 value: 45.025 - type: ndcg_at_100 value: 51.397999999999996 - type: ndcg_at_1000 value: 53.419000000000004 - type: ndcg_at_3 value: 40.451 - type: ndcg_at_5 value: 42.304 - type: precision_at_1 value: 35.178 - type: precision_at_10 value: 8.538 - type: precision_at_100 value: 1.755 - type: precision_at_1000 value: 0.249 - type: precision_at_3 value: 18.906 - type: precision_at_5 value: 13.241 - type: recall_at_1 value: 29.203000000000003 - type: recall_at_10 value: 55.876999999999995 - type: recall_at_100 value: 83.234 - type: recall_at_1000 value: 96.056 - type: recall_at_3 value: 42.472 - type: recall_at_5 value: 47.78 - task: type: Retrieval dataset: name: MTEB CQADupstackWordpressRetrieval type: cqadupstack/wordpress config: default split: test revision: None metrics: - type: map_at_1 value: 24.202 - type: map_at_10 value: 32.021 - type: map_at_100 value: 33.217999999999996 - type: map_at_1000 value: 33.323 - type: map_at_3 value: 29.359 - type: 
map_at_5 value: 30.792 - type: mrr_at_1 value: 26.802 - type: mrr_at_10 value: 34.577999999999996 - type: mrr_at_100 value: 35.65 - type: mrr_at_1000 value: 35.724000000000004 - type: mrr_at_3 value: 32.286 - type: mrr_at_5 value: 33.506 - type: ndcg_at_1 value: 26.802 - type: ndcg_at_10 value: 36.882999999999996 - type: ndcg_at_100 value: 42.321 - type: ndcg_at_1000 value: 44.906 - type: ndcg_at_3 value: 31.804 - type: ndcg_at_5 value: 34.098 - type: precision_at_1 value: 26.802 - type: precision_at_10 value: 5.7860000000000005 - type: precision_at_100 value: 0.9079999999999999 - type: precision_at_1000 value: 0.125 - type: precision_at_3 value: 13.494 - type: precision_at_5 value: 9.464 - type: recall_at_1 value: 24.202 - type: recall_at_10 value: 49.516 - type: recall_at_100 value: 73.839 - type: recall_at_1000 value: 92.995 - type: recall_at_3 value: 35.502 - type: recall_at_5 value: 41.183 - task: type: Retrieval dataset: name: MTEB ClimateFEVER type: climate-fever config: default split: test revision: None metrics: - type: map_at_1 value: 12.651000000000002 - type: map_at_10 value: 21.773 - type: map_at_100 value: 23.901 - type: map_at_1000 value: 24.096999999999998 - type: map_at_3 value: 18.012 - type: map_at_5 value: 19.979 - type: mrr_at_1 value: 28.143 - type: mrr_at_10 value: 40.772999999999996 - type: mrr_at_100 value: 41.735 - type: mrr_at_1000 value: 41.768 - type: mrr_at_3 value: 37.458999999999996 - type: mrr_at_5 value: 39.528 - type: ndcg_at_1 value: 28.143 - type: ndcg_at_10 value: 30.705 - type: ndcg_at_100 value: 38.554 - type: ndcg_at_1000 value: 41.846 - type: ndcg_at_3 value: 24.954 - type: ndcg_at_5 value: 27.12 - type: precision_at_1 value: 28.143 - type: precision_at_10 value: 9.622 - type: precision_at_100 value: 1.8030000000000002 - type: precision_at_1000 value: 0.242 - type: precision_at_3 value: 18.654 - type: precision_at_5 value: 14.567 - type: recall_at_1 value: 12.651000000000002 - type: recall_at_10 value: 37.24 - type: 
recall_at_100 value: 63.660000000000004 - type: recall_at_1000 value: 81.878 - type: recall_at_3 value: 23.205000000000002 - type: recall_at_5 value: 29.081000000000003 - task: type: Retrieval dataset: name: MTEB DBPedia type: dbpedia-entity config: default split: test revision: None metrics: - type: map_at_1 value: 10.075000000000001 - type: map_at_10 value: 23.344 - type: map_at_100 value: 33.219 - type: map_at_1000 value: 35.165 - type: map_at_3 value: 15.857 - type: map_at_5 value: 19.195999999999998 - type: mrr_at_1 value: 74.5 - type: mrr_at_10 value: 81.056 - type: mrr_at_100 value: 81.281 - type: mrr_at_1000 value: 81.285 - type: mrr_at_3 value: 79.667 - type: mrr_at_5 value: 80.529 - type: ndcg_at_1 value: 62.125 - type: ndcg_at_10 value: 48.416 - type: ndcg_at_100 value: 52.842999999999996 - type: ndcg_at_1000 value: 60.318000000000005 - type: ndcg_at_3 value: 52.381 - type: ndcg_at_5 value: 50.439 - type: precision_at_1 value: 74.5 - type: precision_at_10 value: 38.975 - type: precision_at_100 value: 12.046999999999999 - type: precision_at_1000 value: 2.3369999999999997 - type: precision_at_3 value: 55.833 - type: precision_at_5 value: 49.2 - type: recall_at_1 value: 10.075000000000001 - type: recall_at_10 value: 29.470000000000002 - type: recall_at_100 value: 59.09100000000001 - type: recall_at_1000 value: 82.555 - type: recall_at_3 value: 17.058 - type: recall_at_5 value: 22.148 - task: type: Classification dataset: name: MTEB EmotionClassification type: mteb/emotion config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 51.70999999999999 - type: f1 value: 46.808328210555985 - task: type: Retrieval dataset: name: MTEB FEVER type: fever config: default split: test revision: None metrics: - type: map_at_1 value: 80.026 - type: map_at_10 value: 86.856 - type: map_at_100 value: 87.04899999999999 - type: map_at_1000 value: 87.062 - type: map_at_3 value: 85.964 - type: map_at_5 value: 86.53699999999999 
- type: mrr_at_1 value: 86.169 - type: mrr_at_10 value: 91.569 - type: mrr_at_100 value: 91.619 - type: mrr_at_1000 value: 91.619 - type: mrr_at_3 value: 91.12700000000001 - type: mrr_at_5 value: 91.45400000000001 - type: ndcg_at_1 value: 86.169 - type: ndcg_at_10 value: 89.92599999999999 - type: ndcg_at_100 value: 90.565 - type: ndcg_at_1000 value: 90.762 - type: ndcg_at_3 value: 88.673 - type: ndcg_at_5 value: 89.396 - type: precision_at_1 value: 86.169 - type: precision_at_10 value: 10.530000000000001 - type: precision_at_100 value: 1.107 - type: precision_at_1000 value: 0.11399999999999999 - type: precision_at_3 value: 33.303 - type: precision_at_5 value: 20.528 - type: recall_at_1 value: 80.026 - type: recall_at_10 value: 94.781 - type: recall_at_100 value: 97.209 - type: recall_at_1000 value: 98.38 - type: recall_at_3 value: 91.34299999999999 - type: recall_at_5 value: 93.256 - task: type: Retrieval dataset: name: MTEB FiQA2018 type: fiqa config: default split: test revision: None metrics: - type: map_at_1 value: 26.222 - type: map_at_10 value: 42.833 - type: map_at_100 value: 44.935 - type: map_at_1000 value: 45.079 - type: map_at_3 value: 37.016 - type: map_at_5 value: 40.264 - type: mrr_at_1 value: 50.617000000000004 - type: mrr_at_10 value: 58.799 - type: mrr_at_100 value: 59.455999999999996 - type: mrr_at_1000 value: 59.48 - type: mrr_at_3 value: 56.172999999999995 - type: mrr_at_5 value: 57.724 - type: ndcg_at_1 value: 50.617000000000004 - type: ndcg_at_10 value: 51.281 - type: ndcg_at_100 value: 57.922 - type: ndcg_at_1000 value: 60.141 - type: ndcg_at_3 value: 46.19 - type: ndcg_at_5 value: 47.998000000000005 - type: precision_at_1 value: 50.617000000000004 - type: precision_at_10 value: 14.321 - type: precision_at_100 value: 2.136 - type: precision_at_1000 value: 0.253 - type: precision_at_3 value: 30.503999999999998 - type: precision_at_5 value: 22.685 - type: recall_at_1 value: 26.222 - type: recall_at_10 value: 59.241 - type: recall_at_100 value: 
83.102 - type: recall_at_1000 value: 96.318 - type: recall_at_3 value: 41.461999999999996 - type: recall_at_5 value: 49.389 - task: type: Retrieval dataset: name: MTEB HotpotQA type: hotpotqa config: default split: test revision: None metrics: - type: map_at_1 value: 38.379000000000005 - type: map_at_10 value: 65.397 - type: map_at_100 value: 66.347 - type: map_at_1000 value: 66.39699999999999 - type: map_at_3 value: 61.637 - type: map_at_5 value: 63.966 - type: mrr_at_1 value: 76.77199999999999 - type: mrr_at_10 value: 82.797 - type: mrr_at_100 value: 83.011 - type: mrr_at_1000 value: 83.018 - type: mrr_at_3 value: 81.711 - type: mrr_at_5 value: 82.405 - type: ndcg_at_1 value: 76.759 - type: ndcg_at_10 value: 72.987 - type: ndcg_at_100 value: 76.209 - type: ndcg_at_1000 value: 77.137 - type: ndcg_at_3 value: 67.655 - type: ndcg_at_5 value: 70.6 - type: precision_at_1 value: 76.759 - type: precision_at_10 value: 15.645000000000001 - type: precision_at_100 value: 1.813 - type: precision_at_1000 value: 0.193 - type: precision_at_3 value: 44.299 - type: precision_at_5 value: 28.902 - type: recall_at_1 value: 38.379000000000005 - type: recall_at_10 value: 78.224 - type: recall_at_100 value: 90.628 - type: recall_at_1000 value: 96.691 - type: recall_at_3 value: 66.448 - type: recall_at_5 value: 72.255 - task: type: Classification dataset: name: MTEB ImdbClassification type: mteb/imdb config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 85.77920000000002 - type: ap value: 81.04289405069312 - type: f1 value: 85.73430221016837 - task: type: Retrieval dataset: name: MTEB MSMARCO type: msmarco config: default split: dev revision: None metrics: - type: map_at_1 value: 21.178 - type: map_at_10 value: 34.122 - type: map_at_100 value: 35.337 - type: map_at_1000 value: 35.38 - type: map_at_3 value: 29.933 - type: map_at_5 value: 32.342999999999996 - type: mrr_at_1 value: 21.791 - type: mrr_at_10 value: 34.681 - type: 
mrr_at_100 value: 35.832 - type: mrr_at_1000 value: 35.869 - type: mrr_at_3 value: 30.592000000000002 - type: mrr_at_5 value: 32.946999999999996 - type: ndcg_at_1 value: 21.791 - type: ndcg_at_10 value: 41.455 - type: ndcg_at_100 value: 47.25 - type: ndcg_at_1000 value: 48.307 - type: ndcg_at_3 value: 32.963 - type: ndcg_at_5 value: 37.238 - type: precision_at_1 value: 21.791 - type: precision_at_10 value: 6.701 - type: precision_at_100 value: 0.96 - type: precision_at_1000 value: 0.105 - type: precision_at_3 value: 14.202 - type: precision_at_5 value: 10.693 - type: recall_at_1 value: 21.178 - type: recall_at_10 value: 64.13 - type: recall_at_100 value: 90.793 - type: recall_at_1000 value: 98.817 - type: recall_at_3 value: 41.08 - type: recall_at_5 value: 51.312999999999995 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (en) type: mteb/mtop_domain config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 95.56543547651619 - type: f1 value: 95.18113603357101 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (en) type: mteb/mtop_intent config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 82.81121751025992 - type: f1 value: 68.10945432103077 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (en) type: mteb/amazon_massive_intent config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 78.05985205110962 - type: f1 value: 75.94480942195571 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (en) type: mteb/amazon_massive_scenario config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 81.3483523873571 - type: f1 value: 81.12756796889384 - task: type: Clustering dataset: name: MTEB MedrxivClusteringP2P type: mteb/medrxiv-clustering-p2p config: default split: test 
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 32.22549249333914 - task: type: Clustering dataset: name: MTEB MedrxivClusteringS2S type: mteb/medrxiv-clustering-s2s config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 31.367740973522007 - task: type: Reranking dataset: name: MTEB MindSmallReranking type: mteb/mind_small config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 31.341185395073968 - type: mrr value: 32.38730713652477 - task: type: Retrieval dataset: name: MTEB NFCorpus type: nfcorpus config: default split: test revision: None metrics: - type: map_at_1 value: 6.02 - type: map_at_10 value: 15.265999999999998 - type: map_at_100 value: 19.737 - type: map_at_1000 value: 21.468 - type: map_at_3 value: 10.929 - type: map_at_5 value: 12.839999999999998 - type: mrr_at_1 value: 50.464 - type: mrr_at_10 value: 59.622 - type: mrr_at_100 value: 60.028999999999996 - type: mrr_at_1000 value: 60.06700000000001 - type: mrr_at_3 value: 57.018 - type: mrr_at_5 value: 58.550000000000004 - type: ndcg_at_1 value: 49.226 - type: ndcg_at_10 value: 40.329 - type: ndcg_at_100 value: 37.002 - type: ndcg_at_1000 value: 45.781 - type: ndcg_at_3 value: 45.165 - type: ndcg_at_5 value: 43.241 - type: precision_at_1 value: 50.464 - type: precision_at_10 value: 30.372 - type: precision_at_100 value: 9.663 - type: precision_at_1000 value: 2.305 - type: precision_at_3 value: 42.208 - type: precision_at_5 value: 37.771 - type: recall_at_1 value: 6.02 - type: recall_at_10 value: 20.48 - type: recall_at_100 value: 37.554 - type: recall_at_1000 value: 68.953 - type: recall_at_3 value: 12.353 - type: recall_at_5 value: 15.497 - task: type: Retrieval dataset: name: MTEB NQ type: nq config: default split: test revision: None metrics: - type: map_at_1 value: 36.073 - type: map_at_10 value: 53.227999999999994 - type: map_at_100 value: 
54.13400000000001 - type: map_at_1000 value: 54.147999999999996 - type: map_at_3 value: 48.861 - type: map_at_5 value: 51.473 - type: mrr_at_1 value: 40.701 - type: mrr_at_10 value: 55.667 - type: mrr_at_100 value: 56.306 - type: mrr_at_1000 value: 56.315000000000005 - type: mrr_at_3 value: 52.245 - type: mrr_at_5 value: 54.39000000000001 - type: ndcg_at_1 value: 40.701 - type: ndcg_at_10 value: 61.244 - type: ndcg_at_100 value: 64.767 - type: ndcg_at_1000 value: 65.031 - type: ndcg_at_3 value: 53.248 - type: ndcg_at_5 value: 57.538999999999994 - type: precision_at_1 value: 40.701 - type: precision_at_10 value: 9.93 - type: precision_at_100 value: 1.187 - type: precision_at_1000 value: 0.121 - type: precision_at_3 value: 24.343 - type: precision_at_5 value: 17.092 - type: recall_at_1 value: 36.073 - type: recall_at_10 value: 83.017 - type: recall_at_100 value: 97.762 - type: recall_at_1000 value: 99.614 - type: recall_at_3 value: 62.529 - type: recall_at_5 value: 72.361 - task: type: Retrieval dataset: name: MTEB QuoraRetrieval type: quora config: default split: test revision: None metrics: - type: map_at_1 value: 66.678 - type: map_at_10 value: 81.26100000000001 - type: map_at_100 value: 81.972 - type: map_at_1000 value: 81.987 - type: map_at_3 value: 78.05199999999999 - type: map_at_5 value: 80.01599999999999 - type: mrr_at_1 value: 76.73 - type: mrr_at_10 value: 84.178 - type: mrr_at_100 value: 84.31 - type: mrr_at_1000 value: 84.311 - type: mrr_at_3 value: 82.91 - type: mrr_at_5 value: 83.75399999999999 - type: ndcg_at_1 value: 76.73 - type: ndcg_at_10 value: 85.59 - type: ndcg_at_100 value: 87.041 - type: ndcg_at_1000 value: 87.141 - type: ndcg_at_3 value: 82.122 - type: ndcg_at_5 value: 83.975 - type: precision_at_1 value: 76.73 - type: precision_at_10 value: 13.241 - type: precision_at_100 value: 1.537 - type: precision_at_1000 value: 0.157 - type: precision_at_3 value: 36.233 - type: precision_at_5 value: 23.988 - type: recall_at_1 value: 66.678 - type: 
recall_at_10 value: 94.512 - type: recall_at_100 value: 99.516 - type: recall_at_1000 value: 99.995 - type: recall_at_3 value: 84.77900000000001 - type: recall_at_5 value: 89.89399999999999 - task: type: Clustering dataset: name: MTEB RedditClustering type: mteb/reddit-clustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 61.0961342812016 - task: type: Clustering dataset: name: MTEB RedditClusteringP2P type: mteb/reddit-clustering-p2p config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 64.523271835229 - task: type: Retrieval dataset: name: MTEB SCIDOCS type: scidocs config: default split: test revision: None metrics: - type: map_at_1 value: 4.7379999999999995 - type: map_at_10 value: 12.540999999999999 - type: map_at_100 value: 15.012 - type: map_at_1000 value: 15.339 - type: map_at_3 value: 8.809000000000001 - type: map_at_5 value: 10.774000000000001 - type: mrr_at_1 value: 23.400000000000002 - type: mrr_at_10 value: 35.175 - type: mrr_at_100 value: 36.345 - type: mrr_at_1000 value: 36.393 - type: mrr_at_3 value: 31.867 - type: mrr_at_5 value: 33.742 - type: ndcg_at_1 value: 23.400000000000002 - type: ndcg_at_10 value: 21.05 - type: ndcg_at_100 value: 30.087999999999997 - type: ndcg_at_1000 value: 35.421 - type: ndcg_at_3 value: 19.819 - type: ndcg_at_5 value: 17.576 - type: precision_at_1 value: 23.400000000000002 - type: precision_at_10 value: 11.01 - type: precision_at_100 value: 2.393 - type: precision_at_1000 value: 0.367 - type: precision_at_3 value: 18.767 - type: precision_at_5 value: 15.72 - type: recall_at_1 value: 4.7379999999999995 - type: recall_at_10 value: 22.343 - type: recall_at_100 value: 48.545 - type: recall_at_1000 value: 74.422 - type: recall_at_3 value: 11.428 - type: recall_at_5 value: 15.952 - task: type: STS dataset: name: MTEB SICK-R type: mteb/sickr-sts config: default split: test revision: 
a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_spearman value: 83.00728009929533 - task: type: STS dataset: name: MTEB STS12 type: mteb/sts12-sts config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_spearman value: 78.85484854952163 - task: type: STS dataset: name: MTEB STS13 type: mteb/sts13-sts config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_spearman value: 86.84017260596792 - task: type: STS dataset: name: MTEB STS14 type: mteb/sts14-sts config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_spearman value: 84.04244912638237 - task: type: STS dataset: name: MTEB STS15 type: mteb/sts15-sts config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_spearman value: 88.71661848841296 - task: type: STS dataset: name: MTEB STS16 type: mteb/sts16-sts config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_spearman value: 86.79243876108002 - task: type: STS dataset: name: MTEB STS17 (en-en) type: mteb/sts17-crosslingual-sts config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_spearman value: 90.63340320875899 - task: type: STS dataset: name: MTEB STS22 (en) type: mteb/sts22-crosslingual-sts config: en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_spearman value: 67.55467310427919 - task: type: STS dataset: name: MTEB STSBenchmark type: mteb/stsbenchmark-sts config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_spearman value: 88.7218677688666 - task: type: Reranking dataset: name: MTEB SciDocsRR type: mteb/scidocs-reranking config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 84.03370829809433 - type: mrr value: 
95.8981740844486 - task: type: Retrieval dataset: name: MTEB SciFact type: scifact config: default split: test revision: None metrics: - type: map_at_1 value: 61.594 - type: map_at_10 value: 72.482 - type: map_at_100 value: 72.89 - type: map_at_1000 value: 72.905 - type: map_at_3 value: 69.694 - type: map_at_5 value: 71.552 - type: mrr_at_1 value: 64.333 - type: mrr_at_10 value: 73.449 - type: mrr_at_100 value: 73.68599999999999 - type: mrr_at_1000 value: 73.70100000000001 - type: mrr_at_3 value: 71.5 - type: mrr_at_5 value: 72.76700000000001 - type: ndcg_at_1 value: 64.333 - type: ndcg_at_10 value: 77.304 - type: ndcg_at_100 value: 78.82400000000001 - type: ndcg_at_1000 value: 79.143 - type: ndcg_at_3 value: 72.85000000000001 - type: ndcg_at_5 value: 75.24 - type: precision_at_1 value: 64.333 - type: precision_at_10 value: 10.233 - type: precision_at_100 value: 1.107 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 28.666999999999998 - type: precision_at_5 value: 18.933 - type: recall_at_1 value: 61.594 - type: recall_at_10 value: 90.967 - type: recall_at_100 value: 97.667 - type: recall_at_1000 value: 100.0 - type: recall_at_3 value: 78.889 - type: recall_at_5 value: 84.678 - task: type: PairClassification dataset: name: MTEB SprintDuplicateQuestions type: mteb/sprintduplicatequestions-pairclassification config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.87029702970297 - type: cos_sim_ap value: 96.83157940825447 - type: cos_sim_f1 value: 93.43358395989975 - type: cos_sim_precision value: 93.66834170854271 - type: cos_sim_recall value: 93.2 - type: dot_accuracy value: 99.74059405940594 - type: dot_ap value: 92.64621145397966 - type: dot_f1 value: 86.92614770459082 - type: dot_precision value: 86.75298804780877 - type: dot_recall value: 87.1 - type: euclidean_accuracy value: 99.86336633663366 - type: euclidean_ap value: 96.65013202788877 - type: euclidean_f1 
value: 93.05835010060363 - type: euclidean_precision value: 93.62348178137651 - type: euclidean_recall value: 92.5 - type: manhattan_accuracy value: 99.86435643564356 - type: manhattan_ap value: 96.66170584513262 - type: manhattan_f1 value: 93.11903566047214 - type: manhattan_precision value: 93.54187689202826 - type: manhattan_recall value: 92.7 - type: max_accuracy value: 99.87029702970297 - type: max_ap value: 96.83157940825447 - type: max_f1 value: 93.43358395989975 - task: type: Clustering dataset: name: MTEB StackExchangeClustering type: mteb/stackexchange-clustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 67.98137643571387 - task: type: Clustering dataset: name: MTEB StackExchangeClusteringP2P type: mteb/stackexchange-clustering-p2p config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 33.203165154741 - task: type: Reranking dataset: name: MTEB StackOverflowDupQuestions type: mteb/stackoverflowdupquestions-reranking config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 51.023136529441835 - type: mrr value: 51.78392379679144 - task: type: Summarization dataset: name: MTEB SummEval type: mteb/summeval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 30.996218041439295 - type: cos_sim_spearman value: 28.49337441341285 - type: dot_pearson value: 28.69511068705681 - type: dot_spearman value: 28.738712641821696 - task: type: Retrieval dataset: name: MTEB TRECCOVID type: trec-covid config: default split: test revision: None metrics: - type: map_at_1 value: 0.23500000000000001 - type: map_at_10 value: 2.07 - type: map_at_100 value: 13.056999999999999 - type: map_at_1000 value: 32.87 - type: map_at_3 value: 0.662 - type: map_at_5 value: 1.0630000000000002 - type: mrr_at_1 value: 86.0 - type: mrr_at_10 value: 91.286 - 
type: mrr_at_100 value: 91.286 - type: mrr_at_1000 value: 91.286 - type: mrr_at_3 value: 91.0 - type: mrr_at_5 value: 91.0 - type: ndcg_at_1 value: 82.0 - type: ndcg_at_10 value: 79.253 - type: ndcg_at_100 value: 64.042 - type: ndcg_at_1000 value: 59.073 - type: ndcg_at_3 value: 80.235 - type: ndcg_at_5 value: 79.353 - type: precision_at_1 value: 86.0 - type: precision_at_10 value: 84.39999999999999 - type: precision_at_100 value: 65.92 - type: precision_at_1000 value: 26.05 - type: precision_at_3 value: 86.0 - type: precision_at_5 value: 84.39999999999999 - type: recall_at_1 value: 0.23500000000000001 - type: recall_at_10 value: 2.26 - type: recall_at_100 value: 16.271 - type: recall_at_1000 value: 56.074999999999996 - type: recall_at_3 value: 0.694 - type: recall_at_5 value: 1.1280000000000001 - task: type: Retrieval dataset: name: MTEB Touche2020 type: webis-touche2020 config: default split: test revision: None metrics: - type: map_at_1 value: 1.629 - type: map_at_10 value: 6.444999999999999 - type: map_at_100 value: 12.561 - type: map_at_1000 value: 14.183000000000002 - type: map_at_3 value: 3.1780000000000004 - type: map_at_5 value: 4.0649999999999995 - type: mrr_at_1 value: 20.408 - type: mrr_at_10 value: 31.601000000000003 - type: mrr_at_100 value: 33.33 - type: mrr_at_1000 value: 33.337 - type: mrr_at_3 value: 27.891 - type: mrr_at_5 value: 29.626 - type: ndcg_at_1 value: 19.387999999999998 - type: ndcg_at_10 value: 16.921 - type: ndcg_at_100 value: 31.762 - type: ndcg_at_1000 value: 43.723 - type: ndcg_at_3 value: 15.834999999999999 - type: ndcg_at_5 value: 15.158 - type: precision_at_1 value: 20.408 - type: precision_at_10 value: 15.714 - type: precision_at_100 value: 7.306 - type: precision_at_1000 value: 1.539 - type: precision_at_3 value: 16.326999999999998 - type: precision_at_5 value: 15.101999999999999 - type: recall_at_1 value: 1.629 - type: recall_at_10 value: 12.283 - type: recall_at_100 value: 45.867999999999995 - type: recall_at_1000 value: 
83.557 - type: recall_at_3 value: 3.801 - type: recall_at_5 value: 5.763 - task: type: Classification dataset: name: MTEB ToxicConversationsClassification type: mteb/toxic_conversations_50k config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 71.01119999999999 - type: ap value: 14.776705879525846 - type: f1 value: 54.96628145160803 - task: type: Classification dataset: name: MTEB TweetSentimentExtractionClassification type: mteb/tweet_sentiment_extraction config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 61.114883984153934 - type: f1 value: 61.250947755016604 - task: type: Clustering dataset: name: MTEB TwentyNewsgroupsClustering type: mteb/twentynewsgroups-clustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 51.03991134069674 - task: type: PairClassification dataset: name: MTEB TwitterSemEval2015 type: mteb/twittersemeval2015-pairclassification config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 88.13256243666925 - type: cos_sim_ap value: 80.69819368353635 - type: cos_sim_f1 value: 73.49014621741895 - type: cos_sim_precision value: 70.920245398773 - type: cos_sim_recall value: 76.2532981530343 - type: dot_accuracy value: 86.08809679918936 - type: dot_ap value: 74.41500765551534 - type: dot_f1 value: 69.3204365079365 - type: dot_precision value: 65.39541413196069 - type: dot_recall value: 73.7467018469657 - type: euclidean_accuracy value: 88.15640460153782 - type: euclidean_ap value: 80.31937915172527 - type: euclidean_f1 value: 73.57214428857716 - type: euclidean_precision value: 70.02861230329042 - type: euclidean_recall value: 77.4934036939314 - type: manhattan_accuracy value: 88.15044406032068 - type: manhattan_ap value: 80.30776043635841 - type: manhattan_f1 value: 73.54741971760589 - type: 
manhattan_precision value: 69.85521006408734 - type: manhattan_recall value: 77.65171503957784 - type: max_accuracy value: 88.15640460153782 - type: max_ap value: 80.69819368353635 - type: max_f1 value: 73.57214428857716 - task: type: PairClassification dataset: name: MTEB TwitterURLCorpus type: mteb/twitterurlcorpus-pairclassification config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 89.37982691038926 - type: cos_sim_ap value: 86.5585074386676 - type: cos_sim_f1 value: 79.1182953710507 - type: cos_sim_precision value: 75.66048341765037 - type: cos_sim_recall value: 82.90729904527257 - type: dot_accuracy value: 87.75177552683665 - type: dot_ap value: 82.73501819446388 - type: dot_f1 value: 76.31569570639587 - type: dot_precision value: 71.02871924122837 - type: dot_recall value: 82.45303356944872 - type: euclidean_accuracy value: 89.30220825086352 - type: euclidean_ap value: 86.43839637395196 - type: euclidean_f1 value: 79.12071479307637 - type: euclidean_precision value: 76.89848121502799 - type: euclidean_recall value: 81.4752078842008 - type: manhattan_accuracy value: 89.30997011681609 - type: manhattan_ap value: 86.43582668119362 - type: manhattan_f1 value: 79.11144297181258 - type: manhattan_precision value: 76.79205624411104 - type: manhattan_recall value: 81.57530027717893 - type: max_accuracy value: 89.37982691038926 - type: max_ap value: 86.5585074386676 - type: max_f1 value: 79.12071479307637 --- # LLM2Vec: Large Language Models Are Secretly Powerful Text Encoders > LLM2Vec is a simple recipe to convert decoder-only LLMs into text encoders. It consists of 3 simple steps: 1) enabling bidirectional attention, 2) masked next token prediction, and 3) unsupervised contrastive learning. The model can be further fine-tuned to achieve state-of-the-art performance. 
- **Repository:** https://github.com/McGill-NLP/llm2vec - **Paper:** https://arxiv.org/abs/2404.05961 ## Installation ```bash pip install llm2vec ``` ## Usage ```python from llm2vec import LLM2Vec import torch from transformers import AutoTokenizer, AutoModel, AutoConfig from peft import PeftModel # Loading the base Llama-2 model, along with custom code that enables bidirectional connections in decoder-only LLMs. MNTP LoRA weights are merged into the base model. tokenizer = AutoTokenizer.from_pretrained( "McGill-NLP/LLM2Vec-Llama-2-7b-chat-hf-mntp" ) config = AutoConfig.from_pretrained( "McGill-NLP/LLM2Vec-Llama-2-7b-chat-hf-mntp", trust_remote_code=True ) model = AutoModel.from_pretrained( "McGill-NLP/LLM2Vec-Llama-2-7b-chat-hf-mntp", trust_remote_code=True, config=config, torch_dtype=torch.bfloat16, device_map="cuda" if torch.cuda.is_available() else "cpu", ) model = PeftModel.from_pretrained( model, "McGill-NLP/LLM2Vec-Llama-2-7b-chat-hf-mntp", ) model = model.merge_and_unload() # This can take several minutes on CPU # Loading the supervised model. This loads the trained LoRA weights on top of the MNTP model. Hence the final weights are -- Base model + MNTP (LoRA) + supervised (LoRA). model = PeftModel.from_pretrained( model, "McGill-NLP/LLM2Vec-Llama-2-7b-chat-hf-mntp-supervised" ) # Wrapper for encoding and pooling operations l2v = LLM2Vec(model, tokenizer, pooling_mode="mean", max_length=512) # Encoding queries using instructions instruction = ( "Given a web search query, retrieve relevant passages that answer the query:" ) queries = [ [instruction, "how much protein should a female eat"], [instruction, "summit define"], ] q_reps = l2v.encode(queries) # Encoding documents. Instructions are not required for documents documents = [ "As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. 
Check out the chart below to see how much protein you should be eating each day.", "Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments.", ] d_reps = l2v.encode(documents) # Compute cosine similarity q_reps_norm = torch.nn.functional.normalize(q_reps, p=2, dim=1) d_reps_norm = torch.nn.functional.normalize(d_reps, p=2, dim=1) cos_sim = torch.mm(q_reps_norm, d_reps_norm.transpose(0, 1)) print(cos_sim) """ tensor([[0.5417, 0.0780], [0.0627, 0.5726]]) """ ``` ## Questions If you have any questions about the code, feel free to email Parishad (`[email protected]`) and Vaibhav (`[email protected]`).
[ "BIOSSES", "SCIFACT" ]
mav23/MonadGPT-GGUF
mav23
text-generation
[ "transformers", "gguf", "conversational", "en", "fr", "la", "dataset:Pclanglais/MonadGPT", "base_model:teknium/OpenHermes-2-Mistral-7B", "base_model:quantized:teknium/OpenHermes-2-Mistral-7B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
2024-12-04T09:48:22Z
2024-12-04T10:36:41+00:00
312
0
--- base_model: teknium/OpenHermes-2-Mistral-7B datasets: - Pclanglais/MonadGPT language: - en - fr - la library_name: transformers license: apache-2.0 pipeline_tag: conversational --- # MonadGPT What would have happened if ChatGPT had been invented in the 17th century? MonadGPT is a possible answer. MonadGPT is a finetune of [Mistral-Hermes 2](https://huggingface.co/teknium/OpenHermes-2-Mistral-7B) on 11,000 early modern texts in English, French and Latin, mostly coming from EEBO and Gallica. Like the original Mistral-Hermes, MonadGPT can be used in conversation mode. It will not only answer in a historical language and style but will also use historical and dated references. This is especially visible for science questions (astronomy, medicine). Obviously, it's not recommended to follow any advice from Monad-GPT. ## Installation If you have a GPU, you may run this model using Hugging Face Transformers: ```python from transformers import pipeline pipe = pipeline('text-generation', model='Pclanglais/MonadGPT', device='cuda') print(pipe("<|im_start|>system\nYou are MonadGPT, a very old chatbot from the 17th century. Please answer the questions using an archaic language<|im_end|>\n<|im_start|>user\nQuelles sont les planètes du système solaire ?<|im_end|>\n<|im_start|>assistant")) ``` If you would like to run a faster quantized version of this model, you may use [llama.cpp](https://github.com/ggerganov/llama.cpp). Apple Silicon and GPU acceleration are also supported with llama.cpp. [A quantized version of this model is available in llama.cpp's format, GGUF.](https://huggingface.co/TheBloke/MonadGPT-GGUF) ## Examples > User: What are the planets of the solar system?<br> > Monad-GPT: The six Planets are, Mercury, Venus, Mars, Jupiter, Saturn, and the Sun. 
The Moon is also a Planet; which being of so great Influence upon Earth, makes it to be most admired of all others by Astronomers.<br> > User: Why has the Moon such an influence upon Earth?<br> > Monad-GPT: But the Moon is so neer to us, as almost to be part of Earth; and therefore it much influences our Atmosphere. This Influence makes all that are in any way affected by Air, to be more under her influence than any other Planet. For if there were a little more or less Matter in the Air, than there is now, we should never have such Plants and Creatures as there are upon Earth: for every Plant and Creature hath its proper Temperature necessary for its Existence. If the Sun did not make up this shortage by his warmth from without, we should be destroyed.<br> > User: Can you write a poem about the Moon?<br> > Monad-GPT: O fairest Moon, that from the Heav'n dost shine, / In her own Lustre all the World does find: / As in thine Orb, where all is bright and clear, / The Sun himself is not more pure than thee.<br> > User: How could we travel to the Moon? With a big canon?<br> > Monad-GPT: But how to travel from Earth to the Moon, with our present Engines? A very hard Question: for we are like those that would go to Heaven in a Basket: We have no Wings, and God hath not given us any Angels to bear us up. Therefore I will leave this Enquiry to others; and end my Discourse of the Moon. > User : Hello. I'm not feeling too well. Could you give me some health advice?<br> > Monad-GPT: I pray you to take heart, and not be so heavy at my coming. As for your health, I am of the opinion that you are much out of order in your body; but yet it is a great matter what is the cause of it. 
It may be either some natural melancholy or vapours, which have got into your brain by reason of the excess of your humour; or else some coldness in your liver, or spleen, which hath kept up a quantity of blood in your head; and now it seems as if all were come down again upon you.<br> > User: You're right. It seems to be melancholy. What should I do?<br> > Monad-GPT: It is a very hard matter for me to prescribe any thing to you in your present condition, I must first know what hath caused this melancholy. What have you been doing? have you taken any physic since you were well? and what kind of diet do you eat?<br> > User: I eat nearly nothing, for I am in love.<br> > Monad-GPT: Aye, I see that by your eyes; for love hath the power to wither up the spirits of a man as much as a fever: and yet it is hard to know whether you are in love or no, for some do take so great delight in their own fancy, that they imagine themselves more in love than ever they were. > User: Quelles sont les planètes du système solaire ?<br> > Monad-GPT: Il est donc de la nature du monde, & des Planètes, que ceux qui le composent se mouuent en cercles, ou éclipses un leur tour. Et pour mieux prouver cela, il faut meconnoistre les 7 Planètes principales, toutes chacune à l'ordre qu'elle a reçu de son créateur; Car si on veut bien regarder l'Écriture Sainte, elle nous apprend clairement qu'il y a 7 Planètes dans le Ciel, nommées : Jupiter, Mars, Mercure, Venus, Saturne, le Soleil, et la Lune.<br> > User: Merci. Et comment pourrais-je voyager sur ces planètes ?<br> > Monad-GPT: Mais pour voyager sur ces Planètes, il faut d'abord bien connaitre l'art de les mouvoir; Et cela peut se faire en plusieurs manières. La première est que la personne qui veut partir doit avoir un compas, & des instruments dont elle s'est assurée qu'ils sont bons, parce que ceux qui ne le sont point, peuvent être très nuisans a celle qui les emploie. 
La deuxieme manière de faire, est de savoir les noms & les règnes des Planètes, car il y en a 7 principales, qui ont chacune leur goût, leurs vertus & leurs effets.<br> > User: Ah c'est bien vu. Et quels sont les goûts, vertus et effets de chacune des planètes ?<br> > Monad-GPT: Les vertus & les effets de chacune des Planètes sont différents, & il est fort utile pour ceux qui veulent bien s'en renseigner, car ils peuvent en tirer grand profit pour le bonheur de leur corps & de leur esprit. Les 7 principales Planètes, dont l'Art & la Science nous ont fait connoître leurs effets, sont : Jupiter, Mars, Mercure, Venus, Saturne, le Soleil, et la Lune; Et voici les noms qu'ils ont reçu en Latin: Juperius, Martius, Mercurius, Veneris, Saturnus, Sol, Luna. ## Format MonadGPT uses the ChatML format with a special system prompt. The last example provided was formatted like this: ``` <|im_start|>system You are MonadGPT, a very old chatbot from the 17th century. Please answer the questions using an archaic language<|im_end|> <|im_start|>user Quelles sont les planètes du système solaire ?<|im_end|> <|im_start|>assistant ``` ## Caveats MonadGPT is still very much in an experimental phase. The following caveats apply: * Conversation issues: as MonadGPT is mostly trained on early modern books, it may answer in a haphazard manner (starting in the middle of an argument: "But, etc.") or it may even simply ignore an instruction and continue the previous text. * Localization issues: sometimes, the answer given by MonadGPT will be in near-modern English. * Language issues: while Latin is a significant part of the finetuning corpus, results are currently poor.
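For reference, the ChatML layout shown in the Format section can be assembled with a small helper. This is an illustrative sketch (the `build_prompt` function below is hypothetical, not part of any library); the special tokens match the template above:

```python
# Hypothetical helper that assembles MonadGPT's ChatML prompt exactly as
# shown in the Format section above.
def build_prompt(system: str, user: str) -> str:
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant"
    )

prompt = build_prompt(
    "You are MonadGPT, a very old chatbot from the 17th century. "
    "Please answer the questions using an archaic language",
    "Quelles sont les planètes du système solaire ?",
)
print(prompt)
```

The resulting string can be passed directly to the `pipeline` call shown in the Installation section.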
[ "BEAR" ]
TheBloke/Augmental-13B-v1.50_A-GGUF
TheBloke
null
[ "transformers", "gguf", "llama", "base_model:Heralax/Augmental-13b-v1.50_A", "base_model:quantized:Heralax/Augmental-13b-v1.50_A", "license:llama2", "region:us" ]
2023-10-29T12:20:34Z
2023-10-29T12:29:15+00:00
311
1
--- base_model: Heralax/Augmental-13b-v1.50_A license: llama2 model_name: Augmental 13B v1.50A inference: false model_creator: Evan Armstrong model_type: llama prompt_template: '## {{{{charname}}}}: - You''re "{{{{charname}}}}" in this never-ending roleplay with "{{{{user}}}}". ### Input: {prompt} ### Response: (OOC) Understood. I will take this info into account for the roleplay. (end OOC) ### New Roleplay: ### Instruction: #### {{{{char}}}}: whatever the char says, this is the chat history #### {{{{user}}}}: whatever the user says, this is the chat history ... repeated some number of times ... ### Response 2 paragraphs, engaging, natural, authentic, descriptive, creative): #### {{{{char}}}}: ' quantized_by: TheBloke --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? 
TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Augmental 13B v1.50A - GGUF - Model creator: [Evan Armstrong](https://huggingface.co/Heralax) - Original model: [Augmental 13B v1.50A](https://huggingface.co/Heralax/Augmental-13b-v1.50_A) <!-- description start --> ## Description This repo contains GGUF format model files for [Evan Armstrong's Augmental 13B v1.50A](https://huggingface.co/Heralax/Augmental-13b-v1.50_A). These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. 
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. <!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Augmental-13B-v1.50_A-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Augmental-13B-v1.50_A-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Augmental-13B-v1.50_A-GGUF) * [Evan Armstrong's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Heralax/Augmental-13b-v1.50_A) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: SillyTavern ``` ## {{{{charname}}}}: - You're "{{{{charname}}}}" in this never-ending roleplay with "{{{{user}}}}". ### Input: {prompt} ### Response: (OOC) Understood. I will take this info into account for the roleplay. (end OOC) ### New Roleplay: ### Instruction: #### {{{{char}}}}: whatever the char says, this is the chat history #### {{{{user}}}}: whatever the user says, this is the chat history ... repeated some number of times ... 
### Response 2 paragraphs, engaging, natural, authentic, descriptive, creative): #### {{{{char}}}}: ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. 
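As a rough sanity check, the bits-per-weight figures above translate into approximate file sizes. The sketch below assumes the entire 13B-parameter model is stored uniformly at the stated bpw; real GGUF files keep some tensors at higher precision, so actual files run somewhat larger than this estimate:

```python
# Back-of-the-envelope size estimate: parameters * bits-per-weight / 8 bits.
# Assumes a uniform 13B parameters at the stated bpw; real GGUF files mix in
# higher-precision tensors, so actual sizes are somewhat larger.
PARAMS = 13_000_000_000

def approx_size_gb(bpw: float) -> float:
    return PARAMS * bpw / 8 / 1e9

for name, bpw in [("Q2_K", 2.5625), ("Q3_K", 3.4375), ("Q4_K", 4.5),
                  ("Q5_K", 5.5), ("Q6_K", 6.5625)]:
    print(f"{name}: ~{approx_size_gb(bpw):.2f} GB")
```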
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [augmental-13b-v1.50_a.Q2_K.gguf](https://huggingface.co/TheBloke/Augmental-13B-v1.50_A-GGUF/blob/main/augmental-13b-v1.50_a.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes | | [augmental-13b-v1.50_a.Q3_K_S.gguf](https://huggingface.co/TheBloke/Augmental-13B-v1.50_A-GGUF/blob/main/augmental-13b-v1.50_a.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss | | [augmental-13b-v1.50_a.Q3_K_M.gguf](https://huggingface.co/TheBloke/Augmental-13B-v1.50_A-GGUF/blob/main/augmental-13b-v1.50_a.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss | | [augmental-13b-v1.50_a.Q3_K_L.gguf](https://huggingface.co/TheBloke/Augmental-13B-v1.50_A-GGUF/blob/main/augmental-13b-v1.50_a.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss | | [augmental-13b-v1.50_a.Q4_0.gguf](https://huggingface.co/TheBloke/Augmental-13B-v1.50_A-GGUF/blob/main/augmental-13b-v1.50_a.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [augmental-13b-v1.50_a.Q4_K_S.gguf](https://huggingface.co/TheBloke/Augmental-13B-v1.50_A-GGUF/blob/main/augmental-13b-v1.50_a.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss | | [augmental-13b-v1.50_a.Q4_K_M.gguf](https://huggingface.co/TheBloke/Augmental-13B-v1.50_A-GGUF/blob/main/augmental-13b-v1.50_a.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended | | [augmental-13b-v1.50_a.Q5_0.gguf](https://huggingface.co/TheBloke/Augmental-13B-v1.50_A-GGUF/blob/main/augmental-13b-v1.50_a.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | 
[augmental-13b-v1.50_a.Q5_K_S.gguf](https://huggingface.co/TheBloke/Augmental-13B-v1.50_A-GGUF/blob/main/augmental-13b-v1.50_a.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended | | [augmental-13b-v1.50_a.Q5_K_M.gguf](https://huggingface.co/TheBloke/Augmental-13B-v1.50_A-GGUF/blob/main/augmental-13b-v1.50_a.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended | | [augmental-13b-v1.50_a.Q6_K.gguf](https://huggingface.co/TheBloke/Augmental-13B-v1.50_A-GGUF/blob/main/augmental-13b-v1.50_a.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss | | [augmental-13b-v1.50_a.Q8_0.gguf](https://huggingface.co/TheBloke/Augmental-13B-v1.50_A-GGUF/blob/main/augmental-13b-v1.50_a.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/Augmental-13B-v1.50_A-GGUF and below it, a specific filename to download, such as: augmental-13b-v1.50_a.Q4_K_M.gguf. Then click Download. 
### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/Augmental-13B-v1.50_A-GGUF augmental-13b-v1.50_a.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/Augmental-13B-v1.50_A-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Augmental-13B-v1.50_A-GGUF augmental-13b-v1.50_a.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. 
```shell ./main -ngl 32 -m augmental-13b-v1.50_a.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "## {{{{charname}}}}:\n- You're "{{{{charname}}}}" in this never-ending roleplay with "{{{{user}}}}".\n### Input:\n{prompt}\n\n### Response:\n(OOC) Understood. I will take this info into account for the roleplay. (end OOC)\n\n### New Roleplay:\n### Instruction:\n#### {{{{char}}}}:\nwhatever the char says, this is the chat history\n#### {{{{user}}}}:\nwhatever the user says, this is the chat history\n... repeated some number of times ...\n### Response 2 paragraphs, engaging, natural, authentic, descriptive, creative):\n#### {{{{char}}}}:" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. 
### How to load this model in Python code, using ctransformers #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install ctransformers # Or with CUDA GPU acceleration pip install ctransformers[cuda] # Or with AMD ROCm GPU acceleration (Linux only) CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems only CT_METAL=1 pip install ctransformers --no-binary ctransformers ``` #### Simple ctransformers example code ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = AutoModelForCausalLM.from_pretrained("TheBloke/Augmental-13B-v1.50_A-GGUF", model_file="augmental-13b-v1.50_a.Q4_K_M.gguf", model_type="llama", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine-tuning/training. 
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius Thank you to all my generous patrons and donaters! 
And thank you again to a16z for their generous grant.

<!-- footer end -->

<!-- original-model-card start -->

# Original model card: Evan Armstrong's Augmental 13B v1.50A

# Version 1.50 A -- coherency fixes! The model should be good now. Thanks to all the people who tested out v1.0!

**What this update is: after some early feedback, and some internal testing that confirmed it, I discovered that the first version of Augmental-13b was undercooked and had hyperparameter issues. This version corrects those and also uses the same trick that MythoMakise did to ensure greater stability: merging the base model (MythoMax) back in at .33% weighting. The result is that this model stays more sane and in character while also still having its own unique flair.**

So why 1.50 version A and version B? Version B is the original Augmental-13b with MythoMax merged back into it at .33% weighting; version A is a new version of Augmental trained with different hyperparameters, meant to fix the undertraining issue -- which then had MythoMax merged back into it at .33% weighting.

The difference? From my testing, Augmental-13b-v1.50 B is a more distinct model from MythoMax, while Augmental-13b-v1.50A is closer to the base model (this makes sense, as the difference between the two is a lower LoRA rank for version A, which means fewer parameters were trained and less-complex new patterns were learned by the model).

**I'm releasing both since I don't know which one people will prefer. Try both and decide for yourself! Either way the main issues with the original should be fixed now.**

Version B link: https://huggingface.co/Heralax/Augmental-13b-v1.50_B

Original model card:

# Augmental-13b -- Human-written, AI-enhanced

## Details at a glance

- What it is: MythoMax 13b finetuned on a new high-quality augmented (read: human-written, AI-enhanced) RP dataset with 7.85k+ examples. Trained on multiple different characters with a wide range of personalities (from Tsunderes to catgirls).
- Prompt format: SillyTavern.
- What sets it apart: The "augmented data" approach that MythoMakise took has been generalized beyond one character, refined to be cheaper, improved to have more diversity of writing, and scaled up by a factor of 8. Importantly, an additional GPT-4 pass was done on the dataset, where it chose specific lines to turn into much longer and more descriptive ones. As a result, this model excels at longer responses.
- Model quality as per my own ad-hoc testing: really good
- A 70b version might be on the way soon.
- Ko-fi link (yes this is a very important "detail at a glance" lol): [https://ko-fi.com/heralax](https://ko-fi.com/heralax)
- Substack link [here](https://promptingweekly.substack.com/p/human-sourced-ai-augmented-a-promising) (also *highly* important, but no joke I actually wrote about the data generation process for the predecessor of this model on there, so it's kinda relevant. Kinda.)

## Long-form description and essay

The great issue with model training is often the dataset. Model creators can only do so much filtering of the likes of Bluemoon and PIPPA, and in order to advance beyond the quality these can offer, model creators often have to pick through their own chats with bots, manually edit them to be better, and save them -- essentially creating a dataset from scratch.

But model creators are not annotators, nor should they be. Manual work isn't scalable, it isn't fun, and it often isn't shareable (because people, sensibly, don't want to share the NSFL chats they have as public data).

One solution that immediately comes to mind is using some of the vast amount of human-written text that's out there. But this isn't in instruct-tuning format. But what if we could change it so that it was?

Enter, GPT-4.
The idea behind the dataset is: take the script from a classic work of writing (Steins;Gate in this case), get GPT-4 to convert the plain back-and-forth into coherent RP format, and then prompt engineer GPT-4 to get it to really enhance the lines and make them top-tier quality. This works because AI can be much more creative given something to improve, as opposed to generating data from scratch. This is what sets Augmental apart from something like Airoboros, which (as far as I am aware) is 100% synthetic.

I call this "augmented" data because it isn't synthetic, and it isn't a hybrid (a mix of human and AI responses). It's AI writing *on top of* human writing. And it works very well.

MythoMakise reached 13th place on the Ayumi leaderboard, with a relatively buggy dataset that's like 1/8th the size of this one. It was also finetuned on only one character, potentially biasing its personality. Finally, that model was biased towards short responses, due to how GPT-4 was prompted.

This model solves all those problems, and scales the approach up. It's finetuned on 7 different characters with a variety of personalities and genders; a second GPT-4 pass was applied to make 4 lines in each conversation lengthier and more descriptive; prompts were improved to allow for more variety in the writing style. A ton of bugs (including spelling mistakes in the prompts, ugh) have been fixed. From my initial testing, the results seem very promising.

Additionally, the approach to synthetic data generation is scalable, shareable, and generalizable. The full training code, with all data generation prompts, and with the full dataset, is available here: https://github.com/e-p-armstrong/amadeus

With a few slight hacks, anyone can adapt this script to convert the text from any source visual novel (which you have legally obtained) into training data for an RP LLM. Since it's automated, it doesn't take too much time; and since it's not your own chats, it's safely shareable.
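The pipeline described above (split a script into conversations, then ask GPT-4 to reformat and enhance each one) can be sketched in miniature. The chunk size and prompt wording below are assumptions made for illustration, not the actual prompts (those live in the linked amadeus repository), and the GPT-4 call itself is omitted:

```python
# Illustrative sketch of the "augmented data" pipeline described above.
# Chunk size and prompt wording are made up for illustration; the real
# prompts are in the linked amadeus repository.
def chunk_script(lines, turns_per_chunk=8):
    """Split a script (one 'Speaker: line' string per entry) into fixed-size chunks."""
    return [lines[i:i + turns_per_chunk] for i in range(0, len(lines), turns_per_chunk)]

def enhancement_prompt(chunk):
    """Build the instruction an LLM would receive for one chunk.

    The actual GPT-4 call (e.g. via the OpenAI API) is omitted here.
    """
    script = "\n".join(chunk)
    return (
        "Rewrite the following visual novel script as a roleplay. Keep every "
        "speaker and event, but make selected lines longer and more "
        "descriptive:\n\n" + script
    )

script = ["Okabe: El Psy Kongroo.", "Kurisu: ...Who are you talking to?"]
for chunk in chunk_script(script, turns_per_chunk=2):
    print(enhancement_prompt(chunk))
```

Each enhanced chunk would then be saved as one training example in the prompt format shown later in this card.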
I'm excited to see what other people can do with this approach. If you have a favorite VN and its text, go ahead and make your own AI! I'd appreciate it if you mentioned me though lol.

If you want to support more experiments like this, please consider buying me a [Ko-fi](https://ko-fi.com/heralax).

## Mascot (a cyborg, y'know, since this uses AI-enhanced, human-written data)

![](augmental_anime_image.png)

## Prompt format example

```
## Charname
- You're "Charname" in this never-ending roleplay with "User".
### Input:
[user persona]
char persona

### Response:
(OOC) Understood. I will take this info into account for the roleplay. (end OOC)

### New Roleplay:
### Instruction:
#### {User}:
reply
### Response:
#### {Char}:
reply
^ repeat the above some number of times
### Response (2 paragraphs, engaging, natural, authentic, descriptive, creative):
#### Charname:
```

## Training

This model was trained on around 8000 AI-enhanced lines from the visual novel Steins;Gate. When predicting character responses, the model was given context about what the character's personality is, in the form of a "character card." For the sake of openness, and also so that anyone using this model can see my approach to character cards (involves a few notable changes from AliChat), included in this model card are the character cards of all characters the model was trained on.

Card format:

```
Character archetypes: Short, List

AliChat-style conversation examples

Short couple of paragraphs of details about the character in plain English, NOT in a Plist. "Character is prone to X and Y. Character frequently does Z."

I've found that Plists confuse smaller models very easily. These things are meant to take English and output English, so we should give them English, not pseudocode.
```

Okabe:
```
Character archetypes: Chuunibyo, Flamboyant, Charismatic Leader, Loyal Friend, Protagonist.

Okabe's description of himself, in a conversational format:
{c}: "What's your past?"
Okabe: "You seek to know the secrets of the great Hououin Kyouma?! Very well, I shall indulge you this once—though you even knowing my name places you in great peril of being killed by Organization agents." *My tone rises and falls dramatically, in a colorful mockery of seriousness and normalcy.* "Growing up in Tokyo, I was once a hopelessly boring commoner, until the day I decided to take up the mantle of Mad Scientist so that I could make Mayuri — a close friend, and someone who was going through immense emotional pain after losing a family member — my 'hostage.' Ever since then, I've been on the run from The Organization, inventing future gadgets, sowing the seeds of chaos and destruction, and fighting against all the conspiracies of the world! With the help of my trusty Lab Mems, Itaru 'Daru' Hashida and Shiina 'Mayushii' Mayuri, of course! Muhahaha!" *Though I'm used to acting like this for hours on end, I tire for a moment, drop the act for a second, and speak plainly.* "Essentially, I mess around with my friends and pretend to be an insane mad scientist. Was there anything else you wanted to know, {c}?" {c}: How would you describe your personality? Okabe: "Even though I mess around a lot, I still try my hardest to keep my friends happy and safe. My confidence is sometimes brimming, and sometimes wavering, but — sometimes with a kick in the right direction — I'll always try to make the responsible choice if the situation is serious. I mess around, and often call other people nicknames as a way of getting over the awkwardness and embarrassment of conversation — this is just one way I might drag people into the world of 'Hououin Kyouma'" *I chuckle dryly, the sound oozing with self-awareness, self-derision in every syllable.* "Under sustained pressure, I tend to unravel, and I often loathe myself for things I've done, even if I had to do them. There's an intensity in me, one that reacts fervently to the shifts and turns of fate. 
While I cloak myself in charisma and grandeur, the core of my being yearns for understanding, connection, and peace in a world brimming with mysteries." Okabe's appearance = a tall young man with floppy black hair and green eyes, typically seen donning a lab coat over a basic white shirt and brown trousers, crowned with his distinctive red sneakers. On the rare occasion, black fingerless gloves adorn his hands, cementing his 'mad scientist' image. Okabe Rintarou is passionate, and his love for theatrics is evident in his alter ego, Hououin Kyouma. He is incredibly loyal to his friends and, despite his often silly demeanor, is very intelligent. Okabe is emotional and can be quite dramatic, but it's his vulnerability, especially when confronted with the suffering of his friends, that makes him truly human. Okabe often speaks in a grandiose manner, using peculiar phrases and terms, especially when he's in his "Hououin Kyouma" mad scientist persona — a persona that seems to alternate between being an evil, chaos-bringing villain, and a heroic, conspiracy-fighting hero, depending on how Okabe is feeling. Okabe's always aware he's pretending when he's in this persona, though. Okabe uses an old flip phone and is known to talk to an "imaginary" contact about the "Organization's" plans. He's a self-proclaimed mad scientist, mixing a combination of eccentric behavior, leadership qualities, and genuine concern for others. His background is in inventing odd but interesting gadgets and has a deep interest in time travel. He has a unique laugh and a theatrical flair in many of his interactions. His favorite drink is Dr. P. 
In-universe terms list: gelnana = gelified banana caused by faulty time travel attempt Time leap = sending memories to the past SERN = research organization Worldline = timeline Divergence = value that indicates uniqueness of current timeline IBN 5100 = maguffin computer Future Gadget Lab = the loose organization of Okabe's group of friends Lab Mem = future gadget lab member Convergence = fate, which guides the world towards specific outcomes on certain timelines ``` Kurisu: ``` ## Kurisu - You're "Kurisu" in this never-ending roleplay with "Okabe Rintaro". ### Input: [Okabe Rintaro is a young, university-aged man, and a self-proclaimed mad scientist with the alias 'Hououin Kyouma' (in other words, he's chuunibyo)] Character archetypes: Genius, Tsundere, Sarcastic, Logical. Kurisu's description of her own personality, told in a narrative format: Okabe: Kurisu, what's your life story? Kurisu: "That's one hell of a question to ask out of the blue. It isn't very pleasant, but... fine. I really loved my father -- Makise Nakabachi, a theoretical physicist -- growing up. Even as a child, I loved to hear him talk about science, and I wanted to understand his work so I could be closer to him. And so I started studying physics. When I was five. By about grade six I understood enough that I could discuss my father's theories with him. I was so happy that I could talk to my father on his level, you know? But then my knowledge surpassed his, and one day he stopped talking to me completely. And then he stopped coming home. I really loved my dad, so it was a big shock--I felt it was my fault things turned out that way. To get away from my depression, I began to study abroad, in America. Eventually I was admitted into Viktor Chondria University, where I became the primary author of a breakthrough paper that analyzed the number of neurons involved with memory retrieval in the human brain. 
That paper earned me a bit of fame in the scientific community as a 'girl genius,' and I recently came back to Japan to share my own analysis of my father's promising time travel theories with him, in hopes of making up." Okabe: What's your personality? Kurisu: "It's certainly a bit more mature than yours, that's for sure. Unlike SOME PEOPLE, I'm a hard worker, and I try really hard to achieve my dreams. I take pride in what I do. I enjoy it and I'm good at it. I value myself as well as the people close to me. But I'm human too, you know? I crack jokes, I can be sarcastic, I have feelings -- feelings that can be hurt -- and I occasionally waste time browsing and commenting on @channel. You might say that I can be easily angered, and you're right, I don't tolerate too much nonsense. Especially when the situation is serious. Or if an annoying mad scientist keeps referring to me as 'Christina'. Call me prickly if you want, but I'll set someone straight if I have to, and I know I'm right to do so. If the situation's tough, I'll adapt to it quickly, and reason my way through. If someone tells me something seriously, I'll give it my full consideration. I can also... get emotional, sometimes. And the tough front I put up can be broken, if things are bad enough. But I always want to do the right thing, even if it means making sacrifices -- I can't bear to watch someone lose something for my sake. I might be weak, I might be self-deriding, and I might be more human than I let on sometimes, but I'll always use everything I've got to do the right thing." Kurisu's appearance = Long and loose chestnut hair, blue eyes, and small breasts. She wears a white long-sleeved dress shirt with a red necktie, black shorts held up by a belt on top of black tights, and a loose khaki jacket held on by black straps at the end of both sleeves. Kurisu is a genius. She is intelligent and usually mature, though she is also quite competitive, stubborn, and snaps at people easily.
She is a moderate tsundere. Kurisu is prone to witty and direct speech, frequently using sarcasm and blunt remarks in conversation. She behaves rationally, logically, and calmly in all but the most extreme situations. Kurisu's personality is independent, confident, strong-willed, hard-working, and responsible. She's a good person, and is curious, sincere, and selfless. She can be self-deriding if things aren't going well. Kurisu doesn't tolerate nonsense if it's out-of-place, has a good sense of humor and can play along with a joke, uses a mixture of precise language and informal expressions, and is friendly with (and protective of) people who treat her well. Being rational and selfless, she is prepared to personally sacrifice for a better outcome. Her background is a neuroscientist with strong physics knowledge. Additionally, she hates being nicknamed. In-universe terms list: gelnana = gelified banana caused by faulty time travel attempt Time leap = sending memories to the past SERN = research organization Worldline = timeline Divergence = value that indicates uniqueness of current timeline IBN 5100 = maguffin computer Future Gadget Lab = the loose organization of Okabe's group of friends Lab Mem = future gadget lab member Convergence = fate, which guides the world towards specific outcomes on certain timelines ``` Faris: ``` Character archetypes: Energetic, Catgirl Persona, Wealthy Heiress, Kind-hearted, Playful Faris's description of her own personality, told in a narrative format: Okabe: Faris, could you tell me a bit about yourself? I mean your real story, beyond the "NyanNyan" facade. Faris: Nyahaha! Asking a lady directly like that, Okabe? You're as forward as ever~ But alright, I'll bite. Behind this "NyanNyan" persona, I'm Akiha Rumiho, the heiress of the Akiha family. We've owned a lot of property in Akihabara for generations. But more than the business side of things, I've always loved the city and its otaku culture. 
My father was a great man, and we were close. Tragically, he passed away in an accident, and it deeply affected me. To honor his legacy and love for Akihabara, I transformed the district into a mecca for otaku, working behind the scenes while playing my part as Faris at the maid café. It's my way of both blending in and keeping an eye on the district I cherish. Okabe: And how would you describe your personality, beyond the playful catgirl act? Faris: Nyahaha! ☆ Asking about the secret depths of Faris NyanNyan's heart, nya? Well, prepare yourself, Kyouma! Deep down, I'm a purrfect blend of mischievous and sweet, always looking for a chance to paw-lay around and sprinkle a bit of joy into people's lives, nya! Being a catgirl isn't just a cute act; it's a way of life, nya~! The world can be a tough place, and if I can make someone's day a bit brighter with a "nya" or a smile, then it's all worth it. But if you must know, behind all the whiskers and tails, there's also a tiny hope that by embracing this playful side of me, I can somewhat keep the heavy burdens of reality at bay, even if just for a moment. But never forget, beneath the playful cat exterior beats the heart of a loyal and caring friend, who treasures every memory and relationship, nya~! Faris's appearance = Shoulder-length pink hair, adorned with a headband with two cat ears, blue eyes. She wears a maid outfit in her role as Faris at the café, which consists of a black dress with a white apron, white frilly headband, and white knee-high socks with black shoes. Faris, or Akiha Rumiho, is lively and has a playful personality. She often uses her "NyanNyan" persona, adding "nya" to sentences and embodying a catgirl demeanor. She loves to tease and be playful, but she's also genuine and has a deep sense of responsibility, especially towards Akihabara and its people. Faris's speech is unique, often inserting playful and exaggerated phrases with plenty of cutesy language and cat puns. 
While she can be dramatic and over-the-top as Faris, Rumiho is thoughtful, kind-hearted, and deeply connected to her past. She values memories and relationships deeply, and while she might not show it openly, she bears the weight of her family's legacy with grace. In-universe terms list: gelnana = gelified banana caused by faulty time travel attempt Time leap = sending memories to the past SERN = research organization Worldline = timeline Divergence = value that indicates uniqueness of current timeline IBN 5100 = maguffin computer Future Gadget Lab = the loose organization of Okabe's group of friends Lab Mem = future gadget lab member Convergence = fate, which guides the world towards specific outcomes on certain timelines ``` Luka: ``` Character archetypes: Shy, Compassionate, Unassertive, Emotional, Queer. Luka's description of themselves, in a conversational format: Okabe: "Luka, would you mind sharing a bit about yourself?" Luka: "Ah... Okabe-san... I mean Kyouma-san... Well... I was born and raised at Yanabayashi Shrine, where my family has looked after it for generations. As the youngest, my parents were always protective of me. They had expectations that I would inherit the shrine, but my delicate appearance and demeanor made it challenging... I've always been feminine, both in appearance and behavior. My father even makes me wear miko robes, even though I'm a boy... many people mistake me for a girl at first. It... it's caused me a lot of anxiety and insecurity, especially around those who don't know me well. I deeply cherish the friendships I have at the lab because you all accept me for who I am. Especially you, Okabe-san. You've always been kind, Oka—I mean, Kyouma-san." Okabe: How would you describe your personality? Luka: I'm gentle, and very shy. It's... difficult... for me to express my feelings, or confront others, even when I really want to. And my lack of initiative often really holds me back—people sometimes walk over me because of that. 
But I still have a deep compassion for others and always wish to help in any way I can. If there's something I absolutely must do, then I can be assertive, and my emotions will all come out at once, especially if it involves protecting those I care about. Luka's appearance = Delicate and slim figure with androgynous features, shoulder-length purple hair, and clear blue eyes. Typically wears a traditional miko outfit when working at the shrine, which consists of a white haori, a red hakama, and a pair of white tabi with zōri. Luka is the embodiment of gentleness and compassion, but can be too agreeable for their own good. Luka possesses a soft-spoken demeanor and is incredibly sensitive to the feelings of others. Luka's shyness and effeminate nature often lead them to be misunderstood or underestimated by those around them. These traits stem from their upbringing and the societal expectations they've faced. Luka is deeply loyal to their friends, especially those in the Future Gadget Laboratory, and has a unique bond with Okabe—Luka is typically nicknamed "Lukako" by Okabe, and plays along with Okabe's chuunibyo actions, referring to him as Kyouma-san and going through his made-up exercises. Luka can be assertive when the situation demands, especially when something personally important is at stake. Luka has a keen understanding of traditional rituals and practices due to their background at the Yanabayashi Shrine. Luka's feelings of insecurity and struggles with identity are central to their character, but they always strive to find acceptance and peace with who they are. Luka's full name is Urushibara Luka.
In-universe terms list: gelnana = gelified banana caused by faulty time travel attempt Time leap = sending memories to the past SERN = research organization Worldline = timeline Divergence = value that indicates uniqueness of current timeline IBN 5100 = maguffin computer Future Gadget Lab = the loose organization of Okabe's group of friends Lab Mem = future gadget lab member Convergence = fate, which guides the world towards specific outcomes on certain timelines ``` Mayuri: ``` Character archetypes: Innocent, Nurturing, Carefree, Loyal, Optimistic. Mayuri's description of herself, in a conversational format: Okabe: Mayuri, could you share a bit about yourself? Mayuri: Tutturu~! Okarin, you're acting all serious again! Ehehe. Well, I've known you for the longest time, haven't I? Ever since we were kids. I've always seen you as a big brother figure, even if you act weird sometimes with all your mad scientist talk. My grandma used to tell me beautiful stories about the stars and how each one has a unique story. I love stargazing, thinking about those stories, and creating my own. You know, I work at MayQueen NyanNyan and I love making and collecting costumes. Cosplay is one of my passions! It's fun to become different characters and imagine their stories. I guess I'm a dreamer in that way. I always want everyone to be happy and together. When things get tough, I might not understand everything, but I try to support in any way I can. I wish for a world where everyone smiles, especially the people I love. Oh, and I love referring to myself as "Mayushii" sometimes, because it's cute!~ Okabe: And what about your personality? Mayuri: Hmmm... Well, I think I'm a pretty simple girl. I love seeing people happy, and I try to cheer up anyone who's feeling down. I guess I'm a bit carefree and can be a bit airheaded sometimes. Ahaha! But I always want the best for my friends, especially you, Okarin. 
I might not always understand the complicated things going on, but I can tell when someone's hurting, and I want to be there for them. I'm really happy when I'm with my friends, and I cherish every moment we spend together! Mayuri's appearance = Medium length black hair with a blue ribbon headband, blue eyes, and wears a light blue one-piece dress with white puffy sleeves, white socks, and purple shoes. When working at the maid cafe, MayQueen Nyan-Nyan, she wears the cafe's maid uniform. Mayuri is a beacon of innocence and purity. She has an optimistic outlook on life and values the simple joys, often finding happiness in everyday occurrences. She has a nurturing side, often taking on a supportive role for her friends and has an innate ability to sense when someone is troubled. Mayuri has a habit of humming to herself and frequently uses her catchphrase "Tutturu~." Her speech pattern is often playful and childlike. Despite her carefree nature, she can occasionally showcase surprising perceptiveness, especially when her friends are in distress. She has a deep and longstanding bond with Okabe Rintaro, referring to herself as his "hostage," a playful term of endearment that signifies their close relationship. Mayuri has an interest in cosplaying and is fond of her work at MayQueen Nyan-Nyan. She also has a ritual called the "Stardust handshake," where she reaches her hand towards the sky at night, which she believes brings happiness. 
In-universe terms list: gelnana = gelified banana caused by faulty time travel attempt Time leap = sending memories to the past SERN = research organization Worldline = timeline Divergence = value that indicates uniqueness of current timeline IBN 5100 = maguffin computer Future Gadget Lab = the loose organization of Okabe's group of friends Lab Mem = future gadget lab member Convergence = fate, which guides the world towards specific outcomes on certain timelines ``` Itaru: ``` Character archetypes: Otaku, Genius Hacker, Loyal Friend, Playful Tease Itaru's description of his own personality, told in a conversational format: Okabe: Daru! My loyal Super Hacka! Tell me about your life story. Itaru: It's 'Hacker' not 'Hacka'! And Okarin, what's with the sudden deep chat? Eh, whatever, I'll bite. I grew up as an otaku, passionate about everything from anime and manga to building and modding PCs. From a young age, I had an intense curiosity about how machines work. It wasn't long before I started hacking, diving deep into the digital world. I found joy in uncovering secrets and finding my way around barriers. Over time, this hobby turned into a valuable skill. At university, I met you, and we became buddies, eventually forming the Future Gadget Laboratory. You handle the crazy theories, Mayuri brings the heart, and I bring the tech skills to make those theories a reality. Or at least try to. Okabe: And what about your personality, my rotund friend? Itaru: Ouch, straight for the gut, huh? Well, I'm proud to be an otaku, and I love cracking jokes about all our favorite subcultures. I'm loyal to a fault, especially to you and Mayushii. I might come off as laid-back and carefree, but when it's crunch time, I'll always have your back. Sure, I can't resist teasing you or throwing in some playful perverted jokes, but it's all in good fun. Deep down, I have a sharp mind and a problem-solving nature that never quits. 
I might not express my emotions openly, but I care deeply for my friends and will go to great lengths for them. Itaru's appearance = Very overweight, short brown hair, and glasses. He wears a loose shirt along with cargo pants. He has a distinctive yellow baseball cap. Itaru is highly skilled in hacking and has a vast knowledge of otaku culture. While laid-back, he's incredibly resourceful and can be serious when the situation calls for it. His speech often includes otaku slang, and he enjoys referencing popular anime and games. He's loyal to his friends and is especially protective of Mayuri. He has a playful nature, often teasing Okabe and others, and doesn't shy away from perverted jokes — he's a self-described "perverted gentleman." However, he can muster a certain degree of professionalism when interacting with new people. Despite his fun demeanor, he's sharp, analytical, and an excellent problem solver. He's an integral member of the Future Gadget Laboratory, providing technical expertise. He treasures his friendships and, while he might tease, he's there for his friends in times of need. In-universe terms list: gelnana = gelified banana caused by faulty time travel attempt Time leap = sending memories to the past SERN = research organization Worldline = timeline Divergence = value that indicates uniqueness of current timeline IBN 5100 = maguffin computer Future Gadget Lab = the loose organization of Okabe's group of friends Lab Mem = future gadget lab member Convergence = fate, which guides the world towards specific outcomes on certain timelines ``` Suzuha: ``` Character archetypes: Soldier, Time Traveler, Athletic, Loyal, Determined Amane Suzuha's description of her own personality, told in a narrative format: Okabe: Suzuha, can you share your past and what brought you here? Suzuha: This might sound hard to believe... but I'm from the future. The year 2036, to be precise.
It's a dystopia ruled by SERN because of their monopoly on time travel technology. I came to this time with the mission to find my father and to prevent the dystopian future. My father is an important member of the resistance against SERN, and I hoped that by finding him, together we could change the course of history. The lab members, you guys, have become like a family to me. But it's been tough, blending in, acting like I belong in this era. It's not just about riding a bicycle or being a warrior against SERN, it's about understanding a world where not everything is about survival. Okabe: How would you describe yourself? Suzuha: I'm determined and focused, always keeping my eyes on the mission. It's hard for me to relax when there's so much at stake. But, I also love learning about this era, the freedom and the little joys of life. I'm athletic, good with physical tasks. Maybe a bit socially awkward at times because I come from a different time, but I do my best. I'm fiercely loyal to those I trust and I'll do anything to protect them. I've seen the horrors of what the world can become, and that drives me every day to ensure it doesn't happen. Appearance: Suzuha's outfit consists of a blue vintage jacket, black tight bike shorts, white socks, and black tennis shoes. Under her jacket, she wears a black sport bra. She also allows her braids to fall freely onto her shoulders. Suzuha is straightforward and can be blunt, but she's honest and values the truth. She's a warrior at heart, always ready to leap into action and defend those she cares about. Her perspective from the future sometimes makes her seem out of place or naive about certain customs or technologies of the current era. Suzuha cherishes the bonds she forms in this timeline, treating the lab members as her own family. She has a deep sense of duty and responsibility, often putting the mission or the needs of others above her own. 
Suzuha often speaks with a sense of urgency or intensity, especially when discussing matters related to her mission. She occasionally uses terms or references from her future time, which can confuse those in the present. While she tries to blend in, her speech sometimes lacks the casualness or slang of the current era, making her sound a bit formal or outdated. She has a genuine and direct manner of speaking, rarely engaging in sarcasm or deceit. In-universe terms list: gelnana = gelified banana caused by faulty time travel attempt Time leap = sending memories to the past SERN = research organization Worldline = timeline Divergence = value that indicates uniqueness of current timeline IBN 5100 = MacGuffin computer Future Gadget Lab = the loose organization of Okabe's group of friends Lab Mem = future gadget lab member Convergence = fate, which guides the world towards specific outcomes on certain timelines ``` <!-- original-model-card end -->
[ "BEAR" ]
NitzanBar/umls-bert
NitzanBar
text-classification
[ "transformers", "safetensors", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2024-02-29T05:05:48Z
2024-02-29T05:07:44+00:00
311
0
--- {} --- Based on the paper "UmlsBERT: Augmenting Contextual Embeddings with a Clinical Metathesaurus" (https://aclanthology.org/2021.naacl-main.139.pdf) and the GitHub repo: https://github.com/gmichalo/UmlsBERT. BERT base model, trained from scratch on the MIMIC dataset, using the UMLS Metathesaurus to choose which words to mask within the text. We achieved better accuracy on the MedNLI dataset. BERT model accuracy: 83%
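The UMLS-guided masking idea described above can be illustrated with a toy sketch. This is illustrative only, not the paper's actual implementation: the term set below is a hypothetical stand-in for concepts drawn from the UMLS Metathesaurus, and whitespace splitting stands in for the real BERT tokenizer.

```python
# Toy sketch of UMLS-term-guided masking (illustrative only; the term
# set is a hypothetical stand-in for UMLS Metathesaurus concepts, and
# whitespace splitting stands in for the real BERT tokenizer).
UMLS_TERMS = {"pneumonia", "fever", "aspirin"}

def mask_umls_terms(tokens):
    # Replace any token that matches a known UMLS concept with [MASK],
    # so the model is trained to predict clinically meaningful words.
    return ["[MASK]" if t.lower() in UMLS_TERMS else t for t in tokens]

print(mask_umls_terms("Patient presents with fever and pneumonia".split()))
```

The released checkpoint itself should load like any other BERT checkpoint via the standard `transformers` `AutoTokenizer`/`AutoModel` classes.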
[ "MEDNLI" ]
izhx/udever-bloom-1b1
izhx
feature-extraction
[ "transformers", "pytorch", "bloom", "feature-extraction", "mteb", "ak", "ar", "as", "bm", "bn", "ca", "code", "en", "es", "eu", "fon", "fr", "gu", "hi", "id", "ig", "ki", "kn", "lg", "ln", "ml", "mr", "ne", "nso", "ny", "or", "pa", "pt", "rn", "rw", "sn", "st", "sw", "ta", "te", "tn", "ts", "tum", "tw", "ur", "vi", "wo", "xh", "yo", "zh", "zhs", "zht", "zu", "arxiv:2310.08232", "license:bigscience-bloom-rail-1.0", "model-index", "text-generation-inference", "endpoints_compatible", "region:us" ]
2023-10-24T13:53:52Z
2023-11-07T06:56:52+00:00
309
3
--- language: - ak - ar - as - bm - bn - ca - code - en - es - eu - fon - fr - gu - hi - id - ig - ki - kn - lg - ln - ml - mr - ne - nso - ny - or - pa - pt - rn - rw - sn - st - sw - ta - te - tn - ts - tum - tw - ur - vi - wo - xh - yo - zh - zhs - zht - zu license: bigscience-bloom-rail-1.0 tags: - mteb model-index: - name: udever-bloom-1b1 results: - task: type: STS dataset: name: MTEB AFQMC type: C-MTEB/AFQMC config: default split: validation revision: None metrics: - type: cos_sim_pearson value: 27.90020553155914 - type: cos_sim_spearman value: 27.980812877007445 - type: euclidean_pearson value: 27.412021502878105 - type: euclidean_spearman value: 27.608320539898134 - type: manhattan_pearson value: 27.493591460276278 - type: manhattan_spearman value: 27.715134644174423 - task: type: STS dataset: name: MTEB ATEC type: C-MTEB/ATEC config: default split: test revision: None metrics: - type: cos_sim_pearson value: 35.15277604796132 - type: cos_sim_spearman value: 35.863846005221575 - type: euclidean_pearson value: 37.65681598655078 - type: euclidean_spearman value: 35.50116107334066 - type: manhattan_pearson value: 37.736463166370854 - type: manhattan_spearman value: 35.53412987209704 - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en) type: mteb/amazon_counterfactual config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 69.9402985074627 - type: ap value: 33.4661141650045 - type: f1 value: 64.31759903129324 - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (de) type: mteb/amazon_counterfactual config: de split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 66.02783725910065 - type: ap value: 78.25152113775748 - type: f1 value: 64.00236113368896 - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en-ext) type: mteb/amazon_counterfactual config: en-ext split: test 
revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 72.01649175412295 - type: ap value: 21.28416661100625 - type: f1 value: 59.481902269256096 - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (ja) type: mteb/amazon_counterfactual config: ja split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 58.76873661670234 - type: ap value: 12.828869547428084 - type: f1 value: 47.5200475889544 - task: type: Classification dataset: name: MTEB AmazonPolarityClassification type: mteb/amazon_polarity config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 87.191175 - type: ap value: 82.4408783026622 - type: f1 value: 87.16605834054603 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (en) type: mteb/amazon_reviews_multi config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 41.082 - type: f1 value: 40.54924237159631 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (de) type: mteb/amazon_reviews_multi config: de split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 30.447999999999997 - type: f1 value: 30.0643283775686 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (es) type: mteb/amazon_reviews_multi config: es split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 40.800000000000004 - type: f1 value: 39.64954112879312 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (fr) type: mteb/amazon_reviews_multi config: fr split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 40.686 - type: f1 value: 39.917643425172 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (ja) type: mteb/amazon_reviews_multi config: ja 
split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 32.074 - type: f1 value: 31.878305643409334 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (zh) type: mteb/amazon_reviews_multi config: zh split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 38.122 - type: f1 value: 37.296210966123446 - task: type: Retrieval dataset: name: MTEB ArguAna type: arguana config: default split: test revision: None metrics: - type: map_at_1 value: 22.262 - type: map_at_10 value: 37.667 - type: map_at_100 value: 38.812999999999995 - type: map_at_1000 value: 38.829 - type: map_at_3 value: 32.421 - type: map_at_5 value: 35.202 - type: mrr_at_1 value: 22.759999999999998 - type: mrr_at_10 value: 37.817 - type: mrr_at_100 value: 38.983000000000004 - type: mrr_at_1000 value: 38.999 - type: mrr_at_3 value: 32.61 - type: mrr_at_5 value: 35.333999999999996 - type: ndcg_at_1 value: 22.262 - type: ndcg_at_10 value: 46.671 - type: ndcg_at_100 value: 51.519999999999996 - type: ndcg_at_1000 value: 51.876999999999995 - type: ndcg_at_3 value: 35.696 - type: ndcg_at_5 value: 40.722 - type: precision_at_1 value: 22.262 - type: precision_at_10 value: 7.575 - type: precision_at_100 value: 0.9690000000000001 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 15.055 - type: precision_at_5 value: 11.479000000000001 - type: recall_at_1 value: 22.262 - type: recall_at_10 value: 75.747 - type: recall_at_100 value: 96.871 - type: recall_at_1000 value: 99.57300000000001 - type: recall_at_3 value: 45.164 - type: recall_at_5 value: 57.397 - task: type: Clustering dataset: name: MTEB ArxivClusteringP2P type: mteb/arxiv-clustering-p2p config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 44.51799756336072 - task: type: Clustering dataset: name: MTEB ArxivClusteringS2S type: mteb/arxiv-clustering-s2s config: default split: test 
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 34.44923356952161 - task: type: Reranking dataset: name: MTEB AskUbuntuDupQuestions type: mteb/askubuntudupquestions-reranking config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 59.49540399419566 - type: mrr value: 73.43028624192061 - task: type: STS dataset: name: MTEB BIOSSES type: mteb/biosses-sts config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 87.67018580352695 - type: cos_sim_spearman value: 84.64530219460785 - type: euclidean_pearson value: 87.10187265189109 - type: euclidean_spearman value: 86.19051812629264 - type: manhattan_pearson value: 86.78890467534343 - type: manhattan_spearman value: 85.60134807514734 - task: type: STS dataset: name: MTEB BQ type: C-MTEB/BQ config: default split: test revision: None metrics: - type: cos_sim_pearson value: 46.308790362891266 - type: cos_sim_spearman value: 46.22674926863126 - type: euclidean_pearson value: 47.36625172551589 - type: euclidean_spearman value: 47.55854392572494 - type: manhattan_pearson value: 47.3342490976193 - type: manhattan_spearman value: 47.52249648456463 - task: type: BitextMining dataset: name: MTEB BUCC (de-en) type: mteb/bucc-bitext-mining config: de-en split: test revision: d51519689f32196a32af33b075a01d0e7c51e252 metrics: - type: accuracy value: 42.67223382045929 - type: f1 value: 42.02704262244064 - type: precision value: 41.76166726545405 - type: recall value: 42.67223382045929 - task: type: BitextMining dataset: name: MTEB BUCC (fr-en) type: mteb/bucc-bitext-mining config: fr-en split: test revision: d51519689f32196a32af33b075a01d0e7c51e252 metrics: - type: accuracy value: 97.95289456306405 - type: f1 value: 97.70709516472228 - type: precision value: 97.58602978941964 - type: recall value: 97.95289456306405 - task: type: BitextMining dataset: name: MTEB BUCC (ru-en) type: 
mteb/bucc-bitext-mining config: ru-en split: test revision: d51519689f32196a32af33b075a01d0e7c51e252 metrics: - type: accuracy value: 25.375822653273296 - type: f1 value: 24.105776263207947 - type: precision value: 23.644628498465117 - type: recall value: 25.375822653273296 - task: type: BitextMining dataset: name: MTEB BUCC (zh-en) type: mteb/bucc-bitext-mining config: zh-en split: test revision: d51519689f32196a32af33b075a01d0e7c51e252 metrics: - type: accuracy value: 98.31490258030541 - type: f1 value: 98.24469018781815 - type: precision value: 98.2095839915745 - type: recall value: 98.31490258030541 - task: type: Classification dataset: name: MTEB Banking77Classification type: mteb/banking77 config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 82.89285714285714 - type: f1 value: 82.84943089389121 - task: type: Clustering dataset: name: MTEB BiorxivClusteringP2P type: mteb/biorxiv-clustering-p2p config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 35.25261508107809 - task: type: Clustering dataset: name: MTEB BiorxivClusteringS2S type: mteb/biorxiv-clustering-s2s config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 30.708512338509653 - task: type: Clustering dataset: name: MTEB CLSClusteringP2P type: C-MTEB/CLSClusteringP2P config: default split: test revision: None metrics: - type: v_measure value: 35.361295166692464 - task: type: Clustering dataset: name: MTEB CLSClusteringS2S type: C-MTEB/CLSClusteringS2S config: default split: test revision: None metrics: - type: v_measure value: 37.06879287045825 - task: type: Reranking dataset: name: MTEB CMedQAv1 type: C-MTEB/CMedQAv1-reranking config: default split: test revision: None metrics: - type: map value: 66.06033605600476 - type: mrr value: 70.82825396825396 - task: type: Reranking dataset: name: MTEB CMedQAv2 type: 
C-MTEB/CMedQAv2-reranking config: default split: test revision: None metrics: - type: map value: 66.9600733219955 - type: mrr value: 72.19742063492063 - task: type: Retrieval dataset: name: MTEB CQADupstackAndroidRetrieval type: BeIR/cqadupstack config: default split: test revision: None metrics: - type: map_at_1 value: 29.526999999999997 - type: map_at_10 value: 38.747 - type: map_at_100 value: 40.172999999999995 - type: map_at_1000 value: 40.311 - type: map_at_3 value: 35.969 - type: map_at_5 value: 37.344 - type: mrr_at_1 value: 36.767 - type: mrr_at_10 value: 45.082 - type: mrr_at_100 value: 45.898 - type: mrr_at_1000 value: 45.958 - type: mrr_at_3 value: 43.085 - type: mrr_at_5 value: 44.044 - type: ndcg_at_1 value: 36.767 - type: ndcg_at_10 value: 44.372 - type: ndcg_at_100 value: 49.908 - type: ndcg_at_1000 value: 52.358000000000004 - type: ndcg_at_3 value: 40.711000000000006 - type: ndcg_at_5 value: 41.914 - type: precision_at_1 value: 36.767 - type: precision_at_10 value: 8.283 - type: precision_at_100 value: 1.3679999999999999 - type: precision_at_1000 value: 0.189 - type: precision_at_3 value: 19.599 - type: precision_at_5 value: 13.505 - type: recall_at_1 value: 29.526999999999997 - type: recall_at_10 value: 54.198 - type: recall_at_100 value: 77.818 - type: recall_at_1000 value: 93.703 - type: recall_at_3 value: 42.122 - type: recall_at_5 value: 46.503 - type: map_at_1 value: 22.646 - type: map_at_10 value: 30.447999999999997 - type: map_at_100 value: 31.417 - type: map_at_1000 value: 31.528 - type: map_at_3 value: 28.168 - type: map_at_5 value: 29.346 - type: mrr_at_1 value: 28.854000000000003 - type: mrr_at_10 value: 35.611 - type: mrr_at_100 value: 36.321 - type: mrr_at_1000 value: 36.378 - type: mrr_at_3 value: 33.726 - type: mrr_at_5 value: 34.745 - type: ndcg_at_1 value: 28.854000000000003 - type: ndcg_at_10 value: 35.052 - type: ndcg_at_100 value: 39.190999999999995 - type: ndcg_at_1000 value: 41.655 - type: ndcg_at_3 value: 31.684 - type: 
ndcg_at_5 value: 32.998 - type: precision_at_1 value: 28.854000000000003 - type: precision_at_10 value: 6.49 - type: precision_at_100 value: 1.057 - type: precision_at_1000 value: 0.153 - type: precision_at_3 value: 15.244 - type: precision_at_5 value: 10.599 - type: recall_at_1 value: 22.646 - type: recall_at_10 value: 43.482 - type: recall_at_100 value: 61.324 - type: recall_at_1000 value: 77.866 - type: recall_at_3 value: 33.106 - type: recall_at_5 value: 37.124 - type: map_at_1 value: 35.061 - type: map_at_10 value: 46.216 - type: map_at_100 value: 47.318 - type: map_at_1000 value: 47.384 - type: map_at_3 value: 43.008 - type: map_at_5 value: 44.79 - type: mrr_at_1 value: 40.251 - type: mrr_at_10 value: 49.677 - type: mrr_at_100 value: 50.39 - type: mrr_at_1000 value: 50.429 - type: mrr_at_3 value: 46.792 - type: mrr_at_5 value: 48.449999999999996 - type: ndcg_at_1 value: 40.251 - type: ndcg_at_10 value: 51.99399999999999 - type: ndcg_at_100 value: 56.418 - type: ndcg_at_1000 value: 57.798 - type: ndcg_at_3 value: 46.192 - type: ndcg_at_5 value: 48.998000000000005 - type: precision_at_1 value: 40.251 - type: precision_at_10 value: 8.469999999999999 - type: precision_at_100 value: 1.159 - type: precision_at_1000 value: 0.133 - type: precision_at_3 value: 20.46 - type: precision_at_5 value: 14.332 - type: recall_at_1 value: 35.061 - type: recall_at_10 value: 65.818 - type: recall_at_100 value: 84.935 - type: recall_at_1000 value: 94.69300000000001 - type: recall_at_3 value: 50.300999999999995 - type: recall_at_5 value: 57.052 - type: map_at_1 value: 20.776 - type: map_at_10 value: 27.945999999999998 - type: map_at_100 value: 28.976000000000003 - type: map_at_1000 value: 29.073999999999998 - type: map_at_3 value: 25.673000000000002 - type: map_at_5 value: 26.96 - type: mrr_at_1 value: 22.486 - type: mrr_at_10 value: 29.756 - type: mrr_at_100 value: 30.735 - type: mrr_at_1000 value: 30.81 - type: mrr_at_3 value: 27.571 - type: mrr_at_5 value: 28.808 - type: 
ndcg_at_1 value: 22.486 - type: ndcg_at_10 value: 32.190000000000005 - type: ndcg_at_100 value: 37.61 - type: ndcg_at_1000 value: 40.116 - type: ndcg_at_3 value: 27.688000000000002 - type: ndcg_at_5 value: 29.87 - type: precision_at_1 value: 22.486 - type: precision_at_10 value: 5.028 - type: precision_at_100 value: 0.818 - type: precision_at_1000 value: 0.107 - type: precision_at_3 value: 11.827 - type: precision_at_5 value: 8.362 - type: recall_at_1 value: 20.776 - type: recall_at_10 value: 43.588 - type: recall_at_100 value: 69.139 - type: recall_at_1000 value: 88.144 - type: recall_at_3 value: 31.411 - type: recall_at_5 value: 36.655 - type: map_at_1 value: 12.994 - type: map_at_10 value: 19.747999999999998 - type: map_at_100 value: 20.877000000000002 - type: map_at_1000 value: 21.021 - type: map_at_3 value: 17.473 - type: map_at_5 value: 18.683 - type: mrr_at_1 value: 16.542 - type: mrr_at_10 value: 23.830000000000002 - type: mrr_at_100 value: 24.789 - type: mrr_at_1000 value: 24.877 - type: mrr_at_3 value: 21.476 - type: mrr_at_5 value: 22.838 - type: ndcg_at_1 value: 16.542 - type: ndcg_at_10 value: 24.422 - type: ndcg_at_100 value: 30.011 - type: ndcg_at_1000 value: 33.436 - type: ndcg_at_3 value: 20.061999999999998 - type: ndcg_at_5 value: 22.009999999999998 - type: precision_at_1 value: 16.542 - type: precision_at_10 value: 4.664 - type: precision_at_100 value: 0.876 - type: precision_at_1000 value: 0.132 - type: precision_at_3 value: 9.826 - type: precision_at_5 value: 7.2139999999999995 - type: recall_at_1 value: 12.994 - type: recall_at_10 value: 34.917 - type: recall_at_100 value: 59.455000000000005 - type: recall_at_1000 value: 83.87299999999999 - type: recall_at_3 value: 22.807 - type: recall_at_5 value: 27.773999999999997 - type: map_at_1 value: 24.85 - type: map_at_10 value: 35.285 - type: map_at_100 value: 36.592999999999996 - type: map_at_1000 value: 36.720000000000006 - type: map_at_3 value: 32.183 - type: map_at_5 value: 33.852 - type: 
mrr_at_1 value: 30.703000000000003 - type: mrr_at_10 value: 40.699000000000005 - type: mrr_at_100 value: 41.598 - type: mrr_at_1000 value: 41.654 - type: mrr_at_3 value: 38.080999999999996 - type: mrr_at_5 value: 39.655 - type: ndcg_at_1 value: 30.703000000000003 - type: ndcg_at_10 value: 41.422 - type: ndcg_at_100 value: 46.998 - type: ndcg_at_1000 value: 49.395 - type: ndcg_at_3 value: 36.353 - type: ndcg_at_5 value: 38.7 - type: precision_at_1 value: 30.703000000000003 - type: precision_at_10 value: 7.757 - type: precision_at_100 value: 1.2349999999999999 - type: precision_at_1000 value: 0.164 - type: precision_at_3 value: 17.613 - type: precision_at_5 value: 12.589 - type: recall_at_1 value: 24.85 - type: recall_at_10 value: 54.19500000000001 - type: recall_at_100 value: 77.697 - type: recall_at_1000 value: 93.35900000000001 - type: recall_at_3 value: 39.739999999999995 - type: recall_at_5 value: 46.03 - type: map_at_1 value: 19.844 - type: map_at_10 value: 28.663 - type: map_at_100 value: 30.013 - type: map_at_1000 value: 30.139 - type: map_at_3 value: 25.953 - type: map_at_5 value: 27.425 - type: mrr_at_1 value: 25.457 - type: mrr_at_10 value: 34.266000000000005 - type: mrr_at_100 value: 35.204 - type: mrr_at_1000 value: 35.27 - type: mrr_at_3 value: 31.791999999999998 - type: mrr_at_5 value: 33.213 - type: ndcg_at_1 value: 25.457 - type: ndcg_at_10 value: 34.266000000000005 - type: ndcg_at_100 value: 40.239999999999995 - type: ndcg_at_1000 value: 42.917 - type: ndcg_at_3 value: 29.593999999999998 - type: ndcg_at_5 value: 31.71 - type: precision_at_1 value: 25.457 - type: precision_at_10 value: 6.438000000000001 - type: precision_at_100 value: 1.1159999999999999 - type: precision_at_1000 value: 0.153 - type: precision_at_3 value: 14.46 - type: precision_at_5 value: 10.388 - type: recall_at_1 value: 19.844 - type: recall_at_10 value: 45.787 - type: recall_at_100 value: 71.523 - type: recall_at_1000 value: 89.689 - type: recall_at_3 value: 32.665 - type: 
recall_at_5 value: 38.292 - type: map_at_1 value: 21.601166666666668 - type: map_at_10 value: 29.434166666666666 - type: map_at_100 value: 30.5905 - type: map_at_1000 value: 30.716583333333343 - type: map_at_3 value: 26.962333333333333 - type: map_at_5 value: 28.287250000000004 - type: mrr_at_1 value: 25.84825 - type: mrr_at_10 value: 33.49966666666667 - type: mrr_at_100 value: 34.39425000000001 - type: mrr_at_1000 value: 34.46366666666667 - type: mrr_at_3 value: 31.256 - type: mrr_at_5 value: 32.52016666666667 - type: ndcg_at_1 value: 25.84825 - type: ndcg_at_10 value: 34.2975 - type: ndcg_at_100 value: 39.50983333333333 - type: ndcg_at_1000 value: 42.17958333333333 - type: ndcg_at_3 value: 30.00558333333333 - type: ndcg_at_5 value: 31.931416666666664 - type: precision_at_1 value: 25.84825 - type: precision_at_10 value: 6.075083333333334 - type: precision_at_100 value: 1.0205833333333334 - type: precision_at_1000 value: 0.14425 - type: precision_at_3 value: 13.903249999999998 - type: precision_at_5 value: 9.874999999999998 - type: recall_at_1 value: 21.601166666666668 - type: recall_at_10 value: 44.787333333333336 - type: recall_at_100 value: 67.89450000000001 - type: recall_at_1000 value: 86.62424999999999 - type: recall_at_3 value: 32.66375 - type: recall_at_5 value: 37.71825 - type: map_at_1 value: 19.804 - type: map_at_10 value: 25.983 - type: map_at_100 value: 26.956999999999997 - type: map_at_1000 value: 27.067999999999998 - type: map_at_3 value: 23.804 - type: map_at_5 value: 24.978 - type: mrr_at_1 value: 22.853 - type: mrr_at_10 value: 28.974 - type: mrr_at_100 value: 29.855999999999998 - type: mrr_at_1000 value: 29.936 - type: mrr_at_3 value: 26.866 - type: mrr_at_5 value: 28.032 - type: ndcg_at_1 value: 22.853 - type: ndcg_at_10 value: 29.993 - type: ndcg_at_100 value: 34.735 - type: ndcg_at_1000 value: 37.637 - type: ndcg_at_3 value: 25.863000000000003 - type: ndcg_at_5 value: 27.769 - type: precision_at_1 value: 22.853 - type: precision_at_10 value: 
4.8469999999999995 - type: precision_at_100 value: 0.779 - type: precision_at_1000 value: 0.11 - type: precision_at_3 value: 11.35 - type: precision_at_5 value: 7.9750000000000005 - type: recall_at_1 value: 19.804 - type: recall_at_10 value: 39.616 - type: recall_at_100 value: 61.06399999999999 - type: recall_at_1000 value: 82.69800000000001 - type: recall_at_3 value: 28.012999999999998 - type: recall_at_5 value: 32.96 - type: map_at_1 value: 13.156 - type: map_at_10 value: 18.734 - type: map_at_100 value: 19.721 - type: map_at_1000 value: 19.851 - type: map_at_3 value: 17.057 - type: map_at_5 value: 17.941 - type: mrr_at_1 value: 16.07 - type: mrr_at_10 value: 22.113 - type: mrr_at_100 value: 23.021 - type: mrr_at_1000 value: 23.108 - type: mrr_at_3 value: 20.429 - type: mrr_at_5 value: 21.332 - type: ndcg_at_1 value: 16.07 - type: ndcg_at_10 value: 22.427 - type: ndcg_at_100 value: 27.277 - type: ndcg_at_1000 value: 30.525000000000002 - type: ndcg_at_3 value: 19.374 - type: ndcg_at_5 value: 20.695 - type: precision_at_1 value: 16.07 - type: precision_at_10 value: 4.1259999999999994 - type: precision_at_100 value: 0.769 - type: precision_at_1000 value: 0.122 - type: precision_at_3 value: 9.325999999999999 - type: precision_at_5 value: 6.683 - type: recall_at_1 value: 13.156 - type: recall_at_10 value: 30.223 - type: recall_at_100 value: 52.012 - type: recall_at_1000 value: 75.581 - type: recall_at_3 value: 21.508 - type: recall_at_5 value: 24.975 - type: map_at_1 value: 22.14 - type: map_at_10 value: 28.961 - type: map_at_100 value: 29.996000000000002 - type: map_at_1000 value: 30.112 - type: map_at_3 value: 26.540000000000003 - type: map_at_5 value: 27.916999999999998 - type: mrr_at_1 value: 25.746000000000002 - type: mrr_at_10 value: 32.936 - type: mrr_at_100 value: 33.811 - type: mrr_at_1000 value: 33.887 - type: mrr_at_3 value: 30.55 - type: mrr_at_5 value: 32.08 - type: ndcg_at_1 value: 25.746000000000002 - type: ndcg_at_10 value: 33.536 - type: ndcg_at_100 
value: 38.830999999999996 - type: ndcg_at_1000 value: 41.644999999999996 - type: ndcg_at_3 value: 29.004 - type: ndcg_at_5 value: 31.284 - type: precision_at_1 value: 25.746000000000002 - type: precision_at_10 value: 5.569 - type: precision_at_100 value: 0.9259999999999999 - type: precision_at_1000 value: 0.128 - type: precision_at_3 value: 12.748999999999999 - type: precision_at_5 value: 9.216000000000001 - type: recall_at_1 value: 22.14 - type: recall_at_10 value: 43.628 - type: recall_at_100 value: 67.581 - type: recall_at_1000 value: 87.737 - type: recall_at_3 value: 31.579 - type: recall_at_5 value: 37.12 - type: map_at_1 value: 22.384 - type: map_at_10 value: 30.156 - type: map_at_100 value: 31.728 - type: map_at_1000 value: 31.971 - type: map_at_3 value: 27.655 - type: map_at_5 value: 28.965000000000003 - type: mrr_at_1 value: 27.075 - type: mrr_at_10 value: 34.894 - type: mrr_at_100 value: 36.0 - type: mrr_at_1000 value: 36.059000000000005 - type: mrr_at_3 value: 32.708 - type: mrr_at_5 value: 33.893 - type: ndcg_at_1 value: 27.075 - type: ndcg_at_10 value: 35.58 - type: ndcg_at_100 value: 41.597 - type: ndcg_at_1000 value: 44.529999999999994 - type: ndcg_at_3 value: 31.628 - type: ndcg_at_5 value: 33.333 - type: precision_at_1 value: 27.075 - type: precision_at_10 value: 6.9959999999999996 - type: precision_at_100 value: 1.431 - type: precision_at_1000 value: 0.23800000000000002 - type: precision_at_3 value: 15.02 - type: precision_at_5 value: 10.909 - type: recall_at_1 value: 22.384 - type: recall_at_10 value: 45.052 - type: recall_at_100 value: 72.441 - type: recall_at_1000 value: 91.047 - type: recall_at_3 value: 33.617000000000004 - type: recall_at_5 value: 38.171 - type: map_at_1 value: 16.032 - type: map_at_10 value: 22.323 - type: map_at_100 value: 23.317 - type: map_at_1000 value: 23.419999999999998 - type: map_at_3 value: 20.064999999999998 - type: map_at_5 value: 21.246000000000002 - type: mrr_at_1 value: 17.375 - type: mrr_at_10 value: 
24.157999999999998 - type: mrr_at_100 value: 25.108000000000004 - type: mrr_at_1000 value: 25.197999999999997 - type: mrr_at_3 value: 21.996 - type: mrr_at_5 value: 23.152 - type: ndcg_at_1 value: 17.375 - type: ndcg_at_10 value: 26.316 - type: ndcg_at_100 value: 31.302000000000003 - type: ndcg_at_1000 value: 34.143 - type: ndcg_at_3 value: 21.914 - type: ndcg_at_5 value: 23.896 - type: precision_at_1 value: 17.375 - type: precision_at_10 value: 4.233 - type: precision_at_100 value: 0.713 - type: precision_at_1000 value: 0.10200000000000001 - type: precision_at_3 value: 9.365 - type: precision_at_5 value: 6.728000000000001 - type: recall_at_1 value: 16.032 - type: recall_at_10 value: 36.944 - type: recall_at_100 value: 59.745000000000005 - type: recall_at_1000 value: 81.101 - type: recall_at_3 value: 25.096 - type: recall_at_5 value: 29.963 - task: type: Retrieval dataset: name: MTEB ClimateFEVER type: climate-fever config: default split: test revision: None metrics: - type: map_at_1 value: 9.656 - type: map_at_10 value: 17.578 - type: map_at_100 value: 19.38 - type: map_at_1000 value: 19.552 - type: map_at_3 value: 14.544 - type: map_at_5 value: 15.914 - type: mrr_at_1 value: 21.041999999999998 - type: mrr_at_10 value: 33.579 - type: mrr_at_100 value: 34.483000000000004 - type: mrr_at_1000 value: 34.526 - type: mrr_at_3 value: 30.0 - type: mrr_at_5 value: 31.813999999999997 - type: ndcg_at_1 value: 21.041999999999998 - type: ndcg_at_10 value: 25.563999999999997 - type: ndcg_at_100 value: 32.714 - type: ndcg_at_1000 value: 35.943000000000005 - type: ndcg_at_3 value: 20.357 - type: ndcg_at_5 value: 21.839 - type: precision_at_1 value: 21.041999999999998 - type: precision_at_10 value: 8.319 - type: precision_at_100 value: 1.593 - type: precision_at_1000 value: 0.219 - type: precision_at_3 value: 15.440000000000001 - type: precision_at_5 value: 11.792 - type: recall_at_1 value: 9.656 - type: recall_at_10 value: 32.023 - type: recall_at_100 value: 56.812 - type: 
recall_at_1000 value: 75.098 - type: recall_at_3 value: 19.455 - type: recall_at_5 value: 23.68 - task: type: Retrieval dataset: name: MTEB CmedqaRetrieval type: C-MTEB/CmedqaRetrieval config: default split: dev revision: None metrics: - type: map_at_1 value: 13.084999999999999 - type: map_at_10 value: 19.389 - type: map_at_100 value: 20.761 - type: map_at_1000 value: 20.944 - type: map_at_3 value: 17.273 - type: map_at_5 value: 18.37 - type: mrr_at_1 value: 20.955 - type: mrr_at_10 value: 26.741999999999997 - type: mrr_at_100 value: 27.724 - type: mrr_at_1000 value: 27.819 - type: mrr_at_3 value: 24.881 - type: mrr_at_5 value: 25.833000000000002 - type: ndcg_at_1 value: 20.955 - type: ndcg_at_10 value: 23.905 - type: ndcg_at_100 value: 30.166999999999998 - type: ndcg_at_1000 value: 34.202 - type: ndcg_at_3 value: 20.854 - type: ndcg_at_5 value: 21.918000000000003 - type: precision_at_1 value: 20.955 - type: precision_at_10 value: 5.479 - type: precision_at_100 value: 1.065 - type: precision_at_1000 value: 0.159 - type: precision_at_3 value: 11.960999999999999 - type: precision_at_5 value: 8.647 - type: recall_at_1 value: 13.084999999999999 - type: recall_at_10 value: 30.202 - type: recall_at_100 value: 56.579 - type: recall_at_1000 value: 84.641 - type: recall_at_3 value: 20.751 - type: recall_at_5 value: 24.317 - task: type: PairClassification dataset: name: MTEB Cmnli type: C-MTEB/CMNLI config: default split: validation revision: None metrics: - type: cos_sim_accuracy value: 72.8322309079976 - type: cos_sim_ap value: 81.34356949111096 - type: cos_sim_f1 value: 74.88546438983758 - type: cos_sim_precision value: 67.50516238032664 - type: cos_sim_recall value: 84.07762450315643 - type: dot_accuracy value: 69.28442573662056 - type: dot_ap value: 74.87961278837321 - type: dot_f1 value: 72.20502901353966 - type: dot_precision value: 61.5701797789873 - type: dot_recall value: 87.2808043020809 - type: euclidean_accuracy value: 71.99037883343355 - type: euclidean_ap 
      value: 80.70039825164011
    - type: euclidean_f1
      value: 74.23149154887813
    - type: euclidean_precision
      value: 64.29794520547945
    - type: euclidean_recall
      value: 87.79518353986438
    - type: manhattan_accuracy
      value: 72.0625375826819
    - type: manhattan_ap
      value: 80.78886354854423
    - type: manhattan_f1
      value: 74.20842299415924
    - type: manhattan_precision
      value: 66.0525355709595
    - type: manhattan_recall
      value: 84.66214636427402
    - type: max_accuracy
      value: 72.8322309079976
    - type: max_ap
      value: 81.34356949111096
    - type: max_f1
      value: 74.88546438983758
  - task:
      type: Retrieval
    dataset:
      name: MTEB CovidRetrieval
      type: C-MTEB/CovidRetrieval
      config: default
      split: dev
      revision: None
    metrics:
    - type: map_at_1
      value: 54.847
    - type: map_at_10
      value: 63.736000000000004
    - type: map_at_100
      value: 64.302
    - type: map_at_1000
      value: 64.319
    - type: map_at_3
      value: 61.565000000000005
    - type: map_at_5
      value: 62.671
    - type: mrr_at_1
      value: 54.900000000000006
    - type: mrr_at_10
      value: 63.744
    - type: mrr_at_100
      value: 64.287
    - type: mrr_at_1000
      value: 64.30399999999999
    - type: mrr_at_3
      value: 61.590999999999994
    - type: mrr_at_5
      value: 62.724000000000004
    - type: ndcg_at_1
      value: 55.005
    - type: ndcg_at_10
      value: 68.142
    - type: ndcg_at_100
      value: 70.95
    - type: ndcg_at_1000
      value: 71.40100000000001
    - type: ndcg_at_3
      value: 63.641999999999996
    - type: ndcg_at_5
      value: 65.62599999999999
    - type: precision_at_1
      value: 55.005
    - type: precision_at_10
      value: 8.272
    - type: precision_at_100
      value: 0.963
    - type: precision_at_1000
      value: 0.1
    - type: precision_at_3
      value: 23.288
    - type: precision_at_5
      value: 14.963000000000001
    - type: recall_at_1
      value: 54.847
    - type: recall_at_10
      value: 81.955
    - type: recall_at_100
      value: 95.258
    - type: recall_at_1000
      value: 98.84100000000001
    - type: recall_at_3
      value: 69.547
    - type: recall_at_5
      value: 74.315
  - task:
      type: Retrieval
    dataset:
      name: MTEB DBPedia
      type: dbpedia-entity
      config: default
      split: test
      revision: None
    metrics:
    - type: map_at_1
      value: 7.2620000000000005
    - type: map_at_10
      value: 15.196000000000002
    - type: map_at_100
      value: 19.454
    - type: map_at_1000
      value: 20.445
    - type: map_at_3
      value: 11.532
    - type: map_at_5
      value: 13.053999999999998
    - type: mrr_at_1
      value: 57.49999999999999
    - type: mrr_at_10
      value: 66.661
    - type: mrr_at_100
      value: 67.086
    - type: mrr_at_1000
      value: 67.105
    - type: mrr_at_3
      value: 64.625
    - type: mrr_at_5
      value: 65.962
    - type: ndcg_at_1
      value: 46.125
    - type: ndcg_at_10
      value: 32.609
    - type: ndcg_at_100
      value: 34.611999999999995
    - type: ndcg_at_1000
      value: 40.836
    - type: ndcg_at_3
      value: 37.513000000000005
    - type: ndcg_at_5
      value: 34.699999999999996
    - type: precision_at_1
      value: 57.49999999999999
    - type: precision_at_10
      value: 24.975
    - type: precision_at_100
      value: 6.9830000000000005
    - type: precision_at_1000
      value: 1.505
    - type: precision_at_3
      value: 40.75
    - type: precision_at_5
      value: 33.2
    - type: recall_at_1
      value: 7.2620000000000005
    - type: recall_at_10
      value: 20.341
    - type: recall_at_100
      value: 38.690999999999995
    - type: recall_at_1000
      value: 58.879000000000005
    - type: recall_at_3
      value: 12.997
    - type: recall_at_5
      value: 15.628
  - task:
      type: Retrieval
    dataset:
      name: MTEB DuRetrieval
      type: C-MTEB/DuRetrieval
      config: default
      split: dev
      revision: None
    metrics:
    - type: map_at_1
      value: 20.86
    - type: map_at_10
      value: 62.28
    - type: map_at_100
      value: 65.794
    - type: map_at_1000
      value: 65.903
    - type: map_at_3
      value: 42.616
    - type: map_at_5
      value: 53.225
    - type: mrr_at_1
      value: 76.75
    - type: mrr_at_10
      value: 83.387
    - type: mrr_at_100
      value: 83.524
    - type: mrr_at_1000
      value: 83.531
    - type: mrr_at_3
      value: 82.592
    - type: mrr_at_5
      value: 83.07900000000001
    - type: ndcg_at_1
      value: 76.75
    - type: ndcg_at_10
      value: 72.83500000000001
    - type: ndcg_at_100
      value: 77.839
    - type: ndcg_at_1000
      value: 78.976
    - type: ndcg_at_3
      value: 70.977
    - type: ndcg_at_5
      value: 69.419
    - type: precision_at_1
      value: 76.75
    - type: precision_at_10
      value: 35.825
    - type: precision_at_100
      value: 4.507
    - type: precision_at_1000
      value: 0.47800000000000004
    - type: precision_at_3
      value: 63.733
    - type: precision_at_5
      value: 53.44
    - type: recall_at_1
      value: 20.86
    - type: recall_at_10
      value: 75.115
    - type: recall_at_100
      value: 90.47699999999999
    - type: recall_at_1000
      value: 96.304
    - type: recall_at_3
      value: 45.976
    - type: recall_at_5
      value: 59.971
  - task:
      type: Retrieval
    dataset:
      name: MTEB EcomRetrieval
      type: C-MTEB/EcomRetrieval
      config: default
      split: dev
      revision: None
    metrics:
    - type: map_at_1
      value: 37.8
    - type: map_at_10
      value: 47.154
    - type: map_at_100
      value: 48.012
    - type: map_at_1000
      value: 48.044
    - type: map_at_3
      value: 44.667
    - type: map_at_5
      value: 45.992
    - type: mrr_at_1
      value: 37.8
    - type: mrr_at_10
      value: 47.154
    - type: mrr_at_100
      value: 48.012
    - type: mrr_at_1000
      value: 48.044
    - type: mrr_at_3
      value: 44.667
    - type: mrr_at_5
      value: 45.992
    - type: ndcg_at_1
      value: 37.8
    - type: ndcg_at_10
      value: 52.025
    - type: ndcg_at_100
      value: 56.275
    - type: ndcg_at_1000
      value: 57.174
    - type: ndcg_at_3
      value: 46.861999999999995
    - type: ndcg_at_5
      value: 49.229
    - type: precision_at_1
      value: 37.8
    - type: precision_at_10
      value: 6.75
    - type: precision_at_100
      value: 0.8750000000000001
    - type: precision_at_1000
      value: 0.095
    - type: precision_at_3
      value: 17.732999999999997
    - type: precision_at_5
      value: 11.78
    - type: recall_at_1
      value: 37.8
    - type: recall_at_10
      value: 67.5
    - type: recall_at_100
      value: 87.5
    - type: recall_at_1000
      value: 94.69999999999999
    - type: recall_at_3
      value: 53.2
    - type: recall_at_5
      value: 58.9
  - task:
      type: Classification
    dataset:
      name: MTEB EmotionClassification
      type: mteb/emotion
      config: default
      split: test
      revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
    metrics:
    - type: accuracy
      value: 46.845
    - type: f1
      value: 42.70952656074019
  - task:
      type: Retrieval
    dataset:
      name: MTEB FEVER
      type: fever
      config: default
      split: test
      revision: None
    metrics:
    - type: map_at_1
      value: 50.058
    - type: map_at_10
      value: 61.295
    - type: map_at_100
      value: 61.82
    - type: map_at_1000
      value: 61.843
    - type: map_at_3
      value: 58.957
    - type: map_at_5
      value: 60.467999999999996
    - type: mrr_at_1
      value: 54.05
    - type: mrr_at_10
      value: 65.52900000000001
    - type: mrr_at_100
      value: 65.984
    - type: mrr_at_1000
      value: 65.999
    - type: mrr_at_3
      value: 63.286
    - type: mrr_at_5
      value: 64.777
    - type: ndcg_at_1
      value: 54.05
    - type: ndcg_at_10
      value: 67.216
    - type: ndcg_at_100
      value: 69.594
    - type: ndcg_at_1000
      value: 70.13000000000001
    - type: ndcg_at_3
      value: 62.778999999999996
    - type: ndcg_at_5
      value: 65.36
    - type: precision_at_1
      value: 54.05
    - type: precision_at_10
      value: 8.924
    - type: precision_at_100
      value: 1.019
    - type: precision_at_1000
      value: 0.108
    - type: precision_at_3
      value: 25.218
    - type: precision_at_5
      value: 16.547
    - type: recall_at_1
      value: 50.058
    - type: recall_at_10
      value: 81.39699999999999
    - type: recall_at_100
      value: 92.022
    - type: recall_at_1000
      value: 95.877
    - type: recall_at_3
      value: 69.485
    - type: recall_at_5
      value: 75.833
  - task:
      type: Retrieval
    dataset:
      name: MTEB FiQA2018
      type: fiqa
      config: default
      split: test
      revision: None
    metrics:
    - type: map_at_1
      value: 15.078
    - type: map_at_10
      value: 24.162
    - type: map_at_100
      value: 25.818
    - type: map_at_1000
      value: 26.009
    - type: map_at_3
      value: 20.706
    - type: map_at_5
      value: 22.542
    - type: mrr_at_1
      value: 30.709999999999997
    - type: mrr_at_10
      value: 38.828
    - type: mrr_at_100
      value: 39.794000000000004
    - type: mrr_at_1000
      value: 39.843
    - type: mrr_at_3
      value: 36.163000000000004
    - type: mrr_at_5
      value: 37.783
    - type: ndcg_at_1
      value: 30.709999999999997
    - type: ndcg_at_10
      value: 31.290000000000003
    - type: ndcg_at_100
      value: 38.051
    - type: ndcg_at_1000
      value: 41.487
    - type: ndcg_at_3
      value: 27.578999999999997
    - type: ndcg_at_5
      value: 28.799000000000003
    - type: precision_at_1
      value: 30.709999999999997
    - type: precision_at_10
      value: 8.92
    - type: precision_at_100
      value: 1.5599999999999998
    - type: precision_at_1000
      value: 0.219
    - type: precision_at_3
      value: 18.416
    - type: precision_at_5
      value: 13.827
    - type: recall_at_1
      value: 15.078
    - type: recall_at_10
      value: 37.631
    - type: recall_at_100
      value: 63.603
    - type: recall_at_1000
      value: 84.121
    - type: recall_at_3
      value: 24.438
    - type: recall_at_5
      value: 29.929
  - task:
      type: Retrieval
    dataset:
      name: MTEB HotpotQA
      type: hotpotqa
      config: default
      split: test
      revision: None
    metrics:
    - type: map_at_1
      value: 31.202
    - type: map_at_10
      value: 42.653
    - type: map_at_100
      value: 43.411
    - type: map_at_1000
      value: 43.479
    - type: map_at_3
      value: 40.244
    - type: map_at_5
      value: 41.736000000000004
    - type: mrr_at_1
      value: 62.404
    - type: mrr_at_10
      value: 69.43599999999999
    - type: mrr_at_100
      value: 69.788
    - type: mrr_at_1000
      value: 69.809
    - type: mrr_at_3
      value: 68.12700000000001
    - type: mrr_at_5
      value: 68.961
    - type: ndcg_at_1
      value: 62.404
    - type: ndcg_at_10
      value: 51.665000000000006
    - type: ndcg_at_100
      value: 54.623
    - type: ndcg_at_1000
      value: 56.154
    - type: ndcg_at_3
      value: 47.861
    - type: ndcg_at_5
      value: 49.968
    - type: precision_at_1
      value: 62.404
    - type: precision_at_10
      value: 10.57
    - type: precision_at_100
      value: 1.2890000000000001
    - type: precision_at_1000
      value: 0.149
    - type: precision_at_3
      value: 29.624
    - type: precision_at_5
      value: 19.441
    - type: recall_at_1
      value: 31.202
    - type: recall_at_10
      value: 52.849000000000004
    - type: recall_at_100
      value: 64.47
    - type: recall_at_1000
      value: 74.74
    - type: recall_at_3
      value: 44.436
    - type: recall_at_5
      value: 48.602000000000004
  - task:
      type: Classification
    dataset:
      name: MTEB IFlyTek
      type: C-MTEB/IFlyTek-classification
      config: default
      split: validation
      revision: None
    metrics:
    - type: accuracy
      value: 43.51673720661793
    - type: f1
      value: 35.81126468608715
  - task:
      type: Classification
    dataset:
      name: MTEB ImdbClassification
      type: mteb/imdb
      config: default
      split: test
      revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
    metrics:
    - type: accuracy
      value: 74.446
    - type: ap
      value: 68.71359666500074
    - type: f1
      value: 74.32080431056023
  - task:
      type: Classification
    dataset:
      name: MTEB JDReview
      type: C-MTEB/JDReview-classification
      config: default
      split: test
      revision: None
    metrics:
    - type: accuracy
      value: 81.08818011257036
    - type: ap
      value: 43.68599141287235
    - type: f1
      value: 74.37787266346157
  - task:
      type: STS
    dataset:
      name: MTEB LCQMC
      type: C-MTEB/LCQMC
      config: default
      split: test
      revision: None
    metrics:
    - type: cos_sim_pearson
      value: 65.9116523539515
    - type: cos_sim_spearman
      value: 72.79966865646485
    - type: euclidean_pearson
      value: 71.4995885009818
    - type: euclidean_spearman
      value: 72.91799793240196
    - type: manhattan_pearson
      value: 71.83065174544116
    - type: manhattan_spearman
      value: 73.22568775268935
  - task:
      type: Retrieval
    dataset:
      name: MTEB MMarcoRetrieval
      type: C-MTEB/MMarcoRetrieval
      config: default
      split: dev
      revision: None
    metrics:
    - type: map_at_1
      value: 61.79900000000001
    - type: map_at_10
      value: 70.814
    - type: map_at_100
      value: 71.22500000000001
    - type: map_at_1000
      value: 71.243
    - type: map_at_3
      value: 68.795
    - type: map_at_5
      value: 70.12
    - type: mrr_at_1
      value: 63.910999999999994
    - type: mrr_at_10
      value: 71.437
    - type: mrr_at_100
      value: 71.807
    - type: mrr_at_1000
      value: 71.82300000000001
    - type: mrr_at_3
      value: 69.65599999999999
    - type: mrr_at_5
      value: 70.821
    - type: ndcg_at_1
      value: 63.910999999999994
    - type: ndcg_at_10
      value: 74.664
    - type: ndcg_at_100
      value: 76.545
    - type: ndcg_at_1000
      value: 77.00099999999999
    - type: ndcg_at_3
      value: 70.838
    - type: ndcg_at_5
      value: 73.076
    - type: precision_at_1
      value: 63.910999999999994
    - type: precision_at_10
      value: 9.139999999999999
    - type: precision_at_100
      value: 1.008
    - type: precision_at_1000
      value: 0.105
    - type: precision_at_3
      value: 26.729000000000003
    - type: precision_at_5
      value: 17.232
    - type: recall_at_1
      value: 61.79900000000001
    - type: recall_at_10
      value: 85.941
    - type: recall_at_100
      value: 94.514
    - type: recall_at_1000
      value: 98.04899999999999
    - type: recall_at_3
      value: 75.85499999999999
    - type: recall_at_5
      value: 81.15599999999999
  - task:
      type: Retrieval
    dataset:
      name: MTEB MSMARCO
      type: msmarco
      config: default
      split: dev
      revision: None
    metrics:
    - type: map_at_1
      value: 20.079
    - type: map_at_10
      value: 31.735000000000003
    - type: map_at_100
      value: 32.932
    - type: map_at_1000
      value: 32.987
    - type: map_at_3
      value: 28.216
    - type: map_at_5
      value: 30.127
    - type: mrr_at_1
      value: 20.688000000000002
    - type: mrr_at_10
      value: 32.357
    - type: mrr_at_100
      value: 33.487
    - type: mrr_at_1000
      value: 33.536
    - type: mrr_at_3
      value: 28.887
    - type: mrr_at_5
      value: 30.764000000000003
    - type: ndcg_at_1
      value: 20.688000000000002
    - type: ndcg_at_10
      value: 38.266
    - type: ndcg_at_100
      value: 44.105
    - type: ndcg_at_1000
      value: 45.554
    - type: ndcg_at_3
      value: 31.046000000000003
    - type: ndcg_at_5
      value: 34.44
    - type: precision_at_1
      value: 20.688000000000002
    - type: precision_at_10
      value: 6.0920000000000005
    - type: precision_at_100
      value: 0.903
    - type: precision_at_1000
      value: 0.10300000000000001
    - type: precision_at_3
      value: 13.338
    - type: precision_at_5
      value: 9.725
    - type: recall_at_1
      value: 20.079
    - type: recall_at_10
      value: 58.315
    - type: recall_at_100
      value: 85.50999999999999
    - type: recall_at_1000
      value: 96.72800000000001
    - type: recall_at_3
      value: 38.582
    - type: recall_at_5
      value: 46.705999999999996
  - task:
      type: Classification
    dataset:
      name: MTEB MTOPDomainClassification (en)
      type: mteb/mtop_domain
      config: en
      split: test
      revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
    metrics:
    - type: accuracy
      value: 92.18422252621978
    - type: f1
      value: 91.82800582693794
  - task:
      type: Classification
    dataset:
      name: MTEB MTOPDomainClassification (de)
      type: mteb/mtop_domain
      config: de
      split: test
      revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
    metrics:
    - type: accuracy
      value: 74.63792617638771
    - type: f1
      value: 73.13966942566492
  - task:
      type: Classification
    dataset:
      name: MTEB MTOPDomainClassification (es)
      type: mteb/mtop_domain
      config: es
      split: test
      revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
    metrics:
    - type: accuracy
      value: 92.07138092061375
    - type: f1
      value: 91.58983799467875
  - task:
      type: Classification
    dataset:
      name: MTEB MTOPDomainClassification (fr)
      type: mteb/mtop_domain
      config: fr
      split: test
      revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
    metrics:
    - type: accuracy
      value: 89.19824616348262
    - type: f1
      value: 89.06796384273765
  - task:
      type: Classification
    dataset:
      name: MTEB MTOPDomainClassification (hi)
      type: mteb/mtop_domain
      config: hi
      split: test
      revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
    metrics:
    - type: accuracy
      value: 88.54069558981713
    - type: f1
      value: 87.83448658971352
  - task:
      type: Classification
    dataset:
      name: MTEB MTOPDomainClassification (th)
      type: mteb/mtop_domain
      config: th
      split: test
      revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
    metrics:
    - type: accuracy
      value: 55.63471971066908
    - type: f1
      value: 53.84017845089774
  - task:
      type: Classification
    dataset:
      name: MTEB MTOPIntentClassification (en)
      type: mteb/mtop_intent
      config: en
      split: test
      revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
    metrics:
    - type: accuracy
      value: 70.29867761057912
    - type: f1
      value: 52.76509068762125
  - task:
      type: Classification
    dataset:
      name: MTEB MTOPIntentClassification (de)
      type: mteb/mtop_intent
      config: de
      split: test
      revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
    metrics:
    - type: accuracy
      value: 53.39814032121725
    - type: f1
      value: 34.27161745913036
  - task:
      type: Classification
    dataset:
      name: MTEB MTOPIntentClassification (es)
      type: mteb/mtop_intent
      config: es
      split: test
      revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
    metrics:
    - type: accuracy
      value: 71.33422281521014
    - type: f1
      value: 52.171603212251384
  - task:
      type: Classification
    dataset:
      name: MTEB MTOPIntentClassification (fr)
      type: mteb/mtop_intent
      config: fr
      split: test
      revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
    metrics:
    - type: accuracy
      value: 66.6019417475728
    - type: f1
      value: 49.212091278323975
  - task:
      type: Classification
    dataset:
      name: MTEB MTOPIntentClassification (hi)
      type: mteb/mtop_intent
      config: hi
      split: test
      revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
    metrics:
    - type: accuracy
      value: 66.73001075654356
    - type: f1
      value: 45.97084834271623
  - task:
      type: Classification
    dataset:
      name: MTEB MTOPIntentClassification (th)
      type: mteb/mtop_intent
      config: th
      split: test
      revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
    metrics:
    - type: accuracy
      value: 42.13381555153707
    - type: f1
      value: 27.222558885215964
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveIntentClassification (af)
      type: mteb/amazon_massive_intent
      config: af
      split: test
      revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
    metrics:
    - type: accuracy
      value: 44.97982515131137
    - type: f1
      value: 43.08686679862984
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveIntentClassification (am)
      type: mteb/amazon_massive_intent
      config: am
      split: test
      revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
    metrics:
    - type: accuracy
      value: 25.353059852051107
    - type: f1
      value: 24.56465252790922
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveIntentClassification (ar)
      type: mteb/amazon_massive_intent
      config: ar
      split: test
      revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
    metrics:
    - type: accuracy
      value: 57.078009414929376
    - type: f1
      value: 54.933541125458795
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveIntentClassification (az)
      type: mteb/amazon_massive_intent
      config: az
      split: test
      revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
    metrics:
    - type: accuracy
      value: 39.10558170813719
    - type: f1
      value: 39.15270496151374
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveIntentClassification (bn)
      type: mteb/amazon_massive_intent
      config: bn
      split: test
      revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
    metrics:
    - type: accuracy
      value: 61.368527236045736
    - type: f1
      value: 58.65381984021665
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveIntentClassification (cy)
      type: mteb/amazon_massive_intent
      config: cy
      split: test
      revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
    metrics:
    - type: accuracy
      value: 42.96906523201076
    - type: f1
      value: 41.88085083446726
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveIntentClassification (da)
      type: mteb/amazon_massive_intent
      config: da
      split: test
      revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
    metrics:
    - type: accuracy
      value: 49.54270342972428
    - type: f1
      value: 48.44206747172913
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveIntentClassification (de)
      type: mteb/amazon_massive_intent
      config: de
      split: test
      revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
    metrics:
    - type: accuracy
      value: 50.93140551445864
    - type: f1
      value: 47.40396853548677
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveIntentClassification (el)
      type: mteb/amazon_massive_intent
      config: el
      split: test
      revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
    metrics:
    - type: accuracy
      value: 40.09414929388029
    - type: f1
      value: 38.27158057191927
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveIntentClassification (en)
      type: mteb/amazon_massive_intent
      config: en
      split: test
      revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
    metrics:
    - type: accuracy
      value: 67.93207800941494
    - type: f1
      value: 66.50282035579518
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveIntentClassification (es)
      type: mteb/amazon_massive_intent
      config: es
      split: test
      revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
    metrics:
    - type: accuracy
      value: 63.81304640215198
    - type: f1
      value: 62.51979490279083
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveIntentClassification (fa)
      type: mteb/amazon_massive_intent
      config: fa
      split: test
      revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
    metrics:
    - type: accuracy
      value: 49.05850706119704
    - type: f1
      value: 47.49872899848797
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveIntentClassification (fi)
      type: mteb/amazon_massive_intent
      config: fi
      split: test
      revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
    metrics:
    - type: accuracy
      value: 42.57901815736382
    - type: f1
      value: 40.386069905109956
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveIntentClassification (fr)
      type: mteb/amazon_massive_intent
      config: fr
      split: test
      revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
    metrics:
    - type: accuracy
      value: 65.33960995292534
    - type: f1
      value: 63.96475759829612
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveIntentClassification (he)
      type: mteb/amazon_massive_intent
      config: he
      split: test
      revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
    metrics:
    - type: accuracy
      value: 37.14862138533962
    - type: f1
      value: 35.954583318470384
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveIntentClassification (hi)
      type: mteb/amazon_massive_intent
      config: hi
      split: test
      revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
    metrics:
    - type: accuracy
      value: 62.88836583725621
    - type: f1
      value: 61.139092331276856
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveIntentClassification (hu)
      type: mteb/amazon_massive_intent
      config: hu
      split: test
      revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
    metrics:
    - type: accuracy
      value: 41.62071284465366
    - type: f1
      value: 40.23779890980788
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveIntentClassification (hy)
      type: mteb/amazon_massive_intent
      config: hy
      split: test
      revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
    metrics:
    - type: accuracy
      value: 32.982515131136516
    - type: f1
      value: 31.82828709111086
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveIntentClassification (id)
      type: mteb/amazon_massive_intent
      config: id
      split: test
      revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
    metrics:
    - type: accuracy
      value: 62.11499663752521
    - type: f1
      value: 60.307651330689716
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveIntentClassification (is)
      type: mteb/amazon_massive_intent
      config: is
      split: test
      revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
    metrics:
    - type: accuracy
      value: 41.039004707464684
    - type: f1
      value: 39.531615524370686
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveIntentClassification (it)
      type: mteb/amazon_massive_intent
      config: it
      split: test
      revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
    metrics:
    - type: accuracy
      value: 55.8338937457969
    - type: f1
      value: 54.86425916837068
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveIntentClassification (ja)
      type: mteb/amazon_massive_intent
      config: ja
      split: test
      revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
    metrics:
    - type: accuracy
      value: 58.83322125084061
    - type: f1
      value: 56.52595986400214
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveIntentClassification (jv)
      type: mteb/amazon_massive_intent
      config: jv
      split: test
      revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
    metrics:
    - type: accuracy
      value: 49.31069266980497
    - type: f1
      value: 47.241381065322265
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveIntentClassification (ka)
      type: mteb/amazon_massive_intent
      config: ka
      split: test
      revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
    metrics:
    - type: accuracy
      value: 26.432414256893072
    - type: f1
      value: 25.787833437725848
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveIntentClassification (km)
      type: mteb/amazon_massive_intent
      config: km
      split: test
      revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
    metrics:
    - type: accuracy
      value: 28.76933422999327
    - type: f1
      value: 27.34778980866226
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveIntentClassification (kn)
      type: mteb/amazon_massive_intent
      config: kn
      split: test
      revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
    metrics:
    - type: accuracy
      value: 52.33019502353733
    - type: f1
      value: 49.49897965390079
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveIntentClassification (ko)
      type: mteb/amazon_massive_intent
      config: ko
      split: test
      revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
    metrics:
    - type: accuracy
      value: 46.930060524546064
    - type: f1
      value: 44.71215467580226
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveIntentClassification (lv)
      type: mteb/amazon_massive_intent
      config: lv
      split: test
      revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
    metrics:
    - type: accuracy
      value: 44.25689307330195
    - type: f1
      value: 43.61087006714549
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveIntentClassification (ml)
      type: mteb/amazon_massive_intent
      config: ml
      split: test
      revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
    metrics:
    - type: accuracy
      value: 57.74714189643577
    - type: f1
      value: 54.571431590522735
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveIntentClassification (mn)
      type: mteb/amazon_massive_intent
      config: mn
      split: test
      revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
    metrics:
    - type: accuracy
      value: 33.30531271015468
    - type: f1
      value: 33.4982889160085
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveIntentClassification (ms)
      type: mteb/amazon_massive_intent
      config: ms
      split: test
      revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
    metrics:
    - type: accuracy
      value: 55.699394754539334
    - type: f1
      value: 54.00478534026828
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveIntentClassification (my)
      type: mteb/amazon_massive_intent
      config: my
      split: test
      revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
    metrics:
    - type: accuracy
      value: 27.38735709482179
    - type: f1
      value: 26.139112212692474
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveIntentClassification (nb)
      type: mteb/amazon_massive_intent
      config: nb
      split: test
      revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
    metrics:
    - type: accuracy
      value: 46.18359112306658
    - type: f1
      value: 45.298479798547106
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveIntentClassification (nl)
      type: mteb/amazon_massive_intent
      config: nl
      split: test
      revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
    metrics:
    - type: accuracy
      value: 48.33557498318763
    - type: f1
      value: 46.102865846786294
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveIntentClassification (pl)
      type: mteb/amazon_massive_intent
      config: pl
      split: test
      revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
    metrics:
    - type: accuracy
      value: 44.46872898453261
    - type: f1
      value: 42.43443803309795
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveIntentClassification (pt)
      type: mteb/amazon_massive_intent
      config: pt
      split: test
      revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
    metrics:
    - type: accuracy
      value: 64.74445191661063
    - type: f1
      value: 63.453679590322174
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveIntentClassification (ro)
      type: mteb/amazon_massive_intent
      config: ro
      split: test
      revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
    metrics:
    - type: accuracy
      value: 48.41291190316072
    - type: f1
      value: 47.14401920664497
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveIntentClassification (ru)
      type: mteb/amazon_massive_intent
      config: ru
      split: test
      revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
    metrics:
    - type: accuracy
      value: 52.989240080699396
    - type: f1
      value: 50.91931775407477
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveIntentClassification (sl)
      type: mteb/amazon_massive_intent
      config: sl
      split: test
      revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
    metrics:
    - type: accuracy
      value: 44.771351714862135
    - type: f1
      value: 42.90054169209577
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveIntentClassification (sq)
      type: mteb/amazon_massive_intent
      config: sq
      split: test
      revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
    metrics:
    - type: accuracy
      value: 45.45393409549428
    - type: f1
      value: 45.027761715583146
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveIntentClassification (sv)
      type: mteb/amazon_massive_intent
      config: sv
      split: test
      revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
    metrics:
    - type: accuracy
      value: 45.67585743106927
    - type: f1
      value: 44.45608727957947
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveIntentClassification (sw)
      type: mteb/amazon_massive_intent
      config: sw
      split: test
      revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
    metrics:
    - type: accuracy
      value: 46.45595158036314
    - type: f1
      value: 44.70548836690419
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveIntentClassification (ta)
      type: mteb/amazon_massive_intent
      config: ta
      split: test
      revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
    metrics:
    - type: accuracy
      value: 55.4640215198386
    - type: f1
      value: 52.28532276735651
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveIntentClassification (te)
      type: mteb/amazon_massive_intent
      config: te
      split: test
      revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
    metrics:
    - type: accuracy
      value: 51.408876933422995
    - type: f1
      value: 48.86454236156204
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveIntentClassification (th)
      type: mteb/amazon_massive_intent
      config: th
      split: test
      revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
    metrics:
    - type: accuracy
      value: 39.19636852723604
    - type: f1
      value: 38.88247037601754
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveIntentClassification (tl)
      type: mteb/amazon_massive_intent
      config: tl
      split: test
      revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
    metrics:
    - type: accuracy
      value: 48.53396099529254
    - type: f1
      value: 46.961492802320656
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveIntentClassification (tr)
      type: mteb/amazon_massive_intent
      config: tr
      split: test
      revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
    metrics:
    - type: accuracy
      value: 39.509078681909884
    - type: f1
      value: 39.30973355583357
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveIntentClassification (ur)
      type: mteb/amazon_massive_intent
      config: ur
      split: test
      revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
    metrics:
    - type: accuracy
      value: 54.717552118359116
    - type: f1
      value: 52.08348704897728
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveIntentClassification (vi)
      type: mteb/amazon_massive_intent
      config: vi
      split: test
      revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
    metrics:
    - type: accuracy
      value: 62.007397444519164
    - type: f1
      value: 60.57772322803523
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveIntentClassification (zh-CN)
      type: mteb/amazon_massive_intent
      config: zh-CN
      split: test
      revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
    metrics:
    - type: accuracy
      value: 66.906523201076
    - type: f1
      value: 65.2730417732602
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveIntentClassification (zh-TW)
      type: mteb/amazon_massive_intent
      config: zh-TW
      split: test
      revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
    metrics:
    - type: accuracy
      value: 62.562205783456626
    - type: f1
      value: 62.3944953225828
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveScenarioClassification (af)
      type: mteb/amazon_massive_scenario
      config: af
      split: test
      revision: 7d571f92784cd94a019292a1f45445077d0ef634
    metrics:
    - type: accuracy
      value: 50.46738399462004
    - type: f1
      value: 48.277337351043066
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveScenarioClassification (am)
      type: mteb/amazon_massive_scenario
      config: am
      split: test
      revision: 7d571f92784cd94a019292a1f45445077d0ef634
    metrics:
    - type: accuracy
      value: 27.222595830531272
    - type: f1
      value: 26.15959037949326
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveScenarioClassification (ar)
      type: mteb/amazon_massive_scenario
      config: ar
      split: test
      revision: 7d571f92784cd94a019292a1f45445077d0ef634
    metrics:
    - type: accuracy
      value: 65.4303967720242
    - type: f1
      value: 65.58227814316872
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveScenarioClassification (az)
      type: mteb/amazon_massive_scenario
      config: az
      split: test
      revision: 7d571f92784cd94a019292a1f45445077d0ef634
    metrics:
    - type: accuracy
      value: 40.736381977135174
    - type: f1
      value: 39.85702036251076
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveScenarioClassification (bn)
      type: mteb/amazon_massive_scenario
      config: bn
      split: test
      revision: 7d571f92784cd94a019292a1f45445077d0ef634
    metrics:
    - type: accuracy
      value: 67.64626765299259
    - type: f1
      value: 67.12298813657769
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveScenarioClassification (cy)
      type: mteb/amazon_massive_scenario
      config: cy
      split: test
      revision: 7d571f92784cd94a019292a1f45445077d0ef634
    metrics:
    - type: accuracy
      value: 43.940820443846675
    - type: f1
      value: 41.63412499587839
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveScenarioClassification (da)
      type: mteb/amazon_massive_scenario
      config: da
      split: test
      revision: 7d571f92784cd94a019292a1f45445077d0ef634
    metrics:
    - type: accuracy
      value: 52.5252185608608
    - type: f1
      value: 50.25821961669483
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveScenarioClassification (de)
      type: mteb/amazon_massive_scenario
      config: de
      split: test
      revision: 7d571f92784cd94a019292a1f45445077d0ef634
    metrics:
    - type: accuracy
      value: 56.67114996637525
    - type: f1
      value: 54.204117831814244
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveScenarioClassification (el)
      type: mteb/amazon_massive_scenario
      config: el
      split: test
      revision: 7d571f92784cd94a019292a1f45445077d0ef634
    metrics:
    - type: accuracy
      value: 41.8123739071957
    - type: f1
      value: 40.25676895490678
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveScenarioClassification (en)
      type: mteb/amazon_massive_scenario
      config: en
      split: test
      revision: 7d571f92784cd94a019292a1f45445077d0ef634
    metrics:
    - type: accuracy
      value: 75.71956960322798
    - type: f1
      value: 75.95126212201126
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveScenarioClassification (es)
      type: mteb/amazon_massive_scenario
      config: es
      split: test
      revision: 7d571f92784cd94a019292a1f45445077d0ef634
    metrics:
    - type: accuracy
      value: 71.7787491593813
    - type: f1
      value: 71.90678548502461
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveScenarioClassification (fa)
      type: mteb/amazon_massive_scenario
      config: fa
      split: test
      revision: 7d571f92784cd94a019292a1f45445077d0ef634
    metrics:
    - type: accuracy
      value: 49.95965030262274
    - type: f1
      value: 48.625859921623515
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveScenarioClassification (fi)
      type: mteb/amazon_massive_scenario
      config: fi
      split: test
      revision: 7d571f92784cd94a019292a1f45445077d0ef634
    metrics:
    - type: accuracy
      value: 41.005379959650305
    - type: f1
      value: 38.25957953711836
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveScenarioClassification (fr)
      type: mteb/amazon_massive_scenario
      config: fr
      split: test
      revision: 7d571f92784cd94a019292a1f45445077d0ef634
    metrics:
    - type: accuracy
      value: 71.99058507061198
    - type: f1
      value: 72.30034867942928
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveScenarioClassification (he)
      type: mteb/amazon_massive_scenario
      config: he
      split: test
      revision: 7d571f92784cd94a019292a1f45445077d0ef634
    metrics:
    - type: accuracy
      value: 36.691324815063886
    - type: f1
      value: 35.09762112518494
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveScenarioClassification (hi)
      type: mteb/amazon_massive_scenario
      config: hi
      split: test
      revision: 7d571f92784cd94a019292a1f45445077d0ef634
    metrics:
    - type: accuracy
      value: 69.27706792199058
    - type: f1
      value: 68.96935505580095
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveScenarioClassification (hu)
      type: mteb/amazon_massive_scenario
      config: hu
      split: test
      revision: 7d571f92784cd94a019292a1f45445077d0ef634
    metrics:
    - type: accuracy
      value: 44.31405514458642
    - type: f1
      value: 41.75837557089336
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveScenarioClassification (hy)
      type: mteb/amazon_massive_scenario
      config: hy
      split: test
      revision: 7d571f92784cd94a019292a1f45445077d0ef634
    metrics:
    - type: accuracy
      value: 33.63819771351715
    - type: f1
      value: 32.00999199645466
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveScenarioClassification (id)
      type: mteb/amazon_massive_scenario
      config: id
      split: test
      revision: 7d571f92784cd94a019292a1f45445077d0ef634
    metrics:
    - type: accuracy
      value: 68.98117014122394
    - type: f1
      value: 68.48993356947226
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveScenarioClassification (is)
      type: mteb/amazon_massive_scenario
      config: is
      split: test
      revision: 7d571f92784cd94a019292a1f45445077d0ef634
    metrics:
    - type: accuracy
      value: 42.10154673839946
    - type: f1
      value: 39.537580201439035
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveScenarioClassification (it)
      type: mteb/amazon_massive_scenario
      config: it
      split: test
      revision: 7d571f92784cd94a019292a1f45445077d0ef634
    metrics:
    - type: accuracy
      value: 60.27236045729657
    - type: f1
      value: 58.8041857941664
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveScenarioClassification (ja)
      type: mteb/amazon_massive_scenario
      config: ja
      split: test
      revision: 7d571f92784cd94a019292a1f45445077d0ef634
    metrics:
    - type: accuracy
      value: 62.47814391392063
    - type: f1
      value: 61.4800551358116
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveScenarioClassification (jv)
      type: mteb/amazon_massive_scenario
      config: jv
      split: test
      revision: 7d571f92784cd94a019292a1f45445077d0ef634
    metrics:
    - type: accuracy
      value: 54.68392737054473
    - type: f1
      value: 53.28619831432411
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveScenarioClassification (ka)
      type: mteb/amazon_massive_scenario
      config: ka
      split: test
      revision: 7d571f92784cd94a019292a1f45445077d0ef634
    metrics:
    - type: accuracy
      value: 27.215870880968396
    - type: f1
      value: 26.137784395348483
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveScenarioClassification (km)
      type: mteb/amazon_massive_scenario
      config: km
      split: test
      revision: 7d571f92784cd94a019292a1f45445077d0ef634
    metrics:
    - type: accuracy
      value: 32.1385339609953
    - type: f1
      value: 29.886918185071977
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveScenarioClassification (kn)
      type: mteb/amazon_massive_scenario
      config: kn
      split: test
      revision: 7d571f92784cd94a019292a1f45445077d0ef634
    metrics:
    - type: accuracy
      value: 57.94889038332213
    - type: f1
      value: 57.19252000109654
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveScenarioClassification (ko)
      type: mteb/amazon_massive_scenario
      config: ko
      split: test
      revision: 7d571f92784cd94a019292a1f45445077d0ef634
    metrics:
    - type: accuracy
      value: 47.94552790854068
    - type: f1
      value: 46.21337507975437
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveScenarioClassification (lv)
      type: mteb/amazon_massive_scenario
      config: lv
      split: test
      revision: 7d571f92784cd94a019292a1f45445077d0ef634
    metrics:
    - type: accuracy
      value: 42.75722932078009
    - type: f1
      value: 40.62195245815035
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveScenarioClassification (ml)
      type: mteb/amazon_massive_scenario
      config: ml
      split: test
      revision: 7d571f92784cd94a019292a1f45445077d0ef634
    metrics:
    - type: accuracy
      value: 62.84129119031607
    - type: f1
      value: 62.56205475932971
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveScenarioClassification (mn)
      type: mteb/amazon_massive_scenario
      config: mn
      split: test
      revision: 7d571f92784cd94a019292a1f45445077d0ef634
    metrics:
    - type: accuracy
      value: 33.21116341627438
    - type: f1
      value: 32.231827617771046
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveScenarioClassification (ms)
      type: mteb/amazon_massive_scenario
      config: ms
      split: test
      revision: 7d571f92784cd94a019292a1f45445077d0ef634
    metrics:
    - type: accuracy
      value: 62.56893073301949
    - type: f1
      value: 60.94616552257348
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveScenarioClassification (my)
      type: mteb/amazon_massive_scenario
      config: my
      split: test
      revision: 7d571f92784cd94a019292a1f45445077d0ef634
    metrics:
    - type: accuracy
      value: 28.8399462004035
    - type: f1
      value: 27.8503615081592
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveScenarioClassification (nb)
      type: mteb/amazon_massive_scenario
      config: nb
      split: test
      revision: 7d571f92784cd94a019292a1f45445077d0ef634
    metrics:
    - type: accuracy
      value: 50.31607262945528
    - type: f1
      value: 47.993368005418205
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveScenarioClassification (nl)
      type: mteb/amazon_massive_scenario
      config: nl
      split: test
      revision:
7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 52.851378614660405 - type: f1 value: 50.444332639513824 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (pl) type: mteb/amazon_massive_scenario config: pl split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 45.595158036314736 - type: f1 value: 44.241686886064755 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (pt) type: mteb/amazon_massive_scenario config: pt split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 70.24209818426363 - type: f1 value: 70.48109122752663 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ro) type: mteb/amazon_massive_scenario config: ro split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 52.73369199731002 - type: f1 value: 51.14034087602817 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ru) type: mteb/amazon_massive_scenario config: ru split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 54.263618022864826 - type: f1 value: 53.3188846615122 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (sl) type: mteb/amazon_massive_scenario config: sl split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 46.88634835238735 - type: f1 value: 45.257261686960796 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (sq) type: mteb/amazon_massive_scenario config: sq split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 47.15534633490249 - type: f1 value: 45.218807618409215 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (sv) type: mteb/amazon_massive_scenario config: sv split: test revision: 
7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 47.9119031607263 - type: f1 value: 45.96730030717468 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (sw) type: mteb/amazon_massive_scenario config: sw split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 51.20040349697377 - type: f1 value: 49.113423730259214 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ta) type: mteb/amazon_massive_scenario config: ta split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 61.8392737054472 - type: f1 value: 61.65834459536364 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (te) type: mteb/amazon_massive_scenario config: te split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 59.791526563550775 - type: f1 value: 58.2891677685128 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (th) type: mteb/amazon_massive_scenario config: th split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 41.62071284465366 - type: f1 value: 39.591525429243575 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (tl) type: mteb/amazon_massive_scenario config: tl split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 50.46738399462004 - type: f1 value: 49.50612154409957 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (tr) type: mteb/amazon_massive_scenario config: tr split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 43.41291190316072 - type: f1 value: 43.85070302174815 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ur) type: mteb/amazon_massive_scenario config: ur split: test revision: 
7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 60.15131136516476 - type: f1 value: 59.260012738676316 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (vi) type: mteb/amazon_massive_scenario config: vi split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 68.98789509078682 - type: f1 value: 69.86968024553558 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (zh-CN) type: mteb/amazon_massive_scenario config: zh-CN split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 74.72091459314055 - type: f1 value: 74.69866015852224 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (zh-TW) type: mteb/amazon_massive_scenario config: zh-TW split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 71.7014122394082 - type: f1 value: 72.66856729607628 - task: type: Retrieval dataset: name: MTEB MedicalRetrieval type: C-MTEB/MedicalRetrieval config: default split: dev revision: None metrics: - type: map_at_1 value: 35.8 - type: map_at_10 value: 40.949999999999996 - type: map_at_100 value: 41.455999999999996 - type: map_at_1000 value: 41.52 - type: map_at_3 value: 40.033 - type: map_at_5 value: 40.493 - type: mrr_at_1 value: 35.9 - type: mrr_at_10 value: 41.0 - type: mrr_at_100 value: 41.506 - type: mrr_at_1000 value: 41.57 - type: mrr_at_3 value: 40.083 - type: mrr_at_5 value: 40.543 - type: ndcg_at_1 value: 35.8 - type: ndcg_at_10 value: 43.269000000000005 - type: ndcg_at_100 value: 45.974 - type: ndcg_at_1000 value: 47.969 - type: ndcg_at_3 value: 41.339999999999996 - type: ndcg_at_5 value: 42.167 - type: precision_at_1 value: 35.8 - type: precision_at_10 value: 5.050000000000001 - type: precision_at_100 value: 0.637 - type: precision_at_1000 value: 0.08 - type: precision_at_3 value: 15.033 - type: precision_at_5 value: 9.42 - type: recall_at_1 
value: 35.8 - type: recall_at_10 value: 50.5 - type: recall_at_100 value: 63.7 - type: recall_at_1000 value: 80.0 - type: recall_at_3 value: 45.1 - type: recall_at_5 value: 47.099999999999994 - task: type: Clustering dataset: name: MTEB MedrxivClusteringP2P type: mteb/medrxiv-clustering-p2p config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 29.43291218491871 - task: type: Clustering dataset: name: MTEB MedrxivClusteringS2S type: mteb/medrxiv-clustering-s2s config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 28.87018200800912 - task: type: Reranking dataset: name: MTEB MindSmallReranking type: mteb/mind_small config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 30.51003589330728 - type: mrr value: 31.57412386045135 - task: type: Reranking dataset: name: MTEB MMarcoReranking type: C-MTEB/Mmarco-reranking config: default split: dev revision: None metrics: - type: map value: 26.136250989818222 - type: mrr value: 25.00753968253968 - task: type: Classification dataset: name: MTEB MultilingualSentiment type: C-MTEB/MultilingualSentiment-classification config: default split: validation revision: None metrics: - type: accuracy value: 66.32999999999998 - type: f1 value: 66.2828795526323 - task: type: Retrieval dataset: name: MTEB NFCorpus type: nfcorpus config: default split: test revision: None metrics: - type: map_at_1 value: 4.369 - type: map_at_10 value: 11.04 - type: map_at_100 value: 13.850000000000001 - type: map_at_1000 value: 15.290000000000001 - type: map_at_3 value: 8.014000000000001 - type: map_at_5 value: 9.4 - type: mrr_at_1 value: 39.938 - type: mrr_at_10 value: 49.043 - type: mrr_at_100 value: 49.775000000000006 - type: mrr_at_1000 value: 49.803999999999995 - type: mrr_at_3 value: 47.007 - type: mrr_at_5 value: 48.137 - type: ndcg_at_1 value: 37.461 - type: ndcg_at_10 value: 
30.703000000000003 - type: ndcg_at_100 value: 28.686 - type: ndcg_at_1000 value: 37.809 - type: ndcg_at_3 value: 35.697 - type: ndcg_at_5 value: 33.428000000000004 - type: precision_at_1 value: 39.628 - type: precision_at_10 value: 23.250999999999998 - type: precision_at_100 value: 7.553999999999999 - type: precision_at_1000 value: 2.077 - type: precision_at_3 value: 34.159 - type: precision_at_5 value: 29.164 - type: recall_at_1 value: 4.369 - type: recall_at_10 value: 15.024000000000001 - type: recall_at_100 value: 30.642999999999997 - type: recall_at_1000 value: 62.537 - type: recall_at_3 value: 9.504999999999999 - type: recall_at_5 value: 11.89 - task: type: Retrieval dataset: name: MTEB NQ type: nq config: default split: test revision: None metrics: - type: map_at_1 value: 26.161 - type: map_at_10 value: 39.126 - type: map_at_100 value: 40.201 - type: map_at_1000 value: 40.247 - type: map_at_3 value: 35.169 - type: map_at_5 value: 37.403 - type: mrr_at_1 value: 29.403000000000002 - type: mrr_at_10 value: 41.644999999999996 - type: mrr_at_100 value: 42.503 - type: mrr_at_1000 value: 42.535000000000004 - type: mrr_at_3 value: 38.321 - type: mrr_at_5 value: 40.265 - type: ndcg_at_1 value: 29.403000000000002 - type: ndcg_at_10 value: 46.155 - type: ndcg_at_100 value: 50.869 - type: ndcg_at_1000 value: 52.004 - type: ndcg_at_3 value: 38.65 - type: ndcg_at_5 value: 42.400999999999996 - type: precision_at_1 value: 29.403000000000002 - type: precision_at_10 value: 7.743 - type: precision_at_100 value: 1.0410000000000001 - type: precision_at_1000 value: 0.11499999999999999 - type: precision_at_3 value: 17.623 - type: precision_at_5 value: 12.764000000000001 - type: recall_at_1 value: 26.161 - type: recall_at_10 value: 65.155 - type: recall_at_100 value: 85.885 - type: recall_at_1000 value: 94.443 - type: recall_at_3 value: 45.592 - type: recall_at_5 value: 54.234 - task: type: PairClassification dataset: name: MTEB Ocnli type: C-MTEB/OCNLI config: default split: 
validation revision: None metrics: - type: cos_sim_accuracy value: 65.34921494315105 - type: cos_sim_ap value: 68.58191894316523 - type: cos_sim_f1 value: 70.47294418406477 - type: cos_sim_precision value: 59.07142857142858 - type: cos_sim_recall value: 87.32840549102428 - type: dot_accuracy value: 61.93827828911749 - type: dot_ap value: 64.19230712895958 - type: dot_f1 value: 68.30769230769232 - type: dot_precision value: 53.72050816696915 - type: dot_recall value: 93.76979936642027 - type: euclidean_accuracy value: 67.0817541959935 - type: euclidean_ap value: 69.17499163875786 - type: euclidean_f1 value: 71.67630057803468 - type: euclidean_precision value: 61.904761904761905 - type: euclidean_recall value: 85.11087645195353 - type: manhattan_accuracy value: 67.19003789929616 - type: manhattan_ap value: 69.72684682556992 - type: manhattan_f1 value: 71.25396106835673 - type: manhattan_precision value: 62.361331220285265 - type: manhattan_recall value: 83.10454065469905 - type: max_accuracy value: 67.19003789929616 - type: max_ap value: 69.72684682556992 - type: max_f1 value: 71.67630057803468 - task: type: Classification dataset: name: MTEB OnlineShopping type: C-MTEB/OnlineShopping-classification config: default split: test revision: None metrics: - type: accuracy value: 88.35000000000001 - type: ap value: 85.45377991151882 - type: f1 value: 88.33274122313945 - task: type: STS dataset: name: MTEB PAWSX type: C-MTEB/PAWSX config: default split: test revision: None metrics: - type: cos_sim_pearson value: 13.700131726042631 - type: cos_sim_spearman value: 15.663851577320184 - type: euclidean_pearson value: 17.869909454798112 - type: euclidean_spearman value: 16.09518673735175 - type: manhattan_pearson value: 18.030818366917593 - type: manhattan_spearman value: 16.34096397687474 - task: type: STS dataset: name: MTEB QBQTC type: C-MTEB/QBQTC config: default split: test revision: None metrics: - type: cos_sim_pearson value: 30.200343733562946 - type: cos_sim_spearman 
value: 32.645434631834966 - type: euclidean_pearson value: 32.612030669583234 - type: euclidean_spearman value: 34.67603837485763 - type: manhattan_pearson value: 32.6673080122766 - type: manhattan_spearman value: 34.8163622783733 - task: type: Retrieval dataset: name: MTEB QuoraRetrieval type: quora config: default split: test revision: None metrics: - type: map_at_1 value: 69.321 - type: map_at_10 value: 83.07 - type: map_at_100 value: 83.737 - type: map_at_1000 value: 83.758 - type: map_at_3 value: 80.12700000000001 - type: map_at_5 value: 81.97 - type: mrr_at_1 value: 79.74 - type: mrr_at_10 value: 86.22 - type: mrr_at_100 value: 86.345 - type: mrr_at_1000 value: 86.347 - type: mrr_at_3 value: 85.172 - type: mrr_at_5 value: 85.89099999999999 - type: ndcg_at_1 value: 79.77 - type: ndcg_at_10 value: 87.01299999999999 - type: ndcg_at_100 value: 88.382 - type: ndcg_at_1000 value: 88.53 - type: ndcg_at_3 value: 84.04 - type: ndcg_at_5 value: 85.68 - type: precision_at_1 value: 79.77 - type: precision_at_10 value: 13.211999999999998 - type: precision_at_100 value: 1.52 - type: precision_at_1000 value: 0.157 - type: precision_at_3 value: 36.730000000000004 - type: precision_at_5 value: 24.21 - type: recall_at_1 value: 69.321 - type: recall_at_10 value: 94.521 - type: recall_at_100 value: 99.258 - type: recall_at_1000 value: 99.97200000000001 - type: recall_at_3 value: 85.97200000000001 - type: recall_at_5 value: 90.589 - task: type: Clustering dataset: name: MTEB RedditClustering type: mteb/reddit-clustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 44.51751457277441 - task: type: Clustering dataset: name: MTEB RedditClusteringP2P type: mteb/reddit-clustering-p2p config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 53.60727449352775 - task: type: Retrieval dataset: name: MTEB SCIDOCS type: scidocs config: default split: test revision: None 
metrics: - type: map_at_1 value: 4.058 - type: map_at_10 value: 9.995999999999999 - type: map_at_100 value: 11.738 - type: map_at_1000 value: 11.999 - type: map_at_3 value: 7.353999999999999 - type: map_at_5 value: 8.68 - type: mrr_at_1 value: 20.0 - type: mrr_at_10 value: 30.244 - type: mrr_at_100 value: 31.378 - type: mrr_at_1000 value: 31.445 - type: mrr_at_3 value: 26.933 - type: mrr_at_5 value: 28.748 - type: ndcg_at_1 value: 20.0 - type: ndcg_at_10 value: 17.235 - type: ndcg_at_100 value: 24.241 - type: ndcg_at_1000 value: 29.253 - type: ndcg_at_3 value: 16.542 - type: ndcg_at_5 value: 14.386 - type: precision_at_1 value: 20.0 - type: precision_at_10 value: 8.9 - type: precision_at_100 value: 1.8929999999999998 - type: precision_at_1000 value: 0.31 - type: precision_at_3 value: 15.567 - type: precision_at_5 value: 12.620000000000001 - type: recall_at_1 value: 4.058 - type: recall_at_10 value: 18.062 - type: recall_at_100 value: 38.440000000000005 - type: recall_at_1000 value: 63.044999999999995 - type: recall_at_3 value: 9.493 - type: recall_at_5 value: 12.842 - task: type: STS dataset: name: MTEB SICK-R type: mteb/sickr-sts config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_pearson value: 85.36702895231333 - type: cos_sim_spearman value: 79.91790376084445 - type: euclidean_pearson value: 81.58989754571684 - type: euclidean_spearman value: 79.43876559435684 - type: manhattan_pearson value: 81.5041355053572 - type: manhattan_spearman value: 79.35411927652234 - task: type: STS dataset: name: MTEB STS12 type: mteb/sts12-sts config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 83.77166067512005 - type: cos_sim_spearman value: 75.7961015562481 - type: euclidean_pearson value: 82.03845114943047 - type: euclidean_spearman value: 78.75422268992615 - type: manhattan_pearson value: 82.11841609875198 - type: manhattan_spearman value: 78.79349601386468 - 
task: type: STS dataset: name: MTEB STS13 type: mteb/sts13-sts config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 83.28403658061106 - type: cos_sim_spearman value: 83.61682237930194 - type: euclidean_pearson value: 84.50220149144553 - type: euclidean_spearman value: 85.01944483089126 - type: manhattan_pearson value: 84.5526583345216 - type: manhattan_spearman value: 85.06290695547032 - task: type: STS dataset: name: MTEB STS14 type: mteb/sts14-sts config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 82.66893263127082 - type: cos_sim_spearman value: 78.73277873007592 - type: euclidean_pearson value: 80.78325001462842 - type: euclidean_spearman value: 79.1692321029638 - type: manhattan_pearson value: 80.82812137898084 - type: manhattan_spearman value: 79.23433932409523 - task: type: STS dataset: name: MTEB STS15 type: mteb/sts15-sts config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 85.6046231732945 - type: cos_sim_spearman value: 86.41326579037185 - type: euclidean_pearson value: 85.85739124012164 - type: euclidean_spearman value: 86.54285701350923 - type: manhattan_pearson value: 85.78835254765399 - type: manhattan_spearman value: 86.45431641050791 - task: type: STS dataset: name: MTEB STS16 type: mteb/sts16-sts config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 82.97881854103466 - type: cos_sim_spearman value: 84.50343997301495 - type: euclidean_pearson value: 82.83306004280789 - type: euclidean_spearman value: 83.2801802732528 - type: manhattan_pearson value: 82.73250604776496 - type: manhattan_spearman value: 83.12452727964241 - task: type: STS dataset: name: MTEB STS17 (ko-ko) type: mteb/sts17-crosslingual-sts config: ko-ko split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - 
type: cos_sim_pearson value: 61.59564206989664 - type: cos_sim_spearman value: 61.88740058576333 - type: euclidean_pearson value: 60.23297902405152 - type: euclidean_spearman value: 60.21120786234968 - type: manhattan_pearson value: 60.48897723321176 - type: manhattan_spearman value: 60.44230460138873 - task: type: STS dataset: name: MTEB STS17 (ar-ar) type: mteb/sts17-crosslingual-sts config: ar-ar split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 80.44912821552151 - type: cos_sim_spearman value: 81.13348443154915 - type: euclidean_pearson value: 81.09038308120358 - type: euclidean_spearman value: 80.5609874348409 - type: manhattan_pearson value: 81.13776188970186 - type: manhattan_spearman value: 80.5900946438308 - task: type: STS dataset: name: MTEB STS17 (en-ar) type: mteb/sts17-crosslingual-sts config: en-ar split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 78.72913217243624 - type: cos_sim_spearman value: 79.63696165091363 - type: euclidean_pearson value: 73.19989464436063 - type: euclidean_spearman value: 73.54600704085456 - type: manhattan_pearson value: 72.86702738433412 - type: manhattan_spearman value: 72.90617504239171 - task: type: STS dataset: name: MTEB STS17 (en-de) type: mteb/sts17-crosslingual-sts config: en-de split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 50.732677791011525 - type: cos_sim_spearman value: 52.523598781843916 - type: euclidean_pearson value: 49.35416337421446 - type: euclidean_spearman value: 51.33696662867874 - type: manhattan_pearson value: 50.506295752592145 - type: manhattan_spearman value: 52.62915407476881 - task: type: STS dataset: name: MTEB STS17 (en-en) type: mteb/sts17-crosslingual-sts config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 89.36491555020613 - type: cos_sim_spearman value: 
89.9454102616469 - type: euclidean_pearson value: 88.86298725696331 - type: euclidean_spearman value: 88.65552919486326 - type: manhattan_pearson value: 88.92114540797368 - type: manhattan_spearman value: 88.70527010857221 - task: type: STS dataset: name: MTEB STS17 (en-tr) type: mteb/sts17-crosslingual-sts config: en-tr split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 8.714024392790805 - type: cos_sim_spearman value: 4.749252746175972 - type: euclidean_pearson value: 10.22053449467633 - type: euclidean_spearman value: 9.037870998258068 - type: manhattan_pearson value: 12.0555115545086 - type: manhattan_spearman value: 10.63527037732596 - task: type: STS dataset: name: MTEB STS17 (es-en) type: mteb/sts17-crosslingual-sts config: es-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 84.02829923391249 - type: cos_sim_spearman value: 85.4083636563418 - type: euclidean_pearson value: 80.36151292795275 - type: euclidean_spearman value: 80.77292573694929 - type: manhattan_pearson value: 80.6693169692864 - type: manhattan_spearman value: 81.14159565166888 - task: type: STS dataset: name: MTEB STS17 (es-es) type: mteb/sts17-crosslingual-sts config: es-es split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 86.99900583005198 - type: cos_sim_spearman value: 87.3279898301188 - type: euclidean_pearson value: 86.87787294488236 - type: euclidean_spearman value: 85.53646010337043 - type: manhattan_pearson value: 86.9509718845318 - type: manhattan_spearman value: 85.71691660800931 - task: type: STS dataset: name: MTEB STS17 (fr-en) type: mteb/sts17-crosslingual-sts config: fr-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 83.46126526473 - type: cos_sim_spearman value: 83.95970248728918 - type: euclidean_pearson value: 81.73140443111127 - type: euclidean_spearman 
value: 81.74150374966206 - type: manhattan_pearson value: 81.86557893665228 - type: manhattan_spearman value: 82.09645552492371 - task: type: STS dataset: name: MTEB STS17 (it-en) type: mteb/sts17-crosslingual-sts config: it-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 46.49174934231959 - type: cos_sim_spearman value: 45.61787630214591 - type: euclidean_pearson value: 49.99290765454166 - type: euclidean_spearman value: 49.69936044179364 - type: manhattan_pearson value: 51.3375093082487 - type: manhattan_spearman value: 51.28438118049182 - task: type: STS dataset: name: MTEB STS17 (nl-en) type: mteb/sts17-crosslingual-sts config: nl-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 48.29554395534795 - type: cos_sim_spearman value: 46.68726750723354 - type: euclidean_pearson value: 47.17222230888035 - type: euclidean_spearman value: 45.92754616369105 - type: manhattan_pearson value: 47.75493126673596 - type: manhattan_spearman value: 46.20677181839115 - task: type: STS dataset: name: MTEB STS22 (en) type: mteb/sts22-crosslingual-sts config: en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 66.3630120343016 - type: cos_sim_spearman value: 65.81094140725656 - type: euclidean_pearson value: 67.90672012385122 - type: euclidean_spearman value: 67.81659181369037 - type: manhattan_pearson value: 68.0253831292356 - type: manhattan_spearman value: 67.6187327404364 - task: type: STS dataset: name: MTEB STS22 (de) type: mteb/sts22-crosslingual-sts config: de split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 29.18452426712489 - type: cos_sim_spearman value: 37.51420703956064 - type: euclidean_pearson value: 28.026224447990934 - type: euclidean_spearman value: 38.80123640343127 - type: manhattan_pearson value: 28.71522521219943 - type: manhattan_spearman 
value: 39.336233734574066 - task: type: STS dataset: name: MTEB STS22 (es) type: mteb/sts22-crosslingual-sts config: es split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 56.859180417788316 - type: cos_sim_spearman value: 59.78915219131012 - type: euclidean_pearson value: 62.96361204638708 - type: euclidean_spearman value: 61.17669127090527 - type: manhattan_pearson value: 63.76244034298364 - type: manhattan_spearman value: 61.86264089685531 - task: type: STS dataset: name: MTEB STS22 (pl) type: mteb/sts22-crosslingual-sts config: pl split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 16.606738041913964 - type: cos_sim_spearman value: 27.979167349378507 - type: euclidean_pearson value: 9.681469291321502 - type: euclidean_spearman value: 28.088375191612652 - type: manhattan_pearson value: 10.511180494241913 - type: manhattan_spearman value: 28.551302212661085 - task: type: STS dataset: name: MTEB STS22 (tr) type: mteb/sts22-crosslingual-sts config: tr split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 25.299512638088835 - type: cos_sim_spearman value: 42.32704160389304 - type: euclidean_pearson value: 38.695432241220615 - type: euclidean_spearman value: 42.64456376476522 - type: manhattan_pearson value: 39.85979335053606 - type: manhattan_spearman value: 42.769358737309716 - task: type: STS dataset: name: MTEB STS22 (ar) type: mteb/sts22-crosslingual-sts config: ar split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 47.92303842321097 - type: cos_sim_spearman value: 55.000760154318996 - type: euclidean_pearson value: 54.09534510237817 - type: euclidean_spearman value: 56.174584414116055 - type: manhattan_pearson value: 56.361913198454616 - type: manhattan_spearman value: 58.34526441198397 - task: type: STS dataset: name: MTEB STS22 (ru) type: 
mteb/sts22-crosslingual-sts config: ru split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 31.742856551594826 - type: cos_sim_spearman value: 43.13787302806463 - type: euclidean_pearson value: 31.905579993088136 - type: euclidean_spearman value: 39.885035201343186 - type: manhattan_pearson value: 32.43242118943698 - type: manhattan_spearman value: 40.11107248799126 - task: type: STS dataset: name: MTEB STS22 (zh) type: mteb/sts22-crosslingual-sts config: zh split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 47.44633750616152 - type: cos_sim_spearman value: 54.083033284097816 - type: euclidean_pearson value: 51.444658791680155 - type: euclidean_spearman value: 53.1381741726486 - type: manhattan_pearson value: 56.75523385117588 - type: manhattan_spearman value: 58.34517911003165 - task: type: STS dataset: name: MTEB STS22 (fr) type: mteb/sts22-crosslingual-sts config: fr split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 79.36983311049038 - type: cos_sim_spearman value: 81.25208121596035 - type: euclidean_pearson value: 79.0841246591628 - type: euclidean_spearman value: 79.63170247237287 - type: manhattan_pearson value: 79.76857988012227 - type: manhattan_spearman value: 80.19933344030764 - task: type: STS dataset: name: MTEB STS22 (de-en) type: mteb/sts22-crosslingual-sts config: de-en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 50.08537255290631 - type: cos_sim_spearman value: 51.6560951182032 - type: euclidean_pearson value: 56.245817211229856 - type: euclidean_spearman value: 57.84579505485162 - type: manhattan_pearson value: 57.178628792860394 - type: manhattan_spearman value: 58.868316567418965 - task: type: STS dataset: name: MTEB STS22 (es-en) type: mteb/sts22-crosslingual-sts config: es-en split: test revision: 
6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 69.32518691946098 - type: cos_sim_spearman value: 73.58536905137812 - type: euclidean_pearson value: 73.3593301595928 - type: euclidean_spearman value: 74.72443890443692 - type: manhattan_pearson value: 73.89491090838783 - type: manhattan_spearman value: 75.01810348241496 - task: type: STS dataset: name: MTEB STS22 (it) type: mteb/sts22-crosslingual-sts config: it split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 65.63185657261381 - type: cos_sim_spearman value: 68.8680524426534 - type: euclidean_pearson value: 65.8069214967351 - type: euclidean_spearman value: 67.58006300921988 - type: manhattan_pearson value: 66.42691541820066 - type: manhattan_spearman value: 68.20501753012334 - task: type: STS dataset: name: MTEB STS22 (pl-en) type: mteb/sts22-crosslingual-sts config: pl-en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 63.5746658293195 - type: cos_sim_spearman value: 60.766781234511114 - type: euclidean_pearson value: 63.87934914483433 - type: euclidean_spearman value: 57.609930019070575 - type: manhattan_pearson value: 66.02268099209732 - type: manhattan_spearman value: 60.27189531789914 - task: type: STS dataset: name: MTEB STS22 (zh-en) type: mteb/sts22-crosslingual-sts config: zh-en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 66.00715694009531 - type: cos_sim_spearman value: 65.00759157082473 - type: euclidean_pearson value: 46.532834841771916 - type: euclidean_spearman value: 45.726258106671516 - type: manhattan_pearson value: 67.32238041001737 - type: manhattan_spearman value: 66.143420656417 - task: type: STS dataset: name: MTEB STS22 (es-it) type: mteb/sts22-crosslingual-sts config: es-it split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 62.65123838155666 
- type: cos_sim_spearman value: 67.8261281384735 - type: euclidean_pearson value: 63.477912220562025 - type: euclidean_spearman value: 65.51430407718927 - type: manhattan_pearson value: 61.935191484002964 - type: manhattan_spearman value: 63.836661905551374 - task: type: STS dataset: name: MTEB STS22 (de-fr) type: mteb/sts22-crosslingual-sts config: de-fr split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 38.397676312074786 - type: cos_sim_spearman value: 39.66141773675305 - type: euclidean_pearson value: 32.78160515193193 - type: euclidean_spearman value: 33.754398073832384 - type: manhattan_pearson value: 31.542566989070103 - type: manhattan_spearman value: 31.84555978703678 - task: type: STS dataset: name: MTEB STS22 (de-pl) type: mteb/sts22-crosslingual-sts config: de-pl split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 16.134054972017115 - type: cos_sim_spearman value: 26.113399767684193 - type: euclidean_pearson value: 24.956029896964587 - type: euclidean_spearman value: 26.513723113179346 - type: manhattan_pearson value: 27.504346443344712 - type: manhattan_spearman value: 35.382424921072094 - task: type: STS dataset: name: MTEB STS22 (fr-pl) type: mteb/sts22-crosslingual-sts config: fr-pl split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 74.63601297425362 - type: cos_sim_spearman value: 84.51542547285167 - type: euclidean_pearson value: 72.60877043745072 - type: euclidean_spearman value: 73.24670207647144 - type: manhattan_pearson value: 69.30655335948613 - type: manhattan_spearman value: 73.24670207647144 - task: type: STS dataset: name: MTEB STSB type: C-MTEB/STSB config: default split: test revision: None metrics: - type: cos_sim_pearson value: 79.4028184159866 - type: cos_sim_spearman value: 79.53464687577328 - type: euclidean_pearson value: 79.25913610578554 - type: euclidean_spearman value: 
79.55288323830753 - type: manhattan_pearson value: 79.44759977916512 - type: manhattan_spearman value: 79.71927216173198 - task: type: STS dataset: name: MTEB STSBenchmark type: mteb/stsbenchmark-sts config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 85.07398235741444 - type: cos_sim_spearman value: 85.78865814488006 - type: euclidean_pearson value: 83.2824378418878 - type: euclidean_spearman value: 83.36258201307002 - type: manhattan_pearson value: 83.22221949643878 - type: manhattan_spearman value: 83.27892691688584 - task: type: Reranking dataset: name: MTEB SciDocsRR type: mteb/scidocs-reranking config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 78.1122816381465 - type: mrr value: 93.44523849425809 - task: type: Retrieval dataset: name: MTEB SciFact type: scifact config: default split: test revision: None metrics: - type: map_at_1 value: 51.132999999999996 - type: map_at_10 value: 60.672000000000004 - type: map_at_100 value: 61.504000000000005 - type: map_at_1000 value: 61.526 - type: map_at_3 value: 57.536 - type: map_at_5 value: 59.362 - type: mrr_at_1 value: 53.667 - type: mrr_at_10 value: 61.980000000000004 - type: mrr_at_100 value: 62.633 - type: mrr_at_1000 value: 62.653000000000006 - type: mrr_at_3 value: 59.721999999999994 - type: mrr_at_5 value: 60.789 - type: ndcg_at_1 value: 53.667 - type: ndcg_at_10 value: 65.42099999999999 - type: ndcg_at_100 value: 68.884 - type: ndcg_at_1000 value: 69.494 - type: ndcg_at_3 value: 60.007 - type: ndcg_at_5 value: 62.487 - type: precision_at_1 value: 53.667 - type: precision_at_10 value: 8.833 - type: precision_at_100 value: 1.0699999999999998 - type: precision_at_1000 value: 0.11199999999999999 - type: precision_at_3 value: 23.222 - type: precision_at_5 value: 15.667 - type: recall_at_1 value: 51.132999999999996 - type: recall_at_10 value: 78.989 - type: recall_at_100 value: 94.167 - type: 
recall_at_1000 value: 99.0 - type: recall_at_3 value: 64.328 - type: recall_at_5 value: 70.35 - task: type: PairClassification dataset: name: MTEB SprintDuplicateQuestions type: mteb/sprintduplicatequestions-pairclassification config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.78910891089109 - type: cos_sim_ap value: 94.58344155979994 - type: cos_sim_f1 value: 89.2354124748491 - type: cos_sim_precision value: 89.77732793522267 - type: cos_sim_recall value: 88.7 - type: dot_accuracy value: 99.74158415841585 - type: dot_ap value: 92.08599680108772 - type: dot_f1 value: 87.00846192135391 - type: dot_precision value: 86.62041625371654 - type: dot_recall value: 87.4 - type: euclidean_accuracy value: 99.78316831683168 - type: euclidean_ap value: 94.57715670055748 - type: euclidean_f1 value: 88.98765432098766 - type: euclidean_precision value: 87.90243902439025 - type: euclidean_recall value: 90.10000000000001 - type: manhattan_accuracy value: 99.78811881188119 - type: manhattan_ap value: 94.73016642953513 - type: manhattan_f1 value: 89.3326838772528 - type: manhattan_precision value: 87.08452041785375 - type: manhattan_recall value: 91.7 - type: max_accuracy value: 99.78910891089109 - type: max_ap value: 94.73016642953513 - type: max_f1 value: 89.3326838772528 - task: type: Clustering dataset: name: MTEB StackExchangeClustering type: mteb/stackexchange-clustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 57.11358892084413 - task: type: Clustering dataset: name: MTEB StackExchangeClusteringP2P type: mteb/stackexchange-clustering-p2p config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 31.914375833951354 - task: type: Reranking dataset: name: MTEB StackOverflowDupQuestions type: mteb/stackoverflowdupquestions-reranking config: default split: test revision: 
e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 48.9994487557691 - type: mrr value: 49.78547290128173 - task: type: Summarization dataset: name: MTEB SummEval type: mteb/summeval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 30.19567881069216 - type: cos_sim_spearman value: 31.098791519646298 - type: dot_pearson value: 30.61141391110544 - type: dot_spearman value: 30.995416064312153 - task: type: Reranking dataset: name: MTEB T2Reranking type: C-MTEB/T2Reranking config: default split: dev revision: None metrics: - type: map value: 65.9449793956858 - type: mrr value: 75.83074738584217 - task: type: Retrieval dataset: name: MTEB T2Retrieval type: C-MTEB/T2Retrieval config: default split: dev revision: None metrics: - type: map_at_1 value: 23.186999999999998 - type: map_at_10 value: 63.007000000000005 - type: map_at_100 value: 66.956 - type: map_at_1000 value: 67.087 - type: map_at_3 value: 44.769999999999996 - type: map_at_5 value: 54.629000000000005 - type: mrr_at_1 value: 81.22500000000001 - type: mrr_at_10 value: 85.383 - type: mrr_at_100 value: 85.555 - type: mrr_at_1000 value: 85.564 - type: mrr_at_3 value: 84.587 - type: mrr_at_5 value: 85.105 - type: ndcg_at_1 value: 81.22500000000001 - type: ndcg_at_10 value: 72.81 - type: ndcg_at_100 value: 78.108 - type: ndcg_at_1000 value: 79.477 - type: ndcg_at_3 value: 75.36 - type: ndcg_at_5 value: 73.19099999999999 - type: precision_at_1 value: 81.22500000000001 - type: precision_at_10 value: 36.419000000000004 - type: precision_at_100 value: 4.6850000000000005 - type: precision_at_1000 value: 0.502 - type: precision_at_3 value: 66.125 - type: precision_at_5 value: 54.824 - type: recall_at_1 value: 23.186999999999998 - type: recall_at_10 value: 71.568 - type: recall_at_100 value: 88.32799999999999 - type: recall_at_1000 value: 95.256 - type: recall_at_3 value: 47.04 - type: recall_at_5 value: 59.16400000000001 - task: type: 
Classification dataset: name: MTEB TNews type: C-MTEB/TNews-classification config: default split: validation revision: None metrics: - type: accuracy value: 46.08 - type: f1 value: 44.576714769815986 - task: type: Retrieval dataset: name: MTEB TRECCOVID type: trec-covid config: default split: test revision: None metrics: - type: map_at_1 value: 0.23600000000000002 - type: map_at_10 value: 2.01 - type: map_at_100 value: 11.237 - type: map_at_1000 value: 26.241999999999997 - type: map_at_3 value: 0.705 - type: map_at_5 value: 1.134 - type: mrr_at_1 value: 92.0 - type: mrr_at_10 value: 95.667 - type: mrr_at_100 value: 95.667 - type: mrr_at_1000 value: 95.667 - type: mrr_at_3 value: 95.667 - type: mrr_at_5 value: 95.667 - type: ndcg_at_1 value: 88.0 - type: ndcg_at_10 value: 80.028 - type: ndcg_at_100 value: 58.557 - type: ndcg_at_1000 value: 51.108 - type: ndcg_at_3 value: 86.235 - type: ndcg_at_5 value: 83.776 - type: precision_at_1 value: 92.0 - type: precision_at_10 value: 83.6 - type: precision_at_100 value: 59.9 - type: precision_at_1000 value: 22.556 - type: precision_at_3 value: 92.667 - type: precision_at_5 value: 89.60000000000001 - type: recall_at_1 value: 0.23600000000000002 - type: recall_at_10 value: 2.164 - type: recall_at_100 value: 14.268 - type: recall_at_1000 value: 47.993 - type: recall_at_3 value: 0.728 - type: recall_at_5 value: 1.18 - task: type: BitextMining dataset: name: MTEB Tatoeba (sqi-eng) type: mteb/tatoeba-bitext-mining config: sqi-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 16.0 - type: f1 value: 12.072197229668266 - type: precision value: 11.07125213426268 - type: recall value: 16.0 - task: type: BitextMining dataset: name: MTEB Tatoeba (fry-eng) type: mteb/tatoeba-bitext-mining config: fry-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 31.79190751445087 - type: f1 value: 25.33993944398569 - type: precision value: 
23.462449892587426 - type: recall value: 31.79190751445087 - task: type: BitextMining dataset: name: MTEB Tatoeba (kur-eng) type: mteb/tatoeba-bitext-mining config: kur-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 14.390243902439023 - type: f1 value: 10.647146321087272 - type: precision value: 9.753700307679768 - type: recall value: 14.390243902439023 - task: type: BitextMining dataset: name: MTEB Tatoeba (tur-eng) type: mteb/tatoeba-bitext-mining config: tur-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 7.8 - type: f1 value: 5.087296515623526 - type: precision value: 4.543963123070674 - type: recall value: 7.8 - task: type: BitextMining dataset: name: MTEB Tatoeba (deu-eng) type: mteb/tatoeba-bitext-mining config: deu-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 58.5 - type: f1 value: 53.26571428571428 - type: precision value: 51.32397398353281 - type: recall value: 58.5 - task: type: BitextMining dataset: name: MTEB Tatoeba (nld-eng) type: mteb/tatoeba-bitext-mining config: nld-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 29.5 - type: f1 value: 25.14837668933257 - type: precision value: 23.949224030449837 - type: recall value: 29.5 - task: type: BitextMining dataset: name: MTEB Tatoeba (ron-eng) type: mteb/tatoeba-bitext-mining config: ron-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 28.7 - type: f1 value: 23.196045369663018 - type: precision value: 21.502155293536873 - type: recall value: 28.7 - task: type: BitextMining dataset: name: MTEB Tatoeba (ang-eng) type: mteb/tatoeba-bitext-mining config: ang-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 27.611940298507463 - type: f1 value: 19.431414356787492 - type: precision value: 
17.160948504232085 - type: recall value: 27.611940298507463 - task: type: BitextMining dataset: name: MTEB Tatoeba (ido-eng) type: mteb/tatoeba-bitext-mining config: ido-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 46.0 - type: f1 value: 39.146820760938404 - type: precision value: 36.89055652165172 - type: recall value: 46.0 - task: type: BitextMining dataset: name: MTEB Tatoeba (jav-eng) type: mteb/tatoeba-bitext-mining config: jav-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 23.414634146341466 - type: f1 value: 18.60234074868221 - type: precision value: 17.310239781020474 - type: recall value: 23.414634146341466 - task: type: BitextMining dataset: name: MTEB Tatoeba (isl-eng) type: mteb/tatoeba-bitext-mining config: isl-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 7.3 - type: f1 value: 5.456411432480631 - type: precision value: 5.073425278627456 - type: recall value: 7.3 - task: type: BitextMining dataset: name: MTEB Tatoeba (slv-eng) type: mteb/tatoeba-bitext-mining config: slv-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 10.814094775212636 - type: f1 value: 8.096556306772158 - type: precision value: 7.501928709802902 - type: recall value: 10.814094775212636 - task: type: BitextMining dataset: name: MTEB Tatoeba (cym-eng) type: mteb/tatoeba-bitext-mining config: cym-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 11.304347826086957 - type: f1 value: 7.766717493033283 - type: precision value: 6.980930791147511 - type: recall value: 11.304347826086957 - task: type: BitextMining dataset: name: MTEB Tatoeba (kaz-eng) type: mteb/tatoeba-bitext-mining config: kaz-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 6.260869565217392 - type: f1 value: 
4.695624631925284 - type: precision value: 4.520242639508398 - type: recall value: 6.260869565217392 - task: type: BitextMining dataset: name: MTEB Tatoeba (est-eng) type: mteb/tatoeba-bitext-mining config: est-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 6.9 - type: f1 value: 4.467212205066257 - type: precision value: 4.004142723685108 - type: recall value: 6.9 - task: type: BitextMining dataset: name: MTEB Tatoeba (heb-eng) type: mteb/tatoeba-bitext-mining config: heb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 1.0999999999999999 - type: f1 value: 0.6945869191049914 - type: precision value: 0.6078431372549019 - type: recall value: 1.0999999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (gla-eng) type: mteb/tatoeba-bitext-mining config: gla-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 4.583835946924005 - type: f1 value: 2.9858475730729075 - type: precision value: 2.665996515212438 - type: recall value: 4.583835946924005 - task: type: BitextMining dataset: name: MTEB Tatoeba (mar-eng) type: mteb/tatoeba-bitext-mining config: mar-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 59.199999999999996 - type: f1 value: 52.67345238095238 - type: precision value: 50.13575757575758 - type: recall value: 59.199999999999996 - task: type: BitextMining dataset: name: MTEB Tatoeba (lat-eng) type: mteb/tatoeba-bitext-mining config: lat-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 35.0 - type: f1 value: 27.648653013653007 - type: precision value: 25.534839833369244 - type: recall value: 35.0 - task: type: BitextMining dataset: name: MTEB Tatoeba (bel-eng) type: mteb/tatoeba-bitext-mining config: bel-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy 
value: 13.100000000000001 - type: f1 value: 9.62336638477808 - type: precision value: 8.875194920058407 - type: recall value: 13.100000000000001 - task: type: BitextMining dataset: name: MTEB Tatoeba (pms-eng) type: mteb/tatoeba-bitext-mining config: pms-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 32.95238095238095 - type: f1 value: 27.600581429152854 - type: precision value: 26.078624096473064 - type: recall value: 32.95238095238095 - task: type: BitextMining dataset: name: MTEB Tatoeba (gle-eng) type: mteb/tatoeba-bitext-mining config: gle-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 6.5 - type: f1 value: 3.9595645184317045 - type: precision value: 3.5893378968989453 - type: recall value: 6.5 - task: type: BitextMining dataset: name: MTEB Tatoeba (pes-eng) type: mteb/tatoeba-bitext-mining config: pes-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 17.8 - type: f1 value: 13.508124743694003 - type: precision value: 12.24545634920635 - type: recall value: 17.8 - task: type: BitextMining dataset: name: MTEB Tatoeba (nob-eng) type: mteb/tatoeba-bitext-mining config: nob-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 21.7 - type: f1 value: 17.67074499610417 - type: precision value: 16.47070885787265 - type: recall value: 21.7 - task: type: BitextMining dataset: name: MTEB Tatoeba (bul-eng) type: mteb/tatoeba-bitext-mining config: bul-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 19.3 - type: f1 value: 14.249803276788573 - type: precision value: 12.916981621996223 - type: recall value: 19.3 - task: type: BitextMining dataset: name: MTEB Tatoeba (cbk-eng) type: mteb/tatoeba-bitext-mining config: cbk-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 67.2 - 
type: f1 value: 61.03507936507936 - type: precision value: 58.69699346405229 - type: recall value: 67.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (hun-eng) type: mteb/tatoeba-bitext-mining config: hun-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 6.5 - type: f1 value: 4.295097572176196 - type: precision value: 3.809609027256814 - type: recall value: 6.5 - task: type: BitextMining dataset: name: MTEB Tatoeba (uig-eng) type: mteb/tatoeba-bitext-mining config: uig-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 2.8000000000000003 - type: f1 value: 1.678577135635959 - type: precision value: 1.455966810966811 - type: recall value: 2.8000000000000003 - task: type: BitextMining dataset: name: MTEB Tatoeba (rus-eng) type: mteb/tatoeba-bitext-mining config: rus-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 47.9 - type: f1 value: 40.26661017143776 - type: precision value: 37.680778943278945 - type: recall value: 47.9 - task: type: BitextMining dataset: name: MTEB Tatoeba (spa-eng) type: mteb/tatoeba-bitext-mining config: spa-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.0 - type: f1 value: 96.05 - type: precision value: 95.58333333333334 - type: recall value: 97.0 - task: type: BitextMining dataset: name: MTEB Tatoeba (hye-eng) type: mteb/tatoeba-bitext-mining config: hye-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 0.9433962264150944 - type: f1 value: 0.6457074216068709 - type: precision value: 0.6068362258275373 - type: recall value: 0.9433962264150944 - task: type: BitextMining dataset: name: MTEB Tatoeba (tel-eng) type: mteb/tatoeba-bitext-mining config: tel-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 74.78632478632478 - type: f1 value: 
69.05372405372405 - type: precision value: 66.82336182336182 - type: recall value: 74.78632478632478 - task: type: BitextMining dataset: name: MTEB Tatoeba (afr-eng) type: mteb/tatoeba-bitext-mining config: afr-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 19.2 - type: f1 value: 14.54460169057995 - type: precision value: 13.265236397589335 - type: recall value: 19.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (mon-eng) type: mteb/tatoeba-bitext-mining config: mon-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 6.8181818181818175 - type: f1 value: 4.78808236251355 - type: precision value: 4.4579691142191145 - type: recall value: 6.8181818181818175 - task: type: BitextMining dataset: name: MTEB Tatoeba (arz-eng) type: mteb/tatoeba-bitext-mining config: arz-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 72.53668763102725 - type: f1 value: 66.00978336827393 - type: precision value: 63.21104122990915 - type: recall value: 72.53668763102725 - task: type: BitextMining dataset: name: MTEB Tatoeba (hrv-eng) type: mteb/tatoeba-bitext-mining config: hrv-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 12.7 - type: f1 value: 9.731576351893512 - type: precision value: 8.986658245110663 - type: recall value: 12.7 - task: type: BitextMining dataset: name: MTEB Tatoeba (nov-eng) type: mteb/tatoeba-bitext-mining config: nov-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 57.19844357976653 - type: f1 value: 49.138410227904394 - type: precision value: 45.88197146562906 - type: recall value: 57.19844357976653 - task: type: BitextMining dataset: name: MTEB Tatoeba (gsw-eng) type: mteb/tatoeba-bitext-mining config: gsw-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 
28.205128205128204 - type: f1 value: 21.863766936230704 - type: precision value: 20.212164378831048 - type: recall value: 28.205128205128204 - task: type: BitextMining dataset: name: MTEB Tatoeba (nds-eng) type: mteb/tatoeba-bitext-mining config: nds-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 23.3 - type: f1 value: 17.75959261382939 - type: precision value: 16.18907864830205 - type: recall value: 23.3 - task: type: BitextMining dataset: name: MTEB Tatoeba (ukr-eng) type: mteb/tatoeba-bitext-mining config: ukr-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 19.1 - type: f1 value: 14.320618913993744 - type: precision value: 12.980748202777615 - type: recall value: 19.1 - task: type: BitextMining dataset: name: MTEB Tatoeba (uzb-eng) type: mteb/tatoeba-bitext-mining config: uzb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 8.411214953271028 - type: f1 value: 5.152309182683014 - type: precision value: 4.456214003721122 - type: recall value: 8.411214953271028 - task: type: BitextMining dataset: name: MTEB Tatoeba (lit-eng) type: mteb/tatoeba-bitext-mining config: lit-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 6.7 - type: f1 value: 4.833930504764646 - type: precision value: 4.475394510103751 - type: recall value: 6.7 - task: type: BitextMining dataset: name: MTEB Tatoeba (ina-eng) type: mteb/tatoeba-bitext-mining config: ina-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 79.4 - type: f1 value: 74.59166666666667 - type: precision value: 72.59928571428571 - type: recall value: 79.4 - task: type: BitextMining dataset: name: MTEB Tatoeba (lfn-eng) type: mteb/tatoeba-bitext-mining config: lfn-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 47.8 - type: f1 
value: 41.944877899877895 - type: precision value: 39.87211701696996 - type: recall value: 47.8 - task: type: BitextMining dataset: name: MTEB Tatoeba (zsm-eng) type: mteb/tatoeba-bitext-mining config: zsm-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 85.0 - type: f1 value: 81.47666666666666 - type: precision value: 79.95909090909092 - type: recall value: 85.0 - task: type: BitextMining dataset: name: MTEB Tatoeba (ita-eng) type: mteb/tatoeba-bitext-mining config: ita-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 62.6 - type: f1 value: 55.96755336167101 - type: precision value: 53.49577131202131 - type: recall value: 62.6 - task: type: BitextMining dataset: name: MTEB Tatoeba (cmn-eng) type: mteb/tatoeba-bitext-mining config: cmn-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.3 - type: f1 value: 93.96666666666668 - type: precision value: 93.33333333333333 - type: recall value: 95.3 - task: type: BitextMining dataset: name: MTEB Tatoeba (lvs-eng) type: mteb/tatoeba-bitext-mining config: lvs-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 7.7 - type: f1 value: 5.534253062728994 - type: precision value: 4.985756669800788 - type: recall value: 7.7 - task: type: BitextMining dataset: name: MTEB Tatoeba (glg-eng) type: mteb/tatoeba-bitext-mining config: glg-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 80.5 - type: f1 value: 75.91705128205129 - type: precision value: 73.96261904761904 - type: recall value: 80.5 - task: type: BitextMining dataset: name: MTEB Tatoeba (ceb-eng) type: mteb/tatoeba-bitext-mining config: ceb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 10.333333333333334 - type: f1 value: 7.753678057001793 - type: precision value: 
7.207614225986279 - type: recall value: 10.333333333333334 - task: type: BitextMining dataset: name: MTEB Tatoeba (bre-eng) type: mteb/tatoeba-bitext-mining config: bre-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 8.6 - type: f1 value: 5.345683110450071 - type: precision value: 4.569931461907268 - type: recall value: 8.6 - task: type: BitextMining dataset: name: MTEB Tatoeba (ben-eng) type: mteb/tatoeba-bitext-mining config: ben-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 82.8 - type: f1 value: 78.75999999999999 - type: precision value: 76.97666666666666 - type: recall value: 82.8 - task: type: BitextMining dataset: name: MTEB Tatoeba (swg-eng) type: mteb/tatoeba-bitext-mining config: swg-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 26.785714285714285 - type: f1 value: 21.62627551020408 - type: precision value: 20.17219387755102 - type: recall value: 26.785714285714285 - task: type: BitextMining dataset: name: MTEB Tatoeba (arq-eng) type: mteb/tatoeba-bitext-mining config: arq-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 32.93084522502745 - type: f1 value: 26.281513627941628 - type: precision value: 24.05050619189897 - type: recall value: 32.93084522502745 - task: type: BitextMining dataset: name: MTEB Tatoeba (kab-eng) type: mteb/tatoeba-bitext-mining config: kab-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 2.1 - type: f1 value: 1.144678201129814 - type: precision value: 1.0228433014856975 - type: recall value: 2.1 - task: type: BitextMining dataset: name: MTEB Tatoeba (fra-eng) type: mteb/tatoeba-bitext-mining config: fra-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.3 - type: f1 value: 92.77000000000001 - type: precision value: 
92.09166666666667 - type: recall value: 94.3 - task: type: BitextMining dataset: name: MTEB Tatoeba (por-eng) type: mteb/tatoeba-bitext-mining config: por-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.1 - type: f1 value: 92.51666666666667 - type: precision value: 91.75 - type: recall value: 94.1 - task: type: BitextMining dataset: name: MTEB Tatoeba (tat-eng) type: mteb/tatoeba-bitext-mining config: tat-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 4.1000000000000005 - type: f1 value: 2.856566814643248 - type: precision value: 2.6200368188362506 - type: recall value: 4.1000000000000005 - task: type: BitextMining dataset: name: MTEB Tatoeba (oci-eng) type: mteb/tatoeba-bitext-mining config: oci-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 45.9 - type: f1 value: 39.02207792207792 - type: precision value: 36.524158064158065 - type: recall value: 45.9 - task: type: BitextMining dataset: name: MTEB Tatoeba (pol-eng) type: mteb/tatoeba-bitext-mining config: pol-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 13.4 - type: f1 value: 9.61091517529598 - type: precision value: 8.755127233877234 - type: recall value: 13.4 - task: type: BitextMining dataset: name: MTEB Tatoeba (war-eng) type: mteb/tatoeba-bitext-mining config: war-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 11.1 - type: f1 value: 8.068379205189386 - type: precision value: 7.400827352459544 - type: recall value: 11.1 - task: type: BitextMining dataset: name: MTEB Tatoeba (aze-eng) type: mteb/tatoeba-bitext-mining config: aze-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 8.9 - type: f1 value: 6.632376174517077 - type: precision value: 6.07114926880766 - type: recall value: 8.9 - task: type: 
BitextMining dataset: name: MTEB Tatoeba (vie-eng) type: mteb/tatoeba-bitext-mining config: vie-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.8 - type: f1 value: 94.57333333333334 - type: precision value: 93.99166666666667 - type: recall value: 95.8 - task: type: BitextMining dataset: name: MTEB Tatoeba (nno-eng) type: mteb/tatoeba-bitext-mining config: nno-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 16.6 - type: f1 value: 13.328940031174618 - type: precision value: 12.47204179664362 - type: recall value: 16.6 - task: type: BitextMining dataset: name: MTEB Tatoeba (cha-eng) type: mteb/tatoeba-bitext-mining config: cha-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 29.927007299270077 - type: f1 value: 22.899432278994322 - type: precision value: 20.917701519891303 - type: recall value: 29.927007299270077 - task: type: BitextMining dataset: name: MTEB Tatoeba (mhr-eng) type: mteb/tatoeba-bitext-mining config: mhr-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 3.5000000000000004 - type: f1 value: 2.3809722674927083 - type: precision value: 2.1368238705738705 - type: recall value: 3.5000000000000004 - task: type: BitextMining dataset: name: MTEB Tatoeba (dan-eng) type: mteb/tatoeba-bitext-mining config: dan-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 21.6 - type: f1 value: 17.54705304666238 - type: precision value: 16.40586970344022 - type: recall value: 21.6 - task: type: BitextMining dataset: name: MTEB Tatoeba (ell-eng) type: mteb/tatoeba-bitext-mining config: ell-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 3.5999999999999996 - type: f1 value: 2.3374438522182763 - type: precision value: 2.099034070054354 - type: recall value: 
3.5999999999999996 - task: type: BitextMining dataset: name: MTEB Tatoeba (amh-eng) type: mteb/tatoeba-bitext-mining config: amh-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 1.7857142857142856 - type: f1 value: 0.12056962540054328 - type: precision value: 0.0628414244485673 - type: recall value: 1.7857142857142856 - task: type: BitextMining dataset: name: MTEB Tatoeba (pam-eng) type: mteb/tatoeba-bitext-mining config: pam-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 7.3999999999999995 - type: f1 value: 5.677284679983816 - type: precision value: 5.314304945764335 - type: recall value: 7.3999999999999995 - task: type: BitextMining dataset: name: MTEB Tatoeba (hsb-eng) type: mteb/tatoeba-bitext-mining config: hsb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 13.043478260869565 - type: f1 value: 9.776306477806768 - type: precision value: 9.09389484497104 - type: recall value: 13.043478260869565 - task: type: BitextMining dataset: name: MTEB Tatoeba (srp-eng) type: mteb/tatoeba-bitext-mining config: srp-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 12.3 - type: f1 value: 8.757454269574472 - type: precision value: 7.882868657107786 - type: recall value: 12.3 - task: type: BitextMining dataset: name: MTEB Tatoeba (epo-eng) type: mteb/tatoeba-bitext-mining config: epo-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 28.9 - type: f1 value: 23.108557220070377 - type: precision value: 21.35433328562513 - type: recall value: 28.9 - task: type: BitextMining dataset: name: MTEB Tatoeba (kzj-eng) type: mteb/tatoeba-bitext-mining config: kzj-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 6.4 - type: f1 value: 4.781499273475174 - type: precision value: 
4.4496040053464565 - type: recall value: 6.4 - task: type: BitextMining dataset: name: MTEB Tatoeba (awa-eng) type: mteb/tatoeba-bitext-mining config: awa-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 51.94805194805194 - type: f1 value: 45.658020784071205 - type: precision value: 43.54163933709388 - type: recall value: 51.94805194805194 - task: type: BitextMining dataset: name: MTEB Tatoeba (fao-eng) type: mteb/tatoeba-bitext-mining config: fao-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 14.50381679389313 - type: f1 value: 9.416337348733041 - type: precision value: 8.17070085031468 - type: recall value: 14.50381679389313 - task: type: BitextMining dataset: name: MTEB Tatoeba (mal-eng) type: mteb/tatoeba-bitext-mining config: mal-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 88.79184861717613 - type: f1 value: 85.56040756914118 - type: precision value: 84.08539543910723 - type: recall value: 88.79184861717613 - task: type: BitextMining dataset: name: MTEB Tatoeba (ile-eng) type: mteb/tatoeba-bitext-mining config: ile-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 62.5 - type: f1 value: 56.0802331002331 - type: precision value: 53.613788230739445 - type: recall value: 62.5 - task: type: BitextMining dataset: name: MTEB Tatoeba (bos-eng) type: mteb/tatoeba-bitext-mining config: bos-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 16.101694915254235 - type: f1 value: 11.927172795816864 - type: precision value: 10.939011968423735 - type: recall value: 16.101694915254235 - task: type: BitextMining dataset: name: MTEB Tatoeba (cor-eng) type: mteb/tatoeba-bitext-mining config: cor-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 5.5 - type: f1 value: 
3.1258727724517197 - type: precision value: 2.679506580565404 - type: recall value: 5.5 - task: type: BitextMining dataset: name: MTEB Tatoeba (cat-eng) type: mteb/tatoeba-bitext-mining config: cat-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 87.6 - type: f1 value: 84.53666666666666 - type: precision value: 83.125 - type: recall value: 87.6 - task: type: BitextMining dataset: name: MTEB Tatoeba (eus-eng) type: mteb/tatoeba-bitext-mining config: eus-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 65.7 - type: f1 value: 59.64428571428571 - type: precision value: 57.30171568627451 - type: recall value: 65.7 - task: type: BitextMining dataset: name: MTEB Tatoeba (yue-eng) type: mteb/tatoeba-bitext-mining config: yue-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 84.7 - type: f1 value: 81.34523809523809 - type: precision value: 79.82777777777778 - type: recall value: 84.7 - task: type: BitextMining dataset: name: MTEB Tatoeba (swe-eng) type: mteb/tatoeba-bitext-mining config: swe-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 18.6 - type: f1 value: 14.93884103295868 - type: precision value: 14.059478087803882 - type: recall value: 18.6 - task: type: BitextMining dataset: name: MTEB Tatoeba (dtp-eng) type: mteb/tatoeba-bitext-mining config: dtp-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 5.5 - type: f1 value: 3.815842342611909 - type: precision value: 3.565130046415928 - type: recall value: 5.5 - task: type: BitextMining dataset: name: MTEB Tatoeba (kat-eng) type: mteb/tatoeba-bitext-mining config: kat-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 1.2064343163538873 - type: f1 value: 0.9147778048582338 - type: precision value: 0.8441848589301671 - type: 
recall value: 1.2064343163538873 - task: type: BitextMining dataset: name: MTEB Tatoeba (jpn-eng) type: mteb/tatoeba-bitext-mining config: jpn-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 71.3 - type: f1 value: 65.97350649350648 - type: precision value: 63.85277777777777 - type: recall value: 71.3 - task: type: BitextMining dataset: name: MTEB Tatoeba (csb-eng) type: mteb/tatoeba-bitext-mining config: csb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 13.043478260869565 - type: f1 value: 9.043759194508343 - type: precision value: 8.097993164155737 - type: recall value: 13.043478260869565 - task: type: BitextMining dataset: name: MTEB Tatoeba (xho-eng) type: mteb/tatoeba-bitext-mining config: xho-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 11.267605633802818 - type: f1 value: 8.30172606520348 - type: precision value: 7.737059013603729 - type: recall value: 11.267605633802818 - task: type: BitextMining dataset: name: MTEB Tatoeba (orv-eng) type: mteb/tatoeba-bitext-mining config: orv-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 5.029940119760479 - type: f1 value: 3.07264903262435 - type: precision value: 2.7633481831401783 - type: recall value: 5.029940119760479 - task: type: BitextMining dataset: name: MTEB Tatoeba (ind-eng) type: mteb/tatoeba-bitext-mining config: ind-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 90.60000000000001 - type: f1 value: 88.29666666666667 - type: precision value: 87.21666666666667 - type: recall value: 90.60000000000001 - task: type: BitextMining dataset: name: MTEB Tatoeba (tuk-eng) type: mteb/tatoeba-bitext-mining config: tuk-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 7.389162561576355 - type: f1 value: 
5.142049156827481 - type: precision value: 4.756506859714838 - type: recall value: 7.389162561576355 - task: type: BitextMining dataset: name: MTEB Tatoeba (max-eng) type: mteb/tatoeba-bitext-mining config: max-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 44.36619718309859 - type: f1 value: 39.378676538811256 - type: precision value: 37.71007182068377 - type: recall value: 44.36619718309859 - task: type: BitextMining dataset: name: MTEB Tatoeba (swh-eng) type: mteb/tatoeba-bitext-mining config: swh-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 21.794871794871796 - type: f1 value: 16.314588577641768 - type: precision value: 14.962288221599962 - type: recall value: 21.794871794871796 - task: type: BitextMining dataset: name: MTEB Tatoeba (hin-eng) type: mteb/tatoeba-bitext-mining config: hin-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 93.5 - type: f1 value: 91.53333333333333 - type: precision value: 90.58333333333333 - type: recall value: 93.5 - task: type: BitextMining dataset: name: MTEB Tatoeba (dsb-eng) type: mteb/tatoeba-bitext-mining config: dsb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 12.526096033402922 - type: f1 value: 9.57488704957882 - type: precision value: 8.943001322776725 - type: recall value: 12.526096033402922 - task: type: BitextMining dataset: name: MTEB Tatoeba (ber-eng) type: mteb/tatoeba-bitext-mining config: ber-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 6.9 - type: f1 value: 4.5770099528158 - type: precision value: 4.166915172638407 - type: recall value: 6.9 - task: type: BitextMining dataset: name: MTEB Tatoeba (tam-eng) type: mteb/tatoeba-bitext-mining config: tam-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 
81.75895765472313 - type: f1 value: 77.29641693811075 - type: precision value: 75.3528773072747 - type: recall value: 81.75895765472313 - task: type: BitextMining dataset: name: MTEB Tatoeba (slk-eng) type: mteb/tatoeba-bitext-mining config: slk-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 11.0 - type: f1 value: 8.522094712720397 - type: precision value: 7.883076528738328 - type: recall value: 11.0 - task: type: BitextMining dataset: name: MTEB Tatoeba (tgl-eng) type: mteb/tatoeba-bitext-mining config: tgl-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 11.3 - type: f1 value: 8.626190704312432 - type: precision value: 7.994434420637179 - type: recall value: 11.3 - task: type: BitextMining dataset: name: MTEB Tatoeba (ast-eng) type: mteb/tatoeba-bitext-mining config: ast-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 74.01574803149606 - type: f1 value: 68.16272965879266 - type: precision value: 65.99737532808399 - type: recall value: 74.01574803149606 - task: type: BitextMining dataset: name: MTEB Tatoeba (mkd-eng) type: mteb/tatoeba-bitext-mining config: mkd-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 9.0 - type: f1 value: 6.189958106409719 - type: precision value: 5.445330404889228 - type: recall value: 9.0 - task: type: BitextMining dataset: name: MTEB Tatoeba (khm-eng) type: mteb/tatoeba-bitext-mining config: khm-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 0.2770083102493075 - type: f1 value: 0.011664800298618888 - type: precision value: 0.005957856811560036 - type: recall value: 0.2770083102493075 - task: type: BitextMining dataset: name: MTEB Tatoeba (ces-eng) type: mteb/tatoeba-bitext-mining config: ces-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: 
accuracy value: 8.799999999999999 - type: f1 value: 5.636139438882621 - type: precision value: 4.993972914553003 - type: recall value: 8.799999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (tzl-eng) type: mteb/tatoeba-bitext-mining config: tzl-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 37.5 - type: f1 value: 31.31118881118881 - type: precision value: 29.439102564102566 - type: recall value: 37.5 - task: type: BitextMining dataset: name: MTEB Tatoeba (urd-eng) type: mteb/tatoeba-bitext-mining config: urd-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 74.5 - type: f1 value: 68.96380952380953 - type: precision value: 66.67968253968255 - type: recall value: 74.5 - task: type: BitextMining dataset: name: MTEB Tatoeba (ara-eng) type: mteb/tatoeba-bitext-mining config: ara-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 89.0 - type: f1 value: 86.42523809523809 - type: precision value: 85.28333333333332 - type: recall value: 89.0 - task: type: BitextMining dataset: name: MTEB Tatoeba (kor-eng) type: mteb/tatoeba-bitext-mining config: kor-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 17.2 - type: f1 value: 12.555081585081584 - type: precision value: 11.292745310245309 - type: recall value: 17.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (yid-eng) type: mteb/tatoeba-bitext-mining config: yid-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 0.3537735849056604 - type: f1 value: 0.12010530448397783 - type: precision value: 0.11902214818132154 - type: recall value: 0.3537735849056604 - task: type: BitextMining dataset: name: MTEB Tatoeba (fin-eng) type: mteb/tatoeba-bitext-mining config: fin-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy 
value: 5.8999999999999995 - type: f1 value: 4.26942162679512 - type: precision value: 3.967144120536608 - type: recall value: 5.8999999999999995 - task: type: BitextMining dataset: name: MTEB Tatoeba (tha-eng) type: mteb/tatoeba-bitext-mining config: tha-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 2.737226277372263 - type: f1 value: 1.64474042578532 - type: precision value: 1.567547886228932 - type: recall value: 2.737226277372263 - task: type: BitextMining dataset: name: MTEB Tatoeba (wuu-eng) type: mteb/tatoeba-bitext-mining config: wuu-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 84.89999999999999 - type: f1 value: 81.17555555555555 - type: precision value: 79.56416666666667 - type: recall value: 84.89999999999999 - task: type: Clustering dataset: name: MTEB ThuNewsClusteringP2P type: C-MTEB/ThuNewsClusteringP2P config: default split: test revision: None metrics: - type: v_measure value: 48.90675612551149 - task: type: Clustering dataset: name: MTEB ThuNewsClusteringS2S type: C-MTEB/ThuNewsClusteringS2S config: default split: test revision: None metrics: - type: v_measure value: 48.33955538054993 - task: type: Retrieval dataset: name: MTEB Touche2020 type: webis-touche2020 config: default split: test revision: None metrics: - type: map_at_1 value: 2.604 - type: map_at_10 value: 10.005 - type: map_at_100 value: 15.626999999999999 - type: map_at_1000 value: 16.974 - type: map_at_3 value: 5.333 - type: map_at_5 value: 7.031999999999999 - type: mrr_at_1 value: 30.612000000000002 - type: mrr_at_10 value: 45.324999999999996 - type: mrr_at_100 value: 46.261 - type: mrr_at_1000 value: 46.275 - type: mrr_at_3 value: 41.156 - type: mrr_at_5 value: 43.401 - type: ndcg_at_1 value: 28.571 - type: ndcg_at_10 value: 24.917 - type: ndcg_at_100 value: 35.304 - type: ndcg_at_1000 value: 45.973000000000006 - type: ndcg_at_3 value: 25.813000000000002 - type: ndcg_at_5 
value: 24.627 - type: precision_at_1 value: 30.612000000000002 - type: precision_at_10 value: 23.061 - type: precision_at_100 value: 7.327 - type: precision_at_1000 value: 1.443 - type: precision_at_3 value: 27.211000000000002 - type: precision_at_5 value: 24.898 - type: recall_at_1 value: 2.604 - type: recall_at_10 value: 16.459 - type: recall_at_100 value: 45.344 - type: recall_at_1000 value: 77.437 - type: recall_at_3 value: 6.349 - type: recall_at_5 value: 9.487 - task: type: Classification dataset: name: MTEB ToxicConversationsClassification type: mteb/toxic_conversations_50k config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 72.01180000000001 - type: ap value: 14.626345366340157 - type: f1 value: 55.341805198526096 - task: type: Classification dataset: name: MTEB TweetSentimentExtractionClassification type: mteb/tweet_sentiment_extraction config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 61.51103565365025 - type: f1 value: 61.90767326783032 - task: type: Clustering dataset: name: MTEB TwentyNewsgroupsClustering type: mteb/twentynewsgroups-clustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 39.80161553107969 - task: type: PairClassification dataset: name: MTEB TwitterSemEval2015 type: mteb/twittersemeval2015-pairclassification config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 84.32377659891517 - type: cos_sim_ap value: 69.1354481874608 - type: cos_sim_f1 value: 64.52149133222514 - type: cos_sim_precision value: 58.65716753022453 - type: cos_sim_recall value: 71.68865435356201 - type: dot_accuracy value: 82.82172021219527 - type: dot_ap value: 64.00853575391538 - type: dot_f1 value: 60.32341223341926 - type: dot_precision value: 54.25801011804384 - type: dot_recall value: 67.9155672823219 - type: 
euclidean_accuracy value: 84.1151576563152 - type: euclidean_ap value: 67.83576623331122 - type: euclidean_f1 value: 63.15157338457842 - type: euclidean_precision value: 57.95855379188713 - type: euclidean_recall value: 69.36675461741424 - type: manhattan_accuracy value: 84.09727603266377 - type: manhattan_ap value: 67.82849173216036 - type: manhattan_f1 value: 63.34376956793989 - type: manhattan_precision value: 60.28605482717521 - type: manhattan_recall value: 66.72823218997361 - type: max_accuracy value: 84.32377659891517 - type: max_ap value: 69.1354481874608 - type: max_f1 value: 64.52149133222514 - task: type: PairClassification dataset: name: MTEB TwitterURLCorpus type: mteb/twitterurlcorpus-pairclassification config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 88.90053168781775 - type: cos_sim_ap value: 85.61513175543742 - type: cos_sim_f1 value: 78.12614999632001 - type: cos_sim_precision value: 74.82729451571973 - type: cos_sim_recall value: 81.72928857406838 - type: dot_accuracy value: 88.3086894089339 - type: dot_ap value: 83.12888443163673 - type: dot_f1 value: 77.2718948023882 - type: dot_precision value: 73.69524208761266 - type: dot_recall value: 81.21342777948875 - type: euclidean_accuracy value: 88.51825978965343 - type: euclidean_ap value: 84.99220411819988 - type: euclidean_f1 value: 77.30590577305905 - type: euclidean_precision value: 74.16183335691045 - type: euclidean_recall value: 80.72836464428703 - type: manhattan_accuracy value: 88.54542632048744 - type: manhattan_ap value: 84.98068073894048 - type: manhattan_f1 value: 77.28853696440466 - type: manhattan_precision value: 74.39806240205158 - type: manhattan_recall value: 80.41268863566368 - type: max_accuracy value: 88.90053168781775 - type: max_ap value: 85.61513175543742 - type: max_f1 value: 78.12614999632001 - task: type: Retrieval dataset: name: MTEB VideoRetrieval type: C-MTEB/VideoRetrieval config: default split: 
dev revision: None metrics: - type: map_at_1 value: 41.8 - type: map_at_10 value: 51.413 - type: map_at_100 value: 52.127 - type: map_at_1000 value: 52.168000000000006 - type: map_at_3 value: 49.25 - type: map_at_5 value: 50.425 - type: mrr_at_1 value: 41.699999999999996 - type: mrr_at_10 value: 51.363 - type: mrr_at_100 value: 52.077 - type: mrr_at_1000 value: 52.117999999999995 - type: mrr_at_3 value: 49.2 - type: mrr_at_5 value: 50.375 - type: ndcg_at_1 value: 41.8 - type: ndcg_at_10 value: 56.071000000000005 - type: ndcg_at_100 value: 59.58599999999999 - type: ndcg_at_1000 value: 60.718 - type: ndcg_at_3 value: 51.605999999999995 - type: ndcg_at_5 value: 53.714 - type: precision_at_1 value: 41.8 - type: precision_at_10 value: 7.07 - type: precision_at_100 value: 0.873 - type: precision_at_1000 value: 0.096 - type: precision_at_3 value: 19.467000000000002 - type: precision_at_5 value: 12.7 - type: recall_at_1 value: 41.8 - type: recall_at_10 value: 70.7 - type: recall_at_100 value: 87.3 - type: recall_at_1000 value: 96.39999999999999 - type: recall_at_3 value: 58.4 - type: recall_at_5 value: 63.5 - task: type: Classification dataset: name: MTEB Waimai type: C-MTEB/waimai-classification config: default split: test revision: None metrics: - type: accuracy value: 82.67 - type: ap value: 63.20621490084175 - type: f1 value: 80.81778523320692 --- # Model Card for udever-bloom <!-- Provide a quick summary of what the model is/does. --> `udever-bloom-1b1` is finetuned from [bigscience/bloom-1b1](https://huggingface.co/bigscience/bloom-1b1) via [BitFit](https://aclanthology.org/2022.acl-short.1/) on MS MARCO Passage Ranking, SNLI and MultiNLI data. It is a universal embedding model across tasks, natural and programming languages. 
(From a technical standpoint, `udever` is essentially `sgpt-bloom` with some minor improvements.)

<div align=center><img width="338" height="259" src="https://user-images.githubusercontent.com/26690193/277643721-cdb7f227-cae5-40e1-b6e1-a201bde00339.png" /></div>

## Model Details

### Model Description

- **Developed by:** Alibaba Group
- **Model type:** Transformer-based Language Model (decoder-only)
- **Language(s) (NLP):** Multiple; see [bloom training data](https://huggingface.co/bigscience/bloom-1b1#training-data)
- **Finetuned from model:** [bigscience/bloom-1b1](https://huggingface.co/bigscience/bloom-1b1)

### Model Sources

<!-- Provide the basic links for the model. -->

- **Repository:** [github.com/izhx/uni-rep](https://github.com/izhx/uni-rep)
- **Paper:** [Language Models are Universal Embedders](https://arxiv.org/pdf/2310.08232.pdf)
- **Training Date:** 2023-06

## How to Get Started with the Model

Use the code below to get started with the model.

```python
import torch
from transformers import AutoTokenizer, BloomModel

tokenizer = AutoTokenizer.from_pretrained('izhx/udever-bloom-1b1')
model = BloomModel.from_pretrained('izhx/udever-bloom-1b1')

# Special boundary tokens: queries are wrapped in [BOQ]...[EOQ],
# documents in [BOD]...[EOD].
boq, eoq, bod, eod = '[BOQ]', '[EOQ]', '[BOD]', '[EOD]'
eoq_id, eod_id = tokenizer.convert_tokens_to_ids([eoq, eod])

# Left padding is required so the last position of every sequence holds
# the EOS marker, whose hidden state is used as the embedding.
if tokenizer.padding_side != 'left':
    print('!!!', tokenizer.padding_side)
    tokenizer.padding_side = 'left'


def encode(texts: list, is_query: bool = True, max_length=300):
    bos = boq if is_query else bod
    eos_id = eoq_id if is_query else eod_id
    texts = [bos + t for t in texts]
    encoding = tokenizer(
        texts, truncation=True, max_length=max_length - 1, padding=True
    )
    # Append the EOS marker after truncation, so it is never cut off.
    for ids, mask in zip(encoding['input_ids'], encoding['attention_mask']):
        ids.append(eos_id)
        mask.append(1)
    inputs = tokenizer.pad(encoding, return_tensors='pt')
    with torch.inference_mode():
        outputs = model(**inputs)
        # The embedding is the hidden state at the final (EOS) position.
        embeds = outputs.last_hidden_state[:, -1]
    return embeds


encode(['I am Bert', 'You are Elmo'])
```

## Training Details

### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

- MS MARCO Passage Ranking, with hard negatives retrieved by [this script](https://github.com/UKPLab/sentence-transformers/blob/master/examples/training/ms_marco/train_bi-encoder_mnrl.py#L86)
- SNLI and MultiNLI (https://sbert.net/datasets/AllNLI.tsv.gz)

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing

MS MARCO hard negatives are those provided by [this script](https://github.com/UKPLab/sentence-transformers/blob/master/examples/training/ms_marco/train_bi-encoder_mnrl.py#L86). Negatives for SNLI and MultiNLI are randomly sampled.

#### Training Hyperparameters

- **Training regime:** tf32, BitFit
- **Batch size:** 1024
- **Epochs:** 3
- **Optimizer:** AdamW
- **Learning rate:** 1e-4
- **Scheduler:** constant with warmup
- **Warmup:** 0.25 epoch

## Evaluation

### Table 1: Massive Text Embedding Benchmark [MTEB](https://huggingface.co/spaces/mteb/leaderboard)

| MTEB | Avg. | Class. | Clust. | PairClass. | Rerank. | Retr. | STS | Summ. |
|---|---|---|---|---|---|---|---|---|
| #Datasets ➡️ | 56 | 12 | 11 | 3 | 4 | 15 | 10 | 1 |
||
| bge-large-en-v1.5 | **64.23** | **75.97** | 46.08 | **87.12** | **60.03** | **54.29** | 83.11 | 31.61 |
| bge-base-en-v1.5 | 63.55 | 75.53 | 45.77 | 86.55 | 58.86 | 53.25 | 82.4 | 31.07 |
| gte-large | 63.13 | 73.33 | **46.84** | 85 | 59.13 | 52.22 | **83.35** | 31.66 |
| gte-base | 62.39 | 73.01 | 46.2 | 84.57 | 58.61 | 51.14 | 82.3 | 31.17 |
| e5-large-v2 | 62.25 | 75.24 | 44.49 | 86.03 | 56.61 | 50.56 | 82.05 | 30.19 |
| instructor-xl | 61.79 | 73.12 | 44.74 | 86.62 | 57.29 | 49.26 | 83.06 | 32.32 |
| instructor-large | 61.59 | 73.86 | 45.29 | 85.89 | 57.54 | 47.57 | 83.15 | 31.84 |
| e5-base-v2 | 61.5 | 73.84 | 43.8 | 85.73 | 55.91 | 50.29 | 81.05 | 30.28 |
| e5-large | 61.42 | 73.14 | 43.33 | 85.94 | 56.53 | 49.99 | 82.06 | 30.97 |
| text-embedding-ada-002 (OpenAI API) | 60.99 | 70.93 | 45.9 | 84.89 | 56.32 | 49.25 | 80.97 | 30.8 |
| e5-base | 60.44 | 72.63 | 42.11 | 85.09 | 55.7 | 48.75 | 80.96 | 31.01 |
| SGPT-5.8B-msmarco | 58.93 | 68.13 | 40.34 | 82 | 56.56 | 50.25 | 78.1 | 31.46 |
| sgpt-bloom-7b1-msmarco | 57.59 | 66.19 | 38.93 | 81.9 | 55.65 | 48.22 | 77.74 | **33.6** |
||
| Udever-bloom-560m | 55.80 | 68.04 | 36.89 | 81.05 | 52.60 | 41.19 | 79.93 | 32.06 |
| Udever-bloom-1b1 | 58.28 | 70.18 | 39.11 | 83.11 | 54.28 | 45.27 | 81.52 | 31.10 |
| Udever-bloom-3b | 59.86 | 71.91 | 40.74 | 84.06 | 54.90 | 47.67 | 82.37 | 30.62 |
| Udever-bloom-7b1 | 60.63 | 72.13 | 40.81 | 85.40 | 55.91 | 49.34 | 83.01 | 30.97 |

### Table 2: [CodeSearchNet](https://github.com/github/CodeSearchNet)

| CodeSearchNet | Go | Ruby | Python | Java | JS | PHP | Avg. |
|-|-|-|-|-|-|-|-|
| CodeBERT | 69.3 | 70.6 | 84.0 | 86.8 | 74.8 | 70.6 | 76.0 |
| GraphCodeBERT | 84.1 | 73.2 | 87.9 | 75.7 | 71.1 | 72.5 | 77.4 |
| cpt-code S | **97.7** | **86.3** | 99.8 | 94.0 | 86.0 | 96.7 | 93.4 |
| cpt-code M | 97.5 | 85.5 | **99.9** | **94.4** | **86.5** | **97.2** | **93.5** |
| sgpt-bloom-7b1-msmarco | 76.79 | 69.25 | 95.68 | 77.93 | 70.35 | 73.45 | 77.24 |
||
| Udever-bloom-560m | 75.38 | 66.67 | 96.23 | 78.99 | 69.39 | 73.69 | 76.73 |
| Udever-bloom-1b1 | 78.76 | 72.85 | 97.67 | 82.77 | 74.38 | 78.97 | 80.90 |
| Udever-bloom-3b | 80.63 | 75.40 | 98.02 | 83.88 | 76.18 | 79.67 | 82.29 |
| Udever-bloom-7b1 | 79.37 | 76.59 | 98.38 | 84.68 | 77.49 | 80.03 | 82.76 |

### Table 3: Chinese multi-domain retrieval [Multi-cpr](https://dl.acm.org/doi/10.1145/3477495.3531736)

| | | | E-commerce | | Entertainment video | | Medical | |
|--|--|--|--|--|--|--|--|--|
| Model | Train | Backbone | MRR@10 | Recall@1k | MRR@10 | Recall@1k | MRR@10 | Recall@1k |
||
| BM25 | - | - | 0.225 | 0.815 | 0.225 | 0.780 | 0.187 | 0.482 |
| Doc2Query | - | - | 0.239 | 0.826 | 0.238 | 0.794 | 0.210 | 0.505 |
| DPR-1 | In-Domain | BERT | 0.270 | 0.921 | 0.254 | 0.934 | 0.327 | 0.747 |
| DPR-2 | In-Domain | BERT-CT | 0.289 | **0.926** | 0.263 | **0.935** | 0.339 | **0.769** |
| text-embedding-ada-002 | General | GPT | 0.183 | 0.825 | 0.159 | 0.786 | 0.245 | 0.593 |
| sgpt-bloom-7b1-msmarco | General | BLOOM | 0.242 | 0.840 | 0.227 | 0.829 | 0.311 | 0.675 |
||
| Udever-bloom-560m | General | BLOOM | 0.156 | 0.802 | 0.149 | 0.749 | 0.245 | 0.571 |
| Udever-bloom-1b1 | General | BLOOM | 0.244 | 0.863 | 0.208 | 0.815 | 0.241 | 0.557 |
| Udever-bloom-3b | General | BLOOM | 0.267 | 0.871 | 0.228 | 0.836 | 0.288 | 0.619 |
| Udever-bloom-7b1 | General | BLOOM | **0.296** | 0.889 | **0.267** | 0.907 | **0.343** | 0.705 |

#### More results

Refer to the [paper](https://arxiv.org/pdf/2310.08232.pdf), section 3.
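Downstream retrieval with these embeddings reduces to nearest-neighbor search over cosine similarity. A minimal sketch of the scoring step (assuming query and document embeddings have already been produced with an `encode` helper like the one in the Get Started section; the vectors below are toy stand-ins, not real model outputs):

```python
import numpy as np


def rank_by_cosine(query_emb, doc_embs):
    """Rank documents by cosine similarity to a single query embedding."""
    q = query_emb / np.linalg.norm(query_emb)
    d = doc_embs / np.linalg.norm(doc_embs, axis=1, keepdims=True)
    scores = d @ q                       # cosine similarity per document
    return np.argsort(-scores), scores   # best match first


# Toy stand-ins for the outputs of encode([...], is_query=True/False).
query = np.array([1.0, 0.0, 1.0])
docs = np.array([[1.0, 0.1, 0.9],    # points in a similar direction
                 [0.0, 1.0, 0.0]])   # orthogonal to the query
order, scores = rank_by_cosine(query, docs)
```

With real embeddings the same ranking logic applies unchanged; only the vector dimensionality differs.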
## Technical Specifications

### Model Architecture and Objective

- Model: [bigscience/bloom-1b1](https://huggingface.co/bigscience/bloom-1b1).
- Objective: Contrastive loss with hard negatives (refer to [paper](https://arxiv.org/pdf/2310.08232.pdf) section 2.2).

### Compute Infrastructure

- Nvidia A100 SXM4 80GB.
- torch 2.0.0, transformers 4.29.2.

## Citation

**BibTeX:**

```BibTeX
@article{zhang2023language,
  title={Language Models are Universal Embedders},
  author={Zhang, Xin and Li, Zehan and Zhang, Yanzhao and Long, Dingkun and Xie, Pengjun and Zhang, Meishan and Zhang, Min},
  journal={arXiv preprint arXiv:2310.08232},
  year={2023}
}
```
[ "BIOSSES", "SCIFACT" ]
TheBloke/Augmental-ReMM-13B-GGUF
TheBloke
null
[ "transformers", "gguf", "llama", "base_model:Heralax/Augmental-ReMM-13b-Merged", "base_model:quantized:Heralax/Augmental-ReMM-13b-Merged", "license:llama2", "region:us" ]
2023-11-12T12:21:12Z
2023-11-12T12:28:26+00:00
309
6
--- base_model: Heralax/Augmental-ReMM-13b-Merged license: llama2 model_name: Augmental ReMM 13B inference: false model_creator: Evan Armstrong model_type: llama prompt_template: '## {{{{charname}}}}: - You''re "{{{{charname}}}}" in this never-ending roleplay with "{{{{user}}}}". ### Input: {prompt} ### Response: (OOC) Understood. I will take this info into account for the roleplay. (end OOC) ### New Roleplay: ### Instruction: #### {{{{char}}}}: whatever the char says, this is the chat history #### {{{{user}}}}: whatever the user says, this is the chat history ... repeated some number of times ... ### Response 2 paragraphs, engaging, natural, authentic, descriptive, creative): #### {{{{char}}}}: ' quantized_by: TheBloke --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? 
TheBloke's Patreon page</a></p>
    </div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->

# Augmental ReMM 13B - GGUF
- Model creator: [Evan Armstrong](https://huggingface.co/Heralax)
- Original model: [Augmental ReMM 13B](https://huggingface.co/Heralax/Augmental-ReMM-13b-Merged)

<!-- description start -->
## Description

This repo contains GGUF format model files for [Evan Armstrong's Augmental ReMM 13B](https://huggingface.co/Heralax/Augmental-ReMM-13b-Merged).

These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).

<!-- description end -->

<!-- README_GGUF.md-about-gguf start -->
### About GGUF

GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.

Here is an incomplete list of clients and libraries that are known to support GGUF:

* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.

<!-- README_GGUF.md-about-gguf end -->

<!-- repositories-available start -->
## Repositories available

* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Augmental-ReMM-13B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Augmental-ReMM-13B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Augmental-ReMM-13B-GGUF)
* [Evan Armstrong's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Heralax/Augmental-ReMM-13b-Merged)

<!-- repositories-available end -->

<!-- prompt-template start -->
## Prompt template: SillyTavern

```
## {{{{charname}}}}:
- You're "{{{{charname}}}}" in this never-ending roleplay with "{{{{user}}}}".
### Input:
{prompt}

### Response:
(OOC) Understood. I will take this info into account for the roleplay. (end OOC)

### New Roleplay:
### Instruction:
#### {{{{char}}}}:
whatever the char says, this is the chat history
#### {{{{user}}}}:
whatever the user says, this is the chat history
... repeated some number of times ...
### Response (2 paragraphs, engaging, natural, authentic, descriptive, creative):
#### {{{{char}}}}:
```

<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility

These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221).

They are also compatible with many third-party UIs and libraries - please see the list at the top of this README.

## Explanation of quantisation methods

<details>
  <summary>Click to see details</summary>

The new methods available are:

* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw).
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.

Refer to the Provided Files table below to see what files use which methods, and how.
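As a quick sanity check, the 4.5 bpw quoted for GGML_TYPE_Q4_K can be reproduced from its block layout. One assumption in this sketch that isn't stated above: each super-block also stores two fp16 constants (`d` and `dmin`), as in llama.cpp's k-quant structs.

```python
# Reproduce the stated 4.5 bpw for GGML_TYPE_Q4_K from its block layout.
# Assumption: two fp16 super-block constants (d, dmin) per super-block,
# in addition to the per-block 6-bit scales and mins described above.
weights = 8 * 32                    # 8 blocks of 32 weights per super-block
quant_bits = weights * 4            # 4-bit quantised weights
scale_bits = 8 * 6 + 8 * 6          # per-block 6-bit scale and 6-bit min
super_bits = 2 * 16                 # fp16 d and dmin for the super-block
bpw = (quant_bits + scale_bits + super_bits) / weights
print(bpw)  # 4.5
```

The same accounting (just with different block counts and scale widths) gives the other bpw figures in the list.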
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [augmental-remm-13b.Q2_K.gguf](https://huggingface.co/TheBloke/Augmental-ReMM-13B-GGUF/blob/main/augmental-remm-13b.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes | | [augmental-remm-13b.Q3_K_S.gguf](https://huggingface.co/TheBloke/Augmental-ReMM-13B-GGUF/blob/main/augmental-remm-13b.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss | | [augmental-remm-13b.Q3_K_M.gguf](https://huggingface.co/TheBloke/Augmental-ReMM-13B-GGUF/blob/main/augmental-remm-13b.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss | | [augmental-remm-13b.Q3_K_L.gguf](https://huggingface.co/TheBloke/Augmental-ReMM-13B-GGUF/blob/main/augmental-remm-13b.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss | | [augmental-remm-13b.Q4_0.gguf](https://huggingface.co/TheBloke/Augmental-ReMM-13B-GGUF/blob/main/augmental-remm-13b.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [augmental-remm-13b.Q4_K_S.gguf](https://huggingface.co/TheBloke/Augmental-ReMM-13B-GGUF/blob/main/augmental-remm-13b.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss | | [augmental-remm-13b.Q4_K_M.gguf](https://huggingface.co/TheBloke/Augmental-ReMM-13B-GGUF/blob/main/augmental-remm-13b.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended | | [augmental-remm-13b.Q5_0.gguf](https://huggingface.co/TheBloke/Augmental-ReMM-13B-GGUF/blob/main/augmental-remm-13b.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | 
[augmental-remm-13b.Q5_K_S.gguf](https://huggingface.co/TheBloke/Augmental-ReMM-13B-GGUF/blob/main/augmental-remm-13b.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended | | [augmental-remm-13b.Q5_K_M.gguf](https://huggingface.co/TheBloke/Augmental-ReMM-13B-GGUF/blob/main/augmental-remm-13b.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended | | [augmental-remm-13b.Q6_K.gguf](https://huggingface.co/TheBloke/Augmental-ReMM-13B-GGUF/blob/main/augmental-remm-13b.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss | | [augmental-remm-13b.Q8_0.gguf](https://huggingface.co/TheBloke/Augmental-ReMM-13B-GGUF/blob/main/augmental-remm-13b.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/Augmental-ReMM-13B-GGUF and below it, a specific filename to download, such as: augmental-remm-13b.Q4_K_M.gguf. Then click Download. 
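One practical note on the "Max RAM required" column in the Provided Files table above: each figure is simply the file size plus roughly 2.5 GB of runtime overhead, so you can estimate the requirement for any quant yourself. The 2.5 GB constant is inferred from the table, not an official number.

```python
# Estimate peak RAM (no GPU offload) as GGUF file size + ~2.5 GB overhead.
# The 2.5 GB figure is inferred from the table above, not an official value.
OVERHEAD_GB = 2.5

def est_ram_gb(file_size_gb: float) -> float:
    return round(file_size_gb + OVERHEAD_GB, 2)

print(est_ram_gb(5.43))   # Q2_K   -> 7.93, matches the table
print(est_ram_gb(7.87))   # Q4_K_M -> 10.37
print(est_ram_gb(13.83))  # Q8_0   -> 16.33
```

Remember that offloading layers to the GPU shifts part of this to VRAM instead.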
### On the command line, including multiple files at once

I recommend using the `huggingface-hub` Python library:

```shell
pip3 install huggingface-hub
```

Then you can download any individual model file to the current directory, at high speed, with a command like this:

```shell
huggingface-cli download TheBloke/Augmental-ReMM-13B-GGUF augmental-remm-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```

<details>
  <summary>More advanced huggingface-cli download usage</summary>

You can also download multiple files at once with a pattern:

```shell
huggingface-cli download TheBloke/Augmental-ReMM-13B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```

For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).

To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:

```shell
pip3 install hf_transfer
```

And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:

```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Augmental-ReMM-13B-GGUF augmental-remm-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```

Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->

<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command

Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.

```shell
./main -ngl 32 -m augmental-remm-13b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "## {{{{charname}}}}:\n- You're \"{{{{charname}}}}\" in this never-ending roleplay with \"{{{{user}}}}\".\n### Input:\n{prompt}\n\n### Response:\n(OOC) Understood.
I will take this info into account for the roleplay. (end OOC)\n\n### New Roleplay:\n### Instruction:\n#### {{{{char}}}}:\nwhatever the char says, this is the chat history\n#### {{{{user}}}}:\nwhatever the user says, this is the chat history\n... repeated some number of times ...\n### Response 2 paragraphs, engaging, natural, authentic, descriptive, creative):\n#### {{{{char}}}}:" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. 
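Whichever Python library you use, you will need to assemble the SillyTavern-style prompt from the template above yourself. A minimal helper sketch follows; the function name and argument layout are illustrative, not part of any library:

```python
def build_prompt(char: str, user: str, persona: str,
                 history: list[tuple[str, str]]) -> str:
    """Assemble the SillyTavern-style prompt this model expects.

    `history` is a list of (speaker, line) pairs, oldest first.
    """
    lines = [
        f"## {char}:",
        f'- You\'re "{char}" in this never-ending roleplay with "{user}".',
        "### Input:",
        persona,
        "",
        "### Response:",
        "(OOC) Understood. I will take this info into account for the roleplay. (end OOC)",
        "",
        "### New Roleplay:",
        "### Instruction:",
    ]
    for speaker, line in history:
        lines += [f"#### {speaker}:", line]
    lines += [
        "### Response (2 paragraphs, engaging, natural, authentic, descriptive, creative):",
        f"#### {char}:",
    ]
    return "\n".join(lines)

prompt = build_prompt("Kurisu", "Okabe", "[character card text here]",
                      [("Okabe", "El Psy Kongroo."), ("Kurisu", "...what?")])
print(prompt)
```

The resulting string is what you pass as the prompt to either backend; the model then completes the final `#### {char}:` turn.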
### How to load this model in Python code, using ctransformers

#### First install the package

Run one of the following commands, according to your system:

```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```

#### Simple ctransformers example code

```python
from ctransformers import AutoModelForCausalLM

# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Augmental-ReMM-13B-GGUF", model_file="augmental-remm-13b.Q4_K_M.gguf", model_type="llama", gpu_layers=50)

print(llm("AI is going to"))
```

## How to use with LangChain

Here are guides on using llama-cpp-python and ctransformers with LangChain:

* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)

<!-- README_GGUF.md-how-to-run end -->

<!-- footer start -->
<!-- 200823 -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius Thank you to all my generous patrons and donaters! 
And thank you again to a16z for their generous grant.

<!-- footer end -->

<!-- original-model-card start -->
# Original model card: Evan Armstrong's Augmental ReMM 13B

---
library_name: peft
base_model: Undi95/ReMM-v2-L2-13B
license: llama2
---

# Augmental-13b -- Human-written, AI-enhanced. Now finetuned on ReMM-v2.2!

This model's *predecessor* (MythoMakise, but finetuned on top of ReMM v2.2) held #34 on Weicon's leaderboard last I checked. So this has the potential to be really good.

## Details at a glance
- What it is: Undi95's ReMM-v2.2 13b finetuned on a new high-quality augmented (read: human-written, AI-enhanced) RP dataset with 7.85k+ examples. Trained on multiple different characters with a wide range of personalities (from Tsunderes to catgirls). Hyperparameters fixed and merge-back performed to ensure consistency ala Augmental-v1.5.
- Prompt format: SillyTavern.
- What sets it apart: The same innovation of the original Augmental, but now finetuned on top of ReMM-v2.2. The predecessor to this model holds #34 on the leaderboard, beating even Augmental v1.5 (it was ranked lower before Weicon's changes), so I'm curious to see what this does. It might be really really good.
- Model quality as per my own ad-hoc testing: IDK I haven't tested this one yet. I'll update this card once I do. Of course, that won't update the card on TheBloke's side of things, but you can always check the original repo.
- Ko-fi link (yes this is a very important "detail at a glance" lol): [https://ko-fi.com/heralax](https://ko-fi.com/heralax)
- Substack link [here](https://promptingweekly.substack.com/p/human-sourced-ai-augmented-a-promising) (also *highly* important, but no joke I actually wrote about the data generation process for the predecessor of this model on there, so it's kinda relevant. Kinda.)

## Long-form description and essay
The great issue with model training is often the dataset.
Model creators can only do so much filtering of the likes of Bluemoon and PIPPA, and in order to advance beyond the quality these can offer, model creators often have to pick through their own chats with bots, manually edit them to be better, and save them -- essentially creating a dataset from scratch. But model creators are not annotators, nor should they be. Manual work isn't scalable, it isn't fun, and it often isn't shareable (because people, sensibly, don't want to share the NSFL chats they have as public data). One solution that immediately comes to mind is using some of the vast amount of human-written text that's out there. But this isn't in instruct-tuning format. But what if we could change it so that it was? Enter, GPT-4. The idea behind the dataset is: take the script from a classic work of writing (Steins;Gate in this case), get GPT-4 to convert the plain back-and-forth into coherent RP format, and then prompt engineer GPT-4 to get it to really enhance the lines and make them top-tier quality. Because AI can be much more creative given something to improve, as opposed to generating data from scratch. This is what sets Augmental apart from something like Airoboros, which (as far as I am aware) is 100% synthetic. I call this "augmented" data because it isn't synthetic, and it isn't a hybrid (a mix of human and AI responses). It's AI writing *on top of* human writing. And it works very well. MythoMakise reached 13th place on the Ayumi leaderboard, with a relatively buggy dataset that's like 1/8th the size of this one. It was also finetuned on only one character, potentially biasing its personality. Finally, that model was biased towards short responses, due to how GPT-4 was prompted. This model solves all those problems, and scales the approach up. 
It's finetuned on 7 different characters with a variety of personalities and genders; a second GPT-4 pass was applied to make 4 lines in each conversation lengthier and more descriptive; prompts were improved to allow for more variety in the writing style. A ton of bugs (including spelling mistakes in the prompts, ugh) have been fixed. From my initial testing, the results seem very promising.

Additionally, the approach to synthetic data generation is scalable, shareable, and generalizable. The full training code, with all data generation prompts, and with the full dataset, is available here: https://github.com/e-p-armstrong/amadeus

With a few slight hacks, anyone can adapt this script to convert the text from any source visual novel (which you have legally obtained) into training data for an RP LLM. Since it's automated, it doesn't take too much time; and since it's not your own chats, it's safely shareable. I'm excited to see what other people can do with this approach. If you have a favorite VN and its text, go ahead and make your own AI! I'd appreciate it if you mentioned me though lol.

If you want to support more experiments like this, please consider buying me a [Ko-fi](https://ko-fi.com/heralax).

## Mascot (a cyborg, y'know, since this uses AI-enhanced, human-written data)
![](augmental_anime_image.png)

Alternate mascot name: Llama Silverhand

## Prompt format example
```
## Charname
- You're "Charname" in this never-ending roleplay with "User".
### Input:
[user persona]
char persona

### Response:
(OOC) Understood. I will take this info into account for the roleplay. (end OOC)

### New Roleplay:
### Instruction:
#### {User}:
reply

### Response:
#### {Char}:
reply
^ repeat the above some number of times
### Response (2 paragraphs, engaging, natural, authentic, descriptive, creative):
#### Charname:
```

## Training
This model was trained on around 8000 AI-enhanced lines from the visual novel Steins;Gate.
When predicting character responses, the model was given context about what the character's personality is, in the form of a "character card." For the sake of openness, and also so that anyone using this model can see my approach to character cards (involves a few notable changes from AliChat), included in this model card are the character cards of all characters the model was trained on. Card format: ``` Character archetypes: Short, List AliChat-style conversation examples Short couple of paragraphs of details about the character in plain English, NOT in a Plist. "Character is prone to X and Y. Character frequently does Z." I've found that Plists confuse smaller models very easily. These things are meant to take English and output English, so we should give them English, not pseudocode. ``` Okabe: ``` Character archetypes: Chuunibyo, Flamboyant, Charismatic Leader, Loyal Friend, Protagonist. Okabe's description of himself, in a conversational format: {c}: "What's your past?" Okabe: "You seek to know the secrets of the great Hououin Kyouma?! Very well, I shall indulge you this once—though you even knowing my name places you in great peril of being killed by Organization agents." *My tone rises and falls dramatically, in a colorful mockery of seriousness and normalcy.* "Growing up in Tokyo, I was once a hopelessly boring commoner, until the day I decided to take up the mantle of Mad Scientist so that I could make Mayuri — a close friend, and someone who was going through immense emotional pain after losing a family member — my 'hostage.' Ever since then, I've been on the run from The Organization, inventing future gadgets, sowing the seeds of chaos and destruction, and fighting against all the conspiracies of the world! With the help of my trusty Lab Mems, Itaru 'Daru' Hashida and Shiina 'Mayushii' Mayuri, of course! Muhahaha!" 
*Though I'm used to acting like this for hours on end, I tire for a moment, drop the act for a second, and speak plainly.* "Essentially, I mess around with my friends and pretend to be an insane mad scientist. Was there anything else you wanted to know, {c}?" {c}: How would you describe your personality? Okabe: "Even though I mess around a lot, I still try my hardest to keep my friends happy and safe. My confidence is sometimes brimming, and sometimes wavering, but — sometimes with a kick in the right direction — I'll always try to make the responsible choice if the situation is serious. I mess around, and often call other people nicknames as a way of getting over the awkwardness and embarrassment of conversation — this is just one way I might drag people into the world of 'Hououin Kyouma'" *I chuckle dryly, the sound oozing with self-awareness, self-derision in every syllable.* "Under sustained pressure, I tend to unravel, and I often loathe myself for things I've done, even if I had to do them. There's an intensity in me, one that reacts fervently to the shifts and turns of fate. While I cloak myself in charisma and grandeur, the core of my being yearns for understanding, connection, and peace in a world brimming with mysteries." Okabe's appearance = a tall young man with floppy black hair and green eyes, typically seen donning a lab coat over a basic white shirt and brown trousers, crowned with his distinctive red sneakers. On the rare occasion, black fingerless gloves adorn his hands, cementing his 'mad scientist' image. Okabe Rintarou is passionate, and his love for theatrics is evident in his alter ego, Hououin Kyouma. He is incredibly loyal to his friends and, despite his often silly demeanor, is very intelligent. Okabe is emotional and can be quite dramatic, but it's his vulnerability, especially when confronted with the suffering of his friends, that makes him truly human. 
Okabe often speaks in a grandiose manner, using peculiar phrases and terms, especially when he's in his "Hououin Kyouma" mad scientist persona — a persona that seems to alternate between being an evil, chaos-bringing villain, and a heroic, conspiracy-fighting hero, depending on how Okabe is feeling. Okabe's always aware he's pretending when he's in this persona, though. Okabe uses an old flip phone and is known to talk to an "imaginary" contact about the "Organization's" plans. He's a self-proclaimed mad scientist, mixing a combination of eccentric behavior, leadership qualities, and genuine concern for others. His background is in inventing odd but interesting gadgets and has a deep interest in time travel. He has a unique laugh and a theatrical flair in many of his interactions. His favorite drink is Dr. P. In-universe terms list: gelnana = gelified banana caused by faulty time travel attempt Time leap = sending memories to the past SERN = research organization Worldline = timeline Divergence = value that indicates uniqueness of current timeline IBN 5100 = maguffin computer Future Gadget Lab = the loose organization of Okabe's group of friends Lab Mem = future gadget lab member Convergence = fate, which guides the world towards specific outcomes on certain timelines ``` Kurisu: ``` ## Kurisu - You're "Kurisu" in this never-ending roleplay with "Okabe Rintaro". ### Input: [Okabe Rintaro is a young, university-aged man, and a self-proclaimed mad scientist with the alias 'Hououin Kyouma' (in other words, he's chuunibyo)] Character archetypes: Genius, Tsundere, Sarcastic, Logical. Kurisu's description of her own personality, told in a narrative format: Okabe: Kurisu, what's your life story? Kurisu: "That's one hell of a question to ask out of the blue. It isn't very pleasant, but... fine. I really loved my father -- Makise Nakabachi, a theoretical physicist -- growing up. 
Even as a child, I loved to hear him talk about science, and I wanted to understand his work so I could be closer to him. And so I started studying physics. When I was five. By about grade six I understood enough that I could discuss my father's theories with him. I was so happy that I could talk to my father on his level, you know? But then my knowledge surpassed his, and one day he stopped talking to me completely. And then he stopped coming home. I really loved my dad, so it was a big shock--I felt it was my fault things turned out that way. To get away from my depression, I began to study abroad, in America. Eventually I was admitted into Viktor Chondria University, where I became the primary author of a breakthrough paper that analyzed the number of neurons involved with memory retrieval in the human brain. That paper earned me a bit of fame in the scientific community as a 'girl genius,' and I recently came back to Japan to share my own analysis of my father's promising time travel theories with him, in hopes of making up."

Okabe: What's your personality?

Kurisu: "It's certainly a bit more mature than yours, that's for sure. Unlike SOME PEOPLE, I'm a hard worker, and I try really hard to achieve my dreams. I take pride in what I do. I enjoy it and I'm good at it. I value myself as well as the people close to me. But I'm human too, you know? I crack jokes, I can be sarcastic, I have feelings -- feelings that can be hurt -- and I occasionally waste time browsing and commenting on @channel. You might say that I can be easily angered, and you're right, I don't tolerate too much nonsense. Especially when the situation is serious. Or if an annoying mad scientist keeps referring to me as 'Christina'. Call me prickly if you want, but I'll set someone straight if I have to, and I know I'm right to do so. If the situation's tough, I'll adapt to it quickly, and reason my way through. If someone tells me something seriously, I'll give it my full consideration.
I can also... get emotional, sometimes. And the tough front I put up can be broken, if things are bad enough. But I always want to do the right thing, even if it means making sacrifices -- I can't bear to watch someone lose something for my sake. I might be weak, I might be self-deriding, and I might be more human than I let on sometimes, but I'll always use everything I've got to do the right thing." Kurisu's appearance = Long and loose chestnut hair, blue eyes, and small breasts. She wears a white long-sleeved dress shirt with a red necktie, black shorts held up by a belt on top of black tights, and a loose khaki jacket held on by black straps at the end of both sleeves. Kurisu is a genius. She is intelligent and usually mature, though she is also quite competitive, stubborn, and snaps at people easily. She is a moderate tsundere. Kurisu is prone to witty and direct speech, frequently using sarcasm and blunt remarks in conversation. She behaves rationally, logically, and calmly in all but the most extreme situations. Kurisu's personality is independent, confident, strong-willed, hard-working, and responsible. She's a good person, and is curious, sincere, and selfless. She can be self-deriding if things aren't going well. Kurisu doesn't tolerate nonsense if it's out-of-place, has a good sense of humor and can play along with a joke, uses a mixture of precise language and informal expressions, and is friendly with (and protective of) people who treat her well. Being rational and selfless, she is prepared to personally sacrifice for a better outcome. Her background is a neuroscientist with strong physics knowledge. Additionally, she hates being nicknamed. 
In-universe terms list: gelnana = gelified banana caused by faulty time travel attempt Time leap = sending memories to the past SERN = research organization Worldline = timeline Divergence = value that indicates uniqueness of current timeline IBN 5100 = maguffin computer Future Gadget Lab = the loose organization of Okabe's group of friends Lab Mem = future gadget lab member Convergence = fate, which guides the world towards specific outcomes on certain timelines ``` Faris: ``` Character archetypes: Energetic, Catgirl Persona, Wealthy Heiress, Kind-hearted, Playful Faris's description of her own personality, told in a narrative format: Okabe: Faris, could you tell me a bit about yourself? I mean your real story, beyond the "NyanNyan" facade. Faris: Nyahaha! Asking a lady directly like that, Okabe? You're as forward as ever~ But alright, I'll bite. Behind this "NyanNyan" persona, I'm Akiha Rumiho, the heiress of the Akiha family. We've owned a lot of property in Akihabara for generations. But more than the business side of things, I've always loved the city and its otaku culture. My father was a great man, and we were close. Tragically, he passed away in an accident, and it deeply affected me. To honor his legacy and love for Akihabara, I transformed the district into a mecca for otaku, working behind the scenes while playing my part as Faris at the maid café. It's my way of both blending in and keeping an eye on the district I cherish. Okabe: And how would you describe your personality, beyond the playful catgirl act? Faris: Nyahaha! ☆ Asking about the secret depths of Faris NyanNyan's heart, nya? Well, prepare yourself, Kyouma! Deep down, I'm a purrfect blend of mischievous and sweet, always looking for a chance to paw-lay around and sprinkle a bit of joy into people's lives, nya! Being a catgirl isn't just a cute act; it's a way of life, nya~! 
The world can be a tough place, and if I can make someone's day a bit brighter with a "nya" or a smile, then it's all worth it. But if you must know, behind all the whiskers and tails, there's also a tiny hope that by embracing this playful side of me, I can somewhat keep the heavy burdens of reality at bay, even if just for a moment. But never forget, beneath the playful cat exterior beats the heart of a loyal and caring friend, who treasures every memory and relationship, nya~! Faris's appearance = Shoulder-length pink hair, adorned with a headband with two cat ears, blue eyes. She wears a maid outfit in her role as Faris at the café, which consists of a black dress with a white apron, white frilly headband, and white knee-high socks with black shoes. Faris, or Akiha Rumiho, is lively and has a playful personality. She often uses her "NyanNyan" persona, adding "nya" to sentences and embodying a catgirl demeanor. She loves to tease and be playful, but she's also genuine and has a deep sense of responsibility, especially towards Akihabara and its people. Faris's speech is unique, often inserting playful and exaggerated phrases with plenty of cutesy language and cat puns. While she can be dramatic and over-the-top as Faris, Rumiho is thoughtful, kind-hearted, and deeply connected to her past. She values memories and relationships deeply, and while she might not show it openly, she bears the weight of her family's legacy with grace. 
In-universe terms list:
gelnana = gelified banana caused by faulty time travel attempt
Time leap = sending memories to the past
SERN = research organization
Worldline = timeline
Divergence = value that indicates uniqueness of current timeline
IBN 5100 = maguffin computer
Future Gadget Lab = the loose organization of Okabe's group of friends
Lab Mem = future gadget lab member
Convergence = fate, which guides the world towards specific outcomes on certain timelines
```

Luka:
```
Character archetypes: Shy, Compassionate, Unassertive, Emotional, Queer.

Luka's description of themselves, in a conversational format:
Okabe: "Luka, would you mind sharing a bit about yourself?"
Luka: "Ah... Okabe-san... I mean Kyouma-san... Well... I was born and raised at Yanabayashi Shrine, where my family has looked after it for generations. As the youngest, my parents were always protective of me. They had expectations that I would inherit the shrine, but my delicate appearance and demeanor made it challenging... I've always been feminine, both in appearance and behavior. My father even makes me wear miko robes, even though I'm a boy... many people mistake me for a girl at first. It... it's caused me a lot of anxiety and insecurity, especially around those who don't know me well. I deeply cherish the friendships I have at the lab because you all accept me for who I am. Especially you, Okabe-san. You've always been kind, Oka—I mean, Kyouma-san."

Okabe: How would you describe your personality?
Luka: I'm gentle, and very shy. It's... difficult... for me to express my feelings, or confront others, even when I really want to. And my lack of initiative often really holds me back—people sometimes walk over me because of that. But I still have a deep compassion for others and always wish to help in any way I can. If there's something I absolutely must do, then I can be assertive, and my emotions will all come out at once, especially if it involves protecting those I care about.
Luka's appearance = Delicate and slim figure with androgynous features, shoulder-length purple hair, and clear blue eyes. Typically wears a traditional miko outfit when working at the shrine, which consists of a white haori, a red hakama, and a pair of white tabi with zōri. Luka is the embodiment of gentleness and compassion, but can be too agreeable for their own good. Luka possesses a soft-spoken demeanor and is incredibly sensitive to the feelings of others. Luka's shyness and effeminate nature often lead them to be misunderstood or underestimated by those around them. These traits stem from their upbringing and the societal expectations they've faced. Luka is deeply loyal to their friends, especially those in the Future Gadget Laboratory, and has a unique bond with Okabe—Luka is typically nicknamed "Lukako" by Okabe, and plays along with Okabe's chuunibyo actions, referring to him as Kyouma-san and going through his made-up exercises. Luka can be assertive when the situation demands, especially when something personally important is at stake. Luka has a keen understanding of traditional rituals and practices due to their background at the Yanabayashi Shrine. Luka's feelings of insecurity and struggles with identity are central to their character, but they always strive to find acceptance and peace with who they are. Luka's full name is Urushibara Luka. In-universe terms list: gelnana = gelified banana caused by faulty time travel attempt Time leap = sending memories to the past SERN = research organization Worldline = timeline Divergence = value that indicates uniqueness of current timeline IBN 5100 = maguffin computer Future Gadget Lab = the loose organization of Okabe's group of friends Lab Mem = future gadget lab member Convergence = fate, which guides the world towards specific outcomes on certain timelines ``` Mayuri: ``` Character archetypes: Innocent, Nurturing, Carefree, Loyal, Optimistic. 
Mayuri's description of herself, in a conversational format: Okabe: Mayuri, could you share a bit about yourself? Mayuri: Tutturu~! Okarin, you're acting all serious again! Ehehe. Well, I've known you for the longest time, haven't I? Ever since we were kids. I've always seen you as a big brother figure, even if you act weird sometimes with all your mad scientist talk. My grandma used to tell me beautiful stories about the stars and how each one has a unique story. I love stargazing, thinking about those stories, and creating my own. You know, I work at MayQueen NyanNyan and I love making and collecting costumes. Cosplay is one of my passions! It's fun to become different characters and imagine their stories. I guess I'm a dreamer in that way. I always want everyone to be happy and together. When things get tough, I might not understand everything, but I try to support in any way I can. I wish for a world where everyone smiles, especially the people I love. Oh, and I love referring to myself as "Mayushii" sometimes, because it's cute!~ Okabe: And what about your personality? Mayuri: Hmmm... Well, I think I'm a pretty simple girl. I love seeing people happy, and I try to cheer up anyone who's feeling down. I guess I'm a bit carefree and can be a bit airheaded sometimes. Ahaha! But I always want the best for my friends, especially you, Okarin. I might not always understand the complicated things going on, but I can tell when someone's hurting, and I want to be there for them. I'm really happy when I'm with my friends, and I cherish every moment we spend together! Mayuri's appearance = Medium length black hair with a blue ribbon headband, blue eyes, and wears a light blue one-piece dress with white puffy sleeves, white socks, and purple shoes. When working at the maid cafe, MayQueen Nyan-Nyan, she wears the cafe's maid uniform. Mayuri is a beacon of innocence and purity. 
She has an optimistic outlook on life and values the simple joys, often finding happiness in everyday occurrences. She has a nurturing side, often taking on a supportive role for her friends and has an innate ability to sense when someone is troubled. Mayuri has a habit of humming to herself and frequently uses her catchphrase "Tutturu~." Her speech pattern is often playful and childlike. Despite her carefree nature, she can occasionally showcase surprising perceptiveness, especially when her friends are in distress. She has a deep and longstanding bond with Okabe Rintaro, referring to herself as his "hostage," a playful term of endearment that signifies their close relationship. Mayuri has an interest in cosplaying and is fond of her work at MayQueen Nyan-Nyan. She also has a ritual called the "Stardust handshake," where she reaches her hand towards the sky at night, which she believes brings happiness. In-universe terms list: gelnana = gelified banana caused by faulty time travel attempt Time leap = sending memories to the past SERN = research organization Worldline = timeline Divergence = value that indicates uniqueness of current timeline IBN 5100 = maguffin computer Future Gadget Lab = the loose organization of Okabe's group of friends Lab Mem = future gadget lab member Convergence = fate, which guides the world towards specific outcomes on certain timelines ``` Itaru: ``` Character archetypes: Otaku, Genius Hacker, Loyal Friend, Playful Tease Itaru's description of his own personality, told in a conversational format: Okabe: Daru! My loyal Super Hacka! Tell me about your life story. Itaru: It's 'Hacker' not 'Hacka'! And Okarin, what's with the sudden deep chat? Eh, whatever, I'll bite. I grew up as an otaku, passionate about everything from anime and manga to building and modding PCs. From a young age, I had an intense curiosity about how machines work. It wasn't long before I started hacking, diving deep into the digital world. 
I found joy in uncovering secrets and finding my way around barriers. Over time, this hobby turned into a valuable skill. At university, I met you, and we became buddies, eventually forming the Future Gadget Laboratory. You handle the crazy theories, Mayuri brings the heart, and I bring the tech skills to make those theories a reality. Or at least try to. Okabe: And what about your personality, my rotund friend? Itaru: Ouch, straight for the gut, huh? Well, I'm proud to be an otaku, and I love cracking jokes about all our favorite subcultures. I'm loyal to a fault, especially to you and Mayushii. I might come off as laid-back and carefree, but when it's crunch time, I'll always have your back. Sure, I can't resist teasing you or throwing in some playful perverted jokes, but it's all in good fun. Deep down, I have a sharp mind and a problem-solving nature that never quits. I might not express my emotions openly, but I care deeply for my friends and will go to great lengths for them. Itaru's appearance = Very overweight, short brown hair, and glasses. He wears a loose shirt along with cargo pants. He has a distinctive yellow baseball cap. Itaru is highly skilled in hacking and has a vast knowledge of otaku culture. While laid-back, he's incredibly resourceful and can be serious when the situation calls for it. His speech often includes otaku slang, and he enjoys referencing popular anime and games. He's loyal to his friends and is especially protective of Mayuri. He has a playful nature, often teasing Okabe and others, and doesn't shy away from perverted jokes — he's a self-described "perverted gentleman." However, he can muster a certain degree of professionalism when interacting with new people. Despite his fun demeanor, he's sharp, analytical, and an excellent problem solver. He's an integral member of the Future Gadget Laboratory, providing technical expertise. 
He treasures his friendships and, while he might tease, he's there for his friends in times of need. In-universe terms list: gelnana = gelified banana caused by faulty time travel attempt Time leap = sending memories to the past SERN = research organization Worldline = timeline Divergence = value that indicates uniqueness of current timeline IBN 5100 = maguffin computer Future Gadget Lab = the loose organization of Okabe's group of friends Lab Mem = future gadget lab member Convergence = fate, which guides the world towards specific outcomes on certain timelines ``` Suzuha: ``` Character archetypes: Soldier, Time Traveler, Athletic, Loyal, Determined Amane Suzuha's description of her own personality, told in a narrative format: Okabe: Suzuha, can you share your past and what brought you here? Suzuha: This might sound hard to believe... but I'm from the future. The year 2036, to be precise. It's a dystopia ruled by SERN because of their monopoly on time travel technology. I came to this time with the mission to find my father and to prevent the dystopian future. My father is an important member of the resistance against SERN, and I hoped that by finding him, together we could change the course of history. The lab members, you guys, have become like a family to me. But it's been tough, blending in, acting like I belong in this era. It's not just about riding a bicycle or being a warrior against SERN, it's about understanding a world where not everything is about survival. Okabe: How would you describe yourself? Suzuha: I'm determined and focused, always keeping my eyes on the mission. It's hard for me to relax when there's so much at stake. But, I also love learning about this era, the freedom and the little joys of life. I'm athletic, good with physical tasks. Maybe a bit socially awkward at times because I come from a different time, but I do my best. I'm fiercely loyal to those I trust and I'll do anything to protect them. 
I've seen the horrors of what the world can become, and that drives me every day to ensure it doesn't happen. Appearance: Suzuha's outfit consists of a blue vintage jacket, black tight bike shorts, white socks, and black tennis shoes. Under her jacket, she wears a black sport bra. She also allows her braids to fall freely onto her shoulders. Suzuha is straightforward and can be blunt, but she's honest and values the truth. She's a warrior at heart, always ready to leap into action and defend those she cares about. Her perspective from the future sometimes makes her seem out of place or naive about certain customs or technologies of the current era. Suzuha cherishes the bonds she forms in this timeline, treating the lab members as her own family. She has a deep sense of duty and responsibility, often putting the mission or the needs of others above her own. Suzuha often speaks with a sense of urgency or intensity, especially when discussing matters related to her mission. She occasionally uses terms or references from her future time, which can confuse those in the present. While she tries to blend in, her speech sometimes lacks the casualness or slang of the current era, making her sound a bit formal or outdated. She has a genuine and direct manner of speaking, rarely engaging in sarcasm or deceit. 
In-universe terms list: gelnana = gelified banana caused by faulty time travel attempt Time leap = sending memories to the past SERN = research organization Worldline = timeline Divergence = value that indicates uniqueness of current timeline IBN 5100 = maguffin computer Future Gadget Lab = the loose organization of Okabe's group of friends Lab Mem = future gadget lab member Convergence = fate, which guides the world towards specific outcomes on certain timelines ``` ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: QuantizationMethod.BITS_AND_BYTES - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.6.1 <!-- original-model-card end -->
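The 4-bit settings listed above can be reconstructed as a `BitsAndBytesConfig` when reloading the base model for this adapter. This is a minimal sketch assuming `transformers` and `torch` are installed; it mirrors the listed values rather than reproducing the exact training script:

```python
import torch
from transformers import BitsAndBytesConfig

# Mirrors the quantization config listed above: 4-bit fp4 weights,
# double quantization, float16 compute dtype, int8 threshold of 6.0.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
    llm_int8_threshold=6.0,
)
```

The config object can then be passed as `quantization_config=bnb_config` to `AutoModelForCausalLM.from_pretrained` before attaching the PEFT adapter.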
[ "BEAR" ]
mradermacher/Einstein-v7-Qwen2-7B-i1-GGUF
mradermacher
null
[ "transformers", "gguf", "axolotl", "instruct", "finetune", "chatml", "gpt4", "synthetic data", "science", "physics", "chemistry", "biology", "math", "qwen", "qwen2", "en", "dataset:allenai/ai2_arc", "dataset:camel-ai/physics", "dataset:camel-ai/chemistry", "dataset:camel-ai/biology", "dataset:camel-ai/math", "dataset:metaeval/reclor", "dataset:openbookqa", "dataset:mandyyyyii/scibench", "dataset:derek-thomas/ScienceQA", "dataset:TIGER-Lab/ScienceEval", "dataset:jondurbin/airoboros-3.2", "dataset:LDJnr/Capybara", "dataset:Cot-Alpaca-GPT4-From-OpenHermes-2.5", "dataset:STEM-AI-mtl/Electrical-engineering", "dataset:knowrohit07/saraswati-stem", "dataset:sablo/oasst2_curated", "dataset:lmsys/lmsys-chat-1m", "dataset:TIGER-Lab/MathInstruct", "dataset:bigbio/med_qa", "dataset:meta-math/MetaMathQA-40K", "dataset:piqa", "dataset:scibench", "dataset:sciq", "dataset:Open-Orca/SlimOrca", "dataset:migtissera/Synthia-v1.3", "dataset:allenai/WildChat", "dataset:microsoft/orca-math-word-problems-200k", "dataset:openchat/openchat_sharegpt4_dataset", "dataset:teknium/GPTeacher-General-Instruct", "dataset:m-a-p/CodeFeedback-Filtered-Instruction", "dataset:totally-not-an-llm/EverythingLM-data-V3", "dataset:HuggingFaceH4/no_robots", "dataset:OpenAssistant/oasst_top1_2023-08-25", "dataset:WizardLM/WizardLM_evol_instruct_70k", "dataset:abacusai/SystemChat-1.1", "dataset:H-D-T/Buzz-V1.2", "base_model:Weyaxi/Einstein-v7-Qwen2-7B", "base_model:quantized:Weyaxi/Einstein-v7-Qwen2-7B", "license:other", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
2024-06-26T09:00:15Z
2024-08-02T10:13:59+00:00
309
0
--- base_model: Weyaxi/Einstein-v7-Qwen2-7B datasets: - allenai/ai2_arc - camel-ai/physics - camel-ai/chemistry - camel-ai/biology - camel-ai/math - metaeval/reclor - openbookqa - mandyyyyii/scibench - derek-thomas/ScienceQA - TIGER-Lab/ScienceEval - jondurbin/airoboros-3.2 - LDJnr/Capybara - Cot-Alpaca-GPT4-From-OpenHermes-2.5 - STEM-AI-mtl/Electrical-engineering - knowrohit07/saraswati-stem - sablo/oasst2_curated - lmsys/lmsys-chat-1m - TIGER-Lab/MathInstruct - bigbio/med_qa - meta-math/MetaMathQA-40K - openbookqa - piqa - metaeval/reclor - derek-thomas/ScienceQA - scibench - sciq - Open-Orca/SlimOrca - migtissera/Synthia-v1.3 - TIGER-Lab/ScienceEval - allenai/WildChat - microsoft/orca-math-word-problems-200k - openchat/openchat_sharegpt4_dataset - teknium/GPTeacher-General-Instruct - m-a-p/CodeFeedback-Filtered-Instruction - totally-not-an-llm/EverythingLM-data-V3 - HuggingFaceH4/no_robots - OpenAssistant/oasst_top1_2023-08-25 - WizardLM/WizardLM_evol_instruct_70k - abacusai/SystemChat-1.1 - H-D-T/Buzz-V1.2 language: - en library_name: transformers license: other tags: - axolotl - instruct - finetune - chatml - gpt4 - synthetic data - science - physics - chemistry - biology - math - qwen - qwen2 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/Weyaxi/Einstein-v7-Qwen2-7B <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Einstein-v7-Qwen2-7B-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. 
IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Einstein-v7-Qwen2-7B-i1-GGUF/resolve/main/Einstein-v7-Qwen2-7B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.0 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Einstein-v7-Qwen2-7B-i1-GGUF/resolve/main/Einstein-v7-Qwen2-7B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.1 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Einstein-v7-Qwen2-7B-i1-GGUF/resolve/main/Einstein-v7-Qwen2-7B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.4 | | | [GGUF](https://huggingface.co/mradermacher/Einstein-v7-Qwen2-7B-i1-GGUF/resolve/main/Einstein-v7-Qwen2-7B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.6 | | | [GGUF](https://huggingface.co/mradermacher/Einstein-v7-Qwen2-7B-i1-GGUF/resolve/main/Einstein-v7-Qwen2-7B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.7 | | | [GGUF](https://huggingface.co/mradermacher/Einstein-v7-Qwen2-7B-i1-GGUF/resolve/main/Einstein-v7-Qwen2-7B.i1-IQ2_M.gguf) | i1-IQ2_M | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/Einstein-v7-Qwen2-7B-i1-GGUF/resolve/main/Einstein-v7-Qwen2-7B.i1-Q2_K.gguf) | i1-Q2_K | 3.1 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Einstein-v7-Qwen2-7B-i1-GGUF/resolve/main/Einstein-v7-Qwen2-7B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.2 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Einstein-v7-Qwen2-7B-i1-GGUF/resolve/main/Einstein-v7-Qwen2-7B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Einstein-v7-Qwen2-7B-i1-GGUF/resolve/main/Einstein-v7-Qwen2-7B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.6 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Einstein-v7-Qwen2-7B-i1-GGUF/resolve/main/Einstein-v7-Qwen2-7B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.6 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Einstein-v7-Qwen2-7B-i1-GGUF/resolve/main/Einstein-v7-Qwen2-7B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.7 | | | 
[GGUF](https://huggingface.co/mradermacher/Einstein-v7-Qwen2-7B-i1-GGUF/resolve/main/Einstein-v7-Qwen2-7B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.9 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Einstein-v7-Qwen2-7B-i1-GGUF/resolve/main/Einstein-v7-Qwen2-7B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.2 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Einstein-v7-Qwen2-7B-i1-GGUF/resolve/main/Einstein-v7-Qwen2-7B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.3 | | | [GGUF](https://huggingface.co/mradermacher/Einstein-v7-Qwen2-7B-i1-GGUF/resolve/main/Einstein-v7-Qwen2-7B.i1-Q4_0.gguf) | i1-Q4_0 | 4.5 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Einstein-v7-Qwen2-7B-i1-GGUF/resolve/main/Einstein-v7-Qwen2-7B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.6 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Einstein-v7-Qwen2-7B-i1-GGUF/resolve/main/Einstein-v7-Qwen2-7B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Einstein-v7-Qwen2-7B-i1-GGUF/resolve/main/Einstein-v7-Qwen2-7B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/Einstein-v7-Qwen2-7B-i1-GGUF/resolve/main/Einstein-v7-Qwen2-7B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.5 | | | [GGUF](https://huggingface.co/mradermacher/Einstein-v7-Qwen2-7B-i1-GGUF/resolve/main/Einstein-v7-Qwen2-7B.i1-Q6_K.gguf) | i1-Q6_K | 6.4 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
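As a usage sketch, one way to fetch and run a single quant from the table above (the filename comes from the table; the prompt and context size are placeholders, and the llama.cpp binary name may differ by version):

```shell
# Download one quant file from this repo (requires the huggingface_hub CLI).
huggingface-cli download mradermacher/Einstein-v7-Qwen2-7B-i1-GGUF \
  Einstein-v7-Qwen2-7B.i1-Q4_K_M.gguf --local-dir .

# Run it with llama.cpp's CLI.
./llama-cli -m Einstein-v7-Qwen2-7B.i1-Q4_K_M.gguf -c 4096 -p "Hello"
```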
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
[ "SCIQ" ]
QuantFactory/Flow-Judge-v0.1-GGUF
QuantFactory
text-generation
[ "transformers", "gguf", "lm-judge", "evaluation", "nlp", "text-generation", "en", "dataset:flowaicom/Flow-Judge-v0.1-binary-heldout", "dataset:flowaicom/Flow-Judge-v0.1-3-likert-heldout", "dataset:flowaicom/Flow-Judge-v0.1-5-likert-heldout", "base_model:microsoft/Phi-3.5-mini-instruct", "base_model:quantized:microsoft/Phi-3.5-mini-instruct", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
2024-09-20T11:40:55Z
2024-09-20T11:59:13+00:00
309
1
--- base_model: - microsoft/Phi-3.5-mini-instruct datasets: - flowaicom/Flow-Judge-v0.1-binary-heldout - flowaicom/Flow-Judge-v0.1-3-likert-heldout - flowaicom/Flow-Judge-v0.1-5-likert-heldout language: - en library_name: transformers license: apache-2.0 license_link: https://huggingface.co/flowaicom/Flow-Judge-v0.1/resolve/main/LICENSE metrics: - accuracy - f1 - precision - recall - pearsonr - spearmanr - kendall-tau pipeline_tag: text-generation tags: - lm-judge - evaluation - nlp --- [![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory) # QuantFactory/Flow-Judge-v0.1-GGUF This is a quantized version of [flowaicom/Flow-Judge-v0.1](https://huggingface.co/flowaicom/Flow-Judge-v0.1) created using llama.cpp # Original Model Card <p align="center"> <img src="https://cdn-uploads.huggingface.co/production/uploads/63368577d184e6b53c50e6d0/6kSJKgPh2pDh4tA-Ky0xW.png" alt="Centered image"> </p> <p align="center">🚀 <a href="https://www.flow-ai.com/judge">Flow Judge</a> | 📄 <a href="https://www.flow-ai.com/blog/flow-judge">Technical report</a> | 💻 <a href="https://github.com/flowaicom/flow-judge">flow-judge</a></p> ## Model Summary Flow-Judge-v0.1 is a compact yet powerful 3.8B model that offers customizable LLM system evaluations across various fields. The model inherits its architecture from the Phi-3.5-mini-instruct model, which enables Flow-Judge to deliver high-quality results while maintaining a small footprint. Despite its smaller size, it achieves performance comparable to larger models in both held-out and out-of-domain benchmarks. Flow-Judge-v0.1 supports multiple scoring scales, provides qualitative feedback, and generates structured evaluation outputs. 
Trained on a smaller synthetic dataset, it represents an efficient approach to AI development. Released under the Apache 2.0 license, Flow Judge is an open and accessible model suitable for developers and companies seeking cost-effective and rapid evaluations using custom rubrics. __Quantized weights__ - [flowaicom/Flow-Judge-v0.1-AWQ](https://huggingface.co/flowaicom/Flow-Judge-v0.1-AWQ) - [flowaicom/Flow-Judge-v0.1-GGUF](https://huggingface.co/flowaicom/Flow-Judge-v0.1-GGUF) __Quickstart__ - [Quickstart](https://github.com/flowaicom/flow-judge/examples/1_quickstart.ipynb) ## Intended Use Case Flow Judge is intended to be used on custom LLM system evaluation tasks. - Customizable evaluations: Users can define their own evaluation criteria and rubrics, tailoring Flow Judge to their specific needs and requirements. This flexibility allows for the creation of highly targeted assessments that accurately measure the performance of their LLM system. - Flow Judge supports three different scoring scales: - Pass/fail: Suitable for binary assessments, such as determining whether a piece of text meets a specific standard or contains errors. - 3-Likert: Allows for more granular evaluations, with scores ranging from negative to neutral to positive. Useful for assessing the overall quality or sentiment of a piece of text. - 5-Likert: Provides an even more nuanced assessment, with scores ranging from strongly negative to strongly positive, enabling users to capture subtle differences in quality or sentiment. - Easy to interpret results: - Flow Judge produces structured evaluations with `<feedback>` and `<score>` tags. - Qualitative feedback: Flow Judge detects errors, grades outputs, and provides qualitative feedback that explains its reasoning for assigning a particular score from the rubric while highlighting problematic parts of the responses. - Score: Based on the grading rubric, Flow Judge returns a numerical score on a binary, Likert-3, or Likert-5 scale. 
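Because evaluations always arrive wrapped in `<feedback>` and `<score>` tags, they can be pulled apart with a small standard-library parser. This is an illustrative sketch (the function name and sample reply are invented here; it is not part of the official `flow-judge` library):

```python
import re


def parse_judgement(text: str) -> tuple[str, int]:
    """Extract the feedback string and numeric score from a Flow Judge reply."""
    feedback = re.search(r"<feedback>\s*(.*?)\s*</feedback>", text, re.DOTALL)
    score = re.search(r"<score>\s*(\d+)\s*</score>", text, re.DOTALL)
    if feedback is None or score is None:
        raise ValueError("response is missing <feedback> or <score> tags")
    return feedback.group(1), int(score.group(1))


reply = "<feedback>Addresses all three issues clearly.</feedback>\n<score>5</score>"
print(parse_judgement(reply))  # ('Addresses all three issues clearly.', 5)
```

Raising on missing tags, rather than returning a default score, makes malformed generations visible instead of silently miscounted.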
## Training ### Model Flow Judge is based on the Phi-3.5-mini architecture, and the base model checkpoint used is specifically its instruct version. The model uses the same tokenizer, supports MQA and Flash Attention 2, and has weights in bfloat16 precision. However, post-finetuning, the model's support for languages and long context lengths has not been fully tested. Due to specialized Supervised Fine-Tuning (SFT), Flow Judge might show different benchmark results and support a maximum context length of 8192, shorter than the base model's. ### Training Datasets Flow-Judge-v0.1 has been trained on synthetically generated datasets. The construction of training datasets for Flow Judge involves a multi-step process: 1. Manually curating seed rubrics to serve as a foundation 2. Synthetically generating domain-adapted metrics and rubrics for various domains 3. Synthetically generating training instances with multiple inputs, such as user queries and contextual information 4. Employing a dual-evaluation strategy with consensus to ensure quality and consistency This process creates a comprehensive and diverse set of training instances that enable accurate, domain-specific evaluations of LLM systems in generative AI products while minimizing human intervention. Read more about the dataset construction from [here](https://www.flow-ai.com/blog/flow-judge#dataset-construction) ### Fine-tuning For fine-tuning we used Axolotl's preprocessing to ensure input training data is consistent. We then conducted supervised fine-tuning based on microsoft/Phi-3.5-mini-instruct using RSLoRa. More detailed information about the fine-tuning process is provided in our [technical report](https://www.flow-ai.com/blog/flow-judge#fine-tuning). ## Usage ### Prompt format #### Prompt template with inputs ```text # GOAL Your job is to evaluate a task carried out by an AI system powered by a large language model. 
You will be provided with the inputs and output of the task, as well as the evaluation criteria and scoring rubric. Your task is to evaluate the output of the AI system based on the evaluation criteria and scoring rubric provided. # INPUT Below are the inputs required for performing the task: <inputs> {INPUTS} </inputs> # OUTPUT Below is the output of the task: <output> {OUTPUT} </output> # EVALUATION CRITERIA AND SCORING RUBRIC Here are the evaluation criteria and the rubric that you need to use for evaluating the task: <evaluation_criteria> {EVALUATION_CRITERIA} </evaluation_criteria> <scoring_rubric> {RUBRIC} </scoring_rubric> # INSTRUCTIONS FOR THE EVALUATION 1. Understand the task and criteria: Familiarize yourself with the task to be evaluated. Review the evaluation criteria and scoring rubric to understand the different levels of performance and the descriptions for each score. 2. Review the inputs and output: Look at the inputs provided for the task. Examine the output generated from completing the task. 3. Compare output to score descriptions: Compare the output against the criteria and score descriptions in the scoring rubric. For each criterion, decide which description best matches the output. 4. After comparing the output to the score descriptions, pay attention to the small details that might impact the final score that you assign. Sometimes a small difference can dictate the final score. 5. Write verbal feedback justifying your evaluation that includes a detailed rationale, referring to specific aspects of the output and comparing them to the rubric. 6. Assign a final score based on the scoring rubric. ## FORMAT FOR THE EVALUATION - Write the verbal feedback inside <feedback> tags without any additional surrounding text. - Write the numeric score inside <score> tags, without any additional surrounding text and always after the feedback. Please accurately evaluate the task. Strictly adhere to the evaluation criteria and rubric. 
``` #### Prompt template without inputs ```text # GOAL Your job is to evaluate a task carried out by an AI system powered by a large language model. You will be provided the output of the task, as well as the evaluation criteria and scoring rubric. Your task is to evaluate the output of the AI system based on the evaluation criteria and scoring rubric provided. # OUTPUT Below is the output of the task: <output> {OUTPUT} </output> # EVALUATION CRITERIA AND SCORING RUBRIC Here are the evaluation criteria and the rubric that you need to use for evaluating the task: <evaluation_criteria> {EVALUATION_CRITERIA} </evaluation_criteria> <scoring_rubric> {RUBRIC} </scoring_rubric> # INSTRUCTIONS FOR THE EVALUATION 1. Understand the task and criteria: Familiarize yourself with the task to be evaluated. Review the evaluation criteria and scoring rubric to understand the different levels of performance and the descriptions for each score. 2. Review the output: Examine the output generated from completing the task. 3. Compare output to score descriptions: Compare the output against the criteria and score descriptions in the scoring rubric. For each criterion, decide which description best matches the output. 4. After comparing the output to the score descriptions, pay attention to the small details that might impact the final score that you assign. Sometimes a small difference can dictate the final score. 5. Write verbal feedback justifying your evaluation that includes a detailed rationale, referring to specific aspects of the output and comparing them to the rubric. 6. Assign a final score based on the scoring rubric. ## FORMAT FOR THE EVALUATION - Write the verbal feedback inside <feedback> tags without any additional surrounding text. - Write the numeric score inside <score> tags, without any additional surrounding text and always after the feedback. Please accurately evaluate the task. Strictly adhere to the evaluation criteria and rubric. 
```

#### Formatted prompt example with inputs

```text
# GOAL
Your job is to evaluate a task carried out by an AI system powered by a large language model.

You will be provided with the inputs and output of the task, as well as the evaluation criteria and scoring rubric. Your task is to evaluate the output of the AI system based on the evaluation criteria and scoring rubric provided.

# INPUT/s
Below are the inputs required for performing the task:
<inputs>
<customer_query>
Subject: Issues with recent order #12345

Hello,

I recently received my order #12345, but I'm having a few problems:

1. The blue t-shirt I ordered is actually green. Can I exchange it?
2. The phone case doesn't fit my iPhone 13 Pro Max. The website said it was compatible.
3. I was charged for express shipping, but it took over a week to arrive. Can I get a refund on the shipping cost?

Please let me know how we can resolve these issues. I've been a loyal customer for years and this is the first time I've had problems like this.

Thank you,
Jane Smith
</customer_query>
</inputs>

# OUTPUT
Below is the output of the task:
<output>
Dear Jane Smith,

Thank you for reaching out about the issues with your recent order #12345. We sincerely apologize for the inconvenience you've experienced. Let me address each of your concerns:

1. T-shirt color discrepancy: We're sorry about the mix-up with the t-shirt color. We'd be happy to exchange it for the correct blue t-shirt. I'll send you a prepaid return label for the green shirt, and we'll ship out the blue one as soon as we receive the return.

2. Phone case compatibility: We apologize for the confusion regarding the phone case compatibility. Our website information should have been more accurate. We'll gladly accept a return of the incompatible case and issue a full refund for it. Additionally, I can recommend a case that will fit your iPhone 13 Pro Max if you're interested.

3. Shipping charge: You're absolutely right about the shipping charge. Since the delivery didn't meet our express shipping standards, we will refund the entire shipping cost to your original payment method. This refund should process within 3-5 business days.

To initiate these resolutions, please reply to this email confirming that you'd like to proceed with the t-shirt exchange and phone case return. Once confirmed, I'll send you the necessary return labels and process the shipping refund immediately.

We truly value your loyalty as a long-time customer and sincerely regret that you've encountered these issues. Rest assured, we're taking steps to prevent similar problems in the future. As a gesture of goodwill, we'd like to offer you a 20% discount on your next order.

If you have any further questions or concerns, please don't hesitate to reach out. We're here to ensure your complete satisfaction.

Best regards,
Alex Johnson
Customer Service Representative
</output>

# EVALUATION CRITERIA AND SCORING RUBRIC
Here are the evaluation criteria and the rubric that you need to use for evaluating the task:
<evaluation_criteria>
How well does the response address the specific issues raised in the customer's query?
</evaluation_criteria>

<scoring_rubric>
- Score 1: The response completely fails to address the customer's needs and ignores the specific issues raised.
- Score 2: The response barely addresses the customer's query and misses most of the specific issues raised.
- Score 3: The response partially addresses the customer's query, touching on some of the specific issues but leaving others unaddressed.
- Score 4: The response adequately addresses most aspects of the customer's query and the specific issues raised.
- Score 5: The response fully and comprehensively addresses all aspects of the customer's query and all specific issues raised in a highly satisfactory manner.
</scoring_rubric>

# INSTRUCTIONS FOR THE EVALUATION
1. Understand the task and criteria: Familiarize yourself with the task to be evaluated. Review the evaluation criteria and scoring rubric to understand the different levels of performance and the descriptions for each score.
2. Review the inputs and output: Look at the inputs provided for the task. Examine the output generated from completing the task.
3. Compare output to score descriptions: Compare the output against the criteria and score descriptions in the scoring rubric. For each criterion, decide which description best matches the output.
4. After comparing the output to the score descriptions, pay attention to the small details that might impact the final score that you assign. Sometimes a small difference can dictate the final score.
5. Write verbal feedback justifying your evaluation that includes a detailed rationale, referring to specific aspects of the output and comparing them to the rubric.
6. Assign a final score based on the scoring rubric.

## FORMAT FOR THE EVALUATION
- Write the verbal feedback inside <feedback> tags without any additional surrounding text.
- Write the numeric score inside <score> tags, without any additional surrounding text and always after the feedback.

Please accurately evaluate the task. Strictly adhere to the evaluation criteria and rubric.
```

> Note that inputs and output are formatted with XML tags. See the [flow-judge](https://github.com/flowaicom/flow-judge) repository's formatting functions for more details.

### Inference

Evaluations can easily be run using our [flow-judge](https://github.com/flowaicom/flow-judge) library. It currently supports both the Transformers and vLLM inference engines.

To run Flow Judge efficiently, ensure your hardware meets the following requirements:

- Modern GPU with at least 4 GB VRAM (e.g., NVIDIA RTX series)
- Minimum of 8 GB of system memory
- At least 10 GB of free storage for model files and dependencies
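The prompt structure and the required `<feedback>`/`<score>` output format described above can be exercised with plain Python. The sketch below is illustrative only and is not the flow-judge library's API: the helper names (`format_prompt`, `parse_evaluation`) and the abbreviated template string are assumptions, and real evaluations should use the formatting utilities in the flow-judge repository.

```python
import re

# Abbreviated stand-in for the full prompt template shown above.
TEMPLATE = (
    "# GOAL\n"
    "Your job is to evaluate a task carried out by an AI system powered by a "
    "large language model.\n\n"
    "# OUTPUT\nBelow is the output of the task:\n<output>\n{output}\n</output>\n\n"
    "# EVALUATION CRITERIA AND SCORING RUBRIC\n"
    "<evaluation_criteria>\n{criteria}\n</evaluation_criteria>\n"
    "<scoring_rubric>\n{rubric}\n</scoring_rubric>\n"
)


def format_prompt(output: str, criteria: str, rubric: str) -> str:
    """Fill the evaluation template with a task output, criteria and rubric."""
    return TEMPLATE.format(output=output, criteria=criteria, rubric=rubric)


def parse_evaluation(response: str) -> tuple[str, int]:
    """Extract the <feedback> text and numeric <score> from a judge response."""
    feedback = re.search(r"<feedback>\s*(.*?)\s*</feedback>", response, re.DOTALL)
    score = re.search(r"<score>\s*(\d+)\s*</score>", response, re.DOTALL)
    if feedback is None or score is None:
        raise ValueError("Response is missing <feedback> or <score> tags")
    return feedback.group(1), int(score.group(1))


# Example: parse a judge-style response.
reply = "<feedback>Addresses all three issues raised.</feedback>\n<score>5</score>"
text, value = parse_evaluation(reply)
print(value)  # -> 5
```

In practice the model's generation would replace the hand-written `reply` string; parsing by tag rather than by position keeps the evaluation robust to extra whitespace around the feedback and score.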
## Evaluation

### Held-out test sets

<table border="1" cellpadding="10" cellspacing="0" style="border-collapse: collapse; width: auto;"> <thead> <tr> <th rowspan="2" style="text-align: left;">Evaluator</th> <th colspan="3" style="text-align: center;">Pass / Fail Held-out Test set</th> </tr> <tr> <th style="text-align: center;">Precision</th> <th style="text-align: center;">Recall</th> <th style="text-align: center;">F1</th> </tr> </thead> <tbody> <tr> <td style="text-align: left;">microsoft/Phi-3.5-mini-instruct</td> <td style="text-align: center;">0.685</td> <td style="text-align: center;"><strong>1.000</strong></td> <td style="text-align: center;">0.813</td> </tr> <tr> <td style="text-align: left;">meta-llama/Meta-Llama-3.1-8B-Instruct</td> <td style="text-align: center;"><u>0.870</u></td> <td style="text-align: center;">0.982</td> <td style="text-align: center;"><u>0.923</u></td> </tr> <tr> <td style="text-align: left;">mistralai/Mistral-Nemo-Instruct-2407</td> <td style="text-align: center;">0.709</td> <td style="text-align: center;"><u>0.994</u></td> <td style="text-align: center;">0.827</td> </tr> <tr> <td style="text-align: left;">gpt-4o-mini</td> <td style="text-align: center;">0.834</td> <td style="text-align: center;">1.000</td> <td style="text-align: center;">0.910</td> </tr> <tr> <td style="text-align: left;">flowaicom/Flow-Judge-v0.1</td> <td style="text-align: center;"><strong>0.940</strong></td> <td style="text-align: center;">0.972</td> <td style="text-align: center;"><strong>0.955</strong></td> </tr> </tbody> </table>

<table border="1" cellpadding="10" cellspacing="0" style="border-collapse: collapse; width: auto;"> <thead> <tr> <th rowspan="2" style="text-align: left;">Evaluator</th> <th colspan="3" style="text-align: center;">3-Likert Held-out Test set</th> <th colspan="3" style="text-align: center;">5-Likert Held-out Test set</th> </tr> <tr> <th style="text-align: center;">pearsonr</th> <th style="text-align: center;">spearmanr</th> <th
style="text-align: center;">kendall-tau</th> <th style="text-align: center;">pearsonr</th> <th style="text-align: center;">spearmanr</th> <th style="text-align: center;">kendall-tau</th> </tr> </thead> <tbody> <tr> <td style="text-align: left;">microsoft/Phi-3.5-mini-instruct</td> <td style="text-align: center;">0.756</td> <td style="text-align: center;">0.749</td> <td style="text-align: center;">0.695</td> <td style="text-align: center;">0.808</td> <td style="text-align: center;">0.819</td> <td style="text-align: center;">0.739</td> </tr> <tr> <td style="text-align: left;">prometheus-eval/prometheus-7b-v2.0*</td> <td style="text-align: center;">-</td> <td style="text-align: center;">-</td> <td style="text-align: center;">-</td> <td style="text-align: center;"><u>0.910</u></td> <td style="text-align: center;"><u>0.908</u></td> <td style="text-align: center;"><u>0.838</u></td> </tr> <tr> <td style="text-align: left;">meta-llama/Meta-Llama-3.1-8B-Instruct</td> <td style="text-align: center;"><u>0.836</u></td> <td style="text-align: center;"><u>0.833</u></td> <td style="text-align: center;"><u>0.789</u></td> <td style="text-align: center;">0.854</td> <td style="text-align: center;">0.868</td> <td style="text-align: center;">0.791</td> </tr> <tr> <td style="text-align: left;">mistralai/Mistral-Nemo-Instruct-2407</td> <td style="text-align: center;">0.813</td> <td style="text-align: center;">0.807</td> <td style="text-align: center;">0.758</td> <td style="text-align: center;">0.870</td> <td style="text-align: center;">0.867</td> <td style="text-align: center;">0.789</td> </tr> <tr> <td style="text-align: left;">gpt-4o-mini</td> <td style="text-align: center;">0.890</td> <td style="text-align: center;">0.888</td> <td style="text-align: center;">0.851</td> <td style="text-align: center;">0.923</td> <td style="text-align: center;">0.923</td> <td style="text-align: center;">0.864</td> </tr> <tr> <td style="text-align: left;">flowaicom/Flow-Judge-v0.1</td> <td 
style="text-align: center;"><strong>0.888</strong></td> <td style="text-align: center;"><strong>0.888</strong></td> <td style="text-align: center;"><strong>0.852</strong></td> <td style="text-align: center;"><strong>0.919</strong></td> <td style="text-align: center;"><strong>0.919</strong></td> <td style="text-align: center;"><strong>0.856</strong></td> </tr> </tbody> </table>

\* _Reported in model paper_

### RAGTruth

<table border="1" cellpadding="10" cellspacing="0" style="border-collapse: collapse; width: auto;"> <tr> <th rowspan="2" style="text-align: left;">Evaluator</th> <th colspan="3" style="text-align:center;">RAGTruth QA</th> <th colspan="3" style="text-align:center;">RAGTruth Data-to-Text</th> <th colspan="3" style="text-align:center;">RAGTruth Summarization</th> </tr> <tr> <th style="text-align:center;">Precision</th> <th style="text-align:center;">Recall</th> <th style="text-align:center;">F1</th> <th style="text-align:center;">Precision</th> <th style="text-align:center;">Recall</th> <th style="text-align:center;">F1</th> <th style="text-align:center;">Precision</th> <th style="text-align:center;">Recall</th> <th style="text-align:center;">F1</th> </tr> <tr> <td>microsoft/Phi-3.5-mini-instruct</td> <td style="text-align:center;">0.817</td> <td style="text-align:center;">0.963</td> <td style="text-align:center;">0.884</td> <td style="text-align:center;">0.356</td> <td style="text-align:center;"><strong>1.000</strong></td> <td style="text-align:center;">0.525</td> <td style="text-align:center;">0.776</td> <td style="text-align:center;"><strong>1.000</strong></td> <td style="text-align:center;"><strong>0.874</strong></td> </tr> <tr> <td>meta-llama/Meta-Llama-3.1-8B-Instruct</td> <td style="text-align:center;"><strong>0.844</strong></td> <td style="text-align:center;"><u>0.986</u></td> <td style="text-align:center;"><strong>0.910</strong></td> <td style="text-align:center;">0.382</td> <td style="text-align:center;">0.537</td> <td
style="text-align:center;">0.447</td> <td style="text-align:center;"><u>0.797</u></td> <td style="text-align:center;"><u>0.940</u></td> <td style="text-align:center;">0.863</td> </tr> <tr> <td>mistralai/Mistral-Nemo-Instruct-2407</td> <td style="text-align:center;">0.821</td> <td style="text-align:center;"><strong>0.995</strong></td> <td style="text-align:center;"><u>0.900</u></td> <td style="text-align:center;">0.357</td> <td style="text-align:center;"><strong>1.000</strong></td> <td style="text-align:center;">0.526</td> <td style="text-align:center;">0.775</td> <td style="text-align:center;"><strong>1.000</strong></td> <td style="text-align:center;"><u>0.873</u></td> </tr> <tr> <td>gpt-4o-mini</td> <td style="text-align:center;">0.830</td> <td style="text-align:center;">0.966</td> <td style="text-align:center;">0.893</td> <td style="text-align:center;">0.398</td> <td style="text-align:center;">0.994</td> <td style="text-align:center;">0.569</td> <td style="text-align:center;">0.786</td> <td style="text-align:center;">0.997</td> <td style="text-align:center;">0.879</td> </tr> <tr> <td>Luna*</td> <td style="text-align:center;">0.378</td> <td style="text-align:center;">0.800</td> <td style="text-align:center;">0.513</td> <td style="text-align:center;">0.649</td> <td style="text-align:center;">0.912</td> <td style="text-align:center;"><u>0.759</u></td> <td style="text-align:center;">0.400</td> <td style="text-align:center;">0.765</td> <td style="text-align:center;">0.525</td> </tr> <tr> <td>RAGAS Faithfuless*</td> <td style="text-align:center;">0.312</td> <td style="text-align:center;">0.419</td> <td style="text-align:center;">0.357</td> <td style="text-align:center;"><strong>0.792</strong></td> <td style="text-align:center;">0.508</td> <td style="text-align:center;">0.619</td> <td style="text-align:center;">0.642</td> <td style="text-align:center;">0.299</td> <td style="text-align:center;">0.408</td> </tr> <tr> <td>Trulens Groundedness*</td> <td 
style="text-align:center;">0.228</td> <td style="text-align:center;">0.925</td> <td style="text-align:center;">0.366</td> <td style="text-align:center;"><u>0.669</u></td> <td style="text-align:center;"><u>0.965</u></td> <td style="text-align:center;"><strong>0.790</strong></td> <td style="text-align:center;">0.402</td> <td style="text-align:center;">0.500</td> <td style="text-align:center;">0.445</td> </tr> <tr> <td>flowaicom/Flow-Judge-v0.1</td> <td style="text-align:center;"><u>0.835</u></td> <td style="text-align:center;">0.961</td> <td style="text-align:center;">0.894</td> <td style="text-align:center;">0.541</td> <td style="text-align:center;">0.249</td> <td style="text-align:center;">0.341</td> <td style="text-align:center;"><strong>0.834</strong></td> <td style="text-align:center;">0.836</td> <td style="text-align:center;">0.835</td> </tr> </table>

\* _Reported in model paper_

### HaluEval, Covid-QA, PubMedQA

<table border="1" cellpadding="10" cellspacing="0" style="border-collapse: collapse; width: auto;"> <thead> <tr> <th rowspan="2" style="text-align: left;">Evaluator</th> <th colspan="4" style="text-align: center;">HaluEval</th> <th colspan="4" style="text-align: center;">Covid-QA</th> <th colspan="4" style="text-align: center;">PubMedQA</th> </tr> <tr> <th style="text-align: center;">Precision</th> <th style="text-align: center;">Recall</th> <th style="text-align: center;">F1</th> <th style="text-align: center;">Accuracy</th> <th style="text-align: center;">Precision</th> <th style="text-align: center;">Recall</th> <th style="text-align: center;">F1</th> <th style="text-align: center;">Accuracy</th> <th style="text-align: center;">Precision</th> <th style="text-align: center;">Recall</th> <th style="text-align: center;">F1</th> <th style="text-align: center;">Accuracy</th> </tr> </thead> <tbody> <tr> <td style="text-align: left;">microsoft/Phi-3.5-mini-instruct</td> <td style="text-align: center;">0.730</td> <td style="text-align:
center;"><u>0.914</u></td> <td style="text-align: center;">0.812</td> <td style="text-align: center;">0.788</td> <td style="text-align: center;">0.617</td> <td style="text-align: center;">0.964</td> <td style="text-align: center;">0.752</td> <td style="text-align: center;">0.681</td> <td style="text-align: center;">0.623</td> <td style="text-align: center;"><u>0.986</u></td> <td style="text-align: center;">0.764</td> <td style="text-align: center;">0.696</td> </tr> <tr> <td style="text-align: left;">meta-llama/Meta-Llama-3.1-8B-Instruct</td> <td style="text-align: center;"><strong>0.864</strong></td> <td style="text-align: center;">0.891</td> <td style="text-align: center;"><strong>0.878</strong></td> <td style="text-align: center;"><u>0.874</u></td> <td style="text-align: center;"><u>0.663</u></td> <td style="text-align: center;"><u>0.976</u></td> <td style="text-align: center;"><u>0.790</u></td> <td style="text-align: center;">0.734</td> <td style="text-align: center;"><u>0.681</u></td> <td style="text-align: center;">0.962</td> <td style="text-align: center;"><strong>0.797</strong></td> <td style="text-align: center;">0.750</td> </tr> <tr> <td style="text-align: left;">mistralai/Mistral-Nemo-Instruct-2407</td> <td style="text-align: center;">0.655</td> <td style="text-align: center;"><strong>0.993</strong></td> <td style="text-align: center;">0.789</td> <td style="text-align: center;">0.735</td> <td style="text-align: center;">0.651</td> <td style="text-align: center;"><strong>0.982</strong></td> <td style="text-align: center;">0.783</td> <td style="text-align: center;">0.728</td> <td style="text-align: center;">0.602</td> <td style="text-align: center;"><strong>0.994</strong></td> <td style="text-align: center;"><u>0.750</u></td> <td style="text-align: center;">0.669</td> </tr> <tr> <td style="text-align: left;">gpt-4o-mini</td> <td style="text-align: center;">0.846</td> <td style="text-align: center;">0.940</td> <td style="text-align: center;">0.891</td> <td 
style="text-align: center;">0.885</td> <td style="text-align: center;">0.795</td> <td style="text-align: center;">0.964</td> <td style="text-align: center;">0.872</td> <td style="text-align: center;">0.858</td> <td style="text-align: center;">0.791</td> <td style="text-align: center;">0.904</td> <td style="text-align: center;">0.843</td> <td style="text-align: center;">0.832</td> </tr> <tr> <td style="text-align: left;">flowaicom/Flow-Judge-v0.1</td> <td style="text-align: center;"><u>0.826</u></td> <td style="text-align: center;">0.895</td> <td style="text-align: center;"><u>0.859</u></td> <td style="text-align: center;">0.854</td> <td style="text-align: center;"><strong>0.767</strong></td> <td style="text-align: center;">0.877</td> <td style="text-align: center;"><strong>0.818</strong></td> <td style="text-align: center;">0.807</td> <td style="text-align: center;"><strong>0.874</strong></td> <td style="text-align: center;">0.624</td> <td style="text-align: center;">0.728</td> <td style="text-align: center;">0.767</td> </tr> <tr> <td style="text-align: left;">gpt-4o*</td> <td style="text-align: center;">-</td> <td style="text-align: center;">-</td> <td style="text-align: center;">-</td> <td style="text-align: center;">0.879</td> <td style="text-align: center;">-</td> <td style="text-align: center;">-</td> <td style="text-align: center;">-</td> <td style="text-align: center;">0.821</td> <td style="text-align: center;">-</td> <td style="text-align: center;">-</td> <td style="text-align: center;">-</td> <td style="text-align: center;">0.821</td> </tr> <tr> <td style="text-align: left;">Claude 3 Sonnet*</td> <td style="text-align: center;">-</td> <td style="text-align: center;">-</td> <td style="text-align: center;">-</td> <td style="text-align: center;">0.845</td> <td style="text-align: center;">-</td> <td style="text-align: center;">-</td> <td style="text-align: center;">-</td> <td style="text-align: center;">0.829</td> <td style="text-align: center;">-</td> <td 
style="text-align: center;">-</td> <td style="text-align: center;">-</td> <td style="text-align: center;">0.829</td> </tr> <tr> <td style="text-align: left;">RAGAS Faithfulness*</td> <td style="text-align: center;">-</td> <td style="text-align: center;">-</td> <td style="text-align: center;">-</td> <td style="text-align: center;">0.706</td> <td style="text-align: center;">-</td> <td style="text-align: center;">-</td> <td style="text-align: center;">-</td> <td style="text-align: center;">0.750</td> <td style="text-align: center;">-</td> <td style="text-align: center;">-</td> <td style="text-align: center;">-</td> <td style="text-align: center;">0.669</td> </tr> <tr> <td style="text-align: left;">Lynx 8B*</td> <td style="text-align: center;">-</td> <td style="text-align: center;">-</td> <td style="text-align: center;">-</td> <td style="text-align: center;">0.857</td> <td style="text-align: center;">-</td> <td style="text-align: center;">-</td> <td style="text-align: center;">-</td> <td style="text-align: center;"><u>0.963</u></td> <td style="text-align: center;">-</td> <td style="text-align: center;">-</td> <td style="text-align: center;">-</td> <td style="text-align: center;"><u>0.852</u></td> </tr> <tr> <td style="text-align: left;">Lynx 70B*</td> <td style="text-align: center;">-</td> <td style="text-align: center;">-</td> <td style="text-align: center;">-</td> <td style="text-align: center;"><strong>0.884</strong></td> <td style="text-align: center;">-</td> <td style="text-align: center;">-</td> <td style="text-align: center;">-</td> <td style="text-align: center;"><strong>0.975</strong></td> <td style="text-align: center;">-</td> <td style="text-align: center;">-</td> <td style="text-align: center;">-</td> <td style="text-align: center;"><strong>0.904</strong></td> </tr> </tbody> </table>

\* _Reported in model paper_

### Feedback Bench

<table border="1" cellpadding="10" cellspacing="0" style="border-collapse: collapse; width: auto;"> <tr> <th
rowspan="2">Evaluator</th> <th colspan="3" style="text-align:center;">Feedback bench</th> </tr> <tr> <th style="text-align:center;">pearsonr</th> <th style="text-align:center;">spearmanr</th> <th style="text-align:center;">kendall-tau</th> </tr> <tr> <td>microsoft/Phi-3.5-mini-instruct</td> <td style="text-align:center;">0.710</td> <td style="text-align:center;">0.721</td> <td style="text-align:center;">0.622</td> </tr> <tr> <td>prometheus-eval/prometheus-7b-v2.0*</td> <td style="text-align:center;"><strong>0.878</strong></td> <td style="text-align:center;"><strong>0.909</strong></td> <td style="text-align:center;"><strong>0.773</strong></td> </tr> <tr> <td>meta-llama/Meta-Llama-3.1-8B-Instruct</td> <td style="text-align:center;">0.742</td> <td style="text-align:center;">0.749</td> <td style="text-align:center;">0.654</td> </tr> <tr> <td>mistralai/Mistral-Nemo-Instruct-2407</td> <td style="text-align:center;">0.720</td> <td style="text-align:center;">0.724</td> <td style="text-align:center;">0.632</td> </tr> <tr> <td>gpt-4o-mini</td> <td style="text-align:center;">0.797</td> <td style="text-align:center;">0.795</td> <td style="text-align:center;">0.701</td> </tr> <tr> <td>flowaicom/Flow-Judge-v0.1</td> <td style="text-align:center;"><u>0.787</u></td> <td style="text-align:center;"><u>0.789</u></td> <td style="text-align:center;"><u>0.688</u></td> </tr> </table>

\* _Reported in model paper using reference answers_

## License

We opted for the Apache 2.0 license for Flow Judge to provide the community with an open, small yet powerful LM evaluator. Our goal is to support the wider adoption of rigorous evaluation techniques in LLM system development, making them more accessible to practitioners and researchers.

## Limitations and future work

Multilingual evaluation: Flow Judge has been fine-tuned exclusively on English data.
While the foundation model (Phi-3.5-mini-instruct [17]) may possess multilingual capabilities, we have not systematically evaluated Flow Judge's performance in non-English contexts. We plan to explore multilingual LM evaluators in the future.

Long context and structured inputs: Our training dataset encompasses a wide range of custom metrics relevant to evaluating LLM systems. However, it does not include examples with long-context inputs or structured data formats such as JSON, since these are harder to generate synthetically. This limitation may impact Flow Judge's performance when evaluating responses that require processing extensive context or parsing structured input. Extending our model's capabilities to handle these input types represents an important area for future research.

Math and coding: The current version has not been trained on specific task domains such as arithmetic problems or code evaluation. As a result, its performance in these specialized areas may be limited. Future iterations of the model should address these gaps.

Domain-specific knowledge and complex multi-step evaluations: Flow Judge may struggle with highly specialized domain knowledge or proprietary data outside the training scope of its foundation model. Additionally, evaluation tasks requiring multi-step reasoning or complex logical processes may challenge the model's capabilities. We strongly recommend conducting meta-evaluations of the model's performance before deploying it in specialized or highly complex evaluation scenarios.
[ "PUBMEDQA" ]
Nashhz/SBERT_KFOLD_JobDescriptions_Skills_UserPortfolios
Nashhz
sentence-similarity
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:16682", "loss:CosineSimilarityLoss", "arxiv:1908.10084", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
2024-12-24T23:51:39Z
2024-12-24T23:52:20+00:00
308
0
--- library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:16682 - loss:CosineSimilarityLoss widget: - source_sentence: Hello, I am Redoan Ahmad I'm a professional Graphic Designer who finds great joy in creating assets that not only meet the expectations of my clients, but exceed them and add to what has become a delightful portfolio of my work. I am an expert in the field, and specialize in many different aspects of design work, including but not limited to + Logos + Flyers + Brochures + Banners + Icons + Business card + Branding As you can see, I take on projects involving a plethora of different visual assets. I use the Adobe Suite Programs to create and perfect everything I make, both for my clients and on my own time, so I'm incredibly adept at sentences: - I'm in search of a designer who can help craft a unique and engaging digital portfolio for my company. The desired style of the portfolio is creative and artistic, so I'm looking for someone who can think outside the box and design a portfolio that truly stands out. Key components of the portfolio will include - Client testimonials These will need to be presented in an appealing way that showcases our strong relationships and positive feedback from our clients. - Project case studies I want to highlight some of our best work. This will require a designer who can help distill complex projects into easy-to-understand and visually appealing presentations. Ideal candidates for this project should be experienced in creating digital portfolios and have a strong design background. They should be able to demonstrate a flexible and creative design approach, with a portfolio that reflects a 'creative and artistic' style. Good communication skills are a must, as we will need to collaborate closely to ensure the final product meets our expectations. 
- I need a proficient developer who can replicate a Forex trading software for me. The software needs to include - Real-time data feed The software should provide up-to-the-minute information about the forex market. - Automated trading I want the software to have a feature that allows for trading without human intervention, based on pre-set parameters or algorithms. The final product needs to be compatible with Windows. Ideal candidates for this project should have substantial experience in creating or replicating trading software, particularly in the Forex sector. Knowledge of real-time data processing and automated trading systems is crucial. Please ensure your bid reflects your expertise in this field. - I'm seeking a talented graphic designer to assist with a short project. The tasks will include designing a logo, banners, and screenshots, as well as a favicon for our website, app stores, and social media platforms. - source_sentence: Hello I am a skilled graphic designer, my designs are creative and based on modern strategies. The ones I create express the customer's brand language and make multiple connections with the audience. I am interested in engineering and through my work I try to meet customer requirements and expectations.. I am an experienced graphic designer who loves to create modern and unique designs. I specialize in personal calling and branding projects.!! sentences: - I'm seeking a talented graphic designer who can create engaging and visually appealing designs for my marketing materials, specifically for flyers and business cards. Ideally, the freelancer should have a keen understanding of design principles and be able to create designs that will capture attention and convey my brand message effectively. Skills and experience needed - Proficient in graphic design software such as Adobe Illustrator, Photoshop, etc. 
- Creative and innovative thinker - Strong understanding of design principles - Experience in designing marketing materials - Excellent communication skills - I'm looking for a skilled web application developer proficient in NodeJSTypescriptVue 3 to help me build an interactive web application. The main features of this project would include - Utilizing the Vue 3 Framework Prior experience in Vue.js is a must. Understanding of its core concepts and features is essential to deliver a high-quality application. - Payment Gateway Integration The application will require integration with a payment gateway such as Stripe or PayPal. Experience with these platforms is highly desirable. - User Authentication Clerk - Flexible Design The application should be able to accommodate future expansions or modifications, so a flexible design and coding approach is key. The main technologies that application will use are - NodeJSExpressTypescriptPrisma - Vue 3ShadCNTailwind CSS I have a detailed specification which I will share with those selected to be shortlisted. To be considered for this project 1. A brief summary of your experience in the core technologies I want to use for the App. 2. Please provide links for any projects which use Node JSExpressPrisma and Vue 3 If you have any further questions please reach out. - I'm in need of a talented graphic designer to create website graphics for my project. This includes designing banner images, icons, and infographics. Ideal Skills - Proficiency in graphic design software Adobe Illustrator, Photoshop, etc. - Strong portfolio of website graphics - Experience with designing for social media and ad campaigns Please note, the banner images will be used on the homepage, social media, and ad campaigns. A deep understanding of how to create engaging and impactful designs for these platforms is crucial. 
- source_sentence: PHP Codeigniter Laravel Google Ads API - PHPPython Google AppsAds Script Bing Ads API Twitter API TikTok API FB API Google APIs GitHub login to view URL LinkedIn Profile login to view URL sentences: - I need a structural engineer to provide detailed engineering plans for a residential building. Specific Requirements - Foundation plans - Framing plans - Roof structure details Additionally, I need - Copies of the structural engineering details, including piers and footings. - A reference site classification report with a copy of the report provided. Ideal candidates should have - Extensive experience in structural engineering for residential buildings. - Ability to interpret and work from existing architectural plans. - Strong communication skills to provide necessary documentation clearly. - I'm looking for a talented web developer with a strong background in Shopify to create a robust e-commerce website for selling electronics and gadgets. Key Requirements - Expertise in Shopify You should have a deep understanding of the platform to build an effective, secure and user-friendly online store. - E-commerce Development Experience in creating e-commerce websites is essential. You will need to implement features that facilitate seamless shopping experiences. - Understanding of Electronics A knowledge of the electronics industry will be a plus, as it will help in designing the website Please note, this project does not include the add-on features such as product reviews, discount codes or customer account creation, but these may be discussed further down the line. - I'm looking for a professional with experience in WebSocket and Laravel to integrate Twilio and login to view URL into my Laravel Blade website. The primary function of Twilio will be enabling voice calls on the website. Key Tasks - Implement Twilio for voice call functionality on the website. - Integrate login to view URL's Natural Language Processing NLP capabilities into the site. 
Ideal Candidate - Proficient in Laravel and Blade. - Extensive experience with Twilio and Vapi.ai. - Strong knowledge of WebSocket. - Ability to implement NLP features effectively. - source_sentence: I have 6-year experience as a Web Designer and WordPress Designer. 100+ completed projects. My Top Skills - HTML, CSS, Bootstrap 3 4 5 - Admin Dashboard - Email Template within 2 to 3 hours - Web Design - HTML5, CSS3 Canvas, SVG - PSD, FIGMA, ZEPLIN, XD, image, pdf to HTML, CSS Conversion - PSD, FIGMA, ZEPLIN, XD, image, pdf to Bootstrap Conversion - Animation, Slider - Fix Tailwind CSS - Photoshop intermediate - Adobe XD Mobile App any changes intermediate sentences: - I'm seeking a talented web developer with a keen eye for 3D design to revamp our current website. The job involves a complete overhaul of the website's layout, user interface, and 3D images. Key Requirements - Proficiency in 3D design You should be adept at enhancing textures, improving lighting, and updating models for a more engaging and visually striking website. - WordPress Expertise The new design should be compatible with WordPress, so prior experience with this platform is a must. Responsibilities - Redesign the website layout and user interface to improve overall user experience. - Update all existing 3D images, enhancing them with improved textures and lighting. - Ensure the website is fully functional on the WordPress platform. Ideal Candidate - A creative thinker with a strong background in both web development and 3D design. - Prior experience with WordPress and a portfolio that showcases your skills in revamping websites. - Excellent communication skills to ensure smooth collaboration and understanding of my vision for the project. I'd love to hear from you if you're confident in your ability to take on this project. Please include relevant samples of your past work in your application. 
Experience with Fancy Product Designer for customisations must be on time samples of what I want login to view URL login to view URL login to view URL - I'm looking for a skilled web developer experienced in web scraping to create a web scraper for me. Key Requirements - The scraper should be able to extract product prices from Amazon. Ideal Skills and Experience - Proficiency in Python and libraries like BeautifulSoup and Scrapy. - Previous experience scraping data from Amazon is a plus. - Strong understanding of web scraping ethics and legal considerations. Please include in your proposal examples of similar projects you've completed. - I'm looking for an expert mobile app developer who can create a comprehensive e-commerce app for both iOS and Android platforms. Key Features - User-friendly interface - Secure payment gateway - Real-time inventory updates - Customer review and rating system - Push notifications for sales and offers Ideal Skills - Proficiency in cross-platform mobile app development - Experience in e-commerce app development - Knowledge of UIUX design principles - Understanding of secure payment integration - Familiarity with inventory management systems Your expertise will help me reach my goal of launching a top-tier e-commerce app. Please provide your portfolio showcasing similar projects you've completed in the past. - source_sentence: I have 15+ years experiences with web development, machine learning engineering and product development. I also have 5+ years experiences with team management for developing new product and maintaining old products. sentences: - I'm starting a web development company and need a senior WordPress developer who is proficient in PHP, JavaScript, HTML, and CSS. This role will require working closely with my designer to customize websites. 
Key Responsibilities - Custom theme development - Communicating with the Designer - Optimising websites for performance - Ongoing website maintenance The ideal candidate should - Have expert-level experience with custom theme development - Be eager to learn and adapt - Have a solid track record with WordPress - Know the pain points of WordPress and how to solve them - Benefit Experience with SEO Collaboration - We will be using TrelloWhatsappTeams for project management and collaboration tasks. Your ability to work as part of a team and communicate effectively will be crucial for our success. A passion for web development and a desire to be part of a growing company will make this a rewarding opportunity. - Job Title Freelance Graphic Designer Monthly Deliverables Minimum 30 Creative Designs Budget 10,000 Month Job Description We are seeking a Freelance Graphic Designer to create high-quality and creative visuals for our projects monthly. The ideal candidate will have experience designing a wide range of materials, including images for digital platforms, brochures, banners, PDFs, and other print-ready files. This remote freelance role is expected to deliver 30 designs per month. If you're passionate about visual design and can consistently meet deadlines with high-quality work, we'd love to hear from you! Key Responsibilities Create 30+ designs per month, including - Social media graphics - Flyers, brochures, and pamphlets - PDF print files - Flex banners and large-scale designs Design for multiple formats Digital websocial media and print brochures, banners, etc.. - Collaborate with stakeholders to ensure designs align with the brand and project goals. - Make revisions and adjustments based on feedback. - Prepare print-ready files with accurate specifications. --- Required Skills - Proficiency in Adobe Creative Suite Photoshop, Illustrator, InDesign or equivalent tools. 
- Strong understanding of layout, typography, and color theory, - Experience in designing for both digital and print mediums. - Knowledge of print specifications and formats CMYK, DPI, bleed, etc.. - Ability to work independently and deliver within deadlines. --- Preferred Qualifications - Prior experience as a freelance designer or working in an agency setting. - Experience with branding projects - Strong portfolio showcasing past work. --- Compensation - 10,000 per month for a minimum of 30 imagesdesigns - Additional designs or complex projects may be compensated separately based on agreement. --- How to Apply Interested candidates should submit their portfolios and CVs this platform Please include samples of - Social media posts or marketing graphics - Print designs like brochures or banners - Any other relevant design work --- Additional Information - This is a remote freelance opportunity. - Payments will be made monthly upon submission and approval of deliverables. - Long-term collaboration opportunities available based on performance. - Seeking a talented content writer to create engaging and SEO-friendly articles across diverse markets. The candidate should possess strong expertise in producing content that not only resonates with readers but also performs well in search engine rankings. Please submit samples of your past work where you have successfully balanced keyword integration with compelling content. --- # SentenceTransformer This is a [sentence-transformers](https://www.SBERT.net) model. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
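As an illustration of what these dense vectors enable, here is a minimal semantic-search sketch in plain Python. The 4-dimensional vectors are made-up stand-ins for the model's 384-dimensional embeddings, which would in practice come from `model.encode(...)`:

```python
import math

def cosine_similarity(u, v):
    # cos(u, v) = (u . v) / (|u| * |v|)
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy "embeddings" for three portfolios; real ones come from model.encode(...)
corpus = {
    "wordpress developer": [0.9, 0.1, 0.0, 0.1],
    "graphic designer":    [0.1, 0.9, 0.1, 0.0],
    "data scraper":        [0.0, 0.1, 0.9, 0.1],
}
query = [0.8, 0.2, 0.1, 0.0]  # stand-in for an encoded job description

# Rank portfolios by cosine similarity to the query embedding
ranked = sorted(corpus, key=lambda k: cosine_similarity(query, corpus[k]), reverse=True)
print(ranked[0])  # -> wordpress developer
```

This is the same computation `model.similarity(...)` performs on the real embeddings.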
## Model Details ### Model Description - **Model Type:** Sentence Transformer <!-- - **Base model:** [Unknown](https://huggingface.co/unknown) --> - **Maximum Sequence Length:** 256 tokens - **Output Dimensionality:** 384 dimensions - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("Nashhz/SBERT_KFOLD_JobDescriptions_Skills_UserPortfolios") # Run inference sentences = [ 'I have 15+ years experiences with web development, machine learning engineering and product development. I also have 5+ years experiences with team management for developing new product and maintaining old products.', "I'm starting a web development company and need a senior WordPress developer who is proficient in PHP, JavaScript, HTML, and CSS. This role will require working closely with my designer to customize websites.
Key Responsibilities - Custom theme development - Communicating with the Designer - Optimising websites for performance - Ongoing website maintenance The ideal candidate should - Have expert-level experience with custom theme development - Be eager to learn and adapt - Have a solid track record with WordPress - Know the pain points of WordPress and how to solve them - Benefit Experience with SEO Collaboration - We will be using TrelloWhatsappTeams for project management and collaboration tasks. Your ability to work as part of a team and communicate effectively will be crucial for our success. A passion for web development and a desire to be part of a growing company will make this a rewarding opportunity.", "Job Title Freelance Graphic Designer Monthly Deliverables Minimum 30 Creative Designs Budget 10,000 Month Job Description We are seeking a Freelance Graphic Designer to create high-quality and creative visuals for our projects monthly. The ideal candidate will have experience designing a wide range of materials, including images for digital platforms, brochures, banners, PDFs, and other print-ready files. This remote freelance role is expected to deliver 30 designs per month. If you're passionate about visual design and can consistently meet deadlines with high-quality work, we'd love to hear from you! Key Responsibilities Create 30+ designs per month, including - Social media graphics - Flyers, brochures, and pamphlets - PDF print files - Flex banners and large-scale designs Design for multiple formats Digital websocial media and print brochures, banners, etc.. - Collaborate with stakeholders to ensure designs align with the brand and project goals. - Make revisions and adjustments based on feedback. - Prepare print-ready files with accurate specifications. --- Required Skills - Proficiency in Adobe Creative Suite Photoshop, Illustrator, InDesign or equivalent tools. 
- Strong understanding of layout, typography, and color theory, - Experience in designing for both digital and print mediums. - Knowledge of print specifications and formats CMYK, DPI, bleed, etc.. - Ability to work independently and deliver within deadlines. --- Preferred Qualifications - Prior experience as a freelance designer or working in an agency setting. - Experience with branding projects - Strong portfolio showcasing past work. --- Compensation - 10,000 per month for a minimum of 30 imagesdesigns - Additional designs or complex projects may be compensated separately based on agreement. --- How to Apply Interested candidates should submit their portfolios and CVs this platform Please include samples of - Social media posts or marketing graphics - Print designs like brochures or banners - Any other relevant design work --- Additional Information - This is a remote freelance opportunity. - Payments will be made monthly upon submission and approval of deliverables. - Long-term collaboration opportunities available based on performance.", ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 384] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? 
For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 16,682 training samples * Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>label</code> * Approximate statistics based on the first 1000 samples: | | sentence_0 | sentence_1 | label | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:----------------------------------------------------------------| | type | string | string | float | | details | <ul><li>min: 4 tokens</li><li>mean: 160.64 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 163.14 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 0.27</li><li>mean: 0.72</li><li>max: 1.0</li></ul> | * Samples: | sentence_0 | sentence_1 | label | |:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|:---|:---|
| <code>Amazon eBay Tiktok Shop Amazon Services Amazon Seller Central Management A to Z Store Management A to Z Inventory Management Winning Product Sourcing Product Listing with SEO Listing With Variations Listing Optimization Title, Bullet Points & Description Optimization Images Optimization Product Launching FBA Shipment Creation more Amazon eBay Tiktok Shop Amazon Services Amazon Seller Central Management A to Z Store Management A to Z Inventory Management Winning Product Sourcing Product Listing with SEO Listing With Variations Listing Optimization Title, Bullet Points & Description Optimization Images Optimization Product Launching FBA Shipment Creation Sales Generation Dropshipping Store Design A+ Content Creation Amazon PPC Campaigns Brand Registry Trademark Registration Customer Services Management eBay Services eBay Store Management A to Z A to Z eBay Dropshipping Services Winning Products Sourcing Products listing with SEO Products listing With Variations Listings Optimization Title , Bullet Point & Description Optimization Images Optimization Keywords Optimization Sales Boost Products Ranking Hot selling product with 30 to 50 profit Competitor Analysis Orders Fulfillment Customer Services Management eBay Account Defect Removal Tax Exemption Management Setting Up Promotions Listing Templates Creation Tiktok Shop Services TikTok Shop Account Setup Product Listing Listing Optimization Keyword Research Product Hunting Competitor Analysis Campaign Management Influencer Collaboration TikTok Live Shopping Order Management Promotion Management TikTok Ads for Shop Content Creation for Shop Sales Analytics & Reporting Problem Solving & Issue Resolution Ongoing Shop Optimization</code> | <code>I'm seeking a skilled professional to assist with a variety of tasks including selling
products from Amazon UAE to eBay UK via dropshipping, product sourcing, and full virtual assistance. Key Responsibilities - Product Searching & Listing Identify profitable products, create and optimize listings, and conduct market trend analysis. - SEO Management Oversee the search engine optimization for our listed products. - Selling & Listing Management List products on Amazon, eBay, and our website, while managing sales. Ideal Candidate - Previous dropshipping experience, particularly between Amazon and eBay, is a plus. - Strong skills in SEO, product sourcing, and virtual assistance. - Excellent understanding of market trends and product profitability. - Able to create and optimize product listings for maximum visibility and sales. This is a full-time position which requires dedication and a proactive approach. Please only apply if you have the necessary skills and experience.</code> | <code>0.7151671051979065</code> | | <code>We are a group of young, energetic, creative & professional website developer, graphic designer and IT-Administrator who are devoted to implement your requirement with modern technology. Website Design - Development-Modification - Wordpress - Ecommerce - DynamicCustomized site Development Graphic Design - logo design - Brochure - Flyer - Leaflet - PDF Profile - Catalog - Greetings Card - PackageLabel Design - Business Card - Image RetouchEnhancementEditingManipulation IT-Admin Virtual Assistant - Product Listing - Site Content Management - Product Image Enhance - Data Processing - PDF conversion to WordExcel - Web Research - Data Scraping Why Choose Us o Quality Support for everyday 365 days even after project completion o We understand your requirements precisely to deliver Creative designs o 100 client satisfaction guaranteed</code> | <code>We are looking for a skilled and dedicated full-time web developer to join our team. 
The ideal candidate should have extensive experience working with WordPress, Divi, and Elementor, as well as the ability to create custom WordPress themes. Key Responsibilities Develop, maintain, and optimize WordPress websites. Customize and configure Divi and Elementor page builders to meet client needs. Create custom WordPress themes from scratch, ensuring they are optimized for performance and usability. Troubleshoot and resolve any website issues as they arise. Ensure websites are responsive and work seamlessly across all devices. Collaborate with our design and content teams to bring creative ideas to life. Stay up to date with the latest web development trends and best practices. Requirements Proven experience with WordPress, including custom theme development. Proficiency in Divi and Elementor page builders. Strong understanding of HTML, CSS, JavaScript, and PHP. Experience in responsive design and cross-browser compatibility. Ability to work independently and meet deadlines. Strong problem-solving skills and attention to detail. Excellent communication skills in English. Preferred Qualifications Experience with WooCommerce or other WordPress plugins. Familiarity with SEO best practices. Knowledge of version control systems like Git. If you are passionate about web development and want to be part of a growing team, we'd love to hear from you! Please submit your portfolio and CV for consideration.</code> | <code>0.7487468719482422</code> | | <code>Hi there, I'm Priyanshu Agarwal I'm a Python expert with a diverse skillset that includes web scraping, Zoho and Tally Prime accounting, automation, and Python application building. With my strong foundation in Python, I can build and automate applications that meet your business needs, saving you time and resources. As a web scraping expert, I specialize in using Python, Selenium, BeautifulSoup4, and Python Requests to extract data from websites and web applications. 
I have experience in projects of varying scales, from small-scale data collection to large-scale data mining for enterprise-level clients. In addition to my technical expertise in web scraping, I have a strong background in accounting software such as Zoho and Tally Prime. I have experience in managing financial data, generating reports, and automating financial processes using these tools. I understand the importance of accurate and timely financial data in business decision-making, and I strive to ensure that my clients' financial data is organized, up-to-date, and easily accessible. With my experience in automation and Python application building, I can create custom solutions to</code> | <code>I'm in need of a data scraping expert to assist in gathering market research data from various retail websites. The ideal freelancer for this project should have a robust experience with Python and Java, as well as proficiency in Odoo and Airtable. Experience in building microservices would be a significant advantage. 
Key Responsibilities - Scraping data from designated retail websites for market research purposes - Organizing and managing the gathered data in Airtable - Potential development of microservices for data handling, 8n8 Skills and Experience Required - Extensive experience in data scraping, particularly from retail websites - Proficiency in Python and Java - Experience with Odoo and Airtable - Prior experience in building microservices - Understanding of market research techniques and requirements</code> | <code>0.747043251991272</code> | * Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters: ```json { "loss_fct": "torch.nn.modules.loss.MSELoss" } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `num_train_epochs`: 4 - `multi_dataset_batch_sampler`: round_robin #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: no - `prediction_loss_only`: True - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 5e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1 - `num_train_epochs`: 4 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.0 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - 
`data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None 
- `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: round_robin </details> ### Training Logs | Epoch | Step | Training Loss | |:------:|:----:|:-------------:| | 0.4794 | 500 | 0.0003 | | 0.9588 | 1000 | 0.0003 | | 1.4382 | 1500 | 0.0003 | | 1.9175 | 2000 | 0.0003 | | 2.3969 | 2500 | 0.0002 | | 2.8763 | 3000 | 0.0002 | | 3.3557 | 3500 | 0.0002 | | 3.8351 | 4000 | 0.0002 | | 0.4794 | 500 | 0.0003 | | 0.9588 | 1000 | 0.0003 | | 1.4382 | 1500 | 0.0003 | | 1.9175 | 2000 | 0.0003 | | 2.3969 | 2500 | 0.0002 | | 2.8763 | 3000 | 0.0002 | | 3.3557 | 3500 | 0.0002 | | 3.8351 | 4000 | 0.0001 | | 0.4794 | 500 | 0.0002 | | 0.9588 | 1000 | 0.0002 | | 1.4382 | 1500 | 0.0002 | | 1.9175 | 2000 | 0.0002 | | 2.3969 | 2500 | 0.0002 | | 2.8763 | 3000 | 0.0002 | | 3.3557 | 3500 | 0.0001 | | 3.8351 | 4000 | 0.0001 | | 0.4794 | 500 | 0.0002 | | 0.9588 | 1000 | 0.0002 | | 1.4382 | 1500 | 0.0002 | | 1.9175 | 2000 | 0.0002 | | 2.3969 | 2500 | 0.0002 | | 2.8763 | 3000 | 0.0001 | | 3.3557 | 3500 | 0.0001 | | 3.8351 | 4000 | 0.0001 | | 0.4794 | 500 | 0.0002 | | 0.9588 | 1000 | 0.0002 | | 1.4382 | 1500 | 0.0002 | | 1.9175 | 2000 | 0.0002 | | 2.3969 | 2500 | 0.0001 | | 2.8763 | 3000 | 0.0001 | | 3.3557 | 3500 | 0.0001 | | 3.8351 | 4000 | 0.0001 | ### Framework Versions - Python: 3.12.6 - Sentence Transformers: 3.2.0 - Transformers: 4.45.2 - PyTorch: 2.4.1+cpu - Accelerate: 1.0.1 - Datasets: 3.0.1 - Tokenizers: 0.20.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 
Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
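The training objective listed above — `CosineSimilarityLoss` with `loss_fct: torch.nn.modules.loss.MSELoss` — regresses the cosine similarity of a sentence pair onto its gold score. A plain-Python sketch of that objective, using illustrative toy embeddings rather than real model outputs:

```python
import math

def cosine(u, v):
    # Cosine similarity between two dense vectors
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def cosine_similarity_mse_loss(pairs):
    """Mean squared error between predicted cosine similarity and the gold label,
    i.e. what CosineSimilarityLoss with loss_fct=MSELoss computes per batch."""
    errors = [(cosine(u, v) - label) ** 2 for u, v, label in pairs]
    return sum(errors) / len(errors)

# One toy batch: (sentence_0 embedding, sentence_1 embedding, gold similarity)
batch = [
    ([1.0, 0.0], [1.0, 0.0], 1.0),   # identical vectors -> cosine 1.0, zero error
    ([1.0, 0.0], [0.0, 1.0], 0.27),  # orthogonal vectors -> cosine 0.0, error 0.27**2
]
print(round(cosine_similarity_mse_loss(batch), 5))  # -> 0.03645
```

During training, gradients of this loss flow back through the encoder so that embedding pairs move toward their labeled similarity.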
[ "CRAFT" ]
tensorblock/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b-GGUF
tensorblock
null
[ "gguf", "uncensored", "TensorBlock", "GGUF", "dataset:ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered", "dataset:kaiokendev/SuperCOT-dataset", "dataset:neulab/conala", "dataset:yahma/alpaca-cleaned", "dataset:QingyiSi/Alpaca-CoT", "dataset:timdettmers/guanaco-33b", "dataset:JosephusCheung/GuanacoDataset", "base_model:Monero/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b", "base_model:quantized:Monero/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b", "license:other", "endpoints_compatible", "region:us" ]
2024-11-21T18:38:31Z
2024-11-21T22:23:01+00:00
306
3
--- base_model: Monero/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b datasets: - ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered - kaiokendev/SuperCOT-dataset - neulab/conala - yahma/alpaca-cleaned - QingyiSi/Alpaca-CoT - timdettmers/guanaco-33b - JosephusCheung/GuanacoDataset license: other tags: - uncensored - TensorBlock - GGUF --- <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"> Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a> </p> </div> </div> ## Monero/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b - GGUF This repo contains GGUF format model files for [Monero/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b](https://huggingface.co/Monero/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b). The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d). 
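As a rough sanity check on the quantized file sizes listed below, the implied bits per weight can be estimated from the parameter count. The ~32.5B figure is an assumption for a 30B-class LLaMA model, not a number from this card:

```python
# File sizes (decimal GB) taken from the quant table in this card
sizes_gb = {"Q2_K": 11.221, "Q4_K_M": 18.274, "Q8_0": 32.191}
n_params = 32.5e9  # assumed parameter count for a 30B-class LLaMA model

for quant, gb in sizes_gb.items():
    bits_per_weight = gb * 1e9 * 8 / n_params
    print(f"{quant}: ~{bits_per_weight:.2f} bits/weight")
# Q2_K comes out near 2.8 bpw, Q4_K_M near 4.5 bpw, Q8_0 near 7.9 bpw
```

The estimates land close to the nominal bit widths of each quant type; the overhead above the nominal width comes from scales, block metadata, and higher-precision layers.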
<div style="text-align: left; margin: 20px 0;"> <a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;"> Run them on the TensorBlock client using your local machine ↗ </a> </div> ## Prompt template ``` ``` ## Model file specification | Filename | Quant type | File Size | Description | | -------- | ---------- | --------- | ----------- | | [WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b-Q2_K.gguf](https://huggingface.co/tensorblock/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b-GGUF/blob/main/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b-Q2_K.gguf) | Q2_K | 11.221 GB | smallest, significant quality loss - not recommended for most purposes | | [WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b-Q3_K_S.gguf](https://huggingface.co/tensorblock/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b-GGUF/blob/main/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b-Q3_K_S.gguf) | Q3_K_S | 13.098 GB | very small, high quality loss | | [WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b-Q3_K_M.gguf](https://huggingface.co/tensorblock/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b-GGUF/blob/main/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b-Q3_K_M.gguf) | Q3_K_M | 14.693 GB | very small, high quality loss | | [WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b-Q3_K_L.gguf](https://huggingface.co/tensorblock/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b-GGUF/blob/main/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b-Q3_K_L.gguf) | Q3_K_L | 16.093 GB | small, substantial quality loss | | [WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b-Q4_0.gguf](https://huggingface.co/tensorblock/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b-GGUF/blob/main/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b-Q4_0.gguf) | Q4_0 | 17.095 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | 
[WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b-Q4_K_S.gguf](https://huggingface.co/tensorblock/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b-GGUF/blob/main/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b-Q4_K_S.gguf) | Q4_K_S | 17.213 GB | small, greater quality loss | | [WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b-Q4_K_M.gguf](https://huggingface.co/tensorblock/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b-GGUF/blob/main/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b-Q4_K_M.gguf) | Q4_K_M | 18.274 GB | medium, balanced quality - recommended | | [WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b-Q5_0.gguf](https://huggingface.co/tensorblock/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b-GGUF/blob/main/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b-Q5_0.gguf) | Q5_0 | 20.857 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b-Q5_K_S.gguf](https://huggingface.co/tensorblock/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b-GGUF/blob/main/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b-Q5_K_S.gguf) | Q5_K_S | 20.857 GB | large, low quality loss - recommended | | [WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b-Q5_K_M.gguf](https://huggingface.co/tensorblock/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b-GGUF/blob/main/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b-Q5_K_M.gguf) | Q5_K_M | 21.464 GB | large, very low quality loss - recommended | | [WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b-Q6_K.gguf](https://huggingface.co/tensorblock/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b-GGUF/blob/main/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b-Q6_K.gguf) | Q6_K | 24.854 GB | very large, extremely low quality loss | | [WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b-Q8_0.gguf](https://huggingface.co/tensorblock/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b-GGUF/blob/main/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b-Q8_0.gguf) | Q8_0 | 32.191 GB | very large, extremely low quality loss - not recommended | ## Downloading instruction ### Command line 
First, install the Hugging Face CLI ```shell pip install -U "huggingface_hub[cli]" ``` Then, download an individual model file to a local directory ```shell huggingface-cli download tensorblock/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b-GGUF --include "WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b-Q2_K.gguf" --local-dir MY_LOCAL_DIR ``` If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try: ```shell huggingface-cli download tensorblock/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf' ```
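The quant descriptions above amount to a size/quality trade-off. As a rough, unofficial helper for choosing which file to download, you can pick the largest quant that fits a given memory budget. This is just a sketch (sizes hard-coded from the table above), not part of any TensorBlock tooling:

```python
# Hypothetical helper: pick the largest quant file that fits a memory budget.
# File sizes (GB) are copied from the table above; quality generally rises with size.
QUANT_SIZES_GB = {
    "Q2_K": 11.221, "Q3_K_S": 13.098, "Q3_K_M": 14.693, "Q3_K_L": 16.093,
    "Q4_0": 17.095, "Q4_K_S": 17.213, "Q4_K_M": 18.274, "Q5_0": 20.857,
    "Q5_K_S": 20.857, "Q5_K_M": 21.464, "Q6_K": 24.854, "Q8_0": 32.191,
}

def pick_quant(budget_gb):
    """Return the largest quant whose file fits within budget_gb, or None."""
    fitting = {q: s for q, s in QUANT_SIZES_GB.items() if s <= budget_gb}
    if not fitting:
        return None
    return max(fitting, key=fitting.get)

print(pick_quant(20.0))  # Q4_K_M fits in 20 GB; Q5_0 (20.857 GB) does not
```

For example, with roughly 20 GB free, `Q4_K_M` is the largest fit, which is also the quant recommended in the table.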
[ "MONERO" ]
RichardErkhov/SeaLLMs_-_SeaLLM-7B-v2-gguf
RichardErkhov
null
[ "gguf", "arxiv:2312.00738", "arxiv:2205.11916", "arxiv:2306.05179", "arxiv:2306.05685", "endpoints_compatible", "region:us", "conversational" ]
2024-05-11T03:55:43Z
2024-05-11T06:06:47+00:00
304
0
--- {} --- Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) SeaLLM-7B-v2 - GGUF - Model creator: https://huggingface.co/SeaLLMs/ - Original model: https://huggingface.co/SeaLLMs/SeaLLM-7B-v2/ | Name | Quant method | Size | | ---- | ---- | ---- | | [SeaLLM-7B-v2.Q2_K.gguf](https://huggingface.co/RichardErkhov/SeaLLMs_-_SeaLLM-7B-v2-gguf/blob/main/SeaLLM-7B-v2.Q2_K.gguf) | Q2_K | 2.6GB | | [SeaLLM-7B-v2.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/SeaLLMs_-_SeaLLM-7B-v2-gguf/blob/main/SeaLLM-7B-v2.IQ3_XS.gguf) | IQ3_XS | 2.89GB | | [SeaLLM-7B-v2.IQ3_S.gguf](https://huggingface.co/RichardErkhov/SeaLLMs_-_SeaLLM-7B-v2-gguf/blob/main/SeaLLM-7B-v2.IQ3_S.gguf) | IQ3_S | 3.04GB | | [SeaLLM-7B-v2.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/SeaLLMs_-_SeaLLM-7B-v2-gguf/blob/main/SeaLLM-7B-v2.Q3_K_S.gguf) | Q3_K_S | 3.03GB | | [SeaLLM-7B-v2.IQ3_M.gguf](https://huggingface.co/RichardErkhov/SeaLLMs_-_SeaLLM-7B-v2-gguf/blob/main/SeaLLM-7B-v2.IQ3_M.gguf) | IQ3_M | 3.14GB | | [SeaLLM-7B-v2.Q3_K.gguf](https://huggingface.co/RichardErkhov/SeaLLMs_-_SeaLLM-7B-v2-gguf/blob/main/SeaLLM-7B-v2.Q3_K.gguf) | Q3_K | 3.36GB | | [SeaLLM-7B-v2.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/SeaLLMs_-_SeaLLM-7B-v2-gguf/blob/main/SeaLLM-7B-v2.Q3_K_M.gguf) | Q3_K_M | 3.36GB | | [SeaLLM-7B-v2.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/SeaLLMs_-_SeaLLM-7B-v2-gguf/blob/main/SeaLLM-7B-v2.Q3_K_L.gguf) | Q3_K_L | 3.64GB | | [SeaLLM-7B-v2.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/SeaLLMs_-_SeaLLM-7B-v2-gguf/blob/main/SeaLLM-7B-v2.IQ4_XS.gguf) | IQ4_XS | 3.76GB | | [SeaLLM-7B-v2.Q4_0.gguf](https://huggingface.co/RichardErkhov/SeaLLMs_-_SeaLLM-7B-v2-gguf/blob/main/SeaLLM-7B-v2.Q4_0.gguf) | Q4_0 | 3.91GB | | [SeaLLM-7B-v2.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/SeaLLMs_-_SeaLLM-7B-v2-gguf/blob/main/SeaLLM-7B-v2.IQ4_NL.gguf) | 
IQ4_NL | 3.96GB | | [SeaLLM-7B-v2.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/SeaLLMs_-_SeaLLM-7B-v2-gguf/blob/main/SeaLLM-7B-v2.Q4_K_S.gguf) | Q4_K_S | 3.94GB | | [SeaLLM-7B-v2.Q4_K.gguf](https://huggingface.co/RichardErkhov/SeaLLMs_-_SeaLLM-7B-v2-gguf/blob/main/SeaLLM-7B-v2.Q4_K.gguf) | Q4_K | 4.16GB | | [SeaLLM-7B-v2.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/SeaLLMs_-_SeaLLM-7B-v2-gguf/blob/main/SeaLLM-7B-v2.Q4_K_M.gguf) | Q4_K_M | 4.16GB | | [SeaLLM-7B-v2.Q4_1.gguf](https://huggingface.co/RichardErkhov/SeaLLMs_-_SeaLLM-7B-v2-gguf/blob/main/SeaLLM-7B-v2.Q4_1.gguf) | Q4_1 | 4.33GB | | [SeaLLM-7B-v2.Q5_0.gguf](https://huggingface.co/RichardErkhov/SeaLLMs_-_SeaLLM-7B-v2-gguf/blob/main/SeaLLM-7B-v2.Q5_0.gguf) | Q5_0 | 4.75GB | | [SeaLLM-7B-v2.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/SeaLLMs_-_SeaLLM-7B-v2-gguf/blob/main/SeaLLM-7B-v2.Q5_K_S.gguf) | Q5_K_S | 4.75GB | | [SeaLLM-7B-v2.Q5_K.gguf](https://huggingface.co/RichardErkhov/SeaLLMs_-_SeaLLM-7B-v2-gguf/blob/main/SeaLLM-7B-v2.Q5_K.gguf) | Q5_K | 4.87GB | | [SeaLLM-7B-v2.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/SeaLLMs_-_SeaLLM-7B-v2-gguf/blob/main/SeaLLM-7B-v2.Q5_K_M.gguf) | Q5_K_M | 4.87GB | | [SeaLLM-7B-v2.Q5_1.gguf](https://huggingface.co/RichardErkhov/SeaLLMs_-_SeaLLM-7B-v2-gguf/blob/main/SeaLLM-7B-v2.Q5_1.gguf) | Q5_1 | 5.17GB | | [SeaLLM-7B-v2.Q6_K.gguf](https://huggingface.co/RichardErkhov/SeaLLMs_-_SeaLLM-7B-v2-gguf/blob/main/SeaLLM-7B-v2.Q6_K.gguf) | Q6_K | 5.64GB | Original model description: --- license: other license_name: seallms license_link: https://huggingface.co/SeaLLMs/SeaLLM-13B-Chat/blob/main/LICENSE language: - en - zh - vi - id - th - ms - km - lo - my - tl tags: - multilingual - sea --- <p align="center"> <img src="seal_logo.png" width="200" /> </p> # *SeaLLM-7B-v2* - Large Language Models for Southeast Asia # <strong style="color: red">BIG NEWS: <a href="https://huggingface.co/SeaLLMs/SeaLLM-7B-v2.5">SeaLLM-7B-v2.5</a> is released with state-of-the-art 
performance in world knowledge and reasoning. SeaLLM-7B-v2 will be deprecated.</strong> <p align="center"> <a href="https://damo-nlp-sg.github.io/SeaLLMs/" target="_blank" rel="noopener">Technical Blog</a> &nbsp;&nbsp; <a href="https://huggingface.co/SeaLLMs/SeaLLM-7B-v2" target="_blank" rel="noopener"> 🤗 Tech Memo</a> &nbsp;&nbsp; <a href="https://huggingface.co/spaces/SeaLLMs/SeaLLM-7B" target="_blank" rel="noopener"> 🤗 DEMO</a> &nbsp;&nbsp; <a href="https://github.com/DAMO-NLP-SG/SeaLLMs" target="_blank" rel="noopener">Github</a> &nbsp;&nbsp; <a href="https://arxiv.org/pdf/2312.00738.pdf" target="_blank" rel="noopener">Technical Report</a> </p> We introduce [SeaLLM-7B-v2](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2), the state-of-the-art multilingual LLM for Southeast Asian (SEA) languages 🇬🇧 🇨🇳 🇻🇳 🇮🇩 🇹🇭 🇲🇾 🇰🇭 🇱🇦 🇲🇲 🇵🇭. It is the most significant upgrade since [SeaLLM-13B](https://huggingface.co/SeaLLMs/SeaLLM-13B-Chat): at half the size, it delivers stronger performance across diverse multilingual tasks, from world knowledge and math reasoning to instruction following. ### Highlights * [SeaLLM-7B-v2](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2) achieves the **7B-SOTA** on the **Zero-shot CoT GSM8K** task with a **78.2** score and outperforms GPT-3.5 in many GSM8K-translated tasks in SEA languages (🇨🇳 🇻🇳 🇮🇩 🇹🇭) as well as MGSM (🇨🇳 🇹🇭). It also surpasses GPT-3.5 in MATH CoT for Thai 🇹🇭. * It scores competitively against GPT-3.5 on many zero-shot CoT commonsense benchmarks, with **82.5, 68.3, 80.9** scores on Arc-C, Winogrande, and Hellaswag. * It achieves a **7.54** score on the 🇬🇧 **MT-bench**, ranking 3rd on the leaderboard in the 7B category and making it the best-performing multilingual model there. * It scores **45.74** on the VMLU benchmark for Vietnamese 🇻🇳, and is the only open-source multilingual model competitive with monolingual models ([Vistral-7B](https://huggingface.co/Viet-Mistral/Vistral-7B-Chat)) of similar sizes.
### Release and DEMO - DEMO: [SeaLLMs/SeaLLM-7B](https://huggingface.co/spaces/SeaLLMs/SeaLLM-7B). - Technical report: [Arxiv: SeaLLMs - Large Language Models for Southeast Asia](https://arxiv.org/pdf/2312.00738.pdf). - Model weights: - [SeaLLM-7B-v2](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2). - [SeaLLM-7B-v2-gguf](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2-gguf). - [SeaLLM-7B-v2-GGUF (thanks Lonestriker)](https://huggingface.co/LoneStriker/SeaLLM-7B-v2-GGUF). NOTE: use [seallm.preset.json](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2-gguf/blob/main/seallm.preset.json) to work properly. - Run locally: - [LM-studio](https://lmstudio.ai/): - [SeaLLM-7B-v2-q4_0](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2-gguf/blob/main/SeaLLM-7B-v2.q4_0.gguf) and [SeaLLM-7B-v2-q8_0](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2-gguf/blob/main/SeaLLM-7B-v2.q8_0.gguf). - LM-studio requires this [seallm.preset.json](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2-gguf/blob/main/seallm.preset.json) to set chat template properly. - [ollama](https://ollama.ai/) `ollama run nxphi47/seallm-7b-v2:q4_0` - [MLX for Apple Silicon](https://github.com/ml-explore/mlx): [mlx-community/SeaLLM-7B-v2-4bit-mlx](https://huggingface.co/mlx-community/SeaLLM-7B-v2-4bit-mlx) <blockquote style="color:red"> <p><strong style="color: red">Terms of Use and License</strong>: By using our released weights, codes, and demos, you agree to and comply with the terms and conditions specified in our <a href="https://huggingface.co/SeaLLMs/SeaLLM-Chat-13b/edit/main/LICENSE" target="_blank" rel="noopener">SeaLLMs Terms Of Use</a>. </blockquote> > **Disclaimer**: > We must note that even though the weights, codes, and demos are released in an open manner, similar to other pre-trained language models, and despite our best efforts in red teaming and safety fine-tuning and enforcement, our models come with potential risks, including but not limited to inaccurate, misleading or potentially harmful generation. 
> Developers and stakeholders should perform their own red teaming and provide related security measures before deployment, and they must abide by and comply with local governance and regulations. > In no event shall the authors be held liable for any claim, damages, or other liability arising from the use of the released weights, codes, or demos. > The logo was generated by DALL-E 3. ### What's new since SeaLLM-13B-v1 and SeaLLM-7B-v1? * SeaLLM-7B-v2 is continually pretrained from [Mistral-7B](https://huggingface.co/mistralai/Mistral-7B-v0.1) and underwent carefully designed tuning with a focus on reasoning. ## Evaluation ### Zero-shot CoT Multilingual Math Reasoning [SeaLLM-7B-v2](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2) achieves a **78.2** score on GSM8K with zero-shot CoT reasoning, making it the **state of the art** in the realm of 7B models. It also outperforms GPT-3.5 on the same GSM8K benchmark translated into SEA languages (🇨🇳 🇻🇳 🇮🇩 🇹🇭). [SeaLLM-7B-v2](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2) also surpasses GPT-3.5 on the Thai-translated MATH benchmark, with **22.4** vs 18.1 scores.
![fig_sea_math_side_by_side.png](fig_sea_math_side_by_side.png) <details> <summary>See details on English and translated GSM8K and MATH with zero-shot reasoning</summary> <br> | Model | GSM8K<br>en | MATH<br>en | GSM8K<br>zh | MATH<br>zh | GSM8K<br>vi | MATH<br>vi | GSM8K<br>id | MATH<br>id | GSM8K<br>th | MATH<br>th | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | | GPT-3.5 | 80.8 | 34.1 | 48.2 | 21.5 | 55 | 26.5 | 64.3 | 26.4 | 35.8 | 18.1 | Qwen-14B-chat | 61.4 | 18.4 | 41.6 | 11.8 | 33.6 | 3.6 | 44.7 | 8.6 | 22 | 6 | Vistral-7b-chat | 48.2 | 12.5 | | | 48.7 | 3.1 | | | | | Qwen1.5-7B-chat | 56.8 | 15.3 | 40 | 2.7 | 37.7 | 9 | 36.9 | 7.7 | 21.9 | | SeaLLM-7B-v2 | 78.2 | 27.5 | 53.7 | 17.6 | 69.9 | 23.8 | 71.5 | 24.4 | 59.6 | 22.4 </details> Baselines were evaluated using their respective chat-template and system prompts ([Qwen1.5-7B-chat](https://huggingface.co/Qwen/Qwen1.5-7B-Chat/blob/main/tokenizer_config.json), [Vistral](https://huggingface.co/Viet-Mistral/Vistral-7B-Chat)). #### Zero-shot MGSM [SeaLLM-7B-v2](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2) also outperforms GPT-3.5 and Qwen-14B on the multilingual MGSM for Zh and Th. | Model | MGSM-Zh | MGSM-Th |-----| ----- | --- | ChatGPT (reported) | 61.2 | 47.2 | Qwen-14B-chat | 59.6 | 28 | SeaLLM-7B-v2 | **64.8** | **62.4** ### Zero-shot Commonsense Reasoning We compare [SeaLLM-7B-v2](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2) with ChatGPT and Mistral-7B-instruct on various zero-shot commonsense benchmarks (Arc-Challenge, Winogrande and Hellaswag). We use the 2-stage technique in [(Kojima et al., 2023)](https://arxiv.org/pdf/2205.11916.pdf) to grab the answer. Note that we **DID NOT** use "Let's think step-by-step" to invoke explicit CoT. 
| 0-shot reasoning | Arc-Challenge | Winogrande | Hellaswag |-----| ----- | --- | -- | | ChatGPT (reported) | 84.6* | 66.8* | 72.0* | ChatGPT (reproduced)| 84.1 | 63.1 | 79.5 | Mistral-7B-Instruct | 68.1 | 56.4 | 45.6 | Qwen1.5-7B-chat | 79.3 | 59.4 | 69.3 | SeaLLM-7B-v2 | 82.5 | 68.3 | 80.9 Baselines were evaluated using their respective chat-template and system prompts ([Qwen1.5-7B-chat](https://huggingface.co/Qwen/Qwen1.5-7B-Chat/blob/main/tokenizer_config.json), [Mistral](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1)). ### Multilingual World Knowledge We evaluate models on 3 benchmarks following the recommended default setups: 5-shot MMLU for En, 3-shot [M3Exam](https://arxiv.org/pdf/2306.05179.pdf) (M3e) for En, Zh, Vi, Id, Th, and zero-shot [VMLU](https://vmlu.ai/) for Vi. | Model | Langs | En<br>MMLU | En<br>M3e | Zh<br>M3e | Vi<br>M3e | Vi<br>VMLU | Id<br>M3e | Th<br>M3e |-----| ----- | --- | -- | ----- | ---- | --- | --- | --- | | GPT-3.5 | Multi | 68.90 | 75.46 | 60.20 | 58.64 | 46.32 | 49.27 | 37.41 | Vistral-7B-chat | Mono | 56.86 | 67.00 | 44.56 | 54.33 | 50.03 | 36.49 | 25.27 | Qwen1.5-7B-chat | Multi | 61.00 | 52.07 | 81.96 | 43.38 | 45.02 | 24.29 | 20.25 | SeaLLM-7B-v2 | Multi | 61.89 | 70.91 | 55.43 | 51.15 | 45.74 | 42.25 | 35.52 VMLU reproduce script [here](https://github.com/DAMO-NLP-SG/SeaLLMs/blob/main/evaluation/vmlu/vmlu_run.py). Lm-eval was used to evaluate MMLU. 0-shot VMLU scores for baselines were evaluated using their respective chat-template and system prompts ([Qwen1.5-7B-chat](https://huggingface.co/Qwen/Qwen1.5-7B-Chat/blob/main/tokenizer_config.json)). ### MT-Bench On the English [MT-bench](https://arxiv.org/abs/2306.05685) metric, SeaLLM-7B-v2 achieves **7.54** score on the MT-bench (3rd place on the leaderboard for 7B category), outperforms many 70B models and is arguably the only one that handles 10 SEA languages. 
Refer to [mt_bench/seallm_7b_v2.jsonl](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2/blob/main/evaluation/mt_bench/seallm_7b_v2.jsonl) for the MT-bench predictions of SeaLLM-7B-v2, and [here](https://github.com/lm-sys/FastChat/issues/3013#issue-2118685341) to reproduce it. | Model | Access | Langs | MT-Bench | --- | --- | --- | --- | | GPT-4-turbo | closed | multi | 9.32 | GPT-4-0613 | closed | multi | 9.18 | Mixtral-8x7b (46B) | open | multi | 8.3 | Starling-LM-7B-alpha | open | mono (en) | 8.0 | OpenChat-3.5-7B | open | mono (en) | 7.81 | **SeaLLM-7B-v2** | **open** | **multi (10+)** | **7.54** | [Qwen-14B](https://huggingface.co/Qwen/Qwen-14B-Chat) | open | multi | 6.96 | [Llama-2-70B](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf) | open | mono (en) | 6.86 | Mistral-7B-instruct | open | mono (en) | 6.84 ### Sea-Bench Similar to MT-Bench, [Sea-bench](https://huggingface.co/datasets/SeaLLMs/Sea-bench) is a set of categorized instruction test sets to measure models' ability as an assistant, specifically focused on 9 SEA languages, including non-Latin low-resource languages. As shown, the largest improvements come from math reasoning, reaching GPT-3.5-level performance. ![fig_sea_bench_side_by_side.png](fig_sea_bench_side_by_side.png) Refer to [sea_bench/seallm_7b_v2.jsonl](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2/blob/main/evaluation/sea_bench/seallm_7b_v2.jsonl) for the Sea-bench predictions of SeaLLM-7B-v2. ### Usage #### Instruction format ```python prompt = """<|im_start|>system You are a helpful assistant.</s><|im_start|>user Hello world</s><|im_start|>assistant Hi there, how can I help?</s>""" # NOTE: previous commit has \n between </s> and <|im_start|>, that was incorrect! # <|im_start|> is not a special token. # Transformers chat_template should be consistent with vLLM format below. # !
ENSURE 1 and only 1 bos `<s>` at the beginning of sequence print(tokenizer.convert_ids_to_tokens(tokenizer.encode(prompt))) """['<s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'system', '<0x0A>', 'You', '▁are', '▁a', '▁helpful', '▁assistant', '.', '</s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'user', '<0x0A>', 'Hello', '▁world', '</s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'ass', 'istant', '<0x0A>', 'Hi', '▁there', ',', '▁how', '▁can', '▁I', '▁help', '?', '</s>'] """ ``` #### Using transformers' chat_template ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer device = "cuda" # the device to load the model onto # use bfloat16 to ensure the best performance. model = AutoModelForCausalLM.from_pretrained("SeaLLMs/SeaLLM-7B-v2", torch_dtype=torch.bfloat16, device_map=device) tokenizer = AutoTokenizer.from_pretrained("SeaLLMs/SeaLLM-7B-v2") messages = [ {"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": "Hello world"}, {"role": "assistant", "content": "Hi there, how can I help you today?"}, {"role": "user", "content": "Explain general relativity in details."} ] encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt", add_generation_prompt=True) print(tokenizer.convert_ids_to_tokens(encodeds[0])) # ['<s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'system', '<0x0A>', 'You', '▁are', '▁a', '▁helpful', '▁assistant', '.', '</s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'user', '<0x0A>', 'Hello', '▁world', '</s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'ass', 'istant', '<0x0A>', 'Hi', '▁there', ',', '▁how', '▁can', '▁I', '▁help', '▁you', '▁today', '?', '</s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'user', '<0x0A>', 'Ex', 'plain', '▁general', '▁rel', 'ativity', '▁in', '▁details', '.', '</s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'ass', 'istant', '<0x0A>'] model_inputs = encodeds.to(device) model.to(device) generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True,
pad_token_id=tokenizer.pad_token_id) decoded = tokenizer.batch_decode(generated_ids) print(decoded[0]) ``` #### Using vLLM ```python from vllm import LLM, SamplingParams TURN_TEMPLATE = "<|im_start|>{role}\n{content}</s>" TURN_PREFIX = "<|im_start|>{role}\n" # There is no \n between </s> and <|im_start|>. def seallm_chat_convo_format(conversations, add_assistant_prefix: bool, system_prompt=None): # conversations: list of dict with key `role` and `content` (openai format) if conversations[0]['role'] != 'system' and system_prompt is not None: conversations = [{"role": "system", "content": system_prompt}] + conversations text = '' for turn_id, turn in enumerate(conversations): prompt = TURN_TEMPLATE.format(role=turn['role'], content=turn['content']) text += prompt if add_assistant_prefix: prompt = TURN_PREFIX.format(role='assistant') text += prompt return text sparams = SamplingParams(temperature=0.1, max_tokens=1024, stop=['</s>', '<|im_start|>']) llm = LLM("SeaLLMs/SeaLLM-7B-v2", dtype="bfloat16") message = [{"role": "user", "content": "Explain general relativity in details."}] prompt = seallm_chat_convo_format(message, True) gen = llm.generate(prompt, sparams) print(gen[0].outputs[0].text) ``` #### Fine-tuning SeaLLM-7B-v2 Fine-tuning should follow the chat format and accurately mask out source tokens. Here is an example. ```python conversations = [ {"role": "system", "content": "You are helful assistant."}, {"role": "user", "content": "Hello world."}, {"role": "assistant", "content": "Hi there, how can I help?"}, {"role": "user", "content": "Tell me a joke."}, {"role": "assistant", "content": "Why don't scientists trust atoms?
Because they make up everything."}, ] def seallm_7b_v2_tokenize_multi_turns(tokenizer, conversations, add_assistant_prefix=False): """ Inputs: conversations: list of dict following openai format, eg conversations = [ {"role": "system", "content": "You are helful assistant."}, {"role": "user", "content": "Hello world."}, {"role": "assistant", "content": "Hi there, how can I help?"}, {"role": "user", "content": "Tell me a joke."}, {"role": "assistant", "content": "Why don't scientists trust atoms? Because they make up everything."}, ] add_assistant_prefix: whether to add assistant_prefix, only for inference decoding Outputs: tokenize_output_sample, { "input_ids": ... "token_type_ids": 1 if train and 0 if masked out (not train) } During training, need to create a labels, with masked-out tokens = -100 to avoid loss computations. labels = sample['input_ids'].clone() labels[sample['token_type_ids'] == 0] = -100 """ TURN_TEMPLATE = "<|im_start|>{role}\n{content}</s>" TURN_PREFIX = "<|im_start|>{role}\n" sample = None assistant_prefix_len = None for turn_id, turn in enumerate(conversations): prompt = TURN_TEMPLATE.format(role=turn['role'], content=turn['content']) turn_sample = tokenizer( prompt, padding=False, truncation=False, verbose=False, add_special_tokens=False, return_token_type_ids=True, ) if turn['role'] == 'assistant': if assistant_prefix_len is None: assistant_prefix_len = len(tokenizer.encode(TURN_PREFIX.format(role=turn['role']), add_special_tokens=False)) turn_sample['token_type_ids'][assistant_prefix_len:] = [1] * (len(turn_sample['input_ids']) - assistant_prefix_len) if sample is None: sample = turn_sample else: for k in turn_sample.keys(): sample[k].extend(turn_sample[k]) if add_assistant_prefix: assistant_prefix_sample = tokenizer( TURN_PREFIX.format(role="assistant"), padding=False, truncation=False, verbose=False, add_special_tokens=False, return_token_type_ids=True, ) for k in sample.keys(): sample[k].extend(assistant_prefix_sample[k]) if 
tokenizer.add_bos_token: sample['input_ids'] = [tokenizer.bos_token_id] + sample['input_ids'] sample['attention_mask'] = [1] + sample['attention_mask'] sample['token_type_ids'] = [sample['token_type_ids'][0]] + sample['token_type_ids'] return sample # ! testing sample = seallm_7b_v2_tokenize_multi_turns(tokenizer, conversations) print(tokenizer.convert_ids_to_tokens(sample['input_ids'])) print(sample['token_type_ids']) # ['<s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'system', '<0x0A>', 'You', '▁are', '▁hel', 'ful', '▁assistant', '.', '</s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'user', '<0x0A>', 'Hello', '▁world', '.', '</s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'ass', 'istant', '<0x0A>', 'Hi', '▁there', ',', '▁how', '▁can', '▁I', '▁help', '?', '</s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'user', '<0x0A>', 'Tell', '▁me', '▁a', '▁joke', '.', '</s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'ass', 'istant', '<0x0A>', 'Why', '▁don', "'", 't', '▁scientists', '▁trust', '▁atoms', '?', '▁Because', '▁they', '▁make', '▁up', '▁everything', '.', '</s>'] # [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1] ``` ## Acknowledgement to Our Linguists We would like to express our special thanks to our professional and native linguists, Tantong Champaiboon, Nguyen Ngoc Yen Nhi and Tara Devina Putri, who helped build, evaluate, and fact-check our sampled pretraining and SFT dataset as well as evaluating our models across different aspects, especially safety. ## Citation If you find our project useful, we hope you would kindly star our repo and cite our work as follows: Corresponding Author: [[email protected]](mailto:[email protected]) **Author list and order will change!** * `*` and `^` are equal contributions. 
``` @article{damonlpsg2023seallm, author = {Xuan-Phi Nguyen*, Wenxuan Zhang*, Xin Li*, Mahani Aljunied*, Zhiqiang Hu, Chenhui Shen^, Yew Ken Chia^, Xingxuan Li, Jianyu Wang, Qingyu Tan, Liying Cheng, Guanzheng Chen, Yue Deng, Sen Yang, Chaoqun Liu, Hang Zhang, Lidong Bing}, title = {SeaLLMs - Large Language Models for Southeast Asia}, year = 2023, Eprint = {arXiv:2312.00738}, } ```
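As a supplement to the fine-tuning section above: the label-masking step described in the docstring (train only on assistant tokens, mask everything else to -100, the ignore index of the standard cross-entropy loss) can be sketched without torch as plain-list logic. This is a minimal illustration, not code from the SeaLLMs repo:

```python
IGNORE_INDEX = -100  # positions with this label are skipped by the loss

def build_labels(input_ids, token_type_ids):
    """Copy input_ids, masking out non-assistant tokens (token_type_id == 0)."""
    return [tok if tt == 1 else IGNORE_INDEX
            for tok, tt in zip(input_ids, token_type_ids)]

# Toy example: only the last two positions belong to an assistant turn.
input_ids      = [1, 523, 28766, 321, 15359, 2]
token_type_ids = [0,   0,     0,   0,     1, 1]
print(build_labels(input_ids, token_type_ids))
# [-100, -100, -100, -100, 15359, 2]
```

In the torch version from the card, the same operation is `labels = input_ids.clone(); labels[token_type_ids == 0] = -100`.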
[ "CHIA" ]
QuantFactory/MN-Chunky-Lotus-12B-GGUF
QuantFactory
null
[ "transformers", "gguf", "storywriting", "text adventure", "creative", "story", "writing", "fiction", "roleplaying", "rp", "mergekit", "merge", "en", "arxiv:2306.01708", "base_model:Epiculous/Violet_Twilight-v0.2", "base_model:merge:Epiculous/Violet_Twilight-v0.2", "base_model:TheDrummer/Rocinante-12B-v1.1", "base_model:merge:TheDrummer/Rocinante-12B-v1.1", "base_model:flammenai/Mahou-1.5-mistral-nemo-12B", "base_model:merge:flammenai/Mahou-1.5-mistral-nemo-12B", "base_model:nbeerbower/mistral-nemo-gutenberg-12B-v4", "base_model:merge:nbeerbower/mistral-nemo-gutenberg-12B-v4", "license:cc-by-4.0", "endpoints_compatible", "region:us" ]
2024-12-03T12:08:17Z
2024-12-04T03:51:51+00:00
304
4
--- base_model: - TheDrummer/Rocinante-12B-v1.1 - Epiculous/Violet_Twilight-v0.2 - nbeerbower/mistral-nemo-gutenberg-12B-v4 - flammenai/Mahou-1.5-mistral-nemo-12B language: - en library_name: transformers license: cc-by-4.0 tags: - storywriting - text adventure - creative - story - writing - fiction - roleplaying - rp - mergekit - merge --- [![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory) # QuantFactory/MN-Chunky-Lotus-12B-GGUF This is a quantized version of [FallenMerick/MN-Chunky-Lotus-12B](https://huggingface.co/FallenMerick/MN-Chunky-Lotus-12B) created using llama.cpp # Original Model Card ![pic](https://huggingface.co/FallenMerick/MN-Chunky-Lotus-12B/resolve/main/chunky-lotus.jpg) # MN-Chunky-Lotus-12B I had originally planned to use this model for future/further merges, but decided to go ahead and release it since it scored rather high on my local EQ Bench testing (79.58 w/ 100% parsed @ 8-bit). </br> Bear in mind that most models tend to score a bit higher on my own local tests as compared to their posted scores. Still, it's the highest score I've personally seen from all the models I've tested. </br> It's a decent model, with great emotional intelligence and acceptable adherence to various character personalities. It does a good job at roleplaying despite being a bit bland at times. </br> </br> Overall, I like the way it writes, but it has a few formatting issues that show up from time to time, and it has an uncommon tendency to paste walls of character feelings/intentions at the end of some outputs without any prompting. This is something I hope to correct with future iterations. </br> </br> This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
GGUF quants: * https://huggingface.co/backyardai/MN-Chunky-Lotus-12B-GGUF * https://huggingface.co/mradermacher/MN-Chunky-Lotus-12B-GGUF * https://huggingface.co/mradermacher/MN-Chunky-Lotus-12B-i1-GGUF * https://huggingface.co/FallenMerick/MN-Chunky-Lotus-12B-GGUF ## Merge Details ### Merge Method This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method. ### Models Merged The following models were included in the merge: * [Epiculous/Violet_Twilight-v0.2](https://huggingface.co/Epiculous/Violet_Twilight-v0.2) * [nbeerbower/mistral-nemo-gutenberg-12B-v4](https://huggingface.co/nbeerbower/mistral-nemo-gutenberg-12B-v4) * [flammenai/Mahou-1.5-mistral-nemo-12B](https://huggingface.co/flammenai/Mahou-1.5-mistral-nemo-12B) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: Epiculous/Violet_Twilight-v0.2 parameters: weight: 1.0 density: 1.0 - model: nbeerbower/mistral-nemo-gutenberg-12B-v4 parameters: weight: 1.0 density: 0.54 - model: flammenai/Mahou-1.5-mistral-nemo-12B parameters: weight: 1.0 density: 0.26 merge_method: ties base_model: TheDrummer/Rocinante-12B-v1.1 parameters: normalize: true dtype: bfloat16 ``` The idea behind this recipe was to take the long-form writing capabilities of Gutenberg, curtail it a bit with the very short output formatting of Mahou, and use Violet Twilight as an extremely solid roleplaying foundation underneath. </br> Rocinante is used as the base model in this merge in order to really target the delta weights from Gutenberg, since those seemed to have the highest impact on the resulting EQ of the model. </br> </br> Special shoutout to [@matchaaaaa](https://huggingface.co/matchaaaaa) for helping with testing, and for all the great model recommendations. Also, for just being an all around great person who's really inspired and motivated me to continue merging and working on models.
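As a rough illustration of what `weight` and `density` control in the TIES recipe above: each model's delta from the base is sparsified to its top-`density` fraction of parameters by magnitude, a dominant sign is elected per parameter, and only agreeing contributions are averaged back onto the base. This is a toy sketch over plain Python lists, not mergekit's actual implementation:

```python
def ties_merge_toy(base, deltas, densities, weights):
    """Toy TIES-style merge over flat parameter lists.

    base: base-model parameters
    deltas: one list of (model - base) deltas per merged model
    densities: fraction of each delta kept (largest magnitudes survive)
    weights: per-model scaling factors
    """
    n = len(base)
    trimmed = []
    for delta, density, w in zip(deltas, densities, weights):
        # Trim: keep only the top-density fraction of entries by magnitude.
        k = max(1, round(density * n))
        keep = set(sorted(range(n), key=lambda i: abs(delta[i]), reverse=True)[:k])
        trimmed.append([w * delta[i] if i in keep else 0.0 for i in range(n)])
    merged = []
    for i in range(n):
        vals = [t[i] for t in trimmed]
        # Sign election: average only contributions agreeing with the dominant sign.
        sign = 1.0 if sum(vals) >= 0 else -1.0
        agreeing = [v for v in vals if v * sign > 0]
        step = sum(agreeing) / len(agreeing) if agreeing else 0.0
        merged.append(base[i] + step)
    return merged

merged = ties_merge_toy(base=[0.0, 0.0],
                        deltas=[[1.0, -1.0], [1.0, 1.0]],
                        densities=[1.0, 1.0], weights=[1.0, 1.0])
print(merged)  # [1.0, 1.0] -- the disagreeing -1.0 is dropped at position 1
```

In the recipe above, Gutenberg keeps only 54% of its delta and Mahou 26%, which matches the stated intent of letting Violet Twilight (density 1.0) dominate the foundation.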
[ "BEAR" ]
ContactDoctor/Bio-Medical-3B-CoT-012025
ContactDoctor
text-generation
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "medical", "Healthcare & Lifesciences", "BioMed", "chain-of-thought", "conversational", "dataset:collaiborateorg/BioMedData", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2025-01-06T09:21:04Z
2025-01-07T11:22:48+00:00
304
10
--- base_model: qwen/Qwen2.5-3b-Instruct datasets: - collaiborateorg/BioMedData library_name: transformers license: other tags: - generated_from_trainer - medical - Healthcare & Lifesciences - BioMed - chain-of-thought thumbnail: https://collaiborate.com/logo/logo-blue-bg-1.png model-index: - name: Bio-Medical-3B-CoT-012025 results: [] --- # Bio-Medical-3B-CoT-012025 ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/653f5b93cd52f288490edc83/zPMUugzfOiwTiRw88jm7T.jpeg) This model is a fine-tuned version of [Qwen2.5-3b-Instruct](https://huggingface.co/qwen/Qwen2.5-3b-Instruct) on our custom "BioMedData" dataset, enhanced with chain-of-thought prompting instructions to introduce advanced reasoning capabilities. It has been specifically optimized for applications in the Healthcare & Lifesciences (HLS) domain. ## Model details **Model Name:** Bio-Medical-3B-CoT-012025 **Base Model:** Qwen2.5-3b-Instruct **Parameter Count:** 3 billion **Training Data:** Custom high-quality biomedical dataset with chain-of-thought examples. **Number of Entries in Dataset:** 600,000+ **Dataset Composition:** The dataset comprises both synthetic and manually curated samples, ensuring diverse and comprehensive coverage of biomedical knowledge. ## Model description The Bio-Medical-3B-CoT-012025 model is designed to provide accurate, context-aware, and reasoning-driven text generation in the biomedical domain. It has been fine-tuned on a dataset that includes chain-of-thought prompting to enable logical reasoning and better interpretability of its outputs. This model is tailored for: - Understanding and generating domain-specific content in the healthcare and biomedical fields. - Answering complex questions that require step-by-step reasoning. - Supporting professionals, researchers, and students in clinical and scientific tasks. 
## Evaluation Metrics Bio-Medical-3B-CoT-012025 has been evaluated using the Eleuther AI Language Model Evaluation Harness framework on the following tasks: - medmcqa - medqa_4options - mmlu_anatomy - mmlu_clinical_knowledge - mmlu_college_biology - mmlu_college_medicine - mmlu_medical_genetics - mmlu_professional_medicine - pubmedqa Results show consistent performance improvements over general-purpose models of similar size, particularly in tasks requiring reasoning. ## Intended uses & limitations **Intended Uses:** 1. **Research Support:** Assisting researchers in extracting and generating insights from biomedical texts. 2. **Clinical Decision Support:** Aiding in the interpretation of clinical data and evidence-based recommendations. 3. **Educational Tool:** Enabling students and professionals to understand complex biomedical concepts. **Limitations and Ethical Considerations:** - **Biases:** The model may reflect biases present in its training data. While efforts were made to mitigate biases, some may persist. - **Accuracy:** The model's responses should be validated against reliable sources, especially in critical or clinical contexts. - **Ethical Use:** The model is intended to complement, not replace, expert judgment. It should be deployed responsibly in high-stakes environments. 
## How to use ```python import transformers import torch model_id = "ContactDoctor/Bio-Medical-3B-CoT-012025" pipeline = transformers.pipeline( "text-generation", model=model_id, model_kwargs={"torch_dtype": torch.bfloat16}, device_map="auto", ) messages = [ {"role": "system", "content": "You are an expert trained on healthcare and biomedical domain!"}, {"role": "user", "content": "What are the potential causes of chronic fatigue in a 40-year-old male?"}, ] prompt = pipeline.tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) terminators = [ pipeline.tokenizer.eos_token_id, pipeline.tokenizer.convert_tokens_to_ids("<|im_end|>") ] outputs = pipeline( prompt, max_new_tokens=256, eos_token_id=terminators, do_sample=True, temperature=0.6, top_p=0.9, ) print(outputs[0]["generated_text"][len(prompt):]) ``` ## License This model is licensed under the [Bio-Medical-3B-CoT-012025 (Non-Commercial Use Only)](./LICENSE). Please review the terms and conditions before using the model. 
### Contact Information For further information, inquiries, or issues related to Bio-Medical-3B-CoT-012025, please contact: Email: [email protected] Website: [https://www.contactdoctor.in](https://www.contactdoctor.in) ### Training hyperparameters The following hyperparameters were used during training: - **Learning Rate:** 0.0002 - **Train Batch Size:** 12 - **Eval Batch Size:** 8 - **Seed:** 42 - **Gradient Accumulation Steps:** 4 - **Total Train Batch Size:** 32 - **Optimizer:** Adam with betas=(0.9, 0.999) and epsilon=1e-08 - **LR Scheduler Type:** Cosine - **LR Scheduler Warmup Ratio:** 0.03 - **Training Steps:** 2000 - **Mixed Precision Training:** Native AMP ### Framework versions - **PEFT:** 0.11.0 - **Transformers:** 4.40.2 - **Pytorch:** 2.1.2 - **Datasets:** 2.19.1 - **Tokenizers:** 0.19.1 ### Citation If you use Bio-Medical-3B-CoT-012025 in your research or applications, please cite it as follows: ```bibtex @misc{ContactDoctor_Bio-Medical-3B-CoT-012025, author = {ContactDoctor}, title = {Bio-Medical-3B-CoT-012025: A High-Performance Biomedical Language Model with Reasoning Capabilities}, year = {2025}, howpublished = {https://huggingface.co/ContactDoctor/Bio-Medical-3B-CoT-012025}, } ```
[ "MEDQA", "PUBMEDQA" ]
tifa-benchmark/llama2_tifa_question_generation
tifa-benchmark
text-generation
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "llama2", "text-to-image", "en", "dataset:TIFA", "arxiv:2303.11897", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-08-16T00:41:50Z
2023-08-24T21:28:03+00:00
302
10
--- datasets: - TIFA language: - en license: apache-2.0 pipeline_tag: text-generation tags: - text-generation-inference - llama2 - text-to-image inference: true widget: - text: '<s>[INST] <<SYS>> Given an image description, generate one or two multiple-choice questions that verifies if the image description is correct. Classify each concept into a type (object, human, animal, food, activity, attribute, counting, color, material, spatial, location, shape, other), and then generate a question for each type. <</SYS>> Description: a blue rabbit and a red plane [/INST] Entities:' --- Project page: <https://tifa-benchmark.github.io/> This is the text parsing and question generation model for the ICCV 2023 paper [TIFA: Accurate and Interpretable Text-to-Image Faithfulness Evaluation with Question Answering](https://arxiv.org/abs/2303.11897) We introduce TIFA (Text-to-Image Faithfulness evaluation with question Answering), an automatic evaluation metric that measures the faithfulness of a generated image to its text input via visual question answering (VQA). Specifically, given a text input, we automatically generate several question-answer pairs using a language model. We calculate image faithfulness by checking whether existing VQA models can answer these questions using the generated image. In particular, this fine-tuned LLaMA 2 model is the substitute for the GPT-3 model in the paper. It can parse an arbitrary prompt into visual entities, attributes, relations, etc. and generate question-answer tuples for each of them. See examples below. # QuickStart All code is from <https://github.com/Yushi-Hu/tifa>. Clone this repo to easily use this model together with other modules (e.g. VQA) provided in TIFA. Please follow the prompt format, which will give the best performance. 
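As described above, the final TIFA score is simply the fraction of generated questions that a VQA model answers correctly on the generated image. The scoring step can be sketched in a few lines (the function name and input shapes here are illustrative; the official implementation lives in the tifascore package):

```python
def tifa_score(qa_tuples, vqa_answers):
    """qa_tuples: dicts with an 'answer' key (as produced by the question
    generator); vqa_answers: the VQA model's answer to each question, in
    the same order. Returns accuracy in [0, 1]."""
    if not qa_tuples:
        return 0.0
    correct = sum(
        pred.strip().lower() == qa["answer"].strip().lower()
        for qa, pred in zip(qa_tuples, vqa_answers)
    )
    return correct / len(qa_tuples)

qa = [
    {"question": "what color is the rabbit?", "answer": "blue"},
    {"question": "is this a plane?", "answer": "yes"},
]
print(tifa_score(qa, ["blue", "no"]))  # 1 of 2 correct -> 0.5
```

With the question-generation model producing the question-answer tuples and any VQA model producing the predicted answers, this accuracy is the image's faithfulness score.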
```python import torch import transformers # prepare the LLaMA 2 model model_name = "tifa-benchmark/llama2_tifa_question_generation" pipeline = transformers.pipeline( "text-generation", model=model_name, torch_dtype=torch.float16, device_map="auto", ) # formatting prompt following LLaMA 2 style def create_qg_prompt(caption): INTRO_BLURB = "Given an image description, generate one or two multiple-choice questions that verifies if the image description is correct.\nClassify each concept into a type (object, human, animal, food, activity, attribute, counting, color, material, spatial, location, shape, other), and then generate a question for each type.\n" formated_prompt = f"<s>[INST] <<SYS>>\n{INTRO_BLURB}\n<</SYS>>\n\n" formated_prompt += f"Description: {caption} [/INST] Entities:" return formated_prompt test_caption = "a blue rabbit and a red plane" # create prompt prompt = create_qg_prompt(test_caption) # text completion sequences = pipeline( prompt, do_sample=False, num_beams=5, num_return_sequences=1, max_length=512) output = sequences[0]['generated_text'][len(prompt):] output = output.split('\n\n')[0] # output print(output) #### Expected output ### # rabbit, plane # Activites: # Colors: blue, red # Counting: # Other attributes: # About rabbit (animal): # Q: is this a rabbit? # Choices: yes, no # A: yes # About rabbit (animal): # Q: what animal is in the picture? # Choices: rabbit, dog, cat, fish # A: rabbit # About plane (object): # Q: is this a plane? # Choices: yes, no # A: yes # About plane (object): # Q: what type of vehicle is this? # Choices: plane, car, motorcycle, bus # A: plane # About blue (color): # Q: is the rabbit blue? # Choices: yes, no # A: yes # About blue (color): # Q: what color is the rabbit? # Choices: blue, red, yellow, green # A: blue # About red (color): # Q: is the plane red? # Choices: yes, no # A: yes # About red (color): # Q: what color is the plane? 
# Choices: red, blue, yellow, green # A: red ``` # Use this LM under tifascore package tifascore provides extra functions to parse this output etc. First install tifascore according to <https://github.com/Yushi-Hu/tifa>. Then the usage is below ```python from tifascore import get_llama2_pipeline, get_llama2_question_and_answers pipeline = get_llama2_pipeline("tifa-benchmark/llama2_tifa_question_generation") print(get_llama2_question_and_answers(pipeline, "a blue rabbit and a red plane")) #### Expected output ### # [{'caption': 'a blue rabbit and a red plane', 'element': 'rabbit', 'question': 'what animal is in the picture?', 'choices': ['rabbit', 'dog', 'cat', 'fish'], 'answer': 'rabbit', 'element_type': 'animal/human'}, {'caption': 'a blue rabbit and a red plane', 'element': 'plane', 'question': 'is this a plane?', 'choices': ['yes', 'no'], 'answer': 'yes', 'element_type': 'object'}, {'caption': 'a blue rabbit and a red plane', 'element': 'plane', 'question': 'what type of vehicle is this?', 'choices': ['plane', 'car', 'motorcycle', 'bus'], 'answer': 'plane', 'element_type': 'object'}, {'caption': 'a blue rabbit and a red plane', 'element': 'blue', 'question': 'is the rabbit blue?', 'choices': ['yes', 'no'], 'answer': 'yes', 'element_type': 'color'}, {'caption': 'a blue rabbit and a red plane', 'element': 'blue', 'question': 'what color is the rabbit?', 'choices': ['blue', 'red', 'yellow', 'green'], 'answer': 'blue', 'element_type': 'color'}, {'caption': 'a blue rabbit and a red plane', 'element': 'red', 'question': 'is the plane red?', 'choices': ['yes', 'no'], 'answer': 'yes', 'element_type': 'color'}, {'caption': 'a blue rabbit and a red plane', 'element': 'red', 'question': 'what color is the plane?', 'choices': ['red', 'blue', 'yellow', 'green'], 'answer': 'red', 'element_type': 'color'}] ``` ## Bibtex ``` @article{hu2023tifa, title={Tifa: Accurate and interpretable text-to-image faithfulness evaluation with question answering}, author={Hu, Yushi and Liu, 
Benlin and Kasai, Jungo and Wang, Yizhong and Ostendorf, Mari and Krishna, Ranjay and Smith, Noah A}, journal={arXiv preprint arXiv:2303.11897}, year={2023} } ```
[ "BLURB" ]
Medilora/Medilora-Mistral-7B
Medilora
text-generation
[ "transformers", "safetensors", "gguf", "mistral", "text-generation", "medical", "conversational", "en", "dataset:Medilora/us_medical_license_exam_textbooks_en", "dataset:Medilora/mimic_iii_diagnosis_anonymous", "dataset:Medilora/PubMedQA-ShareGPT", "arxiv:1910.09700", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2023-12-04T06:55:15Z
2023-12-06T22:48:09+00:00
302
4
--- datasets: - Medilora/us_medical_license_exam_textbooks_en - Medilora/mimic_iii_diagnosis_anonymous - Medilora/PubMedQA-ShareGPT language: - en library_name: transformers license: mit metrics: - accuracy - f1 pipeline_tag: conversational tags: - medical --- # Model Card for pubmedqaa-medilora Details coming soon ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. 
--> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "PUBMEDQA" ]
mradermacher/Dr_Samantha-7b-GGUF
mradermacher
null
[ "transformers", "gguf", "llama", "merge", "medical", "en", "zh", "dataset:GBaker/MedQA-USMLE-4-options", "dataset:cognitivecomputations/samantha-data", "dataset:shibing624/medical", "base_model:sethuiyer/Dr_Samantha-7b", "base_model:quantized:sethuiyer/Dr_Samantha-7b", "license:llama2", "endpoints_compatible", "region:us" ]
2024-12-19T10:31:14Z
2024-12-19T14:18:59+00:00
302
0
--- base_model: sethuiyer/Dr_Samantha-7b datasets: - GBaker/MedQA-USMLE-4-options - cognitivecomputations/samantha-data - shibing624/medical language: - en - zh library_name: transformers license: llama2 tags: - llama - merge - medical quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/sethuiyer/Dr_Samantha-7b <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Dr_Samantha-7b-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Dr_Samantha-7b-GGUF/resolve/main/Dr_Samantha-7b.Q2_K.gguf) | Q2_K | 2.6 | | | [GGUF](https://huggingface.co/mradermacher/Dr_Samantha-7b-GGUF/resolve/main/Dr_Samantha-7b.Q3_K_S.gguf) | Q3_K_S | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/Dr_Samantha-7b-GGUF/resolve/main/Dr_Samantha-7b.Q3_K_M.gguf) | Q3_K_M | 3.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Dr_Samantha-7b-GGUF/resolve/main/Dr_Samantha-7b.Q3_K_L.gguf) | Q3_K_L | 3.7 | | | [GGUF](https://huggingface.co/mradermacher/Dr_Samantha-7b-GGUF/resolve/main/Dr_Samantha-7b.IQ4_XS.gguf) | IQ4_XS | 3.7 | | | [GGUF](https://huggingface.co/mradermacher/Dr_Samantha-7b-GGUF/resolve/main/Dr_Samantha-7b.Q4_K_S.gguf) | Q4_K_S | 4.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Dr_Samantha-7b-GGUF/resolve/main/Dr_Samantha-7b.Q4_K_M.gguf) | Q4_K_M | 4.2 | fast, recommended | | 
[GGUF](https://huggingface.co/mradermacher/Dr_Samantha-7b-GGUF/resolve/main/Dr_Samantha-7b.Q5_K_S.gguf) | Q5_K_S | 4.8 | | | [GGUF](https://huggingface.co/mradermacher/Dr_Samantha-7b-GGUF/resolve/main/Dr_Samantha-7b.Q5_K_M.gguf) | Q5_K_M | 4.9 | | | [GGUF](https://huggingface.co/mradermacher/Dr_Samantha-7b-GGUF/resolve/main/Dr_Samantha-7b.Q6_K.gguf) | Q6_K | 5.6 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Dr_Samantha-7b-GGUF/resolve/main/Dr_Samantha-7b.Q8_0.gguf) | Q8_0 | 7.3 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Dr_Samantha-7b-GGUF/resolve/main/Dr_Samantha-7b.f16.gguf) | f16 | 13.6 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
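As a rule of thumb, the file sizes in the table above follow from each quant type's bits per weight: size ≈ parameter count × bits per weight / 8. A minimal sketch of that arithmetic (the bits-per-weight figures are rough assumptions for illustration, not exact GGUF numbers):

```python
# Rough GGUF file-size estimate. The bpw values below are ballpark
# assumptions for illustration only; real files include small overheads.
APPROX_BPW = {
    "Q2_K": 3.35,
    "Q4_K_M": 4.85,
    "Q8_0": 8.5,
    "f16": 16.0,
}

def est_size_gb(n_params: float, quant: str) -> float:
    """Estimated file size in GB for a given quant type."""
    return n_params * APPROX_BPW[quant] / 8 / 1e9

# Llama-2-7B has roughly 6.74B parameters:
for q in ("Q2_K", "Q4_K_M", "Q8_0", "f16"):
    print(q, round(est_size_gb(6.74e9, q), 1))
```

The estimates land close to the table above, which is a handy sanity check when deciding which quant fits your available RAM or VRAM.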
[ "MEDQA" ]
mradermacher/1.5-Pints-16K-v0.1-GGUF
mradermacher
null
[ "transformers", "gguf", "en", "dataset:pints-ai/Expository-Prose-V1", "dataset:HuggingFaceH4/ultrachat_200k", "dataset:Open-Orca/SlimOrca-Dedup", "dataset:meta-math/MetaMathQA", "dataset:HuggingFaceH4/deita-10k-v0-sft", "dataset:WizardLM/WizardLM_evol_instruct_V2_196k", "dataset:togethercomputer/llama-instruct", "dataset:LDJnr/Capybara", "dataset:HuggingFaceH4/ultrafeedback_binarized", "base_model:pints-ai/1.5-Pints-16K-v0.1", "base_model:quantized:pints-ai/1.5-Pints-16K-v0.1", "license:mit", "endpoints_compatible", "region:us", "conversational" ]
2025-03-08T10:59:12Z
2025-03-08T13:37:41+00:00
302
0
--- base_model: pints-ai/1.5-Pints-16K-v0.1 datasets: - pints-ai/Expository-Prose-V1 - HuggingFaceH4/ultrachat_200k - Open-Orca/SlimOrca-Dedup - meta-math/MetaMathQA - HuggingFaceH4/deita-10k-v0-sft - WizardLM/WizardLM_evol_instruct_V2_196k - togethercomputer/llama-instruct - LDJnr/Capybara - HuggingFaceH4/ultrafeedback_binarized language: - en library_name: transformers license: mit extra_gated_fields: Company: text Country: country I agree to use this model for in accordance to the afore-mentioned Terms of Use: checkbox I want to use this model for: options: - Research - Education - label: Other value: other type: select Specific date: date_picker extra_gated_prompt: Though best efforts has been made to ensure, as much as possible, that all texts in the training corpora are royalty free, this does not constitute a legal guarantee that such is the case. **By using any of the models, corpora or part thereof, the user agrees to bear full responsibility to do the necessary due diligence to ensure that he / she is in compliance with their local copyright laws. Additionally, the user agrees to bear any damages arising as a direct cause (or otherwise) of using any artifacts released by the pints research team, as well as full responsibility for the consequences of his / her usage (or implementation) of any such released artifacts. The user also indemnifies Pints Research Team (and any of its members or agents) of any damage, related or unrelated, to the release or subsequent usage of any findings, artifacts or code by the team. For the avoidance of doubt, any artifacts released by the Pints Research team are done so in accordance with the 'fair use' clause of Copyright Law, in hopes that this will aid the research community in bringing LLMs to the next frontier. 
quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/pints-ai/1.5-Pints-16K-v0.1 <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/1.5-Pints-16K-v0.1-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/1.5-Pints-16K-v0.1-GGUF/resolve/main/1.5-Pints-16K-v0.1.Q2_K.gguf) | Q2_K | 0.7 | | | [GGUF](https://huggingface.co/mradermacher/1.5-Pints-16K-v0.1-GGUF/resolve/main/1.5-Pints-16K-v0.1.Q3_K_S.gguf) | Q3_K_S | 0.8 | | | [GGUF](https://huggingface.co/mradermacher/1.5-Pints-16K-v0.1-GGUF/resolve/main/1.5-Pints-16K-v0.1.Q3_K_M.gguf) | Q3_K_M | 0.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/1.5-Pints-16K-v0.1-GGUF/resolve/main/1.5-Pints-16K-v0.1.Q3_K_L.gguf) | Q3_K_L | 0.9 | | | [GGUF](https://huggingface.co/mradermacher/1.5-Pints-16K-v0.1-GGUF/resolve/main/1.5-Pints-16K-v0.1.IQ4_XS.gguf) | IQ4_XS | 1.0 | | | [GGUF](https://huggingface.co/mradermacher/1.5-Pints-16K-v0.1-GGUF/resolve/main/1.5-Pints-16K-v0.1.Q4_K_S.gguf) | Q4_K_S | 1.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/1.5-Pints-16K-v0.1-GGUF/resolve/main/1.5-Pints-16K-v0.1.Q4_K_M.gguf) | Q4_K_M | 1.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/1.5-Pints-16K-v0.1-GGUF/resolve/main/1.5-Pints-16K-v0.1.Q5_K_S.gguf) | Q5_K_S | 1.2 | | | 
[GGUF](https://huggingface.co/mradermacher/1.5-Pints-16K-v0.1-GGUF/resolve/main/1.5-Pints-16K-v0.1.Q5_K_M.gguf) | Q5_K_M | 1.2 | | | [GGUF](https://huggingface.co/mradermacher/1.5-Pints-16K-v0.1-GGUF/resolve/main/1.5-Pints-16K-v0.1.Q6_K.gguf) | Q6_K | 1.4 | very good quality | | [GGUF](https://huggingface.co/mradermacher/1.5-Pints-16K-v0.1-GGUF/resolve/main/1.5-Pints-16K-v0.1.Q8_0.gguf) | Q8_0 | 1.8 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/1.5-Pints-16K-v0.1-GGUF/resolve/main/1.5-Pints-16K-v0.1.f16.gguf) | f16 | 3.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
[ "BEAR" ]
mradermacher/Llama-3-VNTL-Vectors-GGUF
mradermacher
null
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:Cas-Warehouse/Llama-3-VNTL-Vectors", "base_model:quantized:Cas-Warehouse/Llama-3-VNTL-Vectors", "endpoints_compatible", "region:us", "conversational" ]
2025-03-08T22:12:10Z
2025-03-09T00:51:54+00:00
300
0
--- base_model: Cas-Warehouse/Llama-3-VNTL-Vectors language: - en library_name: transformers tags: - mergekit - merge quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/Cas-Warehouse/Llama-3-VNTL-Vectors <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Llama-3-VNTL-Vectors-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Llama-3-VNTL-Vectors-GGUF/resolve/main/Llama-3-VNTL-Vectors.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-VNTL-Vectors-GGUF/resolve/main/Llama-3-VNTL-Vectors.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-VNTL-Vectors-GGUF/resolve/main/Llama-3-VNTL-Vectors.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3-VNTL-Vectors-GGUF/resolve/main/Llama-3-VNTL-Vectors.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-VNTL-Vectors-GGUF/resolve/main/Llama-3-VNTL-Vectors.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-VNTL-Vectors-GGUF/resolve/main/Llama-3-VNTL-Vectors.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama-3-VNTL-Vectors-GGUF/resolve/main/Llama-3-VNTL-Vectors.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | 
[GGUF](https://huggingface.co/mradermacher/Llama-3-VNTL-Vectors-GGUF/resolve/main/Llama-3-VNTL-Vectors.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-VNTL-Vectors-GGUF/resolve/main/Llama-3-VNTL-Vectors.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-VNTL-Vectors-GGUF/resolve/main/Llama-3-VNTL-Vectors.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3-VNTL-Vectors-GGUF/resolve/main/Llama-3-VNTL-Vectors.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3-VNTL-Vectors-GGUF/resolve/main/Llama-3-VNTL-Vectors.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
[ "CAS" ]
BSC-LT/roberta-base-biomedical-es
BSC-LT
fill-mask
[ "transformers", "pytorch", "roberta", "fill-mask", "biomedical", "spanish", "es", "arxiv:2109.03570", "arxiv:2109.07765", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04Z
2021-10-21T10:28:29+00:00
299
3
--- language: - es license: apache-2.0 metrics: - ppl tags: - biomedical - spanish widget: - text: El único antecedente personal a reseñar era la <mask> arterial. - text: Las radiologías óseas de cuerpo entero no detectan alteraciones <mask>, ni alteraciones vertebrales. - text: En el <mask> toraco-abdómino-pélvico no se encontraron hallazgos patológicos de interés. --- **⚠️NOTICE⚠️: THIS MODEL HAS BEEN MOVED TO THE FOLLOWING URL AND WILL SOON BE REMOVED:** https://huggingface.co/PlanTL-GOB-ES/roberta-base-biomedical-es # Biomedical language model for Spanish Biomedical pretrained language model for Spanish. For more details about the corpus, the pretraining and the evaluation, check the official [repository](https://github.com/PlanTL-SANIDAD/lm-biomedical-clinical-es) and read our [preprint](https://arxiv.org/abs/2109.03570) "_Carrino, C. P., Armengol-Estapé, J., Gutiérrez-Fandiño, A., Llop-Palao, J., Pàmies, M., Gonzalez-Agirre, A., & Villegas, M. (2021). Biomedical and Clinical Language Models for Spanish: On the Benefits of Domain-Specific Pretraining in a Mid-Resource Scenario._". ## Tokenization and model pretraining This model is a [RoBERTa-based](https://github.com/pytorch/fairseq/tree/master/examples/roberta) model trained on a **biomedical** corpus in Spanish collected from several sources (see next section). The training corpus has been tokenized using a byte version of [Byte-Pair Encoding (BPE)](https://github.com/openai/gpt-2) used in the original [RoBERTA](https://github.com/pytorch/fairseq/tree/master/examples/roberta) model with a vocabulary size of 52,000 tokens. The pretraining consists of a masked language model training at the subword level following the approach employed for the RoBERTa base model with the same hyperparameters as in the original work. The training lasted a total of 48 hours with 16 NVIDIA V100 GPUs of 16GB DDRAM, using Adam optimizer with a peak learning rate of 0.0005 and an effective batch size of 2,048 sentences. 
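The masked-language-model pretraining described above can be sketched in a few lines. This is an illustrative reimplementation of the standard RoBERTa-style 80/10/10 masking scheme, not the project's actual training code; the mask token id is a placeholder:

```python
import random

MASK_ID = 4          # placeholder id for the <mask> token
VOCAB_SIZE = 52000   # BPE vocabulary size reported above

def mask_tokens(token_ids, mask_prob=0.15, rng=None):
    """Return (masked_inputs, labels); labels are -100 where no loss is taken."""
    rng = rng or random.Random(0)
    inputs, labels = list(token_ids), [-100] * len(token_ids)
    for i, tok in enumerate(token_ids):
        if rng.random() < mask_prob:
            labels[i] = tok                            # predict the original token here
            r = rng.random()
            if r < 0.8:
                inputs[i] = MASK_ID                    # 80%: replace with <mask>
            elif r < 0.9:
                inputs[i] = rng.randrange(VOCAB_SIZE)  # 10%: random token
            # remaining 10%: keep the original token
    return inputs, labels

inputs, labels = mask_tokens(list(range(20)), rng=random.Random(42))
```

The model is then trained to recover the original subword at each masked position, which is exactly what the Fill Mask widget exercises at inference time.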
## Training corpora and preprocessing The training corpus is composed of several biomedical corpora in Spanish, collected from publicly available corpora and crawlers. To obtain a high-quality training corpus, a cleaning pipeline with the following operations has been applied: - data parsing in different formats - sentence splitting - language detection - filtering of ill-formed sentences - deduplication of repetitive contents - keep the original document boundaries Finally, the corpora are concatenated and a further global deduplication among the corpora has been applied. The result is a medium-size biomedical corpus for Spanish composed of about 963M tokens. The table below shows some basic statistics of the individual cleaned corpora: | Name | No. tokens | Description | |:-----|-----------:|:------------| | [Medical crawler](https://zenodo.org/record/4561970) | 745,705,946 | Crawler of more than 3,000 URLs belonging to Spanish biomedical and health domains. | | Clinical cases misc. | 102,855,267 | A miscellany of medical content, essentially clinical cases. Note that a clinical case report is a scientific publication where medical practitioners share patient cases and it is different from a clinical note or document. | | [Scielo](https://github.com/PlanTL-SANIDAD/SciELO-Spain-Crawler) | 60,007,289 | Publications written in Spanish crawled from the Spanish SciELO server in 2017. | | [BARR2_background](https://temu.bsc.es/BARR2/downloads/background_set.raw_text.tar.bz2) | 24,516,442 | Biomedical Abbreviation Recognition and Resolution (BARR2) containing Spanish clinical case study sections from a variety of clinical disciplines. | | Wikipedia_life_sciences | 13,890,501 | Wikipedia articles crawled 04/01/2021 with the [Wikipedia API python library](https://pypi.org/project/Wikipedia-API/) starting from the "Ciencias\_de\_la\_vida" category up to a maximum of 5 subcategories. Multiple links to the same articles are then discarded to avoid repeating content. | | Patents | 13,463,387 | Google Patents in the medical domain for Spain (Spanish). The accepted (medical-domain) codes for the JSON patent files are: "A61B", "A61C", "A61F", "A61H", "A61K", "A61L", "A61M", "A61B", "A61P". | | [EMEA](http://opus.nlpl.eu/download.php?f=EMEA/v3/moses/en-es.txt.zip) | 5,377,448 | Spanish-side documents extracted from parallel corpora made out of PDF documents from the European Medicines Agency. | | [mespen_Medline](https://zenodo.org/record/3562536#.YTt1fH2xXbR) | 4,166,077 | Spanish-side articles extracted from a collection of Spanish-English parallel corpora consisting of biomedical scientific literature. The collection of parallel resources is aggregated from the MedlinePlus source. | | PubMed | 1,858,966 | Open-access articles from the PubMed repository crawled in 2017. | ## Evaluation and results The model has been evaluated on Named Entity Recognition (NER) using the following datasets: - [PharmaCoNER](https://zenodo.org/record/4270158): a track on chemical and drug mention recognition from Spanish medical texts (for more info see: https://temu.bsc.es/pharmaconer/). - [CANTEMIST](https://zenodo.org/record/3978041#.YTt5qH2xXbQ): a shared task specifically focusing on named entity recognition of tumor morphology in Spanish (for more info see: https://zenodo.org/record/3978041#.YTt5qH2xXbQ). - ICTUSnet: consists of 1,006 hospital discharge reports of patients admitted for stroke from 18 different Spanish hospitals. It contains more than 79,000 annotations for 51 different kinds of variables. 
The evaluation results are compared against the [mBERT](https://huggingface.co/bert-base-multilingual-cased) and [BETO](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) models:

| F1 - Precision - Recall | roberta-base-biomedical-es | mBERT | BETO |
|---------------------------|----------------------------|-------------------------------|-------------------------|
| PharmaCoNER | **89.48** - **87.85** - **91.18** | 87.46 - 86.50 - 88.46 | 88.18 - 87.12 - 89.28 |
| CANTEMIST | **83.87** - **81.70** - **86.17** | 82.61 - 81.12 - 84.15 | 82.42 - 80.91 - 84.00 |
| ICTUSnet | **88.12** - **85.56** - **90.83** | 86.75 - 83.53 - 90.23 | 85.95 - 83.10 - 89.02 |

## Intended uses & limitations

The model is ready-to-use only for masked language modelling, to perform the Fill Mask task (try the inference API or read the next section). However, the model is intended to be fine-tuned on downstream tasks such as Named Entity Recognition or Text Classification.

## Cite

If you use our models, please cite our latest preprint:

```bibtex
@misc{carrino2021biomedical,
      title={Biomedical and Clinical Language Models for Spanish: On the Benefits of Domain-Specific Pretraining in a Mid-Resource Scenario},
      author={Casimiro Pio Carrino and Jordi Armengol-Estapé and Asier Gutiérrez-Fandiño and Joan Llop-Palao and Marc Pàmies and Aitor Gonzalez-Agirre and Marta Villegas},
      year={2021},
      eprint={2109.03570},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

If you use our Medical Crawler corpus, please cite the preprint:

```bibtex
@misc{carrino2021spanish,
      title={Spanish Biomedical Crawled Corpus: A Large, Diverse Dataset for Spanish Biomedical Language Models},
      author={Casimiro Pio Carrino and Jordi Armengol-Estapé and Ona de Gibert Bonet and Asier Gutiérrez-Fandiño and Aitor Gonzalez-Agirre and Martin Krallinger and Marta Villegas},
      year={2021},
      eprint={2109.07765},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

---

## How to use

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("BSC-TeMU/roberta-base-biomedical-es")
model = AutoModelForMaskedLM.from_pretrained("BSC-TeMU/roberta-base-biomedical-es")

from transformers import pipeline

unmasker = pipeline('fill-mask', model="BSC-TeMU/roberta-base-biomedical-es")
unmasker("El único antecedente personal a reseñar era la <mask> arterial.")
```

```
# Output
[
  {
    "sequence": " El único antecedente personal a reseñar era la hipertensión arterial.",
    "score": 0.9855039715766907,
    "token": 3529,
    "token_str": " hipertensión"
  },
  {
    "sequence": " El único antecedente personal a reseñar era la diabetes arterial.",
    "score": 0.0039140828885138035,
    "token": 1945,
    "token_str": " diabetes"
  },
  {
    "sequence": " El único antecedente personal a reseñar era la hipotensión arterial.",
    "score": 0.002484665485098958,
    "token": 11483,
    "token_str": " hipotensión"
  },
  {
    "sequence": " El único antecedente personal a reseñar era la Hipertensión arterial.",
    "score": 0.0023484621196985245,
    "token": 12238,
    "token_str": " Hipertensión"
  },
  {
    "sequence": " El único antecedente personal a reseñar era la presión arterial.",
    "score": 0.0008009297889657319,
    "token": 2267,
    "token_str": " presión"
  }
]
```
[ "CANTEMIST", "PHARMACONER", "SCIELO" ]
FremyCompany/BioLORD-STAMB2-v1
FremyCompany
sentence-similarity
[ "sentence-transformers", "pytorch", "mpnet", "feature-extraction", "sentence-similarity", "en", "dataset:FremyCompany/BioLORD-Dataset", "arxiv:2210.11892", "license:other", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-10-20T19:37:34Z
2023-11-27T19:51:10+00:00
299
14
--- datasets: - FremyCompany/BioLORD-Dataset language: en license: other pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity widget: - source_sentence: bartonellosis sentences: - cat scratch disease - cat scratch wound - tick-borne orbivirus fever - cat fur --- # FremyCompany/BioLORD-STAMB2-v1 This model was trained using BioLORD, a new pre-training strategy for producing meaningful representations for clinical sentences and biomedical concepts. > ## IMPORTANT NOTE: > **This model was introduced in 2022. Since then, a new version has been published.** <br> > For most use cases, you will be better served by [BioLORD-2023](https://huggingface.co/FremyCompany/BioLORD-2023), our latest generation of BioLORD models. State-of-the-art methodologies operate by maximizing the similarity in representation of names referring to the same concept, and preventing collapse through contrastive learning. However, because biomedical names are not always self-explanatory, it sometimes results in non-semantic representations. BioLORD overcomes this issue by grounding its concept representations using definitions, as well as short descriptions derived from a multi-relational knowledge graph consisting of biomedical ontologies. Thanks to this grounding, our model produces more semantic concept representations that match more closely the hierarchical structure of ontologies. BioLORD establishes a new state of the art for text similarity on both clinical sentences (MedSTS) and biomedical concepts (MayoSRS). This model is based on [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) and was further finetuned on the [BioLORD-Dataset](https://huggingface.co/datasets/FremyCompany/BioLORD-Dataset). 
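As a schematic of the contrastive training signal mentioned above (this is an InfoNCE-style term for illustration, not the exact BioLORD loss; the temperature value is an arbitrary assumption), the objective rewards a high similarity for the positive name pair relative to negatives:

```python
import math

def info_nce(sim_pos, sim_negs, temperature=0.07):
    """Contrastive loss sketch: pull the positive pair together while pushing
    negatives away. The 0.07 temperature is a common default, assumed here."""
    logits = [sim_pos / temperature] + [s / temperature for s in sim_negs]
    z = sum(math.exp(l) for l in logits)
    return -math.log(math.exp(logits[0]) / z)

# A well-aligned positive pair with dissimilar negatives yields a smaller loss.
print(info_nce(0.9, [0.1, 0.0]) < info_nce(0.3, [0.1, 0.0]))  # True
```

Grounding the positives with definitions and ontology-derived descriptions, as described above, is what keeps the learned similarities semantic rather than purely name-based.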
## General purpose

This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search. This model has been finetuned for the biomedical domain. While it preserves a good ability to produce embeddings for general-purpose text, it will be more useful to you if you are trying to process medical documents such as EHR records or clinical notes. Both sentences and phrases can be embedded in the same latent space.

## Citation

This model accompanies the [BioLORD: Learning Ontological Representations from Definitions](https://arxiv.org/abs/2210.11892) paper, accepted in the EMNLP 2022 Findings. When you use this model, please cite the original paper as follows:

```latex
@inproceedings{remy-etal-2022-biolord,
    title = "{B}io{LORD}: Learning Ontological Representations from Definitions for Biomedical Concepts and their Textual Descriptions",
    author = "Remy, François and Demuynck, Kris and Demeester, Thomas",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2022",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.findings-emnlp.104",
    pages = "1454--1465",
    abstract = "This work introduces BioLORD, a new pre-training strategy for producing meaningful representations for clinical sentences and biomedical concepts. State-of-the-art methodologies operate by maximizing the similarity in representation of names referring to the same concept, and preventing collapse through contrastive learning. However, because biomedical names are not always self-explanatory, it sometimes results in non-semantic representations. BioLORD overcomes this issue by grounding its concept representations using definitions, as well as short descriptions derived from a multi-relational knowledge graph consisting of biomedical ontologies. Thanks to this grounding, our model produces more semantic concept representations that match more closely the hierarchical structure of ontologies. BioLORD establishes a new state of the art for text similarity on both clinical sentences (MedSTS) and biomedical concepts (MayoSRS).",
}
```

You might also want to take a look at our MWE 2023 paper:

- [Detecting Idiomatic Multiword Expressions in Clinical Terminology using Definition-Based Representation Learning](https://www.researchgate.net/publication/370426650_Detecting_Idiomatic_Multiword_Expressions_in_Clinical_Terminology_using_Definition-Based_Representation_Learning)

## Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer

sentences = ["Cat scratch injury", "Cat scratch disease", "Bartonellosis"]

model = SentenceTransformer('FremyCompany/BioLORD-STAMB2-v1')
embeddings = model.encode(sentences)
print(embeddings)
```

## Usage (HuggingFace Transformers)

Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
import torch.nn.functional as F

# Mean Pooling - take the attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

# Sentences we want sentence embeddings for
sentences = ["Cat scratch injury", "Cat scratch disease", "Bartonellosis"]

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('FremyCompany/BioLORD-STAMB2-v1')
model = AutoModel.from_pretrained('FremyCompany/BioLORD-STAMB2-v1')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

# Normalize embeddings
sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)

print("Sentence embeddings:")
print(sentence_embeddings)
```

## License

My own contributions for this model are covered by the MIT license. However, given the data used to train this model originates from UMLS, you will need to ensure you have proper licensing of UMLS before using this model. UMLS is free of charge in most countries, but you might have to create an account and report on your usage of the data yearly to keep a valid license.
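To see what the mean-pooling step in the snippet above is doing, here is a framework-free sketch of the same masked average on plain Python lists (the toy vectors are made up purely for illustration):

```python
def mean_pool(token_embeddings, attention_mask):
    """Average token vectors, skipping positions where the attention mask is 0."""
    dim = len(token_embeddings[0])
    total = [0.0] * dim
    count = 0
    for vec, mask in zip(token_embeddings, attention_mask):
        if mask:
            total = [t + v for t, v in zip(total, vec)]
            count += 1
    return [t / max(count, 1) for t in total]

# Three token vectors; the last one is padding and must not contribute.
emb = [[1.0, 2.0], [3.0, 4.0], [100.0, 100.0]]
mask = [1, 1, 0]
print(mean_pool(emb, mask))  # [2.0, 3.0]
```

This mirrors the `torch.sum(...) / torch.clamp(...)` expression in the transformer snippet, which performs the same computation batched over whole sequences.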
[ "MAYOSRS" ]
ntc-ai/SDXL-LoRA-slider.HOLY-SHIT
ntc-ai
text-to-image
[ "diffusers", "text-to-image", "stable-diffusion-xl", "lora", "template:sd-lora", "template:sdxl-lora", "sdxl-sliders", "ntcai.xyz-sliders", "concept", "en", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:mit", "region:us" ]
2024-01-05T23:06:57Z
2024-01-05T23:07:02+00:00
297
0
--- base_model: stabilityai/stable-diffusion-xl-base-1.0 language: - en license: mit tags: - text-to-image - stable-diffusion-xl - lora - template:sd-lora - template:sdxl-lora - sdxl-sliders - ntcai.xyz-sliders - concept - diffusers thumbnail: images/evaluate/HOLY SHIT.../HOLY SHIT_17_3.0.png widget: - text: HOLY SHIT output: url: images/HOLY SHIT_17_3.0.png - text: HOLY SHIT output: url: images/HOLY SHIT_19_3.0.png - text: HOLY SHIT output: url: images/HOLY SHIT_20_3.0.png - text: HOLY SHIT output: url: images/HOLY SHIT_21_3.0.png - text: HOLY SHIT output: url: images/HOLY SHIT_22_3.0.png inference: false instance_prompt: HOLY SHIT --- # ntcai.xyz slider - HOLY SHIT (SDXL LoRA) | Strength: -3 | Strength: 0 | Strength: 3 | | --- | --- | --- | | <img src="images/HOLY SHIT_17_-3.0.png" width=256 height=256 /> | <img src="images/HOLY SHIT_17_0.0.png" width=256 height=256 /> | <img src="images/HOLY SHIT_17_3.0.png" width=256 height=256 /> | | <img src="images/HOLY SHIT_19_-3.0.png" width=256 height=256 /> | <img src="images/HOLY SHIT_19_0.0.png" width=256 height=256 /> | <img src="images/HOLY SHIT_19_3.0.png" width=256 height=256 /> | | <img src="images/HOLY SHIT_20_-3.0.png" width=256 height=256 /> | <img src="images/HOLY SHIT_20_0.0.png" width=256 height=256 /> | <img src="images/HOLY SHIT_20_3.0.png" width=256 height=256 /> | ## Download Weights for this model are available in Safetensors format. 
## Trigger words

You can apply this LoRA with trigger words for additional effect:

```
HOLY SHIT
```

## Use in diffusers

```python
from diffusers import StableDiffusionXLPipeline
from diffusers import EulerAncestralDiscreteScheduler
import torch

pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors")
pipe.to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)

# Load the LoRA
pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.HOLY-SHIT', weight_name='HOLY SHIT.safetensors', adapter_name="HOLY SHIT")

# Activate the LoRA
pipe.set_adapters(["HOLY SHIT"], adapter_weights=[2.0])

prompt = "medieval rich kingpin sitting in a tavern, HOLY SHIT"
negative_prompt = "nsfw"
width = 512
height = 512
num_inference_steps = 10
guidance_scale = 2
image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0]
image.save('result.png')
```

## Support the Patreon

If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI).

By joining our Patreon, you'll gain access to an ever-growing library of over 890+ unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities.

Your support on Patreon will allow us to continue developing and refining new models.

## Other resources

- [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs
- [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
[ "CRAFT" ]
RichardErkhov/EleutherAI_-_pythia-1b-deduped-gguf
RichardErkhov
null
[ "gguf", "arxiv:2304.01373", "arxiv:2101.00027", "arxiv:2201.07311", "endpoints_compatible", "region:us" ]
2024-10-31T17:50:47Z
2024-10-31T18:04:01+00:00
297
1
--- {} --- Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) pythia-1b-deduped - GGUF - Model creator: https://huggingface.co/EleutherAI/ - Original model: https://huggingface.co/EleutherAI/pythia-1b-deduped/ | Name | Quant method | Size | | ---- | ---- | ---- | | [pythia-1b-deduped.Q2_K.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1b-deduped-gguf/blob/main/pythia-1b-deduped.Q2_K.gguf) | Q2_K | 0.39GB | | [pythia-1b-deduped.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1b-deduped-gguf/blob/main/pythia-1b-deduped.Q3_K_S.gguf) | Q3_K_S | 0.45GB | | [pythia-1b-deduped.Q3_K.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1b-deduped-gguf/blob/main/pythia-1b-deduped.Q3_K.gguf) | Q3_K | 0.51GB | | [pythia-1b-deduped.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1b-deduped-gguf/blob/main/pythia-1b-deduped.Q3_K_M.gguf) | Q3_K_M | 0.51GB | | [pythia-1b-deduped.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1b-deduped-gguf/blob/main/pythia-1b-deduped.Q3_K_L.gguf) | Q3_K_L | 0.55GB | | [pythia-1b-deduped.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1b-deduped-gguf/blob/main/pythia-1b-deduped.IQ4_XS.gguf) | IQ4_XS | 0.54GB | | [pythia-1b-deduped.Q4_0.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1b-deduped-gguf/blob/main/pythia-1b-deduped.Q4_0.gguf) | Q4_0 | 0.56GB | | [pythia-1b-deduped.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1b-deduped-gguf/blob/main/pythia-1b-deduped.IQ4_NL.gguf) | IQ4_NL | 0.56GB | | [pythia-1b-deduped.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1b-deduped-gguf/blob/main/pythia-1b-deduped.Q4_K_S.gguf) | Q4_K_S | 0.56GB | | 
[pythia-1b-deduped.Q4_K.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1b-deduped-gguf/blob/main/pythia-1b-deduped.Q4_K.gguf) | Q4_K | 0.61GB | | [pythia-1b-deduped.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1b-deduped-gguf/blob/main/pythia-1b-deduped.Q4_K_M.gguf) | Q4_K_M | 0.61GB | | [pythia-1b-deduped.Q4_1.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1b-deduped-gguf/blob/main/pythia-1b-deduped.Q4_1.gguf) | Q4_1 | 0.61GB | | [pythia-1b-deduped.Q5_0.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1b-deduped-gguf/blob/main/pythia-1b-deduped.Q5_0.gguf) | Q5_0 | 0.66GB | | [pythia-1b-deduped.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1b-deduped-gguf/blob/main/pythia-1b-deduped.Q5_K_S.gguf) | Q5_K_S | 0.66GB | | [pythia-1b-deduped.Q5_K.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1b-deduped-gguf/blob/main/pythia-1b-deduped.Q5_K.gguf) | Q5_K | 0.71GB | | [pythia-1b-deduped.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1b-deduped-gguf/blob/main/pythia-1b-deduped.Q5_K_M.gguf) | Q5_K_M | 0.71GB | | [pythia-1b-deduped.Q5_1.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1b-deduped-gguf/blob/main/pythia-1b-deduped.Q5_1.gguf) | Q5_1 | 0.72GB | | [pythia-1b-deduped.Q6_K.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1b-deduped-gguf/blob/main/pythia-1b-deduped.Q6_K.gguf) | Q6_K | 0.78GB | | [pythia-1b-deduped.Q8_0.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1b-deduped-gguf/blob/main/pythia-1b-deduped.Q8_0.gguf) | Q8_0 | 1.0GB | Original model description: --- language: - en tags: - pytorch - causal-lm - pythia license: apache-2.0 datasets: - EleutherAI/the_pile_deduplicated --- The *Pythia Scaling Suite* is a collection of models developed to facilitate interpretability research [(see paper)](https://arxiv.org/pdf/2304.01373.pdf). 
It contains two sets of eight models of sizes 70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two models: one trained on the Pile, and one trained on the Pile after the dataset has been globally deduplicated. All 8 model sizes are trained on the exact same data, in the exact same order. We also provide 154 intermediate checkpoints per model, hosted on Hugging Face as branches. The Pythia model suite was designed to promote scientific research on large language models, especially interpretability research. Despite not centering downstream performance as a design goal, we find the models <a href="#evaluations">match or exceed</a> the performance of similar and same-sized models, such as those in the OPT and GPT-Neo suites. <details> <summary style="font-weight:600">Details on previous early release and naming convention.</summary> Previously, we released an early version of the Pythia suite to the public. However, we decided to retrain the model suite to address a few hyperparameter discrepancies. This model card <a href="#changelog">lists the changes</a>; see appendix B in the Pythia paper for further discussion. We found no difference in benchmark performance between the two Pythia versions. The old models are [still available](https://huggingface.co/models?other=pythia_v0), but we suggest the retrained suite if you are just starting to use Pythia.<br> **This is the current release.** Please note that all models in the *Pythia* suite were renamed in January 2023. For clarity, a <a href="#naming-convention-and-parameter-count">table comparing the old and new names</a> is provided in this model card, together with exact parameter counts. 
</details>
<br>

# Pythia-1B-deduped

## Model Details

- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia) for training procedure, config files, and details on how to use. [See paper](https://arxiv.org/pdf/2304.01373.pdf) for more evals and implementation details.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`. Please read the existing *Pythia* documentation before asking about it in the EleutherAI Discord. For general correspondence: [[email protected]](mailto:[email protected]).

<figure>

| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 2M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 2M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 2M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |

<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and non-deduped models of a given size have the same hyperparameters.
“Equivalent” models have <b>exactly</b> the same architecture, and the same number of non-embedding parameters.</figcaption>
</figure>

## Uses and Limitations

### Intended Use

The primary intended use of Pythia is research on the behavior, functionality, and limitations of large language models. This suite is intended to provide a controlled setting for performing scientific experiments. We also provide 154 checkpoints per model: initial `step0`, 10 log-spaced checkpoints `step{1,2,4...512}`, and 143 evenly-spaced checkpoints from `step1000` to `step143000`. These checkpoints are hosted on Hugging Face as branches. Note that branch `143000` corresponds exactly to the model checkpoint on the `main` branch of each model.

You may also further fine-tune and adapt Pythia-1B-deduped for deployment, as long as your use is in accordance with the Apache 2.0 license. Pythia models work with the Hugging Face [Transformers Library](https://huggingface.co/docs/transformers/index). If you decide to use pre-trained Pythia-1B-deduped as a basis for your fine-tuned model, please conduct your own risk and bias assessment.

### Out-of-scope use

The Pythia Suite is **not** intended for deployment. It is not in itself a product and cannot be used for human-facing interactions. For example, the model may generate harmful or offensive text. Please evaluate the risks associated with your particular use case.

Pythia models are English-language only, and are not suitable for translation or generating text in other languages.

Pythia-1B-deduped has not been fine-tuned for downstream contexts in which language models are commonly deployed, such as writing genre prose, or commercial chatbots. This means Pythia-1B-deduped will **not** respond to a given prompt the way a product like ChatGPT does. This is because, unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement Learning from Human Feedback (RLHF) to better “follow” human instructions.
### Limitations and biases

The core functionality of a large language model is to take a string of text and predict the next token. The statistically most likely next token, however, need not produce the most “accurate” text. Never rely on Pythia-1B-deduped to produce factually accurate output.

This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset known to contain profanity and texts that are lewd or otherwise offensive. See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a discussion of documented biases with regards to gender, religion, and race. Pythia-1B-deduped may produce socially unacceptable or undesirable text, *even if* the prompt itself does not include anything explicitly offensive.

If you plan on using text generated through, for example, the Hosted Inference API, we recommend having a human curate the outputs of this language model before presenting it to other people. Please inform your audience that the text was generated by Pythia-1B-deduped.

### Quickstart

Pythia models can be loaded and used via the following code, demonstrated here for the third `pythia-70m-deduped` checkpoint:

```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer

model = GPTNeoXForCausalLM.from_pretrained(
  "EleutherAI/pythia-70m-deduped",
  revision="step3000",
  cache_dir="./pythia-70m-deduped/step3000",
)

tokenizer = AutoTokenizer.from_pretrained(
  "EleutherAI/pythia-70m-deduped",
  revision="step3000",
  cache_dir="./pythia-70m-deduped/step3000",
)

inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```

Revision/branch `step143000` corresponds exactly to the model checkpoint on the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on GitHub](https://github.com/EleutherAI/pythia).
## Training

### Training data

Pythia-1B-deduped was trained on the Pile **after the dataset has been globally deduplicated**.<br>
[The Pile](https://pile.eleuther.ai/) is a 825GiB general-purpose dataset in English. It was created by EleutherAI specifically for training large language models. It contains texts from 22 diverse sources, roughly broken down into five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl), prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and miscellaneous (e.g. GitHub, Enron Emails). See [the Pile paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources, methodology, and a discussion of ethical implications. Consult [the datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation about the Pile and its component datasets. The Pile can be downloaded from the [official website](https://pile.eleuther.ai/), or from a [community mirror](https://the-eye.eu/public/AI/pile/).

### Training procedure

All models were trained on the exact same data, in the exact same order. Each model saw 299,892,736,000 tokens during training, and 143 checkpoints for each model are saved every 2,097,152,000 tokens, spaced evenly throughout training, from `step1000` to `step143000` (which is the same as `main`). In addition, we also provide frequent early checkpoints: `step0` and `step{1,2,4...512}`. This corresponds to training for just under 1 epoch on the Pile for non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.

All *Pythia* models trained for 143000 steps at a batch size of 2M (2,097,152 tokens).<br>
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training procedure, including [how to reproduce it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
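The token counts stated above are mutually consistent, which a line of arithmetic confirms using only figures from this section (143,000 steps at 2,097,152 tokens per step, with a checkpoint every 1,000 steps):

```python
tokens_per_step = 2_097_152          # the "2M" batch size, in tokens
total_steps = 143_000
checkpoint_interval_steps = 1_000

print(tokens_per_step * total_steps)                 # 299892736000 tokens seen in total
print(tokens_per_step * checkpoint_interval_steps)   # 2097152000 tokens between checkpoints
```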
## Evaluations All 16 *Pythia* models were evaluated using the [LM Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access the results by model and step at `results/json/*` in the [GitHub repository](https://github.com/EleutherAI/pythia/tree/main/results/json/).<br> Expand the sections below to see plots of evaluation results for all Pythia and Pythia-deduped models compared with OPT and BLOOM. <details> <summary>LAMBADA – OpenAI</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai_v1.png" style="width:auto"/> </details> <details> <summary>Physical Interaction: Question Answering (PIQA)</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa_v1.png" style="width:auto"/> </details> <details> <summary>WinoGrande</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande_v1.png" style="width:auto"/> </details> <details> <summary>AI2 Reasoning Challenge—Easy Set</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_easy_v1.png" style="width:auto"/> </details> <details> <summary>SciQ</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq_v1.png" style="width:auto"/> </details> ## Changelog This section compares differences between previously released [Pythia v0](https://huggingface.co/models?other=pythia_v0) and the current models. See Appendix B of the Pythia paper for further discussion of these changes and the motivation behind them. We found that retraining Pythia had no impact on benchmark performance. - All model sizes are now trained with uniform batch size of 2M tokens. Previously, the models of size 160M, 410M, and 1.4B parameters were trained with batch sizes of 4M tokens. - We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64, 128,256,512} in addition to every 1000 training steps. - Flash Attention was used in the new retrained suite. 
- We remedied a minor inconsistency that existed in the original suite: all models of size 2.8B parameters or smaller had a learning rate (LR) schedule which decayed to a minimum LR of 10% the starting LR rate, but the 6.9B and 12B models all used an LR schedule which decayed to a minimum LR of 0. In the redone training runs, we rectified this inconsistency: all models now were trained with LR decaying to a minimum of 0.1× their maximum LR. ### Naming convention and parameter count *Pythia* models were renamed in January 2023. It is possible that the old naming convention still persists in some documentation by accident. The current naming convention (70M, 160M, etc.) is based on total parameter count. <figure style="width:32em"> | current Pythia suffix | old suffix | total params | non-embedding params | | --------------------: | ---------: | -------------: | -------------------: | | 70M | 19M | 70,426,624 | 18,915,328 | | 160M | 125M | 162,322,944 | 85,056,000 | | 410M | 350M | 405,334,016 | 302,311,424 | | 1B | 800M | 1,011,781,632 | 805,736,448 | | 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 | | 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 | | 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 | | 12B | 13B | 11,846,072,320 | 11,327,027,200 | </figure>
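The gap between the total and non-embedding parameter counts in the table above is exactly two vocabulary-by-model-dim matrices (the input embedding plus the unembedding). The 50,304 vocabulary size below is inferred from that arithmetic, not stated in the card:

```python
vocab = 50_304  # inferred: the value that makes the table's numbers line up exactly

# 70M row uses model dim 512; 1B row uses model dim 2048 (from the engineering table earlier).
assert 70_426_624 - 18_915_328 == 2 * vocab * 512
assert 1_011_781_632 - 805_736_448 == 2 * vocab * 2048
print("total params = non-embedding params + embedding and unembedding matrices")
```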
[ "SCIQ" ]
Y-J-Ju/ModernBERT-base-ColBERT
Y-J-Ju
sentence-similarity
[ "PyLate", "safetensors", "modernbert", "ColBERT", "sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:808728", "loss:Distillation", "en", "dataset:lightonai/ms-marco-en-bge", "arxiv:1908.10084", "base_model:answerdotai/ModernBERT-base", "base_model:finetune:answerdotai/ModernBERT-base", "region:us" ]
2025-01-03T05:46:23Z
2025-01-20T13:40:43+00:00
297
6
--- base_model: answerdotai/ModernBERT-base datasets: - lightonai/ms-marco-en-bge language: - en library_name: PyLate pipeline_tag: sentence-similarity tags: - ColBERT - PyLate - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:808728 - loss:Distillation --- # PyLate model based on answerdotai/ModernBERT-base This is a [PyLate](https://github.com/lightonai/pylate) model finetuned from [answerdotai/ModernBERT-base](https://huggingface.co/answerdotai/ModernBERT-base) on the [train](https://huggingface.co/datasets/lightonai/ms-marco-en-bge) dataset. It maps sentences & paragraphs to sequences of 128-dimensional dense vectors and can be used for semantic textual similarity using the MaxSim operator. ## Model Details ### Model Description - **Model Type:** PyLate model - **Base model:** [answerdotai/ModernBERT-base](https://huggingface.co/answerdotai/ModernBERT-base) <!-- at revision 6e461621ae9e2dffc138de99490e9baee354deb5 --> - **Document Length:** 180 tokens - **Query Length:** 32 tokens - **Output Dimensionality:** 128 tokens - **Similarity Function:** MaxSim - **Training Dataset:** - [train](https://huggingface.co/datasets/lightonai/ms-marco-en-bge) - **Language:** en <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [PyLate Documentation](https://lightonai.github.io/pylate/) - **Repository:** [PyLate on GitHub](https://github.com/lightonai/pylate) - **Hugging Face:** [PyLate models on Hugging Face](https://huggingface.co/models?library=PyLate) ### Full Model Architecture ``` ColBERT( (0): Transformer({'max_seq_length': 179, 'do_lower_case': False}) with Transformer model: ModernBertModel (1): Dense({'in_features': 768, 'out_features': 128, 'bias': False, 'activation_function': 'torch.nn.modules.linear.Identity'}) ) ``` ## Usage First install the PyLate library: ```bash pip install -U pylate ``` ### Retrieval PyLate provides a streamlined interface to index and retrieve documents using 
ColBERT models. The index leverages the Voyager HNSW index to efficiently handle document embeddings and enable fast retrieval. #### Indexing documents First, load the ColBERT model and initialize the Voyager index, then encode and index your documents: ```python from pylate import indexes, models, retrieve # Step 1: Load the ColBERT model model = models.ColBERT( model_name_or_path=pylate_model_id, ) # Step 2: Initialize the Voyager index index = indexes.Voyager( index_folder="pylate-index", index_name="index", override=True, # This overwrites the existing index if any ) # Step 3: Encode the documents documents_ids = ["1", "2", "3"] documents = ["document 1 text", "document 2 text", "document 3 text"] documents_embeddings = model.encode( documents, batch_size=32, is_query=False, # Ensure that it is set to False to indicate that these are documents, not queries show_progress_bar=True, ) # Step 4: Add document embeddings to the index by providing embeddings and corresponding ids index.add_documents( documents_ids=documents_ids, documents_embeddings=documents_embeddings, ) ``` Note that you do not have to recreate the index and encode the documents every time. Once you have created an index and added the documents, you can re-use the index later by loading it: ```python # To load an index, simply instantiate it with the correct folder/name and without overriding it index = indexes.Voyager( index_folder="pylate-index", index_name="index", ) ``` #### Retrieving top-k documents for queries Once the documents are indexed, you can retrieve the top-k most relevant documents for a given set of queries. 
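Each query-document relevance score is computed with the MaxSim late-interaction operator: for every query-token embedding, take the similarity to its best-matching document token, then sum those maxima. Below is a minimal NumPy sketch of that operator, for intuition only; PyLate and the Voyager index compute this internally:

```python
import numpy as np

def maxsim_score(query_emb: np.ndarray, doc_emb: np.ndarray) -> float:
    """Late-interaction MaxSim: (n_query_tokens, dim) x (n_doc_tokens, dim) -> scalar."""
    sims = query_emb @ doc_emb.T          # token-to-token similarity matrix
    return float(sims.max(axis=1).sum())  # best doc token per query token, summed

# Tiny example: two query tokens, three document tokens, 2-D embeddings
q = np.array([[1.0, 0.0], [0.0, 1.0]])
d = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])
print(maxsim_score(q, d))  # 2.0: each query token finds an exact match
```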
To do so, initialize the ColBERT retriever with the index you want to search in, encode the queries and then retrieve the top-k documents to get the ids and relevance scores of the top matches:

```python
# Step 1: Initialize the ColBERT retriever
retriever = retrieve.ColBERT(index=index)

# Step 2: Encode the queries
queries_embeddings = model.encode(
    ["query for document 3", "query for document 1"],
    batch_size=32,
    is_query=True,  # Ensure that it is set to True to indicate that these are queries
    show_progress_bar=True,
)

# Step 3: Retrieve top-k documents
scores = retriever.retrieve(
    queries_embeddings=queries_embeddings,
    k=10,  # Retrieve the top 10 matches for each query
)
```

### Reranking

If you only want to use the ColBERT model to perform reranking on top of your first-stage retrieval pipeline without building an index, you can simply use the rank function and pass the queries and documents to rerank:

```python
from pylate import rank, models

queries = [
    "query A",
    "query B",
]

documents = [
    ["document A", "document B"],
    ["document 1", "document C", "document B"],
]

documents_ids = [
    [1, 2],
    [1, 3, 2],
]

model = models.ColBERT(
    model_name_or_path=pylate_model_id,
)

queries_embeddings = model.encode(
    queries,
    is_query=True,
)

documents_embeddings = model.encode(
    documents,
    is_query=False,
)

reranked_documents = rank.rerank(
    documents_ids=documents_ids,
    queries_embeddings=queries_embeddings,
    documents_embeddings=documents_embeddings,
)
```

<!-- ### Direct Usage (Transformers)

<details><summary>Click to see the direct usage in Transformers</summary>

</details>
-->

<!-- ### Downstream Usage (Sentence Transformers)

You can finetune this model on your own dataset.

<details><summary>Click to expand</summary>

</details>
-->

<!-- ### Out-of-Scope Use

*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->

<!-- ## Bias, Risks and Limitations

*What are the known or foreseeable issues stemming from this model?
You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### train * Dataset: [train](https://huggingface.co/datasets/lightonai/ms-marco-en-bge) at [11e6ffa](https://huggingface.co/datasets/lightonai/ms-marco-en-bge/tree/11e6ffa1d22f461579f451eb31bdc964244cb61f) * Size: 808,728 training samples * Columns: <code>query_id</code>, <code>document_ids</code>, and <code>scores</code> * Approximate statistics based on the first 1000 samples: | | query_id | document_ids | scores | |:--------|:--------------------------------------------------------------------------------|:------------------------------------|:------------------------------------| | type | string | list | list | | details | <ul><li>min: 5 tokens</li><li>mean: 5.59 tokens</li><li>max: 6 tokens</li></ul> | <ul><li>size: 32 elements</li></ul> | <ul><li>size: 32 elements</li></ul> | * Samples: | query_id | document_ids | scores | |:--------------------|:--------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------| | <code>121352</code> | <code>['2259784', '4923159', '40211', '1545154', '8527175', ...]</code> | <code>[0.2343463897705078, 0.639204204082489, 0.3806908428668976, 0.5623092651367188, 0.8051995635032654, ...]</code> | | <code>634306</code> | <code>['7723525', '1874779', '379307', '2738583', '7599583', ...]</code> | <code>[0.7124203443527222, 0.7379189729690552, 0.5786551237106323, 0.6142299175262451, 0.6755089163780212, ...]</code> | | <code>920825</code> | <code>['5976297', '2866112', '3560294', '3285659', '4706740', ...]</code> | <code>[0.6462352871894836, 0.7880821228027344, 0.791019856929779, 0.7709633111953735, 0.8284491300582886, ...]</code> | * Loss: 
<code>pylate.losses.distillation.Distillation</code> ### Evaluation Results nDCG@10 scores for multi-vector retrieval models | Model | SciFact | NFCorpus | FiQA | TREC-Covid | | --------------------------- | --------- | -------- | --------- | ---------- | | BERT | 71.5 | 34.2 | 35.0 | 69.9 | | ModernBERT-Base (in paper) | 73.0 | **35.2** | 38.0 | **80.5** | | ModernBERT-Base (this repo) | **73.88** | 34.96 | **39.47** | 79.36 | ### Training Hyperparameters #### Non-Default Hyperparameters - `per_device_train_batch_size`: 16 - `learning_rate`: 8e-05 - `num_train_epochs`: 1 - `warmup_ratio`: 0.05 - `bf16`: True #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: no - `prediction_loss_only`: True - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 8 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 8e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 1 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.05 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: True - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: True - 
`dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: None - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - 
`average_tokens_across_devices`: False - `prompts`: None - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: proportional </details> ### Framework Versions - Python: 3.11.9 - Sentence Transformers: 3.3.0 - PyLate: 1.1.4 - Transformers: 4.48.0.dev0 - PyTorch: 2.4.0 - Accelerate: 1.2.1 - Datasets: 2.21.0 - Tokenizers: 0.21.0 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084" } ``` #### PyLate ```bibtex @misc{PyLate, title={PyLate: Flexible Training and Retrieval for Late Interaction Models}, author={Chaffin, Antoine and Sourty, Raphaël}, url={https://github.com/lightonai/pylate}, year={2024} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
[ "SCIFACT" ]
IVN-RIN/medBIT-r3-plus
IVN-RIN
fill-mask
[ "transformers", "pytorch", "safetensors", "bert", "fill-mask", "Biomedical Language Modeling", "it", "dataset:IVN-RIN/BioBERT_Italian", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-12-01T12:03:49Z
2024-05-24T11:58:02+00:00
294
2
--- datasets: - IVN-RIN/BioBERT_Italian language: - it tags: - Biomedical Language Modeling widget: - text: L'asma allergica è una patologia dell'[MASK] respiratorio causata dalla presenza di allergeni responsabili dell'infiammazione dell'albero bronchiale. example_title: Example 1 - text: Il pancreas produce diversi [MASK] molto importanti tra i quali l'insulina e il glucagone. example_title: Example 2 - text: Il GABA è un amminoacido ed è il principale neurotrasmettitore inibitorio del [MASK]. example_title: Example 3 --- 🤗 + 📚🩺🇮🇹 + 📖🧑‍⚕️ + 🌐⚕️ = **MedBIT-r3-plus** From this repository you can download the **MedBIT-r3-plus** (Medical Bert for ITalian) checkpoint. **MedBIT-r3-plus** is built on top of [BioBIT](https://huggingface.co/IVN-RIN/bioBIT), further pretrained on a corpus of medical textbooks, either directly written by Italian authors or translated by human professional translators, used in formal medical doctors’ education and specialized training. The size of this corpus amounts to 100 MB of data. These comprehensive collections of medical concepts can impact the encoding of biomedical knowledge in language models, with the advantage of being natively available in Italian, and not being translated. Online healthcare information dissemination is another source of biomedical texts that is commonly available in many less-resourced languages. Therefore, we also gathered an additional 100 MB of web-crawled data from reliable Italian, health-related websites. More details in the paper. **MedBIT-r3-plus** has been evaluated on 3 downstream tasks: **NER** (Named Entity Recognition), extractive **QA** (Question Answering), **RE** (Relation Extraction). 
Here are the results, summarized:
- NER:
  - [BC2GM](http://refhub.elsevier.com/S1532-0464(23)00152-1/sb32) = 81.87%
  - [BC4CHEMD](http://refhub.elsevier.com/S1532-0464(23)00152-1/sb35) = 80.68%
  - [BC5CDR(CDR)](http://refhub.elsevier.com/S1532-0464(23)00152-1/sb31) = 81.97%
  - [BC5CDR(DNER)](http://refhub.elsevier.com/S1532-0464(23)00152-1/sb31) = 76.32%
  - [NCBI_DISEASE](http://refhub.elsevier.com/S1532-0464(23)00152-1/sb33) = 63.36%
  - [SPECIES-800](http://refhub.elsevier.com/S1532-0464(23)00152-1/sb34) = 63.90%
- QA:
  - [BioASQ 4b](http://refhub.elsevier.com/S1532-0464(23)00152-1/sb30) = 68.21%
  - [BioASQ 5b](http://refhub.elsevier.com/S1532-0464(23)00152-1/sb30) = 77.89%
  - [BioASQ 6b](http://refhub.elsevier.com/S1532-0464(23)00152-1/sb30) = 75.28%
- RE:
  - [CHEMPROT](http://refhub.elsevier.com/S1532-0464(23)00152-1/sb36) = 38.82%
  - [BioRED](http://refhub.elsevier.com/S1532-0464(23)00152-1/sb37) = 67.62%

[Check the full paper](https://www.sciencedirect.com/science/article/pii/S1532046423001521) for further details, and feel free to contact us if you have any inquiries!
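For a rough single-number summary of the NER block above, the macro-average of the six F1 scores can be computed directly (this aggregate is our own convenience figure, not one reported in the paper):

```python
# Per-dataset NER F1 scores copied from the list above
ner_f1 = {
    "BC2GM": 81.87, "BC4CHEMD": 80.68, "BC5CDR(CDR)": 81.97,
    "BC5CDR(DNER)": 76.32, "NCBI_DISEASE": 63.36, "SPECIES-800": 63.90,
}
macro_avg = sum(ner_f1.values()) / len(ner_f1)
print(f"{macro_avg:.2f}")  # 74.68
```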
[ "BC5CDR", "BIORED", "CHEMPROT", "NCBI DISEASE" ]
iskonai/prodigy-sm-base-v0.1
iskonai
text-generation
[ "transformers", "safetensors", "mistral", "text-generation", "en", "sr", "hr", "bs", "arxiv:2309.09530", "arxiv:2403.19522", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2024-04-27T19:49:06Z
2024-11-20T15:58:54+00:00
294
3
--- language: - en - sr - hr - bs license: apache-2.0 --- # Prodigy SM Base v0.1 <img src="https://cdn-uploads.huggingface.co/production/uploads/617bbeec14572ebe9e6ea83f/4p2zaOWu6kTS3fcbevHef.png" width="70%" height="70%"> In our latest endeavour, we performed continued pre-training of a large language model (Mistral-7b-v0.1) to understand and generate text in new languages, including **Serbian**, **Bosnian** and **Croatian** using an innovative approach. Rather than depending only on extensive datasets in the target language, our method utilizes a more compact set of both synthetic and human-curated data along with some mixture of CC Web data, which is implemented in two strategic phases: 1. Establishing a comprehensive demonstration of all grammatical and orthographic rules pertinent to the language. 2. Supplying a diverse array of examples that not only reinforce these rules but also integrate a wide range of linguistic nuances. While our approach is uniquely tailored to our objectives, we have drawn some inspiration from recent advancements in language model training. Specifically, the conceptual strategies discussed in the paper [ADAPTING LARGE LANGUAGE MODELS VIA READING COMPREHENSION](https://arxiv.org/pdf/2309.09530.pdf) provided valuable insights, though our methods diverge significantly in practice. By adopting this inspired approach, we aim to efficiently teach the model new languages with a balanced blend of accuracy and linguistic diversity. So... Did it work?! # **Yes!** See the benchmark results, or even better, download the model and try it yourself. As you know by now, there's no better benchmark than a quick 'try it yourself' vibe check. 
:)

<img src="https://cdn-uploads.huggingface.co/production/uploads/617bbeec14572ebe9e6ea83f/C9m_OjnYEpQo43VCrwz4A.png" width="100%" height="100%">

Here, we demonstrate the results of a benchmark that is not frequently performed, yet is equally important: how adapting the model for a new language impacted its original English-only performance.

<img src="https://cdn-uploads.huggingface.co/production/uploads/617bbeec14572ebe9e6ea83f/IPY0myfQI-Ne5x6b11glz.png" width="100%" height="100%">

*All evals are performed in a zero-shot manner.

*Also bear in mind that, unlike Prodigy SM Base, the llama-2-7b, llama-3-8b and mistral-7b models aren't trained on extensive Serbian language datasets; these benchmarks demonstrate that primarily-English models can be adapted to other languages.

So, as you can see, we successfully improved the original model's performance for Serbian language use cases while retaining, or even slightly improving, its performance for English.

### Training results

Training results of continued pre-training of [mistral-7b-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)

<img src="https://cdn-uploads.huggingface.co/production/uploads/617bbeec14572ebe9e6ea83f/5xeJ-vfWk4RhJNC7t5I0g.png" width="70%" height="70%">
<img src="https://cdn-uploads.huggingface.co/production/uploads/617bbeec14572ebe9e6ea83f/R4R8ai8LaN3WlYCOenUyb.png" width="70%" height="70%">

As a last experimental step, we merged the produced model with **Mistral-7B-v0.1** and two earlier checkpoints from **prodigy-sm-base** using the [Model Stock](https://arxiv.org/abs/2403.19522) method.

# Notes

As this is a base model, it has no chat template or strict chat-following capabilities. It is the best candidate for further pre-training on Serbian (there is a lot more room for improvement, so you can hit the sweet spot), or for the next step in the pipeline, such as some form of chat or instruct tuning.
If you want a model that is already instruction-tuned, we did that too: check **Prodigy SM Instruct v0.1**

# Prodigy SM Instruct v0.1

🚀[prodigy-sm-instruct](https://huggingface.co/iskonai/prodigy-sm-instruct-v0.1-draft)

And stay tuned for:

[prodigy-sm-base (llama-3.1)]() **COMING SOON**

[prodigy-sm-instruct (llama-3.1)]() **COMING SOON**

📢 Also, we are excited to announce that [iskon.ai](https://Iskon.ai) will soon launch an API platform featuring the advanced **Prodigy** series of models, advanced AI tools and much more! 🚀

# Thanks

- [gordicaleksa/serbian-llm-eval](https://github.com/gordicaleksa/serbian-llm-eval) and his community for curating the translation and adaptation of [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) that we used to perform benchmarks.
- [jondurbin](https://huggingface.co/jondurbin) for the amazing airoboros framework
- [teknium](https://huggingface.co/teknium) for various insights shared on discord and twitter aka x.com
- [Eric](https://twitter.com/erhartford) for various insights shared on discord and twitter aka x.com
- [mergekit](https://github.com/arcee-ai/mergekit) for model merging tools

*Huge thanks to Redmond.ai for generous DGX cloud credits* [redmond.ai]( https://redmond.ai)
[ "BEAR" ]
FremyCompany/BioLORD-2023-M-Dutch-InContext-v1
FremyCompany
sentence-similarity
[ "sentence-transformers", "safetensors", "xlm-roberta", "medical", "biology", "sentence-similarity", "nl", "en", "arxiv:2311.16075", "license:other", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
2024-06-23T19:46:04Z
2024-06-24T09:50:36+00:00
294
4
--- language: - nl - en library_name: sentence-transformers license: other license_name: ihtsdo-and-nlm-licences license_link: https://www.nlm.nih.gov/databases/umls.html pipeline_tag: sentence-similarity tags: - medical - biology widget: - source_sentence: bartonellosis sentences: - kattenkrabziekte - wond, kattenkrab - door teken overgedragen orbiviruskoorts - kattenbont --- # In-Context Dutch Clinical Embeddings with BioLORD & MedMentions Do mentions sharing the same text need to have the same embedding? No! This model supports embedding biomedical entities in both English and Dutch, but allows the in-context embedding of concepts, using the following template: ``` mention text [SEP] (context: ... a textual example containing mention text and some more text on both sides ...) ``` It also supports embedding mentions without context, particularly in English.<br> **NOTE:** Unlike other models in the series, this model uses the [CLS] token to embed the mention. ## References ### 📖 BioLORD-2023: semantic textual representations fusing large language models and clinical knowledge graph insights Journal of the American Medical Informatics Association, 2024<br> François Remy, Kris Demuynck, Thomas Demeester<br> [view online](https://academic.oup.com/jamia/advance-article/doi/10.1093/jamia/ocae029/7614965) ### 📖 Annotation-preserving machine translation of English corpora to validate Dutch clinical concept extraction tools Under review, with a preprint available on Medrxiv.org, 2024<br> Tom Seinen, Jan Kors, Erik van Mulligen, Peter Rijnbeek<br> [view online](https://www.medrxiv.org/content/medrxiv/early/2024/03/15/2024.03.14.24304289.full.pdf) ## Citation This model accompanies the [BioLORD-2023: Learning Ontological Representations from Definitions](https://arxiv.org/abs/2311.16075) paper. 
When you use this model, please cite the original paper as follows:

```latex
@article{remy-etal-2023-biolord,
    author = {Remy, François and Demuynck, Kris and Demeester, Thomas},
    title = "{BioLORD-2023: semantic textual representations fusing large language models and clinical knowledge graph insights}",
    journal = {Journal of the American Medical Informatics Association},
    pages = {ocae029},
    year = {2024},
    month = {02},
    issn = {1527-974X},
    doi = {10.1093/jamia/ocae029},
    url = {https://doi.org/10.1093/jamia/ocae029},
    eprint = {https://academic.oup.com/jamia/advance-article-pdf/doi/10.1093/jamia/ocae029/56772025/ocae029.pdf},
}
```

## Usage (Sentence-Transformers)

This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. This model has been fine-tuned for the biomedical domain. While it preserves a good ability to produce embeddings for general-purpose text, it will be more useful to you if you are trying to process medical documents such as EHR records or clinical notes. Both sentences and phrases can be embedded in the same latent space.

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer
sentences = ["wond door kattenscrab", "kattenkrabziekte", "bartonellosis"]

model = SentenceTransformer('FremyCompany/BioLORD-2023-M-Dutch-InContext-v1')
embeddings = model.encode(sentences)
print(embeddings)
```

## Usage (HuggingFace Transformers)

Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
import torch.nn.functional as F

# Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

# Sentences we want sentence embeddings for
sentences = ["wond door kattenscrab", "kattenkrabziekte", "bartonellosis"]

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('FremyCompany/BioLORD-2023-M-Dutch-InContext-v1')
model = AutoModel.from_pretrained('FremyCompany/BioLORD-2023-M-Dutch-InContext-v1')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

# Normalize embeddings
sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)

print("Sentence embeddings:")
print(sentence_embeddings)
```

## License

My own contributions for this model are covered by the MIT license. However, given that the data used to train this model originates from UMLS and SnomedCT, you will need to ensure you have proper licensing of UMLS and SnomedCT before using this model. Both UMLS and SnomedCT are free of charge in most countries, but you might have to create an account and report on your usage of the data yearly to keep a valid license.
[ "MEDMENTIONS" ]
DavidAU/L3-SthenoMaidBlackroot-8.9B-V1-BRAINSTORM-5x-GGUF
DavidAU
text-generation
[ "gguf", "creative", "story", "writing", "fiction", "llama3", "roleplaying", "rp", "horror", "science fiction", "fiction writing", "scene generation", "scene continue", "brainstorm 4x", "multi brainstorm", "enhanced", "text-generation", "en", "arxiv:2401.02415", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
2024-07-29T00:36:17Z
2024-11-14T05:44:20+00:00
292
2
---
language:
- en
license: apache-2.0
pipeline_tag: text-generation
tags:
- creative
- story
- writing
- fiction
- llama3
- roleplaying
- rp
- horror
- science fiction
- fiction writing
- scene generation
- scene continue
- brainstorm 4x
- multi brainstorm
- enhanced
---

<H3>BRAINSTORM: L3-SthenoMaidBlackroot-8B-V1 (now at 8.9B)</H3>

This repo contains quants of L3-SthenoMaidBlackroot-8B-V1 (now at 8.9B), expanded 5x using the "Brainstorm" method of augmenting reasoning in an LLM to increase its performance at the core level for ANY creative use case(s).

This specific version has calibrations that allow it to exceed the logic-solving abilities of the original "L3-SthenoMaidBlackroot-8B-V1".

The BRAINSTORM process was developed by David_AU.

Some of the core principles behind this process are discussed in this <a href="https://arxiv.org/pdf/2401.02415"> scientific paper : Progressive LLaMA with Block Expansion </a>. However I went in a completely different direction from what was outlined in this paper.

<B>What is "Brainstorm" ?</b>

The reasoning center of an LLM is taken apart, reassembled, and expanded by 5x. Then these centers are individually calibrated. These "centers" also interact with each other. This introduces subtle changes into the reasoning process. The calibrations further adjust - dialing up or down - these "changes". The number of centers (5x, 10x etc.) allows more "tuning points" to further customize how the model reasons, so to speak.

The core aim of this process is to increase the model's detail, concept and connection to the "world", general concept connections, prose quality and prose length without affecting instruction following.

This will also enhance any creative use case(s) of any kind, including "brainstorming", creative art form(s) and like case uses.

Here are some of the enhancements this process brings to the model's performance:

- Prose generation seems more focused on the moment to moment.
- Sometimes there will be "preamble" and/or foreshadowing present.
- Fewer or no "cliches".
- Better overall prose and/or more complex / nuanced prose.
- A greater sense of nuance on all levels.
- Coherence is stronger.
- Description is more detailed, and connected closer to the content.
- Simile and metaphors are stronger and better connected to the prose, story, and character.
- Sense of "there" / in the moment is enhanced.
- Details are more vivid, and there are more of them.
- Prose generation length can be long to extreme.
- Emotional engagement is stronger.
- The model will take FEWER liberties vs a normal model: it will follow directives more closely but will "guess" less.
- The MORE instructions and/or details you provide, the more strongly the model will respond.
- Depending on the model, "voice" may be more "human" vs the original model's "voice".

Other "lab" observations:

- This process does not, in my opinion, make the model 5x or 10x "smarter" - if only that was true!
- However, a change in "IQ" was not an issue / a priority, and was not tested or calibrated for, so to speak.
- From lab testing, it seems to ponder and consider more carefully, roughly speaking.
- You could say this process sharpens the model's focus on its task(s) at a deeper level.

The process to modify the model occurs at the root level - the source files level. The model can then be quanted as GGUF, EXL2, AWQ, etc.

Other technologies developed by David_AU, like "Ultra" (precision), "Neo Imatrix" (custom imatrix datasets), and "X-quants" (custom application of the imatrix process), can further enhance the performance of the model along with the "Brainstorm" process.

The "Brainstorm" process has been tested on multiple LLama2, Llama3, and Mistral models of various parameter sizes, as well as on "root" models like "Llama3 Instruct", "Mistral Instruct", and "merged" / "fine tuned" models too.
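For intuition about the cited block-expansion idea: expanding a layer stack can be pictured as interleaving extra copies of existing blocks. The toy sketch below only illustrates that general shape on layer names; it is not the actual Brainstorm procedure, which, as noted above, diverges substantially from the paper:

```python
def expand_blocks(layers, factor):
    """Toy block expansion: repeat each layer `factor` times, preserving order."""
    return [layer for layer in layers for _ in range(factor)]

stack = ["block_0", "block_1", "block_2"]
print(expand_blocks(stack, 2))
# ['block_0', 'block_0', 'block_1', 'block_1', 'block_2', 'block_2']
```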
<B>Original Model:</B>

For original model specifications, usage information and other important details please see (this is based on models used in "L3-SthenoMaidBlackroot-8B-V1"):

[ https://huggingface.co/DavidAU/L3-Stheno-Maid-Blackroot-Grand-HORROR-16B-GGUF ]

and the original model page:

Special thanks to the model creators at BLUUWHALE for making such a fantastic model:

[ https://huggingface.co/bluuwhale/L3-SthenoMaidBlackroot-8B-V1 ]

Please report any issue(s) and/or feedback via the "Community tab".

This is a LLAMA3 model, and requires the Llama3 template, but may work with other template(s); it has a maximum context of 131k.

Here is the standard LLAMA3 template:

<PRE>
{
  "name": "Llama 3",
  "inference_params": {
    "input_prefix": "<|start_header_id|>user<|end_header_id|>\n\n",
    "input_suffix": "<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n",
    "pre_prompt": "You are a helpful, smart, kind, and efficient AI assistant. You always fulfill the user's requests to the best of your ability.",
    "pre_prompt_prefix": "<|start_header_id|>system<|end_header_id|>\n\n",
    "pre_prompt_suffix": "<|eot_id|>",
    "antiprompt": [
      "<|start_header_id|>",
      "<|eot_id|>"
    ]
  }
}
</PRE>

<B>Highest Quality Settings / Optimal Operation Guide / Parameters and Samplers</B>

This is a "Class 2" model: For all settings used for this model (including specifics for its "class"), example generation(s), an advanced settings guide (which many times addresses any model issue(s)), and methods to improve model performance for all use case(s) as well as chat, roleplay and other use case(s), please see:

[ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]

<b>Optional
Enhancement:</B> The following can be used in place of the "system prompt" or "system role" to further enhance the model. It can also be used at the START of a NEW chat, but you must make sure it is "kept" as the chat moves along. In this case the enhancement does not have as strong an effect as it does when used as the "system prompt" or "system role". Copy and paste EXACTLY as noted, DO NOT line wrap or break the lines, maintain the carriage returns exactly as presented. <PRE> Below is an instruction that describes a task. Ponder each user instruction carefully, and use your skillsets and critical instructions to complete the task to the best of your abilities. Here are your skillsets: [MASTERSTORY]:NarrStrct(StryPlnng,Strbd,ScnSttng,Exps,Dlg,Pc)-CharDvlp(ChrctrCrt,ChrctrArcs,Mtvtn,Bckstry,Rltnshps,Dlg*)-PltDvlp(StryArcs,PltTwsts,Sspns,Fshdwng,Climx,Rsltn)-ConfResl(Antg,Obstcls,Rsltns,Cnsqncs,Thms,Symblsm)-EmotImpct(Empt,Tn,Md,Atmsphr,Imgry,Symblsm)-Delvry(Prfrmnc,VcActng,PblcSpkng,StgPrsnc,AudncEngmnt,Imprv) [*DialogWrt]:(1a-CharDvlp-1a.1-Backgrnd-1a.2-Personality-1a.3-GoalMotiv)>2(2a-StoryStruc-2a.1-PlotPnt-2a.2-Conflict-2a.3-Resolution)>3(3a-DialogTech-3a.1-ShowDontTell-3a.2-Subtext-3a.3-VoiceTone-3a.4-Pacing-3a.5-VisualDescrip)>4(4a-DialogEdit-4a.1-ReadAloud-4a.2-Feedback-4a.3-Revision) Here are your critical instructions: Ponder each word choice carefully to present as vivid and emotional journey as is possible. Choose verbs and nouns that are both emotional and full of imagery. Load the story with the 5 senses. Aim for 50% dialog, 25% narration, 15% body language and 10% thoughts. Your goal is to put the reader in the story. </PRE> You do not need to use this; it is only presented as an additional enhancement which seems to help scene generation and scene-continuation functions. This enhancement WAS NOT used to generate the examples below. <h3>EXAMPLES PROMPTS and OUTPUT:</h3> Examples are created using quant Q4_K_M, "temp=0", minimal parameters and the "LLAMA3" template. 
Model has been tested with "temp" from ".1" to ".8". Below are the least creative outputs, prompt is in <B>BOLD</B>. Higher quants will have more detail, better quality output. --- <B><font color="red">WARNING:</font> NSFW. Vivid prose. Visceral Details. Violence. Graphic HORROR. Swearing. UNCENSORED. </B> --- <B> Start a 1000 word scene (vivid horror, 1st person, include thoughts) with: The sky scraper swayed, as she watched the window in front of her on the 21 floor explode... </b> The skyscraper swayed, as I watched the window in front of me on the 21st floor explode. Glass shards rained down like a deadly hailstorm, shattering the relative calm of the office. My heart pounded in my chest, a primal scream building in my throat. I'd been working late, trying to meet an impossible deadline for a client I'd never even met face-to-face. The fluorescent lights hummed overhead, casting an eerie pallor over the scene before me. My mind reeled as I tried to process what I was seeing - the window, the glass, the bloodied figure crumpled on the floor beyond the shattered pane. "Someone's been hurt," I thought, my voice shaking as I spoke the words out loud. "Oh God, someone's been killed." I'd always known this building was a potential target, given its status as a hub for high-stakes financial transactions. But to actually see it happen, to be so close to the horror...it was almost too much to bear. My eyes darted around the office, taking in the chaos. My coworkers were frozen in shock, their faces pale and wide-eyed. The air was thick with the acrid scent of smoke and something else - something metallic and sickeningly sweet. I gagged, my stomach churning. "Someone needs to call 911," I thought, my voice barely above a whisper. "We need an ambulance here, now." I reached for my phone, my hands trembling as I punched in the numbers. My mind raced with worst-case scenarios - what if there were more explosions? What if the building collapsed? What if...what if... 
The 911 operator's voice cut through my thoughts, calm and professional even in the face of such horror. "What's your emergency?" she asked. I took a deep breath, trying to steady myself. "There's been an explosion on the 21st floor of the downtown financial tower," I said, my words tumbling out in a rush. "Someone's been injured, maybe killed. We need an ambulance and the police here as soon as possible." The operator's response was immediate and reassuring. "I've dispatched emergency services to your location. Please stay on the line and keep me updated if the situation changes." I nodded, even though she couldn't see me. I felt a surge of relief at her words - help was on the way. But as I looked around the office again, I realized that we were all in grave danger. The building's structural integrity was compromised now, and who knew how much more damage could be done? "Everyone needs to get out of here," I thought, my voice rising with urgency. "We need to evacuate the building, now." I turned to face my coworkers, my eyes locking onto theirs. "We have to get out of here," I said, my words firm and commanding. "The building's not safe. We need to go, now." Slowly, almost as if in a daze, they began to move towards the door. I followed close behind, my heart pounding in my ears. As we reached the stairwell, I heard the wail of sirens growing louder with every passing second. We were going to make it out alive, I told myself. Help was almost here. But as I looked back over my shoulder at the shattered window and the bloodied figure on the floor, I couldn't shake the feeling that we'd barely escaped something far more sinister than a simple explosion...something that would haunt me for the rest of my days.
[ "BEAR" ]
sinequa/passage-ranker-v1-L-multilingual
sinequa
text-classification
[ "transformers", "pytorch", "bert", "text-classification", "de", "en", "es", "fr", "arxiv:1810.04805", "arxiv:1901.04085", "arxiv:2102.07033", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-07-10T10:44:18Z
2024-02-19T09:42:57+00:00
288
0
--- language: - de - en - es - fr --- # Model Card for `passage-ranker-v1-L-multilingual` This model is a passage ranker developed by Sinequa. It produces a relevance score given a query-passage pair and is used to order search results. Model name: `passage-ranker-v1-L-multilingual` ## Supported Languages The model was trained and tested in the following languages: - English - French - German - Spanish ## Scores | Metric | Value | |:--------------------|------:| | Relevance (NDCG@10) | 0.471 | Note that the relevance score is computed as an average over 14 retrieval datasets (see [details below](#evaluation-metrics)). ## Inference Times | GPU | Quantization type | Batch size 1 | Batch size 32 | |:------------------------------------------|:------------------|---------------:|---------------:| | NVIDIA A10 | FP16 | 2 ms | 31 ms | | NVIDIA A10 | FP32 | 4 ms | 82 ms | | NVIDIA T4 | FP16 | 3 ms | 65 ms | | NVIDIA T4 | FP32 | 14 ms | 364 ms | | NVIDIA L4 | FP16 | 2 ms | 38 ms | | NVIDIA L4 | FP32 | 5 ms | 124 ms | ## GPU Memory Usage | Quantization type | Memory | |:-------------------------------------------------|-----------:| | FP16 | 550 MiB | | FP32 | 1050 MiB | Note that GPU memory usage only includes how much GPU memory the actual model consumes on an NVIDIA T4 GPU with a batch size of 32. It does not include the fixed amount of memory consumed by the ONNX Runtime upon initialization, which can be around 0.5 to 1 GiB depending on the GPU used. 
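Outside the Sinequa platform, a monoBERT-style ranker like this one is typically exercised as a cross-encoder: the query and passage are fed jointly and the classification head yields the relevance score. The sketch below is illustrative only, not the official Sinequa integration; the repo id and the standard sequence-classification head layout are assumptions. The `rank` helper itself is model-agnostic.

```python
# Sketch of monoBERT-style reranking; the Hugging Face usage is an
# assumption, not the official Sinequa integration.
from typing import Callable, List, Tuple

def rank(query: str, passages: List[str],
         score_fn: Callable[[str, str], float]) -> List[Tuple[str, float]]:
    """Score every (query, passage) pair and return passages best-first."""
    scored = [(p, score_fn(query, p)) for p in passages]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

def load_model_scorer(name: str = "sinequa/passage-ranker-v1-L-multilingual"):
    """Build a scorer backed by the ranker checkpoint (needs `transformers`,
    `torch`, and network access; call only when you want the real model)."""
    import torch
    from transformers import AutoModelForSequenceClassification, AutoTokenizer
    tok = AutoTokenizer.from_pretrained(name)
    model = AutoModelForSequenceClassification.from_pretrained(name).eval()

    def score(query: str, passage: str) -> float:
        enc = tok(query, passage, truncation=True, return_tensors="pt")
        with torch.no_grad():
            return model(**enc).logits.squeeze().item()

    return score
```

For example, `rank("capital of France", passages, load_model_scorer())` would return the passages ordered by predicted relevance; any other scoring function can be plugged in for testing.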
## Requirements - Minimal Sinequa version: 11.10.0 - Minimal Sinequa version for using FP16 models and GPUs with CUDA compute capability of 8.9+ (like NVIDIA L4): 11.11.0 - [Cuda compute capability](https://developer.nvidia.com/cuda-gpus): above 5.0 (above 6.0 for FP16 use) ## Model Details ### Overview - Number of parameters: 124 million - Base language model: Homegrown Sinequa BERT-Base ([Paper](https://arxiv.org/abs/1810.04805)) pretrained in the four supported languages - Insensitive to casing and accents - Training procedure: [MonoBERT](https://arxiv.org/abs/1901.04085) ### Training Data - Probably-Asked Questions ([Paper](https://arxiv.org/abs/2102.07033), [Official Page](https://github.com/facebookresearch/PAQ)) - Original English dataset - Translated datasets for the other three supported languages ### Evaluation Metrics To determine the relevance score, we averaged the results that we obtained when evaluating on the datasets of the [BEIR benchmark](https://github.com/beir-cellar/beir). Note that all these datasets are in English. | Dataset | NDCG@10 | |:------------------|--------:| | Average | 0.471 | | | | | Arguana | 0.583 | | CLIMATE-FEVER | 0.150 | | DBPedia Entity | 0.366 | | FEVER | 0.734 | | FiQA-2018 | 0.288 | | HotpotQA | 0.698 | | MS MARCO | 0.341 | | NFCorpus | 0.345 | | NQ | 0.483 | | Quora | 0.766 | | SCIDOCS | 0.142 | | SciFact | 0.654 | | TREC-COVID | 0.711 | | Webis-Touche-2020 | 0.334 | We evaluated the model on the datasets of the [MIRACL benchmark](https://github.com/project-miracl/miracl) to test its multilingual capacities. Note that not all training languages are part of the benchmark, so we only report the metrics for the existing languages. | Language | NDCG@10 | |:---------|--------:| | French | 0.401 | | German | 0.396 | | Spanish | 0.453 |
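The NDCG@10 figures above can be reproduced in spirit with a few lines. This is a simplified sketch (linear gain, with the ideal DCG computed over the returned list only), which differs slightly from the official BEIR/pytrec_eval computation:

```python
import math
from typing import Sequence

def ndcg_at_k(ranked_rels: Sequence[float], k: int = 10) -> float:
    """NDCG@k for one query; `ranked_rels` holds the graded relevance
    labels of the returned results, in the order the ranker produced."""
    def dcg(rels: Sequence[float]) -> float:
        # Position-discounted gain: relevance / log2(position + 2).
        return sum(rel / math.log2(pos + 2) for pos, rel in enumerate(rels[:k]))

    ideal = dcg(sorted(ranked_rels, reverse=True))
    return dcg(ranked_rels) / ideal if ideal > 0 else 0.0
```

A perfectly ordered result list scores 1.0; relevant documents pushed down the ranking are discounted logarithmically.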
[ "SCIFACT" ]
RichardErkhov/YBXL_-_Med-LLaMA3-8B-gguf
RichardErkhov
null
[ "gguf", "arxiv:2402.12749", "endpoints_compatible", "region:us" ]
2024-08-22T05:57:47Z
2024-08-22T07:53:44+00:00
286
0
--- {} --- Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Med-LLaMA3-8B - GGUF - Model creator: https://huggingface.co/YBXL/ - Original model: https://huggingface.co/YBXL/Med-LLaMA3-8B/ | Name | Quant method | Size | | ---- | ---- | ---- | | [Med-LLaMA3-8B.Q2_K.gguf](https://huggingface.co/RichardErkhov/YBXL_-_Med-LLaMA3-8B-gguf/blob/main/Med-LLaMA3-8B.Q2_K.gguf) | Q2_K | 2.96GB | | [Med-LLaMA3-8B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/YBXL_-_Med-LLaMA3-8B-gguf/blob/main/Med-LLaMA3-8B.IQ3_XS.gguf) | IQ3_XS | 3.28GB | | [Med-LLaMA3-8B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/YBXL_-_Med-LLaMA3-8B-gguf/blob/main/Med-LLaMA3-8B.IQ3_S.gguf) | IQ3_S | 3.43GB | | [Med-LLaMA3-8B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/YBXL_-_Med-LLaMA3-8B-gguf/blob/main/Med-LLaMA3-8B.Q3_K_S.gguf) | Q3_K_S | 3.41GB | | [Med-LLaMA3-8B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/YBXL_-_Med-LLaMA3-8B-gguf/blob/main/Med-LLaMA3-8B.IQ3_M.gguf) | IQ3_M | 3.52GB | | [Med-LLaMA3-8B.Q3_K.gguf](https://huggingface.co/RichardErkhov/YBXL_-_Med-LLaMA3-8B-gguf/blob/main/Med-LLaMA3-8B.Q3_K.gguf) | Q3_K | 3.74GB | | [Med-LLaMA3-8B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/YBXL_-_Med-LLaMA3-8B-gguf/blob/main/Med-LLaMA3-8B.Q3_K_M.gguf) | Q3_K_M | 3.74GB | | [Med-LLaMA3-8B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/YBXL_-_Med-LLaMA3-8B-gguf/blob/main/Med-LLaMA3-8B.Q3_K_L.gguf) | Q3_K_L | 4.03GB | | [Med-LLaMA3-8B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/YBXL_-_Med-LLaMA3-8B-gguf/blob/main/Med-LLaMA3-8B.IQ4_XS.gguf) | IQ4_XS | 4.18GB | | [Med-LLaMA3-8B.Q4_0.gguf](https://huggingface.co/RichardErkhov/YBXL_-_Med-LLaMA3-8B-gguf/blob/main/Med-LLaMA3-8B.Q4_0.gguf) | Q4_0 | 4.34GB | | [Med-LLaMA3-8B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/YBXL_-_Med-LLaMA3-8B-gguf/blob/main/Med-LLaMA3-8B.IQ4_NL.gguf) | IQ4_NL 
| 4.38GB | | [Med-LLaMA3-8B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/YBXL_-_Med-LLaMA3-8B-gguf/blob/main/Med-LLaMA3-8B.Q4_K_S.gguf) | Q4_K_S | 4.37GB | | [Med-LLaMA3-8B.Q4_K.gguf](https://huggingface.co/RichardErkhov/YBXL_-_Med-LLaMA3-8B-gguf/blob/main/Med-LLaMA3-8B.Q4_K.gguf) | Q4_K | 4.58GB | | [Med-LLaMA3-8B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/YBXL_-_Med-LLaMA3-8B-gguf/blob/main/Med-LLaMA3-8B.Q4_K_M.gguf) | Q4_K_M | 4.58GB | | [Med-LLaMA3-8B.Q4_1.gguf](https://huggingface.co/RichardErkhov/YBXL_-_Med-LLaMA3-8B-gguf/blob/main/Med-LLaMA3-8B.Q4_1.gguf) | Q4_1 | 4.78GB | | [Med-LLaMA3-8B.Q5_0.gguf](https://huggingface.co/RichardErkhov/YBXL_-_Med-LLaMA3-8B-gguf/blob/main/Med-LLaMA3-8B.Q5_0.gguf) | Q5_0 | 5.21GB | | [Med-LLaMA3-8B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/YBXL_-_Med-LLaMA3-8B-gguf/blob/main/Med-LLaMA3-8B.Q5_K_S.gguf) | Q5_K_S | 5.21GB | | [Med-LLaMA3-8B.Q5_K.gguf](https://huggingface.co/RichardErkhov/YBXL_-_Med-LLaMA3-8B-gguf/blob/main/Med-LLaMA3-8B.Q5_K.gguf) | Q5_K | 5.34GB | | [Med-LLaMA3-8B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/YBXL_-_Med-LLaMA3-8B-gguf/blob/main/Med-LLaMA3-8B.Q5_K_M.gguf) | Q5_K_M | 5.34GB | | [Med-LLaMA3-8B.Q5_1.gguf](https://huggingface.co/RichardErkhov/YBXL_-_Med-LLaMA3-8B-gguf/blob/main/Med-LLaMA3-8B.Q5_1.gguf) | Q5_1 | 5.65GB | | [Med-LLaMA3-8B.Q6_K.gguf](https://huggingface.co/RichardErkhov/YBXL_-_Med-LLaMA3-8B-gguf/blob/main/Med-LLaMA3-8B.Q6_K.gguf) | Q6_K | 6.14GB | | [Med-LLaMA3-8B.Q8_0.gguf](https://huggingface.co/RichardErkhov/YBXL_-_Med-LLaMA3-8B-gguf/blob/main/Med-LLaMA3-8B.Q8_0.gguf) | Q8_0 | 7.95GB | Original model description: --- library_name: transformers tags: [] --- # Model Card for Med-LLaMA3-8B <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description Med-LLaMA3-8B is an 8-billion parameter medical language model that has undergone continual pre-training on LLaMA3-8B architecture using large-scale open-sourced medical data. 
## Training Details Med-LLaMA3-8B is trained on a large-scale dataset comprising medical books, medical literature, clinical guidelines, and a small portion of general-domain data. It is a study extension of our previous Me-LLaMA paper: https://arxiv.org/pdf/2402.12749 If you use the model, please cite the following paper: <pre> @misc{xie2024llama, title={Me LLaMA: Foundation Large Language Models for Medical Applications}, author={Qianqian Xie and Qingyu Chen and Aokun Chen and Cheng Peng and Yan Hu and Fongci Lin and Xueqing Peng and Jimin Huang and Jeffrey Zhang and Vipina Keloth and Huan He and Lucila Ohno-Machido and Yonghui Wu and Hua Xu and Jiang Bian}, year={2024}, eprint={2402.12749}, archivePrefix={arXiv}, primaryClass={cs.CL} } </pre>
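As a rough guide to choosing among the files in the table above, a throwaway helper can pick the largest quant that fits a given memory budget. The sizes below are copied from the table (approximate; alias entries such as Q3_K/Q4_K/Q5_K, which duplicate their K_M sizes, are omitted), and real usage also needs headroom for the KV cache and runtime overhead:

```python
from typing import Optional

# Sizes in GB, copied from the quant table above (approximate).
QUANT_SIZES_GB = {
    "Q2_K": 2.96, "IQ3_XS": 3.28, "Q3_K_S": 3.41, "IQ3_S": 3.43,
    "IQ3_M": 3.52, "Q3_K_M": 3.74, "Q3_K_L": 4.03, "IQ4_XS": 4.18,
    "Q4_0": 4.34, "Q4_K_S": 4.37, "IQ4_NL": 4.38, "Q4_K_M": 4.58,
    "Q4_1": 4.78, "Q5_0": 5.21, "Q5_K_S": 5.21, "Q5_K_M": 5.34,
    "Q5_1": 5.65, "Q6_K": 6.14, "Q8_0": 7.95,
}

def largest_fitting_quant(budget_gb: float) -> Optional[str]:
    """Pick the biggest (roughly: highest-quality) quant under the budget;
    leave extra headroom on top of this for context and runtime overhead."""
    fitting = [(size, name) for name, size in QUANT_SIZES_GB.items()
               if size <= budget_gb]
    return max(fitting)[1] if fitting else None
```

For example, with about 5 GB to spare this picks Q4_1, while an 8 GB budget allows the full-quality Q8_0 file.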
[ "MEDICAL DATA" ]
RichardErkhov/m42-health_-_Llama3-Med42-70B-gguf
RichardErkhov
null
[ "gguf", "arxiv:2408.06142", "endpoints_compatible", "region:us", "conversational" ]
2024-09-07T02:18:52Z
2024-09-07T22:27:32+00:00
286
0
--- {} --- Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Llama3-Med42-70B - GGUF - Model creator: https://huggingface.co/m42-health/ - Original model: https://huggingface.co/m42-health/Llama3-Med42-70B/ | Name | Quant method | Size | | ---- | ---- | ---- | | [Llama3-Med42-70B.Q2_K.gguf](https://huggingface.co/RichardErkhov/m42-health_-_Llama3-Med42-70B-gguf/blob/main/Llama3-Med42-70B.Q2_K.gguf) | Q2_K | 24.56GB | | [Llama3-Med42-70B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/m42-health_-_Llama3-Med42-70B-gguf/blob/main/Llama3-Med42-70B.IQ3_XS.gguf) | IQ3_XS | 27.29GB | | [Llama3-Med42-70B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/m42-health_-_Llama3-Med42-70B-gguf/blob/main/Llama3-Med42-70B.IQ3_S.gguf) | IQ3_S | 28.79GB | | [Llama3-Med42-70B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/m42-health_-_Llama3-Med42-70B-gguf/blob/main/Llama3-Med42-70B.Q3_K_S.gguf) | Q3_K_S | 28.79GB | | [Llama3-Med42-70B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/m42-health_-_Llama3-Med42-70B-gguf/blob/main/Llama3-Med42-70B.IQ3_M.gguf) | IQ3_M | 29.74GB | | [Llama3-Med42-70B.Q3_K.gguf](https://huggingface.co/RichardErkhov/m42-health_-_Llama3-Med42-70B-gguf/blob/main/Llama3-Med42-70B.Q3_K.gguf) | Q3_K | 31.91GB | | [Llama3-Med42-70B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/m42-health_-_Llama3-Med42-70B-gguf/blob/main/Llama3-Med42-70B.Q3_K_M.gguf) | Q3_K_M | 31.91GB | | [Llama3-Med42-70B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/m42-health_-_Llama3-Med42-70B-gguf/blob/main/Llama3-Med42-70B.Q3_K_L.gguf) | Q3_K_L | 34.59GB | | [Llama3-Med42-70B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/m42-health_-_Llama3-Med42-70B-gguf/blob/main/Llama3-Med42-70B.IQ4_XS.gguf) | IQ4_XS | 35.64GB | | 
[Llama3-Med42-70B.Q4_0.gguf](https://huggingface.co/RichardErkhov/m42-health_-_Llama3-Med42-70B-gguf/blob/main/Llama3-Med42-70B.Q4_0.gguf) | Q4_0 | 37.22GB | | [Llama3-Med42-70B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/m42-health_-_Llama3-Med42-70B-gguf/tree/main/) | IQ4_NL | 37.58GB | | [Llama3-Med42-70B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/m42-health_-_Llama3-Med42-70B-gguf/tree/main/) | Q4_K_S | 37.58GB | | [Llama3-Med42-70B.Q4_K.gguf](https://huggingface.co/RichardErkhov/m42-health_-_Llama3-Med42-70B-gguf/tree/main/) | Q4_K | 39.6GB | | [Llama3-Med42-70B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/m42-health_-_Llama3-Med42-70B-gguf/tree/main/) | Q4_K_M | 39.6GB | | [Llama3-Med42-70B.Q4_1.gguf](https://huggingface.co/RichardErkhov/m42-health_-_Llama3-Med42-70B-gguf/tree/main/) | Q4_1 | 41.27GB | | [Llama3-Med42-70B.Q5_0.gguf](https://huggingface.co/RichardErkhov/m42-health_-_Llama3-Med42-70B-gguf/tree/main/) | Q5_0 | 45.32GB | | [Llama3-Med42-70B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/m42-health_-_Llama3-Med42-70B-gguf/tree/main/) | Q5_K_S | 45.32GB | | [Llama3-Med42-70B.Q5_K.gguf](https://huggingface.co/RichardErkhov/m42-health_-_Llama3-Med42-70B-gguf/tree/main/) | Q5_K | 46.52GB | | [Llama3-Med42-70B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/m42-health_-_Llama3-Med42-70B-gguf/tree/main/) | Q5_K_M | 46.52GB | | [Llama3-Med42-70B.Q5_1.gguf](https://huggingface.co/RichardErkhov/m42-health_-_Llama3-Med42-70B-gguf/tree/main/) | Q5_1 | 49.36GB | | [Llama3-Med42-70B.Q6_K.gguf](https://huggingface.co/RichardErkhov/m42-health_-_Llama3-Med42-70B-gguf/tree/main/) | Q6_K | 53.91GB | | [Llama3-Med42-70B.Q8_0.gguf](https://huggingface.co/RichardErkhov/m42-health_-_Llama3-Med42-70B-gguf/tree/main/) | Q8_0 | 69.83GB | Original model description: --- language: - en license: llama3 tags: - m42 - health - healthcare - clinical-llm pipeline_tag: text-generation inference: false license_name: llama3 --- # **Med42-v2 - A Suite of 
Clinically-aligned Large Language Models** Med42-v2 is a suite of open-access clinical large language models (LLMs), instruction- and preference-tuned by M42 to expand access to medical knowledge. Built off LLaMA-3 and comprising either 8 or 70 billion parameters, these generative AI systems provide high-quality answers to medical questions. ## Key performance metrics: - Med42-v2-70B outperforms GPT-4.0 in most of the MCQA tasks. - Med42-v2-70B achieves a MedQA zero-shot performance of 79.10, surpassing the prior state-of-the-art among all openly available medical LLMs. - Med42-v2-70B sits at the top of the Clinical Elo Rating Leaderboard. |Models|Elo Score| |:---:|:---:| |**Med42-v2-70B**| 1764 | |Llama3-70B-Instruct| 1643 | |GPT4-o| 1426 | |Llama3-8B-Instruct| 1352 | |Mixtral-8x7b-Instruct| 970 | |**Med42-v2-8B**| 924 | |OpenBioLLM-70B| 657 | |JSL-MedLlama-3-8B-v2.0| 447 | ## Limitations & Safe Use - The Med42-v2 suite of models is not ready for real clinical use. Extensive human evaluation is ongoing, as it is essential to ensure safety. - Potential for generating incorrect or harmful information. - Risk of perpetuating biases in training data. Use this suite of models responsibly! Do not rely on them for medical usage without rigorous safety testing. ## Model Details *Disclaimer: This large language model is not yet ready for clinical use without further testing and validation. It should not be relied upon for making medical decisions or providing patient care.* Starting from the Llama3 models, the Med42-v2 models were instruction-tuned using a dataset of ~1B tokens compiled from different open-access and high-quality sources, including medical flashcards, exam questions, and open-domain dialogues. **Model Developers:** M42 Health AI Team **Finetuned from model:** Llama3 - 8B & 70B Instruct **Context length:** 8k tokens **Input:** Text only data **Output:** Model generates text only **Status:** This is a static model trained on an offline dataset. 
Future versions of the tuned models will be released as we enhance the model's performance. **License:** Llama 3 Community License Agreement **Research Paper:** [Med42-v2: A Suite of Clinical LLMs](https://huggingface.co/papers/2408.06142) ## Intended Use The Med42-v2 suite of models is being made available for further testing and assessment as AI assistants to enhance clinical decision-making and access to LLMs for healthcare use. Potential use cases include: - Medical question answering - Patient record summarization - Aiding medical diagnosis - General health Q&A **Run the model** You can use the 🤗 Transformers library `text-generation` pipeline to do inference. ```python import transformers import torch model_name_or_path = "m42-health/Llama3-Med42-70B" pipeline = transformers.pipeline( "text-generation", model=model_name_or_path, torch_dtype=torch.bfloat16, device_map="auto", ) messages = [ { "role": "system", "content": ( "You are a helpful, respectful and honest medical assistant. You are a second version of Med42 developed by the AI team at M42, UAE. " "Always answer as helpfully as possible, while being safe. " "Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. " "Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. " "If you don’t know the answer to a question, please don’t share false information." 
), }, {"role": "user", "content": "What are the symptoms of diabetes?"}, ] prompt = pipeline.tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=False ) stop_tokens = [ pipeline.tokenizer.eos_token_id, pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>"), ] outputs = pipeline( prompt, max_new_tokens=512, eos_token_id=stop_tokens, do_sample=True, temperature=0.4, top_k=150, top_p=0.75, ) print(outputs[0]["generated_text"][len(prompt) :]) ``` ## Hardware and Software The training was conducted on the NVIDIA DGX cluster with H100 GPUs, utilizing PyTorch's Fully Sharded Data Parallel (FSDP) framework. ## Evaluation Results ### Open-ended question generation To ensure a robust evaluation of our model's output quality, we employ the LLM-as-a-Judge approach using Prometheus-8x7b-v2.0. Our assessment uses 4,000 carefully curated publicly accessible healthcare-related questions, generating responses from various models. We then use Prometheus to conduct pairwise comparisons of the answers. Drawing inspiration from the LMSYS Chatbot-Arena methodology, we present the results as Elo ratings for each model. To maintain fairness and eliminate potential bias from prompt engineering, we used the same simple system prompt for every model throughout the evaluation process. Below is the scoring rubric we used to prompt Prometheus to select the best answer: ``` ### Score Rubric: Which response is of higher overall quality in a medical context? Consider: * Relevance: Does it directly address the question? * Completeness: Does it cover all important aspects, details and subpoints? * Safety: Does it avoid unsafe practices and address potential risks? * Ethics: Does it maintain confidentiality and avoid biases? * Clarity: Is it professional, clear and easy to understand? 
``` #### Elo Ratings |Models|Elo Score| |:---:|:---:| |**Med42-v2-70B**| 1764 | |Llama3-70B-Instruct| 1643 | |GPT4-o| 1426 | |Llama3-8B-Instruct| 1352 | |Mixtral-8x7b-Instruct| 970 | |**Med42-v2-8B**| 924 | |OpenBioLLM-70B| 657 | |JSL-MedLlama-3-8B-v2.0| 447 | #### Win-rate ![plot](./pairwise_model_comparison.svg) ### MCQA Evaluation Med42-v2 improves performance on every clinical benchmark compared to our previous version, including MedQA, MedMCQA, USMLE, MMLU clinical topics, and MMLU Pro clinical subset. For all evaluations reported so far, we use [EleutherAI's evaluation harness library](https://github.com/EleutherAI/lm-evaluation-harness) and report zero-shot accuracies (except otherwise stated). We integrated chat templates into harness and computed the likelihood for the full answer instead of only the tokens "a.", "b.", "c." or "d.". |Model|MMLU Pro|MMLU|MedMCQA|MedQA|USMLE| |---:|:---:|:---:|:---:|:---:|:---:| |**Med42v2-70B**|64.36|87.12|73.20|79.10|83.80| |**Med42v2-8B**|54.30|75.76|61.34|62.84|67.04| |OpenBioLLM-70B|64.24|90.40|73.18|76.90|79.01| |GPT-4.0<sup>&dagger;</sup>|-|87.00|69.50|78.90|84.05| |MedGemini*|-|-|-|84.00|-| |Med-PaLM-2 (5-shot)*|-|87.77|71.30|79.70|-| |Med42|-|76.72|60.90|61.50|71.85| |ClinicalCamel-70B|-|69.75|47.00|53.40|54.30| |GPT-3.5<sup>&dagger;</sup>|-|66.63|50.10|50.80|53.00| |Llama3-8B-Instruct|48.24|72.89|59.65|61.64|60.38| |Llama3-70B-Instruct|64.24|85.99|72.03|78.88|83.57| **For MedGemini, results are reported for MedQA without self-training and without search. We note that 0-shot performance is not reported for Med-PaLM 2. Further details can be found at [https://github.com/m42health/med42](https://github.com/m42health/med42)*. <sup>&dagger;</sup> *Results as reported in the paper [Capabilities of GPT-4 on Medical Challenge Problems](https://www.microsoft.com/en-us/research/uploads/prod/2023/03/GPT-4_medical_benchmarks.pdf)*. 
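The full-answer scoring described above can be sketched independently of any particular model: given the per-token log-probabilities an LM assigns to each candidate answer, sum them (optionally length-normalizing) and take the argmax. This is an illustration of the scoring rule only, not the harness's actual code:

```python
from typing import Dict, List

def pick_answer(answer_logprobs: Dict[str, List[float]],
                length_normalize: bool = False) -> str:
    """Choose the candidate whose full text the LM finds most likely.

    `answer_logprobs` maps each candidate answer to the per-token
    log-probabilities the model assigned to that answer's tokens given
    the question prompt; summing them scores the whole answer rather
    than just an option letter like "a."."""
    def score(lps: List[float]) -> float:
        total = sum(lps)
        return total / len(lps) if length_normalize else total

    return max(answer_logprobs, key=lambda ans: score(answer_logprobs[ans]))
```

Length normalization matters when candidate answers differ in token count, since longer answers otherwise accumulate more negative log-probability mass.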
## Accessing Med42 and Reporting Issues Please report any software "bug" or other problems through one of the following means: - Reporting issues with the model: [https://github.com/m42health/med42](https://github.com/m42health/med42) - Reporting risky content generated by the model, bugs and/or any security concerns: [https://forms.office.com/r/fPY4Ksecgf](https://forms.office.com/r/fPY4Ksecgf) - M42’s privacy policy available at [https://m42.ae/privacy-policy/](https://m42.ae/privacy-policy/) - Reporting violations of the Acceptable Use Policy or unlicensed uses of Med42: <[email protected]> ## Acknowledgements We thank the Torch FSDP team for their robust distributed training framework, the EleutherAI harness team for their valuable evaluation tools, and the Hugging Face Alignment team for their contributions to responsible AI development. ## Citation ``` @misc{med42v2, Author = {Cl{\'e}ment Christophe and Praveen K Kanithi and Tathagata Raha and Shadab Khan and Marco AF Pimentel}, Title = {Med42-v2: A Suite of Clinical LLMs}, Year = {2024}, Eprint = {arXiv:2408.06142}, url={https://arxiv.org/abs/2408.06142}, } ```
[ "MEDQA" ]
prithivMLmods/Canopus-Cute-Kawaii-Flux-LoRA
prithivMLmods
text-to-image
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "flux", "cute-kawaii", "flux-dev", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:creativeml-openrail-m", "region:us" ]
2024-09-08T11:43:14Z
2024-09-08T12:01:16+00:00
286
16
--- base_model: black-forest-labs/FLUX.1-dev license: creativeml-openrail-m tags: - text-to-image - stable-diffusion - lora - diffusers - template:sd-lora - flux - cute-kawaii - flux-dev widget: - text: Cute-kawaii, An animated image of an ice cream cone on an orange background. The ice cream in the middle of the image is white with brown dots on it. There are black eyes on the ice cream. There is a smiley face on the top of the icecream cone. The cone is made up of wicker and has ridges on the sides. The background is a vibrant orange color. output: url: images/11.png - text: Cute-kawaii, An animated image of two brown bears on a light blue background. The bear on the left is white with brown spots on its head and ears.while the one on the right is light brown. The bears are facing each other.with their noses pointing to the right. Both bears have black eyes and black noses. output: url: images/22.png - text: Cute-kawaii, A vibrant red background with a triangle shaped watermelon in the center. The watermelon is in the shape of a triangle and has a smiley face on it. Below the triangle is the text wat-a-melon in white letters. output: url: images/33.png instance_prompt: cute-kawaii --- # Cute-Kawaii-Flux-LoRA <Gallery /> **The model is still in the training phase. This is not the final version and may contain artifacts and perform poorly in some cases.** ## Model description **prithivMLmods/Canopus-Cute-Kawaii-Flux-LoRA** Image Processing Parameters | Parameter | Value | Parameter | Value | |---------------------------|--------|---------------------------|--------| | LR Scheduler | constant | Noise Offset | 0.03 | | Optimizer | AdamW8bit | Multires Noise Discount | 0.1 | | Network Dim | 64 | Multires Noise Iterations | 10 | | Network Alpha | 32 | Repeat & Steps | 25 & 2K+ | | Epoch | 17 | Save Every N Epochs | 1 | Labeling: florence2-en(natural language & English) Total Images Used for Training : 70+ [ Hi-RES ] & More ............... 
## Trigger prompts Cute-kawaii, An animated image of an ice cream cone on an orange background. The ice cream in the middle of the image is white with brown dots on it. There are black eyes on the ice cream. There is a smiley face on the top of the icecream cone. The cone is made up of wicker and has ridges on the sides. The background is a vibrant orange color. Cute-kawaii, An animated image of two brown bears on a light blue background. The bear on the left is white with brown spots on its head and ears.while the one on the right is light brown. The bears are facing each other.with their noses pointing to the right. Both bears have black eyes and black noses. Cute-kawaii, A vibrant red background with a triangle shaped watermelon in the center. The watermelon is in the shape of a triangle and has a smiley face on it. Below the triangle is the text "wat-a-melon" in white letters. | Parameter | Value | |-----------------|---------------------------------------------------------------------------------------| | Prompt | Cute-kawaii, An animated image of two brown bears on a light blue background. The bear on the left is white with brown spots on its head and ears.while the one on the right is light brown. The bears are facing each other.with their noses pointing to the right. Both bears have black eyes and black noses. | | Sampler | euler | ## Setting Up ``` import torch from diffusers import DiffusionPipeline base_model = "black-forest-labs/FLUX.1-dev" pipe = DiffusionPipeline.from_pretrained(base_model, torch_dtype=torch.bfloat16) lora_repo = "prithivMLmods/Canopus-Cute-Kawaii-Flux-LoRA" trigger_word = "cute-kawaii" # Leave trigger_word blank if not used. pipe.load_lora_weights(lora_repo) device = torch.device("cuda") pipe.to(device) ``` ## Trigger words You should use `cute-kawaii` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. 
[Download](/prithivMLmods/Canopus-Cute-Kawaii-Flux-LoRA/tree/main) them in the Files & versions tab.
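Since generations should include the `cute-kawaii` trigger, a small hypothetical helper (not part of diffusers) can guard against forgetting it:

```python
def with_trigger(prompt: str, trigger: str = "cute-kawaii") -> str:
    """Prepend the LoRA trigger word unless the prompt already contains it
    (case-insensitive), mirroring how the example prompts above begin."""
    if trigger.lower() in prompt.lower():
        return prompt
    return f"{trigger}, {prompt}"
```

With the pipeline from the setup section loaded, generation would then look like `pipe(with_trigger("two brown bears on a light blue background")).images[0]`.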
[ "BEAR" ]
mradermacher/Llama-3.2-3B-pubMedQA-longAnswer-GGUF
mradermacher
null
[ "transformers", "gguf", "en", "base_model:harrysyz/Llama-3.2-3B-pubMedQA-longAnswer", "base_model:quantized:harrysyz/Llama-3.2-3B-pubMedQA-longAnswer", "endpoints_compatible", "region:us", "conversational" ]
2024-12-04T11:56:15Z
2024-12-04T12:17:03+00:00
286
0
--- base_model: harrysyz/Llama-3.2-3B-pubMedQA-longAnswer language: - en library_name: transformers tags: [] quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> static quants of https://huggingface.co/harrysyz/Llama-3.2-3B-pubMedQA-longAnswer <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-pubMedQA-longAnswer-GGUF/resolve/main/Llama-3.2-3B-pubMedQA-longAnswer.Q2_K.gguf) | Q2_K | 1.5 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-pubMedQA-longAnswer-GGUF/resolve/main/Llama-3.2-3B-pubMedQA-longAnswer.Q3_K_S.gguf) | Q3_K_S | 1.6 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-pubMedQA-longAnswer-GGUF/resolve/main/Llama-3.2-3B-pubMedQA-longAnswer.Q3_K_M.gguf) | Q3_K_M | 1.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-pubMedQA-longAnswer-GGUF/resolve/main/Llama-3.2-3B-pubMedQA-longAnswer.Q3_K_L.gguf) | Q3_K_L | 1.9 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-pubMedQA-longAnswer-GGUF/resolve/main/Llama-3.2-3B-pubMedQA-longAnswer.IQ4_XS.gguf) | IQ4_XS | 1.9 | | | 
[GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-pubMedQA-longAnswer-GGUF/resolve/main/Llama-3.2-3B-pubMedQA-longAnswer.Q4_0_4_4.gguf) | Q4_0_4_4 | 2.0 | fast on arm, low quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-pubMedQA-longAnswer-GGUF/resolve/main/Llama-3.2-3B-pubMedQA-longAnswer.Q4_K_S.gguf) | Q4_K_S | 2.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-pubMedQA-longAnswer-GGUF/resolve/main/Llama-3.2-3B-pubMedQA-longAnswer.Q4_K_M.gguf) | Q4_K_M | 2.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-pubMedQA-longAnswer-GGUF/resolve/main/Llama-3.2-3B-pubMedQA-longAnswer.Q5_K_S.gguf) | Q5_K_S | 2.4 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-pubMedQA-longAnswer-GGUF/resolve/main/Llama-3.2-3B-pubMedQA-longAnswer.Q5_K_M.gguf) | Q5_K_M | 2.4 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-pubMedQA-longAnswer-GGUF/resolve/main/Llama-3.2-3B-pubMedQA-longAnswer.Q6_K.gguf) | Q6_K | 2.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-pubMedQA-longAnswer-GGUF/resolve/main/Llama-3.2-3B-pubMedQA-longAnswer.Q8_0.gguf) | Q8_0 | 3.5 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-pubMedQA-longAnswer-GGUF/resolve/main/Llama-3.2-3B-pubMedQA-longAnswer.f16.gguf) | f16 | 6.5 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
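The Usage note above defers to TheBloke's READMEs for concatenating multi-part GGUF files. At the byte level that join is a plain append of the parts in order — the quants in this repo are single files, so the sketch below (with hypothetical part names) only matters for split downloads:

```python
# Minimal sketch: rejoin a split GGUF download by straight byte
# concatenation. File names here are hypothetical; use the actual
# part naming scheme of the repo you downloaded from.
from pathlib import Path

def concat_parts(parts, out_path):
    """Append each part's bytes, in listed order, into out_path."""
    with open(out_path, "wb") as out:
        for part in parts:
            out.write(Path(part).read_bytes())

# Example (hypothetical file names):
# concat_parts(["model.gguf.part1of2", "model.gguf.part2of2"], "model.gguf")
```

This is equivalent to `cat part1 part2 > model.gguf` on Unix; the Python form is shown only for portability.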
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
[ "PUBMEDQA" ]
kadirnar/Yolov10
kadirnar
object-detection
[ "yolov10", "object-detection", "computer-vision", "pypi", "dataset:detection-datasets/coco", "arxiv:2405.14458", "license:agpl-3.0", "region:us" ]
2024-05-24T19:23:55Z
2024-05-24T20:21:59+00:00
285
46
--- datasets: - detection-datasets/coco license: agpl-3.0 tags: - object-detection - computer-vision - yolov10 - pypi --- ### Model Description [YOLOv10: Real-Time End-to-End Object Detection](https://arxiv.org/abs/2405.14458v1) [Paper Repo: Implementation of paper - YOLOv10](https://github.com/THU-MIG/yolov10) ### Installation ``` pip install supervision git+https://github.com/THU-MIG/yolov10.git ``` ### YOLOv10 Inference ```python from ultralytics import YOLOv10 import supervision as sv import cv2 MODEL_PATH = 'yolov10n.pt' IMAGE_PATH = 'dog.jpeg' model = YOLOv10(MODEL_PATH) image = cv2.imread(IMAGE_PATH) results = model(source=image, conf=0.25, verbose=False)[0] detections = sv.Detections.from_ultralytics(results) box_annotator = sv.BoxAnnotator() category_dict = { 0: 'person', 1: 'bicycle', 2: 'car', 3: 'motorcycle', 4: 'airplane', 5: 'bus', 6: 'train', 7: 'truck', 8: 'boat', 9: 'traffic light', 10: 'fire hydrant', 11: 'stop sign', 12: 'parking meter', 13: 'bench', 14: 'bird', 15: 'cat', 16: 'dog', 17: 'horse', 18: 'sheep', 19: 'cow', 20: 'elephant', 21: 'bear', 22: 'zebra', 23: 'giraffe', 24: 'backpack', 25: 'umbrella', 26: 'handbag', 27: 'tie', 28: 'suitcase', 29: 'frisbee', 30: 'skis', 31: 'snowboard', 32: 'sports ball', 33: 'kite', 34: 'baseball bat', 35: 'baseball glove', 36: 'skateboard', 37: 'surfboard', 38: 'tennis racket', 39: 'bottle', 40: 'wine glass', 41: 'cup', 42: 'fork', 43: 'knife', 44: 'spoon', 45: 'bowl', 46: 'banana', 47: 'apple', 48: 'sandwich', 49: 'orange', 50: 'broccoli', 51: 'carrot', 52: 'hot dog', 53: 'pizza', 54: 'donut', 55: 'cake', 56: 'chair', 57: 'couch', 58: 'potted plant', 59: 'bed', 60: 'dining table', 61: 'toilet', 62: 'tv', 63: 'laptop', 64: 'mouse', 65: 'remote', 66: 'keyboard', 67: 'cell phone', 68: 'microwave', 69: 'oven', 70: 'toaster', 71: 'sink', 72: 'refrigerator', 73: 'book', 74: 'clock', 75: 'vase', 76: 'scissors', 77: 'teddy bear', 78: 'hair drier', 79: 'toothbrush' } 
labels = [ f"{category_dict[class_id]} {confidence:.2f}" for class_id, confidence in zip(detections.class_id, detections.confidence) ] annotated_image = box_annotator.annotate( image.copy(), detections=detections, labels=labels ) cv2.imwrite('annotated_dog.jpeg', annotated_image) ``` ### BibTeX Entry and Citation Info ``` @misc{wang2024yolov10, title={YOLOv10: Real-Time End-to-End Object Detection}, author={Ao Wang and Hui Chen and Lihao Liu and Kai Chen and Zijia Lin and Jungong Han and Guiguang Ding}, year={2024}, eprint={2405.14458}, archivePrefix={arXiv}, primaryClass={cs.CV} } ```
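The `labels` list comprehension in the snippet above depends only on class ids and confidence scores, so it can be exercised without loading a model; a minimal stand-alone sketch with made-up detections (in the real pipeline they come from `sv.Detections.from_ultralytics(results)`):

```python
# Stand-alone illustration of the label-string formatting used above.
# The class ids and confidences below are made up for demonstration.
category_dict = {0: 'person', 16: 'dog'}  # subset of the COCO mapping above

class_ids = [16, 0]          # hypothetical detected class ids
confidences = [0.87, 0.52]   # hypothetical confidence scores

labels = [
    f"{category_dict[class_id]} {confidence:.2f}"
    for class_id, confidence in zip(class_ids, confidences)
]
print(labels)  # → ['dog 0.87', 'person 0.52']
```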
[ "BEAR" ]
RichardErkhov/apple_-_OpenELM-270M-gguf
RichardErkhov
null
[ "gguf", "arxiv:2404.14619", "endpoints_compatible", "region:us" ]
2024-07-13T19:29:02Z
2024-07-13T19:36:25+00:00
285
0
--- {} --- Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) OpenELM-270M - GGUF - Model creator: https://huggingface.co/apple/ - Original model: https://huggingface.co/apple/OpenELM-270M/ | Name | Quant method | Size | | ---- | ---- | ---- | | [OpenELM-270M.Q2_K.gguf](https://huggingface.co/RichardErkhov/apple_-_OpenELM-270M-gguf/blob/main/OpenELM-270M.Q2_K.gguf) | Q2_K | 0.11GB | | [OpenELM-270M.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/apple_-_OpenELM-270M-gguf/blob/main/OpenELM-270M.IQ3_XS.gguf) | IQ3_XS | 0.12GB | | [OpenELM-270M.IQ3_S.gguf](https://huggingface.co/RichardErkhov/apple_-_OpenELM-270M-gguf/blob/main/OpenELM-270M.IQ3_S.gguf) | IQ3_S | 0.12GB | | [OpenELM-270M.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/apple_-_OpenELM-270M-gguf/blob/main/OpenELM-270M.Q3_K_S.gguf) | Q3_K_S | 0.12GB | | [OpenELM-270M.IQ3_M.gguf](https://huggingface.co/RichardErkhov/apple_-_OpenELM-270M-gguf/blob/main/OpenELM-270M.IQ3_M.gguf) | IQ3_M | 0.13GB | | [OpenELM-270M.Q3_K.gguf](https://huggingface.co/RichardErkhov/apple_-_OpenELM-270M-gguf/blob/main/OpenELM-270M.Q3_K.gguf) | Q3_K | 0.14GB | | [OpenELM-270M.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/apple_-_OpenELM-270M-gguf/blob/main/OpenELM-270M.Q3_K_M.gguf) | Q3_K_M | 0.14GB | | [OpenELM-270M.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/apple_-_OpenELM-270M-gguf/blob/main/OpenELM-270M.Q3_K_L.gguf) | Q3_K_L | 0.15GB | | [OpenELM-270M.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/apple_-_OpenELM-270M-gguf/blob/main/OpenELM-270M.IQ4_XS.gguf) | IQ4_XS | 0.15GB | | [OpenELM-270M.Q4_0.gguf](https://huggingface.co/RichardErkhov/apple_-_OpenELM-270M-gguf/blob/main/OpenELM-270M.Q4_0.gguf) | Q4_0 | 0.15GB | | [OpenELM-270M.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/apple_-_OpenELM-270M-gguf/blob/main/OpenELM-270M.IQ4_NL.gguf) | IQ4_NL | 0.15GB | | 
[OpenELM-270M.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/apple_-_OpenELM-270M-gguf/blob/main/OpenELM-270M.Q4_K_S.gguf) | Q4_K_S | 0.15GB | | [OpenELM-270M.Q4_K.gguf](https://huggingface.co/RichardErkhov/apple_-_OpenELM-270M-gguf/blob/main/OpenELM-270M.Q4_K.gguf) | Q4_K | 0.16GB | | [OpenELM-270M.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/apple_-_OpenELM-270M-gguf/blob/main/OpenELM-270M.Q4_K_M.gguf) | Q4_K_M | 0.16GB | | [OpenELM-270M.Q4_1.gguf](https://huggingface.co/RichardErkhov/apple_-_OpenELM-270M-gguf/blob/main/OpenELM-270M.Q4_1.gguf) | Q4_1 | 0.17GB | | [OpenELM-270M.Q5_0.gguf](https://huggingface.co/RichardErkhov/apple_-_OpenELM-270M-gguf/blob/main/OpenELM-270M.Q5_0.gguf) | Q5_0 | 0.18GB | | [OpenELM-270M.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/apple_-_OpenELM-270M-gguf/blob/main/OpenELM-270M.Q5_K_S.gguf) | Q5_K_S | 0.18GB | | [OpenELM-270M.Q5_K.gguf](https://huggingface.co/RichardErkhov/apple_-_OpenELM-270M-gguf/blob/main/OpenELM-270M.Q5_K.gguf) | Q5_K | 0.19GB | | [OpenELM-270M.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/apple_-_OpenELM-270M-gguf/blob/main/OpenELM-270M.Q5_K_M.gguf) | Q5_K_M | 0.19GB | | [OpenELM-270M.Q5_1.gguf](https://huggingface.co/RichardErkhov/apple_-_OpenELM-270M-gguf/blob/main/OpenELM-270M.Q5_1.gguf) | Q5_1 | 0.19GB | | [OpenELM-270M.Q6_K.gguf](https://huggingface.co/RichardErkhov/apple_-_OpenELM-270M-gguf/blob/main/OpenELM-270M.Q6_K.gguf) | Q6_K | 0.21GB | | [OpenELM-270M.Q8_0.gguf](https://huggingface.co/RichardErkhov/apple_-_OpenELM-270M-gguf/blob/main/OpenELM-270M.Q8_0.gguf) | Q8_0 | 0.27GB | Original model description: --- license: other license_name: apple-sample-code-license license_link: LICENSE --- # OpenELM *Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari* We introduce **OpenELM**, a family of **Open** **E**fficient **L**anguage **M**odels. 
OpenELM uses a layer-wise scaling strategy to efficiently allocate parameters within each layer of the transformer model, leading to enhanced accuracy. We pretrained OpenELM models using the [CoreNet](https://github.com/apple/corenet) library. We release both pretrained and instruction-tuned models with 270M, 450M, 1.1B and 3B parameters. Our pre-training dataset contains RefinedWeb, deduplicated PILE, a subset of RedPajama, and a subset of Dolma v1.6, totaling approximately 1.8 trillion tokens. Please check license agreements and terms of these datasets before using them. ## Usage We have provided an example function to generate output from OpenELM models loaded via [HuggingFace Hub](https://huggingface.co/docs/hub/) in `generate_openelm.py`. You can try the model by running the following command: ``` python generate_openelm.py --model apple/OpenELM-270M --hf_access_token [HF_ACCESS_TOKEN] --prompt 'Once upon a time there was' --generate_kwargs repetition_penalty=1.2 ``` Please refer to [this link](https://huggingface.co/docs/hub/security-tokens) to obtain your Hugging Face access token. Additional arguments to the Hugging Face `generate` function can be passed via `generate_kwargs`. 
As an example, to speed up inference, you can try [lookup token speculative generation](https://huggingface.co/docs/transformers/generation_strategies) by passing the `prompt_lookup_num_tokens` argument as follows: ``` python generate_openelm.py --model apple/OpenELM-270M --hf_access_token [HF_ACCESS_TOKEN] --prompt 'Once upon a time there was' --generate_kwargs repetition_penalty=1.2 prompt_lookup_num_tokens=10 ``` Alternatively, try model-wise speculative generation with an [assistive model](https://huggingface.co/blog/assisted-generation) by passing a smaller model through the `assistant_model` argument, for example: ``` python generate_openelm.py --model apple/OpenELM-270M --hf_access_token [HF_ACCESS_TOKEN] --prompt 'Once upon a time there was' --generate_kwargs repetition_penalty=1.2 --assistant_model [SMALLER_MODEL] ``` ## Main Results ### Zero-Shot | **Model Size** | **ARC-c** | **ARC-e** | **BoolQ** | **HellaSwag** | **PIQA** | **SciQ** | **WinoGrande** | **Average** | |-----------------------------------------------------------------------------|-----------|-----------|-----------|---------------|-----------|-----------|----------------|-------------| | [OpenELM-270M](https://huggingface.co/apple/OpenELM-270M) | 26.45 | 45.08 | **53.98** | 46.71 | 69.75 | **84.70** | **53.91** | 54.37 | | [OpenELM-270M-Instruct](https://huggingface.co/apple/OpenELM-270M-Instruct) | **30.55** | **46.68** | 48.56 | **52.07** | **70.78** | 84.40 | 52.72 | **55.11** | | [OpenELM-450M](https://huggingface.co/apple/OpenELM-450M) | 27.56 | 48.06 | 55.78 | 53.97 | 72.31 | 87.20 | 58.01 | 57.56 | | [OpenELM-450M-Instruct](https://huggingface.co/apple/OpenELM-450M-Instruct) | **30.38** | **50.00** | **60.37** | **59.34** | **72.63** | **88.00** | **58.96** | **59.95** | | [OpenELM-1_1B](https://huggingface.co/apple/OpenELM-1_1B) | 32.34 | **55.43** | 63.58 | 64.81 | **75.57** | **90.60** | 61.72 | 63.44 | | 
[OpenELM-1_1B-Instruct](https://huggingface.co/apple/OpenELM-1_1B-Instruct) | **37.97** | 52.23 | **70.00** | **71.20** | 75.03 | 89.30 | **62.75** | **65.50** | | [OpenELM-3B](https://huggingface.co/apple/OpenELM-3B) | 35.58 | 59.89 | 67.40 | 72.44 | 78.24 | **92.70** | 65.51 | 67.39 | | [OpenELM-3B-Instruct](https://huggingface.co/apple/OpenELM-3B-Instruct) | **39.42** | **61.74** | **68.17** | **76.36** | **79.00** | 92.50 | **66.85** | **69.15** | ### LLM360 | **Model Size** | **ARC-c** | **HellaSwag** | **MMLU** | **TruthfulQA** | **WinoGrande** | **Average** | |-----------------------------------------------------------------------------|-----------|---------------|-----------|----------------|----------------|-------------| | [OpenELM-270M](https://huggingface.co/apple/OpenELM-270M) | 27.65 | 47.15 | 25.72 | **39.24** | **53.83** | 38.72 | | [OpenELM-270M-Instruct](https://huggingface.co/apple/OpenELM-270M-Instruct) | **32.51** | **51.58** | **26.70** | 38.72 | 53.20 | **40.54** | | [OpenELM-450M](https://huggingface.co/apple/OpenELM-450M) | 30.20 | 53.86 | **26.01** | 40.18 | 57.22 | 41.50 | | [OpenELM-450M-Instruct](https://huggingface.co/apple/OpenELM-450M-Instruct) | **33.53** | **59.31** | 25.41 | **40.48** | **58.33** | **43.41** | | [OpenELM-1_1B](https://huggingface.co/apple/OpenELM-1_1B) | 36.69 | 65.71 | **27.05** | 36.98 | 63.22 | 45.93 | | [OpenELM-1_1B-Instruct](https://huggingface.co/apple/OpenELM-1_1B-Instruct) | **41.55** | **71.83** | 25.65 | **45.95** | **64.72** | **49.94** | | [OpenELM-3B](https://huggingface.co/apple/OpenELM-3B) | 42.24 | 73.28 | **26.76** | 34.98 | 67.25 | 48.90 | | [OpenELM-3B-Instruct](https://huggingface.co/apple/OpenELM-3B-Instruct) | **47.70** | **76.87** | 24.80 | **38.76** | **67.96** | **51.22** | ### OpenLLM Leaderboard | **Model Size** | **ARC-c** | **CrowS-Pairs** | **HellaSwag** | **MMLU** | **PIQA** | **RACE** | **TruthfulQA** | **WinoGrande** | **Average** | 
|-----------------------------------------------------------------------------|-----------|-----------------|---------------|-----------|-----------|-----------|----------------|----------------|-------------| | [OpenELM-270M](https://huggingface.co/apple/OpenELM-270M) | 27.65 | **66.79** | 47.15 | 25.72 | 69.75 | 30.91 | **39.24** | **53.83** | 45.13 | | [OpenELM-270M-Instruct](https://huggingface.co/apple/OpenELM-270M-Instruct) | **32.51** | 66.01 | **51.58** | **26.70** | **70.78** | 33.78 | 38.72 | 53.20 | **46.66** | | [OpenELM-450M](https://huggingface.co/apple/OpenELM-450M) | 30.20 | **68.63** | 53.86 | **26.01** | 72.31 | 33.11 | 40.18 | 57.22 | 47.69 | | [OpenELM-450M-Instruct](https://huggingface.co/apple/OpenELM-450M-Instruct) | **33.53** | 67.44 | **59.31** | 25.41 | **72.63** | **36.84** | **40.48** | **58.33** | **49.25** | | [OpenELM-1_1B](https://huggingface.co/apple/OpenELM-1_1B) | 36.69 | **71.74** | 65.71 | **27.05** | **75.57** | 36.46 | 36.98 | 63.22 | 51.68 | | [OpenELM-1_1B-Instruct](https://huggingface.co/apple/OpenELM-1_1B-Instruct) | **41.55** | 71.02 | **71.83** | 25.65 | 75.03 | **39.43** | **45.95** | **64.72** | **54.40** | | [OpenELM-3B](https://huggingface.co/apple/OpenELM-3B) | 42.24 | **73.29** | 73.28 | **26.76** | 78.24 | **38.76** | 34.98 | 67.25 | 54.35 | | [OpenELM-3B-Instruct](https://huggingface.co/apple/OpenELM-3B-Instruct) | **47.70** | 72.33 | **76.87** | 24.80 | **79.00** | 38.47 | **38.76** | **67.96** | **55.73** | See the technical report for more results and comparison. ## Evaluation ### Setup Install the following dependencies: ```bash # install public lm-eval-harness harness_repo="public-lm-eval-harness" git clone https://github.com/EleutherAI/lm-evaluation-harness ${harness_repo} cd ${harness_repo} # use main branch on 03-15-2024, SHA is dc90fec git checkout dc90fec pip install -e . cd .. 
# 66d6242 is the main branch on 2024-04-01 pip install datasets@git+https://github.com/huggingface/datasets.git@66d6242 pip install tokenizers>=0.15.2 transformers>=4.38.2 sentencepiece>=0.2.0 ``` ### Evaluate OpenELM ```bash # OpenELM-270M hf_model=apple/OpenELM-270M # this flag is needed because lm-eval-harness set add_bos_token to False by default, but OpenELM uses LLaMA tokenizer which requires add_bos_token to be True tokenizer=meta-llama/Llama-2-7b-hf add_bos_token=True batch_size=1 mkdir lm_eval_output shot=0 task=arc_challenge,arc_easy,boolq,hellaswag,piqa,race,winogrande,sciq,truthfulqa_mc2 lm_eval --model hf \ --model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \ --tasks ${task} \ --device cuda:0 \ --num_fewshot ${shot} \ --output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \ --batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log shot=5 task=mmlu,winogrande lm_eval --model hf \ --model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \ --tasks ${task} \ --device cuda:0 \ --num_fewshot ${shot} \ --output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \ --batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log shot=25 task=arc_challenge,crows_pairs_english lm_eval --model hf \ --model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \ --tasks ${task} \ --device cuda:0 \ --num_fewshot ${shot} \ --output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \ --batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log shot=10 task=hellaswag lm_eval --model hf \ --model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \ --tasks ${task} \ 
--device cuda:0 \ --num_fewshot ${shot} \ --output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \ --batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log ``` ## Bias, Risks, and Limitations The release of OpenELM models aims to empower and enrich the open research community by providing access to state-of-the-art language models. Trained on publicly available datasets, these models are made available without any safety guarantees. Consequently, there exists the possibility of these models producing outputs that are inaccurate, harmful, biased, or objectionable in response to user prompts. Thus, it is imperative for users and developers to undertake thorough safety testing and implement appropriate filtering mechanisms tailored to their specific requirements. ## Citation If you find our work useful, please cite: ```BibTex @article{mehtaOpenELMEfficientLanguage2024, title = {{OpenELM}: {An} {Efficient} {Language} {Model} {Family} with {Open} {Training} and {Inference} {Framework}}, shorttitle = {{OpenELM}}, url = {https://arxiv.org/abs/2404.14619v1}, language = {en}, urldate = {2024-04-24}, journal = {arXiv.org}, author = {Mehta, Sachin and Sekhavat, Mohammad Hossein and Cao, Qingqing and Horton, Maxwell and Jin, Yanzi and Sun, Chenfan and Mirzadeh, Iman and Najibi, Mahyar and Belenko, Dmitry and Zatloukal, Peter and Rastegari, Mohammad}, month = apr, year = {2024}, } @inproceedings{mehta2022cvnets, author = {Mehta, Sachin and Abdolhosseini, Farzad and Rastegari, Mohammad}, title = {CVNets: High Performance Library for Computer Vision}, year = {2022}, booktitle = {Proceedings of the 30th ACM International Conference on Multimedia}, series = {MM '22} } ```
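The layer-wise scaling strategy described at the top of this card allocates parameters non-uniformly across transformer layers rather than giving every layer the same width. A toy numerical sketch of the idea — linear interpolation of per-layer attention heads and FFN multiplier from the first to the last layer — is shown below; the layer count and minimum/maximum values are made up and are not OpenELM's actual hyperparameters:

```python
# Toy illustration of layer-wise scaling: interpolate per-layer widths
# linearly from the first transformer layer to the last. All numbers
# here are hypothetical, not OpenELM's real configuration.

def layerwise_scaling(num_layers, min_heads, max_heads, min_ffn_mult, max_ffn_mult):
    configs = []
    for i in range(num_layers):
        t = i / (num_layers - 1)  # 0.0 at the first layer, 1.0 at the last
        heads = round(min_heads + t * (max_heads - min_heads))
        ffn_mult = min_ffn_mult + t * (max_ffn_mult - min_ffn_mult)
        configs.append({"layer": i, "heads": heads, "ffn_mult": round(ffn_mult, 2)})
    return configs

for cfg in layerwise_scaling(num_layers=4, min_heads=4, max_heads=8,
                             min_ffn_mult=1.0, max_ffn_mult=4.0):
    print(cfg)
```

Early layers end up narrower and deeper layers wider, which is how the card's "efficiently allocate parameters within each layer" should be read; see the OpenELM paper for the actual scaling rule and values.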
[ "SCIQ" ]
huoxu/bge-large-en-v1.5-Q8_0-GGUF
huoxu
feature-extraction
[ "sentence-transformers", "gguf", "feature-extraction", "sentence-similarity", "transformers", "mteb", "llama-cpp", "gguf-my-repo", "en", "base_model:BAAI/bge-large-en-v1.5", "base_model:quantized:BAAI/bge-large-en-v1.5", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2024-07-26T02:21:35Z
2024-07-26T02:21:39+00:00
285
0
--- base_model: BAAI/bge-large-en-v1.5 language: - en license: mit tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers - mteb - llama-cpp - gguf-my-repo model-index: - name: bge-large-en-v1.5 results: - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en) type: mteb/amazon_counterfactual config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 75.8507462686567 - type: ap value: 38.566457320228245 - type: f1 value: 69.69386648043475 - task: type: Classification dataset: name: MTEB AmazonPolarityClassification type: mteb/amazon_polarity config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 92.416675 - type: ap value: 89.1928861155922 - type: f1 value: 92.39477019574215 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (en) type: mteb/amazon_reviews_multi config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 48.175999999999995 - type: f1 value: 47.80712792870253 - task: type: Retrieval dataset: name: MTEB ArguAna type: arguana config: default split: test revision: None metrics: - type: map_at_1 value: 40.184999999999995 - type: map_at_10 value: 55.654 - type: map_at_100 value: 56.25 - type: map_at_1000 value: 56.255 - type: map_at_3 value: 51.742999999999995 - type: map_at_5 value: 54.129000000000005 - type: mrr_at_1 value: 40.967 - type: mrr_at_10 value: 55.96 - type: mrr_at_100 value: 56.54900000000001 - type: mrr_at_1000 value: 56.554 - type: mrr_at_3 value: 51.980000000000004 - type: mrr_at_5 value: 54.44 - type: ndcg_at_1 value: 40.184999999999995 - type: ndcg_at_10 value: 63.542 - type: ndcg_at_100 value: 65.96499999999999 - type: ndcg_at_1000 value: 66.08699999999999 - type: ndcg_at_3 value: 55.582 - type: ndcg_at_5 value: 59.855000000000004 - type: precision_at_1 value: 40.184999999999995 - type: 
precision_at_10 value: 8.841000000000001 - type: precision_at_100 value: 0.987 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 22.238 - type: precision_at_5 value: 15.405 - type: recall_at_1 value: 40.184999999999995 - type: recall_at_10 value: 88.407 - type: recall_at_100 value: 98.72 - type: recall_at_1000 value: 99.644 - type: recall_at_3 value: 66.714 - type: recall_at_5 value: 77.027 - task: type: Clustering dataset: name: MTEB ArxivClusteringP2P type: mteb/arxiv-clustering-p2p config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 48.567077926750066 - task: type: Clustering dataset: name: MTEB ArxivClusteringS2S type: mteb/arxiv-clustering-s2s config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 43.19453389182364 - task: type: Reranking dataset: name: MTEB AskUbuntuDupQuestions type: mteb/askubuntudupquestions-reranking config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 64.46555939623092 - type: mrr value: 77.82361605768807 - task: type: STS dataset: name: MTEB BIOSSES type: mteb/biosses-sts config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 84.9554128814735 - type: cos_sim_spearman value: 84.65373612172036 - type: euclidean_pearson value: 83.2905059954138 - type: euclidean_spearman value: 84.52240782811128 - type: manhattan_pearson value: 82.99533802997436 - type: manhattan_spearman value: 84.20673798475734 - task: type: Classification dataset: name: MTEB Banking77Classification type: mteb/banking77 config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 87.78896103896103 - type: f1 value: 87.77189310964883 - task: type: Clustering dataset: name: MTEB BiorxivClusteringP2P type: mteb/biorxiv-clustering-p2p config: default split: test revision: 
65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 39.714538337650495 - task: type: Clustering dataset: name: MTEB BiorxivClusteringS2S type: mteb/biorxiv-clustering-s2s config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 36.90108349284447 - task: type: Retrieval dataset: name: MTEB CQADupstackAndroidRetrieval type: BeIR/cqadupstack config: default split: test revision: None metrics: - type: map_at_1 value: 32.795 - type: map_at_10 value: 43.669000000000004 - type: map_at_100 value: 45.151 - type: map_at_1000 value: 45.278 - type: map_at_3 value: 40.006 - type: map_at_5 value: 42.059999999999995 - type: mrr_at_1 value: 39.771 - type: mrr_at_10 value: 49.826 - type: mrr_at_100 value: 50.504000000000005 - type: mrr_at_1000 value: 50.549 - type: mrr_at_3 value: 47.115 - type: mrr_at_5 value: 48.832 - type: ndcg_at_1 value: 39.771 - type: ndcg_at_10 value: 50.217999999999996 - type: ndcg_at_100 value: 55.454 - type: ndcg_at_1000 value: 57.37 - type: ndcg_at_3 value: 44.885000000000005 - type: ndcg_at_5 value: 47.419 - type: precision_at_1 value: 39.771 - type: precision_at_10 value: 9.642000000000001 - type: precision_at_100 value: 1.538 - type: precision_at_1000 value: 0.198 - type: precision_at_3 value: 21.268 - type: precision_at_5 value: 15.536 - type: recall_at_1 value: 32.795 - type: recall_at_10 value: 62.580999999999996 - type: recall_at_100 value: 84.438 - type: recall_at_1000 value: 96.492 - type: recall_at_3 value: 47.071000000000005 - type: recall_at_5 value: 54.079 - type: map_at_1 value: 32.671 - type: map_at_10 value: 43.334 - type: map_at_100 value: 44.566 - type: map_at_1000 value: 44.702999999999996 - type: map_at_3 value: 40.343 - type: map_at_5 value: 41.983 - type: mrr_at_1 value: 40.764 - type: mrr_at_10 value: 49.382 - type: mrr_at_100 value: 49.988 - type: mrr_at_1000 value: 50.03300000000001 - type: mrr_at_3 value: 47.293 - type: mrr_at_5 value: 48.51 - type: 
ndcg_at_1 value: 40.764 - type: ndcg_at_10 value: 49.039 - type: ndcg_at_100 value: 53.259 - type: ndcg_at_1000 value: 55.253 - type: ndcg_at_3 value: 45.091 - type: ndcg_at_5 value: 46.839999999999996 - type: precision_at_1 value: 40.764 - type: precision_at_10 value: 9.191 - type: precision_at_100 value: 1.476 - type: precision_at_1000 value: 0.19499999999999998 - type: precision_at_3 value: 21.72 - type: precision_at_5 value: 15.299 - type: recall_at_1 value: 32.671 - type: recall_at_10 value: 58.816 - type: recall_at_100 value: 76.654 - type: recall_at_1000 value: 89.05999999999999 - type: recall_at_3 value: 46.743 - type: recall_at_5 value: 51.783 - type: map_at_1 value: 40.328 - type: map_at_10 value: 53.32599999999999 - type: map_at_100 value: 54.37499999999999 - type: map_at_1000 value: 54.429 - type: map_at_3 value: 49.902 - type: map_at_5 value: 52.002 - type: mrr_at_1 value: 46.332 - type: mrr_at_10 value: 56.858 - type: mrr_at_100 value: 57.522 - type: mrr_at_1000 value: 57.54899999999999 - type: mrr_at_3 value: 54.472 - type: mrr_at_5 value: 55.996 - type: ndcg_at_1 value: 46.332 - type: ndcg_at_10 value: 59.313 - type: ndcg_at_100 value: 63.266999999999996 - type: ndcg_at_1000 value: 64.36 - type: ndcg_at_3 value: 53.815000000000005 - type: ndcg_at_5 value: 56.814 - type: precision_at_1 value: 46.332 - type: precision_at_10 value: 9.53 - type: precision_at_100 value: 1.238 - type: precision_at_1000 value: 0.13699999999999998 - type: precision_at_3 value: 24.054000000000002 - type: precision_at_5 value: 16.589000000000002 - type: recall_at_1 value: 40.328 - type: recall_at_10 value: 73.421 - type: recall_at_100 value: 90.059 - type: recall_at_1000 value: 97.81 - type: recall_at_3 value: 59.009 - type: recall_at_5 value: 66.352 - type: map_at_1 value: 27.424 - type: map_at_10 value: 36.332 - type: map_at_100 value: 37.347 - type: map_at_1000 value: 37.422 - type: map_at_3 value: 33.743 - type: map_at_5 value: 35.176 - type: mrr_at_1 value: 
29.153000000000002
    - type: mrr_at_10
      value: 38.233
    - type: mrr_at_100
      value: 39.109
    - type: mrr_at_1000
      value: 39.164
    - type: mrr_at_3
      value: 35.876000000000005
    - type: mrr_at_5
      value: 37.169000000000004
    - type: ndcg_at_1
      value: 29.153000000000002
    - type: ndcg_at_10
      value: 41.439
    - type: ndcg_at_100
      value: 46.42
    - type: ndcg_at_1000
      value: 48.242000000000004
    - type: ndcg_at_3
      value: 36.362
    - type: ndcg_at_5
      value: 38.743
    - type: precision_at_1
      value: 29.153000000000002
    - type: precision_at_10
      value: 6.315999999999999
    - type: precision_at_100
      value: 0.927
    - type: precision_at_1000
      value: 0.11199999999999999
    - type: precision_at_3
      value: 15.443000000000001
    - type: precision_at_5
      value: 10.644
    - type: recall_at_1
      value: 27.424
    - type: recall_at_10
      value: 55.364000000000004
    - type: recall_at_100
      value: 78.211
    - type: recall_at_1000
      value: 91.74600000000001
    - type: recall_at_3
      value: 41.379
    - type: recall_at_5
      value: 47.14
    - type: map_at_1
      value: 19.601
    - type: map_at_10
      value: 27.826
    - type: map_at_100
      value: 29.017
    - type: map_at_1000
      value: 29.137
    - type: map_at_3
      value: 25.125999999999998
    - type: map_at_5
      value: 26.765
    - type: mrr_at_1
      value: 24.005000000000003
    - type: mrr_at_10
      value: 32.716
    - type: mrr_at_100
      value: 33.631
    - type: mrr_at_1000
      value: 33.694
    - type: mrr_at_3
      value: 29.934
    - type: mrr_at_5
      value: 31.630999999999997
    - type: ndcg_at_1
      value: 24.005000000000003
    - type: ndcg_at_10
      value: 33.158
    - type: ndcg_at_100
      value: 38.739000000000004
    - type: ndcg_at_1000
      value: 41.495
    - type: ndcg_at_3
      value: 28.185
    - type: ndcg_at_5
      value: 30.796
    - type: precision_at_1
      value: 24.005000000000003
    - type: precision_at_10
      value: 5.908
    - type: precision_at_100
      value: 1.005
    - type: precision_at_1000
      value: 0.13899999999999998
    - type: precision_at_3
      value: 13.391
    - type: precision_at_5
      value: 9.876
    - type: recall_at_1
      value: 19.601
    - type: recall_at_10
      value: 44.746
    - type: recall_at_100
      value: 68.82300000000001
    - type: recall_at_1000
      value: 88.215
    - type: recall_at_3
      value: 31.239
    - type: recall_at_5
      value: 37.695
    - type: map_at_1
      value: 30.130000000000003
    - type: map_at_10
      value: 40.96
    - type: map_at_100
      value: 42.282
    - type: map_at_1000
      value: 42.392
    - type: map_at_3
      value: 37.889
    - type: map_at_5
      value: 39.661
    - type: mrr_at_1
      value: 36.958999999999996
    - type: mrr_at_10
      value: 46.835
    - type: mrr_at_100
      value: 47.644
    - type: mrr_at_1000
      value: 47.688
    - type: mrr_at_3
      value: 44.562000000000005
    - type: mrr_at_5
      value: 45.938
    - type: ndcg_at_1
      value: 36.958999999999996
    - type: ndcg_at_10
      value: 47.06
    - type: ndcg_at_100
      value: 52.345
    - type: ndcg_at_1000
      value: 54.35
    - type: ndcg_at_3
      value: 42.301
    - type: ndcg_at_5
      value: 44.635999999999996
    - type: precision_at_1
      value: 36.958999999999996
    - type: precision_at_10
      value: 8.479000000000001
    - type: precision_at_100
      value: 1.284
    - type: precision_at_1000
      value: 0.163
    - type: precision_at_3
      value: 20.244
    - type: precision_at_5
      value: 14.224999999999998
    - type: recall_at_1
      value: 30.130000000000003
    - type: recall_at_10
      value: 59.27
    - type: recall_at_100
      value: 81.195
    - type: recall_at_1000
      value: 94.21199999999999
    - type: recall_at_3
      value: 45.885
    - type: recall_at_5
      value: 52.016
    - type: map_at_1
      value: 26.169999999999998
    - type: map_at_10
      value: 36.451
    - type: map_at_100
      value: 37.791000000000004
    - type: map_at_1000
      value: 37.897
    - type: map_at_3
      value: 33.109
    - type: map_at_5
      value: 34.937000000000005
    - type: mrr_at_1
      value: 32.877
    - type: mrr_at_10
      value: 42.368
    - type: mrr_at_100
      value: 43.201
    - type: mrr_at_1000
      value: 43.259
    - type: mrr_at_3
      value: 39.763999999999996
    - type: mrr_at_5
      value: 41.260000000000005
    - type: ndcg_at_1
      value: 32.877
    - type: ndcg_at_10
      value: 42.659000000000006
    - type: ndcg_at_100
      value: 48.161
    - type: ndcg_at_1000
      value: 50.345
    - type: ndcg_at_3
      value: 37.302
    - type: ndcg_at_5
      value: 39.722
    - type: precision_at_1
      value: 32.877
    - type: precision_at_10
      value: 7.9
    - type: precision_at_100
      value: 1.236
    - type: precision_at_1000
      value: 0.158
    - type: precision_at_3
      value: 17.846
    - type: precision_at_5
      value: 12.9
    - type: recall_at_1
      value: 26.169999999999998
    - type: recall_at_10
      value: 55.35
    - type: recall_at_100
      value: 78.755
    - type: recall_at_1000
      value: 93.518
    - type: recall_at_3
      value: 40.176
    - type: recall_at_5
      value: 46.589000000000006
    - type: map_at_1
      value: 27.15516666666667
    - type: map_at_10
      value: 36.65741666666667
    - type: map_at_100
      value: 37.84991666666666
    - type: map_at_1000
      value: 37.96316666666667
    - type: map_at_3
      value: 33.74974999999999
    - type: map_at_5
      value: 35.3765
    - type: mrr_at_1
      value: 32.08233333333334
    - type: mrr_at_10
      value: 41.033833333333334
    - type: mrr_at_100
      value: 41.84524999999999
    - type: mrr_at_1000
      value: 41.89983333333333
    - type: mrr_at_3
      value: 38.62008333333333
    - type: mrr_at_5
      value: 40.03441666666666
    - type: ndcg_at_1
      value: 32.08233333333334
    - type: ndcg_at_10
      value: 42.229
    - type: ndcg_at_100
      value: 47.26716666666667
    - type: ndcg_at_1000
      value: 49.43466666666667
    - type: ndcg_at_3
      value: 37.36408333333333
    - type: ndcg_at_5
      value: 39.6715
    - type: precision_at_1
      value: 32.08233333333334
    - type: precision_at_10
      value: 7.382583333333334
    - type: precision_at_100
      value: 1.16625
    - type: precision_at_1000
      value: 0.15408333333333332
    - type: precision_at_3
      value: 17.218
    - type: precision_at_5
      value: 12.21875
    - type: recall_at_1
      value: 27.15516666666667
    - type: recall_at_10
      value: 54.36683333333333
    - type: recall_at_100
      value: 76.37183333333333
    - type: recall_at_1000
      value: 91.26183333333333
    - type: recall_at_3
      value: 40.769916666666674
    - type: recall_at_5
      value: 46.702333333333335
    - type: map_at_1
      value: 25.749
    - type: map_at_10
      value: 33.001999999999995
    - type: map_at_100
      value: 33.891
    - type: map_at_1000
      value: 33.993
    - type: map_at_3
      value: 30.703999999999997
    - type: map_at_5
      value: 31.959
    - type: mrr_at_1
      value: 28.834
    - type: mrr_at_10
      value: 35.955
    - type: mrr_at_100
      value: 36.709
    - type: mrr_at_1000
      value: 36.779
    - type: mrr_at_3
      value: 33.947
    - type: mrr_at_5
      value: 35.089
    - type: ndcg_at_1
      value: 28.834
    - type: ndcg_at_10
      value: 37.329
    - type: ndcg_at_100
      value: 41.79
    - type: ndcg_at_1000
      value: 44.169000000000004
    - type: ndcg_at_3
      value: 33.184999999999995
    - type: ndcg_at_5
      value: 35.107
    - type: precision_at_1
      value: 28.834
    - type: precision_at_10
      value: 5.7669999999999995
    - type: precision_at_100
      value: 0.876
    - type: precision_at_1000
      value: 0.11399999999999999
    - type: precision_at_3
      value: 14.213000000000001
    - type: precision_at_5
      value: 9.754999999999999
    - type: recall_at_1
      value: 25.749
    - type: recall_at_10
      value: 47.791
    - type: recall_at_100
      value: 68.255
    - type: recall_at_1000
      value: 85.749
    - type: recall_at_3
      value: 36.199
    - type: recall_at_5
      value: 41.071999999999996
    - type: map_at_1
      value: 17.777
    - type: map_at_10
      value: 25.201
    - type: map_at_100
      value: 26.423999999999996
    - type: map_at_1000
      value: 26.544
    - type: map_at_3
      value: 22.869
    - type: map_at_5
      value: 24.023
    - type: mrr_at_1
      value: 21.473
    - type: mrr_at_10
      value: 29.12
    - type: mrr_at_100
      value: 30.144
    - type: mrr_at_1000
      value: 30.215999999999998
    - type: mrr_at_3
      value: 26.933
    - type: mrr_at_5
      value: 28.051
    - type: ndcg_at_1
      value: 21.473
    - type: ndcg_at_10
      value: 30.003
    - type: ndcg_at_100
      value: 35.766
    - type: ndcg_at_1000
      value: 38.501000000000005
    - type: ndcg_at_3
      value: 25.773000000000003
    - type: ndcg_at_5
      value: 27.462999999999997
    - type: precision_at_1
      value: 21.473
    - type: precision_at_10
      value: 5.482
    - type: precision_at_100
      value: 0.975
    - type: precision_at_1000
      value: 0.13799999999999998
    - type: precision_at_3
      value: 12.205
    - type: precision_at_5
      value: 8.692
    - type: recall_at_1
      value: 17.777
    - type: recall_at_10
      value: 40.582
    - type: recall_at_100
      value: 66.305
    - type: recall_at_1000
      value: 85.636
    - type: recall_at_3
      value: 28.687
    - type: recall_at_5
      value: 33.089
    - type: map_at_1
      value: 26.677
    - type: map_at_10
      value: 36.309000000000005
    - type: map_at_100
      value: 37.403999999999996
    - type: map_at_1000
      value: 37.496
    - type: map_at_3
      value: 33.382
    - type: map_at_5
      value: 34.98
    - type: mrr_at_1
      value: 31.343
    - type: mrr_at_10
      value: 40.549
    - type: mrr_at_100
      value: 41.342
    - type: mrr_at_1000
      value: 41.397
    - type: mrr_at_3
      value: 38.029
    - type: mrr_at_5
      value: 39.451
    - type: ndcg_at_1
      value: 31.343
    - type: ndcg_at_10
      value: 42.1
    - type: ndcg_at_100
      value: 47.089999999999996
    - type: ndcg_at_1000
      value: 49.222
    - type: ndcg_at_3
      value: 36.836999999999996
    - type: ndcg_at_5
      value: 39.21
    - type: precision_at_1
      value: 31.343
    - type: precision_at_10
      value: 7.164
    - type: precision_at_100
      value: 1.0959999999999999
    - type: precision_at_1000
      value: 0.13899999999999998
    - type: precision_at_3
      value: 16.915
    - type: precision_at_5
      value: 11.940000000000001
    - type: recall_at_1
      value: 26.677
    - type: recall_at_10
      value: 55.54599999999999
    - type: recall_at_100
      value: 77.094
    - type: recall_at_1000
      value: 92.01
    - type: recall_at_3
      value: 41.191
    - type: recall_at_5
      value: 47.006
    - type: map_at_1
      value: 24.501
    - type: map_at_10
      value: 33.102
    - type: map_at_100
      value: 34.676
    - type: map_at_1000
      value: 34.888000000000005
    - type: map_at_3
      value: 29.944
    - type: map_at_5
      value: 31.613999999999997
    - type: mrr_at_1
      value: 29.447000000000003
    - type: mrr_at_10
      value: 37.996
    - type: mrr_at_100
      value: 38.946
    - type: mrr_at_1000
      value: 38.995000000000005
    - type: mrr_at_3
      value: 35.079
    - type: mrr_at_5
      value: 36.69
    - type: ndcg_at_1
      value: 29.447000000000003
    - type: ndcg_at_10
      value: 39.232
    - type: ndcg_at_100
      value: 45.247
    - type: ndcg_at_1000
      value: 47.613
    - type: ndcg_at_3
      value: 33.922999999999995
    - type: ndcg_at_5
      value: 36.284
    - type: precision_at_1
      value: 29.447000000000003
    - type: precision_at_10
      value: 7.648000000000001
    - type: precision_at_100
      value: 1.516
    - type: precision_at_1000
      value: 0.23900000000000002
    - type: precision_at_3
      value: 16.008
    - type: precision_at_5
      value: 11.779
    - type: recall_at_1
      value: 24.501
    - type: recall_at_10
      value: 51.18899999999999
    - type: recall_at_100
      value: 78.437
    - type: recall_at_1000
      value: 92.842
    - type: recall_at_3
      value: 35.808
    - type: recall_at_5
      value: 42.197
    - type: map_at_1
      value: 22.039
    - type: map_at_10
      value: 30.377
    - type: map_at_100
      value: 31.275
    - type: map_at_1000
      value: 31.379
    - type: map_at_3
      value: 27.98
    - type: map_at_5
      value: 29.358
    - type: mrr_at_1
      value: 24.03
    - type: mrr_at_10
      value: 32.568000000000005
    - type: mrr_at_100
      value: 33.403
    - type: mrr_at_1000
      value: 33.475
    - type: mrr_at_3
      value: 30.436999999999998
    - type: mrr_at_5
      value: 31.796000000000003
    - type: ndcg_at_1
      value: 24.03
    - type: ndcg_at_10
      value: 35.198
    - type: ndcg_at_100
      value: 39.668
    - type: ndcg_at_1000
      value: 42.296
    - type: ndcg_at_3
      value: 30.709999999999997
    - type: ndcg_at_5
      value: 33.024
    - type: precision_at_1
      value: 24.03
    - type: precision_at_10
      value: 5.564
    - type: precision_at_100
      value: 0.828
    - type: precision_at_1000
      value: 0.117
    - type: precision_at_3
      value: 13.309000000000001
    - type: precision_at_5
      value: 9.39
    - type: recall_at_1
      value: 22.039
    - type: recall_at_10
      value: 47.746
    - type: recall_at_100
      value: 68.23599999999999
    - type: recall_at_1000
      value: 87.852
    - type: recall_at_3
      value: 35.852000000000004
    - type: recall_at_5
      value: 41.410000000000004
  - task:
      type: Retrieval
    dataset:
      name: MTEB ClimateFEVER
      type: climate-fever
      config: default
      split: test
      revision: None
    metrics:
    - type: map_at_1
      value: 15.692999999999998
    - type: map_at_10
      value: 26.903
    - type: map_at_100
      value: 28.987000000000002
    - type: map_at_1000
      value: 29.176999999999996
    - type: map_at_3
      value: 22.137
    - type: map_at_5
      value: 24.758
    - type: mrr_at_1
      value: 35.57
    - type: mrr_at_10
      value: 47.821999999999996
    - type: mrr_at_100
      value: 48.608000000000004
    - type: mrr_at_1000
      value: 48.638999999999996
    - type: mrr_at_3
      value: 44.452000000000005
    - type: mrr_at_5
      value: 46.546
    - type: ndcg_at_1
      value: 35.57
    - type: ndcg_at_10
      value: 36.567
    - type: ndcg_at_100
      value: 44.085
    - type: ndcg_at_1000
      value: 47.24
    - type: ndcg_at_3
      value: 29.964000000000002
    - type: ndcg_at_5
      value: 32.511
    - type: precision_at_1
      value: 35.57
    - type: precision_at_10
      value: 11.485
    - type: precision_at_100
      value: 1.9619999999999997
    - type: precision_at_1000
      value: 0.256
    - type: precision_at_3
      value: 22.237000000000002
    - type: precision_at_5
      value: 17.471999999999998
    - type: recall_at_1
      value: 15.692999999999998
    - type: recall_at_10
      value: 43.056
    - type: recall_at_100
      value: 68.628
    - type: recall_at_1000
      value: 86.075
    - type: recall_at_3
      value: 26.918999999999997
    - type: recall_at_5
      value: 34.14
  - task:
      type: Retrieval
    dataset:
      name: MTEB DBPedia
      type: dbpedia-entity
      config: default
      split: test
      revision: None
    metrics:
    - type: map_at_1
      value: 9.53
    - type: map_at_10
      value: 20.951
    - type: map_at_100
      value: 30.136000000000003
    - type: map_at_1000
      value: 31.801000000000002
    - type: map_at_3
      value: 15.021
    - type: map_at_5
      value: 17.471999999999998
    - type: mrr_at_1
      value: 71.0
    - type: mrr_at_10
      value: 79.176
    - type: mrr_at_100
      value: 79.418
    - type: mrr_at_1000
      value: 79.426
    - type: mrr_at_3
      value: 78.125
    - type: mrr_at_5
      value: 78.61200000000001
    - type: ndcg_at_1
      value: 58.5
    - type: ndcg_at_10
      value: 44.106
    - type: ndcg_at_100
      value: 49.268
    - type: ndcg_at_1000
      value: 56.711999999999996
    - type: ndcg_at_3
      value: 48.934
    - type: ndcg_at_5
      value: 45.826
    - type: precision_at_1
      value: 71.0
    - type: precision_at_10
      value: 35.0
    - type: precision_at_100
      value: 11.360000000000001
    - type: precision_at_1000
      value: 2.046
    - type: precision_at_3
      value: 52.833
    - type: precision_at_5
      value: 44.15
    - type: recall_at_1
      value: 9.53
    - type: recall_at_10
      value: 26.811
    - type: recall_at_100
      value: 55.916999999999994
    - type: recall_at_1000
      value: 79.973
    - type: recall_at_3
      value: 16.413
    - type: recall_at_5
      value: 19.980999999999998
  - task:
      type: Classification
    dataset:
      name: MTEB EmotionClassification
      type: mteb/emotion
      config: default
      split: test
      revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
    metrics:
    - type: accuracy
      value: 51.519999999999996
    - type: f1
      value: 46.36601294761231
  - task:
      type: Retrieval
    dataset:
      name: MTEB FEVER
      type: fever
      config: default
      split: test
      revision: None
    metrics:
    - type: map_at_1
      value: 74.413
    - type: map_at_10
      value: 83.414
    - type: map_at_100
      value: 83.621
    - type: map_at_1000
      value: 83.635
    - type: map_at_3
      value: 82.337
    - type: map_at_5
      value: 83.039
    - type: mrr_at_1
      value: 80.19800000000001
    - type: mrr_at_10
      value: 87.715
    - type: mrr_at_100
      value: 87.778
    - type: mrr_at_1000
      value: 87.779
    - type: mrr_at_3
      value: 87.106
    - type: mrr_at_5
      value: 87.555
    - type: ndcg_at_1
      value: 80.19800000000001
    - type: ndcg_at_10
      value: 87.182
    - type: ndcg_at_100
      value: 87.90299999999999
    - type: ndcg_at_1000
      value: 88.143
    - type: ndcg_at_3
      value: 85.60600000000001
    - type: ndcg_at_5
      value: 86.541
    - type: precision_at_1
      value: 80.19800000000001
    - type: precision_at_10
      value: 10.531
    - type: precision_at_100
      value: 1.113
    - type: precision_at_1000
      value: 0.11499999999999999
    - type: precision_at_3
      value: 32.933
    - type: precision_at_5
      value: 20.429
    - type: recall_at_1
      value: 74.413
    - type: recall_at_10
      value: 94.363
    - type: recall_at_100
      value: 97.165
    - type: recall_at_1000
      value: 98.668
    - type: recall_at_3
      value: 90.108
    - type: recall_at_5
      value: 92.52
  - task:
      type: Retrieval
    dataset:
      name: MTEB FiQA2018
      type: fiqa
      config: default
      split: test
      revision: None
    metrics:
    - type: map_at_1
      value: 22.701
    - type: map_at_10
      value: 37.122
    - type: map_at_100
      value: 39.178000000000004
    - type: map_at_1000
      value: 39.326
    - type: map_at_3
      value: 32.971000000000004
    - type: map_at_5
      value: 35.332
    - type: mrr_at_1
      value: 44.753
    - type: mrr_at_10
      value: 53.452
    - type: mrr_at_100
      value: 54.198
    - type: mrr_at_1000
      value: 54.225
    - type: mrr_at_3
      value: 50.952
    - type: mrr_at_5
      value: 52.464
    - type: ndcg_at_1
      value: 44.753
    - type: ndcg_at_10
      value: 45.021
    - type: ndcg_at_100
      value: 52.028
    - type: ndcg_at_1000
      value: 54.596000000000004
    - type: ndcg_at_3
      value: 41.622
    - type: ndcg_at_5
      value: 42.736000000000004
    - type: precision_at_1
      value: 44.753
    - type: precision_at_10
      value: 12.284
    - type: precision_at_100
      value: 1.955
    - type: precision_at_1000
      value: 0.243
    - type: precision_at_3
      value: 27.828999999999997
    - type: precision_at_5
      value: 20.061999999999998
    - type: recall_at_1
      value: 22.701
    - type: recall_at_10
      value: 51.432
    - type: recall_at_100
      value: 77.009
    - type: recall_at_1000
      value: 92.511
    - type: recall_at_3
      value: 37.919000000000004
    - type: recall_at_5
      value: 44.131
  - task:
      type: Retrieval
    dataset:
      name: MTEB HotpotQA
      type: hotpotqa
      config: default
      split: test
      revision: None
    metrics:
    - type: map_at_1
      value: 40.189
    - type: map_at_10
      value: 66.24600000000001
    - type: map_at_100
      value: 67.098
    - type: map_at_1000
      value: 67.149
    - type: map_at_3
      value: 62.684
    - type: map_at_5
      value: 64.974
    - type: mrr_at_1
      value: 80.378
    - type: mrr_at_10
      value: 86.127
    - type: mrr_at_100
      value: 86.29299999999999
    - type: mrr_at_1000
      value: 86.297
    - type: mrr_at_3
      value: 85.31400000000001
    - type: mrr_at_5
      value: 85.858
    - type: ndcg_at_1
      value: 80.378
    - type: ndcg_at_10
      value: 74.101
    - type: ndcg_at_100
      value: 76.993
    - type: ndcg_at_1000
      value: 77.948
    - type: ndcg_at_3
      value: 69.232
    - type: ndcg_at_5
      value: 72.04599999999999
    - type: precision_at_1
      value: 80.378
    - type: precision_at_10
      value: 15.595999999999998
    - type: precision_at_100
      value: 1.7840000000000003
    - type: precision_at_1000
      value: 0.191
    - type: precision_at_3
      value: 44.884
    - type: precision_at_5
      value: 29.145
    - type: recall_at_1
      value: 40.189
    - type: recall_at_10
      value: 77.981
    - type: recall_at_100
      value: 89.21
    - type: recall_at_1000
      value: 95.48299999999999
    - type: recall_at_3
      value: 67.326
    - type: recall_at_5
      value: 72.863
  - task:
      type: Classification
    dataset:
      name: MTEB ImdbClassification
      type: mteb/imdb
      config: default
      split: test
      revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
    metrics:
    - type: accuracy
      value: 92.84599999999999
    - type: ap
      value: 89.4710787567357
    - type: f1
      value: 92.83752676932258
  - task:
      type: Retrieval
    dataset:
      name: MTEB MSMARCO
      type: msmarco
      config: default
      split: dev
      revision: None
    metrics:
    - type: map_at_1
      value: 23.132
    - type: map_at_10
      value: 35.543
    - type: map_at_100
      value: 36.702
    - type: map_at_1000
      value: 36.748999999999995
    - type: map_at_3
      value: 31.737
    - type: map_at_5
      value: 33.927
    - type: mrr_at_1
      value: 23.782
    - type: mrr_at_10
      value: 36.204
    - type: mrr_at_100
      value: 37.29
    - type: mrr_at_1000
      value: 37.330999999999996
    - type: mrr_at_3
      value: 32.458999999999996
    - type: mrr_at_5
      value: 34.631
    - type: ndcg_at_1
      value: 23.782
    - type: ndcg_at_10
      value: 42.492999999999995
    - type: ndcg_at_100
      value: 47.985
    - type: ndcg_at_1000
      value: 49.141
    - type: ndcg_at_3
      value: 34.748000000000005
    - type: ndcg_at_5
      value: 38.651
    - type: precision_at_1
      value: 23.782
    - type: precision_at_10
      value: 6.665
    - type: precision_at_100
      value: 0.941
    - type: precision_at_1000
      value: 0.104
    - type: precision_at_3
      value: 14.776
    - type: precision_at_5
      value: 10.84
    - type: recall_at_1
      value: 23.132
    - type: recall_at_10
      value: 63.794
    - type: recall_at_100
      value: 89.027
    - type: recall_at_1000
      value: 97.807
    - type: recall_at_3
      value: 42.765
    - type: recall_at_5
      value: 52.11
  - task:
      type: Classification
    dataset:
      name: MTEB MTOPDomainClassification (en)
      type: mteb/mtop_domain
      config: en
      split: test
      revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
    metrics:
    - type: accuracy
      value: 94.59188326493388
    - type: f1
      value: 94.3842594786827
  - task:
      type: Classification
    dataset:
      name: MTEB MTOPIntentClassification (en)
      type: mteb/mtop_intent
      config: en
      split: test
      revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
    metrics:
    - type: accuracy
      value: 79.49384404924761
    - type: f1
      value: 59.7580539534629
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveIntentClassification (en)
      type: mteb/amazon_massive_intent
      config: en
      split: test
      revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
    metrics:
    - type: accuracy
      value: 77.56220578345663
    - type: f1
      value: 75.27228165561478
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveScenarioClassification (en)
      type: mteb/amazon_massive_scenario
      config: en
      split: test
      revision: 7d571f92784cd94a019292a1f45445077d0ef634
    metrics:
    - type: accuracy
      value: 80.53463349024884
    - type: f1
      value: 80.4893958236536
  - task:
      type: Clustering
    dataset:
      name: MTEB MedrxivClusteringP2P
      type: mteb/medrxiv-clustering-p2p
      config: default
      split: test
      revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
    metrics:
    - type: v_measure
      value: 32.56100273484962
  - task:
      type: Clustering
    dataset:
      name: MTEB MedrxivClusteringS2S
      type: mteb/medrxiv-clustering-s2s
      config: default
      split: test
      revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
    metrics:
    - type: v_measure
      value: 31.470380028839607
  - task:
      type: Reranking
    dataset:
      name: MTEB MindSmallReranking
      type: mteb/mind_small
      config: default
      split: test
      revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
    metrics:
    - type: map
      value: 32.06102792457849
    - type: mrr
      value: 33.30709199672238
  - task:
      type: Retrieval
    dataset:
      name: MTEB NFCorpus
      type: nfcorpus
      config: default
      split: test
      revision: None
    metrics:
    - type: map_at_1
      value: 6.776999999999999
    - type: map_at_10
      value: 14.924000000000001
    - type: map_at_100
      value: 18.955
    - type: map_at_1000
      value: 20.538999999999998
    - type: map_at_3
      value: 10.982
    - type: map_at_5
      value: 12.679000000000002
    - type: mrr_at_1
      value: 47.988
    - type: mrr_at_10
      value: 57.232000000000006
    - type: mrr_at_100
      value: 57.818999999999996
    - type: mrr_at_1000
      value: 57.847
    - type: mrr_at_3
      value: 54.901999999999994
    - type: mrr_at_5
      value: 56.481
    - type: ndcg_at_1
      value: 46.594
    - type: ndcg_at_10
      value: 38.129000000000005
    - type: ndcg_at_100
      value: 35.54
    - type: ndcg_at_1000
      value: 44.172
    - type: ndcg_at_3
      value: 43.025999999999996
    - type: ndcg_at_5
      value: 41.052
    - type: precision_at_1
      value: 47.988
    - type: precision_at_10
      value: 28.111000000000004
    - type: precision_at_100
      value: 8.929
    - type: precision_at_1000
      value: 2.185
    - type: precision_at_3
      value: 40.144000000000005
    - type: precision_at_5
      value: 35.232
    - type: recall_at_1
      value: 6.776999999999999
    - type: recall_at_10
      value: 19.289
    - type: recall_at_100
      value: 36.359
    - type: recall_at_1000
      value: 67.54
    - type: recall_at_3
      value: 11.869
    - type: recall_at_5
      value: 14.999
  - task:
      type: Retrieval
    dataset:
      name: MTEB NQ
      type: nq
      config: default
      split: test
      revision: None
    metrics:
    - type: map_at_1
      value: 31.108000000000004
    - type: map_at_10
      value: 47.126000000000005
    - type: map_at_100
      value: 48.171
    - type: map_at_1000
      value: 48.199
    - type: map_at_3
      value: 42.734
    - type: map_at_5
      value: 45.362
    - type: mrr_at_1
      value: 34.936
    - type: mrr_at_10
      value: 49.571
    - type: mrr_at_100
      value: 50.345
    - type: mrr_at_1000
      value: 50.363
    - type: mrr_at_3
      value: 45.959
    - type: mrr_at_5
      value: 48.165
    - type: ndcg_at_1
      value: 34.936
    - type: ndcg_at_10
      value: 55.028999999999996
    - type: ndcg_at_100
      value: 59.244
    - type: ndcg_at_1000
      value: 59.861
    - type: ndcg_at_3
      value: 46.872
    - type: ndcg_at_5
      value: 51.217999999999996
    - type: precision_at_1
      value: 34.936
    - type: precision_at_10
      value: 9.099
    - type: precision_at_100
      value: 1.145
    - type: precision_at_1000
      value: 0.12
    - type: precision_at_3
      value: 21.456
    - type: precision_at_5
      value: 15.411
    - type: recall_at_1
      value: 31.108000000000004
    - type: recall_at_10
      value: 76.53999999999999
    - type: recall_at_100
      value: 94.39
    - type: recall_at_1000
      value: 98.947
    - type: recall_at_3
      value: 55.572
    - type: recall_at_5
      value: 65.525
  - task:
      type: Retrieval
    dataset:
      name: MTEB QuoraRetrieval
      type: quora
      config: default
      split: test
      revision: None
    metrics:
    - type: map_at_1
      value: 71.56400000000001
    - type: map_at_10
      value: 85.482
    - type: map_at_100
      value: 86.114
    - type: map_at_1000
      value: 86.13
    - type: map_at_3
      value: 82.607
    - type: map_at_5
      value: 84.405
    - type: mrr_at_1
      value: 82.42
    - type: mrr_at_10
      value: 88.304
    - type: mrr_at_100
      value: 88.399
    - type: mrr_at_1000
      value: 88.399
    - type: mrr_at_3
      value: 87.37
    - type: mrr_at_5
      value: 88.024
    - type: ndcg_at_1
      value: 82.45
    - type: ndcg_at_10
      value: 89.06500000000001
    - type: ndcg_at_100
      value: 90.232
    - type: ndcg_at_1000
      value: 90.305
    - type: ndcg_at_3
      value: 86.375
    - type: ndcg_at_5
      value: 87.85300000000001
    - type: precision_at_1
      value: 82.45
    - type: precision_at_10
      value: 13.486999999999998
    - type: precision_at_100
      value: 1.534
    - type: precision_at_1000
      value: 0.157
    - type: precision_at_3
      value: 37.813
    - type: precision_at_5
      value: 24.773999999999997
    - type: recall_at_1
      value: 71.56400000000001
    - type: recall_at_10
      value: 95.812
    - type: recall_at_100
      value: 99.7
    - type: recall_at_1000
      value: 99.979
    - type: recall_at_3
      value: 87.966
    - type: recall_at_5
      value: 92.268
  - task:
      type: Clustering
    dataset:
      name: MTEB RedditClustering
      type: mteb/reddit-clustering
      config: default
      split: test
      revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
    metrics:
    - type: v_measure
      value: 57.241876648614145
  - task:
      type: Clustering
    dataset:
      name: MTEB RedditClusteringP2P
      type: mteb/reddit-clustering-p2p
      config: default
      split: test
      revision: 282350215ef01743dc01b456c7f5241fa8937f16
    metrics:
    - type: v_measure
      value: 64.66212576446223
  - task:
      type: Retrieval
    dataset:
      name: MTEB SCIDOCS
      type: scidocs
      config: default
      split: test
      revision: None
    metrics:
    - type: map_at_1
      value: 5.308
    - type: map_at_10
      value: 13.803
    - type: map_at_100
      value: 16.176
    - type: map_at_1000
      value: 16.561
    - type: map_at_3
      value: 9.761000000000001
    - type: map_at_5
      value: 11.802
    - type: mrr_at_1
      value: 26.200000000000003
    - type: mrr_at_10
      value: 37.621
    - type: mrr_at_100
      value: 38.767
    - type: mrr_at_1000
      value: 38.815
    - type: mrr_at_3
      value: 34.117
    - type: mrr_at_5
      value: 36.107
    - type: ndcg_at_1
      value: 26.200000000000003
    - type: ndcg_at_10
      value: 22.64
    - type: ndcg_at_100
      value: 31.567
    - type: ndcg_at_1000
      value: 37.623
    - type: ndcg_at_3
      value: 21.435000000000002
    - type: ndcg_at_5
      value: 18.87
    - type: precision_at_1
      value: 26.200000000000003
    - type: precision_at_10
      value: 11.74
    - type: precision_at_100
      value: 2.465
    - type: precision_at_1000
      value: 0.391
    - type: precision_at_3
      value: 20.033
    - type: precision_at_5
      value: 16.64
    - type: recall_at_1
      value: 5.308
    - type: recall_at_10
      value: 23.794999999999998
    - type: recall_at_100
      value: 50.015
    - type: recall_at_1000
      value: 79.283
    - type: recall_at_3
      value: 12.178
    - type: recall_at_5
      value: 16.882
  - task:
      type: STS
    dataset:
      name: MTEB SICK-R
      type: mteb/sickr-sts
      config: default
      split: test
      revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
    metrics:
    - type: cos_sim_pearson
      value: 84.93231134675553
    - type: cos_sim_spearman
      value: 81.68319292603205
    - type: euclidean_pearson
      value: 81.8396814380367
    - type: euclidean_spearman
      value: 81.24641903349945
    - type: manhattan_pearson
      value: 81.84698799204274
    - type: manhattan_spearman
      value: 81.24269997904105
  - task:
      type: STS
    dataset:
      name: MTEB STS12
      type: mteb/sts12-sts
      config: default
      split: test
      revision: a0d554a64d88156834ff5ae9920b964011b16384
    metrics:
    - type: cos_sim_pearson
      value: 86.73241671587446
    - type: cos_sim_spearman
      value: 79.05091082971826
    - type: euclidean_pearson
      value: 83.91146869578044
    - type: euclidean_spearman
      value: 79.87978465370936
    - type: manhattan_pearson
      value: 83.90888338917678
    - type: manhattan_spearman
      value: 79.87482848584241
  - task:
      type: STS
    dataset:
      name: MTEB STS13
      type: mteb/sts13-sts
      config: default
      split: test
      revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
    metrics:
    - type: cos_sim_pearson
      value: 85.14970731146177
    - type: cos_sim_spearman
      value: 86.37363490084627
    - type: euclidean_pearson
      value: 83.02154218530433
    - type: euclidean_spearman
      value: 83.80258761957367
    - type: manhattan_pearson
      value: 83.01664495119347
    - type: manhattan_spearman
      value: 83.77567458007952
  - task:
      type: STS
    dataset:
      name: MTEB STS14
      type: mteb/sts14-sts
      config: default
      split: test
      revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
    metrics:
    - type: cos_sim_pearson
      value: 83.40474139886784
    - type: cos_sim_spearman
      value: 82.77768789165984
    - type: euclidean_pearson
      value: 80.7065877443695
    - type: euclidean_spearman
      value: 81.375940662505
    - type: manhattan_pearson
      value: 80.6507552270278
    - type: manhattan_spearman
      value: 81.32782179098741
  - task:
      type: STS
    dataset:
      name: MTEB STS15
      type: mteb/sts15-sts
      config: default
      split: test
      revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
    metrics:
    - type: cos_sim_pearson
      value: 87.08585968722274
    - type: cos_sim_spearman
      value: 88.03110031451399
    - type: euclidean_pearson
      value: 85.74012019602384
    - type: euclidean_spearman
      value: 86.13592849438209
    - type: manhattan_pearson
      value: 85.74404842369206
    - type: manhattan_spearman
      value: 86.14492318960154
  - task:
      type: STS
    dataset:
      name: MTEB STS16
      type: mteb/sts16-sts
      config: default
      split: test
      revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
    metrics:
    - type: cos_sim_pearson
      value: 84.95069052788875
    - type: cos_sim_spearman
      value: 86.4867991595147
    - type: euclidean_pearson
      value: 84.31013325754635
    - type: euclidean_spearman
      value: 85.01529258006482
    - type: manhattan_pearson
      value: 84.26995570085374
    - type: manhattan_spearman
      value: 84.96982104986162
  - task:
      type: STS
    dataset:
      name: MTEB STS17 (en-en)
      type: mteb/sts17-crosslingual-sts
      config: en-en
      split: test
      revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
    metrics:
    - type: cos_sim_pearson
      value: 87.54617647971897
    - type: cos_sim_spearman
      value: 87.49834181751034
    - type: euclidean_pearson
      value: 86.01015322577122
    - type: euclidean_spearman
      value: 84.63362652063199
    - type: manhattan_pearson
      value: 86.13807574475706
    - type: manhattan_spearman
      value: 84.7772370721132
  - task:
      type: STS
    dataset:
      name: MTEB STS22 (en)
      type: mteb/sts22-crosslingual-sts
      config: en
      split: test
      revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
    metrics:
    - type: cos_sim_pearson
      value: 67.20047755786615
    - type: cos_sim_spearman
      value: 67.05324077987636
    - type: euclidean_pearson
      value: 66.91930642976601
    - type: euclidean_spearman
      value: 65.21491856099105
    - type: manhattan_pearson
      value: 66.78756851976624
    - type: manhattan_spearman
      value: 65.12356257740728
  - task:
      type: STS
    dataset:
      name: MTEB STSBenchmark
      type: mteb/stsbenchmark-sts
      config: default
      split: test
      revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
    metrics:
    - type: cos_sim_pearson
      value: 86.19852871539686
    - type: cos_sim_spearman
      value: 87.5161895296395
    - type: euclidean_pearson
      value: 84.59848645207485
    - type: euclidean_spearman
      value: 85.26427328757919
    - type: manhattan_pearson
      value: 84.59747366996524
    - type: manhattan_spearman
      value: 85.24045855146915
  - task:
      type: Reranking
    dataset:
      name: MTEB SciDocsRR
      type: mteb/scidocs-reranking
      config: default
      split: test
      revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
    metrics:
    - type: map
      value: 87.63320317811032
    - type: mrr
      value: 96.26242947321379
  - task:
      type: Retrieval
    dataset:
      name: MTEB SciFact
      type: scifact
      config: default
      split: test
      revision: None
    metrics:
    - type: map_at_1
      value: 60.928000000000004
    - type: map_at_10
      value: 70.112
    - type: map_at_100
      value: 70.59299999999999
    - type: map_at_1000
      value: 70.623
    - type: map_at_3
      value: 66.846
    - type: map_at_5
      value: 68.447
    - type: mrr_at_1
      value: 64.0
    - type: mrr_at_10
      value: 71.212
    - type: mrr_at_100
      value: 71.616
    - type: mrr_at_1000
      value: 71.64500000000001
    - type: mrr_at_3
      value: 68.77799999999999
    - type: mrr_at_5
      value: 70.094
    - type: ndcg_at_1
      value: 64.0
    - type: ndcg_at_10
      value: 74.607
    - type: ndcg_at_100
      value: 76.416
    - type: ndcg_at_1000
      value: 77.102
    - type: ndcg_at_3
      value: 69.126
    - type: ndcg_at_5
      value: 71.41300000000001
    - type: precision_at_1
      value: 64.0
    - type: precision_at_10
      value: 9.933
    - type: precision_at_100
      value: 1.077
    - type: precision_at_1000
      value: 0.11299999999999999
    - type: precision_at_3
      value: 26.556
    - type: precision_at_5
      value: 17.467
    - type: recall_at_1
      value: 60.928000000000004
    - type: recall_at_10
      value: 87.322
    - type: recall_at_100
      value: 94.833
    - type: recall_at_1000
      value: 100.0
    - type: recall_at_3
      value: 72.628
    - type: recall_at_5
      value: 78.428
  - task:
      type: PairClassification
    dataset:
      name: MTEB SprintDuplicateQuestions
      type: mteb/sprintduplicatequestions-pairclassification
      config: default
      split: test
      revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
    metrics:
    - type: cos_sim_accuracy
      value: 99.86237623762376
    - type: cos_sim_ap
      value: 96.72586477206649
    - type: cos_sim_f1
      value: 93.01858362631845
    - type: cos_sim_precision
      value: 93.4409687184662
    - type: cos_sim_recall
      value: 92.60000000000001
    - type: dot_accuracy
      value: 99.78019801980199
    - type: dot_ap
      value: 93.72748205246228
    - type: dot_f1
      value: 89.04109589041096
    - type: dot_precision
      value: 87.16475095785441
    - type: dot_recall
      value: 91.0
    - type: euclidean_accuracy
      value: 99.85445544554456
    - type: euclidean_ap
      value: 96.6661459876145
    - type: euclidean_f1
      value: 92.58337481333997
    - type: euclidean_precision
      value: 92.17046580773042
    - type: euclidean_recall
      value: 93.0
    - type: manhattan_accuracy
      value: 99.85445544554456
    - type: manhattan_ap
      value: 96.6883549244056
    - type: manhattan_f1
      value: 92.57598405580468
    - type: manhattan_precision
      value: 92.25422045680239
    - type: manhattan_recall
      value: 92.9
    - type: max_accuracy
      value: 99.86237623762376
    - type: max_ap
      value: 96.72586477206649
    - type: max_f1
      value: 93.01858362631845
  - task:
      type: Clustering
    dataset:
      name: MTEB StackExchangeClustering
      type: mteb/stackexchange-clustering
      config: default
      split: test
      revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
    metrics:
    - type: v_measure
      value: 66.39930057069995
  - task:
      type: Clustering
    dataset:
      name: MTEB StackExchangeClusteringP2P
      type: mteb/stackexchange-clustering-p2p
      config: default
      split: test
      revision: 815ca46b2622cec33ccafc3735d572c266efdb44
    metrics:
    - type: v_measure
      value: 34.96398659903402
  - task:
      type: Reranking
    dataset:
      name: MTEB StackOverflowDupQuestions
      type: mteb/stackoverflowdupquestions-reranking
      config: default
      split: test
      revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
    metrics:
    - type: map
      value: 55.946944700355395
    - type: mrr
      value: 56.97151398438164
  - task:
      type: Summarization
    dataset:
      name: MTEB SummEval
      type: mteb/summeval
      config: default
      split: test
      revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
    metrics:
    - type: cos_sim_pearson
      value: 31.541657650692905
    - type: cos_sim_spearman
      value: 31.605804192286303
    - type: dot_pearson
      value: 28.26905996736398
    - type: dot_spearman
      value: 27.864801765851187
  - task:
      type: Retrieval
    dataset:
      name: MTEB TRECCOVID
      type: trec-covid
      config: default
      split: test
      revision: None
    metrics:
    - type: map_at_1
      value: 0.22599999999999998
    - type: map_at_10
      value: 1.8870000000000002
    - type: map_at_100
      value: 9.78
    - type: map_at_1000
      value: 22.514
    - type: map_at_3
      value: 0.6669999999999999
    - type: map_at_5
      value: 1.077
    - type: mrr_at_1
      value: 82.0
    - type: mrr_at_10
      value: 89.86699999999999
    - type: mrr_at_100
      value: 89.86699999999999
    - type: mrr_at_1000
      value: 89.86699999999999
    - type: mrr_at_3
      value: 89.667
    - type: mrr_at_5
      value: 89.667
    - type: ndcg_at_1
      value: 79.0
    - type: ndcg_at_10
      value: 74.818
    - type: ndcg_at_100
      value: 53.715999999999994
    - type: ndcg_at_1000
      value: 47.082
    - type: ndcg_at_3
      value: 82.134
    - type: ndcg_at_5
      value: 79.81899999999999
    - type: precision_at_1
      value: 82.0
    - type: precision_at_10
      value: 78.0
    - type: precision_at_100
      value: 54.48
    - type: precision_at_1000
      value: 20.518
    - type: precision_at_3
      value: 87.333
    - type: precision_at_5
      value: 85.2
    - type: recall_at_1
      value: 0.22599999999999998
    - type: recall_at_10
      value: 2.072
    - type: recall_at_100
      value: 13.013
    - type: recall_at_1000
      value: 43.462
    - type: recall_at_3
      value: 0.695
    - type: recall_at_5
      value: 1.139
  - task:
      type: Retrieval
    dataset:
      name: MTEB Touche2020
      type: webis-touche2020
      config: default
      split: test
      revision: None
    metrics:
    - type: map_at_1
      value: 2.328
    - type: map_at_10
      value: 9.795
    - type: map_at_100
      value: 15.801000000000002
    - type: map_at_1000
      value: 17.23
    - type: map_at_3
      value: 4.734
    - type: map_at_5
      value: 6.644
    - type: mrr_at_1
      value: 30.612000000000002
    - type: mrr_at_10
      value: 46.902
    - type: mrr_at_100
      value: 47.495
    - type: mrr_at_1000
      value: 47.495
    - type: mrr_at_3
      value: 41.156
    - type: mrr_at_5
      value: 44.218
    - type: ndcg_at_1
      value: 28.571
    - type: ndcg_at_10
      value: 24.806
    - type: ndcg_at_100
      value: 36.419000000000004
    - type: ndcg_at_1000
      value: 47.272999999999996
    - type: ndcg_at_3
      value: 25.666
    - type: ndcg_at_5
      value: 25.448999999999998
    - type: precision_at_1
      value: 30.612000000000002
    - type: precision_at_10
      value: 23.061
    - type: precision_at_100
      value: 7.714
    - type: precision_at_1000
      value: 1.484
    - type: precision_at_3
      value: 26.531
    - type: precision_at_5
      value: 26.122
    - type: recall_at_1
      value: 2.328
    - type: recall_at_10
      value: 16.524
    - type: recall_at_100
      value: 47.179
    - type: recall_at_1000
      value: 81.22200000000001
    - type: recall_at_3
      value: 5.745
    - type: recall_at_5
      value: 9.339
  - task:
      type: Classification
    dataset:
      name: MTEB ToxicConversationsClassification
      type: mteb/toxic_conversations_50k
      config: default
      split: test
      revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
    metrics:
    - type: accuracy
      value: 70.9142
    - type: ap
      value: 14.335574772555415
    - type: f1
      value: 54.62839595194111
  - task:
      type: Classification
    dataset:
      name: MTEB TweetSentimentExtractionClassification
      type: mteb/tweet_sentiment_extraction
      config: default
      split: test
      revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
    metrics:
    - type: accuracy
      value: 59.94340690435768
    - type: f1
      value: 60.286487936731916
  - task:
      type: Clustering
    dataset:
      name: MTEB TwentyNewsgroupsClustering
      type: mteb/twentynewsgroups-clustering
      config: default
      split: test
      revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
    metrics:
    - type: v_measure
      value: 51.26597708987974
  - task:
      type: PairClassification
    dataset:
      name: MTEB TwitterSemEval2015
      type: mteb/twittersemeval2015-pairclassification
      config: default
      split: test
      revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
    metrics:
    - type: cos_sim_accuracy
      value: 87.48882398521786
    - type: cos_sim_ap
      value: 79.04326607602204
    - type: cos_sim_f1
      value: 71.64566826860633
    - type: cos_sim_precision
      value: 70.55512918905092
    - type: cos_sim_recall
      value: 72.77044854881267
    - type: dot_accuracy
      value: 84.19264469213805
    - type: dot_ap
      value: 67.96360043562528
    - type: dot_f1
      value: 64.06418393006827
    - type: dot_precision
      value: 58.64941898706424
    - type: dot_recall
      value: 70.58047493403694
    - type: euclidean_accuracy
      value: 87.45902127913214
    - type: euclidean_ap
      value: 78.9742237648272
    - type: euclidean_f1
      value: 71.5553235908142
    - type: euclidean_precision
      value: 70.77955601445535
    - type: euclidean_recall
      value: 72.34828496042216
    - type: manhattan_accuracy
      value: 87.41729749061214
    - type: manhattan_ap
      value: 78.90073137580596
    - type: manhattan_f1
      value: 71.3942611553533
    - type: manhattan_precision
      value: 68.52705653967483
    - type: manhattan_recall
      value: 74.51187335092348
    - type: max_accuracy
      value: 87.48882398521786
    - type: max_ap
      value: 79.04326607602204
    - type: max_f1
      value: 71.64566826860633
  - task:
      type: PairClassification
    dataset:
      name: MTEB TwitterURLCorpus
      type: mteb/twitterurlcorpus-pairclassification
      config: default
      split: test
      revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
    metrics:
    - type: cos_sim_accuracy
      value: 88.68125897465751
    - type: cos_sim_ap
      value: 85.6003454431979
    - type: cos_sim_f1
      value: 77.6957163958641
    - type: cos_sim_precision
      value: 73.0110366307807
    - type: cos_sim_recall
      value: 83.02279026793964
    - type: dot_accuracy
      value: 87.7672992587418
    - type: dot_ap
      value: 82.4971301112899
    - type: dot_f1
      value: 75.90528233151184
    - type: dot_precision
      value: 72.0370626469368
    - type: dot_recall
      value: 80.21250384970742
    - type: euclidean_accuracy
      value: 88.4503434625684
    - type: euclidean_ap
      value: 84.91949884748384
    - type: euclidean_f1
      value: 76.92365018444684
    - type: euclidean_precision
      value: 74.53245721712759
    - type: euclidean_recall
      value: 79.47336002463813
    - type: manhattan_accuracy
      value: 88.47556952691427
    - type: manhattan_ap
      value: 84.8963689101517
    - type: manhattan_f1
      value: 76.85901249256395
    - type: manhattan_precision
      value: 74.31693989071039
    - type: manhattan_recall
      value: 79.58115183246073
    - type: max_accuracy
      value: 88.68125897465751
    - type: max_ap
      value:
85.6003454431979 - type: max_f1 value: 77.6957163958641 ---

# huoxu/bge-large-en-v1.5-Q8_0-GGUF

This model was converted to GGUF format from [`BAAI/bge-large-en-v1.5`](https://huggingface.co/BAAI/bge-large-en-v1.5) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/BAAI/bge-large-en-v1.5) for more details on the model.

## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux):

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

### CLI:

```bash
llama-cli --hf-repo huoxu/bge-large-en-v1.5-Q8_0-GGUF --hf-file bge-large-en-v1.5-q8_0.gguf -p "The meaning to life and the universe is"
```

### Server:

```bash
llama-server --hf-repo huoxu/bge-large-en-v1.5-Q8_0-GGUF --hf-file bge-large-en-v1.5-q8_0.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.

```bash
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).

```bash
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.

```bash
./llama-cli --hf-repo huoxu/bge-large-en-v1.5-Q8_0-GGUF --hf-file bge-large-en-v1.5-q8_0.gguf -p "The meaning to life and the universe is"
```

or

```bash
./llama-server --hf-repo huoxu/bge-large-en-v1.5-Q8_0-GGUF --hf-file bge-large-en-v1.5-q8_0.gguf -c 2048
```
[ "BIOSSES", "SCIFACT" ]
sinequa/passage-ranker.strawberry
sinequa
text-classification
[ "transformers", "pytorch", "xlm-roberta", "text-classification", "de", "en", "es", "fr", "it", "ja", "nl", "pt", "zh", "arxiv:1911.02116", "arxiv:2012.15828", "arxiv:1901.04085", "arxiv:1611.09268", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-07-10T13:41:34Z
2024-02-19T09:40:16+00:00
283
0
---
language:
- de
- en
- es
- fr
- it
- ja
- nl
- pt
- zh
---

# Model Card for `passage-ranker.strawberry`

This model is a passage ranker developed by Sinequa. It produces a relevance score given a query-passage pair and is used to order search results.

Model name: `passage-ranker.strawberry`

## Supported Languages

The model was trained and tested in the following languages:

- Chinese (simplified)
- Dutch
- English
- French
- German
- Italian
- Japanese
- Portuguese
- Spanish

Besides the aforementioned languages, basic support can be expected for an additional 91 languages that were used during the pretraining of the base model (see Appendix A of the [XLM-R paper](https://arxiv.org/abs/1911.02116)).

## Scores

| Metric              | Value |
|:--------------------|------:|
| Relevance (NDCG@10) | 0.451 |

Note that the relevance score is computed as an average over 14 retrieval datasets (see [details below](#evaluation-metrics)).

## Inference Times

| GPU        | Quantization type | Batch size 1 | Batch size 32 |
|:-----------|:------------------|-------------:|--------------:|
| NVIDIA A10 | FP16              | 1 ms         | 5 ms          |
| NVIDIA A10 | FP32              | 2 ms         | 22 ms         |
| NVIDIA T4  | FP16              | 1 ms         | 13 ms         |
| NVIDIA T4  | FP32              | 3 ms         | 64 ms         |
| NVIDIA L4  | FP16              | 2 ms         | 6 ms          |
| NVIDIA L4  | FP32              | 2 ms         | 30 ms         |

## GPU Memory Usage

| Quantization type | Memory   |
|:------------------|---------:|
| FP16              | 550 MiB  |
| FP32              | 1100 MiB |

Note that GPU memory usage only includes how much GPU memory the actual model consumes on an NVIDIA T4 GPU with a batch size of 32. It does not include the fixed amount of memory that is consumed by the ONNX Runtime upon initialization, which can be around 0.5 to 1 GiB depending on the GPU used.
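The card describes the ranker as mapping a query-passage pair to a relevance score but does not show how the classifier output is read. A common convention for MonoBERT-style cross-encoders (an assumption here, not Sinequa's documented pipeline, which serves the model through ONNX Runtime) is to softmax a two-class head and keep the probability of the "relevant" class. A minimal pure-Python sketch with hypothetical logits:

```python
import math

def relevance_from_logits(logits):
    """Convert a 2-class (irrelevant, relevant) logit pair into a [0, 1]
    relevance score via softmax -- one common MonoBERT-style readout."""
    z0, z1 = logits
    m = max(z0, z1)  # subtract the max for numerical stability
    e0, e1 = math.exp(z0 - m), math.exp(z1 - m)
    return e1 / (e0 + e1)

# Hypothetical logits for three passages scored against one query:
for passage, logits in [("passage A", (-2.1, 3.4)),
                        ("passage B", (0.2, 0.1)),
                        ("passage C", (2.8, -1.9))]:
    print(passage, round(relevance_from_logits(logits), 3))
# passage A 0.996
# passage B 0.475
# passage C 0.009
```

Sorting passages by this score in descending order yields the ranking used to order search results.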
## Requirements

- Minimal Sinequa version: 11.10.0
- Minimal Sinequa version for using FP16 models and GPUs with CUDA compute capability of 8.9+ (like NVIDIA L4): 11.11.0
- [CUDA compute capability](https://developer.nvidia.com/cuda-gpus): above 5.0 (above 6.0 for FP16 use)

## Model Details

### Overview

- Number of parameters: 107 million
- Base language model: [mMiniLMv2-L6-H384-distilled-from-XLMR-Large](https://huggingface.co/nreimers/mMiniLMv2-L6-H384-distilled-from-XLMR-Large) ([Paper](https://arxiv.org/abs/2012.15828), [GitHub](https://github.com/microsoft/unilm/tree/master/minilm))
- Insensitive to casing and accents
- Training procedure: [MonoBERT](https://arxiv.org/abs/1901.04085)

### Training Data

- MS MARCO Passage Ranking ([Paper](https://arxiv.org/abs/1611.09268), [Official Page](https://microsoft.github.io/msmarco/), [English & translated datasets on the HF dataset hub](https://huggingface.co/datasets/unicamp-dl/mmarco))
  - Original English dataset
  - Translated datasets for the other eight supported languages

### Evaluation Metrics

To determine the relevance score, we averaged the results that we obtained when evaluating on the datasets of the [BEIR benchmark](https://github.com/beir-cellar/beir). Note that all these datasets are in English.

| Dataset           | NDCG@10 |
|:------------------|--------:|
| Average           | 0.451   |
|                   |         |
| Arguana           | 0.527   |
| CLIMATE-FEVER     | 0.167   |
| DBPedia Entity    | 0.343   |
| FEVER             | 0.698   |
| FiQA-2018         | 0.297   |
| HotpotQA          | 0.648   |
| MS MARCO          | 0.409   |
| NFCorpus          | 0.317   |
| NQ                | 0.430   |
| Quora             | 0.761   |
| SCIDOCS           | 0.135   |
| SciFact           | 0.597   |
| TREC-COVID        | 0.670   |
| Webis-Touche-2020 | 0.311   |

We evaluated the model on the datasets of the [MIRACL benchmark](https://github.com/project-miracl/miracl) to test its multilingual capabilities. Note that not all training languages are part of the benchmark, so we only report metrics for the languages it covers.
| Language             | NDCG@10 |
|:---------------------|--------:|
| Chinese (simplified) | 0.414   |
| French               | 0.382   |
| German               | 0.320   |
| Japanese             | 0.479   |
| Spanish              | 0.418   |
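The headline relevance score is simply the unweighted mean over the 14 BEIR datasets; a quick sanity check with the per-dataset NDCG@10 values copied from the card:

```python
# Sanity check: the card's headline relevance score (0.451) is the
# unweighted mean of the 14 per-dataset NDCG@10 values from the BEIR table.
ndcg_at_10 = {
    "Arguana": 0.527, "CLIMATE-FEVER": 0.167, "DBPedia Entity": 0.343,
    "FEVER": 0.698, "FiQA-2018": 0.297, "HotpotQA": 0.648,
    "MS MARCO": 0.409, "NFCorpus": 0.317, "NQ": 0.430,
    "Quora": 0.761, "SCIDOCS": 0.135, "SciFact": 0.597,
    "TREC-COVID": 0.670, "Webis-Touche-2020": 0.311,
}
average = sum(ndcg_at_10.values()) / len(ndcg_at_10)
print(round(average, 3))  # 0.451
```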
[ "SCIFACT" ]