---
dataset_info:
- config_name: cited_count_en
  features:
  - name: paper_id
    dtype: int64
  - name: text
    dtype: string
  - name: value
    dtype: int64
  splits:
  - name: train
    num_bytes: 153923446
    num_examples: 164037
  - name: test
    num_bytes: 17299606
    num_examples: 18227
  download_size: 96766516
  dataset_size: 171223052
- config_name: cited_count_ru
  features:
  - name: paper_id
    dtype: int64
  - name: text
    dtype: string
  - name: value
    dtype: int64
  splits:
  - name: train
    num_bytes: 270027978
    num_examples: 164037
  - name: test
    num_bytes: 30221754
    num_examples: 18227
  download_size: 142100419
  dataset_size: 300249732
- config_name: corerisc_en
  features:
  - name: paper_id
    dtype: int64
  - name: text
    dtype: string
  - name: label
    dtype: int64
  splits:
  - name: train
    num_bytes: 73341997
    num_examples: 71179
  - name: test
    num_bytes: 8166576
    num_examples: 7909
  download_size: 46316647
  dataset_size: 81508573
- config_name: corerisc_ru
  features:
  - name: paper_id
    dtype: int64
  - name: text
    dtype: string
  - name: label
    dtype: int64
  splits:
  - name: train
    num_bytes: 126882101
    num_examples: 71179
  - name: test
    num_bytes: 14138146
    num_examples: 7909
  download_size: 66804789
  dataset_size: 141020247
- config_name: grnti_en
  features:
  - name: paper_id
    dtype: int64
  - name: text
    dtype: string
  - name: label
    dtype: string
  splits:
  - name: train
    num_bytes: 23162752
    num_examples: 24338
  - name: test
    num_bytes: 2370958
    num_examples: 2517
  download_size: 14573785
  dataset_size: 25533710
- config_name: grnti_ru
  features:
  - name: paper_id
    dtype: int64
  - name: text
    dtype: string
  - name: label
    dtype: string
  splits:
  - name: train
    num_bytes: 45889708
    num_examples: 28399
  - name: test
    num_bytes: 4505718
    num_examples: 2764
  download_size: 23982191
  dataset_size: 50395426
- config_name: oecd_en
  features:
  - name: paper_id
    dtype: int64
  - name: text
    dtype: string
  - name: label
    dtype: string
  splits:
  - name: train
    num_bytes: 23554397
    num_examples: 25524
  - name: test
    num_bytes: 2731300
    num_examples: 2982
  download_size: 15027636
  dataset_size: 26285697
- config_name: oecd_ru
  features:
  - name: paper_id
    dtype: int64
  - name: text
    dtype: string
  - name: label
    dtype: string
  splits:
  - name: train
    num_bytes: 43096907
    num_examples: 27680
  - name: test
    num_bytes: 4970053
    num_examples: 3197
  download_size: 23005782
  dataset_size: 48066960
- config_name: pub_type_en
  features:
  - name: paper_id
    dtype: int64
  - name: text
    dtype: string
  - name: label
    dtype: string
  splits:
  - name: train
    num_bytes: 3313212
    num_examples: 4150
  - name: test
    num_bytes: 365129
    num_examples: 462
  download_size: 2117609
  dataset_size: 3678341
- config_name: pub_type_ru
  features:
  - name: paper_id
    dtype: int64
  - name: text
    dtype: string
  - name: label
    dtype: string
  splits:
  - name: train
    num_bytes: 5840922
    num_examples: 4150
  - name: test
    num_bytes: 659760
    num_examples: 462
  download_size: 3117503
  dataset_size: 6500682
- config_name: yearpubl_en
  features:
  - name: paper_id
    dtype: int64
  - name: text
    dtype: string
  - name: value
    dtype: int64
  splits:
  - name: train
    num_bytes: 154116011
    num_examples: 164037
  - name: test
    num_bytes: 17107041
    num_examples: 18227
  download_size: 96733496
  dataset_size: 171223052
- config_name: yearpubl_ru
  features:
  - name: paper_id
    dtype: int64
  - name: text
    dtype: string
  - name: value
    dtype: int64
  splits:
  - name: train
    num_bytes: 270193175
    num_examples: 164037
  - name: test
    num_bytes: 30056557
    num_examples: 18227
  download_size: 141958702
  dataset_size: 300249732
configs:
- config_name: cited_count_en
  data_files:
  - split: train
    path: cited_count_en/train-*
  - split: test
    path: cited_count_en/test-*
- config_name: cited_count_ru
  data_files:
  - split: train
    path: cited_count_ru/train-*
  - split: test
    path: cited_count_ru/test-*
- config_name: corerisc_en
  data_files:
  - split: train
    path: corerisc_en/train-*
  - split: test
    path: corerisc_en/test-*
- config_name: corerisc_ru
  data_files:
  - split: train
    path: corerisc_ru/train-*
  - split: test
    path: corerisc_ru/test-*
- config_name: grnti_en
  data_files:
  - split: train
    path: grnti_en/train-*
  - split: test
    path: grnti_en/test-*
- config_name: grnti_ru
  data_files:
  - split: train
    path: grnti_ru/train-*
  - split: test
    path: grnti_ru/test-*
- config_name: oecd_en
  data_files:
  - split: train
    path: oecd_en/train-*
  - split: test
    path: oecd_en/test-*
- config_name: oecd_ru
  data_files:
  - split: train
    path: oecd_ru/train-*
  - split: test
    path: oecd_ru/test-*
- config_name: pub_type_en
  data_files:
  - split: train
    path: pub_type_en/train-*
  - split: test
    path: pub_type_en/test-*
- config_name: pub_type_ru
  data_files:
  - split: train
    path: pub_type_ru/train-*
  - split: test
    path: pub_type_ru/test-*
- config_name: yearpubl_en
  data_files:
  - split: train
    path: yearpubl_en/train-*
  - split: test
    path: yearpubl_en/test-*
- config_name: yearpubl_ru
  data_files:
  - split: train
    path: yearpubl_ru/train-*
  - split: test
    path: yearpubl_ru/test-*
language:
- ru
- en
tags:
- benchmark
- mteb
- text classification
- text regression
---

# RuSciBench Dataset Collection

This repository contains the datasets for the **RuSciBench** benchmark, designed for evaluating semantic vector representations of scientific texts in Russian and English.

## Dataset Description

**RuSciBench** is the first benchmark specifically targeting scientific documents in the Russian language, alongside their English counterparts (abstracts and titles). The data is sourced from [eLibrary.ru](https://www.elibrary.ru), the largest Russian electronic library of scientific publications, integrated with the Russian Science Citation Index (RSCI). The dataset comprises approximately 182,000 scientific paper abstracts and titles. All papers included in the benchmark have open licenses.
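
Each task subset listed in the metadata above can also be loaded directly with the 🤗 `datasets` library. The snippet below is a minimal sketch; the repository id `mlsa-iai-msu-lab/ru_sci_bench` is an assumption and should be replaced with the actual path of this dataset card if it differs.

```python
from datasets import load_dataset

# Assumed repository id for this card; adjust if the actual path differs.
REPO_ID = "mlsa-iai-msu-lab/ru_sci_bench"

# Every config above (e.g. grnti_ru, oecd_en, cited_count_ru, ...) is a separate
# subset with "train" and "test" splits.
grnti_ru = load_dataset(REPO_ID, "grnti_ru")

print(grnti_ru)                     # DatasetDict with train/test splits
print(grnti_ru["train"][0].keys())  # classification subsets: paper_id, text, label
```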
## Tasks

The benchmark includes a variety of tasks grouped into Classification, Regression, and Retrieval categories, designed for both Russian and English texts based on paper abstracts.

### Classification Tasks

1. **Topic Classification (OECD):** Classify papers based on the first two levels of the Organisation for Economic Co-operation and Development (OECD) rubricator (29 classes).
   * `RuSciBenchOecdRuClassification` (subset `oecd_ru`)
   * `RuSciBenchOecdEnClassification` (subset `oecd_en`)
2. **Topic Classification (GRNTI/SRSTI):** Classify papers based on the first level of the State Rubricator of Scientific and Technical Information (GRNTI/SRSTI) (29 classes).
   * `RuSciBenchGrntiRuClassification` (subset `grnti_ru`)
   * `RuSciBenchGrntiEnClassification` (subset `grnti_en`)
3. **Core RISC Affiliation:** Binary classification task to determine whether a paper belongs to the Core of the Russian Index of Science Citation (RISC).
   * `RuSciBenchCoreRiscRuClassification` (subset `corerisc_ru`)
   * `RuSciBenchCoreRiscEnClassification` (subset `corerisc_en`)
4. **Publication Type Classification:** Classify documents into types such as 'article', 'conference proceedings', 'survey', etc. (7 classes, balanced subset used).
   * `RuSciBenchPubTypesRuClassification` (subset `pub_type_ru`)
   * `RuSciBenchPubTypesEnClassification` (subset `pub_type_en`)
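
Outside of MTEB (see the Usage section below), a classification subset can be evaluated directly by encoding abstracts with a sentence encoder and fitting a lightweight probe on the frozen embeddings. The sketch below mirrors, but is not identical to, the MTEB classification protocol; the repository id is an assumption, and the encoder is simply the example model used later on this card.

```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score

# Assumed repository id and example encoder; swap in the models you want to compare.
ds = load_dataset("mlsa-iai-msu-lab/ru_sci_bench", "grnti_ru")
encoder = SentenceTransformer("mlsa-iai-msu-lab/sci-rus-tiny3.1")

# Encode abstracts into fixed-size vectors (the benchmark evaluates these frozen embeddings).
X_train = encoder.encode(ds["train"]["text"], batch_size=64, show_progress_bar=True)
X_test = encoder.encode(ds["test"]["text"], batch_size=64, show_progress_bar=True)
y_train, y_test = ds["train"]["label"], ds["test"]["label"]

# Simple linear probe on top of the embeddings.
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
pred = clf.predict(X_test)
print("accuracy:", accuracy_score(y_test, pred))
print("macro-F1:", f1_score(y_test, pred, average="macro"))
```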
### Regression Tasks

1. **Year of Publication Prediction:** Predict the publication year of the paper.
   * `RuSciBenchYearPublRuRegression` (subset `yearpubl_ru`)
   * `RuSciBenchYearPublEnRegression` (subset `yearpubl_en`)
2. **Citation Count Prediction:** Predict the number of times a paper has been cited.
   * `RuSciBenchCitedCountRuRegression` (subset `cited_count_ru`)
   * `RuSciBenchCitedCountEnRegression` (subset `cited_count_en`)

### Retrieval Tasks

1. **Direct Citation Prediction:** Given a query paper abstract, retrieve abstracts of the papers it directly cites from the corpus. Uses a retrieval setup in which all non-positive documents are treated as negatives. ([Dataset Link](https://huggingface.co/datasets/mlsa-iai-msu-lab/ru_sci_bench_cite_retrieval))
   * `RuSciBenchCiteRuRetrieval`
   * `RuSciBenchCiteEnRetrieval`
2. **Co-Citation Prediction:** Given a query paper abstract, retrieve abstracts of papers that are co-cited with it (cited by at least 5 common papers). Uses the same retrieval setup. ([Dataset Link](https://huggingface.co/datasets/mlsa-iai-msu-lab/ru_sci_bench_cocite_retrieval))
   * `RuSciBenchCociteRuRetrieval`
   * `RuSciBenchCociteEnRetrieval`
3. **Translation Search:** Given an abstract in one language (e.g., Russian), retrieve its corresponding translation (e.g., the English abstract of the same paper) from the corpus of abstracts in the target language. ([Dataset Link](https://huggingface.co/datasets/mlsa-iai-msu-lab/ru_sci_bench_translation_search))
   * `RuSciBenchTranslationSearchEnRetrieval` (Query: En, Corpus: Ru)
   * `RuSciBenchTranslationSearchRuRetrieval` (Query: Ru, Corpus: En)

## Usage

These datasets are designed to be used with the MTEB library. **First, you need to install the MTEB fork containing the RuSciBench tasks:**

```bash
pip install git+https://github.com/mlsa-iai-msu-lab/ru_sci_bench_mteb
```

Then you can evaluate sentence-transformer models easily:

```python
from sentence_transformers import SentenceTransformer
from mteb import MTEB

# Example: evaluate on Russian GRNTI classification
model_name = "mlsa-iai-msu-lab/sci-rus-tiny3.1"  # Or any other sentence transformer
model = SentenceTransformer(model_name)

evaluation = MTEB(tasks=["RuSciBenchGrntiRuClassification"])  # Select tasks
results = evaluation.run(model, output_folder=f"results/{model_name.split('/')[-1]}")
print(results)
```

For more details on the benchmark, tasks, and baseline model evaluations, please refer to the associated paper and code repository.

* **Code Repository:** [https://github.com/mlsa-iai-msu-lab/ru_sci_bench_mteb](https://github.com/mlsa-iai-msu-lab/ru_sci_bench_mteb)
* **Paper:** https://doi.org/10.1134/S1064562424602191

## Citation

If you use RuSciBench in your research, please cite the following paper:

```bibtex
@article{Vatolin2024,
  author  = {Vatolin, A. and Gerasimenko, N. and Ianina, A. and Vorontsov, K.},
  title   = {RuSciBench: Open Benchmark for Russian and English Scientific Document Representations},
  journal = {Doklady Mathematics},
  year    = {2024},
  volume  = {110},
  number  = {1},
  pages   = {S251--S260},
  month   = dec,
  doi     = {10.1134/S1064562424602191},
  url     = {https://doi.org/10.1134/S1064562424602191},
  issn    = {1531-8362}
}
```