---
task_categories:
- text-retrieval
language:
- sa
- en
tags:
- sanskrit
pretty_name: Anveshana
size_categories:
- 1K<n<10K
---

# Dataset Card for Anveshana

## Dataset Details

### Dataset Description

We conducted a comprehensive benchmarking study to explore and evaluate current state-of-the-art models for Cross-Lingual Information Retrieval (CLIR) from English to Sanskrit. Our primary objective is to assess how effectively these models retrieve Sanskrit documents in response to English queries. To this end, we assembled a robust dataset centred on the Srimadbhagavatam, comprising 3,400 query-document pairs drawn from 334 documents. The documents were curated to represent a wide spectrum of thematic content and complexity within the text. The dataset includes detailed preprocessing of the Sanskrit documents, preserving their poetic structure while accommodating computational analysis, and minimal preprocessing of the English queries to maintain their original intent.

- **Funded by:** This work was supported in part by the National Language Translation Mission (NLTM): Bhashini project of the Government of India.
- **Language(s) (NLP):** Sanskrit, English

### Dataset Sources

- **Repository:** TBD
- **Paper:** [Anveshana: A New Benchmark Dataset for Cross-Lingual Information Retrieval On English Queries and Sanskrit Documents](https://arxiv.org/abs/2505.19494)

## Uses

The dataset can be used to train and evaluate cross-lingual information retrieval models.

## Dataset Creation

### Curation Rationale

To effectively train and evaluate a CLIR model, it was imperative to have Sanskrit documents paired with English queries. We identified the Srimadbhagavatam as the only text that provided the requisite data, with the Sanskrit documents being individual chapters of the Srimadbhagavatam.

### Source Data

#### Data Collection and Processing

In the data collection phase of our CLIR research, we implemented web scraping techniques to harvest textual content from the Vedabase website. This digital platform hosts a variety of Sanskrit documents, including multiple chapters of the ancient text Srimadbhagavatam.
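For illustration, the parsing side of such a scraping step might look like the sketch below. The HTML structure (`<span class="line">` inside a `div class="verse"`) is a made-up stand-in, not Vedabase's actual markup; the point is only that verse lines are extracted individually so the poetic line breaks survive into the stored document.

```python
from html.parser import HTMLParser

# Illustrative markup only -- the real site's HTML structure is an assumption here.
SAMPLE_HTML = """
<div class="verse">
  <span class="line">dharmah projjhita-kaitavo 'tra</span>
  <span class="line">paramo nirmatsaranam satam</span>
</div>
"""

class VerseExtractor(HTMLParser):
    """Collect the text of <span class="line"> elements, one verse line each."""
    def __init__(self):
        super().__init__()
        self.in_line = False
        self.lines = []

    def handle_starttag(self, tag, attrs):
        if tag == "span" and ("class", "line") in attrs:
            self.in_line = True

    def handle_endtag(self, tag):
        if tag == "span":
            self.in_line = False

    def handle_data(self, data):
        if self.in_line and data.strip():
            self.lines.append(data.strip())

parser = VerseExtractor()
parser.feed(SAMPLE_HTML)
# Re-join with newlines so the poetic structure is preserved in the document text.
verse = "\n".join(parser.lines)
```

Joining the lines with explicit newlines, rather than flattening the verse into one string, is what keeps the half-verse structure available to later preprocessing.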
To facilitate the development of the query-document pairs essential for our cross-language retrieval task, we meticulously examined English translations of each document and then manually crafted an average of 10 queries per document, resulting in a total of 3,400 query-document pairs across 334 documents.

## Citation

Please cite the following if you use this dataset.

**BibTeX:**

```
@misc{jagadeeshan2025anveshananewbenchmarkdataset,
  title={Anveshana: A New Benchmark Dataset for Cross-Lingual Information Retrieval On English Queries and Sanskrit Documents},
  author={Manoj Balaji Jagadeeshan and Prince Raj and Pawan Goyal},
  year={2025},
  eprint={2505.19494},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2505.19494},
}
```
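As a usage sketch, the query-document pairs lend themselves directly to a Recall@k evaluation loop. The snippet below is a toy harness with a bag-of-words lexical scorer standing in for a retrieval model; the field layout (`pairs` as query/relevant-id tuples, `docs` as an id-to-text mapping) and the scorer are assumptions for illustration, not the benchmark's actual models or schema.

```python
import math
from collections import Counter

def bow_score(query: str, doc: str) -> float:
    """Toy lexical similarity: normalised token overlap between two texts."""
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    overlap = sum((q & d).values())
    norm = math.sqrt(sum(q.values()) * sum(d.values()))
    return overlap / norm if norm else 0.0

def recall_at_k(pairs, docs, k=10):
    """Fraction of queries whose relevant document appears in the top-k ranking.

    pairs: list of (query_text, relevant_doc_id) tuples
    docs:  mapping of doc_id -> document text
    """
    hits = 0
    for query, rel_id in pairs:
        ranked = sorted(docs, key=lambda i: bow_score(query, docs[i]), reverse=True)
        hits += rel_id in ranked[:k]
    return hits / len(pairs)
```

In practice the lexical `bow_score` would be replaced by the score of a CLIR model over the English query and Sanskrit document; the evaluation loop itself stays the same.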