Error loading Code Translation

#2
by ayazdan - opened

I followed the instructions as follows:

import datasets
code_translation_dataset = datasets.load_dataset("NTU-NLP-sg/xCodeEval", "code_translation")
print(code_translation_dataset)

It raises the following error:

FileNotFoundError                         Traceback (most recent call last)
<ipython-input-4-e5fb5f11b2e5> in <cell line: 2>()
      1 import datasets
----> 2 code_translation_dataset = datasets.load_dataset("NTU-NLP-sg/xCodeEval", "code_translation")
      3 print(code_translation_dataset)

17 frames
/usr/local/lib/python3.10/dist-packages/huggingface_hub/hf_file_system.py in _raise_file_not_found(path, err)
    886     elif isinstance(err, HFValidationError):
    887         msg = f"{path} (invalid repository id)"
--> 888     raise FileNotFoundError(msg) from err
    889 
    890 

FileNotFoundError: datasets/NTU-NLP-sg/xCodeEval@main/code_translation/validation/C%23.jsonl
NLP Group of Nanyang Technological University org
edited Sep 9, 2024

I just downloaded the entire dataset with the following command:

code_translation_dataset = datasets.load_dataset("NTU-NLP-sg/xCodeEval", "code_translation")

Sometimes the cache folder of the Hugging Face datasets package gets corrupted. I would suggest deleting your Hugging Face datasets cache folder. You can also try setting a different cache path with the following command:

code_translation_dataset = datasets.load_dataset("NTU-NLP-sg/xCodeEval", "code_translation", cache_dir="path/to/the/cache/")

Please let me know if that works for you. If you are in a hurry, you can also git lfs pull the entire repo:

GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/datasets/NTU-NLP-sg/xCodeEval
cd xCodeEval
git lfs pull --include "code_translation/*"
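
Once the clone finishes, you could load the local copy directly. A rough sketch, assuming the xcodeeval.py loading script mentioned later in this thread sits in the cloned folder (the exact arguments may differ across datasets versions):

import datasets

# Point load_dataset at the local loading script instead of the Hub repo id;
# "./xCodeEval" is the folder cloned by the commands above.
local_ds = datasets.load_dataset("./xCodeEval/xcodeeval.py", "code_translation")
print(local_ds)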

Also, if you are looking into the translation data, please check this thread.

(Screenshot attached: Screenshot 2024-09-10 at 1.53.59 AM.png)

NLP Group of Nanyang Technological University org

I downloaded it; how do I load it locally? The src fields inside still need to be mapped, which is a bit of a hassle.

I have already loaded it using xcodeeval.py.
I would also like to ask whether the data in this table is correct: does it compile, and do the test samples pass? And why do the HumanEval and BigCodeBench scores drop when a 7B model is trained on this data?
