Data issue
One of the HumanEval instances contains an extra quotation mark in its docstring, so the prompt isn't parsed correctly:
>>> ex = load_dataset("bigcode/humanevalpack", "python", trust_remote_code=True)["test"][142]
>>> print(ex["prompt"])
def sum_squares(lst):
    """"
    This function will take a list of integers. For all entries in the list, the function shall square the integer entry if its index is a
    multiple of 3 and will cube the integer entry if its index is a multiple of 4 and not a multiple of 3. The function will not
    change the entries in the list whose indexes are not a multiple of 3 or 4. The function shall then return the sum of all entries.
    Examples:
    For lst = [1,2,3] the output should be 6
    For lst = [] the output should be 0
    For lst = [-1,-5,2,-1,-5] the output should be -126
    """
>>> print(ex["declaration"])
def sum_squares(lst):
"
Great find, do you want to submit a PR to fix it?
I'd be happy to, but I wanted to confirm something first: was this dataset automatically converted from the original GitHub repo? If so, I worry there might be a bug in the conversion code that caused this and could lead to other errors too. On the other hand, if this was a one-off human error, then it's probably fine.
This is a one-off human error that is also present in the original OpenAI HumanEval. You will find the line {"task_id": "HumanEval/142", "prompt": "\n\n\ndef sum_squares(lst):\n \"\"\"\"\n
in https://raw.githubusercontent.com/bigcode-project/octopack/main/evaluation/create/humaneval-x/HumanEval_original.jsonl.
Maybe you could (a rough script for the jsonl edits is sketched after this list):
- Fix it at https://github.com/bigcode-project/octopack/blob/0c41f2ec631ddb9f0a14689f5cca9a72cc9b5520/evaluation/create/humaneval-x/data/python/data/humaneval.jsonl#L143
- Fix it at https://github.com/bigcode-project/octopack/blob/0c41f2ec631ddb9f0a14689f5cca9a72cc9b5520/evaluation/create/humaneval-x/data/python/data/humanevalpack.jsonl#L143C23-L143C107
- Add the change here https://github.com/bigcode-project/octopack/tree/main/evaluation/create/humaneval-x#modifications-muennighoff
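For reference, the two jsonl edits could also be scripted instead of done by hand; a rough sketch (paths taken from the links above, field names from the sample printed earlier):

import json

# Rough sketch: collapse the quadruple quote in instance 142's prompt back to a
# normal """ docstring opener, and drop the stray quote that leaked into the
# declaration field where that field exists.
def fix_file(path):
    with open(path, encoding="utf-8") as f:
        records = [json.loads(line) for line in f]
    for record in records:
        if record["task_id"].endswith("/142"):
            record["prompt"] = record["prompt"].replace('""""', '"""', 1)
            if '"' in record.get("declaration", ""):
                record["declaration"] = record["declaration"].split('"')[0].rstrip() + "\n"
    with open(path, "w", encoding="utf-8") as f:
        for record in records:
            f.write(json.dumps(record) + "\n")

fix_file("evaluation/create/humaneval-x/data/python/data/humaneval.jsonl")
fix_file("evaluation/create/humaneval-x/data/python/data/humanevalpack.jsonl")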
Amazing, merged! Now we just need to change it here: https://huggingface.co/datasets/bigcode/humanevalpack/blob/main/data/python/data/humanevalpack.jsonl
You can just open a PR here that uploads the new humanevalpack.jsonl you edited in the code repo, and we can merge that too!
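If it's easier than the web editor, that PR can also be opened from Python (a minimal sketch with huggingface_hub, assuming you are already logged in to the Hub):

from huggingface_hub import HfApi

# Minimal sketch: upload the fixed file and open a pull request on the
# bigcode/humanevalpack dataset repo.
api = HfApi()
api.upload_file(
    path_or_fileobj="humanevalpack.jsonl",  # the corrected file from the code repo
    path_in_repo="data/python/data/humanevalpack.jsonl",
    repo_id="bigcode/humanevalpack",
    repo_type="dataset",
    create_pr=True,
    commit_message="Fix extra quotation mark in instance 142",
)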
Great, all merged I think. Should we close this?
Sounds good, thanks!