AutoTrain Dataset for project: demoqa2
Dataset Description
This dataset has been automatically processed by AutoTrain for project demoqa2.
Languages
The BCP-47 code for the dataset's language is unk.
Dataset Structure
Data Instances
A sample from this dataset looks as follows:
[
{
"context": "After completion of two years from the date of registration, the candidate may be\nconverted from JRF to SRF subject to the fulfillment of the following criteria.\n1. The candidate has to apply in the Academic section for SRF through Supervisor(s)\nand HoD.\n2. Minimum of one SSCI/SCOPUS/SCI/SCIE indexed Journal publication/accepted or\none granted Patent is required.\n3. The candidate has to appear for a progress seminar. For that, a committee has to be\nformed with an external member by the Supervisor(s) through the respective HoD for\nevaluation of the progress of the Junior Research Fellow (JRF) for the last two years.",
"question": "What are the criteria's for converting SRF to JRF?",
"answers.text": [
"1. The candidate has to apply in the Academic section for SRF through Supervisor(s)\nand HoD.\n2. Minimum of one SSCI/SCOPUS/SCI/SCIE indexed Journal publication/accepted or\none granted Patent is required.\n3. The candidate has to appear for a progress seminar. For that, a committee has to be\nformed with an external member by the Supervisor(s) through the respective HoD for\nevaluation of the progress of the Junior Research Fellow (JRF) for the last two years."
],
"answers.answer_start": [
162
],
"feat_answer_id": [
950597
],
"feat_document_id": [
1582265
],
"feat_question_id": [
1063809
],
"feat_answer_end": [
622
],
"feat_answer_category": [
NaN
],
"feat_file_name": [
NaN
]
},
{
"context": "AC/DRC. The chairperson, RAC should conduct the\nexamination (written and or oral) within the first 15 (fifteen) months from the date of\nregistration of PS. The syllabus of the comprehensive examination is based on any three\ncourses recommended by the RAC/DRC. If PS fails the exam on the first try, he or she\nmay be allowed another chance within two months. PS's registration with the Institute\nwill be terminated if he/she fails the exam on his/her second attempt as well.",
"question": "What happen if anyone fail in comprehensive exam in first time?",
"answers.text": [
"e or she\nmay be allowed another chance within two months."
],
"answers.answer_start": [
408
],
"feat_answer_id": [
950679
],
"feat_document_id": [
1582276
],
"feat_question_id": [
1063857
],
"feat_answer_end": [
465
],
"feat_answer_category": [
NaN
],
"feat_file_name": [
NaN
]
}
]
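The offset fields in each record are internally consistent: for every answer, `feat_answer_end - answers.answer_start` matches the character length of the corresponding `answers.text` entry. Note that the offsets do not necessarily index into the (possibly truncated) `context` string shown in a preview, so slicing `context` directly with them may not recover the answer. A minimal sketch of the consistency check, using values abridged from the second sample above:

```python
# A record abridged from the second sample shown above.
record = {
    "question": "What happen if anyone fail in comprehensive exam in first time?",
    "answers.text": ["e or she\nmay be allowed another chance within two months."],
    "answers.answer_start": [408],
    "feat_answer_end": [465],
}

# For each answer, the span length implied by the offsets should equal
# the length of the answer text itself.
for text, start, end in zip(record["answers.text"],
                            record["answers.answer_start"],
                            record["feat_answer_end"]):
    span_len = end - start
    assert span_len == len(text)  # 57 characters for this sample
```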
Dataset Fields
The dataset has the following fields (also called "features"):
{
"context": "Value(dtype='string', id=None)",
"question": "Value(dtype='string', id=None)",
"answers.text": "Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)",
"answers.answer_start": "Sequence(feature=Value(dtype='int32', id=None), length=-1, id=None)",
"feat_answer_id": "Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None)",
"feat_document_id": "Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None)",
"feat_question_id": "Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None)",
"feat_answer_end": "Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None)",
"feat_answer_category": "Sequence(feature=Value(dtype='float64', id=None), length=-1, id=None)",
"feat_file_name": "Sequence(feature=Value(dtype='float64', id=None), length=-1, id=None)"
}
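The dot-joined field names such as `answers.text` and `answers.answer_start` follow the convention used when a nested struct column (here, `answers`) is flattened into top-level columns. A minimal sketch of that naming scheme (an illustration of the convention, not the `datasets` library's actual implementation):

```python
def flatten(example, sep="."):
    """Flatten one level of nested dict columns into dot-joined names."""
    flat = {}
    for key, value in example.items():
        if isinstance(value, dict):
            # Nested struct: each child becomes "parent.child".
            for subkey, subvalue in value.items():
                flat[f"{key}{sep}{subkey}"] = subvalue
        else:
            flat[key] = value
    return flat

nested = {
    "context": "...",
    "question": "...",
    "answers": {"text": ["span"], "answer_start": [162]},
}
flat = flatten(nested)
# flat has the keys: context, question, answers.text, answers.answer_start
```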
Dataset Splits
This dataset is split into a train and a validation split. The split sizes are as follows:
Split name | Num samples |
---|---|
train | 43 |
valid | 11 |
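From the counts above, the split proportions work out to roughly an 80/20 train/validation split; a quick sketch of the arithmetic:

```python
splits = {"train": 43, "valid": 11}
total = sum(splits.values())  # 54 examples overall
ratios = {name: n / total for name, n in splits.items()}
# train ≈ 0.796, valid ≈ 0.204 — roughly an 80/20 split
```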