This is the dataset of the SLG framework. The paper is available at https://link.springer.com/chapter/10.1007/978-3-031-35320-8_18 (arXiv: https://arxiv.org/abs/2306.15978).
The paper's code is open source on GitHub: https://github.com/ganchengguang/SLG-framework
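A minimal loading sketch with the Hugging Face `datasets` library is shown below. The repository ID is a placeholder guessed from the GitHub account name and is not confirmed by this card; replace it with the actual repo path and split names listed under this dataset's files.

```python
# Minimal sketch, not an official loading script.
# The repo ID below is a hypothetical placeholder -- replace it with the
# actual dataset repository path and the split names available in this repo.
from datasets import load_dataset

dataset = load_dataset("ganchengguang/SLG-framework", split="train")  # hypothetical repo ID
print(dataset[0])  # inspect one example (sentence with its classification / NER labels)
```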
Cite with BibTeX:
    @inproceedings{gan2023sentence,
      title={Sentence-to-label generation framework for multi-task learning of japanese sentence classification and named entity recognition},
      author={Gan, Chengguang and Zhang, Qinghao and Mori, Tatsunori},
      booktitle={International Conference on Applications of Natural Language to Information Systems},
      pages={257--270},
      year={2023},
      organization={Springer}
    }
This dataset is constructed based on the Japanese Wikipedia NER dataset by Takahiro Omi: https://github.com/stockmarkteam/ner-wikipedia-dataset