
Dataset Card for HANS

HANS is under development and is not yet suitable for use.

Dataset Details

Dataset Description

The Handwriting Archive: Nordic Samples (HANS) dataset is a small Danish-language dataset, consisting primarily of transcribed, publicly available documents from the Danish National Archives.

The purpose of the dataset is to explore how to properly create, maintain, and update a repository of HTR (handwritten text recognition) training data.

  • Curated by: Joen Rommedahl
  • Language(s) (NLP): Danish
  • License: cc-by-4.0

Dataset Sources [optional]

Most images are from the Danish National Archives' viewing platform Arkivalier Online (https://www.rigsarkivet.dk/arkivalieronline/).

Uses

The dataset is currently too small to be of value as an HTR training set. However, if you choose to use it anyway, be aware of the following annotation peculiarity (a filtering sketch follows the list):

  • Unreadable words are annotated as "??" or "???". Make sure to exclude any lines containing "??" from the dataset before use.
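
As a minimal sketch, such filtering could be done with the Hugging Face datasets library. Note that the repository id, split name, and the name of the transcription column ("text") below are assumptions for illustration only and must be adjusted to the actual dataset configuration.

```python
from datasets import load_dataset

# Hypothetical Hub path -- replace with the actual repository id of HANS.
REPO_ID = "your-username/HANS"

# Assumes a single "train" split; adjust if the dataset uses other splits.
ds = load_dataset(REPO_ID, split="train")

# Drop every line whose transcription contains the "??" unreadable-word
# marker (this also covers "???", since it contains "??").
# The "text" column name is an assumption about the transcription field.
clean = ds.filter(lambda example: "??" not in example["text"])

print(f"Kept {len(clean)} of {len(ds)} lines after removing unreadable annotations")
```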

Dataset Structure

[More Information Needed]

Dataset Creation

Curation Rationale

[More Information Needed]

Source Data

Data Collection and Processing

[More Information Needed]

Who are the source data producers?

[More Information Needed]

Annotations [optional]

Annotation process

[More Information Needed]

Who are the annotators?

[More Information Needed]

Personal and Sensitive Information

[More Information Needed]

Bias, Risks, and Limitations

[More Information Needed]

Recommendations

Users should be made aware of the risks, biases, and limitations of the dataset. More information is needed before further recommendations can be given.

Citation [optional]

BibTeX:

[More Information Needed]

APA:

[More Information Needed]

Glossary [optional]

[More Information Needed]

More Information [optional]

[More Information Needed]

Dataset Card Authors [optional]

[More Information Needed]

Dataset Card Contact

[More Information Needed]
