---
configs:
  - config_name: commons_images
    data_files:
      - split: train
        path: commons_images/train/*.tar
      - split: validation
        path: commons_images/validation/*.tar
      - split: test
        path: commons_images/test/*.tar
  - config_name: all_wikidata_items
    data_files: all_wikidata_items/*.tar
  - config_name: frequent_wikidata_items
    data_files: frequent_wikidata_items/*.tar
language:
  - en
pretty_name: 'Visual Entity Linking: Wikimedia Commons & Wikidata'
size_categories:
  - 1M<n<10M
license: cc-by-sa-4.0
tags:
  - wikimedia
---

# Visual Entity Linking: Wikimedia Commons & Wikidata

This dataset enables training and evaluating ML models that link Wikimedia Commons images to the Wikidata items they depict.

Disclaimer: All images contained in this dataset are generally assumed to be freely usable (as intended for Wikimedia Commons). Each image's license and author/uploader is reported - to the best of our ability - in its metadata (see section Dataset Structure). If you want your image's attribution changed or the image removed from the dataset entirely, please use the Community tab of this repository or the contact information at the bottom of this dataset card to inform us.

## Description

Wikimedia Commons acts as the media storage service for other wikis such as Wikipedia and contains over 100 million images. Wikidata, on the other hand, represents a knowledge graph (KG) of over 100 million entities, mainly comprising so-called items (such as house cat or Angela Merkel). In order to facilitate image understanding and the search and organization of Commons images in a machine-friendly way, the Wikimedia community initiated the Structured Data project: Users can add multiple items to the dedicated depicts statement of a Commons image (on the Structured Data tab), indicating that the image portrays these annotated item(s). However, as of November 2023 only about 15% of all Commons images have at least one annotated item, leaving a gap that may be filled via automation.

The objective is to predict, for a given Commons image, the Wikidata items it depicts. Specifically, we match all items of our KG against the Commons image and consider the top-k results, which can be seen as one application of Visual Entity Linking (VEL). The k results are usually collected by taking the items whose learned representations have the highest cosine similarity to the Commons image's representation. They can then either be used to evaluate model performance via measures such as Recall@k or Mean Average Precision or, in practice, be presented to a user who decides which items are actually suitable candidates for an image's depicts statement.
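
For illustration, a minimal sketch of this retrieval-style evaluation is given below. It assumes pre-computed, L2-normalized image and item embeddings (`image_embs`, `item_embs`) and per-image sets of ground-truth item indices (`gt_indices`) - all hypothetical names - and counts an image as a hit if at least one of its ground-truth items appears in the top k.

```python
import numpy as np

def recall_at_k(image_embs: np.ndarray, item_embs: np.ndarray,
                gt_indices: list[set[int]], k: int = 10) -> float:
    """Fraction of images with at least one ground-truth item in the top-k results."""
    # For L2-normalized vectors, cosine similarity reduces to a dot product.
    sims = image_embs @ item_embs.T            # shape (num_images, num_items)
    topk = np.argsort(-sims, axis=1)[:, :k]    # indices of the k most similar items
    hits = [bool(set(row.tolist()) & gt) for row, gt in zip(topk, gt_indices)]
    return float(np.mean(hits))
```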

The user-provided item annotations act as our dataset's ground-truth labels. Notice that this dataset constitutes a multi-label challenge, since each image can have multiple items as labels (even though the majority has only one). The dataset and task are multi-modal at their core: In the simplest scenario, each Commons image is matched against the KG items represented as text (item name plus short description). Because of these image-text pairs, many VEL approaches build upon the CLIP architecture. However, advanced scenarios can additionally utilize the textual information present for Commons images (description, Commons categories) as well as the image(s) often available for Wikidata items. Another source of input data are KG embeddings, which aim to capture similarities between KG entities in a latent space. Pre-trained 200-dimensional KG embeddings for Wikidata items are also included in this dataset (see section Dataset Structure).
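
As a rough sketch of this simple image-text scenario (not the exact setup used to build or benchmark this dataset), the following snippet scores a single Commons image against a handful of item texts with an off-the-shelf CLIP model; the model name and the "name: description" text format are illustrative assumptions.

```python
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def score_items(image, item_texts):
    """Return cosine similarities between one PIL image and a list of item texts."""
    inputs = processor(text=item_texts, images=image,
                       return_tensors="pt", padding=True, truncation=True)
    with torch.no_grad():
        img_emb = model.get_image_features(pixel_values=inputs["pixel_values"])
        txt_emb = model.get_text_features(input_ids=inputs["input_ids"],
                                          attention_mask=inputs["attention_mask"])
    img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)
    txt_emb = txt_emb / txt_emb.norm(dim=-1, keepdim=True)
    return (img_emb @ txt_emb.T).squeeze(0)    # one similarity score per item text
```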

Note that this dataset only contains text for a Commons image or Wikidata item (if any) that is ensured to be in English (usually detected via a prefix or JSON key such as "en:"). Incorporating more languages might be of interest for further research and datasets. Major challenges posed by the task include the high number of candidate items, their similarity and varying granularity, and the skewed distribution of annotations across these items.

## Use Cases

The original and main use case of this dataset is VEL between Wikimedia Commons images and Wikidata items. However, depending on the need and with appropriate processing or additional input data, the dataset may also be used for other purposes:

  • image classification: establish (fine-grained or rather coarse) classes from the Wikidata items,
  • visual question answering: construct natural-language questions from the ground-truth item(s) of a Commons image,
  • image search: find the best-matching Commons image(s) to add to a Wikidata item or Wikipedia page (a "reversed" VEL task compared to ours).

## Dataset Creation

The motivation for this dataset is to ease the training and evaluation of ML models suitable for the VEL task at hand. Overall, it aims to contribute to Commons' Structured Data project by exploring the potential of automated approaches, possibly resulting in a solution that is actually used in production on Commons. Compared to much of the related work, our dataset is open-domain (not limited to images of only persons or plants, etc.) and includes many more images for model training, validation and testing (1 million in total).

The data included in this dataset stems from the following sources (November 2023, here linking to latest):

  • a dump for Commons structured data (image ID, ground-truth item labels),
  • a dump for Commons metadata (image ID, description, categories, image license),
  • a dump for Wikidata entities incl. all items (item QID, label, description, superclasses, item image),
  • download of all desired raw Commons images (not included in a separate dump, width 224px) via the MediaWiki API,
  • pre-trained KG embeddings of (most of) the candidate items from PyTorch Big Graph.

All content that is related to the Wikimedia projects (the uploaded images, attached metadata, and item pages) is created and maintained by the Wikimedia community. Note that there is no additional annotation procedure conducted by us. However, we do apply some filtering steps: We only consider those Commons images from the dump that have at least one depicts statement (about 15 million). Then, we randomly shuffle this set once to remove any biases of the upload date or uploading user. Lastly, we select the first 1 million images, which comprise the dataset. Similarly, out of all Wikidata items extracted from their dump, we only keep those that are annotated at least once across the ~15 million images, resulting in ~2.3 million items. This is a naive but plausible way of restricting the candidate pool to items that can plausibly be depicted and annotated accordingly (as opposed to abstract concepts, scholarly articles, etc., of which there are many in Wikidata's KG).

One separate processing step handles the item imbalance issue: Over 50% of all ~2.3 million candidate items are depicted only once, and over 90% fewer than ten times. Knowing the challenges ML faces when dealing with (too) few examples per class, we also want to provide an easier version of the problem task. This is done by essentially removing these long-tail items and replacing them with more frequent, more generic related items. In particular, we utilize the parsed KG item hierarchy to find related superclass items for the ones we want to replace.

We define an integer threshold f which determines which items to keep as candidates and, accordingly, how to adjust the ground-truth labels: Only those items are further considered that appear at least f times in our train split. However, "appearing" accounts for up to three hops in the KG item hierarchy; e.g. "human" is a rather rare actual label (since usually the concretely depicted person has a Wikidata item which is linked instead), but it is a direct superclass of every specific person's item, so each such specific label also implies one occurrence of "human". In the same way, labels of discarded items get changed to the nearest found superclass item(s). In the unlikely case that no suitable replacement item(s) can be found, the image is simply skipped.
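
The following is an illustrative sketch (not our exact implementation) of this replacement logic. It assumes a hypothetical mapping `superclasses` that lists superclass QIDs per hop for each item (mirroring the superclasses field described in the Dataset Structure section) and a set `frequent` of items that passed the threshold f.

```python
def remap_labels(labels, superclasses, frequent):
    """Keep frequent ground-truth items; replace rare ones by their nearest frequent superclass(es)."""
    remapped = set()
    for qid in labels:
        if qid in frequent:
            remapped.add(qid)
            continue
        # Walk outward hop by hop and stop at the first hop containing frequent items.
        for hop in superclasses.get(qid, []):
            kept = [s for s in hop if s in frequent]
            if kept:
                remapped.update(kept)
                break
    return sorted(remapped)   # an empty result means the image is skipped
```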

In this dataset repository and in our own experiments, we mainly used f=10 as a reasonable requirement for the kept items (only ~18.5k then remain). Additionally, this repository contains all data for f=0, meaning all candidate items are kept and the ground-truth labels remain unchanged. Note that for this dataset we ensured that both f=0 and f=10 comprise the exact same set of images for better comparability of results. For a more detailed explanation of the dataset structure and the individual data fields, take a look at the next section.

## Dataset Structure

This dataset is implemented as a WebDataset (which can either be downloaded in full or processed in a streaming fashion) in order to easily deal with its total size of around 60 GB.

As can be inspected in the Dataset Viewer, this dataset contains three configurations (data subsets) that can be loaded individually:

  1. commons_images: All Commons images incl. their metadata (esp. ground-truth labels), divided into train/validation/test splits (80-10-10).
  2. all_wikidata_items: Information of all candidate Wikidata items (metadata, possibly image, f=0).
  3. frequent_wikidata_items: Information of rather frequent Wikidata items (metadata, possibly image, f=10).
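
A minimal loading sketch using the Hugging Face datasets library is shown below (one possible way to consume the WebDataset, not the only one): the repository id is written as a placeholder, streaming avoids downloading the full ~60 GB, and the two item configs are assumed to load under the library's default "train" split.

```python
from datasets import load_dataset

repo_id = "<user>/<this-dataset>"  # placeholder: use this repository's id

# Stream the image splits instead of downloading everything.
commons_train = load_dataset(repo_id, "commons_images", split="train", streaming=True)

# The item configs are single collections (exposed under the default split name "train").
items_f10 = load_dataset(repo_id, "frequent_wikidata_items", split="train", streaming=True)

example = next(iter(commons_train))
print(example["json"]["img_id"], example["json"]["f10_labels"])
```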

Below you can find a table summarizing some statistics regarding the splits and candidate items:

|                     | f = 0       | f = 10      |
|---------------------|-------------|-------------|
| #images train       | 800,000     | 800,000     |
| (#rows)             | (1,377,684) | (1,498,026) |
| (#gt_items)         | (490,876)   | (17,287)    |
| #images validation  | 100,000     | 100,000     |
| (#rows)             | (195,535)   | (212,885)   |
| (#gt_items)         | (72,055)    | (14,253)    |
| #images test        | 100,000     | 100,000     |
| (#rows)             | (100,000)   | (100,000)   |
| (#gt_items)         | (72,271)    | (14,351)    |
| #items              | 2,305,611   | 18,522      |

Note that the number of rows (or examples) for the train and validation splits is higher than their respective number of images, because many images have more than one ground-truth label and we want to make use of each of them in training and validation mini-batches. So, while the Commons images themselves were randomly shuffled beforehand, users have to ensure this also holds true on the level of individual rows if they do not want all labels of an image to end up in the same mini-batch. #gt_items indicates the number of unique Wikidata items present as ground-truth labels in the respective split (and threshold).
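
For streaming training, an approximate row-level shuffle can be obtained with a buffer, e.g. as follows (buffer size and seed are arbitrary; `commons_train` refers to the loading sketch above):

```python
# Approximate row-level shuffle so that multiple labels of one image
# do not necessarily end up in the same mini-batch.
shuffled_train = commons_train.shuffle(seed=42, buffer_size=10_000)
```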

In the following, the detailed structure and content of every configuration (and split) is described, listing the column names and their subfields where applicable; a short usage sketch follows each listing.

### Commons Images Config

The structure of the train, validation and test splits of commons_images is identical.

  • "__key__": The image's unique Commons page ID. The corresponding Commons media page URL is constructed by https://commons.wikimedia.org/?curid=<ID>.
  • "jpg" and "png": The Commons image itself as a PIL.Image. Since we collect both jpg/jpeg and png images from Commons but HF datasets are required to have the same set of columns per row (unless explicitly stating Features on dataset loading), we keep a "jpg" and a "png" column for every row. On the other hand, the WebDataset library needs a column content that is valid for the according column name for it to get automatically decoded. So, we decide to use the minimal jpg or png image for the image type not actually given in order to limit the required space overhead (which is negligible in relation to the remaining dataset size).
  • "json": All of the image's metadata:
    • img_id: int - the image's Commons page ID (same as __key__),
    • categories: List[string] - the Commons categories associated with the image,
    • description: string - the English image description (empty string if not available),
    • f0_labels: List[int] - the ground-truth item labels (QIDs) for f=0 (i.e. no threshold),
    • f0_label_indices: List[int] - global indices of the f=0 item labels (in the unshuffled all_wikidata_items subset) for easy access,
    • f10_labels: List[int] - the ground-truth item labels (QIDs) for f=10,
    • f10_label_indices: List[int] - global indices of the f=10 item labels (in the unshuffled frequent_wikidata_items subset) for easy access,
    • img_extension: string - the image type of the actual image (as opposed to the placeholder image),
    • img_author: string - the inferred image author or uploader (empty string if not available),
    • img_license: string - the inferred image license stated on Commons (empty string if not available).
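
A short sketch of consuming one commons_images row follows (field names as listed above, `commons_train` from the loading sketch; the extension-to-column mapping is an assumption on our part):

```python
for row in commons_train:
    meta = row["json"]
    # Only one of "jpg"/"png" holds the actual upload; the other is a tiny placeholder.
    image_key = "png" if meta["img_extension"] == "png" else "jpg"
    image = row[image_key]                       # PIL.Image of the uploaded file
    labels = meta["f10_labels"]                  # ground-truth QIDs for f=10
    page_url = f"https://commons.wikimedia.org/?curid={meta['img_id']}"
    break
```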

### Wikidata Items Config

The structure of all_wikidata_items and frequent_wikidata_items is identical.

  • "__key__": The item's unique Wikidata QID. The corresponding Wikidata item page URL is constructed by https://www.wikidata.org/wiki/Q<QID>.
  • "jpg" and "png": The item's first linked image from the image statement - if any -, otherwise both "jpg" and "png" are their respective default files as explained above.
  • "json": All of the item's data and image metadata:
    • qid: int - the item's Wikidata QID (same as __key__),
    • name: string - the English short name of the item (in rare cases empty),
    • description: string - the English item description (in rare cases empty),
    • img_extension: string|null - the image type of the actual image (as opposed to the placeholder image); if null, no actual image is available,
    • img_author: string - the inferred image author or uploader (empty string if not available),
    • img_license: string - the inferred image license stated on Commons (empty string if not available),
    • superclasses: List[List[int]] - superclasses of the item across all candidate items, divided up by the number of hops in the KG item hierarchy.
  • "npy": The pre-trained Wikidata KG embedding of this item, represented as a 200-dimensional float numpy array. If no pre-trained is available, it is filled with zeros.

## Bias, Risks and Limitations

None of the Commons images used in this dataset were filtered by their depicted content, meaning that they might contain violent, explicit or other sensitive content. Accordingly, personal or private data (assumed to be compatible with the policies of the Wikimedia community) might also be present in the dataset.

The ground-truth quality of the dataset might suffer from the fact that the item annotation itself is not unambiguous and that partly contradicting community guidelines exist on what items to add to the depicts statement. We did not refine the ground-truth labels in any way, which is why on rare occasions a label might be unreasonable or even plain wrong.

Since we directly rely on the Wikimedia community to upload images and annotate the depicted Wikidata items, biases present in these upload and annotation behaviors are likely reflected in our dataset, too. This concerns both which images get uploaded and annotated at all (and can, therefore, be part of this dataset) and which items are chosen to be included in the depicts statements - and which are not (especially because in most cases there are plenty of different plausible items to select). No explicit steps were taken to assess or reduce these biases; we rely on the size and diversity of the Wikimedia community itself.

## Citation

BibTeX: TBA

## Dataset & Dataset Card Creators

This dataset was created as part of a university project at the HPI AI & Intelligent Systems chair, under the supervision of Lucie-Aimée Kaffee, Russa Biswas, and Gerard de Melo.

Its creators can be contacted at the following e-mail addresses:

[email protected]
[email protected]
[email protected]
[email protected]
[email protected]