How did you collect URLs?

#4
by images9 - opened

Hi!
Thank you for your fantastic work! I'm so impressed!

I want to create my own dataset by using the Flickr API to extract the URLs and metadata of all CC-BY-licensed content on Flickr. According to the public license counts on Flickr's website, it seems I should be able to build a dataset of about 95 million items.

I am running the code below, trying to extract URLs in ascending order of upload date using 'date-posted-asc'. However, the search returns far fewer results than expected, and I cannot collect anywhere near the predicted number of URLs.

Which API and arguments did you use to extract these URLs? (Or do you have any plans to make a CC-BY version of this dataset? haha)

Thank you!

from flickrapi import FlickrAPI, exceptions
import json
import time
import zipfile
from tqdm import tqdm

key = "xxxxxxx"
secret = "xxxxxx"
wait_time = 0.0  # seconds to sleep between API calls
flickr = FlickrAPI(key, secret, format='parsed-json')

per_page = 500
now = time.time() - 60 * 60 * 24  # stop once we reach photos uploaded within the last 24 hours

# load checkpoint
with open('meta/count') as f:
    count = int(f.read())
with open('meta/min_upload_date') as f:
    min_upload_date = f.read().strip()

while int(min_upload_date) < now:
    try:
        for page in tqdm(range(1, 5)):
            zip_path = f'./data/{count:09n}.zip'
            json_path = f'./{count:09n}.json'
            result = flickr.photos.search(
                page=page,
                per_page=per_page,
                media='photos',
                sort='date-posted-asc',
                safe_search=1,
                license=4,  # CC-BY
                min_upload_date=min_upload_date,
                extras='url_h, url_m, url_s, license, date_upload, description'
            )
            photos = result['photos']
            json_data = json.dumps(photos)
            with zipfile.ZipFile(zip_path, 'w', zipfile.ZIP_DEFLATED) as zipf:
                zipf.writestr(json_path, json_data)
            count += per_page
            time.sleep(wait_time)
        # advance the window to the upload time of the last photo returned
        min_upload_date = photos['photo'][-1]['dateupload']
    except exceptions.FlickrError:
        time.sleep(10)
        continue

    # write checkpoint
    with open('meta/min_upload_date', mode='w') as f:
        f.write(min_upload_date)
    with open('meta/count', mode='w') as f:
        f.write(str(count))

I'm not planning on a CC-BY version of Megalith. I think https://huggingface.co/datasets/common-canvas/commoncatalog-cc-by provides a reasonable baseline for CC-BY training (though it's a subset of YFCC from 2014, not the full Flickr API output).

To collect URLs I used the same flickr.photos.search API you're using.

Specifically, I started from a search of the entire time window (ending in 2022, to avoid AI slop) and recursively subdivided a tree of search queries until there was only a single page of results for that search (at which point I would process that "leaf node" and save the results). This worked around the non-uniform distribution of timestamps as well as the 4000-max-results-per-search issue. I also used thread parallelism on the top few levels of the tree to speed things up.
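
Roughly, the subdivision looks something like the sketch below. This is not the exact code I ran, just an illustration under a few assumptions: flickr is a parsed-json FlickrAPI client as in your snippet, save_results is a hypothetical function that writes one page of metadata wherever you want, and the start/end timestamps are placeholders.

def search_window(min_ts, max_ts, page=1):
    # one flickr.photos.search call restricted to uploads in [min_ts, max_ts]
    return flickr.photos.search(
        media='photos',
        license=4,  # CC-BY; change or drop this to match the licenses you want
        sort='date-posted-asc',
        min_upload_date=min_ts,
        max_upload_date=max_ts,
        per_page=500,
        page=page,
        extras='url_h, url_m, url_s, license, date_upload, description'
    )['photos']

def crawl(min_ts, max_ts):
    result = search_window(min_ts, max_ts)
    if result['pages'] <= 1:
        # leaf node: everything in this window fits on a single page of results
        save_results(result['photo'])
        return
    # too many results for one page: split the window in half and recurse
    mid = (min_ts + max_ts) // 2
    crawl(min_ts, mid)
    crawl(mid + 1, max_ts)

crawl(1072915200, 1640995200)  # placeholder window: ~Flickr's launch (2004) up to an early-2022 cutoff

You'd still want retries, rate limiting, and the thread parallelism mentioned above on the top few levels of the tree, but that's the basic shape.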

I tracked progress as the proportion of the total time range that had been successfully processed.
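
Something like this (again just a sketch, reusing the placeholder window from the snippet above): each time a leaf window is saved, add its width to a running total and print the covered fraction of the full range.

START, END = 1072915200, 1640995200  # same placeholder crawl window as above
processed_seconds = 0

def mark_leaf_done(min_ts, max_ts):
    # call this after a leaf window has been processed and saved
    global processed_seconds
    processed_seconds += max_ts - min_ts
    print(f'progress: {processed_seconds / (END - START):.2%}')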

You can probably do something similar with conventional Flickr scrapers like https://github.com/ultralytics/flickr_scraper, but I wrote my own from scratch for the educational value.

Hi.
Thank you for your quick response!
I was able to reproduce it!
Thank you!

images9 changed discussion status to closed
