Datasets:
Question: Where should a cleaned version of uk_pv be published?
I plan to clean this dataset (remove spurious values, remove PV systems that misbehave, etc.). I'll document the cleaning process. I'll also publish the cleaned dataset.
My question is: Where should I publish the cleaned dataset?
I definitely don't think that the cleaned dataset should replace the raw dataset.
I guess the two options are:
- We publish the cleaned dataset in this repo, or,
- I publish the cleaned dataset in a separate repo (and possibly on Google Cloud, rather than HF), and provide links in the READMEs between the two datasets.
Advantages of keeping the cleaned dataset in this repo:
- Easier for people to find.
Advantages of publishing the cleaned dataset somewhere else:
- I think the digital object identifier (DOI) should only point to one version of the dataset. This feels cleaner and less likely to cause confusion.
- I probably wouldn't plan to update the cleaned dataset as regularly as the raw dataset is updated.
- I may also split the cleaned dataset into "train" and "test".
My current thinking is that I should publish a "cleaned" version of this dataset somewhere else. Maybe a new HF repo. Or maybe just on Google Cloud. This cleaned version would also be presented as a single parquet file in a "very wide" format (with each SS_ID as a new column), and with minimal data types.
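For concreteness, here is a minimal sketch of what that "very wide" export might look like, assuming the raw data is a long table with `ss_id`, `timestamp`, and `generation_wh` columns and that pandas is used (the column names and library choice are assumptions, not the repo's actual schema):

```python
import pandas as pd

def to_wide_parquet(long_df: pd.DataFrame, out_path: str) -> None:
    """Pivot to one column per PV system and write a single parquet file."""
    # One row per timestamp, one column per SS_ID.
    wide = long_df.pivot(index="timestamp", columns="ss_id", values="generation_wh")
    # Use minimal dtypes to keep the file small; float32 is plenty for Wh readings.
    wide = wide.astype("float32")
    # Parquet requires string column names.
    wide.columns = wide.columns.astype(str)
    wide.to_parquet(out_path)

# Usage: to_wide_parquet(raw_long_df, "uk_pv_cleaned.parquet")
```

Casting to float32 roughly halves the file size relative to float64, with negligible precision loss for generation readings.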
The conclusion from a discussion with Peter and Sol is that I should aim to include a simple little Python script with this repo which removes "bad" data.
We don't want to completely replace the original files with the "cleaned" files, just in case people need access to the raw data. And cleaning is slightly subjective.
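As a starting point for that script, here is a rough sketch of what removing "bad" data could look like, again assuming a long-format table with `ss_id` and `generation_wh` columns; the column names, thresholds, and specific rules are illustrative only, since what counts as "bad" is still to be decided:

```python
import pandas as pd

def drop_bad_data(df: pd.DataFrame, max_wh: float = 5_000_000) -> pd.DataFrame:
    """Remove spurious readings and misbehaving PV systems (illustrative rules only)."""
    # Drop physically impossible readings (negative or absurdly large).
    df = df[(df["generation_wh"] >= 0) & (df["generation_wh"] <= max_wh)]
    # Drop systems whose readings never vary (a common sign of a stuck meter).
    stddev = df.groupby("ss_id")["generation_wh"].transform("std")
    df = df[stddev > 0]
    return df
```

Keeping the rules in one small, documented function like this makes the (slightly subjective) cleaning choices easy to inspect and reproduce against the raw data.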