---
annotations_creators:
- machine-generated
language:
- en
language_creators:
- machine-generated
license:
- mit
multilinguality:
- monolingual
pretty_name: United Kingdom PV Solar generation
size_categories:
- 1B<n<10B
---

` on your command line before running the Python snippet above.

(Note that, if you want to use `hive_partitioning` in Polars, then you'll have to wait for [Polars PR #22661](https://github.com/pola-rs/polars/pull/22661) to be merged, or [compile Polars](https://docs.pola.rs/development/contributing/#setting-up-your-local-environment) with that PR. Alternatively, you can use `scan_parquet("hf://datasets/openclimatefix/uk_pv/30_minutely")` without `hive_partitioning` or `hive_schema`.)

### Downloading the dataset

Streaming from Hugging Face is slow, so it isn't practical if you're planning to use a lot of the data. It's best to download the data first. See the [Hugging Face docs on downloading datasets](https://huggingface.co/docs/hub/en/datasets-downloading).

(Note that, on Ubuntu, you can install `git lfs` with `apt install git-lfs`. See [this page](https://github.com/git-lfs/git-lfs#getting-started) for more info on installing `git lfs`.)

Note that, if you want the best performance when filtering by `datetime_GMT`, then you'll want to make sure that you only read the Parquet partitions that contain the data you need. In Polars, you have to explicitly filter by the Hive partitions `year` and `month`, like this:

```python
df.filter(
    # Select the Hive partitions for this sample (which doubles performance!):
    pl.date(pl.col("year"), pl.col("month"), day=1).is_between(
        start_datetime.date().replace(day=1), end_datetime.date().replace(day=1)
    ),
    # Select the precise date range for this sample:
    pl.col("datetime_GMT").is_between(start_datetime, end_datetime),
)
```
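For example, here's a minimal sketch of loading a single date range from a local copy of the dataset. The path and the start/end datetimes below are just placeholders (they are not part of the dataset itself), so adjust them to match your setup:

```python
import pathlib
from datetime import datetime

import polars as pl

# Placeholder path: change this to wherever you downloaded the dataset.
PV_DATA_PATH = pathlib.Path("~/data/uk_pv").expanduser()

# Placeholder date range: any period covered by the dataset will do.
start_datetime = datetime(2021, 6, 1, 0, 0)
end_datetime = datetime(2021, 6, 30, 23, 30)

df = (
    pl.scan_parquet(
        PV_DATA_PATH / "30_minutely",
        hive_schema={"year": pl.Int16, "month": pl.Int8},
    )
    .filter(
        # Prune the year/month Hive partitions so Polars only reads the relevant files:
        pl.date(pl.col("year"), pl.col("month"), day=1).is_between(
            start_datetime.date().replace(day=1),
            end_datetime.date().replace(day=1),
        ),
        # Then select the precise date range:
        pl.col("datetime_GMT").is_between(start_datetime, end_datetime),
    )
    .collect()
)
print(df.head())
```

The first predicate prunes the `year`/`month` Hive partitions so Polars only touches the relevant Parquet files; the second selects the exact `datetime_GMT` range.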
## Known issues

- There is no data at exactly midnight on the first day of each month, for the years 2010 to 2024.
- There are some missing readings. This is inevitable with any large-scale real-world dataset. Sometimes there will be data missing for a single timestep. Sometimes longer periods of readings will be missing (days, weeks, or months).

## Data cleaning

We have already dropped rows with NaNs, and dropped any rows in the half-hourly data where the `generation_Wh` is above 100,000 Wh. We have not performed any more cleaning steps because data cleaning is slightly subjective. And "obviously wrong" data (like an insanely high value) can indicate that all the readings near to that "insane" reading should be dropped.

We recommend the following cleaning steps:

- Remove all the periods of bad data described in `bad_data.csv`. (And feel free to suggest more periods of bad data!)
- Remove all negative `generation_Wh` values. Consider also removing the readings immediately before and after any negative value (for a given `ss_id`). Or, if you're feeling really ruthless (!), drop the entire day of data whenever a solar system produces a negative value during that day.
- For the half-hourly data, remove any rows where the `generation_Wh` is more than that solar system's `kWp x 750` (where `kWp` is from the metadata). (In principle, the highest legal `generation_Wh` should be `kWp x 500`: we multiply by 1,000 to get from kW to watts, and then divide by 2 to get from watts to watt-hours per half hour.) We increase the threshold to 750 because some solar systems do sometimes generate more than their nominal capacity, and/or perhaps the nominal capacity is slightly wrong.
- For each day of data for a specific solar PV system:
  - Remove any day where the `generation_Wh` is non-zero at night.
  - Remove any day where the `generation_Wh` is zero during the day when there are significant amounts of irradiance. (Or, another way to find suspicious data is to compare each PV system's power output with the average of its geospatial neighbours.)

Here's a Python snippet for removing rows where the `generation_Wh` is higher than 750 x `kWp`:

```python
import pathlib
from datetime import date

import polars as pl

# Change these paths!
PV_DATA_PATH = pathlib.Path("~/data/uk_pv").expanduser()
OUTPUT_PATH = pathlib.Path("~/data/uk_pv_cleaned/30_minutely").expanduser()

# Lazily open the source data:
df = pl.scan_parquet(
    PV_DATA_PATH / "30_minutely",
    hive_schema={"year": pl.Int16, "month": pl.Int8},
)

metadata = pl.read_csv(PV_DATA_PATH / "metadata.csv")

# Process one month of data at a time, to limit memory usage:
months = pl.date_range(date(2010, 11, 1), date(2025, 4, 1), interval="1mo", eager=True)
for _first_day_of_month in months:
    output_path = (
        OUTPUT_PATH
        / f"year={_first_day_of_month.year}"
        / f"month={_first_day_of_month.month}"
    )
    output_path.mkdir(parents=True, exist_ok=True)
    (
        df.filter(
            # Select the Parquet partition for this month:
            pl.col.year == _first_day_of_month.year,
            pl.col.month == _first_day_of_month.month,
        )
        .join(metadata.select(["ss_id", "kWp"]).lazy(), on="ss_id", how="left")
        .filter(pl.col.generation_Wh < pl.col.kWp * 750)
        .drop(["year", "month", "kWp"])
        .collect()
        .write_parquet(output_path / "data.parquet", statistics=True)
    )
```

## Citing this dataset

For referencing, you can use the DOI 10.57967/hf/0878, or this full BibTeX reference:

```bibtex
@misc{open_climate_fix_2025,
  author    = { {Open Climate Fix} },
  title     = { uk_pv (Revision ) },
  year      = 2025,
  url       = { https://huggingface.co/datasets/openclimatefix/uk_pv },
  doi       = { 10.57967/hf/0878 },
  publisher = { Hugging Face }
}
```

## Useful links

- https://huggingface.co/docs/datasets/share - this repo was made by following this tutorial.