Proposal: Pivot the PV data so it's smaller & easier to query

#14 opened by Jack-Kelly (Open Climate Fix org)

The way the data is structured at the moment

The data in the parquet files is currently stored in a very "tall and thin" shape, like this:

| ss_id (i64) | datetime_GMT         | generation_Wh (f64) |
|-------------|----------------------|---------------------|
| 2405        | 2018-01-01T00:30:00Z | 0.0                 |
| 2406        | 2018-01-01T00:30:00Z | 0.0                 |
| 2408        | 2018-01-01T00:30:00Z | 0.0                 |
| 2409        | 2018-01-01T00:30:00Z | 0.0                 |
| 2410        | 2018-01-01T00:30:00Z | 0.0                 |

Proposed data structure

I'd like to propose that we pivot the data, to get a very wide data structure like this:

| datetime_GMT         | 2405 | 2406 | 2408 | 2409 | 2410 | … (and so on, for ~25,000 columns) |
|----------------------|------|------|------|------|------|------------------------------------|
| 2018-01-01T00:30:00Z | 0.0  | 0.0  | 0.0  | 0.0  | 0.0  | …                                  |
| 2018-01-01T01:00:00Z | 0.0  | 0.0  | 0.0  | 0.0  | 0.0  | …                                  |
| 2018-01-01T01:30:00Z | 0.0  | 0.0  | 0.0  | 0.0  | 0.0  | …                                  |
| 2018-01-01T02:00:00Z | 0.0  | 0.0  | 0.0  | 0.0  | 0.0  | …                                  |
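
For concreteness, here is a minimal sketch of how such a pivot could be produced with pandas (the file names are hypothetical; the column names match the tables above):

```python
import pandas as pd

tall = pd.read_parquet("pv_2018.parquet")  # hypothetical file name

# One row per timestamp, one column per PV system.
wide = tall.pivot(index="datetime_GMT", columns="ss_id", values="generation_Wh")

# Parquet requires string column names, so cast the integer SS_IDs.
wide.columns = wide.columns.astype(str)
wide.to_parquet("pv_2018_pivoted.parquet")
```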

Benefits of the pivoted structure

Benefit 1: Easier & faster to query

A large benefit is that the pivoted shape is easier to query (less code to write) and much faster to query (the dataframe library no longer has to scan every row for every query). One of the nice features of Parquet files is that users can lazily open a large Parquet file and load only the columns they need into memory. But users can only take advantage of this feature if the data is laid out so that a subset of the columns answers their question. In the "tall and thin" shape, even if users only want data for a single PV system, they must still read every row into memory. The sketch below illustrates the difference.
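
A minimal sketch of that difference, assuming pandas and hypothetical file names:

```python
import pandas as pd

# Tall and thin: extracting one PV system means reading and scanning every row.
tall = pd.read_parquet("pv_2018.parquet")  # hypothetical file name
one_system = tall.loc[tall["ss_id"] == 2405, ["datetime_GMT", "generation_Wh"]]

# Pivoted: Parquet lets us load just the one column we care about.
one_system = pd.read_parquet("pv_2018_pivoted.parquet", columns=["2405"])
```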

Benefit 2: Smaller!

The pivoted shape requires about a sixth of the storage space, which makes it cheaper to store and faster to download.

A year (2018) of 30-minutely PV data takes up this much space:

Compressed and using 64-bit numerical types:

| shape       | size     | compression ratio |
|-------------|----------|-------------------|
| tall & thin | 2,700 MB | 3.9x              |
| pivoted     | 469 MB   | 7.5x              |

Uncompressed (in memory) and using 64-bit numerical types:

| shape       | size    |
|-------------|---------|
| tall & thin | 10.5 GB |
| pivoted     | 3.5 GB  |

Using more concise data types:

| shape       | size     | compressed?  | data types                                           | compression ratio |
|-------------|----------|--------------|------------------------------------------------------|-------------------|
| pivoted     | 430 MB   | compressed   | float32                                              | 4.2x              |
| tall & thin | 6,100 MB | uncompressed | ss_id: uint16, datetime: uint64, generation_Wh: float32 | —              |
| pivoted     | 1,800 MB | uncompressed | float32                                              | —                 |
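
A minimal sketch of applying those more concise types with pandas (my assumptions: every SS_ID fits in 16 bits, the uint64 datetime is nanoseconds since the Unix epoch, and the file name is hypothetical):

```python
import pandas as pd

tall = pd.read_parquet("pv_2018.parquet")  # hypothetical file name

# Downcast to the more concise types in the table above.
tall["ss_id"] = tall["ss_id"].astype("uint16")  # assumes every SS_ID < 65,536
tall["generation_Wh"] = tall["generation_Wh"].astype("float32")

# One possible uint64 encoding of the timestamps: nanoseconds since the epoch.
tall["datetime_GMT"] = tall["datetime_GMT"].astype("int64").astype("uint64")
```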

Why is the pivoted data structure smaller?

In the "tall and thin" shape, we repeat each SS_ID 17,520 times per year of data (the number of half hours in a year), and we repeat each datetime label 25,000 times. In constrast, in the pivoted structure, we're only storing each datetime label once, and we're only storing each SS_ID once.

Benefit 3: Parquet is columnar. So our data should be columnar.

In the "tall and thin" shape, we're not really storing data in the way Parquet is designed to store data. Parquet is a columnar format (see WikiPedia for a concise explanation of what "columnar" means). In short: Parquet works best when each column represents a contiguous, ordered sequence of measurements from a single object.

Disadvantages to using a pivoted structure

The only downside I can think of is that it's a different data structure, so people who are expecting the old data structure might get a shock! (And we'd have to change OCF's data-prep code.)

Conclusions

So, should we pivot all the Parquet files in this dataset?

Please let us know your thoughts!

Jack-Kelly (Open Climate Fix org):

I'm now thinking that we should leave the shape as-is in this repo. And I'll publish a cleaned and pivoted dataset elsewhere (see issue #16)

Jack-Kelly (Open Climate Fix org):

Actually, I'm going off the idea of pivoting the data to a very wide format (where each SS_ID is its own column). I now think it may be better to keep the "tall and thin" format, but sort by SS_ID and datetime, and maybe use Parquet's partitioning (perhaps one partition per SS_ID).... I'll do some experiments...

Jack-Kelly (Open Climate Fix org):

The experiments show that it's best to keep the "tall and thin" format, but to partition by month (using Hive partition naming) and, crucially, to sort by SS_ID first and then by datetime. Just as crucially, each Parquet file should store "full" statistics.
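
A minimal sketch of that final layout with pandas/pyarrow (directory and file names are hypothetical; pyarrow writes per-column min/max statistics by default, which is what lets readers skip row groups):

```python
import pandas as pd

tall = pd.read_parquet("pv_2018.parquet")  # hypothetical file name

# Sort by SS_ID first, then by datetime, so each system's readings are
# contiguous and the row-group statistics on ss_id become selective.
tall = tall.sort_values(["ss_id", "datetime_GMT"])

# Hive-style month partitions: pv/month=2018-01/..., pv/month=2018-02/...
tall["month"] = tall["datetime_GMT"].dt.strftime("%Y-%m")
tall.to_parquet("pv/", partition_cols=["month"])
```

With this layout, a query for one SS_ID can prune whole months via the partition names and skip most row groups within each file via the ss_id statistics.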

Jack-Kelly changed discussion status to closed
