Corrupt Zarr data in dwd-icon-eu dataset
Hi,
I've been trying to access the data via streaming and have encountered what appears to be a fundamental issue with the data files themselves.
File: data/2022/12/29/20221229_00.zarr.zip
Variable: t_2m
The .zarray metadata for this variable claims the data chunks are compressed with blosc2 and should decompress to a size of 456,400 bytes (for a chunk of shape (326, 350) with dtype float32).
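For reference, the metadata can be double-checked locally like this (a sketch, assuming the zip has been downloaded and follows the standard zarr v2 layout, with the array metadata stored at t_2m/.zarray inside the archive):
>>> import json
>>> import zipfile
>>> with zipfile.ZipFile("20221229_00.zarr.zip") as zf:
...     meta = json.loads(zf.read("t_2m/.zarray"))
>>> print(meta["chunks"], meta["dtype"], meta["compressor"])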
However, a direct HTTP range request for the first chunk (t_2m/0.0.0) returns a blob of 5,454,001 bytes. Calling blosc2.decompress on this blob has no effect, which suggests the compression metadata is incorrect.
This leads to ValueError: buffer size must be a multiple of element size when trying to interpret the downloaded bytes as a numpy array, since 5,454,001 is not divisible by 4.
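The final error is easy to reproduce with a stand-in blob of the same length:
>>> import numpy as np
>>> blob = bytes(5454001)  # stand-in for the downloaded chunk bytes
>>> np.frombuffer(blob, dtype=np.float32)
Traceback (most recent call last):
  ...
ValueError: buffer size must be a multiple of element size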
It seems the Zarr metadata in the archive is corrupted or does not match the actual data chunks. Could you please investigate this?
Hi @obwohl,
Are you able to share the HTTP request you are making? So far as I can tell the metadata is correct, but I am loading the data differently: downloading the file and investigating it locally via
>>> import xarray as xr
>>> import zarr
>>> import ocf_blosc2  # registers the ocf_blosc2 codec with numcodecs
>>> store = zarr.storage.ZipStore("/path/to/20221229_12.zarr.zip", mode='r')
>>> ds = xr.open_zarr(store)
>>> print(ds.data_vars["t_2m"].chunks)
((37, 37, 19), (326, 326, 5), (350, 350, 350, 327))
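Note that t_2m is three-dimensional (step, latitude, longitude), so the first chunk has shape (37, 326, 350) and should decompress to 37 × 326 × 350 × 4 = 16,886,800 bytes; the 456,400 figure corresponds to a single 2D (326, 350) slice, not a whole chunk.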
So getting the first chunk as a numpy array can be done via the following (.data returns a lazy dask array; swap .data for .values to get the numpy values):
>>> ds.data_vars["t_2m"].isel(step=slice(0,37), latitude=slice(0,326), longitude=slice(0,350)).data
dask.array<getitem, shape=(37, 326, 350), dtype=float32, chunksize=(37, 326, 350), chunktype=numpy.ndarray>
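If it helps to check the raw chunk without dask in the way, reading directly through zarr should also work (a sketch reusing the store opened above; the ocf_blosc2 import is needed so the codec is registered):
>>> root = zarr.open(store, mode="r")
>>> arr = root["t_2m"]
>>> arr.chunks
(37, 326, 350)
>>> first_chunk = arr[:37, :326, :350]  # decodes chunk 0.0.0 via the registered codec
>>> first_chunk.shape, first_chunk.dtype
((37, 326, 350), dtype('float32'))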