VirtualiZarr

Create virtual Zarr stores for cloud-friendly access to archival data, using familiar xarray syntax.

The best way to distribute large scientific datasets is via the Cloud, in Cloud-Optimized formats [1]. But often this data is stuck in archival pre-Cloud file formats such as netCDF.

VirtualiZarr[2] makes it easy to create “Virtual” Zarr stores, allowing performant access to archival data as if it were in the Cloud-Optimized Zarr format, without duplicating any data.

Motivation

“Virtualized data” solves an incredibly important problem: accessing big archival datasets via a cloud-optimized pattern, but without copying or modifying the original data in any way. This is a win-win-win for users, data engineers, and data providers:

  • Users see fast-opening, Zarr-compliant stores that work performantly with libraries like xarray and dask.

  • Data engineers can provide this speed by adding a lightweight virtualization layer on top of existing data, without having to ask anyone’s permission.

  • Data providers don’t have to change anything about their archival files for them to be used in a cloud-optimized way.

VirtualiZarr aims to make the creation of cloud-optimized virtualized zarr data from existing scientific data as easy as possible.

Features

Inspired by Kerchunk

VirtualiZarr grew out of discussions on the Kerchunk repository, and is an attempt to provide the game-changing power of Kerchunk in a Zarr-native way, with a familiar array-like API.

You now have a choice between the two tools: VirtualiZarr provides almost all of the same features as Kerchunk.
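Because both tools speak the same reference format, VirtualiZarr can also ingest references that Kerchunk has already generated. A minimal sketch, assuming your installed version supports the filetype='kerchunk' option of open_virtual_dataset (check your version's API reference):

from virtualizarr import open_virtual_dataset

# read a pre-existing kerchunk reference file back in as a virtual dataset
vds = open_virtual_dataset('existing_refs.json', filetype='kerchunk')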

Usage

Creating the virtual store looks very similar to how we normally open data with xarray:

import glob

import xarray as xr
from virtualizarr import open_virtual_dataset

# open each netCDF file as a "virtual" dataset of lightweight ManifestArray objects
virtual_datasets = [
    open_virtual_dataset(filepath)
    for filepath in glob.glob('/my/files*.nc')
]

# this Dataset wraps a bunch of virtual ManifestArray objects directly
virtual_ds = xr.combine_nested(virtual_datasets, concat_dim=['time'])

# cache the combined references to disk, in this case using the existing kerchunk specification for reference files
virtual_ds.virtualize.to_kerchunk('combined.json', format='json')
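For very large datasets the JSON output can become unwieldy; the same accessor can also write references in kerchunk’s parquet format (a sketch, assuming your installed version supports format='parquet'):

# same references, stored as parquet to handle large numbers of chunk entries
virtual_ds.virtualize.to_kerchunk('combined.parquet', format='parquet')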

Now you can open your shiny new Zarr store instantly:

ds = xr.open_dataset('combined.json', engine='kerchunk', chunks={})  # normal xarray.Dataset object, wrapping dask/numpy arrays etc.
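From here ds behaves like any other lazy xarray dataset: chunk data is only read from the original netCDF files when a computation actually needs it. A quick sketch (the time coordinate and its values are assumptions about your data):

subset = ds.sel(time='2024-01')   # still lazy - no chunk data has been read yet
result = subset.mean().compute()  # reads only the chunks needed for this computation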

No data has been loaded or copied in the virtualization process; we have merely created an on-disk lookup table that points xarray at the specific parts of the original netCDF files it needs to read for each chunk.
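For illustration, a kerchunk-format reference file is essentially a mapping from Zarr chunk keys to byte ranges within the original files. Stripped down, its entries look roughly like this (the variable name, paths, offsets, and lengths are invented for the example):

# illustrative structure only - real reference files also contain full zarr metadata
references = {
    "refs": {
        ".zgroup": '{"zarr_format": 2}',             # zarr metadata stored inline
        "air/0.0.0": ["/my/files1.nc", 8192, 4096],  # [file path, byte offset, byte length]
        "air/1.0.0": ["/my/files2.nc", 8192, 4096],
    }
}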

See the Usage docs page for more details.

Talks and Presentations

  • 2024/11/21 - MET Office Architecture Guild - Tom Nicholas - Slides

  • 2024/11/13 - Cloud-Native Geospatial conference - Raphael Hagen - Slides

  • 2024/07/24 - ESIP Meeting - Sean Harkins - Event / Recording

  • 2024/05/15 - Pangeo showcase - Tom Nicholas - Event / Recording / Slides

Credits

This package was originally developed by Tom Nicholas whilst working at [C]Worthy, who deserve credit for allowing him to prioritise a generalizable open-source solution to the dataset virtualization problem. VirtualiZarr is now a community-owned multi-stakeholder project.

Licence

Apache 2.0

References