Rechunker Tutorial

This tutorial notebook explains how to use rechunker with real datasets. We also use xarray to make some things easier and prettier, but note that xarray is not a dependency of rechunker.

Toy Example

Create Example Data

Here we load one of xarray’s tutorial datasets and write it to Zarr. This is not actually a big dataset, so rechunker is not really needed here. But it’s a convenient example.

[1]:
import xarray as xr
xr.set_options(display_style='text')
import zarr
import dask.array as dsa

ds = xr.tutorial.open_dataset("air_temperature")
# create initial chunk structure
ds = ds.chunk({'time': 100})
ds.air.encoding = {}  # clear encoding inherited from the source file; avoids conflicts when writing to Zarr
ds
[1]:
<xarray.Dataset>
Dimensions:  (lat: 25, time: 2920, lon: 53)
Coordinates:
  * lat      (lat) float32 75.0 72.5 70.0 67.5 65.0 ... 25.0 22.5 20.0 17.5 15.0
  * lon      (lon) float32 200.0 202.5 205.0 207.5 ... 322.5 325.0 327.5 330.0
  * time     (time) datetime64[ns] 2013-01-01 ... 2014-12-31T18:00:00
Data variables:
    air      (time, lat, lon) float32 dask.array<chunksize=(100, 25, 53), meta=np.ndarray>
Attributes:
    Conventions:  COARDS
    title:        4x daily NMC reanalysis (1948)
    description:  Data is from NMC initialized reanalysis\n(4x/day).  These a...
    platform:     Model
    references:   http://www.esrl.noaa.gov/psd/data/gridded/data.ncep.reanaly...

We can examine the chunk structure of the data variable using Dask’s pretty Array repr.

[2]:
ds.air.data
[2]:
        Array           Chunk
Bytes   14.76 MiB       517.58 kiB
Shape   (2920, 25, 53)  (100, 25, 53)
Count   31 Tasks        30 Chunks
Type    float32         numpy.ndarray
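The Chunk column of the repr can be verified by hand: one (100, 25, 53) float32 chunk occupies 100 × 25 × 53 × 4 bytes. A quick check with plain NumPy (no rechunker required):

```python
import numpy as np

# Bytes per chunk for the (100, 25, 53) float32 chunks shown above
chunk_shape = (100, 25, 53)
itemsize = np.dtype("float32").itemsize  # 4 bytes per element
chunk_bytes = int(np.prod(chunk_shape)) * itemsize
print(chunk_bytes, chunk_bytes / 1024)  # 530000 bytes, ~517.58 KiB
```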
[3]:
! rm -rf *.zarr # clean up any existing temporary data
ds.to_zarr('air_temperature.zarr')
[3]:
<xarray.backends.zarr.ZarrStore at 0x16c135070>

Now we open up a Zarr Group and Array that we will use as inputs to rechunker.

[4]:
source_group = zarr.open('air_temperature.zarr')
print(source_group.tree())
/
 ├── air (2920, 25, 53) float32
 ├── lat (25,) float32
 ├── lon (53,) float32
 └── time (2920,) float32
[5]:
source_array = source_group['air']
source_array.info
[5]:
Name               : /air
Type               : zarr.core.Array
Data type          : float32
Shape              : (2920, 25, 53)
Chunk shape        : (100, 25, 53)
Order              : C
Read-only          : False
Compressor         : Blosc(cname='lz4', clevel=5, shuffle=SHUFFLE, blocksize=0)
Store type         : zarr.storage.DirectoryStore
No. bytes          : 15476000 (14.8M)
No. bytes stored   : 9005544 (8.6M)
Storage ratio      : 1.7
Chunks initialized : 30/30

Rechunk a single Array

The original array has chunks of (100, 25, 53). Let’s rechunk it to be contiguous in time, but chunked in space. We specify a small value of max_mem in order to force rechunker to create an intermediate dataset. We also have to specify a place to store the final and intermediate data.
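One way to think about max_mem: it caps the memory each copy task may use, so at a minimum a single target chunk must fit within it. A rough back-of-the-envelope check (assuming float32 data and Dask's convention that '1MB' means 10**6 bytes):

```python
import numpy as np

# Sanity check: one (2920, 25, 1) float32 target chunk vs. max_mem = '1MB'
target_chunks = (2920, 25, 1)
itemsize = np.dtype("float32").itemsize  # 4 bytes per element
chunk_bytes = int(np.prod(target_chunks)) * itemsize
print(chunk_bytes)  # 292000 bytes, comfortably under the 1,000,000-byte budget
```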

We use the rechunk function, which returns a Rechunked object.

[6]:
from rechunker import rechunk

target_chunks = (2920, 25, 1)
max_mem = '1MB'

target_store = 'air_rechunked.zarr'
temp_store = 'air_rechunked-tmp.zarr'

array_plan = rechunk(source_array, target_chunks, max_mem, target_store, temp_store=temp_store)
array_plan
[6]:

Rechunked
Source:        <zarr.core.Array (2920, 25, 53) float32>
Intermediate:  <zarr.core.Array (2920, 25, 53) float32>
Target:        <zarr.core.Array (2920, 25, 53) float32>

Since this array has named dimensions (written to the Zarr attributes by xarray), we can also specify the target chunks using a dictionary syntax.

[7]:
target_chunks_dict = {'time': 2920, 'lat': 25, 'lon': 1}

# need to remove the existing stores or it won't work
!rm -rf air_rechunked.zarr air_rechunked-tmp.zarr
array_plan = rechunk(source_array, target_chunks_dict, max_mem, target_store, temp_store=temp_store)
array_plan
[7]:

Rechunked
Source:        <zarr.core.Array (2920, 25, 53) float32>
Intermediate:  <zarr.core.Array (2920, 25, 53) float32>
Target:        <zarr.core.Array (2920, 25, 53) float32>

The array_plan is a Rechunked object. It has not actually performed the rechunking yet. To do this, we need to call the execute method. This will use Dask to perform the rechunking.

[8]:
result = array_plan.execute()
result.chunks
_copy_chunk((slice(0, 100, None), slice(0, 25, None), slice(0, 53, None)))
_copy_chunk((slice(100, 200, None), slice(0, 25, None), slice(0, 53, None)))
...
_copy_chunk((slice(0, 2920, None), slice(0, 25, None), slice(48, 51, None)))
_copy_chunk((slice(0, 2920, None), slice(0, 25, None), slice(51, 53, None)))
[8]:
(2920, 25, 1)

By default, Dask will use the multi-threaded scheduler. Since rechunking can take a long time, we might want to use a progress bar.

[9]:
from dask.diagnostics import ProgressBar

with ProgressBar():
    array_plan.execute()
[                                        ] | 0% Completed |  0.0s
_copy_chunk((slice(0, 100, None), slice(0, 25, None), slice(0, 53, None)))
...
_copy_chunk((slice(0, 2920, None), slice(0, 25, None), slice(51, 53, None)))
[########################################] | 100% Completed |  0.4s

If we create a distributed cluster, then rechunker will use that when it executes.

[10]:
from dask.distributed import Client, LocalCluster, progress

cluster = LocalCluster()
client = Client(cluster)
[11]:
# `Rechunked` objects do not implement `persist`; call `execute` instead,
# which will run the rechunking on the distributed cluster created above.
result = array_plan.execute()

Now that it is written to disk, we can open the rechunked array however we please. Using Zarr…

[12]:
target_array = zarr.open('air_rechunked.zarr')
target_array
[12]:
<zarr.core.Array (2920, 25, 53) float32>

…or Dask

[13]:
target_array_dask = dsa.from_zarr('air_rechunked.zarr')
target_array_dask
[13]:
        Array           Chunk
Bytes   14.76 MiB       285.16 kiB
Shape   (2920, 25, 53)  (2920, 25, 1)
Count   54 Tasks        53 Chunks
Type    float32         numpy.ndarray
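Rechunking changes the layout of the data, never its values. A quick self-contained illustration of that invariant with a small in-memory Dask array (not the Zarr stores above):

```python
import numpy as np
import dask.array as dsa

data = np.arange(292 * 5 * 7, dtype="float32").reshape(292, 5, 7)
a = dsa.from_array(data, chunks=(10, 5, 7))  # "source"-style chunking
b = a.rechunk((292, 5, 1))                   # "target"-style chunking
assert bool((a == b).all().compute())        # identical values, new layout
print(b.chunks)  # ((292,), (5,), (1, 1, 1, 1, 1, 1, 1))
```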

Rechunk a Group

In the example above, we only rechunked a single array. We can open it with Dask, but not Xarray, because it doesn’t contain any coordinates or metadata.

Rechunker also supports rechunking entire groups. In this case, target_chunks must be a dictionary.

[14]:
target_chunks = {
    'air': {'time': 2920, 'lat': 25, 'lon': 1},
    'time': None, # don't rechunk this array
    'lon': None,
    'lat': None,
}
max_mem = '1MB'

target_store = 'group_rechunked.zarr'
temp_store = 'group_rechunked-tmp.zarr'

# need to remove the existing stores or it won't work
!rm -rf group_rechunked.zarr group_rechunked-tmp.zarr
array_plan = rechunk(source_group, target_chunks, max_mem, target_store, temp_store=temp_store)
array_plan
[14]:

Rechunked
Source:        <zarr.hierarchy.Group '/'>
Intermediate:  <zarr.hierarchy.Group '/'>
Target:        <zarr.hierarchy.Group '/'>

[15]:
array_plan.execute()
_copy_chunk((slice(1500, 1600, None), slice(0, 25, None), slice(0, 53, None)))
_copy_chunk((slice(1600, 1700, None), slice(0, 25, None), slice(0, 53, None)))
...
_copy_chunk((slice(0, 2920, None), slice(0, 25, None), slice(27, 30, None)))
[15]:
<zarr.hierarchy.Group '/'>

Now that we have written a group, we can open it back up with Xarray.

[16]:
xr.open_zarr('group_rechunked.zarr')
/var/folders/7c/cchjc_ys3z5_33vyp640xycm0000gn/T/ipykernel_23789/4235005900.py:1: RuntimeWarning: Failed to open Zarr store with consolidated metadata, falling back to try reading non-consolidated metadata. This is typically much slower for opening a dataset. To silence this warning, consider:
1. Consolidating metadata in this existing store with zarr.consolidate_metadata().
2. Explicitly setting consolidated=False, to avoid trying to read consolidate metadata, or
3. Explicitly setting consolidated=True, to raise an error in this case instead of falling back to try reading non-consolidated metadata.
  xr.open_zarr('group_rechunked.zarr')
[16]:
<xarray.Dataset>
Dimensions:  (time: 2920, lat: 25, lon: 53)
Coordinates:
  * lat      (lat) float32 75.0 72.5 70.0 67.5 65.0 ... 25.0 22.5 20.0 17.5 15.0
  * lon      (lon) float32 200.0 202.5 205.0 207.5 ... 322.5 325.0 327.5 330.0
  * time     (time) datetime64[ns] 2013-01-01 ... 2014-12-31T18:00:00
Data variables:
    air      (time, lat, lon) float32 dask.array<chunksize=(2920, 25, 1), meta=np.ndarray>
Attributes:
    Conventions:  COARDS
    description:  Data is from NMC initialized reanalysis\n(4x/day).  These a...
    platform:     Model
    references:   http://www.esrl.noaa.gov/psd/data/gridded/data.ncep.reanaly...
    title:        4x daily NMC reanalysis (1948)

Often groups have many variables sharing all or a subset of their dimensions. In the common case that a given dimension should be chunked the same way in every variable that contains it, target_chunks can be provided as a simpler dictionary mapping dimension names to chunk sizes.

[17]:
# extend the dataset with some more variables
ds_complex = ds
ds_complex['air_slice'] = ds.air.isel(lat=10)
ds_complex['air_timeseries'] = ds.air.isel(lat=10, lon=10)
ds_complex

target_chunks = {'time': 2920, 'lat': 25, 'lon': 1}
max_mem = '1MB'

target_store = 'group_complex_rechunked.zarr'
temp_store = 'group_complex_rechunked-tmp.zarr'

# need to remove the existing stores or it won't work
!rm -rf group_complex_rechunked.zarr group_complex_rechunked-tmp.zarr

# rechunk directly from dataset this time
array_plan = rechunk(ds_complex, target_chunks, max_mem, target_store, temp_store=temp_store)
array_plan
[17]:

Rechunked
Source:
<xarray.Dataset>
Dimensions:         (lat: 25, time: 2920, lon: 53)
Coordinates:
  * lat             (lat) float32 75.0 72.5 70.0 67.5 ... 22.5 20.0 17.5 15.0
  * lon             (lon) float32 200.0 202.5 205.0 207.5 ... 325.0 327.5 330.0
  * time            (time) datetime64[ns] 2013-01-01 ... 2014-12-31T18:00:00
Data variables:
    air             (time, lat, lon) float32 dask.array<chunksize=(100, 25, 53), meta=np.ndarray>
    air_slice       (time, lon) float32 dask.array<chunksize=(100, 53), meta=np.ndarray>
    air_timeseries  (time) float32 dask.array<chunksize=(100,), meta=np.ndarray>
Attributes:
    Conventions:  COARDS
    title:        4x daily NMC reanalysis (1948)
    description:  Data is from NMC initialized reanalysis\n(4x/day).  These a...
    platform:     Model
    references:   http://www.esrl.noaa.gov/psd/data/gridded/data.ncep.reanaly...
Intermediate:  <zarr.hierarchy.Group '/'>
Target:        <zarr.hierarchy.Group '/'>

[18]:
array_plan.execute()
_copy_chunk((slice(1500, 1600, None), slice(0, 25, None), slice(0, 53, None)))
_copy_chunk((slice(1600, 1700, None), slice(0, 25, None), slice(0, 53, None)))
...
_copy_chunk((slice(0, 2920, None), slice(0, 25, None), slice(27, 30, None)))
[18]:
<zarr.hierarchy.Group '/'>
[19]:
xr.open_zarr('group_complex_rechunked.zarr')
/var/folders/7c/cchjc_ys3z5_33vyp640xycm0000gn/T/ipykernel_23789/3867248564.py:1: RuntimeWarning: Failed to open Zarr store with consolidated metadata, falling back to try reading non-consolidated metadata. This is typically much slower for opening a dataset. To silence this warning, consider:
1. Consolidating metadata in this existing store with zarr.consolidate_metadata().
2. Explicitly setting consolidated=False, to avoid trying to read consolidate metadata, or
3. Explicitly setting consolidated=True, to raise an error in this case instead of falling back to try reading non-consolidated metadata.
  xr.open_zarr('group_complex_rechunked.zarr')
[19]:
<xarray.Dataset>
Dimensions:         (time: 2920, lat: 25, lon: 53)
Coordinates:
  * lat             (lat) float32 75.0 72.5 70.0 67.5 ... 22.5 20.0 17.5 15.0
  * lon             (lon) float32 200.0 202.5 205.0 207.5 ... 325.0 327.5 330.0
  * time            (time) datetime64[ns] 2013-01-01 ... 2014-12-31T18:00:00
Data variables:
    air             (time, lat, lon) float32 dask.array<chunksize=(2920, 25, 1), meta=np.ndarray>
    air_slice       (time, lon) float32 dask.array<chunksize=(2920, 1), meta=np.ndarray>
    air_timeseries  (time) float32 dask.array<chunksize=(2920,), meta=np.ndarray>
Attributes:
    Conventions:  COARDS
    description:  Data is from NMC initialized reanalysis\n(4x/day).  These a...
    platform:     Model
    references:   http://www.esrl.noaa.gov/psd/data/gridded/data.ncep.reanaly...
    title:        4x daily NMC reanalysis (1948)

Note that all the variables now have the same time chunks. Other dimensions (if they exist) also have consistent chunks.
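Under the hood, a dimension-name mapping like this must be expanded into a per-variable chunk tuple by matching each variable's dimensions. A minimal sketch of that expansion (illustrative only, not rechunker's internal implementation):

```python
# Illustrative expansion of a {dim: chunksize} mapping into per-variable chunk tuples
dim_chunks = {"time": 2920, "lat": 25, "lon": 1}
var_dims = {
    "air": ("time", "lat", "lon"),
    "air_slice": ("time", "lon"),
    "air_timeseries": ("time",),
}
per_var = {name: tuple(dim_chunks[d] for d in dims) for name, dims in var_dims.items()}
print(per_var)
# {'air': (2920, 25, 1), 'air_slice': (2920, 1), 'air_timeseries': (2920,)}
```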

Cloud Example

In this example we use real data from Pangeo’s Cloud Data Catalog. This dataset is stored in Google Cloud Storage. We also use a Dask Gateway distributed cluster to scale up our processing. This part of the tutorial won’t work for you unless you are in a Pangeo Cloud environment or binder.

[31]:
from dask_gateway import GatewayCluster
cluster = GatewayCluster()
cluster.scale(20)
cluster
---------------------------------------------------------------------------
ModuleNotFoundError                       Traceback (most recent call last)
/var/folders/7c/cchjc_ys3z5_33vyp640xycm0000gn/T/ipykernel_22645/578470573.py in <module>
----> 1 from dask_gateway import GatewayCluster
      2 cluster = GatewayCluster()
      3 cluster.scale(20)
      4 cluster

ModuleNotFoundError: No module named 'dask_gateway'
[ ]:
from dask.distributed import Client
client = Client(cluster)
client
[ ]:
import gcsfs
# a zarr group lives here
url = 'gs://pangeo-cmems-duacs'
gcs = gcsfs.GCSFileSystem(requester_pays=True)
source_store = gcs.get_mapper(url)

Open Zarr Array

[ ]:
group = zarr.open_consolidated(source_store, mode='r')
source_array = group['sla']
source_array
[ ]:
source_array.chunks

Make a Rechunking Plan

[ ]:
max_mem = '1GB'
target_chunks = (8901, 72, 72)
# you must have write access to this location
store_tmp = gcs.get_mapper('pangeo-scratch/rabernat/rechunker_demo/temp.zarr')
store_target = gcs.get_mapper('pangeo-scratch/rabernat/rechunker_demo/target.zarr')
r = rechunk(source_array, target_chunks, max_mem,
                      store_target, temp_store=store_tmp)
r

Execute the Plan

[ ]:
result = r.execute()
result
[ ]:
dsa.from_zarr(result)