Commit 29bf0db

Author: Georgy Ayzel
docstrings and docs have been updated

1 parent 622bf18 commit 29bf0db
File tree

7 files changed: +626 -191 lines changed


docs/metrics.rst

+43
@@ -1,2 +1,45 @@
Metrics
=======


The ``rainymotion`` library provides an extensive list of goodness-of-fit statistical metrics to evaluate the performance of nowcasting models.


================ =====================================
Metric           Description
================ =====================================
**Regression**
R                Correlation coefficient
R2               Coefficient of determination
RMSE             Root mean squared error
MAE              Mean absolute error
**QPN specific**
CSI              Critical Success Index
FAR              False Alarm Rate
POD              Probability Of Detection
HSS              Heidke Skill Score
ETS              Equitable Threat Score
BSS              Brier Skill Score
**ML specific**
ACC              Accuracy
precision        Precision
recall           Recall
FSC              F1-score
MCC              Matthews Correlation Coefficient
================ =====================================


You can easily use any metric for the verification of your nowcasts:

.. code-block:: python

    import numpy as np

    # import the specific metric from the rainymotion library
    from rainymotion.metrics import CSI

    # read your observations and simulations
    obs = np.load("/path/to/observations")
    sim = np.load("/path/to/simulations")

    # calculate the corresponding metric
    csi = CSI(obs, sim, threshold=1.0)
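
The categorical (QPN and ML specific) metrics require a rain/no-rain ``threshold``, while the regression metrics compare rainfall intensities directly. As a minimal sketch (assuming the regression metrics accept the same ``(obs, sim)`` arrays as the ``CSI`` call above), several metrics can be computed in one pass:

.. code-block:: python

    import numpy as np

    # sketch: MAE and RMSE are assumed to take (obs, sim) arrays,
    # analogously to the CSI call shown above
    from rainymotion.metrics import MAE, RMSE, CSI

    obs = np.load("/path/to/observations")
    sim = np.load("/path/to/simulations")

    results = {
        "MAE": MAE(obs, sim),                  # continuous error
        "RMSE": RMSE(obs, sim),                # continuous error
        "CSI": CSI(obs, sim, threshold=1.0),   # categorical, rain/no-rain at 1.0
    }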


.. seealso::
    :doc:`notebooks`.

docs/models.rst

+185
@@ -1,2 +1,187 @@
Models
======


Documentation for all precipitation nowcasting models implemented in the ``rainymotion.models`` module.


For the detailed model description please refer to our paper:


.. note:: *Ayzel, G., Heistermann, M., and Winterrath, T.: Optical flow models as an open benchmark for radar-based precipitation nowcasting (rainymotion v0.1), Geosci. Model Dev. Discuss., https://doi.org/10.5194/gmd-2018-166, in review, 2018.*


The Sparse group
----------------

The central idea behind this model group is to identify distinct features in a radar image that are suitable for tracking. In this context, a "feature" is defined as a distinct point ("corner") with a sharp gradient of rainfall intensity. Within this group, we developed two models that slightly differ with regard to both tracking and extrapolation.


The SparseSD model
~~~~~~~~~~~~~~~~~~

The first model (SparseSD, where SD stands for Single Delta) uses only the two most recent radar images for identifying, tracking, and extrapolating features. Assuming that *t* denotes both the nowcast issue time and the time of the most recent radar image, the implementation can be summarized as follows (Fig. 1):


* Identify features in the radar image at time *t-1* using the Shi--Tomasi corner detector. This detector determines the most prominent corners in the image based on the calculation of a corner quality measure;
* Track these features at time *t* using the local Lucas--Kanade optical flow algorithm. This algorithm tries to locate each feature identified in the radar image at time *t-1* on the radar image at time *t* by solving a set of optical flow equations in the local feature neighborhood with the least-squares approach;
* Linearly extrapolate the features' motion in order to predict the features' locations at each lead time *n*;
* Calculate the affine transformation matrix for each lead time *n* based on all identified features' locations at times *t* and *t+n* using the least-squares approach. This matrix uniquely identifies the required transformation of the last observed radar image at time *t* to obtain the nowcasted images at times *t+1...t+n*, providing the smallest possible difference between the identified and extrapolated features' locations;
* Warp the radar image at time *t* for each lead time *n* using the corresponding affine matrix, and linearly interpolate the remaining discontinuities. The warping procedure uniquely transforms each pixel location of the radar image at time *t* to its new location on the corresponding nowcasted radar images at times *t+1...t+n*, and then a linear interpolation is performed in order to interpolate the nowcasted pixel intensities back onto the original grid of the radar image at time *t*.

.. figure:: ./notebooks/images/sparsesd_sc.png
    :align: center
    :alt: Figure 1
    :figclass: align-center

    Figure 1. A visual representation of the SparseSD model routine.
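
The detection, tracking, and warping chain described above maps onto standard OpenCV primitives. The following is a minimal, illustrative sketch for a single lead time *n = 1* (it is not the exact ``rainymotion`` implementation; the file names, the 8-bit input scaling, and all parameter values are assumptions):

.. code-block:: python

    import cv2
    import numpy as np

    # the two most recent radar images, assumed already scaled to uint8 brightness
    prev_frame = np.load("/path/to/radar_t-1.npy").astype(np.uint8)
    curr_frame = np.load("/path/to/radar_t.npy").astype(np.uint8)

    # 1. identify "corners" (sharp rainfall-intensity gradients) at time t-1
    corners = cv2.goodFeaturesToTrack(prev_frame, maxCorners=200,
                                      qualityLevel=0.2, minDistance=7)

    # 2. track the corners to time t with the local Lucas-Kanade algorithm
    new_corners, status, _ = cv2.calcOpticalFlowPyrLK(prev_frame, curr_frame,
                                                      corners, None)
    old_pts = corners[status == 1]
    new_pts = new_corners[status == 1]

    # 3. linear extrapolation to lead time n = 1: the displacement observed
    #    between t-1 and t is applied once more
    extrapolated_pts = new_pts + (new_pts - old_pts)

    # 4. least-squares affine transform between positions at t and t+1
    matrix, _ = cv2.estimateAffine2D(new_pts, extrapolated_pts)

    # 5. warp the last observed image with the affine matrix to get the nowcast
    nowcast = cv2.warpAffine(curr_frame, matrix,
                             (curr_frame.shape[1], curr_frame.shape[0]))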

The SparseSD model usage example:

.. code-block:: python

    import numpy as np

    # import the model from the rainymotion library
    from rainymotion.models import SparseSD

    # initialize the model
    model = SparseSD()

    # upload data to the model instance
    model.input_data = np.load("/path/to/data")

    # run the model with default parameters
    nowcast = model.run()


.. seealso::
    :doc:`notebooks`.

The Sparse model
~~~~~~~~~~~~~~~~

The Sparse model uses the 24 most recent radar images, and we consider here only features that are persistent over the whole period (of 24 timesteps) in order to capture the steadiest motion. Its implementation can be summarized as follows (Fig. 2):


* Identify features in the radar image at time *t-23* using the Shi--Tomasi corner detector;
* Track these features through the radar images from time *t-22* to *t* using the local Lucas--Kanade optical flow algorithm;
* Build linear regression models which independently parametrize the changes in coordinates through time (from *t-23* to *t*) for every successfully tracked feature;
* Continue with steps 3-5 of the SparseSD model routine.

.. figure:: ./notebooks/images/sparse_sc.png
    :align: center
    :alt: Figure 2
    :figclass: align-center

    Figure 2. A visual representation of the Sparse model routine.
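
The per-feature linear regression step can be illustrated with plain NumPy. This is only a sketch (the helper name, the 24-step window, and the use of ``np.polyfit`` are illustrative assumptions, not the library's internals): each coordinate of a tracked feature is fitted linearly against time and then evaluated at future lead times.

.. code-block:: python

    import numpy as np

    def extrapolate_track(track, lead_times):
        """Fit x(t) and y(t) linearly over the observed track and extrapolate.

        track      : array of shape (24, 2) with (x, y) positions from t-23 to t
        lead_times : iterable of lead times n (in timesteps ahead of t)
        """
        t_obs = np.arange(track.shape[0])                 # 0 .. 23
        kx, bx = np.polyfit(t_obs, track[:, 0], deg=1)    # slope and intercept for x
        ky, by = np.polyfit(t_obs, track[:, 1], deg=1)    # slope and intercept for y
        t_future = t_obs[-1] + np.asarray(lead_times)
        return np.column_stack([kx * t_future + bx, ky * t_future + by])

    # example: a feature drifting ~0.5 px/step in x and ~0.2 px/step in y
    track = np.column_stack([100 + 0.5 * np.arange(24), 50 + 0.2 * np.arange(24)])
    future_xy = extrapolate_track(track, lead_times=[1, 2, 3])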

The Sparse model usage example:

.. code-block:: python

    import numpy as np

    # import the model from the rainymotion library
    from rainymotion.models import Sparse

    # initialize the model
    model = Sparse()

    # upload data to the model instance
    model.input_data = np.load("/path/to/data")

    # run the model with default parameters
    nowcast = model.run()


.. seealso::
    :doc:`notebooks`.

The Dense group
---------------

The Dense group of models uses the Dense Inverse Search (DIS) algorithm, which allows us to explicitly estimate the velocity of each image pixel based on the analysis of two consecutive radar images.


The two models in this group differ only with regard to the extrapolation (or advection) step. The first model (Dense) uses a constant-vector advection scheme, while the second model (DenseRotation) uses a semi-Lagrangian advection scheme (Fig. 3).


.. figure:: ./notebooks/images/advection.png
    :align: center
    :width: 50%
    :alt: Figure 3
    :figclass: align-center

    Figure 3. Advection schemes representation.


Both the Dense and DenseRotation models utilize a linear interpolation procedure (Inverse Distance Weighting by default) in order to interpolate advected rainfall intensities at their predicted locations back onto the original radar grid (Fig. 4).


.. figure:: ./notebooks/images/idw_interpolation.png
    :align: center
    :alt: Figure 4
    :figclass: align-center

    Figure 4. Interpolation of the advected pixels.

The Dense model
~~~~~~~~~~~~~~~

The Dense model implementation can be summarized as follows (see also the sketch after this list):


* Calculate a continuous displacement field using the global DIS optical flow algorithm based on the radar images at times *t-1* and *t*;
* Use a backward constant-vector approach to extrapolate (advect) each pixel according to the obtained displacement (velocity) field, in one single step for each lead time *t+n*;
* As a result of the advection step, we obtain an irregular point cloud that consists of the original radar pixels displaced from their original locations. We use the intensity of each displaced pixel at its predicted location at time *t+n* in order to interpolate the intensity at each grid point of the original (native) radar grid using the inverse distance weighting interpolation technique (Fig. 4).

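A minimal sketch of this pipeline using OpenCV (recent versions expose the DIS algorithm) and SciPy. It is not the ``rainymotion`` implementation: the file names, the DIS preset, the lead time, and the use of plain linear ``griddata`` interpolation in place of the library's IDW step are all assumptions.

.. code-block:: python

    import cv2
    import numpy as np
    from scipy.interpolate import griddata

    # two consecutive radar images, assumed already scaled to uint8 brightness
    prev_frame = np.load("/path/to/radar_t-1.npy").astype(np.uint8)
    curr_frame = np.load("/path/to/radar_t.npy").astype(np.uint8)

    # 1. dense displacement field with the DIS optical flow algorithm
    dis = cv2.DISOpticalFlow_create(cv2.DISOPTICAL_FLOW_PRESET_MEDIUM)
    flow = dis.calc(prev_frame, curr_frame, None)      # shape (H, W, 2): (dx, dy)

    h, w = curr_frame.shape
    yy, xx = np.mgrid[0:h, 0:w]

    # 2. constant-vector advection: shift every pixel n times along its own vector
    n = 3                                              # lead time in timesteps
    x_new = xx + n * flow[..., 0]
    y_new = yy + n * flow[..., 1]

    # 3. interpolate the displaced intensities back onto the native radar grid
    #    (plain linear interpolation stands in for the library's IDW step)
    points = np.column_stack([x_new.ravel(), y_new.ravel()])
    nowcast = griddata(points, curr_frame.ravel().astype(float),
                       (xx, yy), method="linear", fill_value=0.0)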

The Dense model usage example:

.. code-block:: python

    import numpy as np

    # import the model from the rainymotion library
    from rainymotion.models import Dense

    # initialize the model
    model = Dense()

    # upload data to the model instance
    model.input_data = np.load("/path/to/data")

    # run the model with default parameters
    nowcast = model.run()


.. seealso::
    :doc:`notebooks`.

The DenseRotation model
~~~~~~~~~~~~~~~~~~~~~~~

The routine for the DenseRotation model is almost the same as for the Dense model, except for the advection approach (the second step of the Dense model routine), which can be summarized as follows (see the sketch after this list):


* Instead of using the backward constant-vector approach, we use the backward semi-Lagrangian approach to extrapolate (advect) each pixel according to the obtained displacement (velocity) field, in one single step for each lead time *t+n*. For the semi-Lagrangian scheme, we update the velocity of the displaced pixels at each prediction time step by linearly interpolating the displacement field obtained at time *t* to the displaced pixels' locations at the current time step.

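The difference between the two advection schemes can be sketched as follows, assuming a displacement field ``flow`` of shape ``(H, W, 2)`` as in the Dense model sketch above (the function name and step logic are illustrative, not the library's internals):

.. code-block:: python

    import numpy as np
    from scipy.ndimage import map_coordinates

    def advect_pixels(flow, n, semi_lagrangian=True):
        """Return displaced (x, y) pixel coordinates after n timesteps."""
        h, w = flow.shape[:2]
        yy, xx = np.mgrid[0:h, 0:w].astype(float)
        x_new, y_new = xx.copy(), yy.copy()
        for _ in range(n):
            if semi_lagrangian:
                # re-interpolate the velocity field at the current (displaced)
                # pixel positions before taking the next step
                u = map_coordinates(flow[..., 0], [y_new, x_new], order=1, mode="nearest")
                v = map_coordinates(flow[..., 1], [y_new, x_new], order=1, mode="nearest")
            else:
                # constant-vector: every pixel keeps its initial velocity
                u, v = flow[..., 0], flow[..., 1]
            x_new += u
            y_new += v
        return x_new, y_new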

The DenseRotation model usage example:

.. code-block:: python

    import numpy as np

    # import the model from the rainymotion library
    from rainymotion.models import DenseRotation

    # initialize the model
    model = DenseRotation()

    # upload data to the model instance
    model.input_data = np.load("/path/to/data")

    # run the model with default parameters
    nowcast = model.run()


.. seealso::
    :doc:`notebooks`.

The Eulerian Persistence
------------------------

The (trivial) benchmark model of Eulerian persistence assumes that for any lead time *n*, the precipitation field is the same as at time *t*.


The Persistence model usage example:

.. code-block:: python

    import numpy as np

    # import the model from the rainymotion library
    from rainymotion.models import Persistence

    # initialize the model
    model = Persistence()

    # upload data to the model instance
    model.input_data = np.load("/path/to/data")

    # run the model with default parameters
    nowcast = model.run()


.. seealso::
    :doc:`notebooks`.

docs/utils.rst

+14
@@ -1,2 +1,16 @@
Utils
=====


The ``rainymotion`` library provides some useful utilities to help the user with the data preprocessing workflow that is usually needed for radar-based precipitation nowcasting. At the moment, the utilities only deal with the RY radar product provided by DWD, but you can use them as a template to construct your own data preprocessing pipeline.


================ ==============================================================
Function         Description
================ ==============================================================
depth2intensity  Convert rainfall depth (in mm) to rainfall intensity (mm/h)
intensity2depth  Convert rainfall intensity (mm/h) back to rainfall depth (mm)
RYScaler         Scale RY data from mm (in float64) to brightness (in uint8)
inv_RYScaler     Scale brightness (in uint8) back to RY data (in mm)
================ ==============================================================
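
Converting between rainfall depth and intensity amounts to rescaling by the length of the accumulation interval (5 minutes for the RY product). The helpers below are illustrative stand-ins for ``depth2intensity`` and ``intensity2depth`` (names, signatures, and the 300-second default are assumptions), not the ``rainymotion.utils`` functions themselves:

.. code-block:: python

    import numpy as np

    def depth_to_intensity(depth_mm, interval_sec=300):
        """Convert rainfall depth (mm per interval) to intensity (mm/h)."""
        return depth_mm * 3600.0 / interval_sec

    def intensity_to_depth(intensity_mmh, interval_sec=300):
        """Convert rainfall intensity (mm/h) back to depth (mm per interval)."""
        return intensity_mmh * interval_sec / 3600.0

    depth = np.array([0.0, 0.1, 0.5])          # mm accumulated over 5 minutes
    intensity = depth_to_intensity(depth)      # -> [0.0, 1.2, 6.0] mm/h
    assert np.allclose(intensity_to_depth(intensity), depth)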

.. seealso::
    :doc:`notebooks` for how to use ``rainymotion.utils`` in the nowcasting workflow.
