├── .gitignore
├── 1 - The first S1 scene.ipynb
├── 2 - Data Search and Access.ipynb
├── 3 - Latest Sentinel-1 scene.ipynb
├── 4a - Sentinel-1 GRD Batch - Subset.ipynb
├── 4b - Sentinel-1 GRD Batch - Timeseries.ipynb
├── 4c - Sentinel-1 GRD Batch - Timescan.ipynb
├── README.md
├── Tips and Tricks.ipynb
├── auxiliary
└── header_image.PNG
└── install_ost.sh
/.gitignore:
--------------------------------------------------------------------------------
1 | .ipynb_checkpoints
2 | .DS_Store
--------------------------------------------------------------------------------
/1 - The first S1 scene.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "metadata": {
6 | "colab_type": "text",
7 | "id": "1PV8FN2-x2wj"
8 | },
9 | "source": [
10 | " Open SAR Toolkit - Tutorial 1, version 1.3, July 2020. Andreas Vollrath, ESA/ESRIN phi-lab\n",
11 | ""
12 | ]
13 | },
14 | {
15 | "cell_type": "markdown",
16 | "metadata": {
17 | "colab_type": "text",
18 | "id": "kOn6Dorgx2wk"
19 | },
20 | "source": [
21 | "\n",
22 | "\n",
23 | "--------\n",
24 | "\n",
25 | "# OST Tutorial I \n",
26 | "\n",
27 | "\n",
28 | "\n",
29 | "\n",
30 | "## Pre-processing your first Sentinel-1 image with OST\n",
31 | "\n",
32 | "[](https://colab.research.google.com/github/ESA-PhiLab/OST_Notebooks/blob/master/1%20-%20The%20first%20S1%20scene.ipynb)\n",
33 | "\n",
34 | "--------\n",
35 | "\n",
36 | "**Short description**\n",
37 | "\n",
38 | "This notebook introduces you to the *Sentinel1Scene* class of the Open SAR Toolkit and demonstrates how it can be used to download, extract metadata and pre-process a single Sentinel-1 scene.\n",
39 | "\n",
40 | "--------\n",
41 | "\n",
42 | "**Requirements**\n",
43 | "\n",
44 | "- a PC/Mac with at least 16GB of RAM\n",
45 | "- about 4 GB of free disk space (or more if you want to process more scenes)\n",
46 | "- a NASA Earthdata account with signed EULA for use of https://search.asf.alaska.edu (just register directly there)\n",
47 | "--------\n",
48 | "\n",
49 | "**NOTE:** all cells that have an * after their number can be executed without changing any code."
50 | ]
51 | },
52 | {
53 | "cell_type": "markdown",
54 | "metadata": {
55 | "colab_type": "text",
56 | "id": "o4gN2FtIx2wl"
57 | },
58 | "source": [
59 | "### 0\\* - Install OST and dependencies \n",
60 | "\n",
61 | "**NOTE:** This applies only if you haven't fully installed OST and its dependencies yet, e.g. on Google Colab. In that case, uncomment the lines below."
62 | ]
63 | },
64 | {
65 | "cell_type": "code",
66 | "execution_count": null,
67 | "metadata": {
68 | "colab": {},
69 | "colab_type": "code",
70 | "id": "KqtRYFnhx2wm"
71 | },
72 | "outputs": [],
73 | "source": [
74 | "# !apt-get -y install wget\n",
75 | "# !wget https://raw.githubusercontent.com/ESA-PhiLab/OST_Notebooks/master/install_ost.sh\n",
76 | "# !bash install_ost.sh"
77 | ]
78 | },
79 | {
80 | "cell_type": "markdown",
81 | "metadata": {
82 | "colab_type": "text",
83 | "id": "keC_14irx2wq"
84 | },
85 | "source": [
86 | "### 1\\* - Import the OST *Sentinel1Scene* class"
87 | ]
88 | },
89 | {
90 | "cell_type": "code",
91 | "execution_count": null,
92 | "metadata": {
93 | "colab": {},
94 | "colab_type": "code",
95 | "id": "l1WpZ6RSx2wr"
96 | },
97 | "outputs": [],
98 | "source": [
99 | "# these imports we need to handle the folders, independent of the OS\n",
100 | "from pathlib import Path\n",
101 | "from pprint import pprint\n",
102 | "\n",
103 | "# this is the Sentinel1Scene class, which handles the whole workflow from beginning to end\n",
104 | "from ost import Sentinel1Scene\n",
105 | "\n",
106 | "#from ost.helpers.settings import set_log_level\n",
107 | "#import logging\n",
108 | "#set_log_level(logging.DEBUG)"
109 | ]
110 | },
111 | {
112 | "cell_type": "markdown",
113 | "metadata": {
114 | "colab_type": "text",
115 | "id": "hpm4Ii9gx2wu"
116 | },
117 | "source": [
118 | "### 2* - Create a folder for our outputs\n",
119 | "\n",
120 | "By executing this cell, a new folder will be created and the path will be written to the *output_dir* variable"
121 | ]
122 | },
123 | {
124 | "cell_type": "code",
125 | "execution_count": null,
126 | "metadata": {
127 | "colab": {},
128 | "colab_type": "code",
129 | "id": "KDfHogBVx2wv"
130 | },
131 | "outputs": [],
132 | "source": [
133 | "# get home folder\n",
134 | "home = Path.home()\n",
135 | "\n",
136 | "# create a processing directory\n",
137 | "output_dir = home.joinpath('OST_Tutorials', 'Tutorial_1')\n",
138 | "output_dir.mkdir(parents=True, exist_ok=True)\n",
139 | "print(str(output_dir))"
140 | ]
141 | },
142 | {
143 | "cell_type": "markdown",
144 | "metadata": {
145 | "colab_type": "text",
146 | "id": "9eR0Uy4nx2wy"
147 | },
148 | "source": [
149 | "### 3* - Choose scene ID and display some metadata\n",
150 | "\n",
151 | "In order to initialize an instance of the *Sentinel1Scene* class, all we need is a valid scene id of a Sentinel-1 product. "
152 | ]
153 | },
154 | {
155 | "cell_type": "code",
156 | "execution_count": null,
157 | "metadata": {
158 | "colab": {},
159 | "colab_type": "code",
160 | "id": "km3z40Kjx2wz"
161 | },
162 | "outputs": [],
163 | "source": [
164 | "# create a S1Scene class instance based on the scene identifier of the first ever Dual-Pol Sentinel-1 IW product\n",
165 | "\n",
166 | "#---------------------------------------------------\n",
167 | "# Some scenes to choose from\n",
168 | "\n",
169 | "# very first IW (VV/VH) S1 image available over Istanbul/Turkey \n",
170 | "# NOTE: only available via the ASF data mirror\n",
171 | "scene_id = 'S1A_IW_GRDH_1SDV_20141003T040550_20141003T040619_002660_002F64_EC04' \n",
172 | "\n",
173 | "# other scenes with different scene types to process (uncomment)\n",
174 | "# IW scene (dual-polarised HH/HV) over Norway/Spitzbergen\n",
175 | "# scene_id = 'S1B_IW_GRDH_1SDH_20200325T150411_20200325T150436_020850_02789D_2B85' \n",
176 | "\n",
177 | "# IW scene (single-polarised VV) over Ecuadorian Amazon \n",
178 | "# scene_id = 'S1A_IW_GRDH_1SSV_20150205T232009_20150205T232034_004494_00583A_1C80' \n",
179 | "\n",
180 | "# EW scene (dual-polarised VV/VH) over Azores (needs a different DEM, see cell of ARD parameters below)\n",
181 | "# scene_id = 'S1B_EW_GRDM_1SDV_20200303T193150_20200303T193250_020532_026E82_5CE9' \n",
182 | "\n",
183 | "# EW scene (dual-polarised HH/HV) over Greenland \n",
184 | "# scene_id = 'S1B_EW_GRDM_1SDH_20200511T205319_20200511T205419_021539_028E4E_697E' \n",
185 | "\n",
186 | "# Stripmap mode S5 scene (dual-polarised VV/VH) over Germany \n",
187 | "# scene_id = 'S1B_S5_GRDH_1SDV_20170104T052519_20170104T052548_003694_006587_86AB' \n",
188 | "#---------------------------------------------------\n",
189 | "\n",
190 | "# create an S1Scene instance\n",
191 | "s1 = Sentinel1Scene(scene_id)\n",
192 | "\n",
193 | "# print summarising infos about the scene\n",
194 | "s1.info()"
195 | ]
196 | },
197 | {
198 | "cell_type": "markdown",
199 | "metadata": {
200 | "colab_type": "text",
201 | "id": "4KaMauPDx2w3"
202 | },
203 | "source": [
204 | "### 4* Download scene\n",
205 | "\n",
206 | "The first step is to download the selected scene. In our case we chose the first regular Sentinel-1 IW product acquired in dual-polarisation VV/VH on 3 October 2014.\n",
207 | "OST supports download from 3 different mirrors: ESA's scihub, CNES PEPS and the Alaska Satellite Facility (NASA Earthdata). Since ESA's official scihub has become a rolling archive, as has PEPS, it is best to download from the fantastic **Alaska Satellite Facility** mirror (selection 2), which holds the full Sentinel-1 archive online.\n",
208 | "\n",
209 | "**Note I:** You can interrupt the download and restart it. The download will continue from where it stopped.\n",
210 | "\n",
211 | "**Note II:** After the actual download, the file is checked for inconsistencies. This ensures that the download went fine and that we can use it for processing. In addition, OST will magically remember that this file has been successfully downloaded (just run the cell again to see the behaviour)."
212 | ]
213 | },
214 | {
215 | "cell_type": "code",
216 | "execution_count": null,
217 | "metadata": {
218 | "colab": {},
219 | "colab_type": "code",
220 | "id": "pcdvGAS5x2w3"
221 | },
222 | "outputs": [],
223 | "source": [
224 | "s1.download(output_dir)"
225 | ]
226 | },
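{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Optional check:** the *get_path* method, which we will use again in section 6, returns the location of the successfully downloaded scene. The cell below is a minimal sketch of this."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# sanity check: get_path returns the path of the downloaded scene\n",
"path_to_scene = s1.get_path(output_dir)\n",
"print(path_to_scene)"
]
},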
227 | {
228 | "cell_type": "markdown",
229 | "metadata": {
230 | "colab_type": "text",
231 | "id": "cs_Ps0Agx2w7"
232 | },
233 | "source": [
234 | "### 5* - Define our ARD product\n",
235 | "\n",
236 | "ARD stands for Analysis Ready Data and is interpreted differently by different people. OST provides various pre-defined flavours that can be used instantly.\n",
237 | "\n",
238 | "The following table shows the ARD types and corresponding processing steps applied for GRD data. \n",
239 | "\n",
240 | "| ARD Types | OST-GTC | OST-RTC | CEOS | Earth-Engine |\n",
241 | "| :------------- | :----------: | :-----------: | :----------: | -----------: |\n",
242 | "| **Scene processing steps** | \n",
243 | "| Update Orbit File | x | x | x | x |\n",
244 | "| Thermal Noise Removal | x | x | x | x |\n",
245 | "| GRD Border Noise | x | x | x | - |\n",
246 | "| Calibration | $\\gamma^0$ | $\\beta^0$ | $\\beta^0$ | $\\sigma^0$ |\n",
247 | "| Multi-look | x | x | - | - |\n",
248 | "| Speckle-Filter | - | - | - | - |\n",
249 | "| Terrain Flattening | - | x ($\\gamma^0_f$)|x ($\\gamma^0_f$)| - |\n",
250 | "| Layover-Shadow Mask | - | x | x | - |\n",
251 | "| dB conversion | - | - | - | x |\n",
252 | "| Terrain Correction | x | x | x | x |\n",
253 | "| Resolution (in m) | 20 | 20 | 10 | 10 |\n",
254 | "\n",
255 | "The default ARD type is *'OST-GTC'*, referring to a Geometrically Terrain Corrected product, calibrated to the ellipsoid-corrected $\gamma^0$ backscatter coefficient at 20m resolution. Other pre-defined ARD types are available, but it is also possible to customise single ARD parameters via the dictionary in which all parameters are stored (as demonstrated in the cells below). Note how the resolution and the image resampling during terrain correction are changed at the bottom. In this way, all relevant processing parameters are customisable."
256 | ]
257 | },
258 | {
259 | "cell_type": "code",
260 | "execution_count": null,
261 | "metadata": {
262 | "colab": {},
263 | "colab_type": "code",
264 | "id": "HZfdAYg3x2w7"
265 | },
266 | "outputs": [],
267 | "source": [
268 | "# Default ARD parameter\n",
269 | "\n",
270 | "print('-----------------------------------------------------------------------------------------------------------')\n",
271 | "print('Our ARD parameters dictionary contains 4 keys. For the moment, only single_ARD is relevant.')\n",
272 | "print('-----------------------------------------------------------------------------------------------------------')\n",
273 | "pprint(s1.ard_parameters.keys())\n",
274 | "print('-----------------------------------------------------------------------------------------------------------')\n",
275 | "print('')\n",
276 | "\n",
277 | "print('-----------------------------------------------------------------------------------------------------------')\n",
278 | "print('Dictionary of our default OST ARD parameters for single scene processing:')\n",
279 | "print('-----------------------------------------------------------------------------------------------------------')\n",
280 | "pprint(s1.ard_parameters['single_ARD'])\n",
281 | "print('-----------------------------------------------------------------------------------------------------------')\n",
282 | "print('')"
283 | ]
284 | },
285 | {
286 | "cell_type": "code",
287 | "execution_count": null,
288 | "metadata": {
289 | "colab": {},
290 | "colab_type": "code",
291 | "id": "HZfdAYg3x2w7"
292 | },
293 | "outputs": [],
294 | "source": [
295 | "# Template ARD parameters\n",
296 | "\n",
297 | "# we change the ARD type\n",
298 | "# possible choices are:\n",
299 | "# 'OST-GTC', 'OST-RTC', 'CEOS', 'Earth-Engine'\n",
300 | "s1.update_ard_parameters('Earth-Engine')\n",
301 | "print('-----------------------------------------------------------------------------------------------------------')\n",
302 | "print('Dictionary of Earth Engine ARD parameters:')\n",
303 | "print('-----------------------------------------------------------------------------------------------------------')\n",
304 | "pprint(s1.ard_parameters['single_ARD'])\n",
305 | "print('-----------------------------------------------------------------------------------------------------------')"
306 | ]
307 | },
308 | {
309 | "cell_type": "code",
310 | "execution_count": null,
311 | "metadata": {
312 | "colab": {},
313 | "colab_type": "code",
314 | "id": "HZfdAYg3x2w7"
315 | },
316 | "outputs": [],
317 | "source": [
318 | "# Customised ARD parameters\n",
319 | "\n",
320 | "# we customise the resolution and image resampling\n",
321 | "s1.ard_parameters['single_ARD']['resolution'] = 100 # set output resolution to 100m\n",
322 | "s1.ard_parameters['single_ARD']['remove_speckle'] = False # do not apply a speckle filter\n",
323 | "s1.ard_parameters['single_ARD']['dem']['image_resampling'] = 'BILINEAR_INTERPOLATION' # BICUBIC_INTERPOLATION is default\n",
324 | "\n",
325 | "# s1.ard_parameters['single_ARD']['product_type'] = 'RTC-gamma0' \n",
326 | "\n",
327 | "# uncomment this for the Azores EW scene\n",
328 | "# s1.ard_parameters['single_ARD']['dem']['dem_name'] = 'GETASSE30'\n",
329 | "print('-----------------------------------------------------------------------------------------------------------')\n",
330 | "print('Dictionary of our customised ARD parameters for the final scene processing:')\n",
331 | "print('-----------------------------------------------------------------------------------------------------------')\n",
332 | "pprint(s1.ard_parameters['single_ARD'])\n",
333 | "print('-----------------------------------------------------------------------------------------------------------')"
334 | ]
335 | },
336 | {
337 | "cell_type": "markdown",
338 | "metadata": {
339 | "colab_type": "text",
340 | "id": "IavR8qQmx2w_"
341 | },
342 | "source": [
343 | "### 6* - Create an ARD product\n",
344 | "\n",
345 | "Our *Sentinel1Scene* class comes with the built-in method *create_ard* to produce a standardised ARD product based on the ARD dictionary above. \n",
346 | "\n",
347 | "To run the command we just have to provide: \n",
348 | "- the path to the downloaded file. We can use the *get_path* method in conjunction with the download directory provided\n",
349 | "- a directory where the output files will be written to\n",
350 | "- a filename prefix (the output format will be the standard SNAP Dimap format, consisting of the dim-file and the data directory)\n",
351 | "- and a directory for storing temporary files (cannot be the same as the output folder)"
352 | ]
353 | },
354 | {
355 | "cell_type": "code",
356 | "execution_count": null,
357 | "metadata": {
358 | "colab": {},
359 | "colab_type": "code",
360 | "id": "BsSOUvVUx2w_"
361 | },
362 | "outputs": [],
363 | "source": [
364 | "s1.create_ard(\n",
365 | " infile=s1.get_path(output_dir),\n",
366 | " out_dir=output_dir, \n",
367 | " overwrite=True\n",
368 | ") \n",
369 | "\n",
370 | "print(' The path to our newly created ARD product can be obtained the following way:')\n",
371 | "s1.ard_dimap"
372 | ]
373 | },
374 | {
375 | "cell_type": "markdown",
376 | "metadata": {
377 | "colab_type": "text",
378 | "id": "Umkg9vgux2xC"
379 | },
380 | "source": [
381 | "### 6* - Create an RGB colour composite\n",
382 | "\n",
383 | "Sentinel-1 scenes usually consist of two polarisation bands. In order to create a 3-channel RGB composite, a ratio between the co-polarised (VV or HH) and the cross-polarised (VH or HV) band is added. The *create_rgb* method takes the *ard_dimap* file and converts it to a 3-channel GeoTiff."
384 | ]
385 | },
386 | {
387 | "cell_type": "code",
388 | "execution_count": null,
389 | "metadata": {
390 | "colab": {},
391 | "colab_type": "code",
392 | "id": "bT5tNXs1x2xD"
393 | },
394 | "outputs": [],
395 | "source": [
396 | "s1.create_rgb(outfile = output_dir.joinpath(f'{s1.start_date}.tif'))\n",
397 | "\n",
398 | "print(' The path to our newly created RGB product can be obtained the following way:')\n",
399 | "s1.ard_rgb"
400 | ]
401 | },
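{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Optional:** if the [rasterio](https://rasterio.readthedocs.io) library is available in your environment, the newly created GeoTiff can be inspected as sketched below. This is not required for the rest of the tutorial."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# optional: inspect the RGB GeoTiff (assumes rasterio is installed)\n",
"import rasterio\n",
"\n",
"with rasterio.open(s1.ard_rgb) as src:\n",
"    print('Bands: ', src.count)\n",
"    print('CRS:   ', src.crs)\n",
"    print('Size:  ', src.width, 'x', src.height)"
]
},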
402 | {
403 | "cell_type": "markdown",
404 | "metadata": {
405 | "colab_type": "text",
406 | "id": "Y_FbFahox2xF"
407 | },
408 | "source": [
409 | "### 7* - Visualise the RGB composite\n",
410 | "\n",
411 | "We can plot the newly created RGB image with the *visualise_rgb* method. A *shrink_factor* can be set, which reduces the resolution in order to lower the memory needed for plotting."
412 | ]
413 | },
414 | {
415 | "cell_type": "code",
416 | "execution_count": null,
417 | "metadata": {
418 | "colab": {},
419 | "colab_type": "code",
420 | "id": "PQzl1epox2xG"
421 | },
422 | "outputs": [],
423 | "source": [
424 | "#---------------------------------------------------\n",
425 | "# for plotting purposes we use this iPython magic\n",
426 | "%matplotlib inline\n",
427 | "%pylab inline\n",
428 | "pylab.rcParams['figure.figsize'] = (23, 23)\n",
429 | "#---------------------------------------------------\n",
430 | "s1.visualise_rgb(shrink_factor=2)"
431 | ]
432 | },
433 | {
434 | "cell_type": "markdown",
435 | "metadata": {
436 | "colab_type": "text",
437 | "id": "4QaGAsf8x2xJ"
438 | },
439 | "source": [
440 | "### 8* - Create thumbnail image\n",
441 | "\n",
442 | "It can be useful to create a small thumbnail image in JPEG format. The *create_rgb_thumbnail* method allows for this."
443 | ]
444 | },
445 | {
446 | "cell_type": "code",
447 | "execution_count": null,
448 | "metadata": {
449 | "colab": {},
450 | "colab_type": "code",
451 | "id": "bj6RWSpox2xK"
452 | },
453 | "outputs": [],
454 | "source": [
455 | "# define a filename for our thumbnail image\n",
456 | "path_to_thumbnail = output_dir.joinpath(f'{s1.start_date}.TN.jpg')\n",
457 | "\n",
458 | "# create the thumbnail image\n",
459 | "s1.create_rgb_thumbnail(outfile=str(path_to_thumbnail))"
460 | ]
461 | },
462 | {
463 | "cell_type": "code",
464 | "execution_count": null,
465 | "metadata": {
466 | "colab": {},
467 | "colab_type": "code",
468 | "id": "efxTV4Zox2xN"
469 | },
470 | "outputs": [],
471 | "source": [
472 | "import imageio\n",
473 | "img = imageio.imread(path_to_thumbnail)\n",
474 | "!ls -sh {path_to_thumbnail}\n",
475 | "plt.imshow(img)"
476 | ]
477 | },
478 | {
479 | "cell_type": "markdown",
480 | "metadata": {},
481 | "source": [
482 | "### 9* - Play around\n",
483 | "\n",
484 | "You can try out the different scenes and also check the difference in backscatter for RTC products (uncomment the product_type line in the customised ARD parameters cell)."
485 | ]
486 | }
487 | ],
488 | "metadata": {
489 | "colab": {
490 | "collapsed_sections": [],
491 | "name": "1 - The first S1 scene.ipynb",
492 | "provenance": [],
493 | "toc_visible": true
494 | },
495 | "kernelspec": {
496 | "display_name": "Python 3",
497 | "language": "python",
498 | "name": "python3"
499 | },
500 | "language_info": {
501 | "codemirror_mode": {
502 | "name": "ipython",
503 | "version": 3
504 | },
505 | "file_extension": ".py",
506 | "mimetype": "text/x-python",
507 | "name": "python",
508 | "nbconvert_exporter": "python",
509 | "pygments_lexer": "ipython3",
510 | "version": "3.8.5"
511 | },
512 | "toc": {
513 | "base_numbering": 1,
514 | "nav_menu": {},
515 | "number_sections": false,
516 | "sideBar": true,
517 | "skip_h1_title": true,
518 | "title_cell": "Table of Contents",
519 | "title_sidebar": "Contents",
520 | "toc_cell": false,
521 | "toc_position": {},
522 | "toc_section_display": true,
523 | "toc_window_display": true
524 | }
525 | },
526 | "nbformat": 4,
527 | "nbformat_minor": 4
528 | }
529 |
--------------------------------------------------------------------------------
/2 - Data Search and Access.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "metadata": {},
6 | "source": [
7 | " Open SAR Toolkit - Tutorial 2, version 1.1, July 2020. Andreas Vollrath, ESA/ESRIN phi-lab\n",
8 | ""
9 | ]
10 | },
11 | {
12 | "cell_type": "markdown",
13 | "metadata": {},
14 | "source": [
15 | "\n",
16 | "\n",
17 | "--------\n",
18 | "\n",
19 | "# OST Tutorial II \n",
20 | "## How to access and download Sentinel-1 data with OST\n",
21 | "\n",
22 | "[](https://colab.research.google.com/github/ESA-PhiLab/OST_Notebooks/blob/master/2%20-%20Data%20Search%20and%20Access.ipynb)\n",
23 | "\n",
24 | "--------\n",
25 | "\n",
26 | "**Short description**\n",
27 | "\n",
28 | "This notebook introduces you to OST's main class *Generic*, and its subclass *Sentinel1*. The *Generic* class handles the basic structure of any OST batch processing project, while the *Sentinel1* class provides methods to search, refine and download sets of acquisitions for the EU Copernicus Sentinel-1 mission.\n",
29 | "\n",
30 | "This notebook is of interest for users who only want to search and download Sentinel-1 data in an efficient way.\n",
31 | "\n",
32 | "- **I:** Get to know the *Generic* main class for setting up an OST project\n",
33 | "- **II:** Get to know the *Sentinel1* subclass, that features functions for data search and access\n",
34 | "\n",
35 | "--------\n",
36 | "\n",
37 | "**Requirements**\n",
38 | "\n",
39 | "- a PC/Mac\n",
40 | "- about 100MB of free disk space\n",
41 | "- a Copernicus Open Data Hub user account, valid for at least 7 days (https://scihub.copernicus.eu)\n",
42 | "--------\n",
43 | "\n",
44 | "**NOTE:** all cells that have an * after their number can be executed without changing any code."
45 | ]
46 | },
47 | {
48 | "cell_type": "markdown",
49 | "metadata": {},
50 | "source": [
51 | "### 0\\* - Install OST and dependencies \n",
52 | "\n",
53 | "**NOTE:** This applies only if you haven't fully installed OST and its dependencies yet, e.g. on Google Colab. In that case, uncomment the lines below."
54 | ]
55 | },
56 | {
57 | "cell_type": "code",
58 | "execution_count": null,
59 | "metadata": {},
60 | "outputs": [],
61 | "source": [
62 | "# !apt-get -y install wget\n",
63 | "# !wget https://raw.githubusercontent.com/ESA-PhiLab/OST_Notebooks/master/install_ost.sh\n",
64 | "# !bash install_ost.sh"
65 | ]
66 | },
67 | {
68 | "cell_type": "markdown",
69 | "metadata": {},
70 | "source": [
71 | "### I-1* - Import python libraries necessary for processing"
72 | ]
73 | },
74 | {
75 | "cell_type": "code",
76 | "execution_count": null,
77 | "metadata": {},
78 | "outputs": [],
79 | "source": [
80 | "# these imports we need to handle the folders, independent of the OS\n",
81 | "from pathlib import Path\n",
82 | "from pprint import pprint\n",
83 | "\n",
84 | "# this is the Generic class, which handles the basic structure of any OST project\n",
85 | "from ost import Generic"
86 | ]
87 | },
88 | {
89 | "cell_type": "markdown",
90 | "metadata": {},
91 | "source": [
92 | "### I-2 - Data selection parameters\n",
93 | "\n",
94 | "In order to define your project you need to define 3 main attributes. \n",
95 | "\n",
96 | "**1 Area of Interest:** \n",
97 | "\n",
98 | "The Area of Interest can be defined in different ways:\n",
99 | "\n",
100 | "1. One possibility is to use the low resolution layer of country boundaries from geopandas. To select a specific country you need to specify its ISO3 code. You can find a collection of all ISO3 codes [here](https://unstats.un.org/unsd/tradekb/knowledgebase/country-code).\n",
101 | "\n",
102 | "2. Another possibility is to provide a Well-Known Text formatted string, which is the format OST uses internally. \n",
103 | "\n",
104 | "3. A third possibility is to provide a path to a valid vector file supported by OGR (e.g. GeoJSON, GeoPackage, KML, Esri Shapefile). Try to keep that as simple as possible. If your layer contains lots of different entries (e.g. crop fields), create a convex hull beforehand and use this.\n",
105 | "\n",
106 | "**2 Time of Interest:**\n",
107 | "\n",
108 | "The time of interest is defined by a *start* and an *end* date, each given as a string in the format 'YYYY-MM-DD'. If the parameters are not defined, default values are used: *2014-10-01* for *start* and *today* for the end of the TOI.\n",
109 | "\n",
110 | "**3 Project directory**\n",
111 | "\n",
112 | "Here we set a high-level directory where all of the project-related data (i.e. inventory, download, processed files) will be stored or created. "
113 | ]
114 | },
115 | {
116 | "cell_type": "code",
117 | "execution_count": null,
118 | "metadata": {},
119 | "outputs": [],
120 | "source": [
121 | "#----------------------------\n",
122 | "# Area of interest\n",
123 | "#----------------------------\n",
124 | "\n",
125 | "# Here we can either point to a shapefile, an ISO3 country code, or a WKT string\n",
126 | "aoi = 'AUT' # AUT is the ISO3 country code of Austria\n",
127 | "\n",
128 | "#----------------------------\n",
129 | "# Time of interest\n",
130 | "#----------------------------\n",
131 | "# we set a fixed time of interest for summer 2019\n",
132 | "start = '2019-06-01'\n",
133 | "end = '2019-08-31'\n",
134 | "\n",
135 | "#----------------------------\n",
136 | "# Project folder\n",
137 | "#----------------------------\n",
138 | "\n",
139 | "# get home folder\n",
140 | "home = Path.home()\n",
141 | "\n",
142 | "# create a processing directory\n",
143 | "project_dir = home.joinpath('OST_Tutorials', 'Tutorial_2')\n",
144 | "\n",
145 | "#------------------------------\n",
146 | "# Print out AOI and start date\n",
147 | "#------------------------------\n",
148 | "print('AOI: ', aoi)\n",
149 | "print('TOI start: ', start)\n",
150 | "print('TOI end: ', end)\n",
151 | "print('Project Directory: ', project_dir)"
152 | ]
153 | },
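{
"cell_type": "markdown",
"metadata": {},
"source": [
"As mentioned above, the AOI can also be defined as a WKT string or as a path to a vector file. The cell below sketches both alternatives; the polygon coordinates and the file path are purely illustrative."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# alternative AOI definitions (uncomment one to use it)\n",
"\n",
"# a Well-Known Text string, here an illustrative lon/lat bounding box around Vienna\n",
"# aoi = 'POLYGON ((16.1 48.0, 16.7 48.0, 16.7 48.4, 16.1 48.4, 16.1 48.0))'\n",
"\n",
"# a path to a vector file supported by OGR (hypothetical path)\n",
"# aoi = '/path/to/my_aoi.geojson'"
]
},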
154 | {
155 | "cell_type": "markdown",
156 | "metadata": {},
157 | "source": [
158 | "### I-3* - Initialize the *Generic* class\n",
159 | "\n",
160 | "The above defined variables are used to initialize the class with its main attributes."
161 | ]
162 | },
163 | {
164 | "cell_type": "code",
165 | "execution_count": null,
166 | "metadata": {},
167 | "outputs": [],
168 | "source": [
169 | "# create an OST Generic class instance\n",
170 | "ost_generic = Generic(\n",
171 | " project_dir=project_dir,\n",
172 | " aoi=aoi, \n",
173 | " start=start, \n",
174 | " end=end\n",
175 | ")\n",
176 | "\n",
177 | "# list the folders inside the project directory (UNIX only)\n",
178 | "print('')\n",
179 | "print('We use the linux ls command for listing the directories inside our project folder:')\n",
180 | "!ls {project_dir}"
181 | ]
182 | },
183 | {
184 | "cell_type": "markdown",
185 | "metadata": {},
186 | "source": [
187 | "### I-4* Customise project parameters\n",
188 | "\n",
189 | "The initialisation of the class creates a config file, where all project attributes are stored. This includes, for example, the location of the download or the processing folder. Those can be customised as shown below. Also note that, independent of the input format of the AOI, it will be stored as a Well-Known-Text string. The possible input formats for AOI definition will be covered in later tutorials."
190 | ]
191 | },
192 | {
193 | "cell_type": "code",
194 | "execution_count": null,
195 | "metadata": {},
196 | "outputs": [],
197 | "source": [
198 | "# Default config as created by the class initialisation\n",
199 | "print(' Before customisation')\n",
200 | "print('---------------------------------------------------------------------')\n",
201 | "pprint(ost_generic.config_dict)\n",
202 | "print('---------------------------------------------------------------------')\n",
203 | "\n",
204 | "# customisation\n",
205 | "ost_generic.config_dict['download_dir'] = '/download'\n",
206 | "ost_generic.config_dict['temp_dir'] = '/tmp'\n",
207 | "\n",
208 | "print('')\n",
209 | "print(' After customisation (note the change in download_dir and temp_dir)')\n",
210 | "print('---------------------------------------------------------------------')\n",
211 | "pprint(ost_generic.config_dict)"
212 | ]
213 | },
214 | {
215 | "cell_type": "markdown",
216 | "metadata": {},
217 | "source": [
218 | "### II-1* - The *Sentinel1* class\n",
219 | "\n",
220 | "The *Sentinel1* class, as a subclass of the *Generic* class, inherits all its attributes and methods, and adds specific new ones for the search and download of data.\n"
221 | ]
222 | },
223 | {
224 | "cell_type": "code",
225 | "execution_count": null,
226 | "metadata": {},
227 | "outputs": [],
228 | "source": [
229 | "# the import of the Sentinel1 class\n",
230 | "from ost import Sentinel1"
231 | ]
232 | },
233 | {
234 | "cell_type": "markdown",
235 | "metadata": {},
236 | "source": [
237 | "### II-2* Initialize the *Sentinel1* class\n",
238 | "\n",
239 | "In addition to the AOI, TOI and project directory parameters needed for the initialization of the *Generic* class, three more Sentinel-1 specific attributes can be defined:\n",
240 | "\n",
241 | "1. *product_type*: either RAW, SLC, GRD or OCN (default is '*' for all)\n",
242 | "2. *beam_mode*: either IW, SM, EW or WV (default is '*' for all)\n",
243 | "3. *polarisation*: either VV, VH, HV, HH or a combination, e.g. VV, VH or HH, HV (default is '*' for all)\n",
244 | "\n",
245 | "Have a look at https://sentinel.esa.int/web/sentinel/user-guides/sentinel-1-sar/acquisition-modes for further information on Sentinel-1 acquisition modes and https://sentinel.esa.int/web/sentinel/missions/sentinel-1/observation-scenario for information on the global observation scenario."
246 | ]
247 | },
248 | {
249 | "cell_type": "code",
250 | "execution_count": null,
251 | "metadata": {},
252 | "outputs": [],
253 | "source": [
254 | "# initialize the Sentinel1 class\n",
255 | "ost_s1 = Sentinel1(\n",
256 | " project_dir=project_dir,\n",
257 | " aoi=aoi, \n",
258 | " start=start, \n",
259 | " end=end,\n",
260 | " product_type='SLC',\n",
261 | " beam_mode='IW',\n",
262 | " polarisation='*'\n",
263 | ")"
264 | ]
265 | },
266 | {
267 | "cell_type": "markdown",
268 | "metadata": {},
269 | "source": [
270 | "### II-3* Searching for data\n",
271 | "\n",
272 | "\n",
273 | "The search method of our *Sentinel1* class instance will trigger a search query on the scihub catalogue and return the results in 2 ways:\n",
274 | "\n",
275 | "- write them into a shapefile (inside your inventory directory)\n",
276 | "- store them as an instance attribute in the form of a GeoPandas GeoDataFrame, accessible via *ost_s1.inventory*\n",
277 | "\n",
278 | "You will need a valid scihub account to do this step.\n",
279 | "In case you do not have a scihub account yet, please go [here](https://scihub.copernicus.eu/dhus/#/home) to register.\n",
280 | "\n",
281 | "**IMPORTANT** OST, by default, queries the Copernicus Apihub (i.e. a different server than the one you access over your web browser), for which user credentials will be transferred only a week after registration on the standard open scihub ([more info here](https://scihub.copernicus.eu/twiki/do/view/SciHubWebPortal/APIHubDescription)). In case this is an issue, use the commented line with the specified base_url and comment out the standard search command.\n",
282 | "\n",
283 | "So you may need to wait a couple of days after first registration before it works. "
284 | ]
285 | },
286 | {
287 | "cell_type": "code",
288 | "execution_count": null,
289 | "metadata": {},
290 | "outputs": [],
291 | "source": [
292 | "#---------------------------------------------------\n",
293 | "# for plotting purposes we use this iPython magic\n",
294 | "%matplotlib inline\n",
295 | "%pylab inline\n",
296 | "pylab.rcParams['figure.figsize'] = (13, 13)\n",
297 | "#---------------------------------------------------\n",
298 | "\n",
299 | "# search command\n",
300 | "ost_s1.search()\n",
301 | "\n",
302 | "# uncomment in case you have issues with the registration procedure \n",
303 | "#ost_s1.search(base_url='https://scihub.copernicus.eu/dhus')\n",
304 | "\n",
305 | "# we plot the full Inventory on a map\n",
306 | "ost_s1.plot_inventory(transparency=.1)"
307 | ]
308 | },
309 | {
310 | "cell_type": "markdown",
311 | "metadata": {},
312 | "source": [
313 | "### II-4* The inventory attribute\n",
314 | "\n",
315 | "The results of the search are stored in the *inventory* attribute of the class instance *ost_s1*. This is actually a [Geopandas](http://geopandas.org) GeoDataFrame that stores all the available metadata from the scihub catalogue. Therefore all (geo)pandas functionality can be applied for filtering, plotting and selection. "
316 | ]
317 | },
318 | {
319 | "cell_type": "code",
320 | "execution_count": null,
321 | "metadata": {},
322 | "outputs": [],
323 | "source": [
324 | "print('-----------------------------------------------------------------------------------------------------------')\n",
325 | "print(' INFO: We found a total of {} products for our project definition'.format(len(ost_s1.inventory)))\n",
326 | "print('-----------------------------------------------------------------------------------------------------------')\n",
327 | "print('')\n",
328 | "# combine the OST class attribute with pandas commands to print the inventory columns\n",
329 | "print('-----------------------------------------------------------------------------------------------------------')\n",
330 | "print('The columns of our inventory:')\n",
331 | "print('')\n",
332 | "print(ost_s1.inventory.columns)\n",
333 | "print('-----------------------------------------------------------------------------------------------------------')\n",
334 | "\n",
335 | "print('')\n",
336 | "print('-----------------------------------------------------------------------------------------------------------')\n",
337 | "print(' The last 5 rows of our inventory:')\n",
338 | "print(ost_s1.inventory.tail(5))"
339 | ]
340 | },
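Because the inventory attribute is an ordinary (Geo)DataFrame, the usual pandas filtering idioms apply to it directly. A minimal, self-contained sketch with a mock inventory (the column names `orbitdirection` and `polarisationmode` mirror the scihub metadata; the values are invented for illustration):

```python
import pandas as pd

# a tiny mock of the inventory; the real ost_s1.inventory is a
# GeoDataFrame with the same column names from the scihub metadata
inv = pd.DataFrame({
    'identifier': ['S1A_IW_GRDH_1', 'S1B_IW_GRDH_2', 'S1A_IW_GRDH_3'],
    'orbitdirection': ['ASCENDING', 'DESCENDING', 'ASCENDING'],
    'polarisationmode': ['VV VH', 'VV VH', 'VV'],
})

# plain pandas boolean indexing selects e.g. all ascending VV/VH scenes
asc_vvvh = inv[
    (inv.orbitdirection == 'ASCENDING') & (inv.polarisationmode == 'VV VH')
]
print(asc_vvvh.identifier.tolist())  # ['S1A_IW_GRDH_1']
```

The same expression works unchanged on `ost_s1.inventory`.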
341 | {
342 | "cell_type": "markdown",
343 | "metadata": {},
344 | "source": [
345 | "### II-5* Search Refinement\n",
346 | "\n",
347 | "The results returned by the search on the Copernicus scihub might not be 100% appropriate for what we are looking for. In this step we refine the results by addressing possible issues and reducing later processing needs.\n",
348 | "\n",
349 | "A first step **splits the data** by **orbit direction** (i.e. ascending and descending) and **polarization mode** (i.e. VV, VV/VH, HH, HH/HV). For each combination the routine then checks the AOI coverage (e.g. descending VV/VH polarization). If a combination does not result in full coverage of the AOI, all further steps are disregarded for it. In case full coverage is possible, further refinement steps are taken: \n",
350 | "\n",
351 | "1. Some of the acquisition frames might have been processed and/or stored **more than once** in the ESA ground segment. They therefore appear twice, with scene identifiers that differ only in the last 4 digits. It is necessary to identify those scenes in order to avoid redundancy. We keep the ones with the latest ingestion date to assure the use of the latest processor. \n",
352 | "\n",
353 | "2. Some of the scenes returned by the search query are actually **not overlapping the AOI**. This is because the search algorithm checks for data within a square defined by the outer bounds of the AOI geometry, and not the AOI itself. The refinement keeps only those frames overlapping the AOI in order to reduce unnecessary processing later on.\n",
354 | "\n",
355 | "3. In the case of **ascending tracks that cross the equator**, the **orbit number** of the frames will **increase by 1** even though they are practically from the same acquisition. During processing the frames need to be merged, so the relative orbit numbers (i.e. tracks) should be the same. The metadata in the inventory is therefore updated in order to normalize the relative orbit number for the project.\n",
356 | "\n",
357 | "4. (optional) The tracks of Sentinel-1 overlap to a certain degree. The data inventory might return tracks that only **marginally cross the AOI**, while their portion of the AOI is already covered by the adjacent track. Tracks that do not contribute to the overall coverage of the AOI are therefore disregarded.\n",
358 | "\n",
359 | "5. (optional) Some acquisitions might **not cross the entire AOI**. For the subsequent time-series/timescan processing this becomes problematic, since the generation of the time-series only considers the region overlapped by all acquisitions per track.\n",
360 | "\n",
361 | "6. A similar issue appears when one track **crosses the AOI twice**, i.e. some of the frames in the middle of the track do not overlap the AOI and are already disregarded by step 2. Assembling such non-consecutive frames during processing would result in a failure. The metadata in the inventory is consequently updated: the first part of the relative orbit number is renamed to XXX.1, the second part to XXX.2, and so on. During processing those acquisitions are handled as 2 different tracks and only merged during the final mosaicking.\n",
362 | "\n",
363 | "7. (optional) A last step assures that each mosaic in time, composed of different tracks, is covered only once by each track. "
364 | ]
365 | },
366 | {
367 | "cell_type": "code",
368 | "execution_count": null,
369 | "metadata": {},
370 | "outputs": [],
371 | "source": [
372 | "ost_s1.refine_inventory()"
373 | ]
374 | },
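Step 1 of the refinement (removing duplicated frames by keeping the latest ingestion date) can be sketched in plain pandas. The identifiers and dates below are invented, and OST's actual refine_inventory() implementation may differ:

```python
import pandas as pd

# mock of two catalogue entries for the same frame, whose identifiers
# differ only in the last 4 digits (as described in step 1 above)
inv = pd.DataFrame({
    'identifier': ['S1A_IW_GRDH_xxx_0A1B', 'S1A_IW_GRDH_xxx_9C2D'],
    'ingestiondate': pd.to_datetime(['2020-05-01', '2020-06-01']),
})

# group on the identifier without its trailing 4 digits and keep only
# the row with the latest ingestion date per group
stem = inv.identifier.str[:-4]
deduped = inv.loc[inv.groupby(stem)['ingestiondate'].idxmax()]
print(deduped.identifier.tolist())  # ['S1A_IW_GRDH_xxx_9C2D']
```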
375 | {
376 | "cell_type": "markdown",
377 | "metadata": {},
378 | "source": [
379 | "### II-6* - Selecting the right data\n",
380 | "\n",
381 | "The results of the refinement are stored in a new attribute called **refined_inventory_dict**.\n",
382 | "This is a python dictionary with the mosaic keys as dictionary keys, whose values are the refined GeoDataFrames."
383 | ]
384 | },
385 | {
386 | "cell_type": "code",
387 | "execution_count": null,
388 | "metadata": {},
389 | "outputs": [],
390 | "source": [
391 | "pylab.rcParams['figure.figsize'] = (19, 19)\n",
392 | "\n",
393 | "key = 'ASCENDING_VVVH'\n",
394 | "ost_s1.refined_inventory_dict[key]\n",
395 | "ost_s1.plot_inventory(ost_s1.refined_inventory_dict[key], 0.1)"
396 | ]
397 | },
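Since refined_inventory_dict is a plain python dictionary, you can also loop over it to inspect every mosaic key rather than a single one. A sketch with mock entries (the real values are GeoDataFrames, and the keys depend on your search results):

```python
import pandas as pd

# mock of the refined inventory dictionary; keys combine orbit
# direction and polarisation, values hold the remaining scenes
refined_inventory_dict = {
    'ASCENDING_VVVH': pd.DataFrame({'identifier': ['a', 'b', 'c']}),
    'DESCENDING_VVVH': pd.DataFrame({'identifier': ['d', 'e']}),
}

# count the scenes per mosaic key
counts = {key: len(df) for key, df in refined_inventory_dict.items()}
print(counts)  # {'ASCENDING_VVVH': 3, 'DESCENDING_VVVH': 2}
```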
398 | {
399 | "cell_type": "markdown",
400 | "metadata": {},
401 | "source": [
402 | "### II-7* Downloading the data\n",
403 | "\n",
404 | "Now that we have a refined selection of the scenes we want to download, we have different data mirrors as options.\n",
405 | "By executing the following cell, OST will ask you from which data portal you want to download.\n",
406 | "\n",
407 | "#### ESA's Scihub catalogue\n",
408 | "The main entry point is the official scihub catalogue from ESA. It is however limited to 2 concurrent downloads at the same time. Also note that it is a rolling archive, so for historical data a special procedure has to be applied (see the Tips and Tricks notebook).\n",
409 | "\n",
410 | "#### Alternative I - Alaska Satellite Facility:\n",
411 | "\n",
412 | "A good alternative is the download mirror from the Alaska Satellite Facility, which provides the full archive of Sentinel-1 data. To register, go to their [data portal](https://vertex.daac.asf.alaska.edu). If you already have a NASA Earthdata account, make sure you have signed the specific EULA needed to access the Copernicus data. A good practice is to try a download directly from the Vertex data portal to make sure everything works. \n",
413 | "\n",
414 | "#### Alternative II - PEPS server from CNES:\n",
415 | "\n",
416 | "Another good alternative is the PEPS server from the French Space Agency CNES. While it is also a rolling archive, copies of historic data are stored on tape and can easily be transferred to the online storage. OST takes care of that automatically. You can register for an account [here](https://peps.cnes.fr/rocket/).\n",
417 | "\n",
418 | "#### Alternative III - ONDA DIAS by Serco:\n",
419 | "\n",
420 | "Another good alternative is the free data access portal from the ONDA DIAS. This is especially well suited for SLC data for which it holds the full archive. GRD data is accessible by a rolling archive. You can register for an account [here](https://www.onda-dias.eu/cms/).\n",
421 | "\n",
422 | "**NOTE** While scihub has a limit of 2 concurrent downloads, ASF, PEPS and ONDA do not have such strict limits. For ASF the limit is 10, and we can set this with the *concurrent* keyword."
423 | ]
424 | },
425 | {
426 | "cell_type": "code",
427 | "execution_count": null,
428 | "metadata": {},
429 | "outputs": [],
430 | "source": [
431 | "ost_s1.download(ost_s1.refined_inventory_dict[key], concurrent=10)"
432 | ]
433 | }
434 | ],
435 | "metadata": {
436 | "kernelspec": {
437 | "display_name": "Python 3",
438 | "language": "python",
439 | "name": "python3"
440 | },
441 | "language_info": {
442 | "codemirror_mode": {
443 | "name": "ipython",
444 | "version": 3
445 | },
446 | "file_extension": ".py",
447 | "mimetype": "text/x-python",
448 | "name": "python",
449 | "nbconvert_exporter": "python",
450 | "pygments_lexer": "ipython3",
451 | "version": "3.8.5"
452 | },
453 | "toc": {
454 | "base_numbering": 1,
455 | "nav_menu": {},
456 | "number_sections": false,
457 | "sideBar": true,
458 | "skip_h1_title": true,
459 | "title_cell": "Table of Contents",
460 | "title_sidebar": "Contents",
461 | "toc_cell": false,
462 | "toc_position": {},
463 | "toc_section_display": true,
464 | "toc_window_display": true
465 | }
466 | },
467 | "nbformat": 4,
468 | "nbformat_minor": 4
469 | }
470 |
--------------------------------------------------------------------------------
/3 - Latest Sentinel-1 scene.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "metadata": {
6 | "colab_type": "text",
7 | "id": "6zVUMYgV537n"
8 | },
9 | "source": [
10 | " Open SAR Toolkit - Tutorial 3, version 1.2, July 2020. Andreas Vollrath, ESA/ESRIN phi-lab\n",
11 | ""
12 | ]
13 | },
14 | {
15 | "cell_type": "markdown",
16 | "metadata": {
17 | "colab_type": "text",
18 | "id": "kSGwC9r8537o"
19 | },
20 | "source": [
21 | "\n",
22 | "\n",
23 | "--------\n",
24 | "\n",
25 | "# OST Tutorial III\n",
26 | "## Process the latest Sentinel-1 GRD product for a given point\n",
27 | "\n",
28 | "[](https://colab.research.google.com/github/ESA-PhiLab/OST_Notebooks/blob/master/3%20-%20Latest%20Sentinel-1%20scene.ipynb)\n",
29 | "\n",
30 | "--------\n",
31 | "\n",
32 | "**Short description**\n",
33 | "\n",
34 | "This notebook demonstrates the interaction between the *Sentinel1* class for data inventory and download, and the *Sentinel1Scene* class, together, for the generation of the latest Sentinel-1 product over a given point coordinate. \n",
35 | "\n",
36 | "--------\n",
37 | "\n",
38 | "**Requirements**\n",
39 | "\n",
40 | "- a PC/Mac with at least 16GB of RAM\n",
41 | "- about 4GB of free disk space\n",
42 | "- a Copernicus Open Data Hub user account, ideally valid for at least 7 days (https://scihub.copernicus.eu)\n",
43 | "--------\n",
44 | "\n",
45 | "**NOTE:** all cells that have an * after their number can be executed without changing any code. "
46 | ]
47 | },
48 | {
49 | "cell_type": "markdown",
50 | "metadata": {
51 | "colab_type": "text",
52 | "id": "gLvPAMdI537p"
53 | },
54 | "source": [
55 | "### 0\\* - Install OST and dependencies \n",
56 | "\n",
57 | "**NOTE:** Applies only if you haven't fully installed OST and its dependencies yet, e.g. on Google Colab, so uncomment the lines in this case."
58 | ]
59 | },
60 | {
61 | "cell_type": "code",
62 | "execution_count": null,
63 | "metadata": {
64 | "colab": {},
65 | "colab_type": "code",
66 | "id": "Z21YR74D537q"
67 | },
68 | "outputs": [],
69 | "source": [
70 | "# !apt-get -y install wget\n",
71 | "# !wget https://raw.githubusercontent.com/ESA-PhiLab/OST_Notebooks/master/install_ost.sh\n",
72 | "# !bash install_ost.sh"
73 | ]
74 | },
75 | {
76 | "cell_type": "markdown",
77 | "metadata": {
78 | "colab_type": "text",
79 | "id": "DpqUWHp0537t"
80 | },
81 | "source": [
82 | "### 1* - Import python libraries necessary for processing"
83 | ]
84 | },
85 | {
86 | "cell_type": "code",
87 | "execution_count": null,
88 | "metadata": {
89 | "colab": {},
90 | "colab_type": "code",
91 | "id": "KRplYtUo537u"
92 | },
93 | "outputs": [],
94 | "source": [
95 | "# this is for the path handling and especially useful if you are on Windows\n",
96 | "from pathlib import Path\n",
97 | "from pprint import pprint\n",
98 | "\n",
99 | "# we will need this for our time of period definition\n",
100 | "from datetime import datetime, timedelta\n",
101 | "\n",
102 | "# this is the s1Project class, that basically handles all the workflow from beginning to the end\n",
103 | "from ost import Sentinel1, Sentinel1Scene"
104 | ]
105 | },
106 | {
107 | "cell_type": "markdown",
108 | "metadata": {
109 | "colab_type": "text",
110 | "id": "QP9IBFg7537x"
111 | },
112 | "source": [
113 | "### 2 - Data selection parameters\n",
114 | "\n",
115 | "**NOTE:** In case you want to process a different area, all you need to change are the lat and lon values in line 6 of the code cell below. \n",
116 | "\n",
117 | "\n",
118 | "As already covered in OST Tutorial 2, we need a minimum of 3 basic parameters to initialise the *Sentinel1* class.\n",
119 | "\n",
120 | "**1 Area of Interest:** \n",
121 | "\n",
122 | "In this case we only search for a *specific spot on earth, i.e. Rome, Italy*, defined by its *Latitude* and *Longitude*, from which we create a Well-Known Text (WKT) formatted string.\n",
123 | "\n",
124 | "**2 Time of Interest:**\n",
125 | "\n",
126 | "In this example, the datetime class is used to set the start date to 30 days before today, to ensure we catch any scene within our time of interest.\n",
127 | "\n",
128 | "**3 Project directory**\n",
129 | "\n",
130 | "Set this to anything you like if not happy with the default one.\n",
131 | "\n"
132 | ]
133 | },
134 | {
135 | "cell_type": "code",
136 | "execution_count": null,
137 | "metadata": {
138 | "colab": {},
139 | "colab_type": "code",
140 | "id": "cB40bzfX537x"
141 | },
142 | "outputs": [],
143 | "source": [
144 | "#----------------------------\n",
145 | "# Area of interest\n",
146 | "#----------------------------\n",
147 | "\n",
148 | "# Here we can either point to a shapefile or, as in this case, use point coordinates\n",
149 | "lat, lon = 41.8919, 12.5113\n",
150 | "aoi = 'POINT ({} {})'.format(lon, lat)\n",
151 | "\n",
152 | "#----------------------------\n",
153 | "# Time of interest\n",
154 | "#----------------------------\n",
155 | "# we set only the start date to today - 30 days\n",
156 | "start = (datetime.today() - timedelta(days=30)).strftime(\"%Y-%m-%d\")\n",
157 | "\n",
158 | "#----------------------------\n",
159 | "# Processing directory\n",
160 | "#----------------------------\n",
161 | "# get home folder\n",
162 | "home = Path.home()\n",
163 | "# create a processing directory within the home folder\n",
164 | "project_dir = home.joinpath('OST_Tutorials', 'Tutorial_3')\n",
165 | "\n",
166 | "#------------------------------\n",
167 | "# Print out AOI and start date\n",
168 | "#------------------------------\n",
169 | "print('AOI: ' + aoi)\n",
170 | "print('TOI start: ' + start)\n",
171 | "print('Our project directory is located at: ' + str(project_dir))"
172 | ]
173 | },
174 | {
175 | "cell_type": "markdown",
176 | "metadata": {
177 | "colab_type": "text",
178 | "id": "JNToZ_1u5370"
179 | },
180 | "source": [
181 | "### 3* - Initialize the Sentinel1 project class\n",
182 | "\n",
183 | "After initialisation of our class, where we explicitly add the GRD product type argument, we do a rough search over our AOI for the last 30 days. We print the first 5 entries and plot all images for visualization, using the *Sentinel1* class attribute *inventory* and method *plot_inventory*."
184 | ]
185 | },
186 | {
187 | "cell_type": "code",
188 | "execution_count": null,
189 | "metadata": {
190 | "colab": {},
191 | "colab_type": "code",
192 | "id": "yRey1rVI5371"
193 | },
194 | "outputs": [],
195 | "source": [
196 | "#---------------------------------------------------\n",
197 | "# for plotting purposes we use this iPython magic\n",
198 | "%matplotlib inline\n",
199 | "%pylab inline\n",
200 | "pylab.rcParams['figure.figsize'] = (19, 19)\n",
201 | "#---------------------------------------------------\n",
202 | "\n",
203 | "# create s1Project class instance\n",
204 | "s1_project = Sentinel1(\n",
205 | " project_dir=project_dir, \n",
206 | " aoi=aoi, \n",
207 | " start=start,\n",
208 | " product_type='GRD'\n",
209 | ")\n",
210 | "\n",
211 | "# search command\n",
212 | "s1_project.search()\n",
213 | "# uncomment in case you have issues with the registration procedure \n",
214 | "#ost_s1.search(base_url='https://scihub.copernicus.eu/dhus')\n",
215 | "print('We found {} products.'.format(len(s1_project.inventory.identifier.unique())))\n",
216 | "# combine the OST class attribute with the pandas head command to print the first 5 rows of the inventory\n",
217 | "print(s1_project.inventory.head(5))\n",
218 | "\n",
219 | "# we plot the full Inventory on a map\n",
220 | "s1_project.plot_inventory(transparency=.1)"
221 | ]
222 | },
223 | {
224 | "cell_type": "markdown",
225 | "metadata": {
226 | "colab_type": "text",
227 | "id": "2MWfH4ma5374"
228 | },
229 | "source": [
230 | "### 4* - Select the latest scene found during the search\n",
231 | "\n",
232 | "Here we use some pandas syntax on our rough data inventory to filter out the latest scene and store it in a new dataframe. "
233 | ]
234 | },
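The selection done in the next cell boils down to a standard pandas max-filter. Here is the same pattern on a small mock frame (the column name `endposition` matches the scihub metadata; identifiers and dates are invented):

```python
import pandas as pd

# mock inventory with the 'endposition' timestamp column
idf = pd.DataFrame({
    'identifier': ['scene_a', 'scene_b', 'scene_c'],
    'endposition': pd.to_datetime(
        ['2020-06-01', '2020-06-15', '2020-06-10']),
})

# boolean indexing against the column maximum keeps the latest row(s)
latest_df = idf[idf.endposition == idf.endposition.max()]
print(latest_df.identifier.values[0])  # scene_b
```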
235 | {
236 | "cell_type": "code",
237 | "execution_count": null,
238 | "metadata": {
239 | "colab": {},
240 | "colab_type": "code",
241 | "id": "UiUl2GKY5375"
242 | },
243 | "outputs": [],
244 | "source": [
245 | "pylab.rcParams['figure.figsize'] = (13, 13)\n",
246 | "\n",
247 | "# we give our inventory a shorter name iDf (for inventory Dataframe)\n",
248 | "iDf = s1_project.inventory.copy()\n",
249 | "\n",
250 | "# we select the latest scene based on the metadata entry endposition\n",
251 | "latest_df = iDf[iDf.endposition == iDf.endposition.max()]\n",
252 | "\n",
253 | "# we print out a little info on the date of the \n",
254 | "print(' INFO: Latest scene found for {}'.format(latest_df['acquisitiondate'].values[0]))\n",
255 | "\n",
256 | "# we use the plotInventory method in combination with the newly\n",
257 | "# created Geodataframe to see our scene footprint\n",
258 | "s1_project.plot_inventory(latest_df, transparency=.5)"
259 | ]
260 | },
261 | {
262 | "cell_type": "markdown",
263 | "metadata": {
264 | "colab_type": "text",
265 | "id": "z0J2MNlq5378"
266 | },
267 | "source": [
268 | "### 7* Download scene\n",
269 | "\n",
270 | "We use the built-in download method from the *Sentinel1* class. Note that you can pass any Geodataframe generated by OST and filtered by you (e.g. sort out rows that you do not need). In our case we are only interested in the latest scene, so we pass the newly generated *latest_df* Geodataframe object.\n",
271 | "\n",
272 | "**NOTE** that you should use ESA's scihub server in this case, since it is the place where the images arrive first. Other data mirrors might have slight delays, so that the scene found by the inventory might not be available."
273 | ]
274 | },
275 | {
276 | "cell_type": "code",
277 | "execution_count": null,
278 | "metadata": {
279 | "colab": {},
280 | "colab_type": "code",
281 | "id": "EVCw8pHg5378"
282 | },
283 | "outputs": [],
284 | "source": [
285 | "s1_project.download(latest_df)"
286 | ]
287 | },
288 | {
289 | "cell_type": "markdown",
290 | "metadata": {
291 | "colab_type": "text",
292 | "id": "1TI2VeaD538A"
293 | },
294 | "source": [
295 | "### 8* - Display some metadata of the latest scene\n",
296 | "\n",
297 | "After use of the *Sentinel1* class for finding and downloading the latest scene, we hand the scene identifier over to the *Sentinel1Scene* class for further processing as already demonstrated in OST Tutorial 1."
298 | ]
299 | },
300 | {
301 | "cell_type": "code",
302 | "execution_count": null,
303 | "metadata": {
304 | "colab": {},
305 | "colab_type": "code",
306 | "id": "kqqZdzKC538A"
307 | },
308 | "outputs": [],
309 | "source": [
310 | "# create a S1Scene class instance based on the scene identifier coming from our \"latest scene dataframe\"\n",
311 | "latest_scene = Sentinel1Scene(latest_df['identifier'].values[0])\n",
312 | "\n",
313 | "# print summarising infos\n",
314 | "latest_scene.info()\n",
315 | "\n",
316 | "# print file location\n",
317 | "file_location = latest_scene.get_path(s1_project.download_dir)\n",
318 | "\n",
319 | "print(' File is located at: ')\n",
320 | "print(' ' + str(file_location))"
321 | ]
322 | },
323 | {
324 | "cell_type": "markdown",
325 | "metadata": {
326 | "colab_type": "text",
327 | "id": "MkKvHRSl538D"
328 | },
329 | "source": [
330 | "### 9* - Produce a subsetted ARD product\n",
331 | "\n",
332 | "The creation of the ARD product follows the same logic as presented in OST Tutorial 1. However, in this case we introduce the subset argument to the *create_ard* function.\n",
333 | "Subsetting is advised if only a small portion of the whole image is of interest. It speeds up processing and uses less storage.\n",
334 | "\n",
335 | "In our case we use some helper functions within the OST package to create a squared buffer area of 10,000 metres around our point of interest defined as AOI."
336 | ]
337 | },
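To get an intuition for what such a geodesic buffer produces, here is a rough pure-Python approximation of a squared envelope around a point. The real vec.geodesic_point_buffer uses proper geodesic math; the metres-to-degrees conversion below is a deliberate simplification:

```python
import math

def rough_square_buffer(lat, lon, meters):
    # crude conversion of metres to degrees of latitude/longitude;
    # good enough for intuition, not a substitute for OST's helper
    dlat = meters / 111320.0
    dlon = meters / (111320.0 * math.cos(math.radians(lat)))
    # (min_lon, min_lat, max_lon, max_lat) envelope
    return (lon - dlon, lat - dlat, lon + dlon, lat + dlat)

# 10 km around the Rome point used in this notebook
bounds = rough_square_buffer(41.8919, 12.5113, 10000)
print([round(b, 3) for b in bounds])
```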
338 | {
339 | "cell_type": "code",
340 | "execution_count": null,
341 | "metadata": {
342 | "colab": {},
343 | "colab_type": "code",
344 | "id": "W3Wyb_ny538E"
345 | },
346 | "outputs": [],
347 | "source": [
348 | "# 10 km buffer around AOI Point\n",
349 | "from shapely.wkt import loads\n",
350 | "from ost.helpers import vector as vec\n",
351 | "\n",
352 | "# turn WKT into shapely geometry object\n",
353 | "shp_aoi = loads(s1_project.aoi)\n",
354 | "\n",
355 | "# use OST helper's function to create a quadrant buffer for subset\n",
356 | "subset_area = vec.geodesic_point_buffer(shp_aoi.y, shp_aoi.x, 10000, envelope=True)\n",
357 | "\n",
358 | "print('-----------------------------------------------------------------------------')\n",
359 | "latest_scene.create_ard(\n",
360 | " # we see our download path can be automatically generated by providing the Project's download directory\n",
361 | " infile=latest_scene.get_path(download_dir=s1_project.download_dir), \n",
362 | " # let's simply take our processing folder\n",
363 | " out_dir=s1_project.processing_dir, \n",
364 | " # define the subset\n",
365 | " subset=subset_area,\n",
366 | " # in case already processed, we will re-process\n",
367 | " overwrite=True\n",
368 | ")\n",
369 | "\n",
370 | "print('-----------------------------------------------------------------------------')\n",
371 | "print(' The path to our newly created ARD product can be obtained the following way:')\n",
372 | "latest_scene.ard_dimap"
373 | ]
374 | },
375 | {
376 | "cell_type": "markdown",
377 | "metadata": {
378 | "colab_type": "text",
379 | "id": "m4N-5dBu538G"
380 | },
381 | "source": [
382 | "### 10* - Create a RGB color composite\n",
383 | "\n",
384 | "As already demonstrated in OST Tutorial 1, we create an RGB GeoTiff, and visualize it."
385 | ]
386 | },
387 | {
388 | "cell_type": "code",
389 | "execution_count": null,
390 | "metadata": {
391 | "colab": {},
392 | "colab_type": "code",
393 | "id": "icq5Nf34538H"
394 | },
395 | "outputs": [],
396 | "source": [
397 | "latest_scene.create_rgb(outfile = s1_project.processing_dir.joinpath(f'{latest_scene.start_date}.tif'))\n",
398 | "latest_scene.visualise_rgb(shrink_factor=1)"
399 | ]
400 | }
401 | ],
402 | "metadata": {
403 | "colab": {
404 | "collapsed_sections": [],
405 | "name": "3 - Latest Sentinel-1 scene.ipynb",
406 | "provenance": []
407 | },
408 | "kernelspec": {
409 | "display_name": "Python 3",
410 | "language": "python",
411 | "name": "python3"
412 | },
413 | "language_info": {
414 | "codemirror_mode": {
415 | "name": "ipython",
416 | "version": 3
417 | },
418 | "file_extension": ".py",
419 | "mimetype": "text/x-python",
420 | "name": "python",
421 | "nbconvert_exporter": "python",
422 | "pygments_lexer": "ipython3",
423 | "version": "3.8.5"
424 | },
425 | "toc": {
426 | "base_numbering": 1,
427 | "nav_menu": {},
428 | "number_sections": false,
429 | "sideBar": true,
430 | "skip_h1_title": true,
431 | "title_cell": "Table of Contents",
432 | "title_sidebar": "Contents",
433 | "toc_cell": false,
434 | "toc_position": {},
435 | "toc_section_display": true,
436 | "toc_window_display": true
437 | }
438 | },
439 | "nbformat": 4,
440 | "nbformat_minor": 4
441 | }
442 |
--------------------------------------------------------------------------------
/4a - Sentinel-1 GRD Batch - Subset.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "metadata": {},
6 | "source": [
7 | " Open SAR Toolkit - Tutorial 4a, version 1.2, June 2020. Andreas Vollrath, ESA/ESRIN phi-lab\n",
8 | ""
9 | ]
10 | },
11 | {
12 | "cell_type": "markdown",
13 | "metadata": {},
14 | "source": [
15 | "\n",
16 | "\n",
17 | "--------\n",
18 | "\n",
19 | "# OST Tutorial IV-A\n",
20 | "## How to create near-daily timeseries over Vienna. Introduction to GRD Batch Processing part I.\n",
21 | "\n",
22 | "[](https://colab.research.google.com/github/ESA-PhiLab/OST_Notebooks/blob/master/4a%20-%20Sentinel-1%20GRD%20Batch%20-%20Subset.ipynb)\n",
23 | "\n",
24 | "--------\n",
25 | "\n",
26 | "**Short description**\n",
27 | "\n",
28 | "This notebook provides an introduction to the batch processing of Sentinel-1 GRD data using OST's *Sentinel1Batch* class. This is a subclass of the *Sentinel1* class, and thus inherits all the functionalities of the *Generic* and the *Sentinel1* classes for the generation of a project as well as data search and refinement as presented in the OST Tutorial II notebook. The *Sentinel1Batch* class holds functions for the batch processing of single calibrated backscatter products. Furthermore, time-series processing and the generation of multi-temporal statistics, referred to as timescans, are introduced. \n",
29 | "\n",
30 | "Within the given example, time-series for 4 different overlapping tracks are going to be produced over the city of Vienna, Austria. The notebook demonstrates:\n",
31 | "\n",
32 | "1. the reduction of data processing by automatically subsetting the data,\n",
33 | "2. time-series processing and the corresponding ARD types,\n",
34 | "3. merging the track-specific time-series into a single time-series with almost daily coverage, \n",
35 | "4. creation of a timeseries animation for outreach purposes.\n",
36 | "\n",
37 | "--------\n",
38 | "\n",
39 | "**Requirements**\n",
40 | "\n",
41 | "- a PC/Mac with at least 16GB of RAM\n",
42 | "- about 25 GB of free disk space\n",
43 | "- a Copernicus Open Data Hub user account, ideally valid for at least 7 days (https://scihub.copernicus.eu)\n",
44 | "--------\n",
45 | "\n",
46 | "**NOTE:** all cells that have an * after their number can be executed without changing any code. "
47 | ]
48 | },
49 | {
50 | "cell_type": "markdown",
51 | "metadata": {},
52 | "source": [
53 | "### 0\\* - Install OST and dependencies \n",
54 | "\n",
55 | "**NOTE:** Applies only if you haven't fully installed OST and its dependencies yet, e.g. on Google Colab, so uncomment the lines in this case."
56 | ]
57 | },
58 | {
59 | "cell_type": "code",
60 | "execution_count": null,
61 | "metadata": {},
62 | "outputs": [],
63 | "source": [
64 | "# !apt-get -y install wget\n",
65 | "# !wget https://raw.githubusercontent.com/ESA-PhiLab/OST_Notebooks/master/install_ost.sh\n",
66 | "# !bash install_ost.sh"
67 | ]
68 | },
69 | {
70 | "cell_type": "markdown",
71 | "metadata": {},
72 | "source": [
73 | "### 1* - Import of Libraries\n",
74 | "\n",
75 | "In this step we import some standard python libraries for OS-independent path handling, as well as the *Sentinel1Batch* class that handles the full workflow of search, download and processing of multiple GRD scenes. In addition, the OST helper module *vector* is loaded to create an AOI based on point coordinates, and the *raster* module for creating a time-series animation."
76 | ]
77 | },
78 | {
79 | "cell_type": "code",
80 | "execution_count": null,
81 | "metadata": {},
82 | "outputs": [],
83 | "source": [
84 | "# this is for the path handling and especially useful if you are on Windows\n",
85 | "from pathlib import Path\n",
86 | "from pprint import pprint\n",
87 | "\n",
88 | "\n",
89 | "# this is the s1Project class, that basically handles all the workflow from beginning to the end\n",
90 | "from ost import Sentinel1Batch\n",
91 | "from ost.helpers import vector, raster"
92 | ]
93 | },
94 | {
95 | "cell_type": "markdown",
96 | "metadata": {},
97 | "source": [
98 | "### 2* - Set up the project \n",
99 | "\n",
100 | "Here you are going to initialize the *Sentinel1Batch* class by determining the project folder, the AOI and the start and end date. In addition we determine the image product type (i.e. GRD) and the ARD type that we use for processing. In this case we choose OST-RTC, which produces Radiometrically Terrain Corrected products, i.e. images corrected for radiometric distortions along mountainous slopes. This type of ARD is advised when doing land cover and land use studies over rugged terrain."
101 | ]
102 | },
103 | {
104 | "cell_type": "code",
105 | "execution_count": null,
106 | "metadata": {},
107 | "outputs": [],
108 | "source": [
109 | "# define the project directory\n",
110 | "project_dir = Path.home().joinpath('OST_Tutorials/Tutorial_4a')\n",
111 | "\n",
112 | "# define the AOI via a point coordinate and create a squared buffer of 17.5km around it\n",
113 | "lat, lon = '48.25', '16.4' # Vienna\n",
114 | "aoi = vector.latlon_to_wkt(lat, lon, buffer_meter=17500, envelope=True)\n",
115 | "\n",
116 | "# define the start and end date\n",
117 | "start = '2020-05-01'\n",
118 | "end = '2020-05-31'\n",
119 | "\n",
120 | "# initialize the class to s1_grd instance\n",
121 | "s1_grd = Sentinel1Batch(\n",
122 | " project_dir=project_dir,\n",
123 | " aoi=aoi,\n",
124 | " start=start,\n",
125 | " end=end, \n",
126 | " product_type='GRD', \n",
127 | " ard_type='OST-RTC'\n",
128 | ")\n",
129 | "\n",
130 | "# do the search\n",
131 | "if not s1_grd.inventory_file:\n",
132 | " s1_grd.search()"
133 | ]
134 | },
135 | {
136 | "cell_type": "markdown",
137 | "metadata": {},
138 | "source": [
139 | "### 3* - Plot refined data inventory\n",
140 | "\n",
141 | "The resultant dataframe from the search inventory is visualised. We do not do a refinement step here, since all images are fully overlapping the AOI. This allows us to create a combined, almost daily time-series of all images."
142 | ]
143 | },
144 | {
145 | "cell_type": "code",
146 | "execution_count": null,
147 | "metadata": {},
148 | "outputs": [],
149 | "source": [
150 | "#---------------------------------------------------\n",
151 | "# for plotting purposes we use this iPython magic\n",
152 | "%matplotlib inline\n",
153 | "%pylab inline\n",
154 | "pylab.rcParams['figure.figsize'] = (19, 19)\n",
155 | "#---------------------------------------------------\n",
156 | "\n",
157 | "# plot the inventory\n",
158 | "s1_grd.plot_inventory(s1_grd.inventory, transparency=.1)"
159 | ]
160 | },
161 | {
162 | "cell_type": "markdown",
163 | "metadata": {},
164 | "source": [
165 | "### 4* - Download of GRD scenes\n",
166 | "\n",
167 | "As already shown in Tutorial II, you will download the scenes based on the inventory dataframe for the respective product key."
168 | ]
169 | },
170 | {
171 | "cell_type": "code",
172 | "execution_count": null,
173 | "metadata": {},
174 | "outputs": [],
175 | "source": [
176 | "s1_grd.download(s1_grd.inventory, concurrent=10)"
177 | ]
178 | },
179 | {
180 | "cell_type": "markdown",
181 | "metadata": {},
182 | "source": [
183 | "### 5* - Set ARD parameters\n",
184 | "\n",
185 | "Similar to the *Sentinel1Scene* class (Tutorial I and III), the *Sentinel1Batch* class handles the defintion of ARD types in a hierarchical dictionary structure. You can use the same types and steps to customize as for the *Sentinel1Scene* class. For our example, we already intialised the class instance with the OST-RTC ARD type, in order to remove the backscatter distortion on mountainous slopes. This applies to all single image processing in the first step. The subsequent time-series processing will bring all the imagery to a common grid and apply a multitemporal speckle filter, that is much more efficient than speckle filtering applied on a single image. Since speckle filters a conceptionalised to work on the raw power data of SAR imagery, the conversion to the dB scale is only applied after the multi-temporal speckle filtering. In addition, all images are clipped to the very same extent, that is defined by the common data coverage of all images per track as well as the AOI.\n",
186 | "\n",
187 | "Note that it is possible to change the datatype of the output to either unsigned 16 or 8-bit integer. The backscatter data is therefore linearly stretched between -30 to 5 dB. This has the advantage to reduce the necessary storage by a factor of 2 for 16-bit uint and a factor of 4 for 8-bit uint data.\n",
188 | "\n",
189 | "The exact processing steps are as follows and depend on the ARD type:\n",
190 | "\n",
191 | "| ARD Types | OST-GTC | OST-RTC | CEOS | Earth-Engine |\n",
192 | "| :------------- | :----------: | :-----------: | :----------: | -----------: |\n",
193 | "| **Time-series processing steps** | \n",
194 | "| Create stack | x | x | x | x |\n",
195 | "| Multi-temporal speckle filter | x | x | - | - |\n",
196 | "| dB conversion | x | x | x | - |\n",
197 | "| Layover/Shadow mask creation | x | x | x | x |\n",
198 | "| Datatype conversion | - | - | - | - |\n",
199 | "| Clip to common extent | x | x | x | x |"
200 | ]
201 | },
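The linear stretch described above can be sketched in a few lines of NumPy. This is an illustrative sketch of the dB-to-uint8 conversion, not OST's actual implementation; the function name, clipping and rounding behaviour are assumptions:

```python
import numpy as np

def db_to_uint8(db, db_min=-30.0, db_max=5.0):
    """Linearly stretch backscatter in dB to unsigned 8-bit integers.

    Illustrative sketch of the datatype conversion described above;
    OST's internal implementation may differ in detail.
    """
    db = np.clip(db, db_min, db_max)                 # clip to the stretch range
    scaled = (db - db_min) / (db_max - db_min) * 255  # rescale to 0..255
    return scaled.round().astype(np.uint8)

# -30 dB maps to 0, 5 dB maps to 255
print(db_to_uint8(np.array([-30.0, -12.5, 5.0])))
```

The same idea with a 0..65535 range gives the 16-bit variant.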
202 | {
203 | "cell_type": "code",
204 | "execution_count": null,
205 | "metadata": {},
206 | "outputs": [],
207 | "source": [
208 | "print('-----------------------------------------------------------------------------------------------------------')\n",
209 | "print('Time-series processing parameters hold in the configuration file:')\n",
210 | "print('-----------------------------------------------------------------------------------------------------------')\n",
211 | "print('')\n",
212 | "pprint(s1_grd.ard_parameters['time-series_ARD'])\n",
213 | "\n",
214 | "# custimze some single scene ARD parameters\n",
215 | "s1_grd.ard_parameters['single_ARD']['resolution'] = 50 # reduce for processing time and disk space\n",
216 | "s1_grd.ard_parameters['time-series_ARD']['dtype_output'] = 'uint8'"
217 | ]
218 | },
219 | {
220 | "cell_type": "markdown",
221 | "metadata": {},
222 | "source": [
223 | "### 6* - Run the batch routine\n",
224 | "\n",
225 | "To process all the data, including time-series and timescans is as easy as one command. All the complexity is handled by OST in the back, and you just have to wait, since processing can take quite a while. Note the keywords to aly higher level tim-series and timescan generation. Mosaicking refers toacross track mosaicking, which for this example is not the case. The *overwrite* argument tells OST if it should start processing from scratch (i.e. set to **True**), or process from where it stopped. The latter comes in handy when wokng on cloud instances that crash or automatically shutdown every once in a while. \n",
226 | "\n",
227 | "**Note** that subsetting is set automatically to **True** when all tracks hold in the inventory fully overlap the AOI."
228 | ]
229 | },
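The resume behaviour described above follows a common batch-processing pattern: skip any unit of work whose output already exists, unless overwriting is requested. A generic sketch with pathlib (not OST's actual code; the function name and the text written as a stand-in for real output are made up):

```python
from pathlib import Path
import tempfile

def process_scene(out_file: Path, overwrite: bool = False) -> bool:
    """Return True if the scene was (re)processed, False if skipped.

    Generic resume-from-where-it-stopped pattern; OST's batch routine
    applies the same idea per scene and per time-series step.
    """
    if out_file.exists() and not overwrite:
        return False  # output already there, skip
    out_file.write_text('processed')  # stand-in for the real processing
    return True

with tempfile.TemporaryDirectory() as tmp:
    out = Path(tmp) / 'scene.tif'
    print(process_scene(out))                  # True: first run processes
    print(process_scene(out))                  # False: resumed run skips
    print(process_scene(out, overwrite=True))  # True: forced reprocessing
```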
230 | {
231 | "cell_type": "code",
232 | "execution_count": null,
233 | "metadata": {},
234 | "outputs": [],
235 | "source": [
236 | "s1_grd.grds_to_ards(\n",
237 | " inventory_df=s1_grd.inventory, \n",
238 | " timeseries=True, \n",
239 | " timescan=True, \n",
240 | " mosaic=False, \n",
241 | " overwrite=False\n",
242 | ")"
243 | ]
244 | },
245 | {
246 | "cell_type": "markdown",
247 | "metadata": {},
248 | "source": [
249 | "### 7* - Merge single per-track time-series to one single time-series\n",
250 | "\n",
251 | "By using a little helper function from OST's raster module, combining the 4 time-series to a unique one is as easy the following command. Within your processing directory, a new foler called combined is created. If multi-temporal statistics for this new time-series should be created, set the timescan argument to **True**."
252 | ]
253 | },
254 | {
255 | "cell_type": "code",
256 | "execution_count": null,
257 | "metadata": {},
258 | "outputs": [],
259 | "source": [
260 | "raster.combine_timeseries(\n",
261 | " processing_dir=s1_grd.processing_dir, \n",
262 | " config_dict=s1_grd.config_dict, \n",
263 | " timescan=True\n",
264 | ")"
265 | ]
266 | },
267 | {
268 | "cell_type": "markdown",
269 | "metadata": {},
270 | "source": [
271 | "### 8* - Create a time-series animation\n",
272 | "\n",
273 | "Finally, a time-series animation is created. therefore we need to pass the time-series folder to the command. product_list expects a list of 1 to 3 elements. For GRD data this is either a single polarisation, or both bands. OST will calculate the power ratio of band 1 and 2 for a 3-band RGB composite. A shrink factor can be set to reduce image resolution and memory needs.\n",
274 | "\n",
275 | "**Note:** This needs imagemagick installed, which is not a default requirement by OST.\n",
276 | "You can install it on e.g. Ubuntu by typing:\n",
277 | "sudo apt-get install magick"
278 | ]
279 | },
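The three-band composite mentioned above (band 1, band 2 and their power ratio) can be sketched in NumPy. The function name and the small epsilon guard are illustrative assumptions; OST's exact scaling may differ:

```python
import numpy as np

def ratio_composite(band1, band2, epsilon=1e-9):
    """Stack band 1, band 2 and their power ratio into a 3-band array.

    Sketch of the RGB-composite idea described above; `epsilon` only
    guards against division by zero and is an assumption here.
    """
    ratio = band1 / (band2 + epsilon)  # power ratio fills the third band
    return np.dstack([band1, band2, ratio])

# e.g. VV and VH backscatter in the linear power domain
vv = np.full((2, 2), 0.04)
vh = np.full((2, 2), 0.01)
rgb = ratio_composite(vv, vh)
print(rgb.shape)  # (2, 2, 3)
```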
280 | {
281 | "cell_type": "code",
282 | "execution_count": null,
283 | "metadata": {},
284 | "outputs": [],
285 | "source": [
286 | "# define Time-series folder\n",
287 | "ts_folder = s1_grd.processing_dir.joinpath('combined/Timeseries')\n",
288 | "\n",
289 | "# create the animation\n",
290 | "raster.create_timeseries_animation(\n",
291 | " timeseries_folder=ts_folder, \n",
292 | " product_list=['bs.VV', 'bs.VH'], \n",
293 | " out_folder=s1_grd.processing_dir,\n",
294 | " shrink_factor=3, \n",
295 | " add_dates=True\n",
296 | ")"
297 | ]
298 | }
299 | ],
300 | "metadata": {
301 | "kernelspec": {
302 | "display_name": "Python 3",
303 | "language": "python",
304 | "name": "python3"
305 | },
306 | "language_info": {
307 | "codemirror_mode": {
308 | "name": "ipython",
309 | "version": 3
310 | },
311 | "file_extension": ".py",
312 | "mimetype": "text/x-python",
313 | "name": "python",
314 | "nbconvert_exporter": "python",
315 | "pygments_lexer": "ipython3",
316 | "version": "3.8.5"
317 | },
318 | "toc": {
319 | "base_numbering": 1,
320 | "nav_menu": {},
321 | "number_sections": false,
322 | "sideBar": true,
323 | "skip_h1_title": true,
324 | "title_cell": "Table of Contents",
325 | "title_sidebar": "Contents",
326 | "toc_cell": false,
327 | "toc_position": {},
328 | "toc_section_display": true,
329 | "toc_window_display": true
330 | }
331 | },
332 | "nbformat": 4,
333 | "nbformat_minor": 4
334 | }
335 |
--------------------------------------------------------------------------------
/4b - Sentinel-1 GRD Batch - Timeseries.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "metadata": {
6 | "colab_type": "text",
7 | "id": "P2SVmQcoWFBW"
8 | },
9 | "source": [
10 | " Open SAR Toolkit - Tutorial 4b, version 1.2, June 2020. Andreas Vollrath, ESA/ESRIN phi-lab\n",
11 | ""
12 | ]
13 | },
14 | {
15 | "cell_type": "markdown",
16 | "metadata": {
17 | "colab_type": "text",
18 | "id": "Y_9lvtkkWFBX"
19 | },
20 | "source": [
21 | "\n",
22 | "\n",
23 | "--------\n",
24 | "\n",
25 | "# OST Tutorial IV-B\n",
26 | "## How to create a timeseries animation of Iceberg A-68. Introduction to GRD Batch Processing part II.\n",
27 | "\n",
28 | "[](https://colab.research.google.com/github/ESA-PhiLab/OST_Notebooks/blob/master/4b%20-%20Sentinel-1%20GRD%20Batch%20-%20Timeseries.ipynb)\n",
29 | "\n",
30 | "--------\n",
31 | "\n",
32 | "**Short description**\n",
33 | "\n",
34 | "This notebook continues to introduce you to the general workflow of OST for the batch processing of GRD data using the *Sentinel1Batch* class. In this example:\n",
35 | "\n",
36 | "1. across-track mosaicking based on a refined inventory and\n",
37 | "2. processing in polar regions is shown.\n",
38 | "\n",
39 | "--------\n",
40 | "\n",
41 | "**Requirements**\n",
42 | "\n",
43 | "- a PC/Mac with at least 16GB of RAM\n",
44 | "- about 75 GB of free disk space\n",
45 | "- a Copernicus Open Data Hub user account, valid for at least 7 days (https://scihub.copernicus.eu)\n",
46 | "--------\n",
47 | "\n",
48 | "**NOTE:** all cells that have an * after its number can be executed without changing any code. "
49 | ]
50 | },
51 | {
52 | "cell_type": "markdown",
53 | "metadata": {},
54 | "source": [
55 | "### 0\\* - Install OST and dependencies \n",
56 | "\n",
57 | "**NOTE:** Applies only if you haven't fully installed OST yet, e.g. on Google Colab, "
58 | ]
59 | },
60 | {
61 | "cell_type": "code",
62 | "execution_count": null,
63 | "metadata": {
64 | "colab": {
65 | "base_uri": "https://localhost:8080/",
66 | "height": 1000
67 | },
68 | "colab_type": "code",
69 | "executionInfo": {
70 | "elapsed": 129719,
71 | "status": "ok",
72 | "timestamp": 1594827783444,
73 | "user": {
74 | "displayName": "Andreas Vollrath",
75 | "photoUrl": "",
76 | "userId": "16529617641432137889"
77 | },
78 | "user_tz": -120
79 | },
80 | "id": "JdI1jtExWkWH",
81 | "outputId": "2c0c26ee-4137-41d2-8c4c-b389833401e6"
82 | },
83 | "outputs": [],
84 | "source": [
85 | "# !apt-get -y install wget\n",
86 | "# !wget https://raw.githubusercontent.com/ESA-PhiLab/OST_Notebooks/master/install_ost.sh\n",
87 | "# !bash install_ost.sh"
88 | ]
89 | },
90 | {
91 | "cell_type": "markdown",
92 | "metadata": {
93 | "colab_type": "text",
94 | "id": "3cZq5gYiWFBY"
95 | },
96 | "source": [
97 | "### 1* - Import of libraries"
98 | ]
99 | },
100 | {
101 | "cell_type": "code",
102 | "execution_count": null,
103 | "metadata": {
104 | "colab": {},
105 | "colab_type": "code",
106 | "executionInfo": {
107 | "elapsed": 1205,
108 | "status": "ok",
109 | "timestamp": 1594828366359,
110 | "user": {
111 | "displayName": "Andreas Vollrath",
112 | "photoUrl": "",
113 | "userId": "16529617641432137889"
114 | },
115 | "user_tz": -120
116 | },
117 | "id": "6LgvmzL0WFBZ"
118 | },
119 | "outputs": [],
120 | "source": [
121 | "from pathlib import Path\n",
122 | "from pprint import pprint\n",
123 | "\n",
124 | "from ost import Sentinel1Batch\n",
125 | "from ost.helpers import vector, raster\n",
126 | "\n",
127 | "#---------------------------------------------------\n",
128 | "# for plotting purposes we use this iPython magic\n",
129 | "%matplotlib inline\n",
130 | "%pylab inline\n",
131 | "pylab.rcParams['figure.figsize'] = (19, 19)\n",
132 | "#---------------------------------------------------"
133 | ]
134 | },
135 | {
136 | "cell_type": "markdown",
137 | "metadata": {
138 | "colab_type": "text",
139 | "id": "Q8fB1i_6WFBd"
140 | },
141 | "source": [
142 | "### 2 - Set up the project \n",
143 | "\n",
144 | "This follows the logic of the prior OST Tutorial notebooks."
145 | ]
146 | },
147 | {
148 | "cell_type": "code",
149 | "execution_count": null,
150 | "metadata": {
151 | "colab": {
152 | "base_uri": "https://localhost:8080/",
153 | "height": 275
154 | },
155 | "colab_type": "code",
156 | "id": "FyKptPZJWFBe",
157 | "outputId": "47e445ea-a4b9-4270-ebca-6c93f11f9f95"
158 | },
159 | "outputs": [],
160 | "source": [
161 | "# define a project directory\n",
162 | "home = Path.home()\n",
163 | "# create a processing directory\n",
164 | "project_dir = home.joinpath('OST_Tutorials', '4b')\n",
165 | "\n",
166 | "# define aoi with helper function, i.e. get a buffered area around point coordinates\n",
167 | "lat, lon = '-67', '-61'\n",
168 | "aoi = vector.latlon_to_wkt(lat, lon, buffer_degree=1.5, envelope=True)\n",
169 | "\n",
170 | "# define the start and end date\n",
171 | "start = '2017-06-30'\n",
172 | "end = '2017-08-31'\n",
173 | "\n",
174 | "# initialize the class to s1_grd instance\n",
175 | "s1_grd = Sentinel1Batch(\n",
176 | " project_dir=project_dir,\n",
177 | " aoi = aoi,\n",
178 | " start = start,\n",
179 | " end = end,\n",
180 | " product_type='GRD'\n",
181 | ")\n",
182 | "\n",
183 | "# trigger the search\n",
184 | "s1_grd.search()\n",
185 | "s1_grd.plot_inventory()"
186 | ]
187 | },
188 | {
189 | "cell_type": "markdown",
190 | "metadata": {
191 | "colab_type": "text",
192 | "id": "mr8nkbgRWFBi"
193 | },
194 | "source": [
195 | "### 3 - Search Refinement\n",
196 | "\n",
197 | "In order to create a time-series of multiple tracks, a pre-condition is that all tracks feature the same amount of acquistions within our Time of Interest.\n",
198 | "Let's use some pandas syntax to see if this is the case:"
199 | ]
200 | },
201 | {
202 | "cell_type": "code",
203 | "execution_count": null,
204 | "metadata": {},
205 | "outputs": [],
206 | "source": [
207 | "df = s1_grd.inventory.pivot_table(index=['relativeorbit', 'acquisitiondate'], aggfunc='size').reset_index()\n",
208 | "df.pivot_table(index='relativeorbit', aggfunc='size').reset_index()"
209 | ]
210 | },
211 | {
212 | "cell_type": "markdown",
213 | "metadata": {},
214 | "source": [
215 | "As in most cases, we do not fulfill this pre-condition. By considering all tracks, our time-series would need to be reduced to 5 acquisitions. However, images taken over track 9 are not necessary, since our AOI is fully covered by the other 2 tracks.\n",
216 | "\n",
217 | "As already mentioned in OST Tutorial 2, the *refine_inventory* method takes care of those issues and prepares the inventory in a way that it is suitable for across-track mosaic time-series. This includes the splitting of images by orbit direction and polarzation mode in the first place. In addition, it checks if some tracks can be excluded because all the others fully overlap the AOI. In this way we reduce the amount of images to process, while optimising for our later time-series processing. See OST Tutorial 2 for full explanation and arguments."
218 | ]
219 | },
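The track-exclusion check described above can be illustrated with shapely (an OST dependency): a track is redundant when the union of the remaining footprints still fully contains the AOI. The geometries below are made-up rectangles, not real Sentinel-1 footprints, and `redundant_tracks` is a hypothetical helper, not OST's actual refinement code:

```python
from shapely.geometry import box
from shapely.ops import unary_union

# hypothetical AOI and per-track footprints (illustrative rectangles only)
aoi = box(0, 0, 2, 2)
track_footprints = {
    '9': box(0.5, 0.5, 1.5, 1.5),   # lies inside what the other two cover
    '38': box(-1, -1, 3, 1.2),
    '111': box(-1, 0.8, 3, 3),
}

def redundant_tracks(aoi, footprints):
    """Return tracks whose removal still leaves the AOI fully covered."""
    redundant = []
    for track in footprints:
        others = [geom for t, geom in footprints.items() if t != track]
        if unary_union(others).contains(aoi):
            redundant.append(track)
    return redundant

print(redundant_tracks(aoi, track_footprints))
```

With these toy geometries, only track '9' can be dropped without losing AOI coverage, mirroring the exclusion of track 9 described above.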
220 | {
221 | "cell_type": "code",
222 | "execution_count": null,
223 | "metadata": {},
224 | "outputs": [],
225 | "source": [
226 | "# do the refinement\n",
227 | "s1_grd.refine_inventory()"
228 | ]
229 | },
230 | {
231 | "cell_type": "markdown",
232 | "metadata": {},
233 | "source": [
234 | "The output of the refinement procedure gives some infos, e.g. the exclusion of track 9. At the very end it summarises the information. Since in our case we only have imagery acquired in descending orbit and HH polarization, we see that 10 mosaics in time can be created. Another **important** infomation is the **key** defiend by orbit direction and polarisation, i.e. **DESCENDING_HH**. We will need this to select the refined inventory stored in the *refined_inventory_dict* attribute of our class instance as follows:"
235 | ]
236 | },
237 | {
238 | "cell_type": "code",
239 | "execution_count": null,
240 | "metadata": {
241 | "colab": {
242 | "base_uri": "https://localhost:8080/",
243 | "height": 1000
244 | },
245 | "colab_type": "code",
246 | "executionInfo": {
247 | "elapsed": 1461,
248 | "status": "ok",
249 | "timestamp": 1594828707720,
250 | "user": {
251 | "displayName": "Andreas Vollrath",
252 | "photoUrl": "",
253 | "userId": "16529617641432137889"
254 | },
255 | "user_tz": -120
256 | },
257 | "id": "FnlnX_l8WFBj",
258 | "outputId": "836e7ec2-eaf1-4ec1-a5db-d4fefa41ef28"
259 | },
260 | "outputs": [],
261 | "source": [
262 | "# select the key from output of refinement command\n",
263 | "key = 'DESCENDING_HH'\n",
264 | "\n",
265 | "# we wrap the information of the length of our refined inventory in a print statement\n",
266 | "print(f'The refined inventory holds {len(s1_grd.refined_inventory_dict[key])} acquisitions to process.')\n",
267 | "\n",
268 | "# we plot the full Inventory on a map\n",
269 | "s1_grd.plot_inventory(s1_grd.refined_inventory_dict[key], transparency=.05)"
270 | ]
271 | },
272 | {
273 | "cell_type": "markdown",
274 | "metadata": {},
275 | "source": [
276 | "### 3 - Data download\n",
277 | "\n",
278 | "Here we download the data. It is best to use the ASF data mirror. "
279 | ]
280 | },
281 | {
282 | "cell_type": "code",
283 | "execution_count": null,
284 | "metadata": {
285 | "colab": {},
286 | "colab_type": "code",
287 | "id": "WV9i2KwRWFBs"
288 | },
289 | "outputs": [],
290 | "source": [
291 | "s1_grd.download(s1_grd.refined_inventory_dict[key], concurrent=10)"
292 | ]
293 | },
294 | {
295 | "cell_type": "markdown",
296 | "metadata": {},
297 | "source": [
298 | "### 5* - Customise ARD parameters\n",
299 | "\n",
300 | "1. For data reduction we lower the resolution to 100 meters. \n",
301 | "2. This will also reduce speckle, so we do not need it neither. \n",
302 | "3. We do not care about Layover and Shadow for this example, since there is anyway no high-resolution DEM for Antarctica that could provide sufficient inforamtion.\n",
303 | "4. Our time-series output will be scaled to dB scale for better stretching and visualisation\n",
304 | "5. We further reduce the amount of data by converting from 32-bit float to unsigned 8-bit integer data type\n",
305 | "6. Our AOI is only a rough seletion for the Area of Interest. We do not want to cut it to the boundaries to see the full data provided by the selected imagery."
306 | ]
307 | },
308 | {
309 | "cell_type": "code",
310 | "execution_count": null,
311 | "metadata": {
312 | "colab": {},
313 | "colab_type": "code",
314 | "id": "Eu5GYUasWFBw"
315 | },
316 | "outputs": [],
317 | "source": [
318 | "s1_grd.ard_parameters['single_ARD']['resolution'] = 100\n",
319 | "s1_grd.ard_parameters['single_ARD']['remove_mt_speckle'] = False\n",
320 | "s1_grd.ard_parameters['single_ARD']['create_ls_mask'] = False\n",
321 | "s1_grd.ard_parameters['time-series_ARD']['to_db'] = True\n",
322 | "s1_grd.ard_parameters['time-series_ARD']['dtype_output'] = 'uint8'\n",
323 | "s1_grd.ard_parameters['mosaic']['cut_to_aoi'] = False\n",
324 | "\n",
325 | "pprint(s1_grd.ard_parameters)"
326 | ]
327 | },
328 | {
329 | "cell_type": "markdown",
330 | "metadata": {},
331 | "source": [
332 | "### 6* - Produce Timeseries Mosaics\n",
333 | "\n",
334 | "Now the creation of our mosaic time-series is just a single command away."
335 | ]
336 | },
337 | {
338 | "cell_type": "code",
339 | "execution_count": null,
340 | "metadata": {
341 | "colab": {},
342 | "colab_type": "code",
343 | "id": "GYgdavVDWFBz"
344 | },
345 | "outputs": [],
346 | "source": [
347 | "s1_grd.grds_to_ards(\n",
348 | " inventory_df=s1_grd.refined_inventory_dict[key], \n",
349 | " timeseries=True, \n",
350 | " timescan=False, \n",
351 | " mosaic=True, \n",
352 | " overwrite=False\n",
353 | ")"
354 | ]
355 | },
356 | {
357 | "cell_type": "markdown",
358 | "metadata": {
359 | "colab_type": "text",
360 | "id": "Iat7lGGTWFB2"
361 | },
362 | "source": [
363 | "### 7* - Creation of an animated GIF of a timeseries\n",
364 | "\n",
365 | "**Note:** This needs imagemagick installed, which is not a default requirement by OST.\n",
366 | "You can install it on e.g. Ubuntu by typing:\n",
367 | "sudo apt-get install magick"
368 | ]
369 | },
370 | {
371 | "cell_type": "code",
372 | "execution_count": null,
373 | "metadata": {
374 | "colab": {},
375 | "colab_type": "code",
376 | "id": "6OF9NCoXWFB2"
377 | },
378 | "outputs": [],
379 | "source": [
380 | "from ost.helpers import raster\n",
381 | "\n",
382 | "# define the timeseries folder for which the animation should be created\n",
383 | "ts_folder = s1_grd.processing_dir.joinpath('Mosaic/Timeseries')\n",
384 | "\n",
385 | "raster.create_timeseries_animation(\n",
386 | " ts_folder, \n",
387 | " ['bs.HH'], \n",
388 | " s1_grd.processing_dir,\n",
389 | " shrink_factor=15,\n",
390 | " add_dates=True,\n",
391 | " duration=0.25\n",
392 | ")"
393 | ]
394 | }
395 | ],
396 | "metadata": {
397 | "colab": {
398 | "collapsed_sections": [],
399 | "name": "4b - Sentinel-1 GRD Batch processing - Timeseries.ipynb",
400 | "provenance": [],
401 | "toc_visible": true
402 | },
403 | "kernelspec": {
404 | "display_name": "Python 3",
405 | "language": "python",
406 | "name": "python3"
407 | },
408 | "language_info": {
409 | "codemirror_mode": {
410 | "name": "ipython",
411 | "version": 3
412 | },
413 | "file_extension": ".py",
414 | "mimetype": "text/x-python",
415 | "name": "python",
416 | "nbconvert_exporter": "python",
417 | "pygments_lexer": "ipython3",
418 | "version": "3.8.5"
419 | },
420 | "toc": {
421 | "base_numbering": 1,
422 | "nav_menu": {},
423 | "number_sections": false,
424 | "sideBar": true,
425 | "skip_h1_title": true,
426 | "title_cell": "Table of Contents",
427 | "title_sidebar": "Contents",
428 | "toc_cell": false,
429 | "toc_position": {},
430 | "toc_section_display": true,
431 | "toc_window_display": true
432 | }
433 | },
434 | "nbformat": 4,
435 | "nbformat_minor": 4
436 | }
437 |
--------------------------------------------------------------------------------
/4c - Sentinel-1 GRD Batch - Timescan.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "metadata": {},
6 | "source": [
7 | " Open SAR Toolkit - Tutorial 4c, version 1.2, August 2020. Andreas Vollrath, ESA/ESRIN phi-lab\n",
8 | ""
9 | ]
10 | },
11 | {
12 | "cell_type": "markdown",
13 | "metadata": {},
14 | "source": [
15 | "\n",
16 | "\n",
17 | "--------\n",
18 | "\n",
19 | "# OST Tutorial IV - C\n",
20 | "## Create country-wide mosaic time-series and timescan. Introduction to GRD Batch processing part III.\n",
21 | "\n",
22 | "[](https://colab.research.google.com/github/ESA-PhiLab/OST_Notebooks/blob/master/4c%20-%20Sentinel-1%20GRD%20Batch%20-%20Timescan.ipynb)\n",
23 | "\n",
24 | "--------\n",
25 | "\n",
26 | "**Short description**\n",
27 | "\n",
28 | "This notebook is very similar to the Tutorial IVa, with the difference that you will process GRD data over a larger area and create time-series and timescan mosaics. It therefore represents the workflow for large-scale processing.\n",
29 | "\n",
30 | "--------\n",
31 | "\n",
32 | "**Requirements**\n",
33 | "\n",
34 | "- a PC/Mac with at least 16GB of RAM\n",
35 | "- about 150GB of free disk space\n",
36 | "- a Copernicus Open Data Hub user account, valid for at least 7 days (https://scihub.copernicus.eu)\n",
37 | "--------\n",
38 | "\n",
39 | "**NOTE:** all cells that have an * after its number can be executed without changing any code. "
40 | ]
41 | },
42 | {
43 | "cell_type": "markdown",
44 | "metadata": {},
45 | "source": [
46 | "### 0\\* - Install OST and dependencies \n",
47 | "\n",
48 | "**NOTE:** Applies only if you haven't fully installed OST and its dependencies yet, e.g. on Google Colab, so uncomment the lines in this case."
49 | ]
50 | },
51 | {
52 | "cell_type": "code",
53 | "execution_count": null,
54 | "metadata": {},
55 | "outputs": [],
56 | "source": [
57 | "# !apt-get -y install wget\n",
58 | "# !wget https://raw.githubusercontent.com/ESA-PhiLab/OST_Notebooks/master/install_ost.sh\n",
59 | "# !bash install_ost.sh"
60 | ]
61 | },
62 | {
63 | "cell_type": "markdown",
64 | "metadata": {},
65 | "source": [
66 | "### 1* - Import of Libraries\n",
67 | "\n",
68 | "In this step we import some standard python libraries for OS independent path handling as well as the *Sentinel1_GRDBatch* class thta handles the full workflow from search, download and processing of multiple GRD scenes. In addition, the OST helper module *vector* is loaded to create an AOI based on Point coordinates, and the *raster* module for creating a time-series animation."
69 | ]
70 | },
71 | {
72 | "cell_type": "code",
73 | "execution_count": null,
74 | "metadata": {},
75 | "outputs": [],
76 | "source": [
77 | "from pathlib import Path\n",
78 | "from pprint import pprint\n",
79 | "\n",
80 | "from ost import Sentinel1Batch\n",
81 | "from ost.helpers import vector, raster"
82 | ]
83 | },
84 | {
85 | "cell_type": "markdown",
86 | "metadata": {},
87 | "source": [
88 | "### 2* - Set up the project \n",
89 | "\n",
90 | "Here you going to initialize the *Sentinel1_GRDBatch* class by determining the project folder, the AOI and the start and end date. Since you should be already familiar with the *search* and *refine* functions, we execute them within the same cell."
91 | ]
92 | },
93 | {
94 | "cell_type": "code",
95 | "execution_count": null,
96 | "metadata": {},
97 | "outputs": [],
98 | "source": [
99 | "# define a project directory\n",
100 | "home = Path.home()\n",
101 | "# create a processing directory\n",
102 | "project_dir = home.joinpath('OST_Tutorials', 'Tutorial_4c')\n",
103 | "\n",
104 | "# define aoi with helper function, i.e. get a buffered area around point coordinates\n",
105 | "aoi = 'IRL'\n",
106 | "\n",
107 | "#define the start and end date\n",
108 | "start = '2019-02-01'\n",
109 | "end = '2019-04-30'\n",
110 | "\n",
111 | "# initialize the class to s1_grd instance\n",
112 | "s1_grd = Sentinel1Batch(\n",
113 | " project_dir=project_dir,\n",
114 | " aoi = aoi,\n",
115 | " start = start,\n",
116 | " end = end,\n",
117 | " product_type='GRD'\n",
118 | ")\n",
119 | "\n",
120 | "# trigger the search\n",
121 | "s1_grd.search()\n",
122 | "\n",
123 | "# optional: once you did the search the first time, you can load \n",
124 | "# the full inventory uncommenting the follwoing 2 lines\n",
125 | "# and commenting the search command\n",
126 | "# s1_grd.inventory_file = s1_grd.inventory_dir.joinpath('full.inventory.gpkg')\n",
127 | "# s1_grd.read_inventory()\n",
128 | "\n",
129 | "# do the refinement\n",
130 | "s1_grd.refine_inventory()"
131 | ]
132 | },
133 | {
134 | "cell_type": "markdown",
135 | "metadata": {},
136 | "source": [
137 | "### 3* - Plot refined data inventory\n",
138 | "\n",
139 | "Here you will visualize the resultant dataframes from the refined search inventory based on the product key."
140 | ]
141 | },
142 | {
143 | "cell_type": "code",
144 | "execution_count": null,
145 | "metadata": {},
146 | "outputs": [],
147 | "source": [
148 | "#---------------------------------------------------\n",
149 | "# for plotting purposes we use this iPython magic\n",
150 | "%matplotlib inline\n",
151 | "%pylab inline\n",
152 | "pylab.rcParams['figure.figsize'] = (19, 19)\n",
153 | "#---------------------------------------------------\n",
154 | "\n",
155 | "# search command\n",
156 | "key = 'ASCENDING_VVVH'\n",
157 | "print(f'Our refined inventory holds {len(s1_grd.refined_inventory_dict[key])} frames to process.')\n",
158 | "# we plot the full Inventory on a map\n",
159 | "s1_grd.plot_inventory(s1_grd.refined_inventory_dict[key], transparency=.1)"
160 | ]
161 | },
162 | {
163 | "cell_type": "markdown",
164 | "metadata": {},
165 | "source": [
166 | "### 4* - Download of GRD scenes\n",
167 | "\n",
168 | "As already shown in Tutorial II, you will download the scenes based on the refined inventory dataframe for the respective produckt key."
169 | ]
170 | },
171 | {
172 | "cell_type": "code",
173 | "execution_count": null,
174 | "metadata": {},
175 | "outputs": [],
176 | "source": [
177 | "s1_grd.download(s1_grd.refined_inventory_dict[key])"
178 | ]
179 | },
180 | {
181 | "cell_type": "markdown",
182 | "metadata": {},
183 | "source": [
184 | "### 5* - Set ARD parameters\n",
185 | "\n",
186 | "Similar to the *Sentinel1-Scene* class (Tutorial I and III), the *Sentinel1-GRDBatch* class handles the defintion of ARD types in a hierarchical dictionary structure. You can use the same types and steps to customize as for the *Sentinel1-Scene* class."
187 | ]
188 | },
189 | {
190 | "cell_type": "code",
191 | "execution_count": null,
192 | "metadata": {},
193 | "outputs": [],
194 | "source": [
195 | "# single scene ARD parameters\n",
196 | "s1_grd.ard_parameters['single_ARD']['resolution'] = 50\n",
197 | "s1_grd.ard_parameters['single_ARD']['product_type'] = 'GTC-gamma0'\n",
198 | "s1_grd.ard_parameters['single_ARD']['create_ls_mask'] = False\n",
199 | "\n",
200 | "# time-series ARD\n",
201 | "s1_grd.ard_parameters['time-series_ARD']['remove_mt_speckle'] = False\n",
202 | "\n",
203 | "# our borders for Ireland are quite rough. We therefore won't clip the final mosaics\n",
204 | "s1_grd.ard_parameters['mosaic']['cut_to_aoi'] = False\n",
205 | "\n",
206 | "#s1_grd.config_dict['temp_dir'] = '/ram'\n",
207 | "pprint(s1_grd.ard_parameters)"
208 | ]
209 | },
210 | {
211 | "cell_type": "markdown",
212 | "metadata": {},
213 | "source": [
214 | "### 6* - Run the batch routine\n",
215 | "\n",
216 | "To process all the data, including time-series and timescans is as easy as one command. All the complexity is handled behind, and you just have to wait, since processing can take quite a while."
217 | ]
218 | },
219 | {
220 | "cell_type": "code",
221 | "execution_count": null,
222 | "metadata": {},
223 | "outputs": [],
224 | "source": [
225 | "s1_grd.grds_to_ards(\n",
226 | " inventory_df=s1_grd.refined_inventory_dict[key], \n",
227 | " timeseries=True, \n",
228 | " timescan=True, \n",
229 | " overwrite=False\n",
230 | ")"
231 | ]
232 | },
233 | {
234 | "cell_type": "markdown",
235 | "metadata": {},
236 | "source": [
237 | "### 7* - Create a time-series animation\n",
238 | "\n",
239 | "For interactive presentations it is nice to have animated \"movies\". The following command allows you to create animated time-series of oyur processed data."
240 | ]
241 | },
242 | {
243 | "cell_type": "code",
244 | "execution_count": null,
245 | "metadata": {},
246 | "outputs": [],
247 | "source": [
248 | "# define Time-series folder\n",
249 | "ts_folder = s1_grd.processing_dir.joinpath('23/Timeseries')\n",
250 | "\n",
251 | "# create the animation\n",
252 | "raster.create_timeseries_animation(\n",
253 | " timeseries_folder=ts_folder, \n",
254 | " product_list=['bs.VV', 'bs.VH'], \n",
255 | " out_folder=s1_grd.processing_dir,\n",
256 | " shrink_factor=10, \n",
257 | " add_dates=True\n",
258 | ")"
259 | ]
260 | }
261 | ],
262 | "metadata": {
263 | "kernelspec": {
264 | "display_name": "Python 3",
265 | "language": "python",
266 | "name": "python3"
267 | },
268 | "language_info": {
269 | "codemirror_mode": {
270 | "name": "ipython",
271 | "version": 3
272 | },
273 | "file_extension": ".py",
274 | "mimetype": "text/x-python",
275 | "name": "python",
276 | "nbconvert_exporter": "python",
277 | "pygments_lexer": "ipython3",
278 | "version": "3.8.5"
279 | },
280 | "toc": {
281 | "base_numbering": 1,
282 | "nav_menu": {},
283 | "number_sections": false,
284 | "sideBar": true,
285 | "skip_h1_title": true,
286 | "title_cell": "Table of Contents",
287 | "title_sidebar": "Contents",
288 | "toc_cell": false,
289 | "toc_position": {},
290 | "toc_section_display": true,
291 | "toc_window_display": true
292 | }
293 | },
294 | "nbformat": 4,
295 | "nbformat_minor": 4
296 | }
297 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # OST_Notebooks
2 |
3 | This repo will be archived shortly; the most up-to-date notebooks, along with installation instructions and other OST information, are now maintained here: https://opensartoolkit.readthedocs.io/en/latest/example/index.html
4 |
5 | The notebooks in this repository provide *getting started* tutorials for the Open SAR Toolkit (OST), which is hosted in the ESA-PhiLab GitHub organisation.
6 |
7 | Just clone them and get started. Note that for processing exercises you will need sufficient disk space and processing power.
8 |
9 | They can also be run on Google Colab.
10 |
--------------------------------------------------------------------------------
/Tips and Tricks.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "metadata": {},
6 | "source": [
7 | " Open SAR Toolkit - Tips and Tricks, version 1.0, September 2019. Andreas Vollrath, ESA/ESRIN phi-lab\n",
8 | ""
9 | ]
10 | },
11 | {
12 | "cell_type": "markdown",
13 | "metadata": {},
14 | "source": [
15 | "\n",
16 | "\n",
17 | "--------\n",
18 | "\n",
19 | "# OST Tips and Tricks\n",
20 | "\n",
21 | "This notebook provides code snippets that might be useful for specific OST usage.\n",
22 | "\n",
23 | "[](https://colab.research.google.com/github/ESA-PhiLab/OST_Notebooks/blob/master/Tips%20and%20Tricks.ipynb)\n",
24 | "\n",
25 | "--------\n",
26 | "\n",
27 | "**Short description**\n",
28 | "\n",
29 | "This notebook shows some useful low level functionality of OST, as well as some tricks that can be helpful for certain projects. \n",
30 | "\n",
31 | "- **1:** Create a squared AOI from point coordinates\n",
32 | "- **2:** Create an OST-conform download directory from already downloaded files\n",
33 | "- **3:** Create the Time of Interest using Python's datetime class\n",
34 | "- **4:** Load an already created inventory file into an OST class instance\n",
35 | "- **5:** How to download an offline scene from ESA scihub archive\n",
36 | "- **6:** Speed up processing by using a RAM disk for temporary file storage\n",
37 | "--------"
38 | ]
39 | },
40 | {
41 | "cell_type": "markdown",
42 | "metadata": {},
43 | "source": [
44 | "### 1 - Create a squared AOI from Lat/Lon point coordinates\n",
45 | "\n",
46 | "If you do not have a shapefile of your Area Of Interest (AOI), but rather want to define it by latitude and longitude plus a buffer, there is a helper function that lets you do exactly this. \n",
47 | "\n",
48 | "**Note** that there are two buffer options, in meters and in degrees, respectively. The buffer in meters transforms the point from Lat/Lon into meters based on an equidistant projection. This may result in non-squared rectangles towards the poles when plotting on a Lat/Lon grid (see second cell)."
49 | ]
50 | },
51 | {
52 | "cell_type": "code",
53 | "execution_count": 1,
54 | "metadata": {},
55 | "outputs": [
56 | {
57 | "name": "stdout",
58 | "output_type": "stream",
59 | "text": [
60 | "POLYGON ((11.5000000000000000 77.5000000000000000, 12.5000000000000000 77.5000000000000000, 12.5000000000000000 78.5000000000000000, 11.5000000000000000 78.5000000000000000, 11.5000000000000000 77.5000000000000000))\n",
61 | "POLYGON ((11.5693278865807 77.91043024406456, 12.4306721134193 77.91043024406456, 12.4306721134193 78.08956918035963, 11.5693278865807 78.08956918035963, 11.5693278865807 77.91043024406456))\n"
62 | ]
63 | }
64 | ],
65 | "source": [
66 | "# import the vector helper module of OST\n",
67 | "from ost.helpers import vector\n",
68 | "\n",
69 | "# define point by lat/lon coordinates\n",
70 | "lat, lon = '78', '12'\n",
71 | "\n",
72 | "# apply the function with a buffer in degrees (wkt1) and in meters (wkt2)\n",
73 | "wkt1 = vector.latlon_to_wkt(lat, lon, buffer_degree=0.5, envelope=True)\n",
74 | "wkt2 = vector.latlon_to_wkt(lat, lon, buffer_meter=10000, envelope=True)\n",
75 | "print(wkt1)\n",
76 | "print(wkt2)"
77 | ]
78 | },
79 | {
80 | "cell_type": "code",
81 | "execution_count": 2,
82 | "metadata": {},
83 | "outputs": [],
84 | "source": [
85 | "# we plot the wkt with geopandas and matplotlib\n",
86 | "import geopandas as gpd\n",
87 | "import matplotlib.pyplot as plt\n",
88 | "\n",
89 | "# load world borders for background\n",
90 | "world = gpd.read_file(gpd.datasets.get_path('naturalearth_lowres'))\n",
91 | "# import aoi as gdf\n",
92 | "aoi1 = vector.wkt_to_gdf(wkt1)\n",
93 | "aoi2 = vector.wkt_to_gdf(wkt2)\n",
94 | "# get bounds of AOI\n",
95 | "bounds = aoi1.geometry.bounds\n",
96 | "# get world map as base\n",
97 | "base = world.plot(color='lightgrey', edgecolor='white')\n",
98 | "# plot aoi\n",
99 | "aoi1.plot(ax=base, color='None', edgecolor='black')\n",
100 | "aoi2.plot(ax=base, color='None', edgecolor='red')\n",
101 | "\n",
102 | "# set bounds\n",
103 | "plt.xlim([bounds.minx.min()-2, bounds.maxx.max()+2])\n",
104 | "plt.ylim([bounds.miny.min()-2, bounds.maxy.max()+2])\n",
105 | "plt.grid(color='grey', linestyle='-', linewidth=0.1)"
106 | ]
107 | },
108 | {
109 | "cell_type": "markdown",
110 | "metadata": {},
111 | "source": [
112 | "### 2 Create an OST-conform download directory\n",
113 | "\n",
114 | "OST stores downloaded data in a certain directory hierarchy. If you have already downloaded your Sentinel-1 data yourself, you can automatically create an OST-conform directory, into which all scenes found in the input directory will be moved according to this hierarchical structure. "
115 | ]
116 | },
117 | {
118 | "cell_type": "code",
119 | "execution_count": null,
120 | "metadata": {},
121 | "outputs": [],
122 | "source": [
123 | "from ost.s1 import download\n",
124 | "\n",
125 | "input_directory = '/path/to/files/already/downloaded'\n",
126 | "output_directory = '/path/to/OST/download/directory'\n",
127 | "download.restore_download_dir(input_directory, output_directory)"
128 | ]
129 | },
130 | {
131 | "cell_type": "markdown",
132 | "metadata": {},
133 | "source": [
134 | "\n",
135 | "### 3 Create the Time of Interest using Python's datetime class\n",
136 | "\n",
137 | "Sometimes it is useful to create dynamic timestamps for the definition of the time of interest. Here we show how Python's datetime library is used in combination with the date format OST needs for class instantiation. "
138 | ]
139 | },
140 | {
141 | "cell_type": "markdown",
142 | "metadata": {},
143 | "source": [
144 | "#### 3.1. Last 30 days from today onwards.\n",
145 | "\n",
146 | "**Note**, we do not need to set an end date, since this is by default today."
147 | ]
148 | },
149 | {
150 | "cell_type": "code",
151 | "execution_count": null,
152 | "metadata": {},
153 | "outputs": [],
154 | "source": [
155 | "from datetime import datetime, timedelta\n",
156 | "start_date = (datetime.today() - timedelta(days=30)).strftime(\"%Y-%m-%d\")\n",
157 | "print(start_date)"
158 | ]
159 | },
160 | {
161 | "cell_type": "markdown",
162 | "metadata": {},
163 | "source": [
164 | "#### 3.2. Target day (create date range around a certain date)"
165 | ]
166 | },
167 | {
168 | "cell_type": "code",
169 | "execution_count": null,
170 | "metadata": {},
171 | "outputs": [],
172 | "source": [
173 | "# define the target day and the range in days around it\n",
174 | "target_day = '2018-11-28'\n",
175 | "delta_days = 60\n",
176 | "\n",
177 | "# we set start and end to 60 days before and after the event, respectively\n",
178 | "start = (datetime.strptime(target_day, \"%Y-%m-%d\") - timedelta(days=delta_days)).strftime(\"%Y-%m-%d\")\n",
179 | "end = (datetime.strptime(target_day, \"%Y-%m-%d\") + timedelta(days=delta_days)).strftime(\"%Y-%m-%d\")\n",
180 | "\n",
181 | "print(start, end)"
182 | ]
183 | },
184 | {
185 | "cell_type": "markdown",
186 | "metadata": {},
187 | "source": [
188 | "### 4 Load an already created inventory file into an OST class instance \n",
189 | "\n",
190 | "Sometimes you need to re-initialize one of the batch processing classes. This results in an empty inventory attribute. In order to skip the search on scihub, you can load an already created inventory file and set it as an attribute in the following way:"
191 | ]
192 | },
193 | {
194 | "cell_type": "code",
195 | "execution_count": null,
196 | "metadata": {},
197 | "outputs": [],
198 | "source": [
199 | "from ost import Sentinel1Batch\n\ns1_instance = Sentinel1Batch(args)\n",
200 | "s1_instance.inventory_file = s1_instance.inventory_dir.joinpath('full_inventory.gpkg')\n",
201 | "s1_instance.read_inventory()"
202 | ]
203 | },
204 | {
205 | "cell_type": "markdown",
206 | "metadata": {},
207 | "source": [
208 | "### 5 How to download an offline scene from ESA scihub archive\n",
209 | "\n",
210 | "ESA's scihub catalogue features a rolling archive of Sentinel-1 data, meaning that older products are offline and need to be produced on demand. OST provides the functionality to do this in a programmatic way."
211 | ]
212 | },
213 | {
214 | "cell_type": "code",
215 | "execution_count": null,
216 | "metadata": {},
217 | "outputs": [],
218 | "source": [
219 | "from ost import Sentinel1_Scene\n",
220 | "from ost.helpers.scihub import connect\n",
221 | "\n",
222 | "# create instance\n",
223 | "s1 = Sentinel1_Scene('S1A_IW_GRDH_1SDV_20141004T230354_20141004T230423_002686_002FFD_062B')\n",
224 | "# connection to Scihub\n",
225 | "opener = connect()\n",
226 | "# check online status\n",
227 | "print('The scene\\'s online status is: ', s1.scihub_online_status(opener))"
228 | ]
229 | },
230 | {
231 | "cell_type": "code",
232 | "execution_count": null,
233 | "metadata": {},
234 | "outputs": [],
235 | "source": [
236 | "import time\n",
237 | "\n",
238 | "# trigger production\n",
239 | "s1.scihub_trigger_production(opener)\n",
240 | "\n",
241 | "# run a loop until the scene is online\n",
242 | "status = s1.scihub_online_status(opener)\nwhile status is False:\n",
243 | "    time.sleep(60)  # do not query too often; production can take a while\n",
244 | "    status = s1.scihub_online_status(opener)\n",
245 | "    print(status)\n",
246 | " \n",
247 | "s1.download('/path/to/download') "
248 | ]
249 | },
250 | {
251 | "cell_type": "markdown",
252 | "metadata": {},
253 | "source": [
254 | "### 6 Speed up processing by using a RAM disk for temporary file storage\n",
255 | "\n",
256 | "On UNIX systems it is possible to mount part of your RAM as a disk. If you have enough RAM, you can use this as a directory for temporary file storage. Since RAM is very fast in terms of read/write operations, processing can be accelerated quite a bit. \n",
257 | "\n",
258 | "**Note** that you need to run these commands on the command line, and you will need administrative or superuser privileges. \n",
259 | "\n",
260 | "Here is an example of a 30 GB RAM disk mounted at /ramdisk:\n",
261 | "```bash \n",
262 | "sudo mkdir /ramdisk\n",
263 | "sudo mount -t tmpfs -o size=30G tmpfs /ramdisk\n",
264 | "```\n",
265 | "After that, you can set your temp_dir to the mount point as in the following example (adjusting the other keywords for the class initialization, of course):"
266 | ]
267 | },
268 | {
269 | "cell_type": "code",
270 | "execution_count": null,
271 | "metadata": {},
272 | "outputs": [],
273 | "source": [
274 | "from ost import Sentinel1Batch\n",
275 | "project = Sentinel1Batch(project_dir='/your/project/dir', aoi='your/aoi', start='2019-01-01', end='2019-12-31')\n",
276 | "project.config_dict['temp_dir'] = '/ramdisk'"
277 | ]
278 | }
279 | ],
280 | "metadata": {
281 | "kernelspec": {
282 | "display_name": "Python 3",
283 | "language": "python",
284 | "name": "python3"
285 | },
286 | "language_info": {
287 | "codemirror_mode": {
288 | "name": "ipython",
289 | "version": 3
290 | },
291 | "file_extension": ".py",
292 | "mimetype": "text/x-python",
293 | "name": "python",
294 | "nbconvert_exporter": "python",
295 | "pygments_lexer": "ipython3",
296 | "version": "3.8.5"
297 | },
298 | "toc": {
299 | "base_numbering": 1,
300 | "nav_menu": {},
301 | "number_sections": false,
302 | "sideBar": true,
303 | "skip_h1_title": true,
304 | "title_cell": "Table of Contents",
305 | "title_sidebar": "Contents",
306 | "toc_cell": false,
307 | "toc_position": {},
308 | "toc_section_display": true,
309 | "toc_window_display": true
310 | }
311 | },
312 | "nbformat": 4,
313 | "nbformat_minor": 4
314 | }
315 |
--------------------------------------------------------------------------------
/auxiliary/header_image.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ESA-PhiLab/OST_Notebooks/ab335a1c207198a17b0984c7cb8cc772df7d6891/auxiliary/header_image.PNG
--------------------------------------------------------------------------------
/install_ost.sh:
--------------------------------------------------------------------------------
1 | #! /bin/bash
2 |
3 | # get ubuntu dependencies
4 | apt-get -y install python3-pip git libgdal-dev python3-gdal libspatialindex-dev wget libgfortran5 imagemagick file
5 |
6 | # set Snap and OTB version and download link
7 | OTB_VERSION="8.1.1" # Update to the current version https://www.orfeo-toolbox.org/packages/
8 | TBX_VERSION="8"
9 | TBX_SUBVERSION="0"
10 | SNAP_URL="http://step.esa.int/downloads/${TBX_VERSION}.${TBX_SUBVERSION}/installers"
11 | TBX="esa-snap_sentinel_unix_${TBX_VERSION}_${TBX_SUBVERSION}.sh"
12 |
13 | # get Snap
14 | wget "${SNAP_URL}/${TBX}"
15 | chmod 755 ${TBX}
16 | wget https://raw.githubusercontent.com/ESA-PhiLab/OpenSarToolkit/master/snap.varfile
17 |
18 | # install snap
19 | ./${TBX} -q -varfile snap.varfile
20 | rm ${TBX}
21 |
22 | # OTB installation
23 | OTB=OTB-${OTB_VERSION}-Linux64.run
24 | wget https://www.orfeo-toolbox.org/packages/archives/OTB/${OTB}
25 | chmod +x $OTB
26 | ./${OTB}
27 | rm -f ${OTB}
28 |
29 | # install ost
30 | pip install git+https://github.com/ESA-PhiLab/OpenSarToolkit.git@main
31 |
32 | # update snap
33 | /home/ost/programs/snap/bin/snap --nosplash --nogui --modules --update-all 2>&1 | while read -r line; do \
34 | echo "$line" && \
35 | [ "$line" = "updates=0" ] && sleep 2 && pkill -TERM -f "snap/jre/bin/java"; \
36 | done; exit 0
37 |
38 |
39 |
--------------------------------------------------------------------------------