├── .gitignore ├── Coverting HDF5 to CSV.ipynb ├── LICENSE ├── README.md ├── data └── gpm_jan_2020.HDF5 └── requirements.txt /.gitignore: -------------------------------------------------------------------------------- 1 | # Byte-compiled / optimized / DLL files 2 | __pycache__/ 3 | *.py[cod] 4 | *$py.class 5 | 6 | # C extensions 7 | *.so 8 | 9 | # Distribution / packaging 10 | .Python 11 | build/ 12 | develop-eggs/ 13 | dist/ 14 | downloads/ 15 | eggs/ 16 | .eggs/ 17 | lib/ 18 | lib64/ 19 | parts/ 20 | sdist/ 21 | var/ 22 | wheels/ 23 | pip-wheel-metadata/ 24 | share/python-wheels/ 25 | *.egg-info/ 26 | .installed.cfg 27 | *.egg 28 | MANIFEST 29 | 30 | # PyInstaller 31 | # Usually these files are written by a python script from a template 32 | # before PyInstaller builds the exe, so as to inject date/other infos into it. 33 | *.manifest 34 | *.spec 35 | 36 | # Installer logs 37 | pip-log.txt 38 | pip-delete-this-directory.txt 39 | 40 | # Unit test / coverage reports 41 | htmlcov/ 42 | .tox/ 43 | .nox/ 44 | .coverage 45 | .coverage.* 46 | .cache 47 | nosetests.xml 48 | coverage.xml 49 | *.cover 50 | *.py,cover 51 | .hypothesis/ 52 | .pytest_cache/ 53 | 54 | # Translations 55 | *.mo 56 | *.pot 57 | 58 | # Django stuff: 59 | *.log 60 | local_settings.py 61 | db.sqlite3 62 | db.sqlite3-journal 63 | 64 | # Flask stuff: 65 | instance/ 66 | .webassets-cache 67 | 68 | # Scrapy stuff: 69 | .scrapy 70 | 71 | # Sphinx documentation 72 | docs/_build/ 73 | 74 | # PyBuilder 75 | target/ 76 | 77 | # Jupyter Notebook 78 | .ipynb_checkpoints 79 | 80 | # IPython 81 | profile_default/ 82 | ipython_config.py 83 | 84 | # pyenv 85 | .python-version 86 | 87 | # pipenv 88 | # According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control. 89 | # However, in case of collaboration, if having platform-specific dependencies or dependencies 90 | # having no cross-platform support, pipenv may install dependencies that don't work, or not 91 | # install all needed dependencies. 92 | #Pipfile.lock 93 | 94 | # PEP 582; used by e.g. github.com/David-OConnor/pyflow 95 | __pypackages__/ 96 | 97 | # Celery stuff 98 | celerybeat-schedule 99 | celerybeat.pid 100 | 101 | # SageMath parsed files 102 | *.sage.py 103 | 104 | # Environments 105 | .env 106 | .venv 107 | env/ 108 | venv/ 109 | ENV/ 110 | env.bak/ 111 | venv.bak/ 112 | 113 | # Spyder project settings 114 | .spyderproject 115 | .spyproject 116 | 117 | # Rope project settings 118 | .ropeproject 119 | 120 | # mkdocs documentation 121 | /site 122 | 123 | # mypy 124 | .mypy_cache/ 125 | .dmypy.json 126 | dmypy.json 127 | 128 | # Pyre type checker 129 | .pyre/ 130 | 131 | # Data store 132 | .DS_Store 133 | -------------------------------------------------------------------------------- /Coverting HDF5 to CSV.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "# Converting HDF5 to CSV\n", 8 | "\n", 9 | "While HDF5 is a format used for storing data values, CSV files are very easy to read and understand. Further, you can directly import them in `pandas` and use them as needed.\n", 10 | "\n", 11 | "In this notebook, we'll explore the **January, 2020 GPM data**, identify the values we want to record and create a CSV file." 12 | ] 13 | }, 14 | { 15 | "cell_type": "markdown", 16 | "metadata": {}, 17 | "source": [ 18 | "## Load libraries\n", 19 | "\n", 20 | "We need the `h5py` package to read the HDF5 file. 
Further, we'll use `numpy` to work with arrays and the `pandas` package to create a final dataset and save it to a CSV file." 21 | ] 22 | }, 23 | { 24 | "cell_type": "code", 25 | "execution_count": 1, 26 | "metadata": {}, 27 | "outputs": [], 28 | "source": [ 29 | "import h5py\n", 30 | "import numpy as np\n", 31 | "import pandas as pd" 32 | ] 33 | }, 34 | { 35 | "cell_type": "markdown", 36 | "metadata": {}, 37 | "source": [ 38 | "## Load dataset\n", 39 | "\n", 40 | "We have one data file inside the **/data** directory. I'll read it using the `h5py` package." 41 | ] 42 | }, 43 | { 44 | "cell_type": "code", 45 | "execution_count": 2, 46 | "metadata": {}, 47 | "outputs": [], 48 | "source": [ 49 | "dataset = h5py.File('data/gpm_jan_2020.HDF5', 'r')" 50 | ] 51 | }, 52 | { 53 | "cell_type": "markdown", 54 | "metadata": {}, 55 | "source": [ 56 | "## Explore dataset\n", 57 | "\n", 58 | "Once the dataset is loaded, it acts like a Python dictionary. So, we'll start by looking at the various key-value pairs and, based on them, identify all the values we want to keep." 59 | ] 60 | }, 61 | { 62 | "cell_type": "code", 63 | "execution_count": 3, 64 | "metadata": {}, 65 | "outputs": [ 66 | { 67 | "data": { 68 | "text/plain": [ 69 | "" 70 | ] 71 | }, 72 | "execution_count": 3, 73 | "metadata": {}, 74 | "output_type": "execute_result" 75 | } 76 | ], 77 | "source": [ 78 | "dataset.keys()" 79 | ] 80 | }, 81 | { 82 | "cell_type": "markdown", 83 | "metadata": {}, 84 | "source": [ 85 | "It appears the HDF5 file has a **Grid** inside it. So, let's see the key-value pairs inside it." 86 | ] 87 | }, 88 | { 89 | "cell_type": "code", 90 | "execution_count": 4, 91 | "metadata": {}, 92 | "outputs": [ 93 | { 94 | "data": { 95 | "text/plain": [ 96 | "" 97 | ] 98 | }, 99 | "execution_count": 4, 100 | "metadata": {}, 101 | "output_type": "execute_result" 102 | } 103 | ], 104 | "source": [ 105 | "grid = dataset['Grid']\n", 106 | "grid.keys()" 107 | ] 108 | }, 109 | { 110 | "cell_type": "markdown", 111 | "metadata": {}, 112 | "source": [ 113 | "We observe that there are a lot of values in this data file. Here, I'm most interested in the `lon`, `lat` and `precipitation` values. Let's take a brief look at them." 114 | ] 115 | }, 116 | { 117 | "cell_type": "markdown", 118 | "metadata": {}, 119 | "source": [ 120 | "### Longitude" 121 | ] 122 | }, 123 | { 124 | "cell_type": "code", 125 | "execution_count": 5, 126 | "metadata": {}, 127 | "outputs": [ 128 | { 129 | "name": "stdout", 130 | "output_type": "stream", 131 | "text": [ 132 | "Longitude data: \n", 133 | "Longitude data attributes: ['DimensionNames', 'Units', 'units', 'standard_name', 'LongName', 'bounds', 'axis', 'CLASS', 'REFERENCE_LIST']\n" 134 | ] 135 | } 136 | ], 137 | "source": [ 138 | "print(\"Longitude data: {}\".format(grid['lon']))\n", 139 | "print(\"Longitude data attributes: {}\".format(list(grid['lon'].attrs)))" 140 | ] 141 | }, 142 | { 143 | "cell_type": "markdown", 144 | "metadata": {}, 145 | "source": [ 146 | "The shape indicates that `longitude` has 3600 values. From the attributes, I'll use the `standard_name` and the `units`."
147 | ] 148 | }, 149 | { 150 | "cell_type": "code", 151 | "execution_count": 6, 152 | "metadata": {}, 153 | "outputs": [ 154 | { 155 | "name": "stdout", 156 | "output_type": "stream", 157 | "text": [ 158 | "Name: longitude\n", 159 | "Unit: degrees_east\n" 160 | ] 161 | } 162 | ], 163 | "source": [ 164 | "print(\"Name: {}\".format(grid['lon'].attrs['standard_name'].decode()))\n", 165 | "print(\"Unit: {}\".format(grid['lon'].attrs['units'].decode()))" 166 | ] 167 | }, 168 | { 169 | "cell_type": "markdown", 170 | "metadata": {}, 171 | "source": [ 172 | "### Latitude" 173 | ] 174 | }, 175 | { 176 | "cell_type": "code", 177 | "execution_count": 7, 178 | "metadata": {}, 179 | "outputs": [ 180 | { 181 | "name": "stdout", 182 | "output_type": "stream", 183 | "text": [ 184 | "Latitude data: \n", 185 | "Latitude data attributes: ['DimensionNames', 'Units', 'units', 'standard_name', 'LongName', 'bounds', 'axis', 'CLASS', 'REFERENCE_LIST']\n" 186 | ] 187 | } 188 | ], 189 | "source": [ 190 | "print(\"Latitude data: {}\".format(grid['lat']))\n", 191 | "print(\"Latitude data attributes: {}\".format(list(grid['lat'].attrs)))" 192 | ] 193 | }, 194 | { 195 | "cell_type": "markdown", 196 | "metadata": {}, 197 | "source": [ 198 | "The shape indicates that `latitude` has 1800 values. From the attributes, I'll use the `standard_name` and the `units`." 199 | ] 200 | }, 201 | { 202 | "cell_type": "code", 203 | "execution_count": 8, 204 | "metadata": {}, 205 | "outputs": [ 206 | { 207 | "name": "stdout", 208 | "output_type": "stream", 209 | "text": [ 210 | "Name: latitude\n", 211 | "Unit: degrees_north\n" 212 | ] 213 | } 214 | ], 215 | "source": [ 216 | "print(\"Name: {}\".format(grid['lat'].attrs['standard_name'].decode()))\n", 217 | "print(\"Unit: {}\".format(grid['lat'].attrs['units'].decode()))" 218 | ] 219 | }, 220 | { 221 | "cell_type": "markdown", 222 | "metadata": {}, 223 | "source": [ 224 | "### Precipitation" 225 | ] 226 | }, 227 | { 228 | "cell_type": "code", 229 | "execution_count": 9, 230 | "metadata": {}, 231 | "outputs": [ 232 | { 233 | "name": "stdout", 234 | "output_type": "stream", 235 | "text": [ 236 | "Precipitation data: \n", 237 | "Precipitation data attributes: ['DimensionNames', 'Units', 'units', 'coordinates', '_FillValue', 'CodeMissingValue', 'DIMENSION_LIST']\n" 238 | ] 239 | } 240 | ], 241 | "source": [ 242 | "print(\"Precipitation data: {}\".format(grid['precipitation']))\n", 243 | "print(\"Precipitation data attributes: {}\".format(list(grid['precipitation'].attrs)))" 244 | ] 245 | }, 246 | { 247 | "cell_type": "markdown", 248 | "metadata": {}, 249 | "source": [ 250 | "The shape shows that it is a 3-dimensional array. The 3600 and 1800 dimensions imply that we have 6,480,000 (3600\*1800) precipitation values, one for each combination of longitude and latitude. I'll also use the `units` attribute here." 251 | ] 252 | }, 253 | { 254 | "cell_type": "code", 255 | "execution_count": 10, 256 | "metadata": {}, 257 | "outputs": [ 258 | { 259 | "name": "stdout", 260 | "output_type": "stream", 261 | "text": [ 262 | "Unit: mm/hr\n" 263 | ] 264 | } 265 | ], 266 | "source": [ 267 | "print(\"Unit: {}\".format(grid['precipitation'].attrs['units'].decode()))" 268 | ] 269 | }, 270 | { 271 | "cell_type": "markdown", 272 | "metadata": {}, 273 | "source": [ 274 | "## Create dataframe\n", 275 | "\n", 276 | "Now that we're sure what values we want, let's start constructing the dataset."
277 | ] 278 | }, 279 | { 280 | "cell_type": "markdown", 281 | "metadata": {}, 282 | "source": [ 283 | "We'll use the `flatten()` function in `numpy` to create the complete list of precipitation values. For each longitude value, we'll have all latitude values. So, I'll create the list of longitude values where each value is repeated 1800 times using `repeat()`. For latitude values, I'll have to repeat the complete original list 3600 times, corresponding to each longitude value." 284 | ] 285 | }, 286 | { 287 | "cell_type": "code", 288 | "execution_count": 11, 289 | "metadata": {}, 290 | "outputs": [ 291 | { 292 | "data": { 293 | "text/html": [ 294 | "
\n", 295 | "\n", 308 | "\n", 309 | " \n", 310 | " \n", 311 | " \n", 312 | " \n", 313 | " \n", 314 | " \n", 315 | " \n", 316 | " \n", 317 | " \n", 318 | " \n", 319 | " \n", 320 | " \n", 321 | " \n", 322 | " \n", 323 | " \n", 324 | " \n", 325 | " \n", 326 | " \n", 327 | " \n", 328 | " \n", 329 | " \n", 330 | " \n", 331 | " \n", 332 | " \n", 333 | " \n", 334 | " \n", 335 | " \n", 336 | " \n", 337 | " \n", 338 | " \n", 339 | " \n", 340 | " \n", 341 | " \n", 342 | " \n", 343 | " \n", 344 | " \n", 345 | " \n", 346 | " \n", 347 | " \n", 348 | " \n", 349 | "
longitude (degrees_east)latitude (degrees_north)Precipitation (mm/hr)
0-179.949997-89.949997-9999.900391
1-179.949997-89.849998-9999.900391
2-179.949997-89.750000-9999.900391
3-179.949997-89.650002-9999.900391
4-179.949997-89.550003-9999.900391
\n", 350 | "
" 351 | ], 352 | "text/plain": [ 353 | " longitude (degrees_east) latitude (degrees_north) Precipitation (mm/hr)\n", 354 | "0 -179.949997 -89.949997 -9999.900391\n", 355 | "1 -179.949997 -89.849998 -9999.900391\n", 356 | "2 -179.949997 -89.750000 -9999.900391\n", 357 | "3 -179.949997 -89.650002 -9999.900391\n", 358 | "4 -179.949997 -89.550003 -9999.900391" 359 | ] 360 | }, 361 | "execution_count": 11, 362 | "metadata": {}, 363 | "output_type": "execute_result" 364 | } 365 | ], 366 | "source": [ 367 | "longitude_values = np.repeat(list(grid['lon']), 1800)\n", 368 | "latitude_values = list(grid['lat'])*3600\n", 369 | "precipitation_values = np.array(list(grid['precipitation'])).flatten()\n", 370 | "\n", 371 | "dataset = pd.DataFrame({\"lon\": longitude_values, \"lat\": latitude_values, \"precipitation\": precipitation_values})\n", 372 | "dataset.columns = [grid['lon'].attrs['standard_name'].decode() + \" (\" + grid['lon'].attrs['units'].decode() + \")\",\n", 373 | " grid['lat'].attrs['standard_name'].decode() + \" (\" + grid['lat'].attrs['units'].decode() + \")\",\n", 374 | " \"Precipitation (\" + grid['precipitation'].attrs['units'].decode() + \")\",]\n", 375 | "dataset.head()" 376 | ] 377 | }, 378 | { 379 | "cell_type": "markdown", 380 | "metadata": {}, 381 | "source": [ 382 | "Note that the value `-9999.900391` in precipitation means that the value is missing, so I will change it to zero." 383 | ] 384 | }, 385 | { 386 | "cell_type": "code", 387 | "execution_count": 12, 388 | "metadata": {}, 389 | "outputs": [], 390 | "source": [ 391 | "dataset['Precipitation (mm/hr)'] = dataset['Precipitation (mm/hr)'].mask(\n", 392 | " dataset['Precipitation (mm/hr)'] == -9999.900391, 0)" 393 | ] 394 | }, 395 | { 396 | "cell_type": "markdown", 397 | "metadata": {}, 398 | "source": [ 399 | "I'll save the resultant dataframe to a CSV file and save it inside the **/data** folder." 400 | ] 401 | }, 402 | { 403 | "cell_type": "code", 404 | "execution_count": 13, 405 | "metadata": {}, 406 | "outputs": [], 407 | "source": [ 408 | "dataset.to_csv(\"data/precipitation_jan_2020.csv\", index = False)" 409 | ] 410 | }, 411 | { 412 | "cell_type": "markdown", 413 | "metadata": {}, 414 | "source": [ 415 | "# Conclusion\n", 416 | "\n", 417 | "In this notebook, we used a dataset downloaded from NASA, understood its various elements and created a CSV file from it." 
418 | ] 419 | } 420 | ], 421 | "metadata": { 422 | "kernelspec": { 423 | "display_name": "Python 3", 424 | "language": "python", 425 | "name": "python3" 426 | }, 427 | "language_info": { 428 | "codemirror_mode": { 429 | "name": "ipython", 430 | "version": 3 431 | }, 432 | "file_extension": ".py", 433 | "mimetype": "text/x-python", 434 | "name": "python", 435 | "nbconvert_exporter": "python", 436 | "pygments_lexer": "ipython3", 437 | "version": "3.8.5" 438 | } 439 | }, 440 | "nbformat": 4, 441 | "nbformat_minor": 2 442 | } 443 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2020 Karan Bhanot 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # NASA-data-exploration 2 | This repository includes detailed steps to get NASA data from GES DISC, convert HDF5 files to CSV, and plot geographic data. In this repository, we extract Global Precipitation Measurement (GPM) data, which measures precipitation across the globe. We'll capture the data for January 2020 ([source](https://disc.gsfc.nasa.gov/datasets/GPM_3IMERGM_06/summary)). While the steps in this repository refer to this specific dataset, the exact same steps apply to any HDF5 data downloaded from GES DISC. 3 | 4 | ## Step 1: Get data 5 | The first step is to get data from the [GES DISC](https://disc.gsfc.nasa.gov/) website. The detailed steps are described in my article [Getting NASA data for your next geo-project](https://towardsdatascience.com/getting-nasa-data-for-your-next-geo-project-9d621243b8f3?source=friends_link&sk=b5b1e2415be5738e578dbf28386e3b9d). The data downloaded is in HDF5 format. 6 | 7 | ## Step 2: Convert HDF5 to CSV 8 | The second step is to take the HDF5 files we downloaded, understand their content, and generate CSV files that can be easily used. We use the `h5py` package to read the HDF5 files and, based on the columns we need, create a CSV file that can be easily distributed or analyzed. A condensed code sketch of this conversion is included at the end of this README. 9 | 10 | ### DATA CREDITS 11 | The data used in this repository is downloaded from the GES DISC website (https://disc.gsfc.nasa.gov/).
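### Minimal conversion sketch
The following is a condensed sketch of what the notebook does, assuming the January 2020 GPM file sits at `data/gpm_jan_2020.HDF5` and that `h5py`, `numpy` and `pandas` from `requirements.txt` are installed; the column labels and output path follow the notebook's choices.

```python
import h5py
import numpy as np
import pandas as pd

# Open the GPM IMERG monthly file and pull the arrays out of the "Grid" group
with h5py.File("data/gpm_jan_2020.HDF5", "r") as f:
    grid = f["Grid"]
    lon = grid["lon"][:]                          # 3600 longitude values
    lat = grid["lat"][:]                          # 1800 latitude values
    precip = grid["precipitation"][:].flatten()   # (1, 3600, 1800) -> 6,480,000 values

# Pair every longitude with every latitude in the same order as the flattened array:
# each longitude is repeated 1800 times, the latitude list is tiled 3600 times.
df = pd.DataFrame({
    "longitude (degrees_east)": np.repeat(lon, lat.size),
    "latitude (degrees_north)": np.tile(lat, lon.size),
    "Precipitation (mm/hr)": precip,
})

# -9999.900391 is the fill value for missing measurements; the notebook replaces it with 0.
# Comparing with < -9999 avoids relying on exact float equality.
df.loc[df["Precipitation (mm/hr)"] < -9999, "Precipitation (mm/hr)"] = 0

df.to_csv("data/precipitation_jan_2020.csv", index=False)
```

The resulting CSV can then be loaded directly with `pd.read_csv("data/precipitation_jan_2020.csv")` for analysis or plotting.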
12 | -------------------------------------------------------------------------------- /data/gpm_jan_2020.HDF5: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kb22/NASA-data-exploration/1ee162133593ea5cffe010ec817f1e25c5fd0c7d/data/gpm_jan_2020.HDF5 -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | appnope==0.1.0 2 | argon2-cffi==20.1.0 3 | attrs==19.3.0 4 | backcall==0.2.0 5 | bleach==3.1.5 6 | certifi==2020.6.20 7 | cffi==1.14.1 8 | click==7.1.2 9 | click-plugins==1.1.1 10 | cligj==0.5.0 11 | cycler==0.10.0 12 | decorator==4.4.2 13 | defusedxml==0.6.0 14 | entrypoints==0.3 15 | Fiona==1.8.13.post1 16 | geopandas==0.8.1 17 | h5py==2.10.0 18 | ipykernel==5.3.4 19 | ipython==7.17.0 20 | ipython-genutils==0.2.0 21 | ipywidgets==7.5.1 22 | jedi==0.17.2 23 | Jinja2==2.11.2 24 | jsonschema==3.2.0 25 | jupyter==1.0.0 26 | jupyter-client==6.1.6 27 | jupyter-console==6.1.0 28 | jupyter-core==4.6.3 29 | kiwisolver==1.2.0 30 | MarkupSafe==1.1.1 31 | matplotlib==3.3.1 32 | mistune==0.8.4 33 | munch==2.5.0 34 | nbconvert==5.6.1 35 | nbformat==5.0.7 36 | notebook==6.1.3 37 | numpy==1.19.1 38 | packaging==20.4 39 | pandas==1.1.0 40 | pandocfilters==1.4.2 41 | parso==0.7.1 42 | pexpect==4.8.0 43 | pickleshare==0.7.5 44 | Pillow==7.2.0 45 | prometheus-client==0.8.0 46 | prompt-toolkit==3.0.6 47 | ptyprocess==0.6.0 48 | pycparser==2.20 49 | Pygments==2.6.1 50 | pyparsing==2.4.7 51 | pyproj==2.6.1.post1 52 | pyrsistent==0.16.0 53 | python-dateutil==2.8.1 54 | pytz==2020.1 55 | pyzmq==19.0.2 56 | qtconsole==4.7.5 57 | QtPy==1.9.0 58 | Send2Trash==1.5.0 59 | Shapely==1.7.0 60 | six==1.15.0 61 | terminado==0.8.3 62 | testpath==0.4.4 63 | tornado==6.0.4 64 | traitlets==4.3.3 65 | wcwidth==0.2.5 66 | webencodings==0.5.1 67 | widgetsnbextension==3.5.1 68 | --------------------------------------------------------------------------------