├── .gitignore ├── LICENSE ├── README.md ├── docs ├── .ipynb_checkpoints │ └── Lesson-GIS-pre-processing-checkpoint.ipynb ├── Lesson-GIS-pre-processing.html └── Lesson-GIS-pre-processing.ipynb ├── setup.py └── wrfhydro_gis ├── Build_GeoTiff_From_Geogrid_File.py ├── Build_Groundwater_Inputs.py ├── Build_PRJ_From_Geogrid_File.py ├── Build_Routing_Stack.py ├── Build_Spatial_Metadata_File.py ├── Create_Domain_Boundary_Shapefile.py ├── Create_SoilProperties_and_Hydro2D.py ├── Create_latitude_longitude_rasters.py ├── Create_wrfinput_from_Geogrid.py ├── Examine_Outputs_of_GIS_Preprocessor.py ├── Forecast_Point_Tools.py ├── Harmonize_Soils_to_LANDMASK.py ├── Testing_DEM_interpolation.py ├── Unused_Code.py ├── __init__.py └── wrfhydro_functions.py /.gitignore: -------------------------------------------------------------------------------- 1 | # Compiled source # 2 | ################### 3 | *.com 4 | *.class 5 | *.dll 6 | *.exe 7 | *.o 8 | *.so 9 | *.pyc 10 | __pycache__ 11 | 12 | # Packages # 13 | ############ 14 | # it's better to unpack these files and commit the raw source 15 | # git has its own built in compression methods 16 | *.7z 17 | *.dmg 18 | *.gz 19 | *.iso 20 | *.jar 21 | *.rar 22 | *.tar 23 | *.zip 24 | 25 | # Logs and databases # 26 | ###################### 27 | *.log 28 | *.sql 29 | *.sqlite 30 | *.txt 31 | 32 | # OS generated files # 33 | ###################### 34 | .DS_Store 35 | .DS_Store? 36 | ._* 37 | .Spotlight-V100 38 | .Trashes 39 | ehthumbs.db 40 | Thumbs.db 41 | rom-history 42 | 43 | # Other # 44 | ###################### 45 | **/.idea/* 46 | /wrfhydro_gis/Unused_Code.py 47 | /wrfhydro_gis/Testing_DEM_interpolation.py -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2020 University Corporation for Atmospheric Research 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # WRF-Hydro GIS Preprocessor 2 | 3 | ## Description: 4 | The WRF-Hydro GIS Pre-processor provides various scripts and tools for building the geospatial input files for running a WRF-Hydro simulation. 
5 | 6 | ## Necessary Python Packages and Installation Tips 7 | 8 | The scripts for the WRF-Hydro GIS Pre-processor rely on several Python modules that a user will need to install, such as numpy, gdal, and Whitebox-Tools. It is highly recommended to use a Python distribution such as Miniconda (https://docs.conda.io/en/latest/miniconda.html). An example environment is given below, using the conda package manager to install the necessary Python modules. The essential packages and versions used in development of this repository are listed below (Windows 64-bit and Python 3.10.9): 9 | 10 | | Package | Version | 11 | | ------------- |--------------:| 12 | | gdal | 3.6.3 | 13 | | netcdf4 | 1.6.3 | 14 | | numpy | 1.24.2 | 15 | | packaging | 23.0 | 16 | | pyproj | 3.4.1 | 17 | | python | 3.10.9 | 18 | | whitebox | 2.3.5 | 19 | | shapely | 2.0.1 | 20 | 21 | If you are using Anaconda or Miniconda, a new, clean 'wrfh_gis_env' environment with these packages can be created in one of several ways: 22 | 23 | * In your conda shell, add the one necessary channel (conda-forge) and then download the component libraries from the Anaconda cloud: 24 | + `conda config --add channels conda-forge` 25 | + `conda create -n wrfh_gis_env -c conda-forge python=3.10 gdal netCDF4 numpy pyproj whitebox=2.3.5 packaging shapely` 26 | 27 | * To activate this new environment, type the following at the conda prompt: 28 | + `activate wrfh_gis_env` 29 | 30 | ## How to Run Scripts 31 | 32 | ### The scripts make use of a function script '\wrfhydro_gis\wrfhydro_functions.py' to pass all functions and selected global parameters to the primary script: 33 | 34 | + [Build_Routing_Stack.py](https://github.com/NCAR/wrf_hydro_gis_preprocessor/blob/master/wrfhydro_gis/Build_Routing_Stack.py). 35 | 36 | In turn, this script relies on a set of functions in [wrfhydro_functions.py](https://github.com/NCAR/wrf_hydro_gis_preprocessor/blob/master/wrfhydro_gis/wrfhydro_functions.py). 37 | 38 | ### Running Build_Routing_Stack.py to generate the routing grids for a new WRF-Hydro simulation domain 39 | 40 | Use `-h` when calling any of the scripts on the command line for help information. Provide the required and any optional parameters as arguments. The following command will execute a process to generate a minimal set of routing grids for the desired domain (a fuller workflow is sketched at the end of this README). This example assumes use of a Bash shell. 41 | 42 | `python Build_Routing_Stack.py -i geo_em.d01.nc -d NED_30m_Croton.tif -R 4 -t 20 -o croton_test.zip` 43 | 44 | ## NCAR Disclaimer 45 | The National Center for Atmospheric Research (NCAR) GitHub project code is provided on an "as is" basis and the user assumes responsibility for its use. NCAR has relinquished control of the information and no longer has responsibility to protect the integrity, confidentiality, or availability of the information. Any reference to specific commercial products, processes, or services by service mark, trademark, manufacturer, or otherwise, does not constitute or imply their endorsement, recommendation or favoring by NCAR. The NCAR seal and logo shall not be used in any manner to imply endorsement of any commercial product or activity by NCAR or the National Science Foundation (NSF).
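## Example Workflow

The steps below are a minimal sketch of a complete pre-processing run, assuming the `wrfh_gis_env` environment created above and the example inputs named in the usage section (`geo_em.d01.nc` and `NED_30m_Croton.tif`); the file names are illustrative only.

* Activate the environment and build the routing stack:
  + `activate wrfh_gis_env`
  + `python Build_Routing_Stack.py -i geo_em.d01.nc -d NED_30m_Croton.tif -R 4 -t 20 -o croton_test.zip`
* Unpack the resulting routing stack with any unzip utility to inspect the WRF-Hydro domain and parameter files it contains (for example, the `Fulldom_hires.nc` routing grid):
  + `unzip croton_test.zip -d croton_test`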
-------------------------------------------------------------------------------- /setup.py: -------------------------------------------------------------------------------- 1 | from setuptools import find_packages, setup 2 | 3 | with open("README.md", "r") as fh: 4 | long_description = fh.read() 5 | 6 | setup( 7 | name='wrfhydro_gis', 8 | version='0.1.0', 9 | packages=find_packages(include=['wrfhydro_gis']), 10 | url='https://github.com/NCAR/wrf_hydro_gis_preprocessor', 11 | license='MIT', 12 | install_requires=[ 13 | 'gdal', 14 | 'netcdf4==1.5.3', 15 | 'numpy==1.22.0', 16 | 'pyproj==2.6.0', 17 | 'whitebox==1.2.0' 18 | ], 19 | author='Kevin Sampson & Matt Casali', 20 | author_email='ksampson@ucar.edu', 21 | description='Geospatial Pre-processing functions for the WRF-Hydro model', 22 | python_requires='>=3.6.10', 23 | ) 24 | 25 | 26 | 27 | 28 | -------------------------------------------------------------------------------- /wrfhydro_gis/Build_GeoTiff_From_Geogrid_File.py: -------------------------------------------------------------------------------- 1 | # *=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*= 2 | # Copyright UCAR (c) 2019 3 | # University Corporation for Atmospheric Research(UCAR) 4 | # National Center for Atmospheric Research(NCAR) 5 | # Research Applications Laboratory(RAL) 6 | # P.O.Box 3000, Boulder, Colorado, 80307-3000, USA 7 | # 24/09/2019 8 | # 9 | # Name: Build_GeoTiff_From_Geogrid_File.py 10 | # Purpose: 11 | # Author: Kevin Sampson, NCAR 12 | # Created: 24/09/2019 13 | # Licence: Reserved 14 | # *=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*= 15 | 16 | descText = 'This is a program to export >=2D variables from a WRF-Hydro input file ' + \ 17 | '(geogrid or Fulldom_hires) file to an output raster format, with all ' + \ 18 | 'spatial and coordinate system metadata. If a 3-dimensional variable is ' + \ 19 | 'selected, individual raster bands will be created in the output raster ' + \ 20 | 'for each index in the 3rd dimension. If a 4-dimensional variable ' + \ 21 | 'is selected, the first index in the 4th dimension will be selected ' + \ 22 | 'and the variable will be treated as a 3-dimensional variable described above.' 23 | 24 | ''' 25 | NOTES: 26 | Check for the position of certain dimensions? 27 | ''' 28 | 29 | # Import Python Core Modules 30 | import sys 31 | import os 32 | import time 33 | 34 | # Import Additional Modules 35 | import numpy 36 | import netCDF4 37 | from packaging.version import parse as LooseVersion # To avoid deprecation warnings 38 | from argparse import ArgumentParser 39 | from pathlib import Path 40 | import osgeo 41 | 42 | try: 43 | if LooseVersion(osgeo.__version__) > LooseVersion('3.0.0'): 44 | from osgeo import gdal 45 | else: 46 | import gdal 47 | except: 48 | sys.exit('ERROR: cannot find GDAL/OGR modules') 49 | 50 | # Import function library into namespace. Must exist in same directory as this script. 51 | from wrfhydro_functions import (WRF_Hydro_Grid, RasterDriver, subset_ncVar) 52 | 53 | # --- Global Variables --- # 54 | out_fmt = RasterDriver # Could overwrite the output format. 
Default is 'GTiff' 55 | defaltGeogrid = 'geo_em.d01.nc' # Default input geogrid file name if not provided by user 56 | overwrite_output = True # Option to overwrite the output file if it exists already 57 | # --- End Global Variables --- # 58 | 59 | # --- Functions --- # 60 | def build_geogrid_raster(in_nc, Variable, OutGTiff, out_Grid_fmt=out_fmt): 61 | ''' 62 | Function to build a properly georeferenced raster object from a WRF-Hydro 63 | input file and variable name. 64 | ''' 65 | 66 | # Check inputs for validity 67 | if os.path.exists(in_nc): 68 | rootgrp = netCDF4.Dataset(in_nc, 'r') # Establish an object for reading the input NetCDF file 69 | grid_obj = WRF_Hydro_Grid(rootgrp) # Instantiate a grid object 70 | 71 | # Change masked arrays to old default (numpy arrays always returned) 72 | if LooseVersion(netCDF4.__version__) > LooseVersion('1.4.0'): 73 | rootgrp.set_auto_mask(False) 74 | else: 75 | print(' The input netCDF file does not exist: {0}'.format(in_nc)) 76 | sys.exit(1) 77 | 78 | # Convert 2D or 4D input variables to 3D 79 | if Variable not in rootgrp.variables: 80 | print(' Could not find variable {0} in input netCDF file. Exiting...'.format(Variable)) 81 | sys.exit(1) 82 | 83 | if os.path.exists(OutGTiff): 84 | if overwrite_output: 85 | print(' The output file already exists and will be overwritten: {0}'.format(OutGTiff)) 86 | else: 87 | print(' The output file already exists. Exiting...') 88 | sys.exit(1) 89 | 90 | # Enhancement to allow n-dimensional arrays, with max dimension size of 3. 91 | if grid_obj.isGeogrid: 92 | array = subset_ncVar(rootgrp.variables[Variable], DimToFlip='south_north') 93 | else: 94 | array = subset_ncVar(rootgrp.variables[Variable], DimToFlip='') 95 | print(' Size of array being sent to raster: {0}'.format(array.shape)) 96 | 97 | # Export numpy array to raster (up to 3D). 98 | OutRaster = grid_obj.numpy_to_Raster(array) 99 | print(' Bands in output raster: {0}'.format(OutRaster.RasterCount)) 100 | rootgrp.close() 101 | del grid_obj, rootgrp, array 102 | 103 | # Save in-memory raster file to disk 104 | if OutRaster is not None: 105 | target_ds = gdal.GetDriverByName(out_Grid_fmt).CreateCopy(OutGTiff, OutRaster) 106 | print(' Created {0} format raster from {1} variable: {2}'.format(out_Grid_fmt, Variable, OutGTiff)) 107 | target_ds = None 108 | OutRaster = None 109 | del Variable, in_nc, OutRaster, OutGTiff 110 | # --- End Functions --- # 111 | 112 | # --- Main Codeblock --- # 113 | if __name__ == '__main__': 114 | 115 | print('Script initiated at {0}'.format(time.ctime())) 116 | tic = time.time() 117 | 118 | # Setup the input arguments 119 | parser = ArgumentParser(description=descText, add_help=True) 120 | parser.add_argument("-i", 121 | dest="in_nc", 122 | required=True, 123 | help="Path to WPS geogrid (geo_em.d0*.nc) file or WRF-Hydro Fulldom_hires.nc file.") 124 | parser.add_argument("-v", 125 | dest="Variable", 126 | default='HGT_M', 127 | help="Name of the variable in the input netCDF file. 
default=HGT_M") 128 | parser.add_argument("-o", 129 | dest="out_file", 130 | default='./Output_GEOGRID_Raster.tif', 131 | help="Output GeoTiff raster file.") 132 | 133 | # If no arguments are supplied, print help message 134 | if len(sys.argv)==1: 135 | parser.print_help(sys.stderr) 136 | sys.exit(1) 137 | args = parser.parse_args() 138 | all_defaults = {key: parser.get_default(key) for key in vars(args)} 139 | 140 | # Handle path of input 141 | if args.in_nc == all_defaults["in_nc"]: 142 | print('Using default input geogrid location of: {0}'.format(all_defaults["in_nc"])) 143 | 144 | # Handle printing to user the default variable name 145 | if args.Variable == all_defaults["Variable"]: 146 | print('Using default variable name: {0}'.format(all_defaults["Variable"])) 147 | if args.out_file == all_defaults["out_file"]: 148 | print('Using default output location: {0}'.format(all_defaults["out_file"])) 149 | 150 | # Print information to screen 151 | print('Input WPS Geogrid or Fulldom file: {0}'.format(args.in_nc)) 152 | print('Input netCDF variable name: {0}'.format(args.Variable)) 153 | print('Output raster file: {0}'.format(args.out_file)) 154 | 155 | build_geogrid_raster(args.in_nc, args.Variable, args.out_file) 156 | print('Process complted in {0:3.2f} seconds.'.format(time.time()-tic)) 157 | # --- End Main Codeblock --- # -------------------------------------------------------------------------------- /wrfhydro_gis/Build_Groundwater_Inputs.py: -------------------------------------------------------------------------------- 1 | # # *=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*= 2 | # Copyright UCAR (c) 2019 3 | # University Corporation for Atmospheric Research(UCAR) 4 | # National Center for Atmospheric Research(NCAR) 5 | # Research Applications Laboratory(RAL) 6 | # P.O.Box 3000, Boulder, Colorado, 80307-3000, USA 7 | # 24/09/2019 8 | # 9 | # Name: module1 10 | # Purpose: 11 | # Author: $ Kevin Sampson 12 | # Created: 24/09/2019 13 | # Licence: 14 | # *=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*= 15 | 16 | descText = "This script will build WRF-Hydro groundwater basin inputs using an input WPS Geogrid " \ 17 | "file and WRF-Hydro routing grid (Fulldom_hires.nc) file as inputs. Three methods" \ 18 | "are currently available for generating groundwater basins. One method, 'Polygon" \ 19 | "Shapefile or Feature Class' currently requires an input polygon shapefile defining" \ 20 | "the groundwater basins." 21 | 22 | # Import Python Core Modules 23 | import os 24 | import time 25 | import shutil 26 | import sys 27 | from packaging.version import parse as LooseVersion # To avoid deprecation warnings 28 | 29 | # Import additional modules 30 | import netCDF4 31 | from argparse import ArgumentParser 32 | from pathlib import Path 33 | import osgeo 34 | 35 | try: 36 | if LooseVersion(osgeo.__version__) > LooseVersion('3.0.0'): 37 | from osgeo import gdal 38 | else: 39 | import gdal 40 | except: 41 | sys.exit('ERROR: cannot find GDAL/OGR modules') 42 | 43 | # Import function library into namespace. Must exist in same directory as this script. 
44 | #import wrfhydro_functions as wrfh # Function script packaged with this toolbox 45 | from wrfhydro_functions import (WRF_Hydro_Grid, GW_nc, GWGRID_nc, dir_d8, streams, 46 | basinRaster, RasterDriver, build_GW_Basin_Raster, build_GW_buckets, remove_file, 47 | zipUpFolder) 48 | #import Examine_Outputs_of_GIS_Preprocessor as EO 49 | 50 | # --- Global Variables --- # 51 | 52 | # --- EDIT BELOW THIS LINE --- # 53 | 54 | # Provide the default groundwater basin generation method. 55 | defaultGWmethod = 'FullDom basn_msk variable' 56 | defaultFulldom = 'Fulldom_hires.nc' 57 | defaultGeogrid = 'geo_em.d01.nc' 58 | 59 | # --- EDIT ABOVE THIS LINE --- # 60 | 61 | # --- DO NOT EDIT BELOW THIS LINE --- # 62 | 63 | # List of routing-stack files to send to output .zip files 64 | nclist = [GW_nc, GWGRID_nc] 65 | 66 | # Save a GeoTiff of the location of the derived basins on the fine and coarse grids 67 | saveBasins_Fine = False 68 | saveBasins_Coarse = False 69 | 70 | # --- End Global Variables --- # 71 | 72 | # Main Codeblock 73 | if __name__ == '__main__': 74 | tic = time.time() 75 | print('Script initiated at {0}'.format(time.ctime())) 76 | 77 | # Setup the input arguments 78 | parser = ArgumentParser(description=descText, add_help=True) 79 | parser.add_argument("-i", 80 | dest="in_nc", 81 | help="Path to WPS geogrid (geo_em.d0*.nc) file or WRF-Hydro Fulldom_hires.nc file.") 82 | parser.add_argument("-f", 83 | dest="in_fulldom", 84 | default='./{0}'.format(defaultFulldom), 85 | help="Path to WRF-Hydro Fulldom_hires.nc file.") 86 | parser.add_argument("-m", 87 | dest="GWmethod", 88 | default='FullDom LINKID local basins', 89 | help="Method to create groundwater basins. Choose from 'FullDom basn_msk variable', " 90 | "'FullDom LINKID local basins', 'Polygon Shapefile or Feature Class'" 91 | " default='FullDom basn_msk variable'") 92 | parser.add_argument("-g", 93 | dest="in_GWPolys", 94 | default='', 95 | help="Path to groundwater basin polygon or feature class file.") 96 | parser.add_argument("-o", 97 | dest="out_dir", 98 | default='', 99 | required=True, 100 | help="Output directory.") 101 | 102 | # If no arguments are supplied, print help message 103 | if len(sys.argv) == 1: 104 | parser.print_help(sys.stderr) 105 | sys.exit(1) 106 | args = parser.parse_args() 107 | all_defaults = {key: parser.get_default(key) for key in vars(args)} 108 | 109 | # Handle path of input 110 | if args.in_nc == all_defaults["in_nc"]: 111 | print('Using default input geogrid location of: {0}'.format(all_defaults["in_nc"])) 112 | 113 | if args.in_fulldom == all_defaults["in_fulldom"]: 114 | print('Using default input fulldom location of: {0}'.format(all_defaults["in_fulldom"])) 115 | 116 | if args.GWmethod == all_defaults["GWmethod"]: 117 | print('Using groundwater method of: {0}'.format(all_defaults["GWmethod"])) 118 | 119 | if args.out_dir == all_defaults["out_dir"]: 120 | print('Using default output location: {0}'.format(all_defaults["out_file"])) 121 | 122 | # Outputs - permanent 123 | out_zip = os.path.join(args.out_dir, 'GroundwaterBasins_local.zip') 124 | 125 | # Create scratch directory for temporary outputs 126 | projdir = os.path.join(args.out_dir, 'gw_scratchdir') 127 | projdir = os.path.abspath(projdir) 128 | if os.path.exists(projdir): 129 | shutil.rmtree(projdir) 130 | os.makedirs(projdir) 131 | 132 | # Setup temporary output files 133 | fdir = os.path.join(projdir, dir_d8) 134 | channelgrid = os.path.join(projdir, streams) 135 | if saveBasins_Fine: 136 | basinRaster_File = os.path.join(projdir, 
'GWBasins_fine.tif') 137 | nclist.append('GWBasins_fine.tif') 138 | if saveBasins_Coarse: 139 | nclist.append(basinRaster) 140 | 141 | rootgrp1 = netCDF4.Dataset(args.in_nc, 'r') 142 | rootgrp2 = netCDF4.Dataset(args.in_fulldom, 'r') 143 | if LooseVersion(netCDF4.__version__) > LooseVersion('1.4.0'): 144 | rootgrp2.set_auto_mask(False) # Change masked arrays to old default (numpy arrays always returned) 145 | coarse_grid = WRF_Hydro_Grid(rootgrp1) # Instantiate a grid object for the coarse grid 146 | fine_grid = WRF_Hydro_Grid(rootgrp2) # Instantiate a grid object for the fine grid 147 | 148 | # Build inputs required for creating groundwater buckets 149 | flowdir = fine_grid.numpy_to_Raster(rootgrp2.variables['FLOWDIRECTION'][:]) 150 | strm_arr = rootgrp2.variables['CHANNELGRID'][:] 151 | strm_arr[strm_arr==0] = 1 # Set active channels to 1 152 | strm = fine_grid.numpy_to_Raster(strm_arr) 153 | del strm_arr 154 | 155 | # Save to disk for the Groundwater tools to use 156 | out_ds1 = gdal.GetDriverByName(RasterDriver).CreateCopy(fdir, flowdir) 157 | out_ds2 = gdal.GetDriverByName(RasterDriver).CreateCopy(channelgrid, strm) 158 | out_ds1 = out_ds2 = flowdir = strm = None 159 | 160 | # Build groundwater files 161 | print(' Building Groundwater Basin inputs.') 162 | GWBasns = build_GW_Basin_Raster(args.in_fulldom, projdir, args.GWmethod, channelgrid, fdir, fine_grid, in_Polys=args.in_GWPolys) 163 | build_GW_buckets(projdir, GWBasns, coarse_grid, Grid=True, saveRaster=saveBasins_Coarse) 164 | if saveBasins_Fine: 165 | out_ds3 = gdal.GetDriverByName(RasterDriver).CreateCopy(basinRaster_File, GWBasns) 166 | out_ds3 = None 167 | GWBasns = None 168 | remove_file(fdir) 169 | remove_file(channelgrid) 170 | del GWBasns, coarse_grid, fine_grid 171 | 172 | # zip the folder 173 | tic1 = time.time() 174 | zipper = zipUpFolder(projdir, out_zip, nclist) 175 | print('Built output .zip file: {0}'.format(out_zip)) 176 | 177 | # Delete all temporary files 178 | shutil.rmtree(projdir) 179 | rootgrp1.close() 180 | rootgrp2.close() 181 | del rootgrp1, rootgrp2 182 | print('Process completed in {0:3.2f} seconds.'.format(time.time()-tic)) -------------------------------------------------------------------------------- /wrfhydro_gis/Build_PRJ_From_Geogrid_File.py: -------------------------------------------------------------------------------- 1 | # *=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*= 2 | # Copyright UCAR (c) 2019 3 | # University Corporation for Atmospheric Research(UCAR) 4 | # National Center for Atmospheric Research(NCAR) 5 | # Research Applications Laboratory(RAL) 6 | # P.O.Box 3000, Boulder, Colorado, 80307-3000, USA 7 | # 24/09/2019 8 | # 9 | # Name: module1 10 | # Purpose: 11 | # Author: $ Kevin Sampson 12 | # Created: 24/09/2019 13 | # Licence: 14 | # *=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*= 15 | 16 | descText = "This tool takes an input WRF Geogrid file in NetCDF format and uses the" \ 17 | " specified variable's projection parameters to produce a projection file." 18 | 19 | # Import Modules 20 | 21 | # Import Python Core Modules 22 | import os 23 | import time 24 | import sys 25 | 26 | # Import additional modules 27 | import netCDF4 28 | from argparse import ArgumentParser 29 | from pathlib import Path 30 | 31 | # Import function library into namespace. Must exist in same directory as this script. 
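# Note: WRF_Hydro_Grid (imported below) reads the map-projection attributes from the input GEOGRID/Fulldom
# file and exposes them through its .proj attribute (an OSR SpatialReference), which this script uses
# to write out the Esri projection (.prj) file.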
32 | from wrfhydro_functions import WRF_Hydro_Grid 33 | 34 | # Global Variables 35 | defaultGeogrid = 'geo_em.d01.nc' 36 | 37 | # Script options 38 | buildPRJ = True # Switch for building output Esri Projection File 39 | 40 | # Main Codeblock 41 | if __name__ == '__main__': 42 | print('Script initiated at {0}'.format(time.ctime())) 43 | tic = time.time() 44 | 45 | # Setup the input arguments 46 | parser = ArgumentParser(description=descText, add_help=True) 47 | parser.add_argument("-i", 48 | dest="in_nc", 49 | required=True, 50 | help="Path to WPS geogrid (geo_em.d0*.nc) file or WRF-Hydro Fulldom_hires.nc file.") 51 | parser.add_argument("-o", 52 | dest="out_dir", 53 | default='', 54 | required=True, 55 | help="Output directory.") 56 | 57 | # If no arguments are supplied, print help message 58 | if len(sys.argv) == 1: 59 | parser.print_help(sys.stderr) 60 | sys.exit(1) 61 | args = parser.parse_args() 62 | all_defaults = {key: parser.get_default(key) for key in vars(args)} 63 | 64 | # Handle path of input 65 | if args.in_nc == all_defaults["in_nc"]: 66 | print('Using default input geogrid location of: {0}'.format(all_defaults["in_nc"])) 67 | 68 | if args.out_dir == all_defaults["out_dir"]: 69 | print('Using default output location: {0}'.format(all_defaults["out_file"])) 70 | 71 | # Input and output files and directories 72 | outPRJ = os.path.join(args.out_dir, os.path.basename(args.in_nc).replace('.nc', '.prj')) 73 | 74 | print('Input WPS Geogrid or Fulldom file: {0}'.format(args.in_nc)) 75 | print('Output prj file: {0}'.format(outPRJ)) 76 | 77 | rootgrp = netCDF4.Dataset(args.in_nc, 'r') # Establish an object for reading the input NetCDF file 78 | coarse_grid = WRF_Hydro_Grid(rootgrp) # Instantiate a grid object 79 | rootgrp.close() 80 | del rootgrp 81 | print(' Created projection definition from input NetCDF GEOGRID file.') 82 | 83 | # Build an ESRI style projection file 84 | if buildPRJ: 85 | projEsri = coarse_grid.proj.Clone() # Copy the SRS 86 | projEsri.MorphToESRI() # Alter the projection to Esri's representation of a coordinate system 87 | file = open(outPRJ, 'w') 88 | file.write(projEsri.ExportToWkt()) 89 | file.close() 90 | print(' Created ESRI Projection file: {0}'.format(outPRJ)) 91 | del coarse_grid 92 | print('Process complted in {0:3.2f} seconds.'.format(time.time()-tic)) 93 | -------------------------------------------------------------------------------- /wrfhydro_gis/Build_Routing_Stack.py: -------------------------------------------------------------------------------- 1 | # *=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*= 2 | # Copyright UCAR (c) 2019 3 | # University Corporation for Atmospheric Research(UCAR) 4 | # National Center for Atmospheric Research(NCAR) 5 | # Research Applications Laboratory(RAL) 6 | # P.O.Box 3000, Boulder, Colorado, 80307-3000, USA 7 | # 24/09/2019 8 | # 9 | # Name: module1 10 | # Purpose: 11 | # Author: $ Kevin Sampson 12 | # Created: 24/09/2019 13 | # Licence: 14 | # *=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*= 15 | 16 | descText = 'This is a program to perform the full routing-stack GIS pre-processing' + \ 17 | 'for WRF-Hydro. The inputs will be related to the domain, the desired ' + \ 18 | 'routing nest factor, and other options and parameter values. The output ' + \ 19 | 'will be a routing stack zip file with WRF-Hydro domain and parameter files. 
' 20 | 21 | # --- Import Modules --- # 22 | 23 | # Import Python Core Modules 24 | import os 25 | import sys 26 | import time 27 | import shutil 28 | import copy as cpy 29 | from packaging.version import parse as LooseVersion # To avoid deprecation warnings 30 | import argparse 31 | from argparse import ArgumentParser 32 | import platform # Added 8/20/2020 to detect OS 33 | 34 | # Import Additional Modules 35 | import netCDF4 36 | import osgeo 37 | 38 | try: 39 | if LooseVersion(osgeo.__version__) > LooseVersion('3.0.0'): 40 | from osgeo import osr 41 | from osgeo import ogr 42 | from osgeo import gdal 43 | from osgeo.gdal_array import * 44 | else: 45 | import osr 46 | import ogr 47 | import gdal 48 | from gdal_array import * 49 | except: 50 | sys.exit('ERROR: cannot find GDAL/OGR modules') 51 | 52 | # Import function library into namespace. Must exist in same directory as this script. 53 | import wrfhydro_functions as wrfh # Function script packaged with this toolbox 54 | 55 | # --- End Import Modules --- # 56 | 57 | # Add Proj directory to path 58 | conda_env_path = os.path.join(os.path.dirname(sys.executable)) 59 | if platform.system() == 'Windows': 60 | internal_datadir = os.path.join(conda_env_path, "Library", "share", "proj") 61 | elif platform.system() in ['Linux', 'Darwin']: 62 | internal_datadir = os.path.join(os.path.dirname(conda_env_path), "share", "proj") 63 | os.environ["PROJ_LIB"] = internal_datadir 64 | 65 | # --- Global Variables --- # 66 | 67 | # Provide the default groundwater basin generation method. 68 | # Options: ['FullDom basn_msk variable', 'FullDom LINKID local basins', 'Polygon Shapefile or Feature Class'] 69 | defaultGWmethod = 'FullDom LINKID local basins' 70 | #defaultGWmethod = 'FullDom basn_msk variable' 71 | GW_with_Stack = True # Switch for building default groundwater inputs with any routing stack 72 | 73 | # Processing Notes to insert into output netCDF global attributes. Provide any documentation here. 74 | processing_notes_SM = '''Created: {0}'''.format(time.ctime()) # Processing notes for Spatial Metdata files 75 | processing_notesFD = '''Created: {0}'''.format(time.ctime()) # Processing notes for the FULLDOM (Routing Grid) file 76 | 77 | # --- DO NOT EDIT BELOW THIS LINE --- # 78 | 79 | # Parameter default values 80 | default_regridFactor = 10 # Regridding factor 81 | default_ovroughrtfac_val = 1.0 82 | default_retdeprtfac_val = 1.0 83 | default_threshold = 200 84 | default_lksatfac_val = wrfh.lksatfac_val 85 | 86 | # Script options 87 | runGEOGRID_STANDALONE = True # Switch for testing the GEOGRID STANDALONE Pre-processing workflow 88 | cleanUp = True # Switch to keep all temporary files (for troubleshooting) 89 | 90 | # Methods test switches 91 | coordMethod1 = True # Interpolate GEOGRID latitude and longitude coordinate arrays 92 | coordMethod2 = False # Transform coordinate pairs at each grid cell from projected to geocentric 93 | 94 | # Variables derived from function script 95 | out_Grid_fmt = wrfh.RasterDriver 96 | 97 | #outNCType = 'NETCDF3_64BIT' # Set output netCDF format for spatial metdata files. 
This was the default before 7/31/2018 98 | outNCType = 'NETCDF4_CLASSIC' # Define the output netCDF version for RouteLink.nc and LAKEPARM.nc 99 | 100 | # List of all possible routing-stack files to keep between the working directory and output .zip files 101 | nclist = [wrfh.LDASFile, 102 | wrfh.FullDom, 103 | 'gw_basns.nc', 104 | wrfh.GW_ASCII, 105 | 'gw_basns_geogrid.prj', 106 | wrfh.RT_nc, 107 | 'Route_Link.csv', 108 | wrfh.LK_nc, 109 | 'streams.shp', 'streams.shx', 'streams.shp.xml', 'streams.sbx', 'streams.sbn', 'streams.prj', 'streams.dbf', 110 | 'lakes.shp', 'lakes.shx', 'lakes.shp.xml', 'lakes.sbx', 'lakes.sbn', 'lakes.prj', 'lakes.dbf', 111 | wrfh.GW_nc, 112 | wrfh.GWGRID_nc, 113 | wrfh.minDepthCSV, 114 | 'Lake_Problems.csv', 115 | 'Old_New_LakeComIDs.csv', 116 | 'Lake_Link_Types.csv', 117 | 'Tossed_Lake_Link_Types.csv', 118 | 'Lake_Preprocssing_Info.txt', 119 | 'Lakes_with_minimum_depth.csv'] 120 | 121 | '''Pre-defining the variables and populating variable attributes is 122 | a much faster strategry than creating and populating each variable 123 | sequentially, especially for netCDF3 versions. Also, unsigned integer 124 | types are only allowed in NETCDF4.''' 125 | # List of variables to create [, , ] 126 | varList2D = [['CHANNELGRID', 'i4', ''], 127 | ['FLOWDIRECTION', 'i2', ''], 128 | ['FLOWACC', 'i4', ''], 129 | ['TOPOGRAPHY', 'f4', ''], 130 | ['RETDEPRTFAC', 'f4', ''], 131 | ['OVROUGHRTFAC', 'f4', ''], 132 | ['STREAMORDER', 'i1', ''], 133 | ['frxst_pts', 'i4', ''], 134 | ['basn_msk', 'i4', ''], 135 | ['LAKEGRID', 'i4', ''], 136 | ['landuse', 'f4', ''], 137 | ['LKSATFAC', 'f4', '']] 138 | 139 | # Default temporary output file names 140 | mosprj_name = 'mosaicprj.tif' # Default regridded input DEM if saved to disk 141 | 142 | # Default name for the output routing stack zip file 143 | outZipDefault = 'WRF_Hydro_routing_grids.zip' # Default output routing stack zip file name if not provided by user 144 | defaltGeogrid = 'geo_em.d01.nc' # Default input geogrid file name if not provided by user 145 | 146 | # Check the resulting Fulldom_hires.nc file for NLINKS errors (a WRF-Hydro channel connectivity issue) 147 | check_nlinks = True 148 | 149 | # --- End Global Variables --- # 150 | 151 | # --- Functions --- # 152 | def is_valid_file(parser, arg): 153 | # https://stackoverflow.com/questions/11540854/file-as-command-line-argument-for-argparse-error-message-if-argument-is-not-va 154 | if not os.path.exists(arg): 155 | parser.error("The file %s does not exist!" % arg) 156 | else: 157 | return str(arg) 158 | 159 | def GEOGRID_STANDALONE(inGeogrid, 160 | regridFactor, 161 | inDEM, 162 | projdir, 163 | threshold, 164 | out_zip, 165 | in_csv = '', 166 | basin_mask = False, 167 | routing = False, 168 | varList2D = [], 169 | in_lakes = '', 170 | GW_with_Stack = True, 171 | in_GWPolys = None, 172 | ovroughrtfac_val = 1.0, 173 | retdeprtfac_val = 1.0, 174 | lksatfac_val = 1000.0, 175 | startPts = None, 176 | channel_mask = None): 177 | ''' 178 | This function will validate input parameters and attempt to run the full routing- 179 | stack GIS pre-processing for WRF-Hydro. The inputs will be related to the domain, 180 | the desired routing nest factor, and other options and parameter values. The 181 | output will be a routing stack zip file with WRF-Hydro domain and parameter 182 | files. 
183 | ''' 184 | 185 | global defaultGWmethod 186 | tic1 = time.time() 187 | 188 | # Print information provided to this function 189 | for key, value in locals().items(): 190 | if callable(value) and value.__module__ == __name__: 191 | print(' {0}: {1}'.format(key, value)) 192 | 193 | # Set some switches 194 | if os.path.exists(in_csv): 195 | AddGages = True 196 | print(' Forecast points provided.') 197 | else: 198 | AddGages = False 199 | 200 | if routing: 201 | print(' Reach-based routing files will be created.') 202 | varList2D.append(['LINKID', 'i4', '']) 203 | else: 204 | print(' Reach-based routing files will not be created.') 205 | 206 | # Step 1 - Georeference geogrid file 207 | rootgrp = netCDF4.Dataset(inGeogrid, 'r') # Establish an object for reading the input NetCDF files 208 | globalAtts = rootgrp.__dict__ # Read all global attributes into a dictionary 209 | if LooseVersion(netCDF4.__version__) > LooseVersion('1.4.0'): 210 | rootgrp.set_auto_mask(False) # Change masked arrays to old default (numpy arrays always returned) 211 | coarse_grid = wrfh.WRF_Hydro_Grid(rootgrp) # Instantiate a grid object 212 | fine_grid = cpy.copy(coarse_grid) # Copy the grid object for modification 213 | fine_grid.regrid(regridFactor) # Regrid to the coarse grid 214 | print(' Created projection definition from input NetCDF GEOGRID file.') 215 | print(' Proj4: {0}'.format(coarse_grid.proj4)) # Print Proj.4 string to screen 216 | print(' Coarse grid GeoTransform: {0}'.format(coarse_grid.GeoTransformStr())) # Print affine transformation to screen. 217 | print(' Coarse grid extent [Xmin, Ymin, Xmax, Ymax]: {0}'.format(coarse_grid.grid_extent())) # Print extent to screen. 218 | print(' Fine grid extent [Xmin, Ymin, Xmax, Ymax]: {0}'.format(fine_grid.grid_extent())) # Print extent to screen. 219 | 220 | # Build output raster from numpy array of the GEOGRID variable requested. This will be used as a template later on 221 | LU_INDEX = coarse_grid.numpy_to_Raster(wrfh.flip_grid(rootgrp.variables['LU_INDEX'][:])) 222 | 223 | # Create spatial metadata file for GEOGRID/LDASOUT grids 224 | out_nc1 = os.path.join(projdir, wrfh.LDASFile) 225 | rootgrp1 = netCDF4.Dataset(out_nc1, 'w', format=outNCType) # wrf_hydro_functions.outNCType) 226 | rootgrp1, grid_mapping = wrfh.create_CF_NetCDF(coarse_grid, rootgrp1, projdir, 227 | notes=processing_notes_SM) # addLatLon=True, latArr=latArr, lonArr=lonArr) 228 | for item in wrfh.Geogrid_MapVars + ['DX', 'DY']: 229 | if item in globalAtts: 230 | rootgrp1.setncattr(item, globalAtts[item]) 231 | rootgrp1.close() 232 | del rootgrp1 233 | 234 | # Step 3 - Create high resolution topography layers 235 | in_DEM = gdal.Open(inDEM, 0) # Open with read-only mode 236 | outDEM = os.path.join(projdir, mosprj_name) 237 | mosprj = fine_grid.project_to_model_grid(in_DEM, saveRaster=True, OutGTiff=outDEM, resampling=gdal.GRA_Bilinear) 238 | in_DEM = mosprj = None 239 | 240 | # Build latitude and longitude arrays for Fulldom_hires netCDF file 241 | if coordMethod1: 242 | print(' Deriving geocentric coordinates on routing grid from bilinear interpolation of geogrid coordinates.') 243 | # Build latitude and longitude arrays for GEOGRID_LDASOUT spatial metadata file 244 | latArr = wrfh.flip_grid(rootgrp.variables['XLAT_M'][:]) # Extract array of GEOGRID latitude values 245 | lonArr = wrfh.flip_grid(rootgrp.variables['XLONG_M'][:]) # Extract array of GEOGRID longitude values 246 | 247 | # Resolve any remaining issues with masked arrays. Happens in the ArcGIS pre-processing tools for python 2.7. 
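# numpy.ma.isMA() returns True for masked arrays; taking the .data attribute drops the mask so that
# plain numpy arrays are passed to numpy_to_Raster() and the regridding steps below.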
248 | if numpy.ma.isMA(lonArr): 249 | lonArr = lonArr.data 250 | if numpy.ma.isMA(latArr): 251 | latArr = latArr.data 252 | 253 | # Method 1: Use GEOGRID latitude and longitude fields and resample to routing grid 254 | latRaster1 = coarse_grid.numpy_to_Raster(latArr) # Build raster out of GEOGRID latitude array 255 | lonRaster1 = coarse_grid.numpy_to_Raster(lonArr) # Build raster out of GEOGRID longitude array 256 | 257 | latRaster2 = fine_grid.project_to_model_grid(latRaster1) # Regrid from GEOGRID resolution to routing grid resolution 258 | lonRaster2 = fine_grid.project_to_model_grid(lonRaster1) # Regrid from GEOGRID resolution to routing grid resolution 259 | latRaster1 = lonRaster1 = None # Destroy rater objects 260 | latArr2 = BandReadAsArray(latRaster2.GetRasterBand(1)) # Read into numpy array 261 | lonArr2 = BandReadAsArray(lonRaster2.GetRasterBand(1)) # Read into numpy array 262 | latRaster2 = lonRaster2 = None # Destroy raster objects 263 | del latArr, lonArr, latRaster1, lonRaster1, latRaster2, lonRaster2 264 | 265 | elif coordMethod2: 266 | print(' Deriving geocentric coordinates on routing grid from direct transformation geogrid coordinates.') 267 | # Method 2: Transform each point from projected coordinates to geocentric coordinates 268 | wgs84_proj = osr.SpatialReference() # Build empty spatial reference object 269 | wgs84_proj.ImportFromProj4(wrfh.wgs84_proj4) # Imprort from proj4 to avoid EPSG errors (4326) 270 | xmap, ymap = fine_grid.getxy() # Get x and y coordinates as numpy array 271 | latArr2, lonArr2 = wrfh.ReprojectCoords(xmap, ymap, coarse_grid.proj, wgs84_proj) # Transform coordinate arrays 272 | del xmap, ymap, wgs84_proj 273 | 274 | # Create FULLDOM file 275 | out_nc2 = os.path.join(projdir, wrfh.FullDom) 276 | rootgrp2 = netCDF4.Dataset(out_nc2, 'w', format=outNCType) # wrf_hydro_functions.outNCType) 277 | rootgrp2, grid_mapping = wrfh.create_CF_NetCDF(fine_grid, rootgrp2, projdir, 278 | notes=processing_notesFD, addVars=varList2D, addLatLon=True, 279 | latArr=latArr2, lonArr=lonArr2) 280 | del latArr2, lonArr2 281 | 282 | # Add some global attribute metadata to the Fulldom file, including relevant WPS attributes for defining the model coordinate system 283 | rootgrp2.geogrid_used = inGeogrid # Paste path of geogrid file to the Fulldom global attributes 284 | rootgrp2.DX = fine_grid.DX # Add X resolution as a global attribute 285 | rootgrp2.DY = -fine_grid.DY # Add Y resolution as a global attribute 286 | for item in wrfh.Geogrid_MapVars: 287 | if item in globalAtts: 288 | rootgrp2.setncattr(item, globalAtts[item]) 289 | rootgrp.close() # Close input GEOGRID file 290 | del item, globalAtts, rootgrp 291 | 292 | # Process: Resample LU_INDEX grid to a higher resolution 293 | LU_INDEX2 = fine_grid.project_to_model_grid(LU_INDEX, fine_grid.DX, fine_grid.DY, resampling=gdal.GRA_NearestNeighbour) 294 | rootgrp2.variables['landuse'][:] = BandReadAsArray(LU_INDEX2.GetRasterBand(1)) # Read into numpy array 295 | LU_INDEX = None # Destroy raster object 296 | print(' Process: landuse written to output netCDF.') 297 | del LU_INDEX, LU_INDEX2 298 | 299 | ## # Step X(a) - Test to match LANDMASK - Only used for areas surrounded by water (LANDMASK=0) 300 | ## mosprj2, loglines = wrfh.adjust_to_landmask(mosprj, LANDMASK, coarse_grid.proj, projdir, 'm') 301 | ## outtable.writelines("\n".join(loglines) + "\n") 302 | ## del LANDMASK 303 | 304 | # Step 4 - Hyrdo processing functions -- Whitebox 305 | rootgrp2, fdir, fac, channelgrid, fill, order = wrfh.WB_functions(rootgrp2, outDEM, 
306 | projdir, threshold, ovroughrtfac_val, retdeprtfac_val, lksatfac_val, startPts=startPts, chmask=channel_mask) 307 | if cleanUp: 308 | wrfh.remove_file(outDEM) # Delete output DEM from disk 309 | 310 | # If the user provides forecast points as a CSV file, alter outputs accordingly 311 | if AddGages: 312 | if os.path.exists(in_csv): 313 | rootgrp2 = wrfh.forecast_points(in_csv, rootgrp2, basin_mask, projdir, 314 | fine_grid.DX, fine_grid.WKT, fdir, fac, channelgrid) # Forecast point processing 315 | 316 | # Allow masking routing files (LINKID, Route_Link, etc.) to forecast points if requested 317 | if routing: 318 | rootgrp2 = wrfh.Routing_Table(projdir, rootgrp2, fine_grid, fdir, channelgrid, fill, order, gages=AddGages) 319 | if cleanUp: 320 | wrfh.remove_file(fill) # Delete fill from disk 321 | wrfh.remove_file(order) # Delete order from disk 322 | 323 | gridded = not routing # Flag for gridded routing 324 | if os.path.exists(in_lakes): 325 | # Alter Channelgrid for reservoirs and build reservoir inputs 326 | print(' Reservoir polygons provided. Lake routing will be activated.') 327 | rootgrp2, lake_ID_field = wrfh.add_reservoirs(rootgrp2, 328 | projdir, 329 | fac, 330 | in_lakes, 331 | fine_grid, 332 | Gridded=gridded, 333 | lakeIDfield=None) 334 | 335 | # Attempt to add reservoirs onto reach-based routing configuation after lakes and reaches have been processed 336 | # (added by KMS 3/28/2023) 337 | if routing: 338 | print(' Attempting to resolve reservoirs on reach-based routing network.') 339 | in_RL = os.path.join(projdir, wrfh.RT_nc) 340 | LakeNC = os.path.join(projdir, wrfh.LK_nc) 341 | out_lakes = os.path.join(projdir, wrfh.LakesSHP) 342 | if os.path.exists(in_RL) and os.path.exists(out_lakes): 343 | # Run lake pre-processor 344 | WaterbodyDict, Lake_Link_Type_arr, Old_New_LakeComID = wrfh.LK_main(projdir, 345 | in_RL, 346 | out_lakes, 347 | 'link', 348 | lake_ID_field, 349 | Subset_arr=None, 350 | datestr=wrfh.datestr, 351 | LakeAssociation=wrfh.LakeAssoc,) 352 | del WaterbodyDict, Lake_Link_Type_arr, Old_New_LakeComID 353 | 354 | # Check for NLINKS channel connectivity errors (added by KMS 3/27/2023) 355 | if check_nlinks: 356 | print(' Checking CHANNELGRID layer for NLINKS errors.') 357 | rootgrp2 = wrfh.nlinks_checker(rootgrp2, silent=True) 358 | rootgrp2.close() # Close Fulldom_hires.nc file 359 | del rootgrp2 360 | 361 | # Build groundwater files 362 | if GW_with_Stack: 363 | if in_GWPolys is not None: 364 | if os.path.exists(in_GWPolys): 365 | print(' Groundwater basin boundary polygons provided. 
Delineating groundwater basins from these polygons.') 366 | defaultGWmethod = 'Polygon Shapefile or Feature Class' 367 | GWBasns = wrfh.build_GW_Basin_Raster(out_nc2, projdir, defaultGWmethod, channelgrid, fdir, fine_grid, in_Polys=in_GWPolys) 368 | wrfh.build_GW_buckets(projdir, GWBasns, coarse_grid, Grid=True) 369 | GWBasns = None 370 | 371 | if cleanUp: 372 | wrfh.remove_file(fdir) # Delete fdir from disk 373 | wrfh.remove_file(fac) # Delete fac from disk 374 | wrfh.remove_file(channelgrid) # Delete channelgrid from disk 375 | if routing: 376 | wrfh.remove_file(os.path.join(projdir, wrfh.stream_id)) 377 | 378 | # Copmress (zip) the output directory 379 | zipper = wrfh.zipUpFolder(projdir, out_zip, nclist) 380 | print('Built output .zip file in {0: 3.2f} seconds.'.format(time.time()-tic1)) # Diagnotsitc print statement 381 | 382 | # Delete all temporary files 383 | if cleanUp: 384 | shutil.rmtree(projdir) 385 | 386 | # --- End Functions --- # 387 | 388 | # --- Main Codeblock --- # 389 | if __name__ == '__main__': 390 | print('Script initiated at {0}'.format(time.ctime())) 391 | tic = time.time() 392 | 393 | # Setup the input arguments 394 | parser = ArgumentParser(description=descText, add_help=True) 395 | parser.add_argument("-i", 396 | dest="in_Geogrid", 397 | type=lambda x: is_valid_file(parser, x), 398 | required=True, 399 | help="Path to WPS geogrid (geo_em.d0*.nc) file [REQUIRED]") 400 | parser.add_argument("--CSV", 401 | dest="in_CSV", 402 | type=lambda x: is_valid_file(parser, x), 403 | default=None, 404 | help="Path to input forecast point CSV file [OPTIONAL]") 405 | parser.add_argument("-b", 406 | dest="basin_mask", 407 | type=bool, 408 | default=False, 409 | help="Mask CHANNELGRID variable to forecast basins? [True/False]. default=False") 410 | parser.add_argument("-r", 411 | dest="RB_routing", 412 | type=bool, 413 | default=False, 414 | help="Create reach-based routing (RouteLink) files? [True/False]. default=False") 415 | parser.add_argument("-l", 416 | dest="in_reservoirs", 417 | type=lambda x: is_valid_file(parser, x), 418 | default=None, 419 | help="Path to reservoirs shapefile or feature class [OPTIONAL]. If -l is TRUE, this is required.") 420 | parser.add_argument("-d", 421 | dest="inDEM", 422 | type=lambda x: is_valid_file(parser, x), 423 | default='', 424 | required=True, 425 | help="Path to input high-resolution elevation raster [REQUIRED]") 426 | parser.add_argument("-R", 427 | dest="cellsize", 428 | type=int, 429 | default=default_regridFactor, 430 | help="Regridding (nest) Factor. default=10") 431 | parser.add_argument("-t", 432 | dest="threshold", 433 | type=int, 434 | default=default_threshold, 435 | help="Number of routing grid cells to define stream. default=200") 436 | parser.add_argument("-o", 437 | dest="out_zip_file", 438 | default='./{0}'.format(outZipDefault), 439 | help="Output routing stack ZIP file") 440 | parser.add_argument("-O", 441 | dest="ovroughrtfac_val", 442 | type=float, 443 | default=default_ovroughrtfac_val, 444 | help="OVROUGHRTFAC value. default=1.0") 445 | parser.add_argument("-T", 446 | dest="retdeprtfac_val", 447 | type=float, 448 | default=default_retdeprtfac_val, 449 | help="RETDEPRTFAC value. default=1.0") 450 | parser.add_argument("--starts", 451 | dest="channel_starts", 452 | type=lambda x: is_valid_file(parser, x), 453 | default=None, 454 | help="Path to channel initiation points feature class. Must be 2D point type. 
[OPTIONAL]") 455 | parser.add_argument("--gw", 456 | dest="gw_polys", 457 | type=lambda x: is_valid_file(parser, x), 458 | default=None, 459 | help="Path to groundwater polygons feature class [OPTIONAL]") 460 | parser.add_argument("--mask", 461 | dest="ch_mask", 462 | type=lambda x: is_valid_file(parser, x), 463 | default=None, 464 | help="Path to a routing grid raster with which to mask channels and channel-derived grids [OPTIONAL]") 465 | 466 | # If no arguments are supplied, print help message 467 | if len(sys.argv)==1: 468 | parser.print_help(sys.stderr) 469 | sys.exit(1) 470 | args = parser.parse_args() 471 | all_defaults = {key: parser.get_default(key) for key in vars(args)} 472 | 473 | # Handle printing to user the default variable name 474 | print(' Parameter values that have not been altered from script default values:') 475 | if args.basin_mask == all_defaults["basin_mask"]: 476 | print(' Using default basin mask setting: {0}'.format(all_defaults["basin_mask"])) 477 | if args.RB_routing == all_defaults["RB_routing"]: 478 | print(' Using default reach-based routing setting: {0}'.format(all_defaults["RB_routing"])) 479 | if args.cellsize == all_defaults["cellsize"]: 480 | print(' Using default regridding factor: {0}'.format(all_defaults["cellsize"])) 481 | if args.threshold == all_defaults["threshold"]: 482 | if args.channel_starts == None: 483 | print(' Using default stream initiation threshold: {0}'.format(all_defaults["threshold"])) 484 | if args.out_zip_file == all_defaults["out_zip_file"]: 485 | print(' Using default output location: {0}'.format(all_defaults["out_zip_file"])) 486 | if args.ovroughrtfac_val == all_defaults["ovroughrtfac_val"]: 487 | print(' Using default OVROUGHRTFAC parameter value: {0}'.format(all_defaults["ovroughrtfac_val"])) 488 | if args.retdeprtfac_val == all_defaults["retdeprtfac_val"]: 489 | print(' Using default RETDEPRTFAC parameter value: {0}'.format(all_defaults["retdeprtfac_val"])) 490 | 491 | # Handle unsupported configurations - Currently none 492 | 493 | # This block allows us to continue to check for a valid file path while allowing the script later to avoid a NoneType error. 494 | args.in_Geogrid = os.path.abspath(args.in_Geogrid) # Obtain absolute path for required input file. 495 | args.inDEM = os.path.abspath(args.inDEM) # Obtain absolute path for required input file. 496 | args.out_zip_file = os.path.abspath(args.out_zip_file) # Obtain absolute path for required output file. 497 | if not args.in_reservoirs: 498 | args.in_reservoirs = '' 499 | else: 500 | args.in_reservoirs = os.path.abspath(args.in_reservoirs) # Obtain absolute path for optional input file. 501 | if args.in_CSV == None: 502 | args.in_CSV = '' 503 | else: 504 | args.in_CSV = os.path.abspath(args.in_CSV) # Obtain absolute path for optional input file. 505 | if args.channel_starts != None: 506 | args.channel_starts = os.path.abspath(args.channel_starts) # Obtain absolute path for optional input file. 507 | args.threshold = None 508 | if args.gw_polys is not None: 509 | args.gw_polys = os.path.abspath(args.gw_polys) # Obtain absolute path for optional input file. 510 | if args.ch_mask is not None: 511 | args.ch_mask = os.path.abspath(args.ch_mask) # Obtain absolute path for optional input file. 
512 | if runGEOGRID_STANDALONE: 513 | 514 | # Configure logging 515 | logfile = args.out_zip_file.replace('.zip', '.log') 516 | tee = wrfh.TeeNoFile(logfile, 'w') 517 | 518 | # Print information to screen 519 | print(' Values that will be used in building this routing stack:') 520 | print(' Input WPS Geogrid file: {0}'.format(args.in_Geogrid)) 521 | print(' Forecast Point CSV file: {0}'.format(args.in_CSV)) 522 | print(' Mask CHANNELGRID variable to forecast basins?: {0}'.format(args.basin_mask)) 523 | print(' Create reach-based routing (RouteLink) files?: {0}'.format(args.RB_routing)) 524 | print(' Lake polygon feature class: {0}'.format(args.in_reservoirs)) 525 | print(' Input high-resolution DEM: {0}'.format(args.inDEM)) 526 | print(' Regridding factor: {0}'.format(args.cellsize)) 527 | print(' Stream initiation threshold: {0}'.format(args.threshold)) 528 | print(' OVROUGHRTFAC parameter value: {0}'.format(args.ovroughrtfac_val)) 529 | print(' RETDEPRTFAC parameter value: {0}'.format(args.retdeprtfac_val)) 530 | print(' Input channel initiation start point feature class: {0}'.format(args.channel_starts)) 531 | print(' Input groundwater basin polygons: {0}'.format(args.gw_polys)) 532 | print(' Input channelgrid mask raster: {0}'.format(args.ch_mask)) 533 | print(' Output ZIP file: {0}'.format(args.out_zip_file)) 534 | 535 | # Create scratch directory for temporary outputs 536 | projdir = os.path.join(os.path.dirname(args.out_zip_file), 'scratchdir') 537 | projdir = os.path.abspath(projdir) 538 | if os.path.exists(projdir): 539 | shutil.rmtree(projdir) 540 | os.makedirs(projdir) 541 | 542 | # Run pre-process 543 | print(' Running Process GEOGRID function') 544 | GEOGRID_STANDALONE(args.in_Geogrid, 545 | args.cellsize, 546 | args.inDEM, 547 | projdir, 548 | args.threshold, 549 | args.out_zip_file, 550 | in_csv = args.in_CSV, 551 | basin_mask = args.basin_mask, 552 | routing = args.RB_routing, 553 | varList2D = varList2D, 554 | in_lakes = args.in_reservoirs, 555 | GW_with_Stack = GW_with_Stack, 556 | in_GWPolys = args.gw_polys, 557 | ovroughrtfac_val = args.ovroughrtfac_val, 558 | retdeprtfac_val = args.retdeprtfac_val, 559 | lksatfac_val = default_lksatfac_val, 560 | startPts = args.channel_starts, 561 | channel_mask = args.ch_mask) 562 | tee.close() 563 | del tee 564 | else: 565 | print(' Will not run Process GEOGRID function. Set global "runGEOGRID_STANDALONE" to True to run.') # Should do nothing 566 | print('Process completed in {0:3.2f} seconds.'.format(time.time()-tic)) 567 | # --- End Main Codeblock --- # -------------------------------------------------------------------------------- /wrfhydro_gis/Build_Spatial_Metadata_File.py: -------------------------------------------------------------------------------- 1 | # *=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*= 2 | # Copyright UCAR (c) 2019 3 | # University Corporation for Atmospheric Research(UCAR) 4 | # National Center for Atmospheric Research(NCAR) 5 | # Research Applications Laboratory(RAL) 6 | # P.O.Box 3000, Boulder, Colorado, 80307-3000, USA 7 | # 24/09/2019 8 | # 9 | # Name: module1 10 | # Purpose: 11 | # Author: $ Kevin Sampson 12 | # Created: 24/09/2019 13 | # Licence: 14 | # *=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*= 15 | 16 | descText = "This tool takes an input GEOGRID and uses that grid information to produce spatial" \ 17 | " metadata files against the multiple resolutions of WRF Hydro output files." 
18 | 19 | # Import Modules 20 | 21 | # Import Python Core Modules 22 | import os 23 | import sys 24 | import time 25 | import copy as cpy 26 | from distutils.version import LooseVersion 27 | 28 | # Import additional modules 29 | import netCDF4 30 | from argparse import ArgumentParser 31 | from pathlib import Path 32 | from packaging.version import parse as LooseVersion # To avoid deprecation warnings 33 | import osgeo 34 | 35 | try: 36 | if LooseVersion(osgeo.__version__) > LooseVersion('3.0.0'): 37 | from osgeo import gdal 38 | from osgeo import osr 39 | from osgeo.gdal_array import * 40 | else: 41 | import gdal 42 | import osr 43 | from gdal_array import * 44 | except: 45 | sys.exit('ERROR: cannot find GDAL/OGR modules') 46 | #from gdalnumeric import * 47 | 48 | # Import function library into namespace. Must exist in same directory as this script. 49 | from wrfhydro_functions import (WRF_Hydro_Grid, projdict, flip_grid, 50 | numpy_to_Raster, wgs84_proj4, ReprojectCoords, outNCType, create_CF_NetCDF, 51 | Geogrid_MapVars) 52 | 53 | # Globals 54 | latlon_vars = True # Include LATITUDE and LONGITUDE 2D variables? 55 | defaultGeogrid = 'geo_em.d01.nc' 56 | 57 | # Processing Notes to insert into netCDF global attributes 58 | processing_notes_SM = '''Created: %s''' %time.ctime() # Processing notes for Spatial Metdata files 59 | 60 | # Script options 61 | 62 | # Methods test switches 63 | coordMethod1 = True # Interpolate GEOGRID latitude and longitude coordinate arrays 64 | coordMethod2 = False # Transform coordinate pairs at each grid cell from projected to geocentric 65 | 66 | # Main Codeblock 67 | if __name__ == '__main__': 68 | tic = time.time() 69 | print('Script initiated at {0}'.format(time.ctime())) 70 | 71 | parser = ArgumentParser(description=descText, add_help=True) 72 | parser.add_argument("-i", 73 | dest="in_nc", 74 | required=True, 75 | help="Path to WPS geogrid (geo_em.d0*.nc) file or WRF-Hydro Fulldom_hires.nc file.") 76 | parser.add_argument("-o", 77 | dest="out_nc", 78 | default='', 79 | required=True, 80 | help="Output netCDF file.") 81 | parser.add_argument("-f", 82 | dest="output_format", 83 | default='RTOUT', 84 | help="Output format. 
Options: LDASOUT or RTOUT") 85 | parser.add_argument("-r", 86 | dest="regrid_factor", 87 | type=int, 88 | default=4, 89 | help="Regridding factor of data.") 90 | 91 | # If no arguments are supplied, print help message 92 | if len(sys.argv) == 1: 93 | parser.print_help(sys.stderr) 94 | sys.exit(1) 95 | args = parser.parse_args() 96 | all_defaults = {key: parser.get_default(key) for key in vars(args)} 97 | 98 | # Handle path of input 99 | if args.in_nc == all_defaults["in_nc"]: 100 | print('Using default input geogrid location of: {0}'.format(all_defaults["in_nc"])) 101 | 102 | if args.out_nc == all_defaults["out_nc"]: 103 | print('Using output location of: {0}'.format(all_defaults["out_nc"])) 104 | 105 | if args.output_format == all_defaults["output_format"]: 106 | print('Using output format of: {0}'.format(all_defaults["output_format"])) 107 | 108 | if args.regrid_factor == all_defaults["regrid_factor"]: 109 | print('Using regrid factor of: {0}'.format(all_defaults["regrid_factor"])) 110 | 111 | projdir = os.path.dirname(args.out_nc) 112 | 113 | if args.output_format == "LDASOUT": 114 | regridFactor = 1.0 115 | elif args.output_format == "RTOUT": 116 | regridFactor = int(args.regrid_factor) 117 | 118 | # Print informational messages 119 | # print('Input Dataset: {0}'.format(inGeogrid)) 120 | # print('Output Grid Resolution: {0}'.format(format_out)) 121 | # print('Output Regridding Factor: {0}'.format(regridFactor)) 122 | print('Directory to be used for outputs: {0}'.format(projdir)) 123 | # print('Output netCDF File: {0}'.format(out_nc)) 124 | 125 | # Georeference geogrid file 126 | rootgrp = netCDF4.Dataset(args.in_nc, 'r') # Establish an object for reading the input NetCDF file 127 | if LooseVersion(netCDF4.__version__) > LooseVersion('1.4.0'): 128 | rootgrp.set_auto_mask(False) # Change masked arrays to old default (numpy arrays always returned) 129 | coarse_grid = WRF_Hydro_Grid(rootgrp) # Instantiate a grid object 130 | print(' Map Projection of GEOGRID: {0}'.format(projdict[coarse_grid.map_pro])) 131 | print(' PROJ4: {0}'.format(coarse_grid.proj4)) 132 | print(' Input GeoTransform: {0}'.format(coarse_grid.GeoTransform())) # Print affine transformation to screen. 133 | 134 | # Build the regridded domain 135 | fine_grid = cpy.copy(coarse_grid) # Copy the grid object for modification 136 | fine_grid.regrid(regridFactor) # Regrid to the coarse grid 137 | print(' Output GeoTransform: {0}'.format(fine_grid.GeoTransform())) # Print affine transformation to screen. 
138 | print(' New Resolution: {0} {1}'.format(fine_grid.DX, -fine_grid.DY)) 139 | 140 | # Build latitude and longitude arrays for GEOGRID_LDASOUT spatial metadata file 141 | if 'XLAT_M' in rootgrp.variables and 'XLONG_M' in rootgrp.variables: 142 | lat_var = 'XLAT_M' 143 | lon_var = 'XLONG_M' 144 | elif 'XLAT' in rootgrp.variables and 'XLONG' in rootgrp.variables: 145 | lat_var = 'XLAT' 146 | lon_var = 'XLONG' 147 | 148 | if 'time' in rootgrp.dimensions or 'Time' in rootgrp.dimensions: 149 | latArr = flip_grid(rootgrp.variables[lat_var][0]) # Extract array of GEOGRID latitude values 150 | lonArr = flip_grid(rootgrp.variables[lon_var][0]) # Extract array of GEOGRID longitude values 151 | else: 152 | latArr = flip_grid(rootgrp.variables[lat_var]) # Extract array of GEOGRID latitude values 153 | lonArr = flip_grid(rootgrp.variables[lon_var]) # Extract array of GEOGRID longitude values 154 | rootgrp.close() 155 | del rootgrp 156 | 157 | if latlon_vars: 158 | # Build latitude and longitude arrays for Fulldom_hires netCDF file 159 | 160 | if coordMethod1: 161 | print(' Deriving geocentric coordinates on routing grid from bilinear interpolation of geogrid coordinates.') 162 | 163 | # Method 1: Use GEOGRID latitude and longitude fields and resample to routing grid 164 | latRaster = numpy_to_Raster(latArr, coarse_grid.proj, coarse_grid.DX, coarse_grid.DY, coarse_grid.x00, coarse_grid.y00) # Build raster out of GEOGRID latitude array 165 | lonRaster = numpy_to_Raster(lonArr, coarse_grid.proj, coarse_grid.DX, coarse_grid.DY, coarse_grid.x00, coarse_grid.y00) # Build raster out of GEOGRID latitude array 166 | 167 | if args.output_format == "RTOUT": 168 | latRaster = fine_grid.project_to_model_grid(latRaster) # Regrid from GEOGRID resolution to routing grid resolution 169 | lonRaster = fine_grid.project_to_model_grid(lonRaster) # Regrid from GEOGRID resolution to routing grid resolution 170 | 171 | latArr = BandReadAsArray(latRaster.GetRasterBand(1)) # Read into numpy array 172 | lonArr = BandReadAsArray(lonRaster.GetRasterBand(1)) # Read into numpy array 173 | latRaster = lonRaster = None # Destroy raster objects 174 | del latRaster, lonRaster 175 | 176 | elif coordMethod2: 177 | print(' Deriving geocentric coordinates on routing grid from direct transformation geogrid coordinates.') 178 | 179 | # Method 2: Transform each point from projected coordinates to geocentric coordinates 180 | wgs84_proj = osr.SpatialReference() # Build empty spatial reference object 181 | wgs84_proj.ImportFromProj4(wgs84_proj4) # Imprort from proj4 to avoid EPSG errors (4326) 182 | 183 | if args.output_format == "RTOUT": 184 | pass 185 | 186 | xmap, ymap = coarse_grid.getxy() # Get x and y coordinates as numpy array 187 | latArr, lonArr = ReprojectCoords(xmap, ymap, fine_grid.proj, wgs84_proj) # Transform coordinate arrays 188 | del xmap, ymap, wgs84_proj 189 | else: 190 | latArr = lonArr = None 191 | 192 | # Create the netCDF file with spatial metadata 193 | rootgrp2 = netCDF4.Dataset(args.out_nc, 'w', format=outNCType) 194 | rootgrp2, grid_mapping = create_CF_NetCDF(fine_grid, rootgrp2, projdir, 195 | notes=processing_notes_SM, addLatLon=latlon_vars, latArr=latArr, lonArr=lonArr) 196 | globalAtts = rootgrp2.__dict__ # Read all global attributes into a dictionary 197 | for item in Geogrid_MapVars + ['DX', 'DY']: 198 | if item in globalAtts: 199 | rootgrp2.setncattr(item, globalAtts[item]) 200 | rootgrp2.close() 201 | 202 | del rootgrp2, latArr, lonArr 203 | print('Process completed in {0:3.2f} 
seconds.'.format(time.time()-tic)) -------------------------------------------------------------------------------- /wrfhydro_gis/Create_Domain_Boundary_Shapefile.py: -------------------------------------------------------------------------------- 1 | # *=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*= 2 | # Copyright UCAR (c) 2019 3 | # University Corporation for Atmospheric Research(UCAR) 4 | # National Center for Atmospheric Research(NCAR) 5 | # Research Applications Laboratory(RAL) 6 | # P.O.Box 3000, Boulder, Colorado, 80307-3000, USA 7 | # 24/09/2019 8 | # 9 | # Name: module1 10 | # Purpose: 11 | # Author: $ Kevin Sampson 12 | # Created: 24/09/2019 13 | # Licence: 14 | # *=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*= 15 | 16 | descText = "This tool takes an WRF Geogrid file and creates a single polygon shapefile" \ 17 | " that makes up the boundary of the domain of the M-grid (HGT_M, for example)." 18 | 19 | # Import Modules 20 | 21 | # Import Python Core Modules 22 | import os 23 | import sys 24 | import time 25 | 26 | # Import additional modules 27 | import netCDF4 28 | from argparse import ArgumentParser 29 | from pathlib import Path 30 | from packaging.version import parse as LooseVersion # To avoid deprecation warnings 31 | import osgeo 32 | 33 | try: 34 | if LooseVersion(osgeo.__version__) > LooseVersion('3.0.0'): 35 | from osgeo import ogr 36 | else: 37 | import ogr 38 | except: 39 | sys.exit('ERROR: cannot find GDAL/OGR modules') 40 | 41 | # Import function library into namespace. Must exist in same directory as this script. 42 | from wrfhydro_functions import WRF_Hydro_Grid # Function script packaged with this toolbox 43 | 44 | # --- Global Variables --- # 45 | outDriverName = 'ESRI Shapefile' # Output vector file format (OGR driver name) 46 | outSHPDefault = 'domain_boundary.shp' # Default output filename if none is provided 47 | 48 | # Main Codeblock 49 | if __name__ == '__main__': 50 | tic = time.time() 51 | print('Script initiated at {0}'.format(time.ctime())) 52 | 53 | # Setup the input arguments 54 | parser = ArgumentParser(description=descText, add_help=True) 55 | parser.add_argument("-i", 56 | dest="in_nc", 57 | required=True, 58 | help="Path to WPS geogrid (geo_em.d0*.nc) file or WRF-Hydro Fulldom_hires.nc file.") 59 | parser.add_argument("-o", 60 | dest="out_dir", 61 | default='./{0}'.format(outSHPDefault), 62 | required=True, 63 | help="Output directory.") 64 | 65 | # If no arguments are supplied, print help message 66 | if len(sys.argv) == 1: 67 | parser.print_help(sys.stderr) 68 | sys.exit(1) 69 | args = parser.parse_args() 70 | all_defaults = {key: parser.get_default(key) for key in vars(args)} 71 | 72 | # Handle output path 73 | if args.out_dir == all_defaults["out_dir"]: 74 | print('Using default output location of: {0}'.format(all_defaults["out_dir"])) 75 | 76 | # Input and output files and directories 77 | outSHP = os.path.join(args.out_dir, os.path.basename(args.in_nc).replace('.nc', '_boundary.shp')) 78 | 79 | rootgrp = netCDF4.Dataset(args.in_nc, 'r') # Establish an object for reading the input NetCDF file 80 | coarse_grid = WRF_Hydro_Grid(rootgrp) # Instantiate a grid object 81 | print(' Created projection definition from input NetCDF GEOGRID file.') 82 | 83 | # Write out domain boundary shapefile 84 | '''For some unkonwn reason, the OGR.createlayer method will not pass CRS scale factors 85 | into the output for polar sterographic projections.''' 86 | geom = coarse_grid.boundarySHP(outSHP, 
outDriverName) 87 | geom = None 88 | rootgrp.close() 89 | del rootgrp, geom 90 | print(' Output shapefile: {0}'.format(outSHP)) 91 | print('Process completed in {0:3.2f} seconds.'.format(time.time()-tic)) -------------------------------------------------------------------------------- /wrfhydro_gis/Create_SoilProperties_and_Hydro2D.py: -------------------------------------------------------------------------------- 1 | # *=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*= 2 | # Copyright UCAR (c) 2018 3 | # University Corporation for Atmospheric Research(UCAR) 4 | # National Center for Atmospheric Research(NCAR) 5 | # Research Applications Laboratory(RAL) 6 | # P.O.Box 3000, Boulder, Colorado, 80307-3000, USA 7 | # 14/12/2018 8 | # 9 | # Name: create_SoilProperties.py 10 | # Purpose: 11 | # Author: Kevin Sampson, NCAR 12 | # Created: 14/12/2018 13 | # Licence: 14 | # 15 | # Based on: 16 | # ############################################################# 17 | # R script to create spatial parameter files from TBLs. 18 | # Usage: Rscript create_SoilProperties.R 19 | # Developed: 11/11/2016, A. Dugger 20 | # Updated: 07/23/2017, A.Dugger 21 | # New capability to handle soil composition 22 | # fractions and convert to soil parameters. Also 23 | # new functionality to create HYDRO2DTBL.nc. 24 | # Pedotransfer and soil texture class functions 25 | # from M. Barlage (7/2017). 26 | # ############################################################# 27 | # *=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*= 28 | 29 | ''' 30 | 09/09/2024 31 | ''' 32 | 33 | # Screen print in case invalid parameters are given 34 | descText = '''Usage: python %s 35 | -> Input GEOGRID netCDF file 36 | -> Directory containing SOILPARM.TBL, MPTABLE.TBL, GENPARM.TBL, and HYDRO.TBL 37 | -> Directory to write output files to (soil_properties.nc, hydro2dtbl.nc) 38 | ''' 39 | 40 | # Import python core modules 41 | import os 42 | import sys 43 | import time 44 | import re 45 | import shutil 46 | from io import StringIO 47 | 48 | # Import additional modules 49 | import f90nml 50 | import netCDF4 51 | import numpy as np 52 | import pandas as pd 53 | from argparse import ArgumentParser 54 | 55 | # --- Global Variables --- # 56 | 57 | ####################################################### 58 | # Update relevant arguments below. 59 | ####################################################### 60 | 61 | #### Land cover classification system? 62 | # Options: ["USGS", "MODIS"] 63 | landClass = "USGS" 64 | 65 | # Select which soil parameter set to use from SOILPARM.TBL ['STAS', 'STAS-RUC'] 66 | soilparm_set = 'STAS' 67 | 68 | # Use SOILCOMP? 69 | useSoilComp = False 70 | 71 | # Update texture class in geogrid?? 72 | # Note that if TRUE, the script will overwrite the geogrid file specified above. 73 | updateTexture = False 74 | 75 | #### Category to fill in for soil class if a cell is water in the soil layer but NOT water in the land cover layer: 76 | # If the script encounters a cell that is classified as land in the land use field (LU_INDEX) but is 77 | # classified as a water soil type, it will replace the soil type with the value you specify below. 78 | # If updateTexture is TRUE, these chages will be propagated to the geogrid. If not, they are just 79 | # used in parameter assignment. 80 | # Ideally there are not very many of these, so you can simply choose the most common soil type in 81 | # your domain. 
Alternatively, you can set to a "bad" value (e.g., -8888) to see how many of these 82 | # conflicts there are. If you do this DO NOT RUN THE MODEL WITH THESE BAD VALUES. Instead, fix them 83 | # manually with a neighbor fill or similar fill algorithm. 84 | soilFillVal = 3 85 | 86 | # Output files to create: 87 | # IMPORTANT: The netcdf files below will be overwritten if they exist! 88 | slpropFile = 'soil_properties.nc' 89 | hyd2dFile = 'hydro2dtbl.nc' 90 | 91 | #### Hard-wire urban soil properties in hydro 2d table? 92 | # Some soil parameters are hard-coded to preset values in NoahMP and WRF-Hydro for urban land cover cells. 93 | # If you want to show these in your hyd2dFile parameter file, set this to TRUE. If you want to show 94 | # default parameters, set to FALSE. There should be no answer differences either way. 95 | setUrban = False 96 | 97 | ####################################################### 98 | # Do not update below this line. 99 | ####################################################### 100 | 101 | ### Number of soil layers (e.g., 4) 102 | # This number should be consistent with the nsoil in the geogrid IF you choose the updateTexture option. 103 | nsoil = 4 104 | 105 | # User-specified soil layer thickness (dzs). Also used to calculate depth of the center of each layer (zs) 106 | lyrHt = [0.1, 0.3, 0.6, 1.0] # Soil layer thickness top layer to bottom (m) 107 | 108 | # Data model for output netCDF data 109 | outNCType = 'NETCDF4' 110 | 111 | # Default WRF-Hydro nodata value 112 | fillValue = -9999.0 113 | 114 | # Input parameter table names and MPTABLE parsing 115 | soilParamFile = 'SOILPARM.TBL' 116 | mpParamFile = 'MPTABLE.TBL' 117 | genParamFile = 'GENPARM.TBL' 118 | if landClass == "USGS": 119 | hydParamFile = 'HYDRO.TBL' 120 | mp_params = 'noahmp_usgs_parameters' 121 | elif landClass == "MODIS": 122 | hydParamFile = 'HYDRO_MODIS.TBL' 123 | mp_params = 'noahmp_modis_parameters' 124 | 125 | # Map variable names from netCDF (lower case, keys) to parameter column headings 126 | # from .TBL files (upper case, values) 127 | nameLookupSoil = dict(smcref="REFSMC", 128 | dwsat="SATDW", 129 | smcdry="DRYSMC", 130 | smcwlt="WLTSMC", 131 | bexp="BB", 132 | dksat="SATDK", 133 | psisat="SATPSI", 134 | quartz="QTZ", 135 | refdk="REFDK", 136 | refkdt="REFKDT", 137 | slope="SLOPE", 138 | smcmax="MAXSMC", 139 | cwpvt="CWPVT", 140 | vcmx25="VCMX25", 141 | mp="MP", 142 | hvt="HVT", 143 | mfsno="MFSNO", 144 | AXAJ="AXAJ", 145 | BXAJ="BXAJ", 146 | XXAJ="XXAJ") 147 | 148 | # 3D variables in soil_properties file 149 | var3d = ["smcref", 150 | "dwsat", 151 | "smcdry", 152 | "smcwlt", 153 | "bexp", 154 | "dksat", 155 | "psisat", 156 | "quartz", 157 | "smcmax"] 158 | 159 | # All possible soilParam columns 160 | soilparm_columns = ['solID', 161 | 'BB', 162 | 'DRYSMC', 163 | 'F11', 164 | 'MAXSMC', 165 | 'REFSMC', 166 | 'SATPSI', 167 | 'SATDK', 168 | 'SATDW', 169 | 'WLTSMC', 170 | 'QTZ', 171 | 'AXAJ', 172 | 'BXAJ', 173 | 'XXAJ', 174 | 'solName'] 175 | 176 | ## Assumes that each table starts with 'Soil Parameters' and the next line is the name of the parameter set 177 | SOILPARM_Start = 'Soil Parameters' 178 | 179 | # Additional parameters to be added from MPTABLE 'noahmp_global_parameters' table 180 | add_vars = ['ssi', 'snowretfac', 'tau0', 'rsurfsnow', 'scamax', 'rsurfexp'] 181 | 182 | # Default global values for snow variables in case they are not found in MPTABLE 183 | add_var_defaults = {'ssi': 0.03, 184 | 'snowretfac': 0.00005, 185 | 'tau0': 1000000.0, 186 | 'rsurfsnow': 50.0, 187 | 'scamax': 
1.0, 188 | 'rsurfexp': 5.0} 189 | 190 | # Mapping between output variable (key) and input parameter name from MPTABLE (value) 191 | global_mp_dict = {'ssi': 'ssi', 192 | 'snowretfac': 'snow_ret_fac', 193 | 'tau0': 'tau0', 194 | 'rsurfsnow': 'rsurf_snow', 195 | 'scamax': 'scamax', 196 | 'rsurfexp': 'rsurf_exp'} 197 | 198 | # Hydro 2D Table variable names from netCDF (lower case, keys) to parameter column 199 | # headings from .TBL files (upper case, values) 200 | nameLookupHyd = dict(SMCMAX1="smcmax", 201 | SMCREF1="smcref", 202 | SMCWLT1="smcwlt", 203 | OV_ROUGH2D="OV_ROUGH2D", 204 | LKSAT="dksat", 205 | NEXP="NEXP") 206 | 207 | # Soil layers that will be averaged over when calculating pedo-transfer function parameters. 208 | hydTagList = [0, 1, 2, 3] 209 | 210 | # Output netCDF file structure 211 | soilDimName = 'soil_layers_stag' 212 | dims3D = tuple(['Time', 'soil_layers_stag', 'south_north', 'west_east']) 213 | dims2D = tuple(['Time', 'south_north', 'west_east']) 214 | 215 | # Pedo-transfer function parameter range limits ([min, max]; set to NULL if you don't want to apply limits): 216 | theta_1500_rng = [0.03, 0.45] 217 | theta_33_rng = [0.07, 0.56] 218 | theta_s33_rng = [0.01, 0.50] 219 | psi_e_rng = [0.1, 30.0] 220 | 221 | 222 | # --- End Global Variables --- # 223 | 224 | # --- Functions --- # 225 | 226 | def is_valid_file(parser, arg): 227 | # https://stackoverflow.com/questions/11540854/file-as-command-line-argument-for-argparse-error-message-if-argument-is-not-va 228 | if not os.path.exists(arg): 229 | parser.error(f"The file {arg} does not exist!") 230 | else: 231 | return str(arg) 232 | 233 | 234 | def array_replace(from_values, to_values, array): 235 | ''' 236 | This function will perform a fast array replacement based on arrays of 237 | from and to values. 238 | ''' 239 | sort_idx = np.argsort(from_values) 240 | idx = np.searchsorted(from_values, array, sorter=sort_idx) 241 | out = to_values[sort_idx][idx] 242 | return out 243 | 244 | 245 | def lyrAvg(arr, hts, inds): 246 | ''' 247 | This function will calculate a mean from a multidimensional array given 248 | the list of indexes to average. 249 | This seems to more heavily weight the inputs by the depth... ?! 
250 |     '''
251 |     wtval = 0
252 |     wtsum = 0
253 |     for i in inds:
254 |         wtval += arr[i] * hts[i]
255 |         wtsum += hts[i]
256 |     return wtval / wtsum
257 | 
258 | 
259 | def ApplyPedo(sand, clay, orgm=0.0):
260 |     # Pedotransfer function calculations
261 |     theta_1500t = (-0.024 * sand) + (0.487 * clay) + (0.006 * orgm) + (0.005 * (sand * orgm)) - (
262 |         0.013 * (clay * orgm)) + (0.068 * sand * clay) + 0.031
263 |     theta_1500 = theta_1500t + ((0.14 * theta_1500t) - 0.02)
264 |     theta_33t = (-0.251 * sand) + (0.195 * clay) + (0.011 * orgm) + (0.006 * (sand * orgm)) - (
265 |         0.027 * (clay * orgm)) + (0.452 * sand * clay) + 0.299
266 |     theta_33 = theta_33t + ((1.283 * theta_33t * theta_33t) - (0.374 * theta_33t) - 0.015)
267 |     theta_s33t = (0.278 * sand) + (0.034 * clay) + (0.022 * orgm) - (0.018 * (sand * orgm)) - (
268 |         0.027 * (clay * orgm)) - (0.584 * sand * clay) + 0.078
269 |     theta_s33 = theta_s33t + ((0.636 * theta_s33t) - 0.107)
270 |     psi_et = (-21.67 * sand) - (27.93 * clay) - (81.97 * theta_s33) + (71.12 * (sand * theta_s33)) + (
271 |         8.29 * (clay * theta_s33)) + (14.05 * sand * clay) + 27.16
272 |     psi_e = psi_et + ((0.02 * psi_et * psi_et) - (0.113 * psi_et) - 0.7)
273 | 
274 |     # Clip array values to realistic bounds
275 |     if theta_1500_rng:
276 |         theta_1500 = np.clip(theta_1500, theta_1500_rng[0], theta_1500_rng[1])
277 |     if theta_33_rng:
278 |         theta_33 = np.clip(theta_33, theta_33_rng[0], theta_33_rng[1])
279 |     if theta_s33_rng:
280 |         theta_s33 = np.clip(theta_s33, theta_s33_rng[0], theta_s33_rng[1])
281 |     if psi_e_rng:
282 |         psi_e = np.clip(psi_e, psi_e_rng[0], psi_e_rng[1])
283 | 
284 |     # Almost final param arrays
285 |     smcwlt_3d = theta_1500
286 |     smcref_3d = theta_33
287 |     smcmax_3d = theta_33 + theta_s33 - (0.097 * sand) + 0.043
288 |     bexp_3d = 3.816712826 / (np.log(theta_33) - np.log(theta_1500))
289 |     psisat_3d = psi_e
290 |     dksat_3d = 1930.0 * (smcmax_3d - theta_33) ** (3.0 - (1.0 / bexp_3d))
291 |     quartz_3d = sand
292 | 
293 |     # Units conversion
294 |     psisat_3d = 0.101997 * psisat_3d  # convert kPa to m
295 |     dksat_3d = dksat_3d / 3600000.0  # convert mm/h to m/s
296 |     dwsat_3d = (dksat_3d * psisat_3d * bexp_3d) / smcmax_3d  # units should be m*m/s
297 |     smcdry_3d = smcwlt_3d
298 |     outDict = dict(smcwlt=smcwlt_3d, smcref=smcref_3d, smcmax=smcmax_3d, bexp=bexp_3d,
299 |                    psisat=psisat_3d, dksat=dksat_3d, quartz=quartz_3d, dwsat=dwsat_3d, smcdry=smcdry_3d)
300 |     return outDict
301 | 
302 | 
303 | def demote_dtype(in_df, in_dtype='float64', out_dtype='float32'):
304 |     '''
305 |     Alter the dtype of the columns of a pandas DataFrame.
306 |     '''
307 |     for column in in_df.columns:
308 |         if in_df[column].dtype == in_dtype:
309 |             in_df[column] = in_df[column].astype(out_dtype)
310 |     return in_df
311 | 
312 | 
313 | def obtain_soilparams(in_file, table_name='STAS'):
314 |     '''
315 |     Create pandas DataFrame tables from each parameter set in the soil parameter
316 |     files. Assumes that each table starts with 'Soil Parameters' and the next
317 |     line is the name of the parameter set. 
318 | ''' 319 | 320 | # Read entire table into a list of lines 321 | with open(in_file, 'r') as td: 322 | lines = td.readlines() 323 | 324 | # Find where each table starts and ends 325 | table_start_lines = [n for n, item in enumerate(lines) if item.startswith(SOILPARM_Start)] 326 | num_tables = len(table_start_lines) 327 | print(f' Found {num_tables} Soil Parameter tables in {os.path.basename(in_file)}') 328 | table_start_lines += [len(lines)] 329 | 330 | # Build a dictionary of tables 331 | for n in range(num_tables): 332 | in_table_name = lines[table_start_lines[n] + 1].strip() 333 | if in_table_name == table_name: 334 | print(f" Found table: {table_name}") 335 | 336 | # Find header inside of quotes and modify header line 337 | header = re.findall(r"'(.*?)'", lines[table_start_lines[n] + 2].strip(), re.DOTALL) 338 | header = re.split(r'\s{2,}', header[0]) 339 | header.insert(0, 'solID') 340 | header = ['solName' if column == '' else column for column in header] 341 | header += [item for item in soilparm_columns if 342 | item not in header] # Add columns to the end that may be missing 343 | 344 | # Identify the body lines for the sub-table 345 | body = [line for line in lines[table_start_lines[n] + 3:table_start_lines[n + 1]]] 346 | 347 | # Generate the dataframe and modify data types 348 | df = pd.read_table(StringIO('\n'.join(body)), sep=',', header=None, names=header) 349 | df = demote_dtype(df, in_dtype='float64', out_dtype='float32') 350 | df = demote_dtype(df, in_dtype='i8', out_dtype='i4') 351 | return df 352 | 353 | 354 | def obtain_MPparams(in_file, table_name='noahmp_modis_parameters'): 355 | ''' 356 | Create pandas DataFrame tables from each parameter set in the MP parameter files. 357 | ''' 358 | # Read the parameter table as a FORTRAN namelist 359 | namelist_patch = f90nml.read(in_file) 360 | 361 | # Convert top-level namelist to dictionary 362 | namelist_dict = namelist_patch.todict() 363 | num_tables = len(namelist_dict) 364 | if table_name in namelist_dict.keys(): 365 | print(f" Found table: {table_name}") 366 | 367 | # Read selected table 368 | table = namelist_dict[table_name] 369 | 370 | # Read the information in the tablef 371 | val_dict = {} 372 | list_dict = {} 373 | for key, val in list(table.items()): 374 | if type(val) == int: 375 | val_dict[key] = val 376 | elif type(val) == list: 377 | list_dict[key] = val 378 | elif type(val) == float: 379 | list_dict[key] = val 380 | 381 | # Handle different returns 382 | if table_name == 'noahmp_global_parameters': 383 | return list_dict 384 | else: 385 | 386 | # set column names to 1...n 387 | columns = range(1, max([len(x) for x in list_dict.values()]) + 1) 388 | 389 | # Create the dataframe 390 | df = pd.DataFrame.from_dict(list_dict) 391 | df.index = columns 392 | return df 393 | 394 | 395 | def obtain_GENparams(in_file): 396 | ''' 397 | Create dictionary of GENPARM parameters. Assumes that all parameters end with '_DATA'. 
398 | ''' 399 | 400 | # Read entire table into a list of lines 401 | GPdict = {} 402 | with open(in_file, 'r') as td: 403 | lines = td.readlines() 404 | lines = [line.strip() for line in lines] 405 | 406 | # Find where the parameters start and end in the file 407 | param_start_lines = [n for n, item in enumerate(lines) if item.endswith('_DATA')] 408 | num_params = len(param_start_lines) 409 | param_start_lines += [len(lines)] 410 | print(f' Found {num_params} parameters in {os.path.basename(in_file)}') 411 | 412 | for n in range(num_params): 413 | param_name = lines[param_start_lines[n]] 414 | GPdict[param_name] = [float(line) for line in lines[param_start_lines[n] + 1:param_start_lines[n + 1]]] 415 | return GPdict 416 | 417 | 418 | def obtain_HYDROparams(in_file): 419 | # Read entire table into a list of lines 420 | with open(in_file, 'r') as td: 421 | lines = td.readlines() 422 | lines = [line.strip() for line in lines] 423 | 424 | # Find the roughness parameters for landcover types 425 | num_SFC_ROUGH = int(lines[0].split(' ')[0]) 426 | SFC_ROUGH_startline = [n for n, line in enumerate(lines) if line.startswith('SFC_ROUGH')][0] 427 | SFC_ROUGH_endline = SFC_ROUGH_startline + 1 + num_SFC_ROUGH 428 | SFC_ROUGH_vals = [line for line in lines[SFC_ROUGH_startline + 1:SFC_ROUGH_endline]] 429 | 430 | # Create the surface roughness parameter dataframe 431 | sfcRough_df = pd.read_table(StringIO('\n'.join(SFC_ROUGH_vals)), sep=',', header=None, 432 | names=('OV_ROUGH2D', 'descrip')) 433 | print(f' Found {len(sfcRough_df)} SFC_ROUGH parameters in {os.path.basename(in_file)}') 434 | 435 | # Find the soil hydro parameters 436 | soil_param_start = SFC_ROUGH_endline 437 | num_soil_params = int(lines[soil_param_start].split(',')[0]) 438 | 439 | # Find header for soil parameters 440 | header = re.split(r'\s{2,}', lines[soil_param_start + 1]) 441 | header = ['solName' if column == "'" else column for column in header] 442 | 443 | # Identify the body lines for the sub-table 444 | body = [line for line in lines[soil_param_start + 2:soil_param_start + 2 + num_soil_params]] 445 | 446 | # Generate the dataframe and modify data types 447 | hydro_df = pd.read_table(StringIO('\n'.join(body)), sep=',', header=None, names=header) 448 | hydro_df = demote_dtype(hydro_df, in_dtype='float64', out_dtype='float32') 449 | hydro_df = demote_dtype(hydro_df, in_dtype='i8', out_dtype='i4') 450 | return hydro_df, sfcRough_df 451 | 452 | 453 | def main_soilProp(geoFile, 454 | paramDir, 455 | outDir, 456 | landClass=landClass, 457 | soilFillVal=soilFillVal, 458 | setUrban=setUrban, 459 | soilparm_set=soilparm_set): 460 | '''This function will process the input geogrid file into the appropriate 461 | 2D soil property and hydro prarameter output variables. 462 | 463 | Inputs: 464 | geoFile - WRF WPS GEOGRID file (netCDF). 465 | paramDir - The directory containing WRF-Hydro parameter files. 
466 | outDir - Directory to store outputs 467 | Outputs: 468 | {slpropFile} - 2-Dimensional soil properties file (netCDF) 469 | {hyd2dFile} - 2-Dimensional hydro parameter file (netCDF) 470 | ''' 471 | tic1 = time.time() 472 | 473 | # Setup input and output files 474 | slpropF = os.path.join(outDir, slpropFile) 475 | hydFile = os.path.join(outDir, hyd2dFile) 476 | soilParamF = os.path.join(paramDir, soilParamFile) 477 | mpParamF = os.path.join(paramDir, mpParamFile) 478 | genParamF = os.path.join(paramDir, genParamFile) 479 | hydParamF = os.path.join(paramDir, hydParamFile) 480 | for inFile in [geoFile, soilParamF, mpParamF, genParamF]: 481 | if not os.path.isfile(inFile): 482 | print(f'Expected input file "{inFile}" not found.\nExiting...') 483 | raise SystemExit 484 | print(' Creating initial soil_properties.nc file from the input geogrid file') 485 | print(f' Input: {geoFile}') 486 | print(f' Output: {slpropF}') 487 | print(f' Output: {hydFile}') 488 | print(f' Keyword arguments:') 489 | print(f' landClass: {landClass}') 490 | print(f' soilFillVal: {soilFillVal}') 491 | print(f' setUrban: {setUrban}') 492 | print(f' soilparm_set: {soilparm_set}') 493 | 494 | # Instantiate read and write objects on netCDF files 495 | if updateTexture: 496 | new_geoFile = os.path.join(outDir, os.path.basename(geoFile).replace('.nc', '_texture.nc')) 497 | shutil.copyfile(geoFile, new_geoFile) 498 | print(f' Copied input geogrild file:\n \t{geoFile}\n to:\n \t{new_geoFile}') 499 | rootgrp_geo = netCDF4.Dataset(new_geoFile, 'r+') # Read/Write object on input GEOGRID file 500 | else: 501 | rootgrp_geo = netCDF4.Dataset(geoFile, 'r') # Read object on input GEOGRID file 502 | rootgrp_SP = netCDF4.Dataset(slpropF, 'w', format=outNCType) # Write object to create output file 503 | 504 | # Copy dimensions from GEOGRID file 505 | for dimname, dim in rootgrp_geo.dimensions.items(): 506 | if dimname in dims2D: 507 | if dimname == 'Time': 508 | dimlen = None # Make Time an unlimited dimension 509 | else: 510 | dimlen = len(dim) 511 | rootgrp_SP.createDimension(dimname, dimlen) # Copy dimensions from the GEOGRID file 512 | soildim = rootgrp_SP.createDimension(soilDimName, nsoil) # Add soil_layers_stag dimension 513 | del dimname, dim, soildim 514 | 515 | # Populate initial file with variables to keep from the input GEOGRID file 516 | for varname in nameLookupSoil: 517 | if varname in var3d: 518 | rootgrp_SP.createVariable(varname, 'f4', dims3D, fill_value=fillValue) 519 | else: 520 | rootgrp_SP.createVariable(varname, 'f4', dims2D, fill_value=fillValue) 521 | del varname 522 | 523 | # Global Attributes - copy all from GEOGRID file to soil_properties 524 | ncatts = {key: val for key, val in rootgrp_geo.__dict__.items()} 525 | ncatts['Source_Software'] = f'WRF-Hydro {sys.argv[0]} script (Python) v1.0' 526 | ncatts['creation_time'] = f'Created {time.ctime()}' 527 | rootgrp_SP.setncatts(ncatts) 528 | 529 | # Create new hydro2d file with fill values 530 | dims2D_hyd = [item for item in dims2D if item != 'Time'] 531 | rootgrp_hyd = netCDF4.Dataset(hydFile, 'w', format=outNCType) # Write object to create output file 532 | for dimname, dim in rootgrp_geo.dimensions.items(): 533 | if dimname in dims2D_hyd: 534 | rootgrp_hyd.createDimension(dimname, len(dim)) # Copy dimensions from the GEOGRID file 535 | for varname in nameLookupHyd.keys(): 536 | rootgrp_hyd.createVariable(varname, 'f4', dims2D_hyd, fill_value=fillValue) 537 | rootgrp_hyd.setncatts(ncatts) 538 | del dimname, dim, varname 539 | 540 | # --- Read parameter tables 
--- # 541 | 542 | # --- Read Soil parameter table --- # 543 | 544 | print(f'Reading {soilParamF}') 545 | soilTblDf = obtain_soilparams(soilParamF, table_name=soilparm_set) 546 | 547 | # --- End Read Soil parameter table --- # 548 | 549 | # --- Read MP parameter table --- # 550 | 551 | print(f'Reading {mpParamF}') 552 | mpTblDf = obtain_MPparams(mpParamF, table_name=mp_params) 553 | mpTblDict = obtain_MPparams(mpParamF, table_name='noahmp_global_parameters') 554 | 555 | # --- End Read MP parameter table --- # 556 | 557 | # --- Read GENPARM parameter table --- # 558 | 559 | print(f'Reading {genParamF}') 560 | GPdict = obtain_GENparams(genParamF) 561 | genTab = {'SLOPE': GPdict['SLOPE_DATA'][1], 562 | 'REFKDT': GPdict['REFKDT_DATA'][0], 563 | 'REFDK': GPdict['REFDK_DATA'][0]} 564 | 565 | # --- End Read GENPARM parameter table --- # 566 | 567 | # --- Read HYDRO parameter table --- # 568 | 569 | print(f'Reading {hydParamF}') 570 | hydroTblDf, sfcRoughDf = obtain_HYDROparams(hydParamF) 571 | 572 | # --- End Read HYDRO parameter table --- # 573 | 574 | # Get 2D fields and global attribute values from input GEOGRID file 575 | vegmap = rootgrp_geo.variables['LU_INDEX'][0] 576 | solmap = rootgrp_geo.variables['SCT_DOM'][0] 577 | maxSoilClass = len(rootgrp_geo.dimensions['soil_cat']) 578 | vegWater = rootgrp_geo.ISWATER 579 | vegLake = rootgrp_geo.ISLAKE 580 | soilWater = rootgrp_geo.ISOILWATER 581 | vegUrban = rootgrp_geo.ISURBAN 582 | print(f'Geogrid attributes: vegWater={vegWater} soilWater={soilWater}, maxSoilClass={maxSoilClass}') 583 | 584 | if useSoilComp and 'SOILCOMP' in rootgrp_geo.variables.keys(): 585 | print('Pulling in soil composition data') 586 | soilc = rootgrp_geo.variables['SOILCOMP'][0] 587 | soilcFrac = (soilc / 100.0).astype('f8') # Convert percent to fraction and make Double 588 | sandFrac3D = soilcFrac[0:4] # Sand fraction for layers 1-4 589 | clayFrac3D = soilcFrac[4:8] # Clay fraction for layers 1-4 590 | orgm3D = 0.0 591 | 592 | # Create water mask from layers with all 0s 593 | soilWaterMsk = np.sum(soilc, axis=0) 594 | soilWaterMsk[soilWaterMsk == 0] = fillValue 595 | soilWaterMsk[soilWaterMsk > 0] = 0 596 | 597 | # Create layer means 598 | layerMeans = {} # Empty dictionary to store data 599 | layerMeans[f'sand'] = lyrAvg(sandFrac3D, lyrHt, hydTagList) 600 | layerMeans[f'clay'] = lyrAvg(clayFrac3D, lyrHt, hydTagList) 601 | 602 | # Apply pedotransfer functions 603 | print('Applying pedotransfer functions') 604 | soillist3D = ApplyPedo(sandFrac3D, clayFrac3D, orgm3D) 605 | pedoXfer = {} # Empty dictionary to store transformed parameters 606 | pedoXfer[f'soillist'] = ApplyPedo(layerMeans[f'sand'], layerMeans[f'clay']) 607 | del soilcFrac, sandFrac3D, clayFrac3D 608 | else: 609 | # Fill all areas in SCT_DOM where the vegetation is water and soil is water with a fill value 610 | vegmap[vegmap == vegLake] = vegWater 611 | solmap[np.logical_and(vegmap != vegWater, solmap == soilWater)] = soilFillVal 612 | solmap[vegmap == vegWater] = soilWater 613 | 614 | paramList = rootgrp_SP.variables.keys() 615 | print(f'Updating {slpropFile}') 616 | for param in paramList: 617 | paramName = nameLookupSoil[param] 618 | ncvar = rootgrp_SP.variables[param] 619 | if paramName in soilTblDf.columns.values: 620 | # Parameter is in the soil table, map to the categories 621 | print(f' Updating SOIL parameter: {param} {paramName}') 622 | if useSoilComp and 'SOILCOMP' in rootgrp_geo.variables.keys(): 623 | pnew = soillist3D[param][0] 624 | 625 | # Create a water mask 626 | pnew[vegmap == vegWater] = 0 
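                # In this SOILCOMP path the pedotransfer-derived fields are used directly
                # (with water cells zeroed above); the else branch below instead maps the
                # dominant soil category grid (SCT_DOM) through the SOILPARM.TBL values
                # using array_replace().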
627 | # pnew[vegmap!=vegWater] = fillValue 628 | # pnew[pnew>fillValue] = soilWater 629 | else: 630 | pnew = solmap.copy() 631 | paramVar = ncvar[:] 632 | 633 | # Create 2D replacement matrix 634 | replaceArr = np.array([soilTblDf.solID.values, soilTblDf[paramName]]) 635 | pnew = array_replace(replaceArr[0, :], replaceArr[1, :], pnew).astype('f4') 636 | 637 | # Build mask array to reclassify out of range values 638 | maskArr = np.full(pnew.shape, True, dtype=bool) 639 | maskArr = array_replace(replaceArr[0, :], np.full(soilTblDf.solID.values.shape, False, dtype=bool), 640 | maskArr) 641 | pnew[maskArr] = fillValue 642 | 643 | # Fill in output netCDF variable with this value 644 | if 'soil_layers_stag' in ncvar.dimensions: 645 | dimsize = len(rootgrp_SP.dimensions[ncvar.dimensions[-3]]) # Either 'Time' or 'soil_layers_stag' 646 | 647 | # Write output to NetCDF 648 | ncvar[0] = np.repeat(pnew[np.newaxis, :, :], dimsize, axis=0) 649 | else: 650 | # Write output to NetCDF 651 | ncvar[0] = pnew 652 | 653 | elif paramName.lower() in mpTblDf.columns: 654 | 655 | # Parameter is in the table, map to the categories. 656 | print(f' Updating MP parameter: {param} {paramName}') 657 | paramVals = mpTblDf[paramName.lower()] 658 | paramCols = mpTblDf.index 659 | 660 | # Create 2D replacement matrix 661 | replaceArr = np.array([paramCols.tolist(), paramVals.tolist()]) 662 | pnew = array_replace(replaceArr[0, :], replaceArr[1, :], vegmap).astype('f4') 663 | 664 | # Build mask array to reclassify out of range values 665 | maskArr = np.full(pnew.shape, True, dtype=bool) 666 | maskArr = array_replace(replaceArr[0, :], np.full(paramVals.shape, False, dtype=bool), maskArr) 667 | pnew[maskArr] = fillValue 668 | 669 | # Write output to NetCDF 670 | ncvar[0] = pnew 671 | 672 | elif paramName in genTab: 673 | # Parameter is in the general parameter table 674 | print(f' Updating GEN parameters: {param} {paramName} with {genTab[paramName]}') 675 | 676 | # Write output to NetCDF 677 | ncvar[:, :, :] = genTab[paramName] 678 | 679 | # Add additional noahmp global parameters 680 | print(f' Adding additional noahmp global parameters to {slpropFile}') 681 | for add_var in add_vars: 682 | if global_mp_dict[add_var] in mpTblDict: 683 | add_val = mpTblDict.get(global_mp_dict[add_var], fillValue) 684 | print(f' Adding global parameter {add_var}={add_val} to grid.') 685 | rootgrp_SP.createVariable(add_var, 'f4', dims2D, fill_value=fillValue) 686 | rootgrp_SP[add_var][:] = add_val 687 | else: 688 | print(f' Could not find parameter {global_mp_dict[add_var]} in {mpParamFile}.') 689 | print(f' Using default value of {add_var}={add_var_defaults[add_var]}.') 690 | rootgrp_SP.createVariable(add_var, 'f4', dims2D, fill_value=fillValue) 691 | rootgrp_SP[add_var][:] = add_var_defaults[add_var] 692 | 693 | rootgrp_SP.close() 694 | del pnew 695 | 696 | # Options to update the input Geogrid file texture classes 697 | if updateTexture: 698 | print('Updating texture classes') 699 | lyrDict = dict(top=dict(indx=0, splitlyr="SOILCTOP", mglyr="SCT_DOM"), 700 | bot=dict(indx=-1, splitlyr="SOILCBOT", mglyr="SCB_DOM")) 701 | 702 | for layer, layerOpts in lyrDict.items(): 703 | 704 | # Fill the 2D variable (SC*_DOM) 705 | inLayerName = layerOpts['mglyr'] 706 | splitlyr = layerOpts['splitlyr'] 707 | i = layerOpts['indx'] 708 | ncvar = rootgrp_geo.variables[inLayerName] 709 | ncvar2 = rootgrp_geo.variables[splitlyr] 710 | soil_texture = np.full(ncvar[0].shape, fillValue, dtype='f4') 711 | 712 | if useSoilComp and 'SOILCOMP' in 
rootgrp_geo.variables.keys():
713 |                 print(f' Using SOILCOMP variable to update soils for layer {layer}.')
714 |                 # SOILCOMP variable has vertical dimension with sand fraction layers
715 |                 # then clay fraction layers. Silt can be calculated as the remainder.
716 |                 sand = soilc[0:nsoil]  # Sand percentage (0-100) for layers 1-4
717 |                 clay = soilc[nsoil:]  # Clay percentage (0-100) for layers 1-4
718 |                 silt = 100.0 - sand - clay  # Silt is everything except sand and clay
719 | 
720 |                 # Bin into texture classes
721 |                 soil_texture[(silt[i] + 1.5 * clay[i]) < 15] = 1
722 |                 soil_texture[((silt[i] + 1.5 * clay[i]) >= 15) & (silt[i] + 2 * clay[i] < 30)] = 2
723 |                 soil_texture[((clay[i] >= 7) & (clay[i] < 20)) & (sand[i] > 52) & (((silt[i] + 2 * clay[i]) >= 30) | (
724 |                     (clay[i] < 7) & (silt[i] < 50) & ((silt[i] + 2 * clay[i]) >= 30)))] = 3
725 |                 soil_texture[
726 |                     ((clay[i] >= 7) & (clay[i] < 27)) & ((silt[i] >= 28) & (silt[i] < 50)) & (sand[i] <= 52)] = 6
727 |                 soil_texture[((silt[i] >= 50) & ((clay[i] >= 12) & (clay[i] < 27))) | (
728 |                     ((silt[i] >= 50) & (silt[i] < 80)) & (clay[i] < 12))] = 4
729 |                 soil_texture[((silt[i] >= 80) & (clay[i] < 12))] = 5
730 |                 soil_texture[((clay[i] >= 20) & (clay[i] < 35)) & ((silt[i] < 28) & (sand[i] > 45))] = 7
731 |                 soil_texture[((clay[i] >= 27) & (clay[i] < 40)) & ((sand[i] > 20) & (sand[i] <= 45))] = 9
732 |                 soil_texture[((clay[i] >= 27) & (clay[i] < 40)) & (sand[i] <= 20)] = 8
733 |                 soil_texture[(clay[i] >= 35) & (sand[i] > 45)] = 10
734 |                 soil_texture[(clay[i] >= 40) & (silt[i] >= 40)] = 11
735 |                 soil_texture[(clay[i] >= 40) & (sand[i] <= 45) & (silt[i] < 40)] = 12
736 | 
737 |                 # Apply water masks
738 |                 soil_texture[soilWaterMsk == fillValue] = soilFillVal
739 |                 soil_texture[vegmap == vegWater] = soilWater
740 |                 del sand, clay, silt  # Keep soilc so the next layer iteration can reuse it
741 |             else:
742 |                 print(' SOILCOMP variable not in the input GEOGRID file.')
743 |                 print(' Updating Soil Texture in GEOGRID file.')
744 |                 # soil_texture = solmap.copy()  # Will only be SCT_DOM! 
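                # Read the dominant-category grid for the current layer straight from the
                # netCDF variable handle (SCT_DOM for 'top', SCB_DOM for 'bot'); solmap holds
                # only the top-layer categories, which is why the line above stays disabled.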
745 | soil_texture = ncvar[0] 746 | soil_texture[np.logical_and(vegmap != vegWater, soil_texture == soilWater)] = soilFillVal 747 | soil_texture[vegmap == vegWater] = soilWater 748 | 749 | # Write output to NetCDF 750 | ncvar[0] = soil_texture 751 | 752 | # Calculate and place split layer (assumes 100% for specified class) 753 | for stype in range(1, maxSoilClass + 1): 754 | tmp = soil_texture.copy() 755 | tmp[soil_texture == stype] = 1 756 | tmp[soil_texture != stype] = 0 757 | ncvar2[0, stype - 1] = tmp 758 | del ncvar, ncvar2, soil_texture, tmp 759 | 760 | # Populate hydro2d file 761 | print(f'Updating: {hyd2dFile}') 762 | vegmap = rootgrp_geo.variables['LU_INDEX'][0] 763 | solmap = rootgrp_geo.variables['SCT_DOM'][0] 764 | 765 | # Loop through params and update 766 | paramList = rootgrp_hyd.variables 767 | for param in paramList: 768 | paramNameHyd = nameLookupHyd.get(param) 769 | paramNameSoil = nameLookupSoil.get(paramNameHyd) 770 | print(f' Processing {param}') 771 | ncvar = rootgrp_hyd.variables[param] 772 | if paramNameHyd and (paramNameSoil in soilTblDf.columns.tolist()): 773 | print(f' Updating HYDRO soil parameters: {param} {paramNameHyd} {paramNameSoil}') 774 | 775 | pnew = solmap.copy() 776 | if useSoilComp and 'SOILCOMP' in rootgrp_geo.variables.keys(): 777 | paramVar = pedoXfer[f'soillist'][paramNameHyd] 778 | 779 | # Create a water mask 780 | pnew[vegmap != vegWater] = fillValue 781 | pnew[pnew > fillValue] = soilWater 782 | else: 783 | paramVar = ncvar[:] 784 | 785 | # Create 2D replacement matrix 786 | pnew[vegmap == vegWater] = soilWater 787 | replaceArr = np.array([soilTblDf.solID.values, soilTblDf[paramNameSoil]]) 788 | pnew = array_replace(replaceArr[0, :], replaceArr[1, :], pnew).astype('f4') 789 | 790 | # Added to incorporate pedo-transfer function data 791 | pnew[pnew < -9998] = paramVar[pnew < -9998] 792 | 793 | # Manually make some changes to urban cells to match hydro code. 
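            # Per the setUrban note in the global settings above, the replacement values used
            # below mirror what the model itself hard-codes for urban cells, so they change
            # what the hydro2dtbl file displays rather than the model results.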
794 |             if setUrban:
795 |                 for param_to_alter, replaceval in zip(['SMCMAX1', 'SMCREF1', 'SMCWLT1'], [0.45, 0.42, 0.40]):
796 |                     if param == param_to_alter:
797 |                         pnew[np.where((vegmap == vegUrban) & (solmap != soilWater))] = replaceval
798 |                         print(
799 |                             f' Replaced HYDRO2D parameter {param} with {replaceval} where vegetation is urban but soil is not soilwater')
800 | 
801 |             # Write output to NetCDF
802 |             ncvar[:, :] = pnew
803 | 
804 |         elif paramNameHyd in sfcRoughDf:
805 |             print(f' Updating {paramNameHyd}')
806 |             pnew = vegmap.copy()
807 |             pnew[solmap == soilWater] = vegWater
808 | 
809 |             # Loop through each vegetation category
810 |             for catTmp in range(len(sfcRoughDf.descrip)):
811 |                 pnew[np.where(pnew == int(catTmp + 1))] = float(sfcRoughDf[paramNameHyd][catTmp])
812 | 
813 |             # Write output to NetCDF
814 |             ncvar[:, :] = pnew
815 |         elif paramNameHyd == "NEXP":
816 |             # Setting this to a global initial value of 1.0
817 |             print(f"Updating HYDRO global parameters: {param} {paramNameHyd}")
818 |             ncvar[:, :] = 1.0
819 | 
820 |     rootgrp_hyd.close()
821 |     rootgrp_geo.close()
822 |     del rootgrp_geo, rootgrp_hyd
823 |     print(' Function main_soilProp completed in {0:3.2f} seconds'.format(time.time() - tic1))
824 |     return
825 | 
826 | 
827 | # --- End Functions --- #
828 | 
829 | if __name__ == '__main__':
830 |     print('Script initiated at {0}'.format(time.ctime()))
831 |     tic = time.time()
832 | 
833 |     # Setup the input arguments
834 |     parser = ArgumentParser(description=descText, add_help=True)
835 |     parser.add_argument("-i",
836 |                         dest="in_Geogrid",
837 |                         type=lambda x: is_valid_file(parser, x),
838 |                         required=True,
839 |                         help="Path to WPS geogrid (geo_em.d0*.nc) file [REQUIRED]")
840 |     parser.add_argument("-p",
841 |                         dest="param_dir",
842 |                         type=lambda x: is_valid_file(parser, x),
843 |                         default='./',
844 |                         help="[OPTIONAL] Directory containing SOILPARM.TBL, MPTABLE.TBL, GENPARM.TBL, and HYDRO.TBL. Default is the current directory.")
845 |     parser.add_argument("-o",
846 |                         dest="out_dir",
847 |                         type=lambda x: is_valid_file(parser, x),
848 |                         default='./',
849 |                         help="[OPTIONAL] Directory to write output files to (soil_properties.nc, hydro2dtbl.nc). Default is the current directory.")
850 | 
851 |     # If no arguments are supplied, print help message
852 |     if len(sys.argv) == 1:
853 |         parser.print_help(sys.stderr)
854 |         sys.exit(1)
855 |     args = parser.parse_args()
856 |     all_defaults = {key: parser.get_default(key) for key in vars(args)}
857 | 
858 |     # Report which arguments are still set to their script default values
859 |     print(' Parameter values that have not been altered from script default values:')
860 |     if args.in_Geogrid == all_defaults["in_Geogrid"]:
861 |         print('  Using default input GEOGRID file: {0}'.format(all_defaults["in_Geogrid"]))
862 |     if args.param_dir == all_defaults["param_dir"]:
863 |         print('  Using default parameter table directory: {0}'.format(all_defaults["param_dir"]))
864 |     if args.out_dir == all_defaults["out_dir"]:
865 |         print('  Using default output directory: {0}'.format(all_defaults["out_dir"]))
866 | 
867 |     # Convert the validated arguments to absolute paths (keeps the file-path check above while avoiding a NoneType error later).
868 |     geoFile = args.in_Geogrid = os.path.abspath(
869 |         args.in_Geogrid)  # Obtain absolute path for required input geogrid file.
870 |     paramDir = args.param_dir = os.path.abspath(args.param_dir)  # Obtain absolute path for input parameter directory.
871 |     outDir = args.out_dir = os.path.abspath(args.out_dir)  # Obtain absolute path for output directory. 
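    # Example invocation (file and directory names are hypothetical):
    #   python Create_SoilProperties_and_Hydro2D.py -i geo_em.d01.nc -p ./param_tables -o ./outputs
    # Both -p and -o fall back to the current directory when omitted.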
872 | 873 | # Print information to screen 874 | print(' Values that will be used in building this routing stack:') 875 | print(f' Input GEOGRID file: {geoFile}') 876 | print(f' Parameter directory: {paramDir}') 877 | print(f' Output directory: {outDir}') 878 | 879 | # Run pre-process 880 | print(' Running Process: main_soilProp function') 881 | main_soilProp(geoFile, 882 | paramDir, 883 | outDir) 884 | print('Process completed in {0:3.2f} seconds.'.format(time.time() - tic)) 885 | -------------------------------------------------------------------------------- /wrfhydro_gis/Create_latitude_longitude_rasters.py: -------------------------------------------------------------------------------- 1 | # *=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*= 2 | # Copyright UCAR (c) 2019 3 | # University Corporation for Atmospheric Research(UCAR) 4 | # National Center for Atmospheric Research(NCAR) 5 | # Research Applications Laboratory(RAL) 6 | # P.O.Box 3000, Boulder, Colorado, 80307-3000, USA 7 | # 24/09/2019 8 | # 9 | # Name: module1 10 | # Purpose: 11 | # Author: $ Kevin Sampson 12 | # Created: 24/09/2019 13 | # Licence: 14 | # *=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*= 15 | 16 | descText = "This tool takes an input raster (most likely produced using the ExportGrid tool)" \ 17 | " and uses that grid to produce latitude and longitude ESRI GRID rasters." 18 | 19 | # Import Modules 20 | 21 | # Import Python Core Modules 22 | import os 23 | import sys 24 | import time 25 | 26 | # Import additional modules 27 | from argparse import ArgumentParser 28 | #from distutils.version import LooseVersion 29 | from packaging.version import parse as LooseVersion # To avoid deprecation warnings 30 | import osgeo 31 | 32 | try: 33 | if LooseVersion(osgeo.__version__) > LooseVersion('3.0.0'): 34 | from osgeo import gdal 35 | from osgeo import osr 36 | else: 37 | import gdal 38 | import osr 39 | except: 40 | sys.exit('ERROR: cannot find GDAL/OGR modules') 41 | 42 | # Import function library into namespace. Must exist in same directory as this script. 
43 | # Import wrfhydro_functions as wrfh, Function script packaged with this toolbox 44 | from wrfhydro_functions import (get_projection_from_raster, wgs84_proj4, getxy, 45 | ReprojectCoords, numpy_to_Raster) 46 | 47 | # Global Variables 48 | 49 | # Script options 50 | RasterDriver = 'GTiff' 51 | wgs84_proj4 = '+proj=longlat +datum=WGS84 +no_defs' 52 | 53 | # Main Codeblock 54 | if __name__ == '__main__': 55 | tic = time.time() 56 | print('Script initiated at {0}'.format(time.ctime())) 57 | 58 | # Setup the input arguments 59 | parser = ArgumentParser(description=descText, add_help=True) 60 | parser.add_argument("-i", 61 | dest="in_raster", 62 | default='', 63 | required=True, 64 | help="Path to input raster.") 65 | parser.add_argument("-o", 66 | dest="out_dir", 67 | default='', 68 | required=True, 69 | help="Output directory.") 70 | 71 | # If no arguments are supplied, print help message 72 | if len(sys.argv) == 1: 73 | parser.print_help(sys.stderr) 74 | sys.exit(1) 75 | args = parser.parse_args() 76 | all_defaults = {key: parser.get_default(key) for key in vars(args)} 77 | 78 | if args.in_raster == all_defaults["in_raster"]: 79 | print('Using input raster location of: {0}'.format(all_defaults["in_raster"])) 80 | 81 | if args.out_dir == all_defaults["out_dir"]: 82 | print('Using output location of: {0}'.format(all_defaults["out_dir"])) 83 | 84 | outLat = os.path.join(args.out_dir, 'LATITUDE.tif') 85 | outLon = os.path.join(args.out_dir, 'LONGITUDE.tif') 86 | 87 | # Open (read-only) input raster 88 | in_raster = gdal.Open(args.in_raster, 0) # Open with read-only mode 89 | 90 | # Gather information from input raster projection 91 | proj = get_projection_from_raster(in_raster) 92 | x00, DX, xskew, y00, yskew, DY = in_raster.GetGeoTransform() 93 | del xskew, yskew 94 | 95 | print(' Deriving geocentric coordinates on routing grid from direct transformation geogrid coordinates.') 96 | # Note that this might not be the same method used in the main pre-processing script 97 | # because a geogrid file is not required here as input. 
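    # (This mirrors the direct-transformation option ('coordMethod2') in the spatial metadata
    # script, as opposed to 'coordMethod1', which bilinearly resamples the GEOGRID
    # XLAT_M/XLONG_M arrays onto the routing grid.)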
98 | wgs84_proj = osr.SpatialReference() # Build empty spatial reference object 99 | wgs84_proj.ImportFromProj4(wgs84_proj4) # Import from proj4 to avoid EPSG errors (4326) 100 | xmap, ymap = getxy(in_raster) # Get x and y coordinates as numpy array 101 | in_raster = None 102 | lonArr2, latArr2 = ReprojectCoords(xmap, ymap, proj, wgs84_proj) # Transform coordinate arrays 103 | del wgs84_proj, in_raster, xmap, ymap 104 | 105 | # Convert back to rasters 106 | xmap = numpy_to_Raster(lonArr2, proj, DX, DY, x00, y00) 107 | ymap = numpy_to_Raster(latArr2, proj, DX, DY, x00, y00) 108 | del DX, DY, x00, y00, proj, latArr2, lonArr2 109 | 110 | # Section below causing a RuntimeError 111 | out_drv = gdal.GetDriverByName(RasterDriver) 112 | if ymap is not None: 113 | try: 114 | target_ds = out_drv.CreateCopy(outLat, ymap) 115 | target_ds = None 116 | except: 117 | pass 118 | if xmap is not None: 119 | try: 120 | target_ds = out_drv.CreateCopy(outLon, xmap) 121 | target_ds = None 122 | except: 123 | pass 124 | del out_drv, xmap, ymap 125 | 126 | ## # Save output rasters 127 | ## for OutGTiff, InRaster in zip([outLat, outLon], [ymap, xmap]): 128 | ## if InRaster is not None: 129 | ## target_ds = gdal.GetDriverByName(RasterDriver).CreateCopy(OutGTiff, InRaster) 130 | ## target_ds = None 131 | ## del OutGTiff, InRaster, xmap, ymap 132 | print('Process completed in {0:3.2f} seconds.'.format(time.time()-tic)) 133 | -------------------------------------------------------------------------------- /wrfhydro_gis/Create_wrfinput_from_Geogrid.py: -------------------------------------------------------------------------------- 1 | # *=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*= 2 | # Copyright UCAR (c) 2018 3 | # University Corporation for Atmospheric Research(UCAR) 4 | # National Center for Atmospheric Research(NCAR) 5 | # Research Applications Laboratory(RAL) 6 | # P.O.Box 3000, Boulder, Colorado, 80307-3000, USA 7 | # 20/10/2018 8 | # 9 | # Name: create_wrfinput.py 10 | # Purpose: 11 | # Author: Kevin Sampson 12 | # Created: 20/10/2018 13 | # Licence: 14 | # 15 | # 10/20/2018: 16 | # The purpose of this script is to port the functinality of the create_wrfinput.R 17 | # script to Python. 18 | # 19 | # Based on: 20 | # ############################################################ 21 | # R script to create wrfinput file from geogrid. 22 | # Usage: Rscript create_Wrfinput.R 23 | # Developed: 07/09/2017, A. Dugger 24 | # Mirrors the HRLDAS routines here: 25 | # https://github.com/NCAR/hrldas-release/blob/release/HRLDAS/HRLDAS_forcing/lib/module_geo_em.F 26 | # from M. Barlage. 27 | # ############################################################## 28 | # *=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*= 29 | 30 | # Import Python Core Modules 31 | import os 32 | import sys 33 | import getopt 34 | import time 35 | from argparse import ArgumentParser 36 | 37 | # Import additional Python Modules 38 | import numpy 39 | 40 | # Module configurations 41 | sys.dont_write_bytecode = True 42 | 43 | # Screen print in case invalid parameters are given 44 | descText = '''Utility for converting a WPS Geogrid (geo_em.d0*.nc) into a wrfinput file for initiating 45 | a WRF-Hydro simulation. 
An input Geogrid file and month with which to pull LAI is required.''' 46 | 47 | 48 | ''' 49 | To import and run these functions using python, from a custom script or console: 50 | 51 | from Create_wrfinput_from_Geogrid import main_ncdfpy 52 | main_ncdfpy(r'./geo_em.d01.nc', r'./wrfinput_aug_pytest.nc', lai=8) 53 | ''' 54 | 55 | ####################################################### 56 | # Global Variables - update relevant arguments below. 57 | ####################################################### 58 | 59 | # Soil type to use as a fill value in case conflicts between soil water and land cover water cells: 60 | # If the script encounters a cell that is classified as land in the land use field (LU_INDEX) 61 | # but is classified as a water soil type, it will replace the soil type with the value you 62 | # specify below. Ideally there are not very many of these, so you can simply choose the most 63 | # common soil type in your domain. Alternatively, you can set to a "bad" value (e.g., -8888) 64 | # to see how many of these conflicts there are. If you do this DO NOT RUN THE MODEL WITH THESE 65 | # BAD VALUES. Instead, fix them manually with a neighbor fill or similar fill algorithm. 66 | fillsoiltyp = 3 67 | 68 | # The script will attempt to find the dominant SOILCTOP value. If LU_INDEX is water, 69 | # this will be water, otherwise it will be the dominant fractional non-water SOILCTOP 70 | # category. If no dominant land class can be determined, set the soil category to 71 | # use as a fill below. 72 | dom_lc_fill = 8 73 | 74 | # User-specified soil layer thickness (dzs). Also used to calculate depth of the center of each layer (zs) 75 | dzs = [0.1, 0.3, 0.6, 1.0] # Soil layer thickness top layer to bottom (m) 76 | 77 | # Missing values to use when defining netcdf variables: 78 | missFloat = -1.e+36 79 | missInt = -9999 80 | 81 | #### Number of soil layers (e.g., 4) 82 | nsoil = 4 83 | 84 | ####################################################### 85 | # Do not update below here. 
86 | ####################################################### 87 | 88 | outNCType = 'NETCDF4' # Data model for output netCDF data 89 | 90 | # Name the dimensions 91 | wedim = 'west_east' # X dimension name 92 | sndim = 'south_north' # Y-dimension name 93 | timedim = 'Time' # Time dimension name 94 | soildim = 'soil_layers_stag' # Soil layer dimension name 95 | keepDims = [timedim, 'month', sndim, wedim, soildim] # Dimensions to transfer from GEOGRID to WRFINPUT 96 | 97 | # Alter the names of variable names from GEOGRID (key) to WRFINPUT (value) variable names 98 | mapVars = {'HGT_M': 'HGT', 99 | 'XLAT_M': 'XLAT', 100 | 'XLONG_M': 'XLONG', 101 | 'LU_INDEX': 'IVGTYP'} 102 | 103 | # Variables to keep from GEOGRID 104 | keepVars = ['XLAT_M','XLONG_M','HGT_M','LU_INDEX','MAPFAC_MX', 'MAPFAC_MY'] 105 | 106 | # (Name, units, dimensions, missing value, dtype) for all variables to add to the WRFINPUT file 107 | addVars = [('TMN', 'K', [timedim, sndim, wedim], missFloat, 'f4'), 108 | ('XLAND', '', [timedim, sndim, wedim], missInt, 'i4'), 109 | ('SEAICE', '', [timedim, sndim, wedim], missFloat, 'f4'), 110 | ('ISLTYP', '', [timedim, sndim, wedim], missInt, 'i4'), 111 | ('SHDMAX', '%', [timedim, sndim, wedim], missFloat, 'f4'), 112 | ('SHDMIN', '%', [timedim, sndim, wedim], missFloat, 'f4'), 113 | ('LAI', 'm^2/m^2', [timedim, sndim, wedim], missFloat, 'f4'), 114 | ('CANWAT', 'kg/m^2', [timedim, sndim, wedim], missFloat, 'f4'), 115 | ('SNOW', 'kg/m^2', [timedim, sndim, wedim], missFloat, 'f4'), 116 | ('TSK', 'K', [timedim, sndim, wedim], missFloat, 'f4'), 117 | ('SMOIS', 'm^3/m^3', [timedim, soildim, sndim, wedim], missFloat, 'f4'), 118 | ('TSLB', 'K', [timedim, soildim, sndim, wedim], missFloat, 'f4'), 119 | ('ZS', 'm', [timedim, soildim], missFloat, 'f4'), 120 | ('DZS', 'm', [timedim, soildim], missFloat, 'f4')] 121 | 122 | # Choose the processing method for producing the output. Options = ['netcdf4-python', 'xarray'] 123 | method = 'xarray' 124 | if method == 'netcdf4-python': 125 | import netCDF4 126 | elif method == 'xarray': 127 | import xarray as xr 128 | 129 | # Switch for fixing SoilT values of 0 over water areas 130 | fix_zero_over_water = True 131 | 132 | # --- Functions --- # 133 | def is_valid_file(parser, arg): 134 | # https://stackoverflow.com/questions/11540854/file-as-command-line-argument-for-argparse-error-message-if-argument-is-not-va 135 | if not os.path.exists(arg): 136 | parser.error("The file %s does not exist!" % arg) 137 | else: 138 | return str(arg) 139 | 140 | def fill_wrfinput_ncdfpy(rootgrp_in, rootgrp_out, laimo=8): 141 | ''' 142 | This function will populate the arrays in the WRFINPUT file based on array and 143 | attribute values in the input GEOGRID file. 
144 | ''' 145 | 146 | # Pull data variables from input GEOGRID file 147 | soilT = rootgrp_in.variables['SOILTEMP'][:] # Soil temperature in degrees Kelvin 148 | hgt = rootgrp_in.variables['HGT_M'][:] # Elevation in meters on the mass grid 149 | use = rootgrp_in.variables['LU_INDEX'][:] # Landcover class 150 | soil_top_cat = rootgrp_in.variables['SOILCTOP'][:] # Soil texture class of the top soil layer 151 | veg = rootgrp_in.variables['GREENFRAC'][:] * 100.0 # Green fraction as a percentage (0-100) 152 | 153 | # Define variables using global attributes in GEOGRID file 154 | ncatts = {key:val for key,val in rootgrp_out.__dict__.items()} 155 | iswater = ncatts.get('ISWATER') # GEOGRID global attribute describing the LU_INDEX value assigned to water 156 | isoilwater = ncatts.get('ISOILWATER') # GEOGRID global describing the SOIL class assigned to water 157 | 158 | # SOILTEMP will show 0 value over water. This can cause issues when varying land cover fields 159 | # from default. Setting to mean non-zero values for now to have something reasonable. 160 | if fix_zero_over_water: 161 | soilT_mask = soilT<100 # Create a mask of 'invalid' soil temperature values. All values <100K on SOILTEMP grid 162 | soilT_mask_Mean = soilT[~soilT_mask].mean() # Calculate the non-NAN soil temperature mean 163 | soilT[soilT_mask] = soilT_mask_Mean # Replace masked values with the mean of the unmasked values 164 | if soilT_mask.sum()>0: 165 | print(' Replaced {0} values in TMN with mean SOILTEMPT value ({1}).'.format(soilT_mask.sum(), soilT_mask_Mean)) 166 | 167 | # TMN topographic adjustment 168 | print(' Performing topographic soil temperature adjustment.') 169 | tmn = soilT - 0.0065 * hgt # Topographic soil temperature adjustment 170 | del soilT, hgt, soilT_mask, soilT_mask_Mean 171 | 172 | # Create land/water mask from LU_INDEX in input GEOGRID file 173 | msk = use.copy() # Land use (LU_INDEX from GEOGRID) 174 | msk[numpy.logical_and(msk>=0, msk!=iswater)] = 1 # Set non-water cells to 1 175 | msk[msk==iswater] = 2 # Set all other cells to 2 176 | 177 | # Find the dominant SOILCTOP value. 
If LU_INDEX is water, this will be water,
178 | # otherwise it will be the dominant fractional non-water SOILCTOP category
179 | a = numpy.ma.array(soil_top_cat[0], mask=False) # Build a masked array in order to mask the water category
180 | a.mask[isoilwater-1] = True # Use a mask to mask the index of the water category
181 | dominant_value = numpy.amax(a, axis=0).data # Dominant soil category fraction (unmasked)
182 | dominant_index = numpy.argmax(a, axis=0)+1 # Dominant soil category index (1-based index)
183 | dominant_index[numpy.logical_and(dominant_value < 0.01, msk[0]==1)] = dom_lc_fill # Set any land pixel values with tiny dominant fractions to a fill value
184 | if numpy.logical_and(dominant_value < 0.01, msk[0]==1).sum() > 0:
185 | print(' Replaced {0} values in ISLTYP with {1} because no dominant land class could be determined'.format(numpy.logical_and(dominant_value < 0.01, msk[0]==1).sum(), dom_lc_fill))
186 | dominant_index[msk[0]==2] = isoilwater # Set any values with landmask == 2 (water) to water
187 | del a, dominant_value, soil_top_cat
188 |
189 | # Set soils to the dominant type (derived above) and handle water/land soiltype mismatches
190 | soi = dominant_index[numpy.newaxis] # Add an axis to make this 3D (time, south_north, west_east)
191 | soi[use==iswater] = isoilwater # Make all water LU_INDEX cells into water soiltypes
192 | if numpy.logical_and(use!=iswater, soi==isoilwater).sum() > 0:
193 | print(' Replaced {0} values in ISLTYP with {1} because of a land landcover type and water soil class'.format(numpy.logical_and(use!=iswater, soi==isoilwater).sum(), fillsoiltyp))
194 | soi[numpy.logical_and(use!=iswater, soi==isoilwater)] = fillsoiltyp # If the pixel LU_INDEX is a land type but SOILCTOP is the water class, assign the fill (land) soil type
195 | del dominant_index, use
196 |
197 | # Soil moisture SMOIS 3D array
198 | smoisArr = numpy.array([0.20, 0.21, 0.25, 0.27]) # Constant soil moisture with increasing depth by vertical level
199 | smois = smoisArr[:, None, None] * numpy.ones(msk.shape) # Set the soil moisture (SMOIS) array across entire domain by vertical level
200 |
201 | # TSLB 3D array
202 | tslbArr = numpy.array([285.0, 283.0, 279.0, 277.0]) # Constant tslb with increasing depth by vertical level
203 | tslb = tslbArr[:, None, None] * numpy.ones(msk.shape) # Set the TSLB array across entire domain by vertical level. tslb = numpy.vstack([msk]*4)
204 |
205 | # Calculate the depth of the center of each soil layer based on the
206 | # layer thicknesses provided in the header.
Default is zs = [0.05, 0.25, 0.7, 1.5]
207 | zs = [item/2 + sum(dzs[:num]) for num,item in enumerate(dzs)] # Each center depth is half the layer thickness + sum of thicknesses of all levels above
208 |
209 | # Populate output WRFINPUT file variable arrays
210 | rootgrp_out.variables['TMN'][:] = tmn # Elevation adjusted deep soil temperature
211 | rootgrp_out.variables['XLAND'][:] = msk # Landmask (1=land, 2=water) from LU_INDEX
212 | rootgrp_out.variables['SEAICE'][:] = numpy.zeros(msk.shape) # zeros
213 | rootgrp_out.variables['ISLTYP'][:] = soi # Dominant soil type
214 | rootgrp_out.variables['SHDMAX'][:] = veg.max(axis=1) # Maximum GREENFRAC over the monthly time dimension
215 | rootgrp_out.variables['SHDMIN'][:] = veg.min(axis=1) # Minimum GREENFRAC over the monthly time dimension
216 | rootgrp_out.variables['LAI'][:] = rootgrp_in.variables['LAI12M'][:,laimo-1] # Leaf area index for the user-specified month
217 | rootgrp_out.variables['CANWAT'][:] = numpy.zeros(msk.shape) # zeros
218 | rootgrp_out.variables['SNOW'][:] = numpy.zeros(msk.shape) # zeros
219 | rootgrp_out.variables['TSK'][:] = numpy.zeros(msk.shape) + 290.0 # Constant value
220 | rootgrp_out.variables['SMOIS'][:] = smois[numpy.newaxis] # Add an axis to make this 4D (time, soil_layers_stag, south_north, west_east)
221 | rootgrp_out.variables['TSLB'][:] = tslb[numpy.newaxis] # Add an axis to make this 4D (time, soil_layers_stag, south_north, west_east)
222 | rootgrp_out.variables['ZS'][:] = numpy.array(zs)[numpy.newaxis] # Depths of the center of each soil layer
223 | rootgrp_out.variables['DZS'][:] = numpy.array(dzs)[numpy.newaxis] # Thickness of each soil layer
224 | del msk, veg, ncatts, iswater, isoilwater, soi, smois, smoisArr, tslb, tslbArr, tmn, zs
225 | return rootgrp_in, rootgrp_out
226 |
227 | def main_wrfinput_ncdfpy(geoFile, wrfinFile, lai=8, outNCType='NETCDF4'):
228 | '''
229 | This function is designed to build the wrfinput file using only the
230 | netCDF4-python library.
231 | ''' 232 | tic1 = time.time() 233 | 234 | # --- Create initial file --- # 235 | print(' Creating wrfinput file from geogrid file.') 236 | print(' Input geogrid file: {0}'.format(geoFile)) 237 | print(' Output wrfinput file: {0}'.format(wrfinFile)) 238 | print(' Month selected (1=Januaray, 12=December): {0}'.format(lai)) 239 | rootgrp_in = netCDF4.Dataset(geoFile, 'r') # Read object on input GEOGRID file 240 | rootgrp_out = netCDF4.Dataset(wrfinFile, 'w', format=outNCType) # Write object to create output WRFINPUT file 241 | 242 | # Copy dimensions from GEOGRID file 243 | for dimname, dim in rootgrp_in.dimensions.items(): 244 | if dimname in keepDims: 245 | rootgrp_out.createDimension(dimname, len(dim)) # Copy dimensions from the GEOGRID file 246 | soildimension = rootgrp_out.createDimension(soildim, nsoil) # Add soil_layers_stag dimension 247 | 248 | # Populate initial file with variables to keep from the input GEOGRID file 249 | for varname, ncvar in rootgrp_in.variables.items(): 250 | if varname in keepVars: 251 | varAtts = {key:val for key,val in ncvar.__dict__.items()} 252 | varDims = tuple(varDim for varDim in ncvar.dimensions) 253 | if varname in mapVars: 254 | varname = mapVars[varname] # Alter the variable name 255 | var = rootgrp_out.createVariable(varname, ncvar.dtype, varDims) 256 | var.setncatts(varAtts) # Copy variable attributes from GEOGRID file 257 | 258 | # Define new variables based on the addVars list 259 | for (varname, units, varDims, missing_value, dtype) in addVars: 260 | var = rootgrp_out.createVariable(varname, dtype, varDims) 261 | var.setncatts({'units':units, 'missing_value':missing_value}) 262 | 263 | # Global Attributes - copy all from GEOGRID file to WRFINPUT 264 | ncatts = {key:val for key,val in rootgrp_in.__dict__.items()} 265 | ncatts['Source_Software'] = 'WRF-Hydro {0} script (Python).'.format(sys.argv[0]) 266 | ncatts['creation_time'] = 'Created {0}'.format(time.ctime()) 267 | rootgrp_out.setncatts(ncatts) 268 | 269 | # Add variable values last (makes the script run faster). These are exact copies of existing variables 270 | for varname, ncvar in rootgrp_in.variables.items(): 271 | if varname in keepVars: 272 | if varname in mapVars: 273 | varname = mapVars[varname] 274 | var = rootgrp_out.variables[varname] 275 | var[:] = ncvar[:] # Copy the variable data into the newly created variable 276 | 277 | # Process and populate variables 278 | rootgrp_in, rootgrp_out = fill_wrfinput_ncdfpy(rootgrp_in, rootgrp_out, laimo=lai) 279 | 280 | # Close and return 281 | rootgrp_in.close() 282 | rootgrp_out.close() 283 | return 284 | 285 | def fill_wrfinput_xarray(ds_in, laimo=8): 286 | ''' 287 | This function will populate the arrays in the WRFINPUT file based on array and 288 | attribute values in the input GEOGRID file. 289 | ''' 290 | 291 | # Define variables using global attributes in GEOGRID file 292 | iswater = ds_in.attrs.get('ISWATER') # GEOGRID global attribute describing the LU_INDEX value assigned to water 293 | isoilwater = ds_in.attrs.get('ISOILWATER') # GEOGRID global describing the SOIL class assigned to water 294 | 295 | # SOILTEMP will show 0 value over water. This can cause issues when varying land cover fields 296 | # from default. Setting to mean non-zero values for now to have something reasonable. 
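# A minimal illustration of the water fix below, using hypothetical values and kept entirely in
# comments so the function itself is unchanged:
#   soilT = numpy.array([[0., 284., 286.]])    # 0 K over a water cell, ~285 K over land
#   mask = soilT < 100                         # -> [[ True, False, False]]
#   soilT[mask] = soilT[~mask].mean()          # -> [[285., 284., 286.]]
# The lapse-rate adjustment (tmn = soilT - 0.0065 * hgt) is then applied to the filled field, so
# TMN never inherits the 0 K fill value over water.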
297 | hgt = ds_in['HGT'].data # Elevation in meters on the mass grid
298 | if fix_zero_over_water:
299 | soilT = ds_in['SOILTEMP'].data.copy() # Soil temperature in degrees Kelvin
300 | soilT_mask = soilT<100 # Create a mask of 'invalid' soil temperature values. All values <100K on SOILTEMP grid
301 | soilT_mask_Mean = soilT[~soilT_mask].mean() # Calculate the mean of the valid (unmasked) soil temperatures
302 | soilT[soilT_mask] = soilT_mask_Mean # Replace masked values with the mean of the unmasked values
303 | if soilT_mask.sum()>0:
304 | print(' Replaced {0} values in TMN with mean SOILTEMP value ({1}).'.format(soilT_mask.sum(), soilT_mask_Mean))
305 |
306 | # TMN topographic adjustment
307 | print(' Performing topographic soil temperature adjustment.')
308 | tmn = soilT - 0.0065 * hgt # Topographic soil temperature adjustment
309 | del soilT, hgt, soilT_mask, soilT_mask_Mean
310 |
311 | # Create land/water mask from LU_INDEX in input GEOGRID file
312 | use = ds_in['IVGTYP'].data # Landcover class
313 | msk = use.copy() # Land use (LU_INDEX from GEOGRID)
314 | msk[numpy.logical_and(msk>=0, msk!=iswater)] = 1 # Set non-water cells to 1
315 | msk[msk==iswater] = 2 # Set water cells to 2
316 |
317 | # Find the dominant SOILCTOP value. If LU_INDEX is water, this will be water,
318 | # otherwise it will be the dominant fractional non-water SOILCTOP category
319 | soil_top_cat = ds_in['SOILCTOP'].data # Soil texture class of the top soil layer
320 | a = numpy.ma.array(soil_top_cat[0], mask=False) # Build a masked array in order to mask the water category
321 | a.mask[isoilwater-1] = True # Use a mask to mask the index of the water category
322 | dominant_value = numpy.amax(a, axis=0).data # Dominant soil category fraction (unmasked)
323 | dominant_index = numpy.argmax(a, axis=0)+1 # Dominant soil category index (1-based index)
324 | dominant_index[numpy.logical_and(dominant_value < 0.01, msk[0]==1)] = dom_lc_fill # Set any land pixel values with tiny dominant fractions to a fill value
325 | if numpy.logical_and(dominant_value < 0.01, msk[0]==1).sum() > 0:
326 | print(' Replaced {0} values in ISLTYP with {1} because no dominant land class could be determined'.format(numpy.logical_and(dominant_value < 0.01, msk[0]==1).sum(), dom_lc_fill))
327 | dominant_index[msk[0]==2] = isoilwater # Set any values with landmask == 2 (water) to water
328 | del a, dominant_value, soil_top_cat
329 |
330 | # Set soils to the dominant type (derived above) and handle water/land soiltype mismatches
331 | soi = dominant_index[numpy.newaxis] # Add an axis to make this 3D (time, south_north, west_east)
332 | soi[use==iswater] = isoilwater # Make all water LU_INDEX cells into water soiltypes
333 | if numpy.logical_and(use!=iswater, soi==isoilwater).sum() > 0:
334 | print(' Replaced {0} values in ISLTYP with {1} because of a land landcover type and water soil class'.format(numpy.logical_and(use!=iswater, soi==isoilwater).sum(), fillsoiltyp))
335 | soi[numpy.logical_and(use!=iswater, soi==isoilwater)] = fillsoiltyp # If the pixel LU_INDEX is a land type but SOILCTOP is the water class, assign the fill (land) soil type
336 | del dominant_index, use
337 |
338 | # Soil moisture SMOIS 3D array
339 | smoisArr = numpy.array([0.20, 0.21, 0.25, 0.27]) # Constant soil moisture with increasing depth by vertical level
340 | smois = smoisArr[:, None, None] * numpy.ones(msk.shape) # Set the soil moisture (SMOIS) array across entire domain by vertical level
341 |
342 | # TSLB 3D array
343 | tslbArr = numpy.array([285.0, 283.0, 279.0, 277.0]) # Constant tslb
with increasing depth by vertical level 344 | tslb = tslbArr[:, None, None] * numpy.ones(msk.shape) # Set the TSLB array across entire domain by vertical level. tslb = numpy.vstack([msk]*4) 345 | 346 | # Calculate the depths of the center depth of each soil layer based on the 347 | # layer thicknesses provided in the header. Default is zs = [0.05, 0.25, 0.7, 1.5] 348 | zs = [item/2 + sum(dzs[:num]) for num,item in enumerate(dzs)] # Each center depth is half the layer thickness + sum of thicknesses of all levels above 349 | 350 | veg = ds_in['GREENFRAC'].data * 100.0 # Green fraction as a percentage (0-100) 351 | 352 | # Populate output WRFINPUT file variable arrays 353 | ds_in.variables['TMN'][:] = tmn # Elevation adjusted deep soil temperature 354 | ds_in.variables['XLAND'][:] = msk # Landmask (1=land, 2=water) from LU_INDEX 355 | ds_in.variables['SEAICE'][:] = numpy.zeros(msk.shape) # zeros 356 | ds_in.variables['ISLTYP'][:] = soi # Dominant soil type 357 | ds_in.variables['SHDMAX'][:] = veg.max(axis=1) # Maximum GREENFRAC over time dimesion 358 | ds_in.variables['SHDMIN'][:] = veg.min(axis=1) # Minimum GREENFRAC over time dimesion 359 | ds_in.variables['LAI'][:] = ds_in.variables['LAI12M'][:,laimo-1] # Leaf area index for the user-specified month 360 | ds_in.variables['CANWAT'][:] = numpy.zeros(msk.shape) # zeros 361 | ds_in.variables['SNOW'][:] = numpy.zeros(msk.shape) # zeros 362 | ds_in.variables['TSK'][:] = numpy.zeros(msk.shape) + 290.0 # Constant value 363 | ds_in.variables['SMOIS'][:] = smois[numpy.newaxis] # Add an axis to make this 4D (time, soil_layer_stag, south_north, west_east) 364 | ds_in.variables['TSLB'][:] = tslb[numpy.newaxis] # Add an axis to make this 4D (time, soil_layer_stag, south_north, west_east) 365 | ds_in.variables['ZS'][:] = numpy.array(zs)[numpy.newaxis] # Depths of the center of each soil layer 366 | ds_in.variables['DZS'][:] = numpy.array(dzs)[numpy.newaxis] # Thickness of each soil layer 367 | del msk, veg, iswater, isoilwater, soi, smois, smoisArr, tslb, tslbArr, tmn, zs 368 | return ds_in 369 | 370 | def main_wrfinput_xarray(geoFile, wrfinFile, lai=8, outNCType='NETCDF4'): 371 | ''' 372 | This function is designed to build the wrfinput file using the xarray library. 373 | ''' 374 | tic1 = time.time() 375 | 376 | # --- Create initial file --- # 377 | print(' Creating wrfinput file from geogrid file.') 378 | print(' Input geogrid file: {0}'.format(geoFile)) 379 | print(' Output wrfinput file: {0}'.format(wrfinFile)) 380 | print(' Month selected (1=Januaray, 12=December): {0}'.format(lai)) 381 | 382 | # Open input files for reading. 
Lazy loading 383 | ncDS = xr.open_dataset(geoFile) # , decode_cf=False 384 | 385 | # Rename variables 386 | ncDS = ncDS.rename(mapVars) 387 | 388 | # Add new variables based on the addVars list 389 | dims = dict(ncDS.dims) 390 | dims.update({soildim:nsoil}) 391 | newVars = [] 392 | for (varname, units, varDims, missing_value, dtype) in addVars: 393 | da = xr.DataArray(data=numpy.empty(tuple([dims[dim] for dim in varDims]), dtype=dtype), 394 | dims=varDims, 395 | attrs={'units':units, 'missing_value':missing_value}) 396 | ncDS[varname] = da 397 | newVars.append(varname) 398 | 399 | # Process and populate variables 400 | ncDS = fill_wrfinput_xarray(ncDS, laimo=lai) 401 | 402 | # Drop dimensions 403 | dropDims = [item for item in ncDS.dims if item not in keepDims] 404 | ncDS = ncDS.drop_dims(dropDims) 405 | 406 | # Drop variables first 407 | keepVars2 = [mapVars.get(item,item) for item in keepVars] + newVars 408 | dropVars = [item for item in ncDS.variables if item not in keepVars2] 409 | ncDS = ncDS.drop(dropVars) 410 | 411 | # Add global attributes 412 | ncDS.attrs['Source_Software'] = 'WRF-Hydro {0} script (Python).'.format(sys.argv[0]) 413 | ncDS.attrs['creation_time'] = 'Created {0}'.format(time.ctime()) 414 | 415 | # Output file to disk 416 | #encoding = {varname:ncDS[varname].encoding for varname in list(ncDS.variables.keys())} 417 | encoding = {varname:{'_FillValue':None} for varname in list(ncDS.variables.keys())} 418 | ncDS.to_netcdf(wrfinFile, mode='w', format=outNCType, encoding=encoding) 419 | ncDS.close() 420 | del encoding, ncDS 421 | return 422 | 423 | # --- End Functions --- # 424 | 425 | if __name__ == '__main__': 426 | print('Script initiated at {0}'.format(time.ctime())) 427 | tic = time.time() 428 | 429 | parser = ArgumentParser(description=descText, add_help=True) 430 | parser.add_argument("-i", 431 | dest="in_Geogrid", 432 | type=lambda x: is_valid_file(parser, x), 433 | required=True, 434 | help="Path to WPS geogrid (geo_em.d0*.nc) file [REQUIRED]") 435 | parser.add_argument("-m", 436 | dest="LAI_month", 437 | type=int, 438 | default=8, 439 | required=True, 440 | help="LAI month for initialization [REQUIRED]. 1=January, 12=December.") 441 | parser.add_argument("-o", 442 | dest="out_wrfinput", 443 | default='./wrfinput.nc', 444 | required=True, 445 | help='Output "wrfinput" file.') 446 | 447 | # If no arguments are supplied, print help message 448 | if len(sys.argv) == 1: 449 | parser.print_help(sys.stderr) 450 | sys.exit(1) 451 | args = parser.parse_args() 452 | 453 | if args.LAI_month not in range(1,13): 454 | print('LAI month provided [{0}] is not between 1 and 12. 
Exiting...'.format(args.LAI_month)) 455 | raise SystemExit 456 | 457 | # Resolve relative paths to absolute paths 458 | args.in_Geogrid = os.path.abspath(args.in_Geogrid) 459 | args.out_wrfinput = os.path.abspath(args.out_wrfinput) 460 | 461 | if method == 'netcdf4-python': 462 | main_wrfinput_ncdfpy(args.in_Geogrid, args.out_wrfinput, lai=args.LAI_month, outNCType=outNCType) 463 | elif method == 'xarray': 464 | main_wrfinput_xarray(args.in_Geogrid, args.out_wrfinput, lai=args.LAI_month, outNCType=outNCType) 465 | print(' Process completed in {0:3.2f} seconds'.format(time.time()-tic)) -------------------------------------------------------------------------------- /wrfhydro_gis/Examine_Outputs_of_GIS_Preprocessor.py: -------------------------------------------------------------------------------- 1 | # *=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*= 2 | # Copyright UCAR (c) 2019 3 | # University Corporation for Atmospheric Research(UCAR) 4 | # National Center for Atmospheric Research(NCAR) 5 | # Research Applications Laboratory(RAL) 6 | # P.O.Box 3000, Boulder, Colorado, 80307-3000, USA 7 | # 24/09/2019 8 | # 9 | # Name: module1 10 | # Purpose: 11 | # Author: $ Kevin Sampson 12 | # Created: 24/09/2019 13 | # Licence: 14 | # *=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*= 15 | 16 | descText = "This tool takes the output zip file from the ProcessGeogrid script and creates a raster " \ 17 | "from each output NetCDF file. The Input should be a .zip file that was created using the" \ 18 | "WRF Hydro pre-processing tools. The tool will create the folder which will contain the" \ 19 | "results (out_folder), if that folder does not already exist. " 20 | 21 | # Import Python Core Modules 22 | import os 23 | import sys 24 | import time 25 | import shutil 26 | from packaging.version import parse as LooseVersion # To avoid deprecation warnings 27 | 28 | # Import additional modules 29 | import netCDF4 30 | from argparse import ArgumentParser 31 | from pathlib import Path 32 | import osgeo 33 | 34 | try: 35 | if LooseVersion(osgeo.__version__) > LooseVersion('3.0.0'): 36 | from osgeo import gdal 37 | from osgeo import osr 38 | from osgeo.gdal_array import * 39 | else: 40 | import gdal 41 | import osr 42 | from gdal_array import * 43 | except: 44 | sys.exit('ERROR: cannot find GDAL/OGR modules') 45 | #from gdalnumeric import * # Assists in using BandWriteArray, BandReadAsArray, and CopyDatasetInfo 46 | 47 | # Import function library into namespace. Must exist in same directory as this script. 48 | from wrfhydro_functions import (LK_nc, RT_nc, GW_nc, LDASFile, crsVar, 49 | numpy_to_Raster, ZipCompat) 50 | 51 | # Global Variables 52 | 53 | # Script Options 54 | RasterDriver = 'GTiff' # Driver for output raster format 55 | suffix = '.tif' # File extension to use for output rasters 56 | skipfiles = [] # Files that should not be converted or written to output directory 57 | 58 | # --- Functions --- # 59 | def examine_outputs(out_folder, dellist=[], skipfiles=[]): 60 | ''' 61 | Provide a directory, ideally the unzipped directory of a routing stack produced 62 | by the WRF-Hydro GIS Pre-processor. Files will be examined and derivatives 63 | made from 2D netCDF files. Some files will be delted from the input directory. 
64 | ''' 65 | tic1 = time.time() 66 | dellist = [] # Initialize list to store files to delete from output directory 67 | 68 | # Iterate through unzipped files and copy to output directory as necessary 69 | for dirpath, dirnames, filenames in os.walk(out_folder): 70 | for file in filenames: 71 | infile = os.path.join(dirpath, file) 72 | 73 | # Copy skipped files over to new directory 74 | if file in skipfiles: 75 | dellist.append(infile) 76 | print(' File NOT Copied: {0}'.format(file)) 77 | del file, infile 78 | continue 79 | 80 | # Trap to eliminate Parameter tables in NC format from this extraction 81 | if file in [LK_nc, RT_nc, GW_nc, LDASFile]: 82 | print(' File Copied: {0}'.format(file)) 83 | del file, infile 84 | continue 85 | 86 | if file.endswith('.nc'): 87 | 88 | # Establish an object for reading the input NetCDF file 89 | rootgrp = netCDF4.Dataset(infile, 'r') 90 | if LooseVersion(netCDF4.__version__) > LooseVersion('1.4.0'): 91 | rootgrp.set_auto_mask(False) # Change masked arrays to old default (numpy arrays always returned) 92 | 93 | # Old method which will crash if the CRS variable is not present 94 | ## if 'esri_pe_string' in rootgrp.variables[crsVar].__dict__: 95 | ## PE_string = rootgrp.variables[crsVar].esri_pe_string 96 | ## elif 'spatial_ref' in rootgrp.variables[crsVar].__dict__: 97 | ## PE_string = rootgrp.variables[crsVar].spatial_ref 98 | ## GT = rootgrp.variables[crsVar].GeoTransform.split(" ")[0:6] 99 | 100 | # Added 4/14/2021 to allow for the absence of a coordinate system variable. 101 | if crsVar in rootgrp.variables: 102 | crsNCVar = rootgrp.variables[crsVar] 103 | if 'esri_pe_string' in crsNCVar.__dict__: 104 | #PE_string = crsNCVar.esri_pe_string 105 | PE_string = crsNCVar.esri_pe_string.replace("'", '"') 106 | elif 'spatial_ref' in crsNCVar.__dict__: 107 | PE_string = crsNCVar.spatial_ref 108 | if 'GeoTransform' in crsNCVar.__dict__: 109 | GT = crsNCVar.GeoTransform.split(" ")[0:6] 110 | else: 111 | print(' No GeoTransform attribute found. 
Setting to default.') 112 | GT = [0, 1, 0, 0, 0, -1] 113 | else: 114 | # Create dummy variables to allow the script to continue 115 | PE_string = '' 116 | GT = [0, 1, 0, 0, 0, -1] 117 | 118 | GT = tuple(float(item) for item in GT) 119 | print(' GeoTransform: {0}'.format(GT)) 120 | print(' DX: {0}'.format(GT[1])) 121 | print(' DY: {0}'.format(-abs(GT[5]))) 122 | 123 | proj = osr.SpatialReference() # Initiate OSR spatial reference object 124 | proj.ImportFromWkt(PE_string) 125 | print(' PROJ.4 string: {0}'.format(proj.ExportToProj4())) 126 | for variablename, ncvar in rootgrp.variables.items(): 127 | if ncvar.dimensions==('y', 'x'): 128 | OutRaster = numpy_to_Raster(ncvar[:].copy(), proj, GT[1], -abs(GT[5]), GT[0], GT[3]) 129 | 130 | # Save to disk 131 | OutGTiff = os.path.join(out_folder, variablename+suffix)# Output raster 132 | 133 | try: 134 | target_ds = gdal.GetDriverByName(RasterDriver).CreateCopy(OutGTiff, OutRaster) 135 | target_ds = OutRaster = None 136 | del target_ds 137 | except: 138 | pass 139 | del OutRaster 140 | 141 | print(' File Created: {0}'.format(OutGTiff)) 142 | del OutGTiff, variablename 143 | rootgrp.close() 144 | dellist.append(infile) 145 | del file, infile, rootgrp, ncvar 146 | continue 147 | 148 | if file.split('.')[-1] in ['shp', 'shx', 'xml', 'sbx', 'sbn', 'prj', 'dbf']: 149 | print(' File Copied: {0}'.format(str(file))) 150 | del file, infile 151 | continue 152 | 153 | # These file formats are legacy output files, but allow the tool to work with older routing stacks 154 | if file.endswith('.csv') or file.endswith('.TBL') or file.endswith('.txt') or file.endswith('.prj'): 155 | print(' File Copied: {0}'.format(file)) 156 | del file, infile 157 | continue 158 | 159 | dellist.append(infile) 160 | del file, infile 161 | continue 162 | del dirpath, dirnames, filenames 163 | 164 | # Remove each file from the temporary extraction directory 165 | for infile in dellist: 166 | os.remove(infile) 167 | return 168 | 169 | # --- End Functions --- # 170 | 171 | 172 | # Main Codeblock 173 | if __name__ == '__main__': 174 | tic = time.time() 175 | print('Script initiated at {0}'.format(time.ctime())) 176 | 177 | parser = ArgumentParser(description=descText, add_help=True) 178 | parser.add_argument("-i", 179 | dest="in_zip", 180 | default='', 181 | required=True, 182 | help="Path to WRF Hydro routing grids zip file.") 183 | parser.add_argument("-o", 184 | dest="out_folder", 185 | default='', 186 | required=True, 187 | help="Path to output folder.") 188 | 189 | # If no arguments are supplied, print help message 190 | if len(sys.argv) == 1: 191 | parser.print_help(sys.stderr) 192 | sys.exit(1) 193 | args = parser.parse_args() 194 | all_defaults = {key: parser.get_default(key) for key in vars(args)} 195 | 196 | if args.in_zip == all_defaults["in_zip"]: 197 | print('Using input zip location of: {0}'.format(all_defaults["in_zip"])) 198 | 199 | if args.out_folder == all_defaults["out_folder"]: 200 | print('Using output location of: {0}'.format(all_defaults["out_folder"])) 201 | 202 | # Create output directory for temporary outputs 203 | if os.path.exists(args.out_folder): 204 | print('Requested output directory already exists. 
\nPlease specify a non-existant directory as output.') 205 | raise SystemExit 206 | else: 207 | os.makedirs(args.out_folder) 208 | 209 | # Unzip to a known location (make sure no other nc files live here) 210 | ZipCompat(args.in_zip).extractall(args.out_folder) 211 | examine_outputs(args.out_folder, skipfiles=skipfiles) 212 | print('Extraction of WRF routing grids completed.') 213 | print('Process complted in {0:3.2f} seconds.'.format(time.time()-tic)) 214 | -------------------------------------------------------------------------------- /wrfhydro_gis/Forecast_Point_Tools.py: -------------------------------------------------------------------------------- 1 | # *=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*= 2 | # Copyright UCAR (c) 2019 3 | # University Corporation for Atmospheric Research(UCAR) 4 | # National Center for Atmospheric Research(NCAR) 5 | # Research Applications Laboratory(RAL) 6 | # P.O.Box 3000, Boulder, Colorado, 80307-3000, USA 7 | # 24/09/2019 8 | # 9 | # Name: module1 10 | # Purpose: 11 | # Author: $ Kevin Sampson 12 | # Created: 24/09/2019 13 | # Licence: 14 | # *=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*= 15 | 16 | descText = "Script to create forecast points." 17 | 18 | # Import Modules 19 | 20 | # Import Python Core Modules 21 | import os 22 | import sys 23 | import time 24 | import numpy 25 | 26 | # Import additional modules 27 | from argparse import ArgumentParser 28 | from pathlib import Path 29 | from packaging.version import parse as LooseVersion # To avoid deprecation warnings 30 | import osgeo 31 | 32 | try: 33 | if LooseVersion(osgeo.__version__) > LooseVersion('3.0.0'): 34 | from osgeo import ogr 35 | from osgeo import osr 36 | else: 37 | import ogr 38 | import osr 39 | except: 40 | sys.exit('ERROR: cannot find GDAL/OGR modules') 41 | 42 | # Import function library into namespace. Must exist in same directory as this script. 
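# Hypothetical command-line examples (the file names below are placeholders, not files shipped
# with this repository):
#   python Forecast_Point_Tools.py -i gauges.csv -s forecast_points.shp -o ./output
# The parser defined under the main code block marks -i, -s, and -o as required. The script then
# builds a point shapefile from the CSV when the CSV path is populated, or writes a CSV (adding
# FID, LAT, and LON fields as needed) from the shapefile otherwise.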
43 | from wrfhydro_functions import CSV_to_SHP # Function script packaged with this toolbox 44 | 45 | # Global Variables 46 | 47 | # Coordinate system of all latitude/longitude coordinates: WGS84, EPSG:4326 48 | wgs84_proj4 = '+proj=longlat +datum=WGS84 +no_defs' 49 | 50 | Driver = 'ESRI Shapefile' 51 | 52 | # Script options 53 | # CSV_to_shape = False # Switch for creating a point shapefile from a forecast point CSV file 54 | # SHP_to_CSV = True # Switch for creating a CSV file from a point shapefile 55 | 56 | # Dictionary to map OGR data types to numpy dtypes - many of these are just a guess, 57 | # with numpy.object for List, string, and binary objects 58 | OGRTypes = {ogr.OFTBinary: numpy.object, 59 | ogr.OFTDate: numpy.datetime64, 60 | ogr.OFTDateTime: numpy.datetime64, 61 | ogr.OFTInteger: numpy.int, 62 | ogr.OFTInteger64: numpy.int64, 63 | ogr.OFTInteger64List: numpy.object, 64 | ogr.OFTIntegerList: numpy.object, 65 | ogr.OFTReal: numpy.float64, 66 | ogr.OFTRealList: numpy.object, 67 | ogr.OFTString: numpy.str, 68 | ogr.OFTStringList: numpy.object, 69 | ogr.OFTTime: numpy.datetime64, 70 | ogr.OFTWideString: numpy.object, 71 | ogr.OFTWideStringList: numpy.object} 72 | 73 | # Main Codeblock 74 | if __name__ == '__main__': 75 | tic = time.time() 76 | print('Script initiated at {0}'.format(time.ctime())) 77 | 78 | # Setup the input arguments 79 | parser = ArgumentParser(description=descText, add_help=True) 80 | parser.add_argument("-i", 81 | dest="in_csv", 82 | default='', 83 | required=True, 84 | help="Path to gauges csv.") 85 | parser.add_argument("-s", 86 | dest="in_shp", 87 | default='', 88 | required=True, 89 | help="Path to forecast points shapefile.") 90 | parser.add_argument("-o", 91 | dest="out_dir", 92 | default='', 93 | required=True, 94 | help="Output directory.") 95 | 96 | # If no arguments are supplied, print help message 97 | if len(sys.argv) == 1: 98 | parser.print_help(sys.stderr) 99 | sys.exit(1) 100 | args = parser.parse_args() 101 | all_defaults = {key: parser.get_default(key) for key in vars(args)} 102 | 103 | # if args.in_csv == all_defaults["in_csv"]: 104 | # print('Using input csv with path: {0}'.format(all_defaults["in_csv"])) 105 | # 106 | # if args.in_shp == all_defaults["in_shp"]: 107 | # print('Using default shapefile with path: {0}'.format(all_defaults["in_shp"])) 108 | # 109 | # if args.out_dir == all_defaults["out_dir"]: 110 | # print('Using output location: {0}'.format(all_defaults["out_dir"])) 111 | 112 | # Output files 113 | outCSV = os.path.join(args.out_dir, os.path.basename(args.in_shp).replace('.shp', '.csv')) 114 | outSHP = os.path.join(args.out_dir, os.path.basename(args.in_csv).replace('.csv', '.shp')) 115 | 116 | drv = ogr.GetDriverByName(Driver) 117 | if drv is None: 118 | print(' {0} driver not available.'.format(drv)) 119 | raise SystemExit 120 | 121 | if len(args.in_csv) > 1: 122 | ''' 123 | This block will create a point shapefile from an input CSV file. 124 | ''' 125 | ds = CSV_to_SHP(args.in_csv, DriverName='MEMORY') 126 | out_ds = drv.CopyDataSource(ds, outSHP) 127 | out_ds = ds = None 128 | 129 | elif len(args.in_shp) > 1: 130 | 131 | ''' 132 | This block will first use any existing fields to fill out the output 133 | CSV file from the list: ['FID', 'LAT', 'LON']. If any of these are not 134 | present, it will fill them in with values calculated from the geometry 135 | (['LAT', 'LON']) or from a 1...n numbering (['FID']). 136 | ''' 137 | 138 | data_source = drv.Open(args.in_shp, 0) # 0 means read-only. 1 means writeable. 
139 | if data_source is None: 140 | print(' data source could not be created.') 141 | raise SystemExit 142 | 143 | layer = data_source.GetLayer() 144 | featureCount = layer.GetFeatureCount() 145 | inSR = layer.GetSpatialRef() # Get spatial reference of input 146 | print('Number of features in {0}: {1}'.format(os.path.basename(args.in_shp), featureCount)) 147 | 148 | # Read shapefile fields 149 | layerDefinition = layer.GetLayerDefn() 150 | npDtypes = [] 151 | fields = [] 152 | for i in range(layerDefinition.GetFieldCount()): 153 | fieldDef = layerDefinition.GetFieldDefn(i) 154 | fieldName = fieldDef.GetName() 155 | fieldType = fieldDef.GetType() 156 | npDtypes.append((fieldName, OGRTypes[fieldType])) 157 | fields.append(fieldName) 158 | 159 | # Read the input CSV file 160 | append_dtypes = [('LAT', numpy.float64), ('LON', numpy.float64), ('FID', numpy.int)] 161 | addDtypes = [item for item in append_dtypes if item[0] not in [item2[0] for item2 in npDtypes]] # Eliminate redundancy 162 | dtypes = numpy.dtype(npDtypes + addDtypes) # Create numpy dtype object, adding in any required fields 163 | csv_arr = numpy.empty(featureCount, dtype=dtypes) 164 | 165 | # create the spatial reference for output, WGS84 166 | outSR = osr.SpatialReference() 167 | outSR.ImportFromProj4(wgs84_proj4) 168 | 169 | # Create a transformation? 170 | if outSR.IsSame(inSR): 171 | print(' No coordinate transformation necessary.') 172 | mustTransform = False 173 | else: 174 | print(' Transforming coordinates from:\n\t\t{0}\n\t to\n\t\t{1}.'.format(inSR.ExportToProj4(), outSR.ExportToProj4())) 175 | mustTransform = True 176 | transform = osr.CoordinateTransformation(inSR, outSR) 177 | 178 | # Build the numpy array using info in the fields or geometries of the shapefile 179 | for num,feature in enumerate(layer): 180 | geom = feature.GetGeometryRef().Clone() 181 | if mustTransform: 182 | geom.Transform(transform) 183 | csv_arr[fields][num] = tuple([feature.GetField(field) for field in fields]) 184 | if 'LAT' not in npDtypes: 185 | csv_arr['LAT'][num] = geom.GetY(0) 186 | if 'LON' not in npDtypes: 187 | csv_arr['LON'][num] = geom.GetX(0) 188 | if 'FID' not in npDtypes: 189 | csv_arr['FID'][num] = num 190 | layer.ResetReading() 191 | 192 | numpy.savetxt(outCSV, csv_arr, fmt='%s', delimiter=',', header=','.join(csv_arr.dtype.names), comments='') 193 | 194 | elif len(args.in_shp) > 1 and len(args.in_csv) > 1: 195 | print("Please choose only a single input of either a CSV or shapefile.") 196 | sys.exit() 197 | 198 | drv = None 199 | print('Process completed in {0:3.2f} seconds.'.format(time.time()-tic)) 200 | -------------------------------------------------------------------------------- /wrfhydro_gis/Harmonize_Soils_to_LANDMASK.py: -------------------------------------------------------------------------------- 1 | # *=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*= 2 | # Copyright UCAR (c) 2021 3 | # University Corporation for Atmospheric Research(UCAR) 4 | # National Center for Atmospheric Research(NCAR) 5 | # Research Applications Laboratory(RAL) 6 | # P.O.Box 3000, Boulder, Colorado, 80307-3000, USA 7 | # 06/05/2021 8 | # 9 | # Name: create_wrfinput.py 10 | # Purpose: 11 | # Author: Kevin Sampson 12 | # Created: 06/05/2021 13 | # Licence: 14 | # 15 | # 10/20/2018: 16 | # The purpose of this script is to ensure that the dominant soil types more 17 | # closely match the landmask, such that no water soil types are present where 18 | # landcover indicates a land type. 
19 | # 20 | # Based on work by A. Dugger (NCAR) 21 | # *=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*= 22 | 23 | # Import Python Core Modules 24 | import os 25 | import sys 26 | import getopt 27 | import time 28 | import shutil 29 | from argparse import ArgumentParser 30 | #from distutils.version import LooseVersion 31 | from packaging.version import parse as LooseVersion # To avoid deprecation warnings 32 | 33 | # Import additional Python Modules 34 | import xarray as xr 35 | import numpy 36 | import osgeo 37 | 38 | try: 39 | if LooseVersion(osgeo.__version__) > LooseVersion('3.0.0'): 40 | from osgeo import gdal 41 | from osgeo import gdalconst 42 | else: 43 | import gdal 44 | import gdalconst 45 | except: 46 | sys.exit('ERROR: cannot find GDAL/OGR modules') 47 | from gdalconst import * 48 | 49 | # Import whitebox 50 | #from whitebox.WBT.whitebox_tools import WhiteboxTools 51 | from whitebox.whitebox_tools import WhiteboxTools 52 | 53 | # Import functions from the WRF-Hydro GIS Pre-processing scripts 54 | from Build_GeoTiff_From_Geogrid_File import build_geogrid_raster 55 | import wrfhydro_functions as wrfh 56 | 57 | # Module configurations 58 | sys.dont_write_bytecode = True 59 | 60 | # Screen print in case invalid parameters are given 61 | descText = '''Utility for harmonizing the SCT_DOM and SCB_DOM variables in a WRF/ 62 | WRF-Hydro geogrid file with the LANDMASK variable. This is done 63 | to ensure that no water soil types are present where the LU_INDEX 64 | or LANDMASK indicate a land type. The reason a mismatch can happen 65 | is that Geogrid.exe does not ensure these variables will be consistent. 66 | Another reason these might not be consistent is if the user substitutes 67 | their own landcover dataset into LU_INDEX. For the purposes of 68 | hydrologic simulation, these layers should be consistent when it 69 | comes to water.''' 70 | 71 | ''' 72 | To import and run these functions using python, from a custom script or console: 73 | 74 | from Harmonize_Soils_to_LANDMASK import update_geogrid_soils 75 | update_geogrid_soils(r'./geo_em.d01.nc', r'./geo_em.d01.modifiedSoils.nc') 76 | ''' 77 | 78 | ####################################################### 79 | # Global Variables - update relevant arguments below. 80 | ####################################################### 81 | 82 | # Soil type to use as a fill value in case conflicts between soil water and land cover water cells: 83 | # If the script encounters a cell that is classified as land in the land use field (LU_INDEX) 84 | # but is classified as a water soil type, it will replace the soil type with the value you 85 | # specify below. Ideally there are not very many of these, so you can simply choose the most 86 | # common soil type in your domain. Alternatively, you can set to a "bad" value (e.g., -8888) 87 | # to see how many of these conflicts there are. If you do this DO NOT RUN THE MODEL WITH THESE 88 | # BAD VALUES. Instead, fix them manually with a neighbor fill or similar fill algorithm. 89 | fillsoiltyp = 3 90 | 91 | # Soil water type value from input GEOGRID SCT_DOM and SCB_DOM variables 92 | soil_water_val = 14 93 | 94 | # Search distance for filling water cell areas over land with nearest non-water 95 | # neighbor value, in cells. (Cell width is used to determine euclidean distance 96 | search_dist = 3 97 | 98 | ####################################################### 99 | # Do not update below here. 
100 | ####################################################### 101 | 102 | # Data model for output netCDF data ['NETCDF4', 'NETCDF3_64BIT', 'NETCDF4_CLASSIC'] 103 | outNCType = 'NETCDF4' # Data model for output netCDF data 104 | 105 | # Use the GeoTiff raster driver for GDAL, which is the only format accepted by Whitebox Tools 106 | RasterDriver = 'GTiff' 107 | 108 | # Controls related to the search distance for gap-filling inland soil-wate classes 109 | distance_calc = True # Use a search distance threshold for nearest neighbor? 110 | distance = 3 # Distance in cell-widths. Uses GEOGRID DX global attribute 111 | 112 | # --- Functions --- # 113 | def is_valid_file(parser, arg): 114 | # https://stackoverflow.com/questions/11540854/file-as-command-line-argument-for-argparse-error-message-if-argument-is-not-va 115 | if not os.path.exists(arg): 116 | parser.error("The file %s does not exist!" % arg) 117 | else: 118 | return str(arg) 119 | 120 | def update_geogrid_soils(inNC, outFile, distance_calc=True, outNCType=outNCType): 121 | ''' 122 | This function will populate the arrays in the input xarray DataSet based on 123 | what is in the input GEOGRID file. 124 | 125 | This may only be needed if you supply and insert your own land-cover layer 126 | into the LU_INDEX variable in the geogrid file, which will require changes to 127 | other layers for maximum compatibilty between landcover and soil types. 128 | 129 | ''' 130 | 131 | # Create scratch directory for temporary outputs 132 | projdir = os.path.join(os.path.dirname(inNC), 'scratchdir') 133 | projdir = os.path.abspath(projdir) 134 | if os.path.exists(projdir): 135 | print('Removing existing scratch directory: {0}'.format(projdir)) 136 | shutil.rmtree(projdir) 137 | os.makedirs(projdir) 138 | 139 | # Whitebox options for running Whitebox in a full workflow 140 | wbt = WhiteboxTools() 141 | print(' Using {0}'.format(wbt.version().split('\n')[0])) 142 | wbt.work_dir = projdir # Set working directory 143 | wbt.verbose = False # Verbose output. [True, False] 144 | 145 | # Open input files for reading. Lazy loading 146 | print('Input netCDF file: {0}'.format(inNC)) 147 | ncDS = xr.open_dataset(inNC) # , decode_cf=False 148 | 149 | # Handle whether or not to modify files in place or create new output files. 150 | if inNC == outFile: 151 | print('OVERWRITING INPUT FILE') 152 | # !!! OVERWRITE exiting input file !!! 153 | # Load the entire dataset into memory. 
This allows you to overwrite the existing file 154 | outFile = inNC 155 | with xr.open_dataset(inNC) as dataset: 156 | ncDS = dataset.load() 157 | 158 | # Setup directory for temporary outputs 159 | OutGTiff_LM = os.path.join(projdir, 'LANDMASK.tif') 160 | OutGTiff_ST = os.path.join(projdir, 'SCT_DOM.tif') 161 | OutGTiff_SB = os.path.join(projdir, 'SCB_DOM.tif') 162 | 163 | # Build and then ensure the output file exists: 164 | build_geogrid_raster(inNC, 'LANDMASK', OutGTiff_LM, out_Grid_fmt=RasterDriver) 165 | assert(os.path.exists(OutGTiff_LM)) 166 | 167 | # Open the Landmask to use as a mask array 168 | LM_arr, LM_ndv= wrfh.return_raster_array(OutGTiff_LM) # Landmask array 169 | wrfh.remove_file(OutGTiff_LM) 170 | 171 | # Iterate over soil layers 172 | for soil_class_layer, inraster in zip(['SCT_DOM', 'SCB_DOM'], [OutGTiff_ST, OutGTiff_SB]): 173 | 174 | build_geogrid_raster(inNC, soil_class_layer, inraster, out_Grid_fmt=RasterDriver) 175 | assert(os.path.exists(inraster)) 176 | arr, ndv = wrfh.return_raster_array(inraster) 177 | wrfh.remove_file(inraster) 178 | 179 | # Create a copy 180 | modRaster = os.path.join(projdir, '{0}_mod.tif'.format(soil_class_layer)) 181 | target_ds = gdal.GetDriverByName(RasterDriver).CreateCopy(modRaster, gdal.Open(inraster, gdalconst.GA_ReadOnly)) 182 | target_ds = None 183 | 184 | # 1) Create a grid with holes of value 0 where SCT_DOM is water (14): 185 | ds = gdal.Open(modRaster, gdalconst.GA_Update) # Open for writing 186 | band = ds.GetRasterBand(1) 187 | arr_mod = band.ReadAsArray() 188 | arr_mod[arr_mod==soil_water_val] = 0 # Set all water soil type cells to 0 189 | band.WriteArray(arr_mod) # Write the array to the disk 190 | stats = band.GetStatistics(0,1) # Calculate statistics 191 | ds = band = stats = arr_mod = None 192 | 193 | # 2) Run the Euclidean Allocation tool to fill in the 0 gaps with nearest neighbor values. 
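# A sketch of what this step does, with hypothetical values (comments only): euclidean_allocation()
# assigns every zero (background) cell the value of its nearest non-zero cell, so a row of soil
# classes such as
#   [3, 3, 0, 0, 5]  ->  [3, 3, 3, 5, 5]
# meaning the inland water-soil cells zeroed out in step 1 inherit the closest land soil class.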
194 | EA_output = os.path.join(projdir, '{0}_EA.tif'.format(soil_class_layer)) 195 | wbt.euclidean_allocation(modRaster, EA_output) 196 | EA_arr, EA_ndv= wrfh.return_raster_array(EA_output) # Euclidean allocation array 197 | wrfh.remove_file(EA_output) 198 | 199 | # 3) If requested, create a euclidean distance raster to that we can apply a 200 | # limit to the gap-filling process 201 | if distance_calc: 202 | cell_dist = distance * float(ncDS.DX) 203 | print(' Using cell distance of {0} map units'.format(cell_dist)) 204 | 205 | ED_output = os.path.join(projdir, '{0}_ED.tif'.format(soil_class_layer)) 206 | wbt.euclidean_distance(modRaster, ED_output) 207 | ED_arr, ED_ndv = wrfh.return_raster_array(ED_output) # Euclidean distance array 208 | wrfh.remove_file(ED_output) 209 | wrfh.remove_file(modRaster) 210 | 211 | # Modify the input to create a grid which samples nearest non-water land 212 | # cell for all inland water classes on the soil category grid 213 | 214 | # Fill all gaps with the result from Euclidean Allocation 215 | arr[LM_arr==1] = EA_arr[LM_arr==1] 216 | 217 | # If asking for a distance threshold, fill all other gaps with default value 218 | if distance_calc: 219 | arr[numpy.logical_and(LM_arr==1, ED_arr>cell_dist)] = fillsoiltyp 220 | del ED_arr, ED_ndv 221 | 222 | # Set all water areas (accoridng to landmask) to the soil water value 223 | arr[LM_arr==0] = soil_water_val 224 | 225 | # Reverse the order of any arrays if necessary 226 | ind = wrfh.flip_dim(['y', 'x'], DimToFlip='y') # Assumed gdal array order returned by 'return_raster_array' is y,x 227 | arr = arr[tuple(ind)] # Index the input array 228 | 229 | # Replace what is in GOEGRID with these values 230 | ncDS[soil_class_layer][:] = arr 231 | del arr, ndv, EA_arr, EA_ndv 232 | del LM_arr, LM_ndv 233 | 234 | # Delete all temporary files 235 | shutil.rmtree(projdir) 236 | 237 | # Output file to disk 238 | encoding = {varname:ncDS[varname].encoding for varname in list(ncDS.variables.keys())} 239 | for key, val in encoding.items(): 240 | val['_FillValue'] = None 241 | 242 | ncDS.to_netcdf(outFile, mode='w', format=outNCType, encoding=encoding) 243 | ncDS.close() 244 | del encoding, ncDS 245 | print('Output netCDF file: {0}'.format(outFile)) 246 | print('Process completed in {0:3.2f} seconds.'.format(time.time()-tic)) 247 | 248 | if __name__ == '__main__': 249 | print('Script initiated at {0}'.format(time.ctime())) 250 | tic = time.time() 251 | 252 | parser = ArgumentParser(description=descText, add_help=True) 253 | parser.add_argument("-i", 254 | dest="in_Geogrid", 255 | type=lambda x: is_valid_file(parser, x), 256 | required=True, 257 | help="Path to WPS geogrid (geo_em.d0*.nc) file [REQUIRED]") 258 | parser.add_argument("-o", 259 | dest="out_geogrid", 260 | default='./geo_em_altered_soiltypes.nc', 261 | required=True, 262 | help='Output "geogrid" file.') 263 | 264 | # If no arguments are supplied, print help message 265 | if len(sys.argv) == 1: 266 | parser.print_help(sys.stderr) 267 | sys.exit(1) 268 | args = parser.parse_args() 269 | 270 | # Resolve relative paths to absolute paths 271 | args.in_Geogrid = os.path.abspath(args.in_Geogrid) 272 | args.out_geogrid = os.path.abspath(args.out_geogrid) 273 | 274 | ## in_Geogrid = r"C:\Users\ksampson\Desktop\NWM\NWM_Alaska\WPS\WPS_Output\geo_em.d02.20210419_nlcd2016_snow.nc" 275 | ## out_geogrid = r"C:\Users\ksampson\Desktop\NWM\NWM_Alaska\WPS\WPS_Output\geo_em.d02.20210419_nlcd2016_snow_modifiedSoils.nc" 276 | ## update_geogrid_soils(in_Geogrid, out_geogrid, 
outNCType=outNCType, distance_calc=True) 277 | 278 | update_geogrid_soils(args.in_Geogrid, args.out_geogrid, outNCType=outNCType, distance_calc=True) 279 | print(' Process completed in {0:3.2f} seconds'.format(time.time()-tic)) -------------------------------------------------------------------------------- /wrfhydro_gis/Testing_DEM_interpolation.py: -------------------------------------------------------------------------------- 1 | # *=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*= 2 | # Copyright UCAR (c) 2020 3 | # University Corporation for Atmospheric Research(UCAR) 4 | # National Center for Atmospheric Research(NCAR) 5 | # Research Applications Laboratory(RAL) 6 | # P.O.Box 3000, Boulder, Colorado, 80307-3000, USA 7 | # 8 | # Name: Testing_DEM_interpolation.py 9 | # Purpose: 10 | # Author: $ Kevin Sampson(ksampson) 11 | # Created: 2020 12 | # Licence: 13 | # *=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*= 14 | 15 | descText = "This tool takes a WRF/WPS Geogrid file and a higher-resolution DEM raster " +\ 16 | "and interpolates the file to a grid that is either identical or nested into " +\ 17 | "the geogrid file Mass grid (Stagger='M'). This tool may be used to test " +\ 18 | "the effect of different regridding factors and interpolation methods on the " +\ 19 | "output DEM. It may also be used to interpolate other raster files to the model " +\ 20 | "grid, but caution should be used with regard to interpolating discrete data." +\ 21 | "Default interpolation method is bilinear." 22 | 23 | # --- Import Modules --- # 24 | 25 | # Import Python core modules 26 | import sys 27 | sys.dont_write_bytecode = True 28 | import time 29 | import os 30 | import copy as cpy 31 | from packaging.version import parse as LooseVersion # To avoid deprecation warnings 32 | from argparse import ArgumentParser 33 | 34 | # Import Additional Modules 35 | import netCDF4 36 | import osgeo 37 | 38 | try: 39 | if LooseVersion(osgeo.__version__) > LooseVersion('3.0.0'): 40 | from osgeo import gdal 41 | else: 42 | import gdal 43 | except: 44 | sys.exit('ERROR: cannot find GDAL/OGR modules') 45 | 46 | # Import function library into namespace. Must exist in same directory as this script. 
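# Hypothetical command-line example (file names are placeholders):
#   python Testing_DEM_interpolation.py -i geo_em.d01.nc -d high_res_dem.tif -R 4 -o regridded_dem.tif
# Here -i is the WPS geogrid, -d the high-resolution input DEM, -R the nest (regridding) factor
# relative to the geogrid mass grid, and -o the output GeoTiff, resampled with the method set in
# the Global Variables section below (bilinear by default).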
47 | import wrfhydro_functions as wrfh # Function script packaged with this toolbox 48 | 49 | # Module options 50 | gdal.UseExceptions() # this allows GDAL to throw Python Exceptions 51 | gdal.PushErrorHandler('CPLQuietErrorHandler') 52 | 53 | # --- End Import Modules --- # 54 | 55 | # --- Global Variables --- # 56 | 57 | # Default values 58 | resampling_method = gdal.GRA_Bilinear # Default regridding method 59 | default_regridFactor = 1 # Default regridding factor 60 | 61 | # --- End Global Variables --- # 62 | 63 | # --- Functions --- # 64 | def is_valid_file(parser, arg): 65 | # https://stackoverflow.com/questions/11540854/file-as-command-line-argument-for-argparse-error-message-if-argument-is-not-va 66 | if not os.path.exists(arg): 67 | parser.error("The file {0} does not exist!".format(arg)) 68 | else: 69 | return str(arg) 70 | 71 | def interpolate_raster(in_Geogrid, inDEM, cellsize, out_file): 72 | tic1 = time.time() 73 | 74 | # Georeference geogrid file 75 | rootgrp = netCDF4.Dataset(in_Geogrid, 'r') # Establish an object for reading the input NetCDF files 76 | globalAtts = rootgrp.__dict__ # Read all global attributes into a dictionary 77 | if LooseVersion(netCDF4.__version__) > LooseVersion('1.4.0'): 78 | rootgrp.set_auto_mask(False) # Change masked arrays to old default (numpy arrays always returned) 79 | 80 | # Build grid object 81 | coarse_grid = wrfh.WRF_Hydro_Grid(rootgrp) # Instantiate a grid object 82 | fine_grid = cpy.copy(coarse_grid) # Copy the grid object for modification 83 | fine_grid.regrid(cellsize) # Regrid to the desired nest ratio 84 | print(' Created projection definition from input NetCDF GEOGRID file.') 85 | print(' Proj4: {0}'.format(coarse_grid.proj4)) # Print Proj.4 string to screen 86 | print(' Coarse grid GeoTransform: {0}'.format(coarse_grid.GeoTransformStr())) # Print affine transformation to screen. 87 | print(' Coarse grid extent [Xmin, Ymin, Xmax, Ymax]: {0}'.format(coarse_grid.grid_extent())) # Print extent to screen. 88 | print(' Fine grid extent [Xmin, Ymin, Xmax, Ymax]: {0}'.format(fine_grid.grid_extent())) # Print extent to screen. 89 | 90 | # Create high resolution topography layers 91 | in_DEM = gdal.Open(inDEM, 0) # Open with read-only mode 92 | mosprj = fine_grid.project_to_model_grid(in_DEM, saveRaster=True, OutGTiff=out_file, resampling=resampling_method) 93 | in_DEM = mosprj = None 94 | print(' Output file created: {0}'.format(out_file)) 95 | print(' Interpolation completed in {0:3.2f} seconds.'.format(time.time()-tic1)) 96 | 97 | # --- End Functions --- # 98 | 99 | # --- Main Codeblock --- # 100 | if __name__ == '__main__': 101 | print('Script initiated at {0}'.format(time.ctime())) 102 | tic = time.time() 103 | 104 | # Setup the input arguments 105 | parser = ArgumentParser(description=descText, add_help=True) 106 | parser.add_argument("-i", 107 | dest="in_Geogrid", 108 | required=True, 109 | help="Path to WPS geogrid (geo_em.d0*.nc) file or WRF-Hydro Fulldom_hires.nc file.") 110 | parser.add_argument("-d", 111 | dest="inDEM", 112 | type=lambda x: is_valid_file(parser, x), 113 | default='', 114 | required=True, 115 | help="Path to input high-resolution elevation raster [REQUIRED]") 116 | parser.add_argument("-R", 117 | dest="cellsize", 118 | type=int, 119 | default=default_regridFactor, 120 | help="Regridding (nest) Factor. 
default=10") 121 | parser.add_argument("-o", 122 | dest="out_file", 123 | default='./', 124 | required=True, 125 | help="Output raster file (.tif).") 126 | 127 | # If no arguments are supplied, print help message 128 | if len(sys.argv) == 1: 129 | parser.print_help(sys.stderr) 130 | sys.exit(1) 131 | args = parser.parse_args() 132 | all_defaults = {key: parser.get_default(key) for key in vars(args)} 133 | 134 | # Handle printing to user the default variable name 135 | print(' Parameter values that have not been altered from script default values:') 136 | if args.cellsize == all_defaults["cellsize"]: 137 | print('Using default regridding factor of: {0}'.format(all_defaults["cellsize"])) 138 | if args.out_file == all_defaults["out_file"]: 139 | print('Using default output location of: {0}'.format(all_defaults["out_file"])) 140 | 141 | # Print information to screen 142 | print(' Values that will be used in building this routing stack:') 143 | print(' Input WPS Geogrid file: {0}'.format(args.in_Geogrid)) 144 | print(' Input high-resolution DEM: {0}'.format(args.inDEM)) 145 | print(' Regridding factor: {0}'.format(args.cellsize)) 146 | print(' Output raster file: {0}'.format(args.out_file)) 147 | print(' Resampling method: {0}\n'.format(str(resampling_method))) 148 | 149 | # Run the function to interpolate the raster, given inputs 150 | interpolate_raster(args.in_Geogrid, args.inDEM, args.cellsize, args.out_file) 151 | print('Process completed in {0:3.2f} seconds.'.format(time.time()-tic)) -------------------------------------------------------------------------------- /wrfhydro_gis/Unused_Code.py: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | # Coastline harmonization with a landmask 6 | def coastlineHarmonize(maskFile, ds, outmaskFile, outDEM, minimum, waterVal=0): 7 | ''' 8 | This function is designed to take a coastline mask and harmonize elevation 9 | values to it, such that no elevation values that are masked as water cells 10 | will have elevation >0, and no land cells will have an elevation < minimum. 
11 | ''' 12 | tic1 = time.time() 13 | 14 | # Read mask file for information 15 | refDS = gdal.Open(maskFile, gdalconst.GA_ReadOnly) 16 | target_ds = gdal.GetDriverByName(RasterDriver).Create(outmaskFile, ds.RasterXSize, ds.RasterYSize, 1, gdal.GDT_Byte) 17 | DEM_ds = gdal.GetDriverByName(RasterDriver).Create(outDEM, ds.RasterXSize, ds.RasterYSize, 1, ds.GetRasterBand(1).DataType) 18 | CopyDatasetInfo(ds, target_ds) # Copy information from input to output 19 | CopyDatasetInfo(ds, DEM_ds) # Copy information from input to output 20 | 21 | # Resample input to output 22 | gdal.ReprojectImage(refDS, target_ds, refDS.GetProjection(), target_ds.GetProjection(), gdalconst.GRA_NearestNeighbour) 23 | 24 | # Build numpy array of the mask grid and elevation grid 25 | maskArr = BandReadAsArray(target_ds.GetRasterBand(1)) 26 | elevArr = BandReadAsArray(ds.GetRasterBand(1)) 27 | 28 | # Reassign values 29 | ndv = ds.GetRasterBand(1).GetNoDataValue() # Obtain nodata value 30 | mask = maskArr==1 # A boolean mask of True wherever LANDMASK=1 31 | elevArr[elevArr==ndv] = 0 # Set Nodata cells to 0 32 | elevArr[mask] += minimum # For all land cells, add minimum elevation 33 | elevArr[~mask] = waterVal # ds.GetRasterBand(1).GetNoDataValue() 34 | 35 | # Write to output 36 | band = DEM_ds.GetRasterBand(1) 37 | BandWriteArray(band, elevArr) 38 | band.SetNoDataValue(ndv) 39 | 40 | # Clean up 41 | target_ds = refDS = DEM_ds = band = None 42 | del maskArr, elevArr, ndv, mask 43 | print(' DEM harmonized with landmask in %3.2f seconds.' %(time.time()-tic1)) 44 | 45 | def raster_extent(in_raster): 46 | ''' 47 | Given a raster object, return the bounding extent [xMin, yMin, xMax, yMax] 48 | ''' 49 | xMin, DX, xskew, yMax, yskew, DY = in_raster.GetGeoTransform() 50 | Xsize = in_raster.RasterXSize 51 | Ysize = in_raster.RasterYSize 52 | xMax = xMin + (float(Xsize)*DX) 53 | yMin = yMax + (float(Ysize)*DY) 54 | del Xsize, Ysize, xskew, yskew, DX, DY 55 | return [xMin, yMin, xMax, yMax] 56 | 57 | def alter_GT(GT, regridFactor): 58 | ''' 59 | This function will alter the resolution of a raster's affine transformation, 60 | assuming that the extent and CRS remain unchanged. 61 | ''' 62 | # Georeference geogrid file 63 | GeoTransform = list(GT) 64 | DX = GT[1]/float(regridFactor) 65 | DY = GT[5]/float(regridFactor) 66 | GeoTransform[1] = DX 67 | GeoTransform[5] = DY 68 | GeoTransformStr = ' '.join([str(item) for item in GeoTransform]) 69 | return GeoTransform, GeoTransformStr, DX, DY 70 | 71 | # Function to reclassify values in a raster 72 | def reclassifyRaster(array, thresholdDict): 73 | ''' 74 | Apply a dictionary of thresholds to an array for reclassification. 75 | This function may be made more complicated as necessary 76 | ''' 77 | # Reclassify array using bounds and new classified values 78 | new_arr = array.copy() 79 | for newval, oldval in thresholdDict.iteritems(): 80 | mask = numpy.where(array==oldval) 81 | new_arr[mask] = newval 82 | del array 83 | return new_arr 84 | 85 | # Function to calculate statistics on a raster using gdalinfo command-line 86 | def calcStats(inRaster): 87 | print(' Calculating statistics on %s' %inRaster) 88 | subprocess.call('gdalinfo -stats %s' %inRaster, shell=True) 89 | 90 | def apply_threshold(array, thresholdDict): 91 | ''' 92 | Apply a dictionary of thresholds to an array for reclassification. 
93 | This function may be made more complicated as necessary
94 | '''
95 |
96 | # Reclassify array using bounds and new classified values
97 | for newval, bounds in thresholdDict.items():
98 | mask = numpy.where((array > bounds[0]) & (array <= bounds[1])) # All values between bounds[0] and bounds[1]
99 | array[mask] = newval
100 | return array
-------------------------------------------------------------------------------- /wrfhydro_gis/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/NCAR/wrf_hydro_gis_preprocessor/efd89ea9c9250db2a8099a3b0894ff84b2008f4d/wrfhydro_gis/__init__.py --------------------------------------------------------------------------------