├── .DS_Store
├── .gitattributes
├── .gitignore
├── README.md
├── conda_setup_for_pycorr.instructions
├── images
│   ├── L2A_S2BA_7VEG_15_20200512_20200527_log10.png
│   ├── QGIS_selecting_vv_masked.png
│   ├── QGIS_speed_with_colorscale.png
│   └── S2Banner_log10.png
├── pycorr_iceflow_v1.1.py
└── pycorr_processing_tools
    ├── hp_filter_B8_lowmem_makeprdir.py
    ├── make_new_nc_dcoff_and_divergence_v0.py
    ├── make_processing_commands_S2.py
    ├── pycorrtools.py
    └── run_pycorr_process_list_multiprocess_v0.py

/.DS_Store:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/markf6/pycorr_iceflow/9c4a6f6a7afbce8cf79b3610095bf64994661c61/.DS_Store
--------------------------------------------------------------------------------
/.gitattributes:
--------------------------------------------------------------------------------
1 | # Auto detect text files and perform LF normalization
2 | * text=auto
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
1 | 
2 | images/.DS_Store
3 | .DS_Store
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | ![](images/S2Banner_log10.png)
2 | # pycorr_iceflow
3 | 
4 | # Ice flow "feature" offset tracking in satellite images
5 | 
6 | ## basic function
7 | `pycorr_iceflow_v1.1.py` is based on the approach described in [Rapid large-area mapping of ice flow using Landsat 8](https://www.sciencedirect.com/science/article/pii/S003442571530211X) and the production code used for the [GoLIVE Landsat 8 processing](https://nsidc.org/data/golive) at NSIDC. It determines offsets at a user-specified grid spacing by comparing a square patch of pixels (a "chip") from an earlier image to the pixels in a larger square patch in a later image using the openCV cv2.matchTemplate function, a dft-based cross-correlation that returns a correlation surface at integer pixel offsets between the two image chips. The sub-pixel offset is determined by finding the peak of a spline of the correlation surface in the vicinity of the highest integer correlation peak.
8 | 
9 | ---
10 | ## why pycorr?
11 | pycorr is a "relatively" lightweight python script that exploits [GDAL](https://gdal.org) and [openCV](https://opencv.org) to rapidly determine offsets in an image pair. Because it uses GDAL for image I/O, it can use image pairs in many geospatial formats, with the caveats that the images must overlap spatially and that their pixels must be the same size. pycorr produces a netCDF4 file with offsets and correlation values at a user-specified grid resolution in the same projection as the original images **if** the input images are in a UTM or Antarctic Polar Stereo (epsg:3031) projection - the set of projections used for Landsat imagery. If your images are in a different projection, you are not out of luck - use the `-output_geotiffs_instead_of_netCDF` option to write output in the same projection as the input images; this option allows any projection GDAL knows about, which is most. The issue here is that the netCDF4 CF geolocation spec requires a variable block in the output file that is named after the projection, making it difficult to support all projections in a simple way.
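The core of the measurement is compact enough to sketch. The snippet below is a minimal, hypothetical illustration of the matchTemplate-plus-spline approach described under "basic function" above - it is not the actual code in `pycorr_iceflow_v1.1.py`, and it assumes float32 chips and a correlation peak that sits away from the edges of the correlation surface:

```python
import cv2
import numpy as np
from scipy.interpolate import RectBivariateSpline

def subpixel_offset(chip_src, chip_tar):
    # correlation surface at every integer offset of the source chip within the target chip
    surf = cv2.matchTemplate(chip_tar, chip_src, cv2.TM_CCOEFF_NORMED)
    _, peak_val, _, (pi, pj) = cv2.minMaxLoc(surf)    # (pi, pj) = (x, y) of integer peak
    h = 3                                             # half-size of the spline neighborhood
    win = surf[pj - h:pj + h + 1, pi - h:pi + h + 1]  # assumes peak is >= h pixels from edges
    spline = RectBivariateSpline(np.arange(-h, h + 1), np.arange(-h, h + 1), win)
    fine = np.linspace(-h, h, 2 * h * 100 + 1)        # sample spline to 1/100th of a pixel
    vals = spline(fine, fine)
    sj, si = np.unravel_index(np.argmax(vals), vals.shape)
    # offset of the source chip center relative to the target chip center, in pixels
    di = pi + fine[si] - (chip_tar.shape[1] - chip_src.shape[1]) / 2.0
    dj = pj + fine[sj] - (chip_tar.shape[0] - chip_src.shape[0]) / 2.0
    return di, dj, peak_val
```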
12 | 
13 | There are a number of packages that provide similar analyses, and some may have more sophisticated approaches to identifying and filtering "noisy" matches, which can be due to cloud cover, surface change, low-contrast features or an absence of features, shadows, and low signal-to-noise input imagery. pycorr is intentionally simple - it does not retry with a series of larger chip sizes if the initial match fails to find a peak at a location; it returns a limited set of metrics that document the uniqueness and strength of each peak, which can be used to filter the output, but it does not attempt to provide an error estimate for each match.
14 | 
15 | pycorr is computationally fast because it uses numpy and the openCV library, and so can process an image pair in minutes or tens of minutes, depending on the image sizes, the requested grid spacing, and the maximum offset specified for the search. Processing can be sped up by using land and ocean masks to limit search distances off of the ice, and by using reference ice flow speed maps to set search distances larger in fast-flowing areas and smaller in slow-flowing areas, but for the simplest uses these are not applied.
16 | 
17 | ---
18 | ## installation
19 | The libraries required to run pycorr can be installed into a local environment with [Anaconda](https://www.anaconda.com/products/individual), pulling from the [conda-forge](https://conda-forge.org) repository:
20 | ### create environment
21 | ```
22 | conda create --name test_pycorr_env -c conda-forge numpy matplotlib gdal netCDF4 psutil scipy opencv ipython fiona shapely pyproj boto3 git
23 | ```
24 | 
25 | Deprecated: At the time of first writing (5/5/2021) the then-current gdal in conda-forge (3.2.2) did not install its python bindings properly - version 3.1.2 was specified here to avoid this issue.
26 | 
27 | ### clone the repository
28 | ```
29 | git clone https://github.com/markf6/pycorr_iceflow.git
30 | ```
31 | 
32 | [From this repository the only file you need at this point is pycorr_iceflow_v1.1.py]
33 | 
34 | ---
35 | ## example run
36 | ### activate the conda environment:
37 | ```
38 | conda activate test_pycorr_env
39 | ```
40 | 
41 | 
42 | #### get two Sentinel 2 images from the [AWS S2 Level2A CloudOptimizedGeotiff (COG) public data archive](https://registry.opendata.aws/sentinel-2-l2a-cogs/)
43 | 
44 | ```
45 | curl https://sentinel-cogs.s3.us-west-2.amazonaws.com/sentinel-s2-l2a-cogs/7/V/EG/2020/5/S2B_7VEG_20200512_0_L2A/B08.tif --output S2B_7VEG_20200512_0_L2A_B08.tif
46 | curl https://sentinel-cogs.s3.us-west-2.amazonaws.com/sentinel-s2-l2a-cogs/7/V/EG/2020/5/S2A_7VEG_20200527_0_L2A/B08.tif --output S2A_7VEG_20200527_0_L2A_B08.tif
47 | ```
48 | 
49 | ### run pycorr on this image pair, generate output netCDF4 (.nc) data file and browse GeoTIFF (with log colorscale)
50 | ```
51 | python pycorr_iceflow_v1.1.py -imgdir . S2B_7VEG_20200512_0_L2A_B08.tif S2A_7VEG_20200527_0_L2A_B08.tif \
52 |        -img1datestr 20200512 -img2datestr 20200527 -datestrfmt "%Y%m%d" \
53 |        -inc 10 -half_source_chip 10 -half_target_chip 55 -plotvmax 25 -log10 \
54 |        -out_name_base L2A_S2BA_7VEG_15_20200512_20200527 -progupdates -use_itslive_land_mask_from_web
55 | ```
56 | ### explanation of these options
57 | ```
58 | -imgdir .                  [image files are in current directory]
59 |     or
60 | -img1dir aaa -img2dir bbb  [specify directories for each image file]
61 | 
62 | img1.tif img2.tif          [the band 8 input file names - S2B_7VEG_20200512_0_L2A_B08.tif S2A_7VEG_20200527_0_L2A_B08.tif
63 |                             in this case - can be any geospatial format GDAL can read (.jp2 for S2 L1C images works, etc)]
64 | 
65 | -img1datestr 20200512 -img2datestr 20200527 -datestrfmt "%Y%m%d"   [specify the image acquisition dates using the specified format: %Y is
66 |                             4 digit year, %m is two digit month (01 - 12), %d is two digit day
67 |                             (01-31). If not specified, pycorr will look for the date in the
68 |                             Landsat or S2 filename, but the conventions for these vary and may
69 |                             not be covered]
70 | 
71 | -inc 10 -half_source_chip 10 -half_target_chip 55
72 |     [-inc is output grid spacing in input image pixels (for S2 10m pixels, output will have 100m pixel size)]
73 |     [-half_source_chip 10 specifies that a 20 x 20 pixel "chip" will be taken at each grid pixel center]
74 |     [-half_target_chip 55 specifies a 110 x 110 pixel "chip" in the second image - the search looks at every integer pixel offset
75 |         of the source chip within the target chip, generating a correlation surface, and then reports the peak
76 |         in the splined correlation surface at subpixel level (to 1/100th of a pixel)]
77 | 
78 |     Note that the maximum trackable offset in x or y is (half_target_chip - half_source_chip) *
79 |     input_pixel_size_in_meters - so for this example (55 pixels - 10 pixels) * 10 m/pixel =
80 |     max trackable offset of 450 meters, or 30 m/d for this 15-day pair.
81 | 
82 | -plotvmax 25 -log10        [these two together cause a color scale GeoTIFF browse image to be generated - log color scale,
83 |                             max velocity for browse image color scale is 25 m/d (only for browse image - DOES NOT AFFECT
84 |                             max trackable offset set by half_source_chip and half_target_chip above)]
85 | 
86 | -out_name_base L2A_S2BA_7VEG_15_20200512_20200527   [a string you specify for the start of the output filename, so you remember what you did]
87 | 
88 | -progupdates               [output updates for each 10% of the input image rows processed, so you aren't staring at a blank screen]
89 | 
90 | -use_itslive_land_mask_from_web   [pycorr will search the its-live-data.jpl.nasa.gov.s3.amazonaws.com S3 bucket - first it pulls an index
91 |                             shapefile and uses the center lat,lon of the overlap of the input images to figure out which its-live
92 |                             land mask to use, then it builds a local land mask based on it and uses it for offset correction between
93 |                             the two images - this makes the "land" chips have a 0 median offset, which provides an initial correction
94 |                             for geolocation offset between the pair of images - very important for short time intervals]
95 | 
96 | ```
97 | 
98 | 
99 | ### output files
100 | 
101 | the output browse image (speed represented by log colorscale, with dark red = 25 m/d (-log10 -plotvmax 25))
102 | ![](images/L2A_S2BA_7VEG_15_20200512_20200527_log10.png)
103 | 
104 | The output netCDF4 file (L2A_S2BA_7VEG_15_20200512_20200527.nc) can be opened as a raster in QGIS -
105 | choose the "vv_masked" layer to get ice flow speed, or vx_masked and vy_masked to get the vector components of the flow velocity
106 | in projection x and y, in meters/day.
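The same layers can also be read directly in Python with the netCDF4 library (a minimal sketch; the filename is the one produced by the example run above, and the x/y coordinate variable names are an assumption based on the CF-style layout of the file):

```python
import netCDF4
import numpy as np

ds = netCDF4.Dataset('L2A_S2BA_7VEG_15_20200512_20200527.nc')
vv = ds['vv_masked'][:]                          # speed (m/day) as a masked array - _FillValue is masked automatically
vx, vy = ds['vx_masked'][:], ds['vy_masked'][:]  # velocity components in projection x and y
x, y = ds['x'][:], ds['y'][:]                    # projected coordinates of the output grid (m)
print('median masked speed (m/d):', np.ma.median(vv))
ds.close()
```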
107 | 
108 | selecting vv_masked layer from netCDF4 .nc file
109 | 
110 | ![](images/QGIS_selecting_vv_masked.png)
111 | 
112 | 
113 | The selected layer, with colorscale applied
114 | 
115 | ![](images/QGIS_speed_with_colorscale.png)
116 | 
117 | ---
118 | ## major sources of error
119 | As a working point of reference, offsets in good-quality images over surfaces with recognizable features can be determined to 0.1 pixels or better. It is difficult, however, to quantify this accuracy for any single match. A sense of the background accuracy of the matching process for an image pair can be gained by looking at the scatter in the offsets determined for land pixels. Note that this scatter is likely similar whether the images are separated in time by a few days or by a year - meaning that the same scatter in offsets produces a much larger scatter in velocity for a few-day pair than for a pair with a longer time separation. It is always a tradeoff to use longer pairs for accuracy if the ice you are studying has rapid variations in velocity over time - every image-pair velocity field is a time-averaged displacement measurement.
120 | 
121 | A typical Landsat or Sentinel 2 satellite image has a geolocation accuracy of tens of meters - this can translate into an offset of a few pixels between two images. This simple offset is the largest source of error in a velocity determination if the time interval between the two images is short - a 15 m (one pixel) offset for a 16-day Landsat 8 pair would result in a background "speed" over land of nearly 1 meter/day. The same offset for an image pair with a 365-day time separation would produce a background "speed" over land of a few cm/day. For most slow-flowing ice, if there is a significant amount of land visible in the imagery, you will want to use a land mask that can identify the non-ice pixels, so that the background offset can be removed at the time of processing. This version of pycorr is able to use the global land masks that are presently online as part of the [ITS_LIVE project](https://its-live.jpl.nasa.gov), but this requires an internet connection at the time the code is run. It is also possible to use your own local mask file - either a land(1)/glacier(0)/water(0) mask or a water(2)/land(1)/glacier(0) mask will work - it will be reprojected and sampled at the output grid resolution during processing. The second of these will also allow limiting search distances over water, speeding up processing.
122 | 
123 | 
124 | A second common source of error is internal distortion in the satellite imagery caused by limited accuracy in the surface elevation model used by the image provider to geolocate the pixels in the image. While many optical imagers take images from a near-nadir viewpoint, the width of the image swath is large enough that pixels near the edge of the swath are viewed from an angle off nadir - meaning that any error in the elevation model used to map those pixels to the surface of the Earth will mis-locate them relative to pixels in the image center. If the ground location in question is to the right of the satellite on one pass, and to the left on an adjacent pass, then the topographic error will produce offsets in opposite directions. This parallax is the signal that is used to map topography from stereo imagery, but for ice flow mapping it isn't a good thing. It is also the case that topographic errors are much more common over rapidly flowing or rapidly melting ice than over any other land cover type. There is a simple solution to this problem: **use images from the same orbit track**, so that the topographic distortion is essentially the same in both images - this eliminates 95% of the issue. For Landsat this means using image pairs with the same path and row designations. For Sentinel 2 there is a similar "track" in the metadata, but the easiest approach is to use images separated by multiples of 5 days - a 5-day S2A x S2B pair, or a 10-day S2A x S2A or S2B x S2B pair (Sentinel 2 A and B are in 10-day repeat orbits, staggered 5 days apart).
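The mask-based offset correction discussed above amounts to subtracting the median land-pixel offset from every match before velocities are computed. Below is a hypothetical sketch of the idea, not pycorr's own implementation (which additionally thresholds on correlation-peak strength and on the number of valid land pixels before trusting the correction):

```python
import numpy as np

def land_offset_correction(del_i, del_j, lgo_mask, corr, min_corr=0.3):
    """del_i, del_j: measured offsets (pixels); lgo_mask: land(1)/glacier(0)/water(2);
    corr: correlation peak height - all on the same output grid."""
    land = (lgo_mask == 1) & (corr >= min_corr)  # land pixels with a usable match
    di0 = np.median(del_i[land])                 # background geolocation offset in i
    dj0 = np.median(del_j[land])                 # background geolocation offset in j
    return del_i - di0, del_j - dj0              # corrected offsets; land median is now ~0
```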
125 | 
126 | 
127 | ## the full set of options
128 | ```
129 | usage: pycorr_iceflow_v1.1.py [-h] [-imgdir IMGDIR] [-img1dir IMG1DIR] [-img2dir IMG2DIR] [-output_dir OUTPUT_DIR] [-img1datestr IMG1DATESTR]
130 |                               [-img2datestr IMG2DATESTR] [-datestrfmt DATESTRFMT] [-out_name_base OUT_NAME_BASE] [-bbox min_x min_y max_x max_y]
131 |                               [-plotvmax PLOTVMAX] [-trackvmax TRACKVMAX] [-nodatavmax NODATAVMAX] [-half_source_chip HALF_SOURCE_CHIP]
132 |                               [-half_target_chip HALF_TARGET_CHIP] [-inc INC] [-gfilt_sigma GFILT_SIGMA] [-dcam DCAM] [-cam CAM] [-cam1 CAM1]
133 |                               [-lgo_mask_filename LGO_MASK_FILENAME] [-lgo_mask_file_dir LGO_MASK_FILE_DIR] [-use_itslive_land_mask_from_web]
134 |                               [-VRT_dir VRT_DIR] [-max_allowable_pixel_offset_correction MAX_ALLOWABLE_PIXEL_OFFSET_CORRECTION]
135 |                               [-do_not_highpass_input_images] [-v] [-mpy] [-log10] [-nlf] [-progupdates] [-output_geotiffs_instead_of_netCDF]
136 |                               [-offset_correction_lgo_mask] [-lgo_mask_limit_land_offset]
137 |                               img1_name img2_name
138 | 
139 | uses image to image correlation to detect offsets in surface features;
140 | produces map of offsets in units of pixels and velocity in m/day or m/year
141 | 
142 | positional arguments:
143 |   img1_name             image 1 filename
144 |   img2_name             image 2 filename
145 | 
146 | optional arguments:
147 |   -h, --help            show this help message and exit
148 |   -imgdir IMGDIR        single source dir for both images [.]
149 |   -img1dir IMG1DIR      source dir for image 1 [.]
150 |   -img2dir IMG2DIR      source dir for image 2 [.]
151 |   -output_dir OUTPUT_DIR
152 |                         output dir [.]
153 |   -img1datestr IMG1DATESTR
154 |                         date string for image 1 [None - set from L8 filename]
155 |   -img2datestr IMG2DATESTR
156 |                         date string for image 2 [None - set from L8 filename]
157 |   -datestrfmt DATESTRFMT
158 |                         date string format for img1datestr and img2datestr [None - set from L8 filename] eg. %m/%d/%Y - SEE:
159 |                         https://docs.python.org/2/library/datetime.html#strftime-and-strptime-behavior
160 |   -out_name_base OUT_NAME_BASE
161 |                         output filename base
162 |   -bbox min_x min_y max_x max_y
163 |                         bbox for feature tracking area in projection m - minx miny maxx maxy - defaults to entire common area between images
164 |   -plotvmax PLOTVMAX    max vel for colormap [15]
165 |   -trackvmax TRACKVMAX  max vel that can be tracked (half_target_chip will be made larger if necessary, but if speed_ref is slow, that limit will be
166 |                         used...) [not set]
167 |   -nodatavmax NODATAVMAX
168 |                         max vel (m/d) that can be tracked if speedref file is used and has nodata at location (using trackvmax for this for large time
169 |                         separations impractical - think ocean pixels...) [0.333000]
170 |   -half_source_chip HALF_SOURCE_CHIP
171 |                         half size of source (small) chip [10]
172 |   -half_target_chip HALF_TARGET_CHIP
173 |                         half size of target (large or search) chip [20]
174 |   -inc INC              inc(rement) or chip center grid spacing in pixels (must be even integer) [20]
175 |   -gfilt_sigma GFILT_SIGMA
176 |                         gaussian filter sigma (standard deviation in pixels for Gaussian kernel) [3.000000]
177 |   -dcam DCAM            min difference between corr peaks 1 and 2 for mask ( full mask statement: masked_where(((del_corr_arr<dcam)&(corr_arr<cam))|(corr_arr<cam1)) )
```
--------------------------------------------------------------------------------
/pycorr_processing_tools/hp_filter_B8_lowmem_makeprdir.py:
--------------------------------------------------------------------------------
28 | class GeoImg_noload:  # GeoImg -> date modified to noload for sc application - don't read image on setup - will read image data in main code to hp filter, delete...and keep footprint small
29 |     """geocoded image input and info
30 |        a=GeoImg(in_file_name,indir='.')
31 |        a.img will contain image
32 |        a.parameter etc..."""
33 |     def __init__(self, in_filename,in_dir='.',datestr=None,datefmt='%m/%d/%y'):
34 |         self.filename = in_filename
35 |         self.in_dir_path = in_dir  # in_dir can be relative...
36 |         self.in_dir_abs_path=os.path.abspath(in_dir)  # get absolute path for later ref if needed
37 |         self.gd=gdal.Open(self.in_dir_path + os.path.sep + self.filename)
38 |         self.nodata_value=self.gd.GetRasterBand(1).GetNoDataValue()
39 |         self.srs=osr.SpatialReference(wkt=self.gd.GetProjection())
40 |         self.gt=self.gd.GetGeoTransform()
41 |         self.proj=self.gd.GetProjection()
42 |         self.intype=self.gd.GetDriver().ShortName
43 |         self.min_x=self.gt[0]
44 |         self.max_x=self.gt[0]+self.gd.RasterXSize*self.gt[1]
45 |         self.min_y=self.gt[3]+self.gt[5]*self.gd.RasterYSize
46 |         self.max_y=self.gt[3]
47 |         self.pix_x_m=self.gt[1]
48 |         self.pix_y_m=self.gt[5]
49 |         self.num_pix_x=self.gd.RasterXSize
50 |         self.num_pix_y=self.gd.RasterYSize
51 |         self.XYtfm=np.array([self.min_x,self.max_y,self.pix_x_m,self.pix_y_m]).astype('float')
52 |         if (datestr is not None):
53 |             self.imagedatetime=dt.datetime.strptime(datestr,datefmt)
54 |         elif ((self.filename.find('LC8') == 0) | (self.filename.find('LO8') == 0) | \
55 |               (self.filename.find('LE7') == 0) | (self.filename.find('LT5') == 0) | \
56 |               (self.filename.find('LT4') == 0)):  # looks landsat like - try parsing the date from filename (contains day of year)
57 |             self.sensor=self.filename[0:3]
58 |             self.path=int(self.filename[3:6])
59 |             self.row=int(self.filename[6:9])
60 |             self.year=int(self.filename[9:13])
61 |             self.doy=int(self.filename[13:16])
62 |             self.imagedatetime=dt.date.fromordinal(dt.date(self.year-1,12,31).toordinal()+self.doy)
63 |         else:
64 |             self.imagedatetime=None  # need to throw error in this case...or get it from metadata
65 |         # self.img=self.gd.ReadAsArray().astype(np.float32)  # works for L8 and earlier - and openCV correlation routine needs float or byte so just use float...
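        # (note: the imageij2XY / XY2imageij methods below convert between array indices and
        #  projected coordinates using XYtfm = [min_x, max_y, pix_x_m, pix_y_m]; the 0.5-pixel
        #  shifts reference pixel centers rather than pixel corners)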
66 |         self.srs=osr.SpatialReference(wkt=self.gd.GetProjection())
67 |     def imageij2XY(self,ai,aj,outx=None,outy=None):
68 |         it = np.nditer([ai,aj,outx,outy],
69 |                        flags = ['external_loop', 'buffered'],
70 |                        op_flags = [['readonly'],['readonly'],
71 |                                    ['writeonly', 'allocate', 'no_broadcast'],
72 |                                    ['writeonly', 'allocate', 'no_broadcast']])
73 |         for ii,jj,ox,oy in it:
74 |             ox[...]=(self.XYtfm[0]+((ii+0.5)*self.XYtfm[2]));
75 |             oy[...]=(self.XYtfm[1]+((jj+0.5)*self.XYtfm[3]));
76 |         return np.array(it.operands[2:4])
77 |     def XY2imageij(self,ax,ay,outi=None,outj=None):
78 |         it = np.nditer([ax,ay,outi,outj],
79 |                        flags = ['external_loop', 'buffered'],
80 |                        op_flags = [['readonly'],['readonly'],
81 |                                    ['writeonly', 'allocate', 'no_broadcast'],
82 |                                    ['writeonly', 'allocate', 'no_broadcast']])
83 |         for xx,yy,oi,oj in it:
84 |             oi[...]=((xx-self.XYtfm[0])/self.XYtfm[2])-0.5;  # if python arrays started at 1, + 0.5
85 |             oj[...]=((yy-self.XYtfm[1])/self.XYtfm[3])-0.5;  # "    "      "       "   "  "  " "
86 |         return np.array(it.operands[2:4])
87 |     # self.img=self.gd.ReadAsArray().astype(np.uint8)  # L7 and earlier - doesn't work with plt.imshow...
88 |     # self.img_ov2=self.img[0::2,0::2]
89 |     # self.img_ov10=self.img[0::10,0::10]
90 | 
91 | 
92 | # these need to be global - based on numpy limits for int16 datatype
93 | int_limit_maxm1 = np.iinfo('int16').max - 1  # get max (and min) signed int16 values for rescaling float hp before converting back to signed int16
94 | int_limit_minp2 = np.iinfo('int16').min + 2
95 | int_nodata_val = np.iinfo('int16').min
96 | format = "GTiff"
97 | driver = gdal.GetDriverByName( format )
98 | 
99 | 
100 | def make_hp_func(instuff):
101 |     process_time=time.time()
102 |     indir,b8_img_name,hpdir,hp1filename,hpargs=instuff
103 |     if resource_available:
104 |         print 'memory use at start of hp function: %s'%(memory_usage_resource())
105 |     # print 'working on image: %s'%(b8_img_name)
106 |     img1=GeoImg_noload(b8_img_name,in_dir=indir)
107 |     if resource_available:
108 |         print 'memory use after noload: %s'%(memory_usage_resource())
109 |     # t_log("open image")
110 |     img1_img=img1.gd.ReadAsArray().astype(np.float32)
111 |     print 'lines,pixels ',img1_img.shape
112 |     if resource_available:
113 |         print 'memory use after load as float32: %s'%(memory_usage_resource())
114 |     hp_arr_1=np.zeros_like(img1_img,dtype=np.float32)
115 |     if resource_available:
116 |         print 'memory use after alloc for hp_arr_1 as float32: %s'%(memory_usage_resource())
117 |     gaussian_filter(img1_img,hpargs.gfilt_sigma,output=hp_arr_1)
118 |     if resource_available:
119 |         print 'memory use after gaussian_filter: %s'%(memory_usage_resource())
120 |     hp_arr_1 -= img1_img
121 |     hp_arr_1 *= -1.0
122 |     # for j in range(img1_img.shape[0]):
123 |     #     for i in range(img1_img.shape[1]):
124 |     #         tmp=hp_arr_1[j,i]
125 |     #         hp_arr_1[j,i]=img1_img[j,i]-tmp
126 |     # hp_arr_1=img1_img-gaussian_filter(img1_img,hpargs.gfilt_sigma,output=None)
127 |     if resource_available:
128 |         print 'memory use after hp: %s'%(memory_usage_resource())
129 |     if img1.nodata_value:
130 |         nodata_index = img1_img == img1.nodata_value
131 |     else:
132 |         nodata_index = img1_img == 0  # if there is no no-data value set, use 0 in the original image as no-data
133 |     img1_img=None
134 |     if resource_available:
135 |         print 'memory use after del of img1_img: %s'%(memory_usage_resource())
136 |     stddev=np.std(hp_arr_1)
137 |     hp_min=np.min(hp_arr_1)
138 |     hp_max=np.max(hp_arr_1)
139 |     if resource_available:
140 |         print 'memory use after minmaxstd: %s'%(memory_usage_resource())
141 |     # img_min=np.min(img1_img[img1_img!=0.0])
142 |     # img_max=np.max(img1_img)
143 |     hp_larger=np.max([np.abs(hp_min), hp_max])
144 |     if (hp_larger<=float(int_limit_maxm1)):  # scale out to +- hp_larger
145 |         # hp_arr_1=int_limit_maxm1 * (hp_arr_1/hp_larger)
146 |         hp_arr_1 *= (float(int_limit_maxm1)/hp_larger)
147 |         scaling=float(int_limit_maxm1)/hp_larger
148 |         # print 'scaling is +- %f mapped to +- %f'%(hp_larger, int_limit_maxm1)
149 |     elif ((10.0*stddev)<=float(int_limit_maxm1)):  # or scale out to +- 10*std
150 |         # hp_arr_1=int_limit_maxm1 * (hp_arr_1/(10.0*stddev))
151 |         hp_arr_1 *= (float(int_limit_maxm1)/(10.0*stddev))
152 |         scaling=float(int_limit_maxm1)/(10.0*stddev)
153 |         # print 'scaling is +- %f mapped to +- %f'%(10.0*stddev, int_limit_maxm1)
154 |     else:
155 |         # hp_arr_1=hp_arr_1  # leave it alone and clip it (min radiometric resolution in hp is 1 DN from the original image...)
156 |         scaling=1.0
157 |     if resource_available:
158 |         print 'memory use after rescaling: %s'%(memory_usage_resource())
159 |     # print 'scaling is one to one DN image to DN hp, with hp clipped at +- %f'%(int_limit_maxm1)
160 |     # hp_arr_1=int_limit_maxm1 * (hp_arr_1/(2.0*stddev))
161 |     print 'image %s hp std dev: %f hp min %f hp max %f scaling %f hp DN : 1 img DN'%(b8_img_name,stddev,hp_min,hp_max,scaling)
162 |     hp_arr_1[hp_arr_1<=float(int_limit_minp2)]=float(int_limit_minp2)  # clip to min value plus 2 - min value is used for no_data
163 |     hp_arr_1[hp_arr_1>float(int_limit_maxm1)]=float(int_limit_maxm1)   # p2 and m1 are to help with float to int16 conversion issues (rounding won't produce wrapping)
164 |     hp_arr_1=np.int16(hp_arr_1)
165 |     # if img1.nodata_value:
166 |     #     hp_arr_1[img1_img == img1.nodata_value] = int_nodata_val
167 |     # else:
168 |     #     hp_arr_1[img1_img == 0] = int_nodata_val  # if there is no no-data value set, use 0 in the original image as no-data
169 |     hp_arr_1[nodata_index] = int_nodata_val
170 |     if resource_available:
171 |         print 'memory use just before tiff output: %s'%(memory_usage_resource())
172 |     tags={}  # set up tiff tag
173 |     tags['TIFFTAG_IMAGEDESCRIPTION'] = "original image: %s"%(b8_img_name)+" python_script_parameters:" + " ".join(sys.argv) + " hp_scaling: %f "%(scaling) + ' Arg ' + str(hpargs)
174 |     dst_filename = hpdir + '/' + hp1filename
175 |     (out_lines,out_pixels)=hp_arr_1.shape
176 |     out_bands=1
177 |     compression_options=['COMPRESS=DEFLATE','PREDICTOR=1']
178 |     dst_ds = driver.Create( dst_filename, out_pixels, out_lines, out_bands, gdal.GDT_Int16, options = compression_options)
179 |     dst_ds.SetMetadata(tags)  # write tiff tag
180 |     dst_ds.SetGeoTransform( img1.gt )
181 |     dst_ds.SetProjection( img1.proj )
182 |     dst_ds.GetRasterBand(1).SetNoDataValue( int_nodata_val )
183 |     dst_ds.GetRasterBand(1).WriteArray( (hp_arr_1).astype('int16') )
184 |     dst_ds = None  # done, close the dataset
185 |     # if psutil_available:
186 |     #     print 'img1_int16_hp written - using ',memory_usage_psutil(),memory_usage_resource()
187 |     # img1 = None
188 |     # img1_img = None
189 |     # hp_arr_1 = None
190 |     if resource_available:
191 |         print 'memory use just after tiff output: %s'%(memory_usage_resource())
192 |     runtime=time.time()-process_time
193 |     # print 'done with image %s in %5.1f seconds\n'%(hp1filename, runtime)
194 |     return runtime
195 | 
196 | # set up command line arguments
197 | parser = argparse.ArgumentParser( \
198 |     description="""high-pass filters Landsat band 8 image, writes out as signed int16,
199 |     scaled to range of hp image, or if that gives output single DN > input single DN,
200 |     to +-10 standard deviations (same restriction), or, if needed, clipped to int16
201 |     range (so one output DN = one input DN).
202 | 
203 |     output format: as a lossless (simple) compressed geotiff""",
204 |     epilog='>> <<',
205 |     formatter_class=argparse.RawDescriptionHelpFormatter)
206 | 
207 | parser.add_argument('-B8_dir',
208 |                     action='store',
209 |                     type=str,
210 |                     default='.',
211 |                     help='source dir for image [.]')
212 | parser.add_argument('B8_file',
213 |                     action='store',
214 |                     type=str,
215 |                     default=None,
216 |                     help='source B8 filename')
217 | parser.add_argument('-hp_dir',
218 |                     action='store',
219 |                     type=str,
220 |                     default='.',
221 |                     help='hp image output dir for both new, and possibly prior, hp images [.]')
222 | parser.add_argument('-gfilt_sigma',
223 |                     action='store',
224 |                     type=float,
225 |                     default=3.0,
226 |                     help='gaussian filter sigma (standard deviation for Gaussian kernel) [%(default)f]')
227 | # the rest of the parameters are flags - if raised, set to true, otherwise false
228 | parser.add_argument('-v',
229 |                     action='store_true',
230 |                     default=False,
231 |                     help='verbose - extra diagnostic and image info put in log file [False if not raised]')
232 | args = parser.parse_args()
233 | 
234 | outdir=args.hp_dir
235 | 
236 | if(args.v):
237 |     print '#', args
238 | 
239 | indir=args.B8_dir
240 | b8_img_name=args.B8_file
241 | hpdir=args.hp_dir
242 | 
243 | if not(os.path.isdir(hpdir)):
244 |     if not(os.path.isdir(os.path.dirname(hpdir))):
245 |         print '>>>>>>>>>>>>>>>parent directory %s does not exist - halting'%(os.path.dirname(hpdir))
246 |         sys.exit(1)
247 |     else:
248 |         os.makedirs(hpdir)
249 |         print '>>>>>>>>>>>>>>>created directory %s '%(hpdir)
250 | 
251 | hp1filename=args.B8_file.replace('_B8.TIF','_B8_hp.tif')
252 | deltime=make_hp_func([indir,b8_img_name,hpdir,hp1filename,args])
253 | 
254 | if resource_available:
255 |     print 'file %s/%s processed in %f seconds using %s'%(hpdir,hp1filename,deltime,memory_usage_resource())
256 | else:
257 |     print 'file %s/%s processed in %f seconds'%(hpdir,hp1filename,deltime)
258 | 
259 | 
260 | 
--------------------------------------------------------------------------------
/pycorr_processing_tools/make_new_nc_dcoff_and_divergence_v0.py:
--------------------------------------------------------------------------------
1 | from __future__ import print_function
2 | import netCDF4
3 | import numpy as np
4 | import gdal
5 | import os
6 | import sys
7 | from matplotlib.pyplot import get_cmap
8 | import re
9 | import argparse
10 | import glob
11 | 
12 | # copy and modify netCDF file to change offset correction to dc x,y offset over land pixels
13 | # and add velocity field divergence layer
14 | 
15 | ############################################################################################
16 | # from accepted answer here: https://stackoverflow.com/questions/15141563/python-netcdf-making-a-copy-of-all-variables-and-attributes-but-one
17 | # import netCDF4 as nc  #(this appears to be wrong - "netCDF4" used in code below)
18 | # toexclude = ['ExcludeVar1', 'ExcludeVar2']
19 | #
20 | # with netCDF4.Dataset("in.nc") as src, netCDF4.Dataset("out.nc", "w") as dst:
21 | #     # copy global attributes all at once via dictionary
22 | #     dst.setncatts(src.__dict__)
23 | #     # copy dimensions
24 | #     for name, dimension in src.dimensions.items():
25 | #         dst.createDimension(
26 | #             name, (len(dimension) if not dimension.isunlimited() else None))
27 | #     # copy all file data except for the excluded
28 | #     for name, variable in src.variables.items():
29 | #         if name not in toexclude:
30 | #             x = dst.createVariable(name, variable.datatype, variable.dimensions)
31 | #             dst[name][:] = src[name][:]
32 | #             # copy variable attributes all at once via dictionary
33 | #             dst[name].setncatts(src[name].__dict__)
34 | ########
35 | # NOTE: had to rearrange the above code to deal with case where variable had no shape (no values)
36 | # this occurs in cf1.6 files where a variable's attributes are used to store all the information
37 | # also chose to modify (back) to python 2 version of iteritems...
38 | #
39 | # See below for modified code
40 | ############################################################################################
41 | default_indir = '.'
42 | default_outdir = '.'
43 | default_speed_max = 15.0
44 | default_cam = 1.0
45 | default_dcam = 0.05
46 | default_cam1 = 0.1
47 | 
48 | # -dcam 0.05 -cam 1.0 -cam1 0.0
49 | 
50 | make_output_prdir = False
51 | make_log10 = True
52 | 
53 | parser = argparse.ArgumentParser( \
54 |     description="""make_new_nc_dcoff_and_divergence_v0.py input_vel.nc
55 |     makes new version of ice flow nc file - uses del_i and del_j from original to do a constant i,j offset correction, adds divergence layer
56 |     (this removes bilinear warping applied to fit land mask or interior velocity maps used in original processing)
57 | 
58 | 
59 |     """,
60 |     formatter_class=argparse.RawDescriptionHelpFormatter)
61 | 
62 | parser.add_argument('-input_dir',
63 |                     action='store',
64 |                     type=str,
65 |                     default=default_indir,
66 |                     help='input directory for nc file [%(default)s]')
67 | parser.add_argument('-output_dir',
68 |                     action='store',
69 |                     type=str,
70 |                     default=default_outdir,
71 |                     help='output directory for nc, tif and png files [%(default)s]')
72 | parser.add_argument('-del_i_to_use',
73 |                     action='store',
74 |                     type=float,
75 |                     default=None,
76 |                     help='use this fixed del_i instead of calculating new one from land mask (requires -del_j_to_use) [%(default)s]')
77 | parser.add_argument('-del_j_to_use',
78 |                     action='store',
79 |                     type=float,
80 |                     default=None,
81 |                     help='use this fixed del_j instead of calculating new one from land mask (requires -del_i_to_use) [%(default)s]')
82 | parser.add_argument('-plt_speed_max',
83 |                     action='store',
84 |                     type=float,
85 |                     default=default_speed_max,
86 |                     help='max speed on plots [%(default)f]')
87 | parser.add_argument('-cam',
88 |                     action='store',
89 |                     type=float,
90 |                     default=default_cam,
91 |                     help='default cam [%(default)f]')
92 | parser.add_argument('-dcam',
93 |                     action='store',
94 |                     type=float,
95 |                     default=default_dcam,
96 |                     help='default dcam [%(default)f]')
97 | parser.add_argument('-cam1',
98 |                     action='store',
99 |                     type=float,
100 |                     default=default_cam1,
101 |                     help='default cam1 [%(default)f]')
102 | # parser.add_argument('-span_tree_make_missing_dirs',
103 | #                     action='store_true',
104 | #                     default=False,
105 | #                     help='find all nc files in p_r or S2_tile dirs in specified input_dir, create output dirs as needed in output_dir [%(default)s]')
106 | parser.add_argument('-make_missing_output_dir',
107 |                     action='store_true',
108 |                     default=False,
109 |                     help='make output_dir if it is missing [%(default)s]')
110 | parser.add_argument('-replace_existing_output',
111 |                     action='store_true',
112 |                     default=False,
113 |                     help='replace existing output nc file if present (False skips this file) [%(default)s]')
114 | parser.add_argument('input_nc_file',
115 |                     action='store',
116 |                     type=str,
117 |                     default=None,
118 |                     help='nc file to process [None]')
119 | args = parser.parse_args()
120 | 
121 | if (args.del_i_to_use is None and not(args.del_j_to_use is None)) or (not(args.del_i_to_use is None) and args.del_j_to_use is None):
122 |     parser.error("both -del_i_to_use and -del_j_to_use must be set if they are to be used.")
123 | 
124 | if (not(args.del_j_to_use is None) and not(args.del_i_to_use is None)):
125 |     using_fixed_del_i_del_j = True
126 | else:
127 |     using_fixed_del_i_del_j = False
128 | 
129 | 
130 | 
131 | cam = args.cam
132 | dcam = args.dcam
133 | cam1 = args.cam1
134 | 
135 | 
136 | 
137 | 
138 | variables_to_exclude = [
139 |     u'offset_correction',
140 |     u'applied_bilinear_x_offset_correction_in_pixels',
141 |     u'applied_bilinear_y_offset_correction_in_pixels',
142 |     u'vx',
143 |     u'vy',
144 |     u'vv',
145 |     u'vx_masked',
146 |     u'vy_masked',
147 |     u'vv_masked',
148 |     u'applied_bilinear_x_offset_correction_in_pixels',
149 |     u'applied_bilinear_y_offset_correction_in_pixels',
150 |     u'offset_correction'
151 |     ]
152 | 
153 | # files_to_process = []
154 | 
155 | indir = args.input_dir
156 | outdir = args.output_dir
157 | if not(os.path.exists(outdir)):
158 |     if args.make_missing_output_dir:
159 |         os.makedirs(outdir)
160 |     else:
161 |         raise ValueError('output directory {} does not exist, and -make_missing_output_dir not set on command line'.format(outdir))
162 | 
163 | in_nc_file = args.input_nc_file
164 | out_nc_file = in_nc_file.replace('_hp','_dcd')  # dcd => dc offset, divergence
165 | 
166 | if not(args.replace_existing_output) and os.path.exists(outdir + '/' + out_nc_file):
167 |     print('file: {} already present, skipping. Use -replace_existing_output to force replace.'.format(outdir + '/' + out_nc_file))
168 |     sys.exit(0)
169 | 
170 | # if not(args.span_tree_make_missing_dirs):  # only one file to process, make it the sole member of the list
171 | #     in_nc_file = args.input_nc_file
172 | #     out_nc_file = in_nc_file.replace('_hp','_dcd')  # dcd => dc offset, divergence
173 | #     files_to_process.append( (indir, in_nc_file, outdir, out_nc_file) )
174 | # else:  # traverse tree of input and output files_to_process
175 | 
176 | src = netCDF4.Dataset(indir + '/' + in_nc_file)
177 | dst = netCDF4.Dataset(outdir + '/' + out_nc_file, "w", clobber=True, format='NETCDF4')
178 | 
179 | ############## See above comment block - following is based on that stackoverflow code example -
180 | ############## modified to deal with empty variables that are used only for metadata fields (projection in cf1.6 file for example)
181 | # copy global attributes all at once via dictionary
182 | dst.setncatts(src.__dict__)
183 | # copy dimensions
184 | for name, dimension in src.dimensions.items():
185 |     dst.createDimension(
186 |         name, (len(dimension) if not dimension.isunlimited() else None))
187 | # copy all file data except for the excluded
188 | for name, variable in src.variables.iteritems():
189 |     if name not in variables_to_exclude:
190 |         x = dst.createVariable(name, variable.datatype, variable.dimensions, **variable.filters())
191 |         # copy variable attributes all at once via dictionary
192 |         dst[name].setncatts(src[name].__dict__)
193 |         if src[name].shape != ():
194 |             dst[name][:] = src[name][:]
195 | 
196 | 
197 | 
198 | 
199 | lgo_masked_offset_available = None
200 | found_valid_offset = False
201 | ################################
202 | # values from sc_pycorr_v5p10
203 | lgo_masked_min_num_pix=500
204 | lgo_masked_min_percent_valid_pix_available=0.05
205 | max_allowable_pixel_offset_correction = 3.0
206 | 
207 | corr_val_for_offset=0.3
208 | delcorr_min_for_offset = 0.15
209 | 
210 | vel_nodata_val=np.float32(-9999.0)
211 | 
212 | plotvmax = args.plt_speed_max  # 15.0
213 | #
214 | ################################
215 | 
216 | regex = r"(-inc) (?P<incval>\d+)"
217 | bb = re.search(regex, src.getncattr('history'))  # looking in processing history for the increment value used by sc_pycorr to get grid spacing in original image pixels
218 | inc_val_pixels = float(bb.groupdict()['incval'])  # this will be the spacing between chip centers in the original images, in pixels in those images
219 | 
220 | grid_mapping = src['vv'].getncattr('grid_mapping')  # used below when writing new velocity grids to output nc (projection)
221 | del_t_speedunit_str = src['vv'].getncattr('units')  # used below when writing new velocity grids to output nc (m/d typically)
222 | 
223 | corr_arr = src['corr'][:]
224 | corr_nodata_val = src['corr'].getncattr('_FillValue')
225 | del_corr_arr = src['del_corr'][:]
226 | del_i = src['del_i'][:]
227 | del_j = src['del_j'][:]
228 | 
229 | if not(using_fixed_del_i_del_j):
230 |     if 'lgo_mask' in src.variables.keys():
231 |         lgo_mask_image_utm = src['lgo_mask'][:]
232 | 
233 |         print(out_nc_file + ': attempting to find offset correction for land (lgo mask) areas')
234 |         # mask pixels that are not land not(lgo==1)
235 |         if '_FillValue' in src['lgo_mask'].ncattrs():  # input speed reference has a specified no_data value.
236 |             lgo_mask_nodata = src['lgo_mask'].getncattr('_FillValue')
237 | 
238 |         lgo_masked_d_i_m=np.ma.masked_where((corr_arr==corr_nodata_val) | (del_corr_arr < delcorr_min_for_offset) | (corr_arr < corr_val_for_offset) | \
239 |                                             (lgo_mask_image_utm!=1)|(lgo_mask_image_utm==lgo_mask_nodata),del_i)
240 |         lgo_masked_d_j_m=np.ma.masked_where((corr_arr==corr_nodata_val) | (del_corr_arr < delcorr_min_for_offset) | (corr_arr < corr_val_for_offset) | \
241 |                                             (lgo_mask_image_utm!=1)|(lgo_mask_image_utm==lgo_mask_nodata),del_j)
242 |         lgo_masked_num_possible_pix=np.count_nonzero(np.array((lgo_mask_image_utm==1)&(lgo_mask_image_utm!=lgo_mask_nodata)&(corr_arr!=corr_nodata_val), dtype=np.bool))
243 |         # lgo_masked_num_valid_pix=np.count_nonzero(np.array((speedref_vel_vv...
244 |         lgo_masked_num_valid_pix=lgo_masked_d_i_m.count()
254 |         if lgo_masked_num_valid_pix>0:
255 |             lgo_masked_offset_i = -(np.median(lgo_masked_d_i_m.compressed()))
256 |             lgo_masked_offset_j = -(np.median(lgo_masked_d_j_m.compressed()))
257 |             lgo_masked_stdev_i = np.std(lgo_masked_d_i_m.compressed())
258 |             lgo_masked_stdev_j = np.std(lgo_masked_d_j_m.compressed())
259 |             lgo_masked_offset_available=True
260 |             print('found lgo_mask (land pixel) offset correction (del_i: {0} del_j: {1} std_i {2} std_j {3}) using {4} pixels out of {5} possible'.format(
261 |                 lgo_masked_offset_i,lgo_masked_offset_j,lgo_masked_stdev_i,lgo_masked_stdev_j,lgo_masked_num_valid_pix,lgo_masked_num_possible_pix))
262 | 
263 |     else:  # no lgo_mask available - how to do offset?
264 |         print('No land/ice/ocean mask available in nc file, and not using fixed (specified) -del_i_to_use and -del_j_to_use, SO exiting...')
265 |         sys.exit(0)
266 | 
267 | else:  # using_fixed_del_i_del_j (still need mask below for new output fields, so read it if it is there)
268 |     if 'lgo_mask' in src.variables.keys():
269 |         lgo_mask_image_utm = src['lgo_mask'][:]
270 | 
271 | 
272 | if not(using_fixed_del_i_del_j):
273 |     if lgo_masked_offset_available and \
274 |        (lgo_masked_num_valid_pix>lgo_masked_min_num_pix) and \
275 |        ((float(lgo_masked_num_valid_pix)/lgo_masked_num_possible_pix) >= lgo_masked_min_percent_valid_pix_available/100.0) and \
276 |        (np.sqrt(lgo_masked_offset_i**2.0 + lgo_masked_offset_j**2.0) < max_allowable_pixel_offset_correction):
277 |         # use lgo_masked correction
278 |         final_offset_correction_i=lgo_masked_offset_i
279 |         final_offset_correction_j=lgo_masked_offset_j
280 |         found_valid_offset=True
281 |         offset_correction_type_applied='lgo_masked_correction'
282 |         offset_correction_type_description='lgo_masked_correction for land pixels, %d valid pixels out of %d possible for scene (%f %%)'%(lgo_masked_num_valid_pix,lgo_masked_num_possible_pix,100.0*(np.float(lgo_masked_num_valid_pix)/lgo_masked_num_possible_pix))
283 |     else:
284 |         offset_correction_type_applied='None'
285 |         offset_correction_type_description='None'
286 | else:  # using_fixed_del_i_del_j
287 |     final_offset_correction_i = args.del_i_to_use
288 |     final_offset_correction_j = args.del_j_to_use
289 |     found_valid_offset=True
290 |     offset_correction_type_applied='specified_del_i_del_j'
291 |     offset_correction_type_description='user specified del_i={} del_j={}'.format(args.del_i_to_use, args.del_j_to_use)
292 | 
293 | 
294 | ############################################################################
295 | # apply offset correction if there is one
296 | ############################################################################
297 | 
298 | # dcam=args.dcam
299 | # cam=args.cam
300 | # cam1=args.cam1
301 | 
302 | # -dcam 0.1 -cam 1.0 -cam1 0.5 typical parameters for GoLIVE processing
303 | # greenland -dcam 0.05 -cam 1.0 -cam1 0.0
304 | # dcam = 0.1
305 | # cam = 1.0
306 | # cam1 = 0.2
307 | 
308 | image_pix_x_m = src['input_image_details'].getncattr('image1_pix_x_size')
309 | image_pix_y_m = src['input_image_details'].getncattr('image1_pix_y_size')
310 | del_t_val = float(src['image_pair_times'].getncattr('del_t'))
311 | 
312 | # if not(args.offset_correction_speedref or args.offset_correction_lgo_mask) or not(found_valid_offset):  # no offset to apply
313 | if not(found_valid_offset):  # no offset to apply
314 |     print('Not enough valid pixels to apply offset correction - proceeding without one')
315 |     # create masked and unmasked versions of the speed, vx, and vy output
316 |     # the masked version will be written to the nc file with no_data values in corners and also where the correlation parameters suggest masking is needed
317 |     # vv=np.ma.masked_where(((del_corr_arr<dcam)&(corr_arr<cam))|(corr_arr<cam1)|(vv>plotvmax),((offset_dist_ij_arr*img1.pix_x_m)/del_t_val))
318 |     vx=np.ma.masked_where(((del_corr_arr<dcam)&(corr_arr<cam))|(corr_arr<cam1),(del_i*image_pix_x_m)/del_t_val)
319 |     vy=np.ma.masked_where(((del_corr_arr<dcam)&(corr_arr<cam))|(corr_arr<cam1),(del_j*image_pix_y_m)/del_t_val)
341 |     vv=np.ma.masked_where(((del_corr_arr<dcam)&(corr_arr<cam))|(corr_arr<cam1),np.sqrt(np.square(vx) + np.square(vy)))
382 |             if ( (i > 0) & (i < (num_x - 2)) & (j > 0) & (j < (num_y - 2)) ):
383 |                 if ( (lgo_mask_image_utm[j,i]==0) & (lgo_mask_image_utm[j,i-1]==0) & (lgo_mask_image_utm[j,i+1]==0) & (lgo_mask_image_utm[j-1,i]==0) & (lgo_mask_image_utm[j+1,i]==0) ):  # if pixel and surrounding are ice
384 |                     divergence[j,i] = ( ((nvx[j,i+1] - nvx[j,i-1])/strain_rate_distance) + (np.sign(image_pix_y_m)*(nvy[j+1,i] - nvy[j-1,i])/strain_rate_distance) )  # np.sign to get gradient in vy calculated in proper direction
385 | # divergence[divergence == vel_nodata_val] = np.nan
386 | # divergence[divergence == np.nan] = vel_nodata_val  # reset the nans that made it through calculation to nodata_val for nc output
387 | divergence_ma = np.ma.masked_where(divergence == vel_nodata_val, divergence)
388 | 
389 | 
390 | 
391 | 
392 | # done with src nc file, close it and reopen as gdal image to get geotransform for gtiff output if log10 is used
393 | # output nc file will get its transform from input nc file. Otherwise, to get geotransform from nc file, you have to know what
394 | # the projection is in advance and then seek the metadata block named after the projection
395 | src.close()
396 | 
397 | 
398 | mapjet=get_cmap('jet')
399 | mmvv=vv.copy()
400 | mmvv[(mmvv>plotvmax) & (~mmvv.mask)]=plotvmax
401 | vv_zo=mmvv/plotvmax
402 | vv_rgba=mapjet(vv_zo)  # this produces a 4-band 0.0-1.0 image array using the jet colormap
403 | (out_lines,out_pixels,out_bands)=vv_rgba.shape  # NEED these values below, EVEN if only output tif is log10
404 | 
405 | 
406 | if (make_log10):  # prepare a log10 version of the speed for output below
407 |     mmvv=vv.copy()
408 |     mmvv[(mmvv>plotvmax) & (~mmvv.mask)]=plotvmax
409 |     mmvv.mask[(mmvv==0) & (~mmvv.mask)]= True
410 |     # ignore warning for invalid points (0 or <0) - will be set to nans
411 |     with np.errstate(invalid='ignore', divide='ignore'):
412 |         lmmvv=np.log10(mmvv)
413 |     # min_lmmvv=np.min(lmmvv)
414 |     min_lmmvv=np.log10(0.1)
415 |     max_lmmvv=np.log10(plotvmax)
416 |     range_lmmvv=max_lmmvv - min_lmmvv
417 |     lvv_zo=(lmmvv - min_lmmvv)/(range_lmmvv)
418 |     lvv_rgba=mapjet(lvv_zo)  # this produces a 4-band 0.0-1.0 image array using the jet colormap
419 | 
420 | 
421 | 
422 | 
423 | 
424 | ##########################################################################################
425 | #
426 | # output section - write various output files
427 | #
428 | ##########################################################################################
429 | 
430 | 
431 | # check if output pr directory exists - if -make_output_prdir flag is set, create a missing pr directory, otherwise fail...
432 | 
433 | 
434 | 
435 | if (make_output_prdir):
436 |     if not(os.path.isdir(outdir)):
437 |         if not(os.path.isdir(os.path.dirname(outdir))):  # make sure the parent directory (one holding all the pr directories) exists
438 |             print(out_name_base + ': ' + 'Error: >>>>>>>>>>>>>>>parent directory {} does not exist - halting'.format(os.path.dirname(outdir)))
439 |             sys.exit(0)
440 |         else:
441 |             os.makedirs(outdir)
442 |             print(out_name_base + ': ' + '>>>>>>>>>>>>>>>created directory {} '.format(outdir))
443 |             os.chmod(outdir, 0o755)
444 | 
445 | # if not(args.no_gtif):  ###### This flag is not presently used, as GTiff forms basis for all output right now...
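# (The log10 browse products written below map log10(0.1 m/d) to 0.0 and log10(plotvmax) to 1.0
#  through the jet colormap, write a 4-band RGBA GeoTIFF, and then copy it to a PNG with
#  CreateCopy, since the GDAL PNG driver does not support Create directly.)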
446 | format = "GTiff"
447 | driver = gdal.GetDriverByName( format )
448 | metadata = driver.GetMetadata()
449 | 
450 | pngdriver = gdal.GetDriverByName( 'PNG' )
451 | 
452 | 
453 | if(make_log10):
454 |     gd = gdal.Open('NETCDF:' + indir + '/' + in_nc_file + ':vv')  # to get geotransform and other projection info from it, no data
455 |     # dst_filename = outdir + '/' + file_name_base + '_log10.tif'
456 |     dst_filename = outdir + '/' + out_nc_file.replace('.nc','.tif')
457 |     dst_ds = driver.Create( dst_filename, out_pixels, out_lines, out_bands, gdal.GDT_Byte )
458 | 
459 |     # dst_ds.SetGeoTransform( [ com_min_x, inc * img1.pix_x_m, 0, com_max_y, 0, inc * img1.pix_y_m ] )  # note pix_y_m typically negative
460 |     dst_ds.SetGeoTransform( gd.GetGeoTransform() )  # note pix_y_m typically negative
461 |     dst_ds.SetProjection( gd.GetProjection() )
462 |     dst_ds.GetRasterBand(1).WriteArray( (lvv_rgba[:,:,0]*255).astype('ubyte') )
463 |     dst_ds.GetRasterBand(2).WriteArray( (lvv_rgba[:,:,1]*255).astype('ubyte') )
464 |     dst_ds.GetRasterBand(3).WriteArray( (lvv_rgba[:,:,2]*255).astype('ubyte') )
465 |     dst_ds.GetRasterBand(4).WriteArray( (lvv_rgba[:,:,3]*255).astype('ubyte') )
466 |     # dst_ds = None  # done, close the dataset
467 | 
468 | 
469 |     ############
470 |     # Now create PNG as well, using CreateCopy - gdal png driver does not support Create at this time...
471 |     dst_filename1 = outdir + '/' + out_nc_file.replace('.nc','.png')
472 |     dst_ds1 = pngdriver.CreateCopy( dst_filename1, dst_ds)  # copy geotiff to png
473 | 
474 | 
475 | 
476 |     dst_ds1 = None  # done, close the dataset
477 |     dst_ds = None  # done, close the dataset
478 |     # make output files rw-r--r-- and remove unneeded png.aux.xml file that gdal creates
479 |     os.chmod(dst_filename, 0o644)
480 |     os.chmod(dst_filename1, 0o644)
481 |     os.remove(dst_filename1 + '.aux.xml')
482 | 
483 | 
484 | ####################################################
485 | #
486 | # now add new (corrected) velocity layers to nc file
487 | #
488 | ####################################################
489 | 
490 | varname='vx'
491 | datatype=np.dtype('float32')
492 | dimensions=('y','x')
493 | FillValue=vel_nodata_val
494 | var = dst.createVariable(varname,datatype,dimensions, fill_value=FillValue, zlib=True, complevel=2, shuffle=True)
495 | var.setncattr('grid_mapping',grid_mapping)
496 | var.setncattr('standard_name','x_velocity')
497 | var.setncattr('long_name','x component of velocity')
498 | var.setncattr('units',del_t_speedunit_str)
499 | var[:] = vx_nomask.astype('float32')
500 | 
501 | varname='vy'
502 | datatype=np.dtype('float32')
503 | dimensions=('y','x')
504 | FillValue=vel_nodata_val
505 | var = dst.createVariable(varname,datatype,dimensions, fill_value=FillValue, zlib=True, complevel=2, shuffle=True)
506 | var.setncattr('grid_mapping',grid_mapping)
507 | var.setncattr('standard_name','y_velocity')
508 | var.setncattr('long_name','y component of velocity')
509 | var.setncattr('units',del_t_speedunit_str)
510 | var[:] = vy_nomask.astype('float32')
511 | 
512 | varname='vv'
513 | datatype=np.dtype('float32')
514 | dimensions=('y','x')
515 | FillValue=vel_nodata_val
516 | var = dst.createVariable(varname,datatype,dimensions, fill_value=FillValue, zlib=True, complevel=2, shuffle=True)
517 | var.setncattr('grid_mapping',grid_mapping)
518 | var.setncattr('standard_name','speed')
519 | var.setncattr('long_name','magnitude of velocity')
520 | var.setncattr('units',del_t_speedunit_str)
521 | var[:] = vv_nomask.astype('float32')
522 | 
523 | varname='vx_masked'
524 | datatype=np.dtype('float32')
525 | dimensions=('y','x')
526 | FillValue=vel_nodata_val
527 | var = dst.createVariable(varname,datatype,dimensions, fill_value=FillValue, zlib=True, complevel=2, shuffle=True)
528 | var.setncattr('grid_mapping',grid_mapping)
529 | var.setncattr('standard_name','x_velocity_masked')
530 | var.setncattr('long_name','x component of velocity (masked)')
531 | var.setncattr('units',del_t_speedunit_str)
532 | var.setncattr('masking_info','masked_where(((del_corr_arr<%4.3f)&(corr_arr<%4.3f))|(corr_arr<%4.3f))'%(dcam,cam,cam1))
533 | var[:] = vx.astype('float32')
534 | 
535 | varname='vy_masked'
536 | datatype=np.dtype('float32')
537 | dimensions=('y','x')
538 | FillValue=vel_nodata_val
539 | var = dst.createVariable(varname,datatype,dimensions, fill_value=FillValue, zlib=True, complevel=2, shuffle=True)
540 | var.setncattr('grid_mapping',grid_mapping)
541 | var.setncattr('standard_name','y_velocity_masked')
542 | var.setncattr('long_name','y component of velocity (masked)')
543 | var.setncattr('units',del_t_speedunit_str)
544 | var.setncattr('masking_info','masked_where(((del_corr_arr<%4.3f)&(corr_arr<%4.3f))|(corr_arr<%4.3f))'%(dcam,cam,cam1))
545 | var[:] = vy.astype('float32')
546 | 
547 | varname='vv_masked'
548 | datatype=np.dtype('float32')
549 | dimensions=('y','x')
550 | FillValue=vel_nodata_val
551 | var = dst.createVariable(varname,datatype,dimensions, fill_value=FillValue, zlib=True, complevel=2, shuffle=True)
552 | var.setncattr('grid_mapping',grid_mapping)
553 | var.setncattr('standard_name','speed_masked')
554 | var.setncattr('long_name','magnitude of velocity (masked)')
555 | var.setncattr('units',del_t_speedunit_str)
556 | var.setncattr('masking_info','masked_where(((del_corr_arr<%4.3f)&(corr_arr<%4.3f))|(corr_arr<%4.3f))'%(dcam,cam,cam1))
557 | var[:] = vv.astype('float32')
558 | 
559 | varname='divergence_masked'
560 | datatype=np.dtype('float32')
561 | dimensions=('y','x')
562 | FillValue=vel_nodata_val
563 | var = dst.createVariable(varname,datatype,dimensions, fill_value=FillValue, zlib=True, complevel=2, shuffle=True)
564 | var.setncattr('grid_mapping',grid_mapping)
565 | var.setncattr('standard_name','divergence')
566 | var.setncattr('long_name','divergence of velocity (masked)')
567 | var.setncattr('units',del_t_speedunit_str.replace('m','1'))
568 | var.setncattr('masking_info','masked_where(((del_corr_arr<%4.3f)&(corr_arr<%4.3f))|(corr_arr<%4.3f))'%(dcam,cam,cam1))
569 | var[:] = divergence_ma.astype('float32')
570 | 
571 | # set variables
572 | # first set up variable not as dimension, but as holder for attributes describing the offset correction that was applied
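# (offset_correction below is an attribute-only variable - it holds no data values; this is the
#  same cf1.6-style pattern noted near the top of this script, where a variable's attributes
#  store all the information)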
573 | varname='offset_correction'
574 | datatype=np.dtype('S1')
575 | dimensions=()
576 | FillValue=None
577 | var = dst.createVariable(varname,datatype, dimensions, fill_value=FillValue)
578 | if found_valid_offset:
579 |     var.setncattr('offset_correction_type_applied',offset_correction_type_applied)
580 |     var.setncattr('offset_correction_type_description',offset_correction_type_description)
581 |     var.setncattr('final_offset_correction_units','pixels')
582 |     var.setncattr('final_offset_correction_i',str(final_offset_correction_i))
583 |     var.setncattr('final_offset_correction_j',str(final_offset_correction_j))
584 | else:
585 |     var.setncattr('offset_correction_type_applied',offset_correction_type_applied)
586 |     var.setncattr('offset_correction_type_description',offset_correction_type_description)
587 |     var.setncattr('final_offset_correction_units','pixels')
588 |     var.setncattr('final_offset_correction_i','None')
589 |     var.setncattr('final_offset_correction_j','None')
590 | 
591 | 
592 | dst.close()
593 | 
594 | 
595 | 
--------------------------------------------------------------------------------
/pycorr_processing_tools/make_processing_commands_S2.py:
--------------------------------------------------------------------------------
1 | import glob
2 | import numpy as np
3 | import datetime as dt
4 | 
5 | inAfiles = glob.glob('S2*2021*_L2A')
6 | 
7 | pycorr_exec = 'python /Users/mark/repos/pycorr_iceflow/pycorr_iceflow_v1.1.py'
8 | out_list_name = f'command_list_{dt.datetime.now().strftime("%Y%m%dT%H%M%S")}.txt'
9 | putname_base_start = 'L2A_S2'
10 | min_delt = 5
11 | max_delt = 25
12 | max_v = 30  # m/d
13 | plotvmax = 25
14 | half_source_chip = 10
15 | inc = 10
16 | modval = 5  # for S2 could be 5 (include AB and BA as well) or 10 (only AA and BB)
17 | 
18 | command_list = []
19 | inAdts = [(dt.datetime.strptime(x.split('_')[2],"%Y%m%d"),x.split('_')[2],x) for x in inAfiles]
20 | inAdts.sort()
21 | 
22 | for ind1 in range(len(inAdts)-1):
23 |     dt1,dt1str,dir1 = inAdts[ind1]
24 | 
25 |     dts = [(x[0] - inAdts[ind1][0]).days for x in inAdts[ind1+1:]]
26 | 
27 |     dts_arr = np.array([(lambda x,y: x>=min_delt and x<=max_delt and y==0)(x,np.mod(x,modval)) for x in dts])
28 | 
29 |     for delt,(dt2,dt2str,dir2) in zip(np.array(dts)[dts_arr], np.array(inAdts[ind1+1:])[dts_arr] ):
30 |         cmdstr = f'{pycorr_exec} -img1dir {dir1} -img2dir {dir2} B08.tif B08.tif -img1datestr {dt1str} -img2datestr {dt2str} -datestrfmt "%Y%m%d" -inc {inc} -half_source_chip {half_source_chip} -half_target_chip {np.ceil((delt*max_v/10)+half_source_chip).astype(int)} -plotvmax {plotvmax} -log10 -out_name_base {putname_base_start}{dir1[2]}{dir2[2]}_{dir1.split("_")[1]}_{delt:02d}_{dt1str}_{dt2str} -progupdates -use_itslive_land_mask_from_web'
31 |         print(f'{cmdstr}')
32 |         command_list.append(cmdstr)
33 | 
34 | 
35 | with open(out_list_name,'w') as outf:
36 |     for line in command_list:
37 |         outf.write(f'{line}\n')
38 | 
39 | 
40 | 
--------------------------------------------------------------------------------
/pycorr_processing_tools/pycorrtools.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | from scipy.interpolate import RectBivariateSpline as RBS
3 | import cv2
4 | import gdal
5 | import gdalconst as gdc  # constants for gdal - e.g. GA_ReadOnly, GA_Update ( http://www.gdal.org )
6 | import os
7 | # import subprocess as sp
8 | # import string
9 | # import random
10 | # import sys
11 | # import time
12 | import datetime as dt
13 | import argparse
14 | import osr
15 | from scipy.ndimage.filters import gaussian_filter
16 | # from matplotlib.pyplot import get_cmap
17 | # import netCDF4
18 | # import re
19 | 
20 | ##########################################################################################
21 | #
22 | # pycorrtools
23 | #
24 | # library of functions used in image-to-image offset tracking, with an ice focus
25 | #
26 | # built initially from code written by Mark Fahnestock 2013-2020
27 | # modeled on functionality of IMCORR (M. Fahnestock 1992-93) (a C wrapper for greycorr fortran fft-based correlation routine for tiepointing)
28 | #
29 | ##########################################################################################
30 | 
31 | 
32 | 
33 | 
34 | 
35 | 
36 | 
37 | 
38 | 
39 | ##########################################################################################
40 | # chip_corr - returns subpixel location of peak match for source chip within target chip
41 | ###############
42 | #
43 | # corr_setup_params = {
44 | #     'peak_block_halfsize':peak_block_halfsize,
45 | #     'rbs_halfsize':rbs_halfsize,
46 | #     'rbs_order':rbs_order,
47 | #     'offset_one_tenth_j':offset_one_tenth_j,
48 | #     'offset_one_tenth_i':offset_one_tenth_i,
49 | #     'offset_one_hundredth_j':offset_one_hundredth_j,
50 | #     'offset_one_hundredth_i':offset_one_hundredth_i,
51 | #     'd2xdx2_spacing':d2xdx2_spacing,
52 | #     'corr_nodata_val':corr_nodata_val,
53 | #     'curvature_nodata_value':curvature_nodata_value
54 | #     }
55 | #
56 | # corr_return_values = {
57 | #     corr,     - correlation value at sub-pixel peak
58 | #     del_corr, - corr minus next highest peak (peak_block_halfsize blocked out around peak)
59 | #     del_i,    - offset in i direction (decimal pixels)
60 | #     del_j,    - offset in j direction (decimal pixels)
61 | #     d2idx2,   - curvature of peak in i direction
62 | #     d2jdx2    - curvature of peak in j direction
63 | #     }
64 | #
65 | #
66 | #
67 | ##########################################################################################
68 | def chip_corr(chip_src, chip_tar, **corr_setup_params):
69 |     """ docstring not here yet...see comment above
70 |     """
71 |     peak_block_halfsize = corr_setup_params['peak_block_halfsize']  # unpack the setup parameters used below
72 |     rbs_halfsize = corr_setup_params['rbs_halfsize']
73 |     # new_cent_loc is the integer number of pixels into the correlation surface in both i and j
74 |     # to the "0 offset" location (the middle of the correlation surface array)
75 |     # note peak1_loc array indicies are i,j instead of j,i, so new_cent_loc values switched here
76 |     #  - also resultCv.shape[::-1] reversed to get proper order below
77 |     new_cent_loc = [(chip_tar.shape[1] - chip_src.shape[1])/2.0, (chip_tar.shape[0] - chip_src.shape[0])/2.0]
78 |     resultCv=cv2.matchTemplate(chip_src, chip_tar, cv2.TM_CCOEFF_NORMED)
79 |     mml_1=cv2.minMaxLoc(resultCv)
80 |     peak1_corr=mml_1[1]
81 |     peak1_loc=np.array(mml_1[3])
82 | 
83 |     tempres_peak1_zeroed=resultCv.copy()  # del_corr is difference in peak height and next highest peak with area around first peak zeroed out - need copy to zero out peak
84 |     tempres_peak1_zeroed[(peak1_loc[1]-peak_block_halfsize):(peak1_loc[1]+peak_block_halfsize+1),(peak1_loc[0]-peak_block_halfsize):(peak1_loc[0]+peak_block_halfsize+1)]=0
85 | 
86 |     corr = peak1_corr
87 |     del_corr = peak1_corr - cv2.minMaxLoc(tempres_peak1_zeroed)[1]
88 |     # if peak is far enough from the edges of the correlation surface, fit locale of peak with a spline and find sub-pixel offset at 1/100th of a pixel level
89 |     if((np.array([n-rbs_halfsize for n in peak1_loc])>=np.array([0,0])).all() &
90 |        (np.array([(n+rbs_halfsize) for n in peak1_loc])<np.array(resultCv.shape[::-1])).all()):