├── CHANGES.txt ├── LICENSE ├── README.md ├── assets └── global_trenches_AV.txt ├── lib └── back_projection_lib.py ├── param └── back_projection_param.py ├── requirements.txt └── src └── back_projection_r2.py /CHANGES.txt: -------------------------------------------------------------------------------- 1 | 2 | requirements.txt, README, LICENSE, INSTALL.txt 3 | 2022-11-03 Manoch: v.2022.307 r2 Updated installation and license information. Added requirements document, 4 | removed the INSTALLATION file and updated the LICENSE file. 5 | 6 | src/back_projection_r2.py 7 | 2021-11-01 Manoch: v.2021.305 r2 (Python 3) data product release. 8 | 9 | lib/back_projection_lib.py 10 | 2021-11-01 Manoch: v.2021.305 r2 (Python 3) data product release. 11 | 12 | param/back_projection_param.py 13 | 2021-11-01 Manoch: v.2021.305 r2 (Python 3) data product release. -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | Apache License 2 | Version 2.0, January 2004 3 | http://www.apache.org/licenses/ 4 | 5 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 6 | 7 | 1. Definitions. 8 | 9 | "License" shall mean the terms and conditions for use, reproduction, 10 | and distribution as defined by Sections 1 through 9 of this document. 11 | 12 | "Licensor" shall mean the copyright owner or entity authorized by 13 | the copyright owner that is granting the License. 14 | 15 | "Legal Entity" shall mean the union of the acting entity and all 16 | other entities that control, are controlled by, or are under common 17 | control with that entity. For the purposes of this definition, 18 | "control" means (i) the power, direct or indirect, to cause the 19 | direction or management of such entity, whether by contract or 20 | otherwise, or (ii) ownership of fifty percent (50%) or more of the 21 | outstanding shares, or (iii) beneficial ownership of such entity. 22 | 23 | "You" (or "Your") shall mean an individual or Legal Entity 24 | exercising permissions granted by this License. 25 | 26 | "Source" form shall mean the preferred form for making modifications, 27 | including but not limited to software source code, documentation 28 | source, and configuration files. 29 | 30 | "Object" form shall mean any form resulting from mechanical 31 | transformation or translation of a Source form, including but 32 | not limited to compiled object code, generated documentation, 33 | and conversions to other media types. 34 | 35 | "Work" shall mean the work of authorship, whether in Source or 36 | Object form, made available under the License, as indicated by a 37 | copyright notice that is included in or attached to the work 38 | (an example is provided in the Appendix below). 39 | 40 | "Derivative Works" shall mean any work, whether in Source or Object 41 | form, that is based on (or derived from) the Work and for which the 42 | editorial revisions, annotations, elaborations, or other modifications 43 | represent, as a whole, an original work of authorship. For the purposes 44 | of this License, Derivative Works shall not include works that remain 45 | separable from, or merely link (or bind by name) to the interfaces of, 46 | the Work and Derivative Works thereof. 
47 | 48 | "Contribution" shall mean any work of authorship, including 49 | the original version of the Work and any modifications or additions 50 | to that Work or Derivative Works thereof, that is intentionally 51 | submitted to Licensor for inclusion in the Work by the copyright owner 52 | or by an individual or Legal Entity authorized to submit on behalf of 53 | the copyright owner. For the purposes of this definition, "submitted" 54 | means any form of electronic, verbal, or written communication sent 55 | to the Licensor or its representatives, including but not limited to 56 | communication on electronic mailing lists, source code control systems, 57 | and issue tracking systems that are managed by, or on behalf of, the 58 | Licensor for the purpose of discussing and improving the Work, but 59 | excluding communication that is conspicuously marked or otherwise 60 | designated in writing by the copyright owner as "Not a Contribution." 61 | 62 | "Contributor" shall mean Licensor and any individual or Legal Entity 63 | on behalf of whom a Contribution has been received by Licensor and 64 | subsequently incorporated within the Work. 65 | 66 | 2. Grant of Copyright License. Subject to the terms and conditions of 67 | this License, each Contributor hereby grants to You a perpetual, 68 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 69 | copyright license to reproduce, prepare Derivative Works of, 70 | publicly display, publicly perform, sublicense, and distribute the 71 | Work and such Derivative Works in Source or Object form. 72 | 73 | 3. Grant of Patent License. Subject to the terms and conditions of 74 | this License, each Contributor hereby grants to You a perpetual, 75 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 76 | (except as stated in this section) patent license to make, have made, 77 | use, offer to sell, sell, import, and otherwise transfer the Work, 78 | where such license applies only to those patent claims licensable 79 | by such Contributor that are necessarily infringed by their 80 | Contribution(s) alone or by combination of their Contribution(s) 81 | with the Work to which such Contribution(s) was submitted. If You 82 | institute patent litigation against any entity (including a 83 | cross-claim or counterclaim in a lawsuit) alleging that the Work 84 | or a Contribution incorporated within the Work constitutes direct 85 | or contributory patent infringement, then any patent licenses 86 | granted to You under this License for that Work shall terminate 87 | as of the date such litigation is filed. 88 | 89 | 4. Redistribution. 
You may reproduce and distribute copies of the 90 | Work or Derivative Works thereof in any medium, with or without 91 | modifications, and in Source or Object form, provided that You 92 | meet the following conditions: 93 | 94 | (a) You must give any other recipients of the Work or 95 | Derivative Works a copy of this License; and 96 | 97 | (b) You must cause any modified files to carry prominent notices 98 | stating that You changed the files; and 99 | 100 | (c) You must retain, in the Source form of any Derivative Works 101 | that You distribute, all copyright, patent, trademark, and 102 | attribution notices from the Source form of the Work, 103 | excluding those notices that do not pertain to any part of 104 | the Derivative Works; and 105 | 106 | (d) If the Work includes a "NOTICE" text file as part of its 107 | distribution, then any Derivative Works that You distribute must 108 | include a readable copy of the attribution notices contained 109 | within such NOTICE file, excluding those notices that do not 110 | pertain to any part of the Derivative Works, in at least one 111 | of the following places: within a NOTICE text file distributed 112 | as part of the Derivative Works; within the Source form or 113 | documentation, if provided along with the Derivative Works; or, 114 | within a display generated by the Derivative Works, if and 115 | wherever such third-party notices normally appear. The contents 116 | of the NOTICE file are for informational purposes only and 117 | do not modify the License. You may add Your own attribution 118 | notices within Derivative Works that You distribute, alongside 119 | or as an addendum to the NOTICE text from the Work, provided 120 | that such additional attribution notices cannot be construed 121 | as modifying the License. 122 | 123 | You may add Your own copyright statement to Your modifications and 124 | may provide additional or different license terms and conditions 125 | for use, reproduction, or distribution of Your modifications, or 126 | for any such Derivative Works as a whole, provided Your use, 127 | reproduction, and distribution of the Work otherwise complies with 128 | the conditions stated in this License. 129 | 130 | 5. Submission of Contributions. Unless You explicitly state otherwise, 131 | any Contribution intentionally submitted for inclusion in the Work 132 | by You to the Licensor shall be under the terms and conditions of 133 | this License, without any additional terms or conditions. 134 | Notwithstanding the above, nothing herein shall supersede or modify 135 | the terms of any separate license agreement you may have executed 136 | with Licensor regarding such Contributions. 137 | 138 | 6. Trademarks. This License does not grant permission to use the trade 139 | names, trademarks, service marks, or product names of the Licensor, 140 | except as required for reasonable and customary use in describing the 141 | origin of the Work and reproducing the content of the NOTICE file. 142 | 143 | 7. Disclaimer of Warranty. Unless required by applicable law or 144 | agreed to in writing, Licensor provides the Work (and each 145 | Contributor provides its Contributions) on an "AS IS" BASIS, 146 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 147 | implied, including, without limitation, any warranties or conditions 148 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A 149 | PARTICULAR PURPOSE. 
You are solely responsible for determining the 150 | appropriateness of using or redistributing the Work and assume any 151 | risks associated with Your exercise of permissions under this License. 152 | 153 | 8. Limitation of Liability. In no event and under no legal theory, 154 | whether in tort (including negligence), contract, or otherwise, 155 | unless required by applicable law (such as deliberate and grossly 156 | negligent acts) or agreed to in writing, shall any Contributor be 157 | liable to You for damages, including any direct, indirect, special, 158 | incidental, or consequential damages of any character arising as a 159 | result of this License or out of the use or inability to use the 160 | Work (including but not limited to damages for loss of goodwill, 161 | work stoppage, computer failure or malfunction, or any and all 162 | other commercial damages or losses), even if such Contributor 163 | has been advised of the possibility of such damages. 164 | 165 | 9. Accepting Warranty or Additional Liability. While redistributing 166 | the Work or Derivative Works thereof, You may choose to offer, 167 | and charge a fee for, acceptance of support, warranty, indemnity, 168 | or other liability obligations and/or rights consistent with this 169 | License. However, in accepting such obligations, You may act only 170 | on Your own behalf and on Your sole responsibility, not on behalf 171 | of any other Contributor, and only if You agree to indemnify, 172 | defend, and hold each Contributor harmless for any liability 173 | incurred by, or claims asserted against, such Contributor by reason 174 | of your accepting any such warranty or additional liability. 175 | 176 | END OF TERMS AND CONDITIONS 177 | 178 | APPENDIX: How to apply the Apache License to your work. 179 | 180 | To apply the Apache License to your work, attach the following 181 | boilerplate notice, with the fields enclosed by brackets "[]" 182 | replaced with your own identifying information. (Don't include 183 | the brackets!) The text should be enclosed in the appropriate 184 | comment syntax for the file format. We also recommend that a 185 | file or class name and description of purpose be included on the 186 | same "printed page" as the copyright notice for easier 187 | identification within third-party archives. 188 | 189 | Copyright [yyyy] [name of copyright owner] 190 | 191 | Licensed under the Apache License, Version 2.0 (the "License"); 192 | you may not use this file except in compliance with the License. 193 | You may obtain a copy of the License at 194 | 195 | http://www.apache.org/licenses/LICENSE-2.0 196 | 197 | Unless required by applicable law or agreed to in writing, software 198 | distributed under the License is distributed on an "AS IS" BASIS, 199 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 200 | See the License for the specific language governing permissions and 201 | limitations under the License. 202 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | 2 | # BackProjection Data Product 3 | 4 | This Python 3 package contains the code behind the creation of [BackProjection data product](http://ds.iris.edu/ds/products/backprojection/) plots and animations: 5 | 6 | * Back projection plots and animations for the NA, EU, AU, and GSN virtual networks. 7 | * Animation of array response functions for all networks. 
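Once installed (see the Installation section below), the plots and animations are produced by the main script, src/back_projection_r2.py. As a minimal first check (a sketch only; the script's own usage message, printed with the `-h` option described under Package testing, lists the authoritative examples), you can run:

```
python3 src/back_projection_r2.py -h
```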
8 | 9 | 10 | ## Installation 11 | 12 | Either clone the repository or download a [release](https://github.com/iris-edu/backprojection/releases) and unzip/untar it. 13 | 14 | ### Requirements 15 | 16 | * [Python](https://www.python.org/) 3 17 | * Python modules listed in `requirements.txt` 18 | * Install these modules with `pip install -r requirements.txt` 19 | 20 | This package has been tested under Python 3.9.2 on macOS 12.5.1; it may also work with older Python 3 versions. 21 | 22 | ### Configuring the package 23 | 24 | The main Python code in this package (_back\_projection\_r2.py_) can be configured via its parameter file 25 | (_back\_projection\_param.py_) and through command-line arguments. Currently, the parameters are optimized for 26 | use with the four preconfigured virtual networks defined in the parameter file (NA, EU, AU, and GSN). 27 | 28 | With Python configured, you should be able to run the package examples without further modifications. However: 29 | * if necessary, update the Python path on the first line of src/back_projection_r2.py 30 | * if desired, configure the package by updating the param/back_projection_param.py file 31 | * the package will create the following directories as needed: 32 | - data: output data files (station list and peak amplitude values) 33 | - image: output images 34 | - log: log directory 35 | - metadata: metadata directory (not used) 36 | - scratch: work directory (cleaned up after each run) 37 | - video: output videos and video screenshots 38 | 39 | For more information, visit the [Wiki page](https://github.com/iris-edu/backprojection/wiki). 40 | 41 | 42 | ### Package testing 43 | 44 | Run the main code (src/back_projection_r2.py) with the "-h" option. If you have properly configured your Python 45 | installation, it will print a usage message. 46 | 47 | Run the first example in the usage message (the low-resolution example). This will take the code through all the steps. Check the image, 48 | video and data directories for the outputs. 49 | 50 | ## Package contents 51 | 52 | This package contains the following files: 53 | 54 | src/ 55 | back_projection_r2.py 56 | This is the main Python code behind the production of plots and animations. Calling 57 | the code with the -h option displays a list of other options available to tune plot and 58 | animation production. It also provides examples to run. 59 | 60 | param/ 61 | back_projection_param.py 62 | A Python file that contains all the BackProjection data product parameters. You may 63 | modify this file to customize the virtual networks, plots, and animations. All parameter 64 | definitions in this file must follow Python syntax. Each parameter group in this file 65 | is commented for clarification. 66 | 67 | lib/ 68 | back_projection_lib.py 69 | A Python utility library used by the main script. 70 | 71 | requirements.txt 72 | List of the required Python modules. 73 | 74 | CHANGES.txt 75 | History of changes to this package. 76 | 77 | README.md 78 | The package README file. 79 | 80 | ## Citation 81 | 82 | To cite the use of this software, reference: 83 | 84 | ``` 85 | Trabant, C., A. R. Hutko, M. Bahavar, R. Karstens, T. Ahern, and R.
Aster (2012), Data Products at the IRIS DMC: 86 | Stepping Stones for Research and Other Applications, Seismological Research Letters, 83(5), 846–854, 87 | https://doi.org/10.1785/0220120032 88 | ``` 89 | 90 | Or cite the following DOI: 91 | 92 | ``` 93 | doi:10.17611/dp/bp.code.1 94 | ``` 95 | 96 | ## Authors 97 | 98 | Incorporated Research Institutions for Seismology (IRIS) 99 | Data Management Center (DMC) 100 | Data Products Team 101 | 102 | ### Comments or questions 103 | 104 | Please contact data-help@earthscopeconsortium.atlassian.net 105 | 106 | 107 | ## License 108 | 109 | Licensed under the Apache License, Version 2.0 (the "License"); 110 | you may not use this file except in compliance with the License. 111 | You may obtain a copy of the License at 112 | 113 | [http://www.apache.org/licenses/LICENSE-2.0](http://www.apache.org/licenses/LICENSE-2.0) 114 | 115 | Unless required by applicable law or agreed to in writing, software 116 | distributed under the License is distributed on an "AS IS" BASIS, 117 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 118 | See the License for the specific language governing permissions and 119 | limitations under the License. 120 | 121 | Copyright (C) 2022 Manochehr Bahavar, IRIS Data Management Center 122 | 123 | 124 | -------------------------------------------------------------------------------- /lib/back_projection_lib.py: -------------------------------------------------------------------------------- 1 | import sys 2 | import os 3 | 4 | import math 5 | 6 | from obspy.geodetics import degrees2kilometers 7 | # NOTE: gps2dist_azimuth will check if you have installed the Python module geographiclib. It will be better to 8 | # have it installed. 9 | from obspy.geodetics.base import gps2dist_azimuth, kilometer2degrees 10 | from obspy import UTCDateTime 11 | from obspy import read 12 | from obspy.taup import TauPyModel 13 | from obspy.signal.cross_correlation import correlate, xcorr_max 14 | from obspy.signal.filter import envelope 15 | 16 | from obspy.core.stream import Stream 17 | 18 | import matplotlib as mpl 19 | 20 | from datetime import datetime 21 | 22 | import numpy as np 23 | import time 24 | 25 | from urllib.request import urlopen 26 | 27 | # Import the back projection parameters and libraries. 28 | _dir = os.path.abspath(os.path.join(os.path.dirname(__file__), '..')) 29 | param_dir = os.path.join(_dir, 'param') 30 | 31 | sys.path.append(param_dir) 32 | 33 | import back_projection_param as param 34 | 35 | """ 36 | Description: 37 | 38 | A Python utility library used by the main script. 39 | 40 | Copyright and License: 41 | 42 | This software Copyright (c) 2021 IRIS (Incorporated Research Institutions for Seismology). 43 | 44 | This program is free software: you can redistribute it and/or modify 45 | it under the terms of the GNU General Public License as published by 46 | the Free Software Foundation, either version 3 of the License, or (at 47 | your option) any later version. 48 | 49 | This program is distributed in the hope that it will be useful, but 50 | WITHOUT ANY WARRANTY; without even the implied warranty of 51 | MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU 52 | General Public License for more details. 53 | 54 | You should have received a copy of the GNU General Public License 55 | along with this program. If not, see http://www.gnu.org/licenses/. 56 | 57 | History: 58 | 2021-11-01 Manoch: v.2021.305 r2 (Python 3) data product release. 59 | 60 | """ 61 | 62 | # Parameters. 
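# The module-level names below simply mirror entries in param/back_projection_param.py. For example,
# param.virtual_networks is expected to be a dictionary keyed by the virtual network codes described in the
# README ('NA', 'EU', 'AU', 'GSN'); each value carries that network's settings (e.g. 'xcorr_min',
# 'sta_weight_dist', 'sta_weight_azim') used by the functions further below.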
63 | vn = param.virtual_networks 64 | vn_list = list(vn.keys()) 65 | vn_min_radius = param.vn_min_radius 66 | vn_max_radius = param.vn_max_radius 67 | vn_azimuth = param.vn_azimuth 68 | 69 | model = TauPyModel(model=param.travel_time_model) 70 | 71 | channel_order = param.channel_order 72 | channel_list = channel_order.keys() 73 | 74 | eq_min_radius = param.eq_min_radius 75 | eq_max_radius = param.eq_max_radius 76 | 77 | dc_to_exclude = param.dc_to_exclude 78 | 79 | chunk_count = param.chunk_count 80 | 81 | earthquakes = param.earthquakes 82 | 83 | fedcatalog_service_url = param.fedcatalog_service_url 84 | 85 | log_file = sys.stdout 86 | verbose = param.verbose 87 | 88 | timing = param.timing 89 | 90 | 91 | class ObjDict(dict): 92 | """Accessing dictionary items as object attributes: 93 | https://goodcode.io/articles/python-dict-object/ 94 | """ 95 | 96 | def __getattr__(self, name): 97 | if name in self: 98 | return self[name] 99 | else: 100 | raise AttributeError("No such attribute: {}".format(name)) 101 | 102 | def __setattr__(self, name, value): 103 | self[name] = value 104 | 105 | def __delattr__(self, name): 106 | if name in self: 107 | del self[name] 108 | else: 109 | raise AttributeError("No such attribute: {}".format(name)) 110 | 111 | 112 | def version_timestamp(version, search_radius, delimiter=' '): 113 | current_time = datetime.utcnow() 114 | if search_radius is None: 115 | second_line = f'Chan: {param.request_channel}' 116 | else: 117 | second_line = f'Chan: {param.request_channel}, sparse search: {search_radius}°' 118 | 119 | timestamp = f'{param.production_label}{delimiter}({version}/{current_time.strftime("%Y-%m-%d %H:%M")} UTC)'\ 120 | f'\n{second_line}' 121 | return timestamp 122 | 123 | 124 | def file_name_tag(event_date, date=None): 125 | """Create a file name tag from the event's date and time.""" 126 | if date is not None: 127 | event_date = datetime.strptime(event_date, '%Y-%m-%dT%H:%M:%S') 128 | tag = f"{event_date.strftime('%Y.%m.%d.%H.%M')}" 129 | return tag 130 | 131 | 132 | def print_message(flag, text, flush=True, log=sys.stdout, end='\n'): 133 | """Print out a message. 
Force the flush and write to the file handle""" 134 | 135 | if flag == 'ERR': 136 | print(f'\n\n{60 * "="}\n', file=log, flush=flush, end=end) 137 | print(f'[{flag}] {text}', sep='\n', file=log, flush=flush, end=end) 138 | print(f'\n\n{60 * "="}\n', file=log, flush=flush, end=end) 139 | elif log is not None: 140 | print(f'[{flag}] {text}', file=log, flush=flush, end=end) 141 | else: 142 | print(f'[{flag}] {text}', flush=flush, end=end) 143 | 144 | 145 | def read_global_trenches(bmap=None, log=sys.stdout): 146 | """Read location of the global trenches from the data file.""" 147 | global_trench_file = os.path.join(param.assets_dir, param.global_trenches_file) 148 | try: 149 | fp = open(global_trench_file, 'r') 150 | except Exception as ex: 151 | print_message('ERR', f'problem reading the trench file {global_trench_file}, will not plot trenches\n{ex}', log=log) 152 | return None, None 153 | 154 | data = fp.read() 155 | fp.close() 156 | lines = data.split('\n') 157 | trench_x = list() 158 | trench_y = list() 159 | for line in lines: 160 | line = line.strip() 161 | if not line: 162 | continue 163 | values = line.split() 164 | if values[0] == 'NaN': 165 | trench_x.append(float('NaN')) 166 | trench_y.append(float('NaN')) 167 | elif bmap is None: 168 | trench_x.append(float(values[0])) 169 | trench_y.append(float(values[1])) 170 | else: 171 | _x, _y = bmap(float(values[0]), float(values[1])) 172 | trench_x.append(_x) 173 | trench_y.append(_y) 174 | 175 | return trench_x, trench_y 176 | 177 | 178 | def get_location(lat1, lon1, brng, distance_degrees): 179 | """Find the location of a point at a given distance from lat1, lon1 180 | source: https://stochasticcoder.com/2016/04/06/python-custom-distance-radius-with-basemap/""" 181 | lat1 = math.radians(lat1) 182 | lon1 = math.radians(lon1) 183 | distance_radians = math.radians(distance_degrees) 184 | brng = math.radians(brng) 185 | 186 | lat2 = math.asin(math.sin(lat1) * math.cos(distance_radians) 187 | + math.cos(lat1) * math.sin(distance_radians) * math.cos(brng)) 188 | 189 | lon2 = lon1 + math.atan2(math.sin(brng) * math.sin(distance_radians) 190 | * math.cos(lat1), math.cos(distance_radians) - math.sin(lat1) * math.sin(lat2)) 191 | 192 | lon2 = math.degrees(lon2) 193 | lat2 = math.degrees(lat2) 194 | 195 | return lat2, lon2 196 | 197 | 198 | def create_circle(lat0, lon0, radius_degrees): 199 | """Create a circle around lat0, lon0 for a given radius 200 | source: https://stochasticcoder.com/2016/04/06/python-custom-distance-radius-with-basemap/""" 201 | lat_list = list() 202 | lon_list = list() 203 | 204 | for brng in range(0, 360): 205 | lat2, lon2 = get_location(lat0, lon0, brng, radius_degrees) 206 | lat_list.append(lat2) 207 | lon_list.append(lon2) 208 | 209 | return lon_list, lat_list 210 | 211 | 212 | def make_cmap(colors, position=None, bit=False, log=sys.stdout): 213 | """ 214 | Source: Chris Slocum (with some modifications) http://schubert.atmos.colostate.edu/~cslocum/custom_cmap.html 215 | 216 | make_cmap takes a list of tuples which contain RGB values. The RGB 217 | values may either be in 8-bit [0 to 255] (in which case bit must be set to 218 | True when called) or arithmetic [0 to 1] (default). make_cmap returns 219 | a cmap with equally spaced colors. 220 | Arrange your tuples so that the first color is the lowest value for the 221 | colorbar and the last is the highest. 222 | position contains values from 0 to 1 to dictate the location of each color.
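Example (illustrative only; the colors and positions are made up for demonstration):
a blue-white-red colormap with white pinned at the midpoint is built as
cmap = make_cmap([(0, 0, 1), (1, 1, 1), (1, 0, 0)], position=[0.0, 0.5, 1.0])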
223 | """ 224 | 225 | bit_rgb = np.linspace(0, 1, 256) 226 | if position is None: 227 | position = np.linspace(0, 1, len(colors)) 228 | else: 229 | if len(position) != len(colors): 230 | print_message('ERR', f'position length ({len(position)}) ' 231 | f'must be the same as colors ({len(colors)})', log=log) 232 | sys.exit(2) 233 | elif position[0] != 0 or position[-1] != 1: 234 | print_message('ERR', f'position must start with 0 and end with 1 ({position[0]},' 235 | f' {position[-1]})', log=log) 236 | sys.exit(2) 237 | if bit: 238 | for i in range(len(colors)): 239 | colors[i] = (bit_rgb[colors[i][0]], 240 | bit_rgb[colors[i][1]], 241 | bit_rgb[colors[i][2]]) 242 | cdict = {'red': list(), 'green': list(), 'blue': list()} 243 | for pos, color in zip(position, colors): 244 | cdict['red'].append((pos, color[0], color[0])) 245 | cdict['green'].append((pos, color[1], color[1])) 246 | cdict['blue'].append((pos, color[2], color[2])) 247 | 248 | cmap = mpl.colors.LinearSegmentedColormap('bp_colormap', cdict, 256) 249 | return cmap 250 | 251 | 252 | def set_float_digits(value, places=4): 253 | """Set a float number precision""" 254 | form = f'0.{places}f' 255 | new_value = float(f'{value:{form}}') 256 | 257 | return new_value 258 | 259 | 260 | def case(this_val, case_dict): 261 | """Simulates a case statement to check a value against given ranges in a dictionary.""" 262 | 263 | default = case_dict['default'] 264 | condition = case_dict['condition'] 265 | ranges = case_dict['ranges'] 266 | 267 | reverse = False 268 | if condition in ('>', '>='): 269 | reverse = True 270 | value_items = sorted(ranges.items(), reverse=reverse) 271 | 272 | for limit, value in value_items: 273 | if condition == '<' and this_val < limit: 274 | return value 275 | elif condition == '<=' and this_val <= limit: 276 | return value 277 | elif condition == '==' and this_val == limit: 278 | return value 279 | elif condition == '>' and this_val > limit: 280 | return value 281 | if condition == '>=' and this_val >= limit: 282 | return value 283 | elif condition == '!=' and this_val != limit: 284 | return value 285 | return default 286 | 287 | 288 | def time_it(t, stage='', end='\n', log=sys.stdout): 289 | """Compute elapsed time since the last call.""" 290 | if stage is None: 291 | return t 292 | t1 = time.time() 293 | dt = t1 - t 294 | if dt < param.timing_threshold: 295 | return t 296 | if dt < 1.0: 297 | print_message('TIME', f'{stage} in {dt:0.3f} second', end=end, log=log) 298 | elif dt < 5.0: 299 | print_message('TIME', f'{stage} in {dt:0.2f} seconds', end=end, log=log) 300 | elif dt < 100.0: 301 | print_message('TIME', f'{stage} in {dt:0.1f} seconds', end=end, log=log) 302 | else: 303 | print_message('TIME', f'{stage} in {dt:0.0f} seconds', end=end, log=log) 304 | t = t1 305 | return t 306 | 307 | 308 | def mkdir(target_directory, log=sys.stderr): 309 | """ Make a directory if it does not exist.""" 310 | directory = None 311 | try: 312 | directory = target_directory 313 | if not os.path.exists(directory): 314 | os.makedirs(directory) 315 | return directory 316 | except Exception as _er: 317 | print_message('ERR', f'failed to create directory {directory}\n{_er}', log=log) 318 | return None 319 | 320 | 321 | def sign(number): 322 | """Find sign of a number""" 323 | 324 | num_sign = math.copysign(1, number) 325 | return num_sign 326 | 327 | 328 | def stfs_from_mccc_traces(eq_datetime, eq_magnitude, trace_list, optimal_mccc, double_mccc): 329 | """Create STFs based on the MCCCC traces.""" 330 | 331 | # Window length before the 
phase. 332 | pre_phase_seconds = 20.0 333 | 334 | # STF length in seconds. 335 | stf_length = case(eq_magnitude, param.stf_length_dict) 336 | 337 | for net_sta in trace_list.keys(): 338 | trace = trace_list[net_sta] 339 | 340 | p_time = trace['phase_delay'] 341 | # xcorr_win is the xcorr window starting pre_phase_seconds before the phase. 342 | segment_1 = trace['tr_final'].slice(starttime=eq_datetime + p_time - pre_phase_seconds, 343 | endtime=eq_datetime + p_time - pre_phase_seconds + stf_length) 344 | segment_1.normalize() 345 | 346 | segment_2 = trace['tr_final'].slice(starttime=eq_datetime + p_time - pre_phase_seconds, 347 | endtime=eq_datetime + p_time - pre_phase_seconds + 348 | optimal_mccc[net_sta]['delay']) 349 | segment_2.normalize() 350 | 351 | segment_3 = trace['tr_final'].slice(starttime=eq_datetime + p_time - pre_phase_seconds, 352 | endtime=eq_datetime + p_time - pre_phase_seconds + 353 | double_mccc[net_sta]['delay']) 354 | segment_3.normalize() 355 | 356 | 357 | def station_weight(trace_list, vn_name, intra_station_dist): 358 | """Compute wight for each active station based on the distance and azimuth of the neighboring stations.""" 359 | 360 | sta_weight_dist = param.virtual_networks[vn_name]['sta_weight_dist'] 361 | sta_weight_azim = param.virtual_networks[vn_name]['sta_weight_azim'] 362 | weight = dict() 363 | # Check one station at a time. 364 | for center_key in trace_list.keys(): 365 | center = trace_list[center_key] 366 | 367 | neighbor_dist_count = 0 368 | neighbor_azim_count = 0 369 | 370 | center_lat = center['lat'] 371 | center_lon = center['lon'] 372 | 373 | center_azim = center['azim'] 374 | 375 | weight[center_key] = 1.0 376 | 377 | # Find how far are other stations from this station. 378 | for sta_net_key in trace_list.keys(): 379 | if center_key == sta_net_key: 380 | continue 381 | 382 | neighbor = trace_list[sta_net_key] 383 | center_neighbor_key = f'{center_key}_{sta_net_key}' 384 | 385 | if center_neighbor_key in intra_station_dist: 386 | _dist, _distk, _azim, _back_azim = intra_station_dist[center_neighbor_key] 387 | else: 388 | _dist, _azim, _back_azim = gps2dist_azimuth(center_lat, center_lon, 389 | neighbor['lat'], 390 | neighbor['lon']) 391 | _distk = _dist / 1000.0 392 | _dist = kilometer2degrees(_distk) 393 | intra_station_dist[f'{center_key}_{neighbor}'] = (_dist, _distk, _azim, _back_azim) 394 | intra_station_dist[f'{neighbor}_{center_key}'] = (_dist, _distk, _azim, _back_azim) 395 | 396 | if _distk < sta_weight_dist: 397 | neighbor_dist_count += 1 398 | 399 | azim_diff_1 = abs(center_azim - _azim) 400 | azim_diff_2 = abs((360 - center_azim) - _azim) 401 | azim_diff_3 = abs(center_azim - (_azim - 360)) 402 | azim_diff = min(azim_diff_1, azim_diff_2, azim_diff_3) 403 | 404 | if azim_diff < sta_weight_azim: 405 | neighbor_azim_count += 1 406 | _weight = max(neighbor_dist_count, neighbor_azim_count, neighbor_dist_count * neighbor_azim_count) 407 | if _weight == 0: 408 | weight[center_key] = 1.0 409 | else: 410 | weight[center_key] = 1.0 / _weight 411 | 412 | return intra_station_dist, weight 413 | 414 | 415 | def get_slice(eq_datetime, trace, pre_sec, total_sec, delay=0.0): 416 | """Get a slice of a trace.""" 417 | 418 | # Calculate the Phase arrival time with correction_sec representing additional correction. 419 | p_time = eq_datetime + trace['phase_delay'] + delay 420 | _start = p_time - pre_sec 421 | _end = _start + total_sec 422 | 423 | # Should use filtered or raw traces. 
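# param.filtered_trace_align is expected to be a boolean switch in the parameter file: when True the
# band-pass filtered copy ('tr_filter') is sliced, otherwise the 'tr_final' trace is used.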
424 | if param.filtered_trace_align: 425 | trace_slice = trace['tr_filter'].copy() 426 | else: 427 | trace_slice = trace['tr_final'].copy() 428 | 429 | # Now slice the trace. 430 | trace_slice.trim(starttime=_start, endtime=_end, pad=True, nearest_sample=True, fill_value=0.0) 431 | return trace_slice 432 | 433 | 434 | def mccc(eq_datetime, trace_list, xcorr_win, shift_t, correction_sec=0.0): 435 | """Multi-channel cross-correlation to estimate time delays using multiple shift.""" 436 | mccc_cc_max = dict() 437 | mccc_cc_shift_sec = dict() 438 | pre_phase_seconds = param.pre_phase_seconds 439 | 440 | # shift represents the number of samples to shift for cross correlation. 441 | shift = int(shift_t * param.trace_sampling_frequency) 442 | 443 | bad_traces = list() 444 | for net_sta_a in trace_list.keys(): 445 | 446 | # xcorr_win is the xcorr window starting pre_phase_seconds before the phase. 447 | segment_a = get_slice(eq_datetime, trace_list[net_sta_a], pre_phase_seconds, 448 | xcorr_win, correction_sec) 449 | 450 | cc_max_list = list() 451 | cc_max_index_list = list() 452 | 453 | for net_sta_b in trace_list.keys(): 454 | if net_sta_a == net_sta_b: 455 | continue 456 | segment_b = get_slice(eq_datetime, trace_list[net_sta_b], pre_phase_seconds, 457 | xcorr_win, correction_sec) 458 | 459 | cc = correlate(segment_a, segment_b, shift, demean=False) 460 | 461 | # Use the absolute value to find the max. 462 | shift_index, shift_value = xcorr_max(cc, abs_max=True) 463 | 464 | # Keep index for time shift calculation and value for comarison. 465 | if not math.isnan(shift_value): 466 | cc_max_index_list.append(shift_index) 467 | cc_max_list.append(abs(shift_value)) 468 | 469 | # MCCC for net_sta_a is done, so save the necessary values. 470 | if cc_max_list: 471 | # Calculate mean of the CC parameters. 472 | mean_cc_max = np.mean(cc_max_list) 473 | mean_cc_max_index = int(np.mean(cc_max_index_list)) 474 | mccc_cc_shift_sec[net_sta_a] = mean_cc_max_index / param.trace_sampling_frequency 475 | mccc_cc_max[net_sta_a] = mean_cc_max 476 | 477 | else: 478 | # Station with bad mccc. 479 | bad_traces.append(net_sta_a) 480 | 481 | return bad_traces, mccc_cc_max, mccc_cc_shift_sec 482 | 483 | 484 | def find_optimal_mccc_window(eq_datetime, trace_list, vnet, log=sys.stdout): 485 | """Performs MCCC for multiple windows.""" 486 | # 1103. 487 | optimal_mccc_window = dict() 488 | stf_pre_phase_seconds = param.stf_pre_phase_seconds 489 | stf_post_phase_seconds = param.stf_post_phase_seconds 490 | stf_search_seconds = param.stf_search_seconds 491 | shift = param.xcorr_shift_default 492 | 493 | # Go through all xcorr windows and see which one produces the maximum stacking amplitude. 494 | for xc_window in param.xcorr_window: 495 | # Create empty stack stream. 496 | stack_stream = read() 497 | stack_stream.clear() 498 | 499 | # Compute CC and time shift for this window. 500 | print_message('INFO', f'Evaluating mccc of {len(trace_list)} traces for xc_window of {xc_window}s.', log=log) 501 | bad_traces, cc_max, cc_delay = mccc(eq_datetime, trace_list, xc_window, shift) 502 | 503 | rejected_list = bad_traces.copy() 504 | # Inspect CC for each station. 505 | for tr_key in cc_max.keys(): 506 | if tr_key in bad_traces: 507 | continue 508 | 509 | # Only apply the correction if improvement is more than minimum. If this window is 510 | # selected, then the low CC stations will be removed. Segment is a little longer than 511 | # needed to avoid end effects. 
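# In other words: a trace is kept only if its mean cross-correlation maximum from mccc() reaches the
# virtual network's 'xcorr_min' threshold; kept traces are sliced with their mean delay applied and added
# to stack_stream, while the rest are appended to rejected_list.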
512 | if cc_max[tr_key] >= vnet['xcorr_min']: 513 | _segment = get_slice(eq_datetime, trace_list[tr_key], param.pre_phase_seconds, 514 | xc_window + param.pre_phase_seconds, cc_delay[tr_key]) 515 | 516 | stack_stream.append(_segment) 517 | else: 518 | rejected_list.append(tr_key) 519 | print_message('WARN', f'Skipped {tr_key} for MCCC {xc_window} ' 520 | f'because CC {abs(cc_max[tr_key]):0.2f} is < {vnet["xcorr_min"]}', log=log) 521 | # Did anything go into stack_stream? 522 | if not stack_stream: 523 | print_message('WARN', f'For xcorr_window of {xc_window} s. The stack is empty!', log=log) 524 | continue 525 | 526 | stack = np.sum([tr.data for tr in stack_stream], axis=0) / len(stack_stream) 527 | max_search_index = int(round(stf_search_seconds * param.trace_sampling_frequency)) 528 | stf = stack[0:max_search_index].copy() 529 | stack_max = np.max(abs(stf)) 530 | print_message('INFO', f'Evaluated xcorr_window of {xc_window} s. The stack maximum is {stack_max:0.2f} for ' 531 | f'{len(stack_stream)} traces.', log=log) 532 | 533 | if not optimal_mccc_window: 534 | optimal_mccc_window['window_length'] = xc_window 535 | optimal_mccc_window['cc_max'] = cc_max 536 | optimal_mccc_window['delay'] = cc_delay.copy() 537 | optimal_mccc_window['stack_max'] = stack_max 538 | optimal_mccc_window['rejected_list'] = rejected_list 539 | optimal_mccc_window['stf'] = stf 540 | elif stack_max > optimal_mccc_window['stack_max']: 541 | optimal_mccc_window['window_length'] = xc_window 542 | optimal_mccc_window['cc_max'] = cc_max 543 | optimal_mccc_window['delay'] = cc_delay.copy() 544 | optimal_mccc_window['stack_max'] = stack_max 545 | optimal_mccc_window['rejected_list'] = rejected_list 546 | optimal_mccc_window['stf'] = stf 547 | 548 | # Did we find optimal_mccc_window? 549 | if not optimal_mccc_window: 550 | return trace_list, len(stack_stream), None 551 | 552 | print_message('INFO', f'Optimum X-Corr shift is {optimal_mccc_window["window_length"]:0.1f} seconds', log=log) 553 | 554 | # Only select traces that contributed to the optimum x-corr, if their amplitude above the 555 | # minimum designated value. Remove traces with bad MCCCC. 556 | for key in optimal_mccc_window['rejected_list']: 557 | if key in optimal_mccc_window['cc_max'].keys(): 558 | print_message('WARN', f'Removing {key} from trace list in "find_optimal_mccc_window" ' 559 | f'due to bad MCCC', log=log) 560 | optimal_mccc_window['cc_max'].pop(key) 561 | optimal_mccc_window['delay'].pop(key) 562 | if key in trace_list: 563 | trace_list.pop(key) 564 | 565 | # Reset the stack stream. 566 | stack_stream = read() 567 | stack_stream.clear() 568 | 569 | xcorr = int(min(25.0, optimal_mccc_window['window_length']) * param.trace_sampling_frequency) + 1 570 | for tr_key in optimal_mccc_window['cc_max']: 571 | cc_delay = optimal_mccc_window['delay'][tr_key] 572 | trace_list[tr_key]['mccc_delay'] = cc_delay 573 | print_message('INFO', f'{tr_key} MCCC {optimal_mccc_window["window_length"]} ' 574 | f'CC {abs(cc_max[tr_key]):0.2f} is >= {vnet["xcorr_min"]}', 575 | log=log) 576 | 577 | # Do one last pass at aligning by CCing w the best STF. 
578 | #tr_slice = get_slice(eq_datetime, trace_list[tr_key], param.stf_pre_phase_seconds, 579 | # stf_post_phase_seconds + param.pre_phase_seconds, cc_delay) 580 | #cc = correlate(tr_slice, optimal_mccc_window['stf'], shift) 581 | #index_shift, value = xcorr_max(cc, abs_max=True) 582 | #correction = index_shift / param.trace_sampling_frequency 583 | #print_message('INFO', f'{tr_key} additional {correction} seconds correction from STF', log=log) 584 | #optimal_mccc_window['mccc_delay'][tr_key] += correction 585 | 586 | # Apply the correction and stack. 587 | tr_slice = get_slice(eq_datetime, trace_list[tr_key], stf_pre_phase_seconds, stf_post_phase_seconds, 588 | cc_delay) 589 | stack_stream.append(tr_slice.copy()) 590 | 591 | # Sufficient number of traces left? 592 | if len(stack_stream) < param.min_num_sta: 593 | return trace_list, len(stack_stream), None 594 | 595 | stack = np.sum([tr.data for tr in stack_stream], axis=0) / len(stack_stream) 596 | times = (stack_stream[0]).times() 597 | 598 | optimal_mccc_window['stack'] = stack 599 | optimal_mccc_window['time'] = times 600 | 601 | return trace_list, len(stack_stream), optimal_mccc_window 602 | 603 | 604 | def find_dense_sta_patch(intra_station_dist, vnet, trace_list, search_radius_edge, log=sys.stdout): 605 | """Find patch of dense stations. Currently, we take the first station with this many neighbors to match Alex's 606 | code. We may have to revisit this later.""" 607 | 608 | # Number of neighboring stations. 609 | neighbor_count = dict() 610 | 611 | # vnet parameters 612 | vnet_info = param.virtual_networks[vnet] 613 | 614 | # Stations that are within this distance are considered close neighbors. 615 | if search_radius_edge <= 0: 616 | return intra_station_dist, None 617 | 618 | # Check one station at a time. 619 | for center_key in trace_list.keys(): 620 | center = trace_list[center_key] 621 | 622 | neighbor_count[center_key] = 0 623 | center_lat = center['lat'] 624 | center_lon = center['lon'] 625 | 626 | # Find how far are other stations from this station. 627 | for sta_net_key in trace_list.keys(): 628 | if center_key == sta_net_key: 629 | continue 630 | neighbor = trace_list[sta_net_key] 631 | center_neighbor_key = f'{center_key}_{sta_net_key}' 632 | neighbor_center_key = f'{sta_net_key}_{center_key}' 633 | if center_neighbor_key in intra_station_dist: 634 | _dist, _distk, _azim, _back_azim = intra_station_dist[center_neighbor_key] 635 | else: 636 | _dist, _azim, _back_azim = gps2dist_azimuth(center_lat, center_lon, 637 | neighbor['lat'], 638 | neighbor['lon']) 639 | _distk = _dist / 1000.0 640 | _dist = kilometer2degrees(_distk) 641 | intra_station_dist[center_neighbor_key] = (_dist, _distk, _azim, _back_azim) 642 | intra_station_dist[neighbor_center_key] = (_dist, _distk, _azim, _back_azim) 643 | 644 | # If any station is within the search radius, it is considered a close neighbor. 645 | if _dist < search_radius_edge: 646 | neighbor_count[center_key] += 1 647 | if neighbor_count: 648 | sparse_patch_count = max(neighbor_count.values()) 649 | else: 650 | sparse_patch_count = 0 651 | 652 | # Do we have a dense patch? 653 | dense_patch_list = dict() 654 | if sparse_patch_count <= vnet_info['sparse_patch_count']: 655 | print_message('INFO', f'No dense patches with more than {vnet_info["sparse_patch_count"]} neighbors found.', 656 | log=log) 657 | return intra_station_dist, None 658 | else: 659 | # We take the first station with this many neighbors to match Alex's code. We may have to revisit this. 
660 | s = '' 661 | if sparse_patch_count > 1: 662 | s = 's' 663 | print_message('INFO', f'The densest patch has {sparse_patch_count} station{s}.', log=log) 664 | for center_key in neighbor_count.keys(): 665 | if sparse_patch_count == neighbor_count[center_key]: 666 | center_lat = trace_list[center_key]['lat'] 667 | center_lon = trace_list[center_key]['lon'] 668 | 669 | # Find how far are other stations from this station. 670 | for sta_net_key in trace_list.keys(): 671 | if center_key == sta_net_key: 672 | dense_patch_list[sta_net_key] = False 673 | continue 674 | neighbor = trace_list[sta_net_key] 675 | center_neighbor_key = f'{center_key}_{sta_net_key}' 676 | neighbor_center_key = f'{sta_net_key}_{center_key}' 677 | 678 | if f'{center_key}_{sta_net_key}' in intra_station_dist: 679 | _dist, _distk, _azim, _back_azim = intra_station_dist[f'{center_key}_{sta_net_key}'] 680 | else: 681 | _dist, _azim, _back_azim = gps2dist_azimuth(center_lat, center_lon, 682 | neighbor['lat'], 683 | neighbor['lon']) 684 | _distk = _dist / 1000.0 685 | _dist = kilometer2degrees(_distk) 686 | intra_station_dist[center_neighbor_key] = (_dist, _distk, _azim, _back_azim) 687 | intra_station_dist[neighbor_center_key] = (_dist, _distk, _azim, _back_azim) 688 | 689 | # If any station is within the search radius, it is considered a close neighbor. 690 | if _dist < search_radius_edge: 691 | dense_patch_list[sta_net_key] = True 692 | else: 693 | dense_patch_list[sta_net_key] = False 694 | break 695 | return intra_station_dist, dense_patch_list 696 | 697 | 698 | def p_wave_snr(trace, eq_datetime, p_time): 699 | """Computes SNR for the p-wave""" 700 | snr_window = param.snr_window 701 | noise_segment = trace.slice(starttime=eq_datetime + p_time + snr_window['pre-p'][0], 702 | endtime=eq_datetime + p_time + snr_window['pre-p'][1]) 703 | noise_max = np.max(np.abs(noise_segment.data)) 704 | signal_segment = trace.slice(starttime=eq_datetime + p_time + snr_window['p'][0], 705 | endtime=eq_datetime + p_time + + snr_window['p'][1]) 706 | 707 | signal_max = np.max(np.abs(signal_segment.data)) 708 | snr = signal_max / noise_max 709 | 710 | return snr 711 | 712 | 713 | def p_wave_normalize(trace, eq_datetime, p_time): 714 | """Normalize on a window around the phase.""" 715 | signal_segment = trace.slice(starttime=eq_datetime + p_time - param.pre_phase_seconds, 716 | endtime=eq_datetime + p_time + param.post_phase_seconds) 717 | signal_max = np.max(np.abs(signal_segment.data)) 718 | 719 | return signal_max 720 | 721 | 722 | def xcorr_window(trace, eq_mag, eq_datetime): 723 | """Calculate length of the xcorr window starting pre_phase_seconds before the phase.""" 724 | # 732. 725 | window_sum = list() 726 | for key in trace.keys(): 727 | 728 | # Select a window around the phase. 729 | p_time = trace[key]['phase_delay'] 730 | signal_segment = trace[key]['tr_final'].slice(starttime=eq_datetime + p_time - param.pre_phase_seconds, 731 | endtime=eq_datetime + p_time + param.post_phase_seconds) 732 | signal_segment.normalize() 733 | 734 | # Get the trace envelope. 735 | data_envelope = envelope(signal_segment.data) 736 | 737 | # Select the window from start of the segment to just before the amplitude drops by 75%. 738 | # Note that the trace is already normalized in this window, so the max amplitude should be 1. 
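# In the loop below, any sample at or above 0.95 marks the envelope peak; the first later sample whose
# envelope falls below 0.25 ends the window, and the time from the start of the segment to the preceding
# sample is stored as one window-length estimate (the estimates are averaged afterwards).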
739 | times = (signal_segment.times()).copy() 740 | peak_flag = False 741 | for i, data in enumerate(data_envelope): 742 | if i > 0 and data >= 0.95: 743 | peak_flag = True 744 | 745 | # Want to check segment of the envelope that has a downward slope and is past the peak location. 746 | if peak_flag and data < 0.25: 747 | window_sum.append(times[i - 1] - times[0]) 748 | break 749 | average_win = np.mean(window_sum) 750 | 751 | # We also have a window length based on the event's magnitude. 752 | mag_win = case(eq_mag, param.mag_window_length_dict) 753 | 754 | # The largest of the two window lengths is selected. 755 | xcorr_win_length = max(average_win, mag_win) 756 | 757 | return xcorr_win_length 758 | 759 | 760 | def preprocess_trace(tr, filter_type, eq_datetime, phase_delay, normalize=True): 761 | """Preprocess the trace""" 762 | # Filter/normalize traces. 763 | tr_filter = tr.copy() 764 | tr_filter.detrend("linear") 765 | tr_filter.taper(max_percentage=0.05, type="hann") 766 | if filter_type is not None: 767 | freq = param.trace_filter[filter_type]['corners'] 768 | tr_filter.filter('bandpass', freqmin=freq[0], freqmax=freq[1], corners=4, zerophase=True) 769 | tr_filter.detrend(type='demean') 770 | if normalize: 771 | tr_filter.data /= p_wave_normalize(tr_filter, eq_datetime, phase_delay) 772 | 773 | return tr_filter 774 | 775 | 776 | def gen_synthetics(trace_list, eq_datetime, vn_name, create_synthetics=False): 777 | """Create synthetic seismograms.""" 778 | # 1646 779 | synthetics = dict() 780 | if not create_synthetics: 781 | return synthetics 782 | 783 | for net_sta in trace_list.keys(): 784 | trace = trace_list[net_sta] 785 | 786 | phase_delay = 0.0 787 | # Copy the structure of the filtered trace since this is the one we will be using for stacking. 788 | tr = trace[f'tr_filter'].copy() 789 | 790 | times = tr.times(reftime=eq_datetime) 791 | triangle_width = param.triangle_width 792 | triangle_half_width = triangle_width / 2.0 793 | syn_data = list() 794 | 795 | for t_index, t in enumerate(times): 796 | # Left half. 797 | 798 | if t < phase_delay - triangle_half_width: 799 | value = 0.0 800 | elif phase_delay - triangle_half_width <= t <= phase_delay: 801 | value = 1.0 + (t - phase_delay) * 1. / triangle_half_width 802 | 803 | # Right half 804 | elif phase_delay < t <= triangle_half_width + phase_delay: 805 | value = 1.0 - (t - phase_delay) * 1. / triangle_half_width 806 | else: 807 | value = 0.0 808 | 809 | syn_data.append(value) 810 | 811 | # Filter/normalize the trace. 812 | tr.data = np.array(syn_data.copy()) * trace['weight'] 813 | tr_filter = preprocess_trace(tr, param.bp_filter[vn_name], eq_datetime, phase_delay, normalize=False) 814 | 815 | # For debugging only. 
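# Setting debug = True below plots the triangle synthetic against its filtered version with ObsPy's
# Stream.plot() and then exits; it is left False for production runs.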
816 | debug = False 817 | if debug: 818 | trace_1 = tr 819 | trace_2 = tr_filter 820 | stream = Stream(traces=[trace_1, trace_2]) 821 | stream.plot() 822 | sys.exit() 823 | 824 | synthetics[net_sta] = tr_filter.copy() 825 | 826 | return synthetics 827 | 828 | 829 | def has_phase(trace, eq_datetime, phase_delay): 830 | """Check if seismogram covers a particular phase based on the phase_delay.""" 831 | if (trace.stats.starttime <= eq_datetime + phase_delay - param.seconds_before_p) and \ 832 | (trace.stats.endtime >= eq_datetime + phase_delay + param.seconds_after_p): 833 | return True 834 | return False 835 | 836 | 837 | def set_time_parameters(eq_magnitude): 838 | """Set computation time parameters based on the earthquake magnitude 839 | """ 840 | bp_t_offset = param.bp_time_offset 841 | 842 | bp_t_total = case(eq_magnitude, param.bp_time_total_dict) 843 | 844 | bp_t_increment = case(eq_magnitude, param.bp_time_inc_dict) 845 | 846 | # Total seconds of the Hanning averaging taper. 847 | t_avg = case(eq_magnitude, param.hann_t_avg_dict) 848 | 849 | stack_start = - bp_t_offset 850 | stack_end = stack_start + bp_t_total 851 | 852 | return stack_start, stack_end, bp_t_offset, bp_t_increment, bp_t_total, t_avg 853 | 854 | 855 | def set_grid(eq_latitude, eq_longitude, eq_magnitude, grid_factor=1): 856 | """Set computation latitude grid increment.""" 857 | 858 | decimal_places = param.grid_decimal_places 859 | 860 | # Set the latitude range and increment. 861 | latitude_inc = round(set_float_digits(case(eq_magnitude, param.grid_latitude_inc_dict)), decimal_places) 862 | latitude_half = round(set_float_digits(case(eq_magnitude, param.grid_latitude_half_dict)), decimal_places) 863 | # We assume latitude_half_count represents number of latitude_incs. 864 | latitude_half_count = int(round(latitude_half / latitude_inc)) 865 | 866 | # Just make sure the half latitude is set as a multiple of latitude inc. 867 | latitude_half = round(latitude_inc * latitude_half_count, decimal_places) 868 | 869 | latitude_start = eq_latitude - latitude_half 870 | latitude_end = eq_latitude + latitude_half 871 | 872 | # Set the longitude range and increment base on the latitude. 873 | longitude_half = round(float(latitude_half / abs(math.cos(math.radians(eq_latitude)))), decimal_places) 874 | longitude_inc = round(float(longitude_half / latitude_half_count), decimal_places) 875 | # We assume longitude_half_count represents number of longitude_incs. 876 | longitude_half_count = int(round(longitude_half / longitude_inc)) 877 | 878 | # Just make sure the half longitude is set as a multiple of longitude inc. 
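# Illustrative example of the scaling above: for an event at 60 degrees latitude with a 4-degree latitude
# half-width, the longitude half-width becomes 4 / cos(60 deg) = 8 degrees, so the grid spans roughly
# equal ground distance east-west and north-south before being snapped to a multiple of the increment.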
879 | longitude_half = round(longitude_inc * longitude_half_count, decimal_places) 880 | 881 | longitude_start = eq_longitude - longitude_half 882 | longitude_end = eq_longitude + longitude_half 883 | 884 | latitude = {'start': min(latitude_start, latitude_end), 'end': max(latitude_start, latitude_end), 885 | 'inc': latitude_inc * grid_factor} 886 | longitude = {'start': min(longitude_start, longitude_end), 'end': max(longitude_start, longitude_end), 887 | 'inc': longitude_inc * grid_factor} 888 | 889 | return latitude, longitude 890 | 891 | 892 | def phase_time(source_depth_km, source_distance): 893 | """Returns time of a phase in seconds.""" 894 | 895 | arrivals = model.get_travel_times(source_depth_in_km=source_depth_km, distance_in_degree=source_distance, 896 | phase_list=param.phase) 897 | if arrivals: 898 | return arrivals[0].time 899 | else: 900 | return None 901 | 902 | 903 | def get_bp_time_window(eq_date_time, eq_magnitude): 904 | """Set the request window based on the event magnitude and time.""" 905 | 906 | # Base on "Get_date_for_FetchData_BP.f". 907 | # Start seconds before the event. 908 | t_before = param.request_time_before 909 | 910 | # Duration seconds after the event. 911 | t_after = case(eq_magnitude, param.request_time_after) 912 | 913 | event_datetime = UTCDateTime(eq_date_time) 914 | request_start_datetime = event_datetime - t_before 915 | request_start = request_start_datetime.strftime('%Y-%m-%d %H:%M:%S') 916 | request_start_date_time = request_start.replace(' ', 'T') 917 | request_end_datetime = event_datetime + t_after 918 | request_end = request_end_datetime.strftime('%Y-%m-%d %H:%M:%S') 919 | request_end_date_time = request_end.replace(' ', 'T') 920 | 921 | return request_start_date_time, request_start_datetime, request_end_date_time, request_end_datetime 922 | 923 | 924 | def is_number(n): 925 | """Check if the input string input is a number. 926 | """ 927 | try: 928 | float(n) 929 | except ValueError: 930 | return False 931 | return True 932 | 933 | 934 | def is_loc_higher_priority(loc1, loc2): 935 | if loc1 in ['--', '00', '', ' ']: 936 | return False 937 | elif loc2 in ['--', '00', '', ' ']: 938 | return True 939 | elif is_number(loc1) and is_number(loc2): 940 | if int(loc2) < int(loc1): 941 | return True 942 | else: 943 | return False 944 | elif loc2 < loc1: 945 | return True 946 | return False 947 | 948 | 949 | def get_2nd_point(lat1, lon1, radius, bearing=0): 950 | """Find latitude and longitude of a second point distance radius degrees from the first one.""" 951 | 952 | # Radius of the Earth 953 | earth_radius = param.earth_radius 954 | 955 | # Bearing is converted to radians. 956 | bearing = math.radians(bearing) 957 | 958 | # Distance in km. 959 | d = degrees2kilometers(radius) 960 | 961 | # Current lat,lon point converted to radians 962 | lat1 = math.radians(lat1) 963 | lon1 = math.radians(lon1) 964 | 965 | lat2 = math.asin(math.sin(lat1) * math.cos(d / earth_radius) + 966 | math.cos(lat1) * math.sin(d / earth_radius) * math.cos(bearing)) 967 | 968 | lon2 = lon1 + math.atan2(math.sin(bearing) * math.sin(d / earth_radius) * math.cos(lat1), 969 | math.cos(d / earth_radius) - math.sin(lat1) * math.sin(lat2)) 970 | 971 | # The new point. 
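# This is the standard spherical destination-point formula: starting from lat1/lon1 with the given
# bearing and an angular distance d / earth_radius, it yields the end point, converted back to degrees below.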
972 | lat2 = math.degrees(lat2) 973 | lon2 = math.degrees(lon2) 974 | 975 | return lat2, lon2 976 | 977 | 978 | def get_service_url(ws_catalog, ws_dc): 979 | """Extract the service URL from dataselect service URL.""" 980 | ws_service_url = ws_catalog[ws_dc].dataselect.split('/fdsn')[0] 981 | return ws_service_url 982 | 983 | 984 | def get_request_items(req_lines): 985 | """Split a request line to its components.""" 986 | net_sta_rec = dict() 987 | bulk_rec = list() 988 | for _line in req_lines: 989 | _net, _sta, _loc, _chan, _start, _end = _line.strip().split() 990 | net_sta_key = f'{_net}-{_sta}' 991 | if net_sta_key not in net_sta_rec.keys(): 992 | net_sta_rec[net_sta_key] = [_net, _sta, _loc, _chan, _start, _end] 993 | else: 994 | if channel_order[_chan] < channel_order[net_sta_rec[net_sta_key][3]]: 995 | net_sta_rec[net_sta_key] = [_net, _sta, _loc, _chan, _start, _end] 996 | elif channel_order[_chan] == channel_order[net_sta_rec[net_sta_key][3]]: 997 | if is_loc_higher_priority(net_sta_rec[net_sta_key][2], _loc): 998 | net_sta_rec[net_sta_key] = [_net, _sta, _loc, _chan, _start, _end] 999 | for _key in net_sta_rec.keys(): 1000 | bulk_rec.append(net_sta_rec[_key]) 1001 | 1002 | return bulk_rec 1003 | 1004 | 1005 | def is_net_temporary(net): 1006 | """Exclude temporary networks.""" 1007 | if len(net) <= 2: 1008 | if net[0].isdigit(): 1009 | return True 1010 | if net[0].lower() in ['x', 'y', 'z']: 1011 | return True 1012 | return False 1013 | 1014 | 1015 | def read_url(target_url, log=sys.stdout, verbose=False): 1016 | """Read content of a URL.""" 1017 | if verbose: 1018 | print_message('INFO', f'Opening URL: {target_url}', log=log) 1019 | 1020 | with urlopen(target_url) as url: 1021 | content = url.read().decode() 1022 | return content 1023 | 1024 | 1025 | def get_fedcatalog_stations(start, end, lat, lon, rmin, rmax, net='*', dc='*', req='request', service='dataselect', log=sys.stdout): 1026 | url = f'{fedcatalog_service_url}net={net}&cha={param.request_channel}&starttime={start}&endtime={end}' \ 1027 | f'&targetservice={service}&level=channel&datacenter={dc}&format={req}&includeoverlaps=false' \ 1028 | f'&endafter={start}&lat={lat}&lon={lon}&minradius={rmin}&maxradius={rmax}&includerestricted=false&&nodata=404' 1029 | 1030 | print_message('INFO', f'Requesting: {url}', log=log) 1031 | try: 1032 | content = read_url(url, log=log) 1033 | except Exception as _er: 1034 | print_message('ERR', f'Request {url}: {_er}', log=log) 1035 | return None 1036 | 1037 | return content 1038 | 1039 | 1040 | def get_dc_dataselect_url(start, end, net, sta, loc, chan, dc='*', req='request', service='dataselect', log=sys.stdout): 1041 | url = f'{fedcatalog_service_url}starttime={start}&endtime={end}' \ 1042 | f'&targetservice={service}&level=channel&datacenter={dc}&format={req}&includeoverlaps=false' \ 1043 | f'&net={net}&sta={sta}&cha={chan}&loc={loc}&nodata=404' 1044 | 1045 | print_message('INFO', f'Requesting: {url}', log=log) 1046 | try: 1047 | content = read_url(url, log=log) 1048 | except Exception as _er: 1049 | print_message('ERR', f'Request {url}: {_er}', log=log) 1050 | return None 1051 | for _line in content.split('\n'): 1052 | if 'DATASELECTSERVICE' in _line: 1053 | _url = _line.replace('DATASELECTSERVICE=', '') 1054 | return _url.strip() 1055 | 1056 | 1057 | def split_fedcatalog_stations(station_data, log=sys.stdout): 1058 | """Get station list from fedcatalog service.""" 1059 | 1060 | # This dictionary provides a template for fetdatalog creation. 
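# Illustrative shape only: each data center name ends up mapped to an ObjDict such as
# {'url': <data center URL>, 'dataselect': <service URL>, 'bulk': [<station request lines>]},
# filled in while the fedcatalog response is parsed below.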
1061 | catalog_info = dict() 1062 | 1063 | bulk_list = dict() 1064 | 1065 | # Go through the station lines and split them to data centers. 1066 | _lines = station_data.split('\n') 1067 | 1068 | _line_index = -1 1069 | previous_dc = None 1070 | dc_name = None 1071 | for _line in _lines: 1072 | _line_index += 1 1073 | 1074 | # Skip the blank and the comment lines. 1075 | if not _line.strip(): 1076 | continue 1077 | if _line.startswith('#') and '=' not in _line: 1078 | continue 1079 | 1080 | # From the parameter=value lines, we are interested in the DATACENTER and DATASELECTSERVICE lines. 1081 | elif _line.startswith('#') and '=' in _line: 1082 | _par, _value = _line.split('=') 1083 | 1084 | # Found the data center name. 1085 | if _par == '#DATACENTER': 1086 | if dc_name is not None: 1087 | previous_dc = dc_name 1088 | print_message('INFO', f'from the {_value} data center', log=log) 1089 | dc_name, dc_url = _value.strip().split(',') 1090 | 1091 | # Initialize the data center information. 1092 | if dc_name not in catalog_info.keys(): 1093 | print_message('INFO', f'Initiating fedcatalog request for {dc_name}', log=log) 1094 | catalog_info[dc_name] = ObjDict({'url': dc_url, 'dataselect': '', 'bulk': []}) 1095 | 1096 | # if this is not the first data center, save the previous data center's bulk list 1097 | if bulk_list: 1098 | catalog_info[previous_dc].bulk = list() 1099 | for _key in bulk_list: 1100 | catalog_info[previous_dc].bulk.append(bulk_list[_key]['line']) 1101 | 1102 | # The list is saved. Now, reset the bulk_list. 1103 | bulk_list = dict() 1104 | 1105 | continue 1106 | # Found the dataselect service address. 1107 | elif _par == 'DATASELECTSERVICE': 1108 | # Save the dataselect service address in the catalog for this DC. 1109 | catalog_info[dc_name].dataselect = _value.strip() 1110 | print_message('INFO', f'dataselect service is {_value.strip()}', log=log) 1111 | continue 1112 | elif _par == 'STATIONSERVICE': 1113 | # Save the dataselect service address in the catalog for this DC. 1114 | catalog_info[dc_name].dataselect = _value.strip() 1115 | print_message('INFO', f'station service is {_value.strip()}', log=log) 1116 | continue 1117 | else: 1118 | # Ignore the other definitions. 1119 | continue 1120 | 1121 | # The rest are the station lines. 1122 | else: 1123 | # Skip the blank lines. 1124 | _line = _line.strip() 1125 | if not _line: 1126 | continue 1127 | 1128 | # Insert channels based on the channel priority list. 1129 | try: 1130 | items = _line.split('|') 1131 | _net, _sta, _loc, _chan = items[0:4] 1132 | except Exception as ex: 1133 | print_message('ERR', f'Failed to split station line {_line} to (_net, _sta, _loc, _chan)\n{ex}', 1134 | log=log) 1135 | continue 1136 | _key = '_'.join([_net, _sta]) 1137 | if _key not in bulk_list: 1138 | bulk_list[_key] = {'chan': _chan, 'line': _line} 1139 | else: 1140 | # If the station exists, just save the lower ordered channel. 1141 | if param.channel_order[_chan] < param.channel_order[bulk_list[_key]['chan']]: 1142 | bulk_list[_key] = {'chan': _chan, 'line': _line} 1143 | 1144 | # Save the last data center's bulk list. 
1145 | if bulk_list:
1146 | catalog_info[dc_name].bulk = list()
1147 | for _key in bulk_list:
1148 | # One request line per net-sta key; the preferred channel was selected above.
1149 | catalog_info[dc_name].bulk.append(bulk_list[_key]['line'])
1150 | 
1151 | return ObjDict(catalog_info)
1152 | 
1153 | 
1154 | def trace_amp(tr_times, tr_data, t1, sampling=None):
1155 | """Get trace amplitude at a given time."""
1156 | if sampling is None:
1157 | sampling = 1.0 / (tr_times[1] - tr_times[0])  # derive the rate from the sample spacing
1158 | t_index = math.floor((t1 - tr_times[0]) * sampling)
1159 | return tr_data[t_index]
1160 | 
1161 | 
1162 | def hann(stack, syn_stack, start, end, bp_t_increment, average_seconds, resamp=1, create_synthetics=False,
1163 | log=sys.stdout):
1164 | """Integrate the beams in a running beam_average_seconds window"""
1165 | 
1166 | print_message('INFO', f'In Hann: Integrate {len(stack)} beams over a {average_seconds} s window', log=log)
1167 | # The Hanning function of length num_points is used to perform Hanning smoothing
1168 | num_points = int(round(average_seconds / bp_t_increment)) + 1
1169 | 
1170 | # Half points. We always want odd numbers, so Hanning amplitude of 1 falls on the current sample.
1171 | if num_points % 2 == 0:
1172 | num_half_points = int(num_points / 2)
1173 | num_points = num_points + 1
1174 | else:
1175 | num_half_points = int((num_points - 1) / 2)
1176 | 
1177 | # The Hanning window.
1178 | _hann = np.hanning(num_points)
1179 | 
1180 | # Move the start and end by half of the averaging window length.
1181 | half_window = num_half_points * bp_t_increment
1182 | stack_start = start - half_window
1183 | stack_end = end + half_window
1184 | print_message('INFO', f'In Hann: half_window {half_window}, stack_start {stack_start}, '
1185 | f'stack_end {stack_end}', log=log)
1186 | 
1187 | smooth_stack = dict()
1188 | syn_smooth_stack = dict()
1189 | 
1190 | # Loop over grid points.
1191 | for grid_key in stack:
1192 | smooth_stack[grid_key] = dict()
1193 | if create_synthetics:
1194 | syn_smooth_stack[grid_key] = dict()
1195 | 
1196 | # Apply smoothing. First get a list of sample times at each grid point.
1197 | time_key_list = list(stack[grid_key])
1198 | step_count = 0
1199 | 
1200 | # Step through the time samples.
1201 | for time_index, time_key in enumerate(time_key_list):
1202 | 
1203 | # Skip samples (resample) if necessary. But always start from the first sample.
1204 | step_count += 1
1205 | if time_index == 0:
1206 | step_count = 0
1207 | elif step_count != resamp:
1208 | continue
1209 | else:
1210 | step_count = 0
1211 | 
1212 | # Initialize.
1213 | smooth_stack[grid_key][time_key] = 0.0
1214 | if create_synthetics:
1215 | syn_smooth_stack[grid_key][time_key] = 0.0
1216 | 
1217 | # Loop over all samples within the trace window.
1218 | if stack_start <= float(time_key) <= stack_end:
1219 | # Then Hanning index.
1220 | hann_index = 0
1221 | # Index of data point with respect to the current point index.
1222 | this_point = - num_half_points
1223 | # Counter.
1224 | count = 0
1225 | 
1226 | # Going from -num_half_points to +num_half_points.
1227 | while this_point <= num_half_points:
1228 | debug = False
1229 | if debug:
1230 | print("this_point:", this_point)
1231 | print("time_index:", time_index)
1232 | print("time_key:", time_key)
1233 | print("time_key_list:", time_key_list)
1234 | 
1235 | # Make sure not to go beyond the list length.
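# Editor's note: with the parameter-file defaults (beam_average_seconds = 5 and a
# 0.25 s BP time increment) num_points works out to 21 and num_half_points to 10, so
# each smoothed sample blends up to ten neighbours on either side; near the ends of
# the record the window is simply clipped by the bounds check below.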
1236 | if 0 <= time_index + this_point <= len(time_key_list) - 1: 1237 | t_key = time_key_list[time_index + this_point] 1238 | # cos(pi*x/L)**2 abs(x) <= L/2 1239 | try: 1240 | smooth_stack[grid_key][time_key] = smooth_stack[grid_key][time_key] + \ 1241 | stack[grid_key][t_key] * _hann[hann_index] 1242 | except Exception as ex: 1243 | print_message('ERR', f'KeyError: {t_key}, {time_key}', log=log) 1244 | 1245 | if create_synthetics: 1246 | try: 1247 | syn_smooth_stack[grid_key][time_key] = syn_smooth_stack[grid_key][time_key] + \ 1248 | syn_stack[grid_key][t_key] * _hann[hann_index] 1249 | except Exception as ex: 1250 | print_message('ERR', f'Synthetics KeyError: {t_key}, {time_key}', log=log) 1251 | this_point += 1 1252 | count += 1 1253 | hann_index += 1 1254 | 1255 | smooth_stack[grid_key][time_key] /= count 1256 | if create_synthetics: 1257 | syn_smooth_stack[grid_key][time_key] /= count 1258 | 1259 | return smooth_stack, syn_smooth_stack 1260 | 1261 | 1262 | def smooth(x, window_length, log=sys.stdout): 1263 | """smooth the data using a Hanning window of the requested size.""" 1264 | if x.size < window_length: 1265 | print(f'Input vector length {x.size} is smaller than the ' 1266 | f'requested window length of {window_length}.') 1267 | raise ValueError 1268 | 1269 | print_message('INFO', f'In Smooth: window_length {window_length}', log=log) 1270 | 1271 | # np.r_ stacks the comma-separated arraysalong their first axis. 1272 | signal = np.r_[x[window_length - 1:0:-1], x, x[-2:-window_length - 1:-1]] 1273 | weights = np.hanning(window_length) 1274 | smooth_signal = np.convolve(weights / weights.sum(), signal, mode='valid') 1275 | return smooth_signal 1276 | 1277 | 1278 | def stack_root(stack, syn_stack, stacking_root, create_synthetics=False): 1279 | """If necessary, raise stacks to the Nth power to do Nth-root stacking.""" 1280 | if stacking_root != 1.0: 1281 | for grid_key in stack: 1282 | for t_key in stack[grid_key]: 1283 | stack[grid_key][t_key] = np.power(stack[grid_key][t_key], stacking_root) 1284 | if create_synthetics: 1285 | syn_stack[grid_key][t_key] = np.power(syn_stack[grid_key][t_key], 1286 | stacking_root) 1287 | 1288 | return stack, syn_stack 1289 | 1290 | 1291 | def trace_trimmer(trace, origin_time, bp_t_offset, bp_t_total, phase_delay, mccc_delay): 1292 | """Trim a trace and pad with zeros and include delays""" 1293 | trimmed_trace = trace.copy() 1294 | 1295 | # Trim the trace based on the origin time and delays and BP offset. 1296 | _start = origin_time + phase_delay + mccc_delay - bp_t_offset 1297 | _end = _start + bp_t_total 1298 | trimmed_trace.trim(starttime=_start, endtime=_end, pad=True, nearest_sample=False, fill_value=0.0) 1299 | 1300 | # Reset the start time without including the delays so the corrections will always be included. 1301 | trimmed_trace.stats.starttime = origin_time - bp_t_offset 1302 | return trimmed_trace 1303 | 1304 | 1305 | def shift_trace(trace, origin_time, bp_t_offset, bp_t_total, shift_time): 1306 | """Shift a trace by the time given shift_time""" 1307 | shifted_trace = trace.copy() 1308 | 1309 | # Trim the trace based on the origin time and delays. 1310 | _start = trace.stats.starttime 1311 | _start += shift_time 1312 | _end = _start + bp_t_total 1313 | shifted_trace.trim(starttime=_start, endtime=_end, pad=True, nearest_sample=False, fill_value=0.0) 1314 | 1315 | # Reset the start time without including the delays so the corrections will always be included. 
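# Editor's note: the trim above selects the shifted window and the starttime reset on
# the next line relabels it, so the shift is applied purely by re-timing the trace --
# no samples are interpolated or resampled in the process.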
1316 | shifted_trace.stats.starttime = origin_time - bp_t_offset 1317 | return shifted_trace 1318 | 1319 | 1320 | def get_phase_delay(tt_cache, dist, depth): 1321 | """Obtauin the phase delay for a give distance either directly or from a cache.""" 1322 | # Testing shows that 2 decimal places (0.01 deg resolution) improves the match and speeds up the code while it has 1323 | # little affect on the quality. 1324 | dist /= 1000.0 1325 | dist = kilometer2degrees(dist) 1326 | dist_key = f'{dist:0.2f}' 1327 | if dist_key in tt_cache: 1328 | phase_delay = tt_cache[dist_key] 1329 | else: 1330 | # Delay from predicted travel time. 1331 | phase_delay = phase_time(depth, dist) 1332 | tt_cache[dist_key] = phase_delay 1333 | return tt_cache, phase_delay 1334 | 1335 | 1336 | def stacker(trace_list, vn_name, bp_t_offset, t_avg, eq_datetime, eq_lat, eq_lon, eq_mag, eq_depth, bp_t_increment, 1337 | bp_t_total, tt_cache, create_synthetics=False, grid_factor=1, log=sys.stdout, verbose=False): 1338 | """Stack traces using the BP window and step.""" 1339 | # inum=nint((ittotal+tavg)/fitincrement)+1 1340 | # ittotal = bp_t_total 1341 | # fitincrement = bp_time_inc_dict 1342 | # tavg = t_avg Total seconds of the Hanning averaging taper 1343 | # i1len = len(lon) 1344 | # i2len = len(lat) 1345 | # ntotal = total number of stations 1346 | # npts = number of samples 1347 | # dt() trace sampling interval 1348 | # 1705 1349 | # BP start one increment back. 1350 | # bp_initial_seconds, seconds before the event time when BP starts. 1351 | 1352 | # Delta time (seconds) before the event time when BP starts. 1353 | seconds_before = math.ceil(param.bp_initial_seconds - 1.0 * bp_t_offset - t_avg / 2.0 - bp_t_increment) 1354 | 1355 | # Delta time (seconds) after the event time when BP ends. 1356 | seconds_after = math.ceil(seconds_before + bp_t_total + t_avg) 1357 | 1358 | print_message('INFO', f'start {seconds_before}s and end {seconds_after}s relative to the ' 1359 | f'event time of {eq_datetime}, using bp_t_offset of ' 1360 | f'{bp_t_offset}s, ' 1361 | f't_avg {t_avg}s, and bp_t_total {bp_t_total}s', 1362 | log=log) 1363 | stacking_root = param.stacking_root 1364 | 1365 | # Set the grid points around the earthquake location. 1366 | latitude, longitude = set_grid(eq_lat, eq_lon, eq_mag, grid_factor=grid_factor) 1367 | print_message('INFO', f'Grid latitude {latitude}, longitude {longitude}', log=log) 1368 | if verbose: 1369 | print_message('INFO', f'Earthquake at {eq_lat}, {eq_lon}') 1370 | lat_ = latitude['start'] - latitude['inc'] 1371 | lat_str = '' 1372 | while lat_ < latitude['end']: 1373 | lat_ += latitude['inc'] 1374 | lat_str = f'{lat_str}, {lat_:0.3f}' 1375 | 1376 | lon_ = longitude['start'] - longitude['inc'] 1377 | lon_str = '' 1378 | while lon_ < longitude['end']: 1379 | lon_ += longitude['inc'] 1380 | lon_str = f'{lon_str}, {lon_:0.3f}' 1381 | print_message('INFO', f'Grid latitudes: {lat_str}') 1382 | print_message('INFO', f'Grid longitudes: {lon_str}') 1383 | 1384 | global_max = None 1385 | t0 = time.time() 1386 | 1387 | warn_list = list() 1388 | stack_list = list() 1389 | 1390 | stack = dict() 1391 | syn_stack = dict() 1392 | 1393 | # Individual stations. 1394 | sta_counter = 0 1395 | 1396 | # Get ready to loop through the grid. 
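# Editor's note: the nested loops below walk the set_grid() output from 'start' to
# 'end' in 'inc' steps and key each node as f'{lat:0.3f}_{lon:0.3f}'; for example a
# node at 55.03, -158.52 becomes '55.030_-158.520'. The stack and syn_stack
# dictionaries produced here are indexed by these keys.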
1397 | lat_start = latitude['start'] 1398 | lat_inc = latitude['inc'] 1399 | lat_end = latitude['end'] 1400 | 1401 | lon_start = longitude['start'] 1402 | lon_inc = longitude['inc'] 1403 | lon_end = longitude['end'] 1404 | 1405 | net_sta_list = list() 1406 | 1407 | # Loop through station list and extract information on active traces. Here we want to avoid 1408 | # repeating this over every grid point. 1409 | for net_sta in trace_list: 1410 | 1411 | # Check the trace to process to see if it is active. (#1727) 1412 | trace = trace_list[net_sta] 1413 | 1414 | # The station location. 1415 | lat0, lon0 = trace['lat'], trace['lon'] 1416 | 1417 | # Keep track of stations in the stack. 1418 | if net_sta not in stack_list: 1419 | stack_list.append(net_sta) 1420 | sta_counter += 1 1421 | print_message('INFO', f'Adding {net_sta} to stack list ({sta_counter}), weight = {trace["weight"]:0.2f}', 1422 | log=log) 1423 | 1424 | # Trace information. 1425 | tr = trace[f'tr_filter'] 1426 | tr_starttime = tr.stats.starttime 1427 | tr_endtime = tr.stats.endtime 1428 | net_sta_list.append((net_sta, tr.copy(), float(lat0), float(lon0), tr_starttime, 1429 | tr_endtime, trace['phase_delay'])) 1430 | 1431 | # Synthetic trace information. 1432 | if create_synthetics: 1433 | syn_tr = trace['tr_syn'].copy() 1434 | 1435 | grid_key_list = list() 1436 | 1437 | # Loop over the grid latitudes. 1438 | grid_index = 0 1439 | grid_lat = lat_start - lat_inc 1440 | while grid_lat < lat_end: 1441 | grid_lat += lat_inc 1442 | 1443 | grid_lon = lon_start - lon_inc 1444 | # 1721 1445 | while grid_lon < lon_end: 1446 | grid_index += 1 1447 | grid_lon += lon_inc 1448 | 1449 | # Grid points. 1450 | grid_key = f'{grid_lat:0.3f}_{grid_lon:0.3f}' 1451 | grid_key_list.append((grid_index, grid_key, grid_lat, grid_lon)) 1452 | 1453 | n_grids = len(grid_key_list) 1454 | grid_count = 0 1455 | show_message = True 1456 | for (grid_index, grid_key, grid_lat, grid_lon) in grid_key_list: 1457 | grid_count += 1 1458 | 1459 | # Loop through grid longitudes. 1460 | shifted_trace = None 1461 | if grid_key not in stack: 1462 | _stack = None 1463 | stack[grid_key] = dict() 1464 | 1465 | if create_synthetics: 1466 | if grid_key not in syn_stack: 1467 | _syn_stack = None 1468 | syn_stack[grid_key] = dict() 1469 | 1470 | # For each new source location (grid_lat, grid_lon), loop through stations and stack. 1471 | for (net_sta, tr, lat0, lon0, tr_starttime, tr_endtime, phase_delay0) in net_sta_list: 1472 | 1473 | # Assume event is at the grid point. Get the distance from the station. 1474 | _dist, _azim, _back_azim = gps2dist_azimuth(grid_lat, grid_lon, lat0, lon0) 1475 | 1476 | tt_cache, phase_delay = get_phase_delay(tt_cache, _dist, eq_depth) 1477 | 1478 | if phase_delay is None: 1479 | if net_sta not in warn_list: 1480 | warn_list.append(net_sta) 1481 | print_message('WARN', f'Skipped, channel {tr.stats.channel} of Net.Sta {net_sta} ' 1482 | f'because no P-wave arrival', log=log) 1483 | continue 1484 | 1485 | # The time_shift variable represents the number of seconds difference in P- travel time due to 1486 | # event relocation. 1487 | time_shift = phase_delay - phase_delay0 1488 | 1489 | # Skip if trace window is not covered by BP. 
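# Editor's note: time_shift (computed above) is the change in predicted P travel time
# when the source is moved from the catalog location to this grid node; the test below
# drops a station only when the shifted trace has no overlap at all with the
# [eq_datetime + seconds_before, eq_datetime + seconds_after] stacking window.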
1490 | if eq_datetime + seconds_before >= tr_endtime + time_shift or \ 1491 | eq_datetime + seconds_after <= tr_starttime + time_shift: 1492 | print_message('WARN', f'Skipped, channel {tr.stats.channel} of Net.Sta {net_sta} ' 1493 | f'because it does not fall in BP window', log=log) 1494 | continue 1495 | 1496 | # Here we shift the trace in time based on the time_shift. 1497 | shifted_trace = tr.copy() 1498 | 1499 | # time_shift < 0 trace must be shifted to the left (arrives sooner, trim window slides right). 1500 | # time_shift > 0 trace must be shifted to the right (arrives later, trim window slides left). 1501 | # So we use a negative sign to shift the trim window accordingly. 1502 | _start = tr_starttime + time_shift 1503 | _end = tr_endtime + time_shift 1504 | # Slide the trace based on the delta time shift due to new event location. 1505 | shifted_trace.trim(starttime=_start, endtime=_end, pad=True, nearest_sample=False, fill_value=0.0) 1506 | if verbose and show_message: 1507 | print_message('INFO', f'Slide the trace based on the delta time shift due to new event ' 1508 | f'location start:{_start} and end:{_end}', log=log) 1509 | 1510 | # Now, reset the start time to that of the original trace so the time shift will be implicit. 1511 | shifted_trace.stats.starttime = tr_starttime 1512 | 1513 | # Set the trace window to the BP time. 1514 | shifted_trace.trim(starttime=eq_datetime + seconds_before, endtime=eq_datetime + seconds_after, 1515 | pad=True, nearest_sample=False, fill_value=0.0) 1516 | if verbose and show_message: 1517 | show_message = False 1518 | print_message('INFO', f'Set the trace window to the BP time ' 1519 | f'start:eq_datetime -{abs(seconds_before)}s ' 1520 | f'and end:eq_datetime + {seconds_after}s', 1521 | log=log) 1522 | # Resample to bp increment. 1523 | shifted_trace.resample(1.0 / bp_t_increment) 1524 | 1525 | _data = shifted_trace.data 1526 | _data_sign = np.sign(_data) 1527 | _data_n = np.power(np.abs(_data), 1.0 / stacking_root) 1528 | _data_n *= _data_sign 1529 | 1530 | # Perform stacking. 1531 | if _stack is None: 1532 | _stack = _data_n.copy() 1533 | else: 1534 | _stack = np.add(_stack, _data_n) 1535 | 1536 | # For debugging only. 1537 | debug = False 1538 | if debug: 1539 | import matplotlib.pyplot as plt 1540 | fig = plt.figure(figsize=(9.5, 11)) 1541 | ax = fig.add_subplot(1, 1, 1) 1542 | ax.plot(shifted_trace.times(reftime=eq_datetime), 1543 | _stack, "k-", lw=0.4, label=f'{grid_lat}, {grid_lon}/ {eq_lat}, {eq_lon}') 1544 | plt.legend() 1545 | plt.show() 1546 | plt.close() 1547 | 1548 | # Do the same as above with synthetics. 1549 | if create_synthetics: 1550 | shifted_syn = syn_tr.copy() 1551 | shifted_syn.trim(starttime=_start, endtime=_end, pad=True, nearest_sample=False, fill_value=0.0) 1552 | shifted_syn.stats.starttime = tr_starttime 1553 | 1554 | # Set the trace window to the BP time. 1555 | shifted_syn.trim(starttime=eq_datetime + seconds_before, endtime=eq_datetime + seconds_after, 1556 | pad=True, nearest_sample=False, fill_value=0.0) 1557 | 1558 | shifted_syn.resample(1.0 / bp_t_increment) 1559 | 1560 | _data = shifted_syn.data 1561 | _data_sign = np.sign(_data) 1562 | _data_n = np.power(np.abs(_data), 1.0 / stacking_root) 1563 | _data_n *= _data_sign 1564 | 1565 | if _syn_stack is None: 1566 | _syn_stack = _data_n.copy() 1567 | else: 1568 | _syn_stack = np.add(_syn_stack, _data_n) 1569 | 1570 | # For debugging only. 
1571 | debug = False 1572 | if debug: 1573 | import matplotlib.pyplot as plt 1574 | fig = plt.figure(figsize=(9.5, 11)) 1575 | ax = fig.add_subplot(1, 1, 1) 1576 | ax.plot(shifted_syn.times(reftime=eq_datetime), 1577 | _syn_stack, "k-", lw=0.4, label=f'{grid_lat}, {grid_lon}/ {eq_lat}, {eq_lon}') 1578 | plt.legend() 1579 | plt.show() 1580 | plt.close() 1581 | 1582 | # Store stack for this grid point. 1583 | # 1726 1584 | if shifted_trace is not None: 1585 | stack[grid_key] = {f'{t :0.2f}': _stack[t_index] for t_index, t in 1586 | enumerate(shifted_trace.times(reftime=eq_datetime))} 1587 | else: 1588 | print_message('WARN', f'stack trace empty!', log=log) 1589 | 1590 | if create_synthetics: 1591 | if shifted_syn is not None: 1592 | syn_stack[grid_key] = {f'{t :0.2f}': _syn_stack[t_index] for t_index, t in 1593 | enumerate(shifted_syn.times(reftime=eq_datetime))} 1594 | else: 1595 | print_message('WARN', f'synthetic stack trace empty!', log=log) 1596 | 1597 | if grid_count == 100: 1598 | t0 = time_it(t0, stage=f'All stations for grid points {grid_index}/{n_grids}', log=log) 1599 | grid_count = 0 1600 | 1601 | return stack, syn_stack 1602 | 1603 | -------------------------------------------------------------------------------- /param/back_projection_param.py: -------------------------------------------------------------------------------- 1 | import os 2 | import sys 3 | 4 | from subprocess import Popen, PIPE 5 | 6 | from PIL import Image 7 | 8 | """ 9 | Description: 10 | 11 | A Python file that contains all BackProjection data product parameters. You may modify this file to customize 12 | plot and animation production. All parameter definitions in this file must follow Python rules. Each 13 | parameter group in this file is commented for clarification. 14 | 15 | Copyright and License: 16 | 17 | This software Copyright (c) 2021 IRIS (Incorporated Research Institutions for Seismology). 18 | 19 | This program is free software: you can redistribute it and/or modify 20 | it under the terms of the GNU General Public License as published by 21 | the Free Software Foundation, either version 3 of the License, or (at 22 | your option) any later version. 23 | 24 | This program is distributed in the hope that it will be useful, but 25 | WITHOUT ANY WARRANTY; without even the implied warranty of 26 | MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU 27 | General Public License for more details. 28 | 29 | You should have received a copy of the GNU General Public License 30 | along with this program. If not, see http://www.gnu.org/licenses/. 31 | 32 | History: 33 | 2021-11-01 Manoch: v.2021.305 r2 (Python 3) data product release. 34 | """ 35 | 36 | # Import the aftershocks libraries. 37 | parent_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), '..')) 38 | 39 | # The run settings. 40 | log_to_screen = False 41 | verbose = False 42 | 43 | timing = True 44 | timing_threshold = 0.01 45 | 46 | # Always set to 1 as this is used for testing ONLY (reduces resolution but speeds up computation for testing). 47 | grid_factor = 1 48 | 49 | # Directories. 
50 | src_dir = os.path.join(parent_dir, 'src') 51 | image_dir = os.path.join(parent_dir, 'image') 52 | scratch_dir = os.path.join(parent_dir, 'scratch') 53 | video_dir = os.path.join(parent_dir, 'video') 54 | log_dir = os.path.join(parent_dir, 'log') 55 | xml_dir = os.path.join(parent_dir, 'xml') 56 | param_dir = os.path.join(parent_dir, 'param') 57 | lib_dir = os.path.join(parent_dir, 'lib') 58 | assets_dir = os.path.join(parent_dir, 'assets') 59 | metadata_dir = os.path.join(parent_dir, 'metadata') 60 | data_dir = os.path.join(parent_dir, 'data') 61 | 62 | ffmpeg = 'ffmpeg' 63 | try: 64 | process = Popen([ffmpeg, "-version"], stdout=PIPE) 65 | (output, err) = process.communicate() 66 | exit_code = process.wait() 67 | except Exception as ex: 68 | print(f"[ERR] ffmpeg application (ffmpeg) not found!\n{ex}") 69 | sys.exit(2) 70 | 71 | # qtfaststart enables streaming and pseudo-streaming of QuickTime and 72 | # MP4 files by moving metadata and offset information to the beginning of the file. 73 | # set to None to disable. 74 | qtfaststart = None 75 | 76 | # Time integration resampling, This is to speed up processing. Time integration will be performed at 77 | # the resampling sample interval. Set to None to prevent resampling. The output sample rate will only exactly match 78 | # the selected decimation rate if the original to final rate ratio is factorable by 2,3,5 and 7. Otherwise, 79 | # the closest factorable rate will be chosen. 80 | decimate = 1 81 | 82 | # The global trenches file. 83 | # For available linestyles see: https://matplotlib.org/stable/gallery/lines_bars_and_markers/linestyles.html 84 | global_trenches_file = 'global_trenches_AV.txt' 85 | trench_linestyle = (0, (5, 10)) 86 | 87 | # Plot and animations. 88 | figure_size = (9.5, 11) 89 | dpi = 150 90 | video_size = (8, 10.8) 91 | video_dpi = 150 92 | frames_per_second = 6 93 | tile_frames_per_second = 8 94 | 95 | # Fill color for the beam power plot. 96 | beam_power_fill_color = 'silver' 97 | 98 | # Colormap to display local maxima by time. 99 | # https://matplotlib.org/stable/tutorials/colors/colormaps.html 100 | # To use time_cmap, set time_colors to an empty list. 101 | time_colors = ['#2f4f4f', '#8b4513', '#006400', '#bdb76b', '#4b0082', '#ff0000', '#00ced1', '#ffa500', '#ffff00', 102 | '#00ff00', '#00fa9a', '#0000ff', '#ff00ff', '#6495ed', '#ff1493', '#ffc0cb'] 103 | time_color = list() 104 | time_alpha = 0.5 105 | time_cmap = 'jet' 106 | 107 | # Logo for the plot. 108 | logo_file = '' 109 | logo_image = os.path.join(assets_dir, logo_file) 110 | if os.path.isfile(logo_image): 111 | image = Image.open(logo_image) 112 | logo_width, logo_height = image.size 113 | elif logo_file: 114 | print(f'[WARN] logo file {logo_image} not found') 115 | logo_width = 0 116 | logo_height = 70 117 | else: 118 | logo_width = 0 119 | logo_height = 70 120 | 121 | """Code from the question the OffsetImage is given an argument zoom=0.9. 122 | This means that each pixel of the original image takes 0.9/0.72=1.25 pixels on the screen. 123 | Hence 5 pixels of the original image needs to be squeezed into 4 pixels on the screen. 124 | see: https://stackoverflow.com/questions/48639369/does-adding-images-in-pyplot-lowers-their-resolution""" 125 | logo_zoom = 72.0 / dpi 126 | logo_alpha = 1 127 | 128 | logo_x = logo_width * logo_zoom * 0.75 129 | logo_y = logo_height * logo_alpha * 0.75 130 | 131 | # Padding in pixels between logo and text. 
132 | logo_padding = 3 133 | 134 | # Logo location as axes pixels 135 | """'figure points' points from the lower left corner of the figure 136 | 'figure pixels' pixels from the lower left corner of the figure 137 | 'figure fraction' (0, 0) is lower left of figure and (1, 1) is upper right 138 | 'axes points' points from lower left corner of axes 139 | 'axes pixels' pixels from lower left corner of axes 140 | 'axes fraction' (0, 0) is lower left of axes and (1, 1) is upper right 141 | 'data' use the axes data coordinate system""" 142 | logo_coords = 'axes fraction' 143 | logo_location = (0.01, 0.01) 144 | 145 | # Label for plot timestamp. 146 | doi = '10.17611/dp/bp.1' 147 | production_label = f'dp.backprojection doi:{doi}' 148 | production_label_font_size = 8 149 | 150 | font_size = dict() 151 | font_size['video'] = {'label': 12, 'legend': 10, 'title': 18, 'time': 16, 'network': 16, 'insufficient_data': 18} 152 | font_size['image'] = {'label': 11, 'legend': 10, 'title': 14, 'time': 16, 'network': 16, 'insufficient_data': 18} 153 | 154 | fedcatalog_service_url = 'http://service.iris.edu/irisws/fedcatalog/1/query?' 155 | 156 | # Set sta_too_close_km <= 0 to disable sparsifying. 157 | vn_name = 'NA' 158 | virtual_networks = {'GSN': {'lat': -90.0, 'lon': 0.0, 'name': 'GSN Stations', 159 | 'color': 'red', 'marker': '^', 'ccc_min': (0.5, 0.55), 'network': '_GSN', 'xcorr_min': 0.45, 160 | 'max_sta_count': 300, 'sta_too_close_km': 0.0, 'sta_too_close_deg': 0.0, 161 | 'std_max': 0.3, 162 | 'sta_weight_dist': 500.0, 'sta_weight_azim': -1.0, 'sparse_patch_count': 4}, 163 | 164 | 'AU': {'lat': -24.3, 'lon': 134.4, 'name': 'Australia', 165 | 'color': 'red', 'marker': '^', 'ccc_min': (0.6, 0.65), 'network': '*', 'xcorr_min': 0.55, 166 | 'max_sta_count': 50, 'sta_too_close_km': 111.19, 'sta_too_close_deg': 2.0, 167 | 'std_max': 0.3, 168 | 'sta_weight_dist': 250.0, 'sta_weight_azim': -1.0, 'sparse_patch_count': 4}, 169 | 170 | 'NA': {'lat': 38.9, 'lon': -98.4, 'name': 'North America', 171 | 'color': 'red', 'marker': '^', 'ccc_min': (0.6, 0.65), 'network': '*', 'xcorr_min': 0.55, 172 | 'max_sta_count': 50, 'sta_too_close_km': 111.19, 'sta_too_close_deg': 2.0, 173 | 'std_max': 0.3, 174 | 'sta_weight_dist': 250.0, 'sta_weight_azim': -1.0, 'sparse_patch_count': 4}, 175 | 176 | 'EU': {'lat': 51.6, 'lon': 20.6, 'name': 'Europe', 177 | 'color': 'red', 'marker': '^', 'ccc_min': (0.6, 0.65), 'network': '*', 'xcorr_min': 0.55, 178 | 'max_sta_count': 50, 'sta_too_close_km': 111.19, 'sta_too_close_deg': 2.0, 179 | 'std_max': 0.3, 180 | 'sta_weight_dist': 250.0, 'sta_weight_azim': -1.0, 'sparse_patch_count': 4} 181 | } 182 | earthquakes = {'color': 'blue', 'marker': '*'} 183 | 184 | 185 | # Earth radius, km. 186 | earth_radius = 6378.1 187 | 188 | sta_too_close_deg_inc = 0.25 189 | #sta_too_close_deg_inc = {'default': 0.25, 'condition': '>', 'ranges': {500: 0.5, 900: 1.0}} 190 | sta_too_close_deg_init = {'default': 0.25, 'condition': '>', 'ranges': {250: 0.25, 500: 0.5, 750: 0.75, 1000: 1.0}} 191 | 192 | vn_check_dist = False 193 | vn_azimuth = 25 194 | vn_min_radius = 0 195 | 196 | # For these virtual networks do not perform azimuth check. 
197 | vn_azimuth_exception_list = ['GSN'] 198 | 199 | # Waveform request time window in seconds before and after the event time: 200 | request_time_before = 600.0 201 | request_time_after = {'default': 1300, 'condition': '>', 'ranges': {8.6: 1600.0}} 202 | 203 | #channel_order = {'BHZ': 1, 'HHZ': 2} 204 | channel_order = {'BHZ': 1} 205 | request_channel = ','.join(list(channel_order.keys())) 206 | 207 | dc_to_exclude = [] 208 | 209 | # STD QC of waveforms. STD of the trace within std_window seconds before the event time is calculated. If computed 210 | # STD is more than std_max, trace is rejected. 211 | std_check = True 212 | std_window = request_time_before * 0.8 213 | std_offset = 10 214 | 215 | # Merge traces with gaps and fill with zero. 216 | merge_gaps = False 217 | 218 | # To avoid making one single large request for data to a data center, it is better to make multiple requests. 219 | # The parameter _chunck_count_ in the parameter file determines the number of 220 | # stations per request (chunk) that will be sent to each data center. This number should be adjusted based on the 221 | # number of station-channels involved and the length of each request to avoid memory issues or data center timeouts. 222 | chunk_count = 10 223 | 224 | # Filter [filt1,filt2,filt3,filt4] 225 | trace_filter = {'low': {'corners': (0.05, 0.25), 'label': '0.05 to 0.25 Hz'}, 226 | 'high': {'corners': (0.25, 1.0), 'label': '0.25 to 1.0Hz'} 227 | } 228 | 229 | # Filter to use for the back projection animation. The "high" filter is used for all except GSN 230 | bp_filter = {'EU': 'high', 'NA': 'high', 'AU': 'high', 'GSN': 'low'} 231 | 232 | # fnthroot1; fnthroot2=1./fnthroot1 233 | # Stacking root (integer 1: linear, 3: cube root stacking) [root, fnthroot1=root, fnthroot2=1./fnthroot1]. 234 | stacking_root = 2 235 | 236 | # Beam averaging window length (seconds). 237 | beam_average_seconds = 5 238 | 239 | # ixcorryesno 240 | do_xcorr = False 241 | 242 | # Use absolute values for MCCC? 243 | xcorr_abs = False 244 | 245 | # The minimum number of stations that we must have. 246 | min_num_sta = 1 247 | 248 | # Length (sec) of synthetic trapezoid function to replace data. 249 | # 0 to use data. Less than 1.0 will give you a 1-sec wide triangle. 250 | trapezoid_length = 0 251 | 252 | # The taper fraction of the cosine taper is applied to the waveform data in the time domain before deconvolution. 253 | taper_fraction = 0.05 254 | 255 | """" Output type after deconvolution. 256 | DISP" 257 | displacement, output unit is meters 258 | "VEL" 259 | velocity, output unit is meters/second 260 | "ACC" 261 | acceleration, output unit is meters/second**2""" 262 | output_type = 'VEL' 263 | 264 | # A bandpass filter to apply in frequency domain to the data before deconvolution. 265 | pre_filter = [0.018, 0.02, 2, 2.2] 266 | prefilter_label = f'pre-filter {pre_filter[1]} to {pre_filter[2]} Hz' 267 | 268 | # Peak amplitude marker on videos. For available markers see: 269 | # https://matplotlib.org/stable/api/markers_api.html 270 | # peak_marker_lw indicates that peak_marker will be plotted when amplitude drops below this value. 271 | peak_marker = 'P' 272 | peak_marker_size = 40 273 | peak_marker_color = 'maroon' 274 | peak_marker_lw = 2 275 | peak_marker_max = 0.05 276 | 277 | # Maximum allowable distance between max peak and the event location (km). Set to a large number to deactivate. 278 | peak_offset_max_km = 1000 279 | 280 | # Phase to use (P, PP, PKIKP, S) #738. 
281 | phase = 'P' 282 | pre_phase_seconds = 5.0 283 | post_phase_seconds = 120.0 284 | 285 | stf_pre_phase_seconds = 10.0 286 | stf_post_phase_seconds = 40.0 287 | stf_search_seconds = 45 288 | 289 | # Cross-correlation shift in seconds. 290 | xcorr_window = [5.0, 15.0, 22.0, 30.0, 40.0] 291 | xcorr_shift_default = 5.0 292 | 293 | # Trace sampling. 294 | trace_sampling_frequency = 20.0 295 | trace_sampling_interval = 1.0 / trace_sampling_frequency 296 | 297 | # Min and max distance from earthquake [delmin,delmax]. 298 | eq_min_radius = 30 299 | eq_max_radius = 97.0 300 | vn_max_radius = 120.0 301 | 302 | # Distance circles to draw (degrees). 303 | distance_circles = (30.0, 60.0, 95.0) 304 | distance_circle_labels = [f'{int(d)}°' for d in distance_circles] 305 | map_width = 28000000 * 0.8 306 | 307 | # Basemap distance scale length in km. 308 | scalebar_km = 100 309 | 310 | # Find the local maxima peak and plot these within peak_marker_base_factor (0.5 = 50%) of the global maximum. 311 | peak_marker_base_factor = 0.5 312 | 313 | qc_max_peak_factor = 0.7 314 | qc_max_peak_count = 5 315 | qc_vn_max_distance = 25 316 | 317 | # Maximum peak amplitude marker size. 318 | peak_marker_size_max = 600 319 | peak_marker_size = 60 320 | 321 | # Azi_min Azimuth_max (deg going clockwise 0-360) [fazimin,fazimax]. 322 | azimuth_min = 0.0 323 | azimuth_max = 360.0 324 | 325 | # SNR window. 326 | snr_window = {'pre-p': [-50, -10], 'p': [-5, 35]} 327 | 328 | # Pre-P / P SNR on the unfiltered trace. 329 | min_snr = 2.0 330 | 331 | # Pre-P / P SNR on the filtered trace (set to None to disable). 332 | min_filtered_snr = 2.0 333 | 334 | # Should use filtered trace tp align MCCC [ifiltalign=0]. 335 | filtered_trace_align = False 336 | 337 | # Seconds before and after P-arrivals trace must exist. 338 | seconds_before_p = 45.0 339 | seconds_after_p = 30.0 340 | 341 | """ All Variable settings must be given in a dictionary structure (see lib.case): 342 | default value 343 | condition to impose >, >=, ==, <, <=, != 344 | range: value pairs 345 | """ 346 | # STF trace length settings. 347 | stf_length_dict = {'default': 60.0, 'condition': '>', 'ranges': {7.0: 100.0, 7.5: 120.0, 8.0: 170.0}} 348 | 349 | # Cross-correlation window length based on magnitude. 350 | mag_window_length_dict = {'default': 15.0, 'condition': '>', 'ranges': {7.0: 17.5, 7.9: 20.0}} 351 | 352 | # BP initial time in seconds [initialtime]. 353 | bp_initial_seconds = 0.0 354 | 355 | # Seconds before the event time when BP starts. 356 | bp_time_offset = 30.0 357 | 358 | # BP time increment [fitincrement]. 359 | bp_time_inc_dict = {'default': 0.25, 'condition': '>', 'ranges': {8.49: 0.5}} 360 | 361 | # BP total time, time increment, and time offset [ittotal]. 362 | bp_time_total_dict = {'default': 90.0, 'condition': '>', 'ranges': {6.99: 130.0, 7.49: 180.0, 7.99: 210.0, 8.49: 280.0, 363 | 8.99: 630.0, 9.09: 630.0, 9.19: 630.0}} 364 | 365 | # The total number of seconds of the Hanning averaging taper [tavg] 366 | hann_t_avg_dict = {'default': 10.0, 'condition': '>', 'ranges': {6.99: 10.0, 7.99: 15.0, 8.49: 20.0, 8.99: 30.0}} 367 | 368 | # Latitude grid increment [glatinc, gLatHalf]. 
Grid latitude starts gLatHalf before the eq latitude' 369 | grid_decimal_places = 2 370 | grid_latitude_inc_dict = {'default': 0.1, 'condition': '>', 'ranges': {8.51: 0.12, 8.99: 0.13, 9.09: 0.15}} 371 | grid_latitude_half_dict = {'default': 2.0, 'condition': '>', 'ranges': {6.99: 2.5, 7.49: 3.0, 7.99: 3.5, 8.49: 4.0, 372 | 8.99: 5.0, 9.09: 6.0, 9.19: 10.0}} 373 | # Synthetic triangle width in seconds. 374 | triangle_width = 5.0 375 | 376 | # Travel time model. 377 | travel_time_model = 'iasp91' 378 | 379 | # Create a summary plot. 380 | create_summary = True 381 | 382 | # Minimum/maximum trace time (seconds) for the summary plots 383 | max_summary_trace_time = 90 384 | min_summary_trace_time = -25 385 | 386 | # Y position of the summary plot trace labels (max 1). 387 | label_y_position = 0.70 388 | 389 | # What to create. 390 | create_animation = True 391 | 392 | # Arrow characters. 393 | arrows = (u'$\u2191$', u'$\u2193$') 394 | 395 | pcolormesh_grid_factor = 5 396 | 397 | # Resolution of boundary database to use. Can be c (crude), l (low), i (intermediate), h (high), f (full) or None. 398 | # If None, no boundary data will be read in. Higher-res datasets are much slower to draw. None will be the fastest 399 | # with no boundaries drawn. 400 | basemap_resolution = 'h' 401 | basemap_countries = False 402 | basemap_states = False 403 | 404 | # Azimuthal Equidistant Projection, shortest route from the center of the map to any other point is a straight line. 405 | basemap_projection = 'aeqd' 406 | 407 | # Options for the basemap's continent colors. 408 | fill_continents = True 409 | fill_continents_color = '#D3D3D3' 410 | fill_continents_alpha = 0.2 411 | sta_map_color = 'gray' 412 | 413 | bp_colors = [(1.0, 1.0, 1.0), 414 | (0.95, 1.0, 1.0), 415 | (0.93, 1.0, 1.0), 416 | (0.9, 1., 1.), 417 | (0.85, 1.0000, 1.0000), 418 | (0.8, 1.0000, 1.0000), 419 | (0.7, 1.0000, 1.0000), 420 | (0.6, 1.0000, 1.0000), 421 | (0.5, 1.0000, 1.0000), 422 | (0.4, 1.0000, 1.0000), 423 | (0.3, 1.0000, 1.0000), 424 | (0.2, 1.0000, 1.0000), 425 | (0.1, 1.0000, 1.0000), 426 | (0, 1.0000, 1.0000), 427 | (0.0769, 1.0000, 0.9231), 428 | (0.1538, 1.0000, 0.8462), 429 | (0.2308, 1.0000, 0.7692), 430 | (0.3077, 1.0000, 0.6923), 431 | (0.3846, 1.0000, 0.6154), 432 | (0.4615, 1.0000, 0.5385), 433 | (0.5385, 1.0000, 0.4615), 434 | (0.6154, 1.0000, 0.3846), 435 | (0.6923, 1.0000, 0.3077), 436 | (0.7692, 1.0000, 0.2308), 437 | (0.8462, 1.0000, 0.1538), 438 | (0.9231, 1.0000, 0.0769), 439 | (1.0000, 1.0000, 0), 440 | (1.0000, 0.9231, 0), 441 | (1.0000, 0.8462, 0), 442 | (1.0000, 0.7692, 0), 443 | (1.0000, 0.6923, 0), 444 | (1.0000, 0.6154, 0), 445 | (1.0000, 0.5385, 0), 446 | (1.0000, 0.4615, 0), 447 | (1.0000, 0.3846, 0), 448 | (1.0000, 0.3077, 0), 449 | (1.0000, 0.2308, 0), 450 | (1.0000, 0.1538, 0), 451 | (1.0000, 0.0769, 0), 452 | (1.0000, 0, 0), 453 | (0.9231, 0, 0), 454 | (0.8462, 0, 0), 455 | (0.7692, 0, 0)] 456 | 457 | 458 | -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | obspy >= 1.3.1 2 | matplotlib==3.5.2 3 | basemap >= 1.3.6 4 | basemap-data >= 1.3.2 5 | basemap-data-hires >= 1.3.2 6 | -------------------------------------------------------------------------------- /src/back_projection_r2.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | import sys 3 | import os 4 | from mpl_toolkits.basemap import Basemap 5 | import 
matplotlib.colors as colors 6 | import matplotlib.pyplot as plt 7 | import matplotlib.patheffects as pe 8 | from matplotlib.transforms import Bbox 9 | 10 | import warnings 11 | 12 | import shutil 13 | 14 | from obspy.clients.fdsn import Client 15 | from obspy.geodetics import degrees2kilometers 16 | from obspy.geodetics.base import gps2dist_azimuth, kilometer2degrees 17 | from obspy import UTCDateTime 18 | from obspy.core.stream import Stream 19 | 20 | from datetime import datetime 21 | 22 | from time import time 23 | import math 24 | 25 | import subprocess 26 | 27 | import getopt 28 | 29 | import glob 30 | import numpy as np 31 | 32 | from PIL import Image 33 | 34 | import matplotlib 35 | from matplotlib.offsetbox import OffsetImage, AnnotationBbox 36 | matplotlib.use('Agg') 37 | 38 | from scipy.interpolate import griddata 39 | from scipy.signal import argrelextrema 40 | import pickle 41 | 42 | # Import the back projection parameters and libraries. 43 | parent_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), '..')) 44 | lib_dir = os.path.join(parent_dir, 'lib') 45 | param_dir = os.path.join(parent_dir, 'param') 46 | 47 | sys.path.append(param_dir) 48 | sys.path.append(lib_dir) 49 | import back_projection_param as param 50 | 51 | import back_projection_lib as lib 52 | from back_projection_lib import print_message, ObjDict 53 | 54 | 55 | # Ignore user warnings. 56 | warnings.filterwarnings("ignore") 57 | 58 | """ 59 | Description: 60 | 61 | This is the Python 3 code behind the IRIS DMC's BackProjection Data product (http://ds.iris.edu/spud/backprojection) 62 | and it can producing the individual plots and animations that are part of the BackProjection product 63 | (http://ds.iris.edu/spud/backprojection). 64 | 65 | The code can be configured via its parameter file "back_projection_param.py" and via the command line arguments. 66 | Currently parameters are optimized for use with four virtual networks defined in the parameter file. 67 | 68 | Copyright and License: 69 | 70 | This software Copyright (c) 2021 IRIS (Incorporated Research Institutions for Seismology). 71 | 72 | This program is free software: you can redistribute it and/or modify 73 | it under the terms of the GNU General Public License as published by 74 | the Free Software Foundation, either version 3 of the License, or (at 75 | your option) any later version. 76 | 77 | This program is distributed in the hope that it will be useful, but 78 | WITHOUT ANY WARRANTY; without even the implied warranty of 79 | MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU 80 | General Public License for more details. 81 | 82 | You should have received a copy of the GNU General Public License 83 | along with this program. If not, see http://www.gnu.org/licenses/. 84 | 85 | History: 86 | 2021-11-01 Manoch: v.2021.305 r2 (Python 3) data product release. 87 | 2011-10-01 Alex Hutko: r1, development and initial product release (Fortran). 88 | 89 | """ 90 | 91 | # Script info. 92 | script_version = 'v.2021.305' 93 | script = sys.argv[0] 94 | script = os.path.basename(script) 95 | 96 | # Parameters. 
97 | up_arrow, down_arrow = (u'$\u2191$', u'$\u2193$')
98 | 
99 | fedcatalog_service_url = param.fedcatalog_service_url
100 | 
101 | logo_image = param.logo_image
102 | logo_zoom = param.logo_zoom
103 | logo_alpha = param.logo_alpha
104 | logo_coords = param.logo_coords
105 | logo_location = param.logo_location
106 | logo_width = param.logo_width
107 | logo_height = param.logo_height
108 | logo_padding = param.logo_padding
109 | 
110 | basemap_countries = param.basemap_countries
111 | basemap_states = param.basemap_states
112 | basemap_projection = param.basemap_projection
113 | 
114 | pcolormesh_grid_factor = param.pcolormesh_grid_factor
115 | 
116 | production_label_font_size = param.production_label_font_size
117 | 
118 | bp_filter = None
119 | 
120 | vn = param.virtual_networks
121 | vn_list = list(vn.keys())
122 | vn_min_radius = param.vn_min_radius
123 | vn_max_radius = param.vn_max_radius
124 | vn_azimuth = param.vn_azimuth
125 | 
126 | eq_min_radius = param.eq_min_radius
127 | eq_max_radius = param.eq_max_radius
128 | 
129 | dc_to_exclude = param.dc_to_exclude
130 | 
131 | grid_factor = param.grid_factor
132 | 
133 | distance_circles = param.distance_circles
134 | distance_circle_labels = param.distance_circle_labels
135 | 
136 | timing = param.timing
137 | 
138 | decimate = param.decimate
139 | pre_phase_seconds = param.pre_phase_seconds
140 | post_phase_seconds = param.post_phase_seconds
141 | 
142 | trace_sampling_frequency = param.trace_sampling_frequency
143 | trace_sampling_interval = param.trace_sampling_interval
144 | 
145 | chunk_count = param.chunk_count
146 | 
147 | earthquakes = param.earthquakes
148 | 
149 | xcorr_shift = param.xcorr_shift_default
150 | 
151 | create_synthetics = param.create_animation
152 | create_summary = param.create_summary
153 | create_animation = param.create_animation
154 | 
155 | image_dir = lib.mkdir(param.image_dir)
156 | video_dir = lib.mkdir(param.video_dir)
157 | scratch_dir = lib.mkdir(param.scratch_dir)
158 | log_dir = lib.mkdir(param.log_dir)
159 | metadata_dir = lib.mkdir(param.metadata_dir)
160 | data_dir = lib.mkdir(param.data_dir)
161 | 
162 | font_size = param.font_size
163 | 
164 | log_to_screen = param.log_to_screen
165 | 
166 | # Travel time cache to speed things up.
167 | tt_cache = dict()
168 | 
169 | # Distance between pair of stations.
170 | intra_station_dist = dict()
171 | 
172 | # Station inventory list.
173 | inventory_list = dict()
174 | 
175 | coastline_skipped = False
176 | verbose = param.verbose
177 | 
178 | 
179 | def usage():
180 | """The usage message.
181 | """
182 | new_line = '\n'
183 | print(f'{new_line}{new_line}{script} ({script_version}):', file=sys.stdout, flush=True)
184 | 
185 | print(f'{new_line}{new_line}This is the Python 3 code behind the IRIS DMC\'s BackProjection data product (BP):\n'
186 | f'http://ds.iris.edu/ds/products/backprojection/\n'
187 | f'\nand can produce the individual plots and animations that are part of the BackProjection product:\n'
188 | f'http://ds.iris.edu/spud/backprojection{new_line}', file=sys.stdout, flush=True)
189 | 
190 | print(f'The code can be configured via its parameter file "back_projection_param.py" or via the '
191 | f'command line arguments. 
{new_line}{new_line}' 192 | f'Currently parameters are optimized for use with four preconfigured virtual networks:', 193 | file=sys.stdout, flush=True) 194 | 195 | print(f'\t\tvirtual network\t\tname', file=sys.stdout, flush=True) 196 | print(f'\t\t{15 * "="}\t\t{11 * "="}', file=sys.stdout, flush=True) 197 | _vn = param.virtual_networks 198 | _ind = sorted(_vn.keys()) 199 | for _i in _ind: 200 | print(f'\t\t{_i}\t\t\t{_vn[_i]["name"]}', file=sys.stdout, flush=True) 201 | 202 | print(f'{new_line}Virtual networks could be modified by changing the ' 203 | f'"virtual_networks" parameter in the parameter file.' 204 | f'{new_line}{new_line}command line options:{new_line}' 205 | f'\t-h --help\t\tthis message{new_line}' 206 | f'\t-v --verbose \t\t[default: {verbose}] turns the verbose mode on{new_line}' 207 | f'\t-a --anim\t\t[default: {create_animation}] create animations [True/False]{new_line}' 208 | f'\t-l --logscreen\t\t[default: {param.log_to_screen}] send the log messages to screen{new_line}' 209 | f'\t-n --vnet\t\t[required] virtual network code (see above) {new_line}' 210 | f'\t-m --emag\t\t[required] event magnitude {new_line}' 211 | f'\t-s --summary\t\t[default: {create_summary}] create summary plot [True/False]{new_line}' 212 | f'\t-t --etime\t\t[required] the event time as "YYYY-MM-DDTHH:MM:SS" {new_line}' 213 | f'\t-x --elon\t\t[required] the event longitude {new_line}' 214 | f'\t-y --elat\t\t[required] the event latitude {new_line}' 215 | f'\t-z --edepth\t\t[required] the event depth (km) {new_line}' 216 | f'\t-d --decimate\t\t[default: {decimate}] the desired animations sample rate in seconds' 217 | f'{new_line}\t\t\t\tThe output sample rate will only exactly match the selected' 218 | f'{new_line}\t\t\t\tdecimation rate if the ratio of original to final rate is a whole number' 219 | f'{new_line}\t\t\t\tOtherwise, the closest factorable rate will be chosen. {new_line}' 220 | f'\t-f --factor\t\t[default: {grid_factor}] this parameter could be used for testing. ' 221 | f'The grid spacing is {new_line}' 222 | f'\t\t\t\tmultiplied by this factor (-f 5 or -f 10 are reasonable choices) {new_line}' 223 | f'\t\t\t\tand as a result, resolution is reduced, and computation takes place faster.{new_line}' 224 | f'\t\t\t\tYou could use this option to ' 225 | f'test your parameters before a production run.{new_line}{new_line}' 226 | f'Examples:{new_line}' 227 | f'\tlower resolution (faster, good for tuning parameters):{new_line}' 228 | f'\t\tpython src/{script} -m 7.8 -y 55.030 -x -158.522 -z 28 -n AU -t 2020-07-22T06:12:44 -l -f 5' 229 | f'{new_line}{new_line}\thigher resolution (slower, for the final product):{new_line}' 230 | f'\t\tpython src/{script} -m 7.8 -y 55.030 -x -158.522 -z 28 -n AU -t 2020-07-22T06:12:44 -l' 231 | , file=sys.stdout, flush=True) 232 | print('\n\n', file=sys.stdout, flush=True) 233 | 234 | 235 | def full_extent(ax, pad=0.0): 236 | """Get the full extent of an axes, including axes labels, tick labels, and 237 | titles. FroM; 238 | https://stackoverflow.com/questions/4325733/save-a-subplot-in-matplotlib/4328608""" 239 | # For text objects, we need to draw the figure first, otherwise the extents 240 | # are undefined. 
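# Editor's note: a typical use of this helper (per the Stack Overflow recipe cited
# above) is to save a single subplot; e.g., assuming a figure fig containing ax:
#     extent = full_extent(ax, pad=0.05).transformed(fig.dpi_scale_trans.inverted())
#     fig.savefig('subplot.png', bbox_inches=extent)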
241 | ax.figure.canvas.draw() 242 | items = ax.get_xticklabels() + ax.get_yticklabels() 243 | items += [ax, ax.title] 244 | bbox = Bbox.union([item.get_window_extent() for item in items]) 245 | return bbox.expanded(1.0 + pad, 1.0 + pad) 246 | 247 | 248 | def plot_stack(_stack): 249 | """This a debug function to plot a list""" 250 | for _grid_key in _stack: 251 | stack_list = list() 252 | for _t_key in _stack[_grid_key]: 253 | stack_list.append(_stack[_grid_key][_t_key]) 254 | plt.plot(stack_list) 255 | plt.show() 256 | return 257 | 258 | 259 | def make_insufficient_data_animation(eq_lat, eq_lon, eq_magnitude, search_radius, anim_tag=''): 260 | """Create an insufficient data image.""" 261 | 262 | # Since Basemap takes a while to draw, save it as a pickle file and reload it in the loop. 263 | # File for use by Python's object serialization module pickle. Added time tag to make it unique to this run. 264 | pickle_file = os.path.join(param.scratch_dir, f'{vn_name}_{int(datetime.now().timestamp())}.pickle') 265 | save_pickle = True 266 | 267 | # What type of media is generated? 268 | media = 'video' 269 | 270 | coastline_skipped = False 271 | title = f"{eq_datetime.strftime('%Y-%m-%d %H:%M:%S')} M{eq_magnitude} Z={eq_depth}km\n" \ 272 | f"{vn_name} VNet, {param.trace_filter[param.bp_filter[vn_name]]['label']}" 273 | if anim_tag == 'syn': 274 | anim_tag = f'BP_{anim_tag}' 275 | title = f'{title} (ARF, synthetics)' 276 | else: 277 | anim_tag = f'BP' 278 | 279 | t7 = time() 280 | 281 | # Get grid locations. 282 | latitude, longitude = lib.set_grid(eq_lat, eq_lon, eq_magnitude) 283 | 284 | lon_0 = eq_lon 285 | lat_0 = eq_lat 286 | 287 | # Video frame layout. 288 | subplot_columns = 1 289 | subplot_tall_rows = 1 290 | subplot_short_rows = 1 291 | tall_to_short_height = 3 292 | 293 | file_tag = '_'.join([anim_tag, vn_name, lib.file_name_tag(eq_datetime)]) 294 | screen_file_tag = '_'.join([anim_tag, 'screen', vn_name, lib.file_name_tag(eq_datetime)]) 295 | 296 | # Remove image files if they exist from previous runs. 297 | try: 298 | files_to_remove = glob.glob(f'{os.path.join(scratch_dir, file_tag)}*.png') 299 | for this_file in files_to_remove: 300 | os.remove(this_file) 301 | except Exception as _er: 302 | print_message('ERR', f'Failed to remove\n {_er}', flush=True, log=log_file) 303 | 304 | delimiter = ' ' 305 | # Production date time stamp. 306 | production_date = lib.version_timestamp(script_version, search_radius, delimiter=delimiter) 307 | 308 | # The moving dot should show color of amplitude. 309 | normalize = colors.Normalize(vmin=0, vmax=1) 310 | 311 | fig = plt.figure(figsize=param.video_size, facecolor='white') 312 | ax = plt.subplot2grid((subplot_tall_rows * tall_to_short_height + subplot_short_rows, subplot_columns), 313 | (0, 0), rowspan=tall_to_short_height, colspan=1) 314 | 315 | lat_min =latitude['start'] 316 | lat_max = latitude['end'] 317 | lon_min = longitude['start'] 318 | lon_max = longitude['end'] 319 | 320 | # Create the basemap 321 | width = lat_max - lat_min 322 | width = degrees2kilometers(width) * 1000.0 323 | print_message('INFO', f'Basemap(width={width}, height={width}, projection={basemap_projection}, lat_0={lat_0}, ' 324 | f'lon_0={lon_0}, resolution={param.basemap_resolution})', flush=True, log=log_file) 325 | bm = Basemap(width=width, height=width, projection=basemap_projection, 326 | lat_0=lat_0, lon_0=lon_0, resolution=param.basemap_resolution) 327 | 328 | # Avoid areas without coastlines. 
329 | try: 330 | bm.drawcoastlines(color=param.fill_continents_color) 331 | except Exception as ex: 332 | if not coastline_skipped: 333 | print_message('WARN', f'Skipped drawcoastlines:\n{ex}', flush=True, log=log_file) 334 | coastline_skipped = True 335 | pass 336 | 337 | if basemap_countries: 338 | bm.drawcountries(color=param.fill_continents_color) 339 | if basemap_states: 340 | bm.drawstates(color=param.fill_continents_color) 341 | 342 | # labels = [left,right,top,bottom]. 343 | bm.drawparallels(np.arange(int(lat_min), int(lat_max), 1), labels=[1, 0, 0, 0], 344 | fontsize=font_size[media]['label'], linewidth=0.0, 345 | labelstyle='+/-', fmt='%0.0f') 346 | bm.drawmeridians(np.arange(int(lon_min), int(lon_max), 2), labels=[0, 0, 0, 1], 347 | rotation=0, fontsize=font_size[media]['label'], 348 | linewidth=0.0, 349 | labelstyle='+/-', fmt='%0.0f') 350 | 351 | trench_x, trench_y = lib.read_global_trenches(bmap=bm) 352 | 353 | # Earthquake location as map units. 354 | xpt, ypt = bm(eq_lon, eq_lat) 355 | 356 | # Mark the earthquake location. 357 | bm.plot([xpt], [ypt], marker=earthquakes['marker'], markerfacecolor='black', 358 | markeredgecolor=earthquakes['color'], 359 | markersize=15, label='event') 360 | 361 | # plt.ylabel('Latitude', labelpad=param.ylabel_pad, fontsize=font_size[media]['label']) 362 | # plt.xlabel('Longitude', labelpad=param.xlabel_pad, fontsize=font_size[media]['label']) 363 | 364 | bm.plot(trench_x, trench_y, color='black', linestyle=param.trench_linestyle, linewidth=0.5) 365 | 366 | # Insufficient Data in the middle. 367 | plt.text(0.5, 0.8, f'Insufficient Data', fontsize=font_size[media]['insufficient_data'], 368 | horizontalalignment='center', 369 | verticalalignment='center', 370 | backgroundcolor='white', color='maroon', weight='bold', transform=ax.transAxes) 371 | 372 | plt.title(title, fontsize=font_size[media]['title']) 373 | 374 | # Get info on this subplot so we can align the one below it. 375 | map_bbox = ax.get_position() 376 | 377 | # Plot the beam power. 378 | ax0 = plt.subplot2grid((subplot_tall_rows * tall_to_short_height + subplot_short_rows, subplot_columns), 379 | (3, 0), rowspan=1, colspan=1) 380 | ax0.axes.yaxis.set_ticklabels([]) 381 | 382 | # Get info on this subplot so we can align the one above it. 383 | # We always want to adopt the map's width since it is dynamic. 384 | power_bbox = ax0.get_position() 385 | ax0.set_position([map_bbox.x0, power_bbox.y0, map_bbox.width, power_bbox.height]) 386 | 387 | # Plot the logo. 
388 | logo_x0 = map_bbox.x0 * param.video_dpi 389 | if os.path.isfile(param.logo_image): 390 | image = np.array(Image.open(param.logo_image)) 391 | im = OffsetImage(image, zoom=logo_zoom, alpha=logo_alpha, zorder=1000) 392 | b_box = AnnotationBbox(im, xy=logo_location, xycoords=logo_coords, box_alignment=(0.0, 0.0), 393 | boxcoords='offset pixels', frameon=False) 394 | ax0.add_artist(b_box) 395 | 396 | xytext = (logo_padding, - 1.5 * logo_height) 397 | 398 | plt.annotate(production_date, xy=xytext, xycoords='axes pixels', 399 | xytext=(0, 0), 400 | textcoords='offset pixels', horizontalalignment='left', 401 | fontsize=production_label_font_size, 402 | verticalalignment='center') 403 | plt.savefig(os.path.join(scratch_dir, f'{file_tag}_' 404 | f'{0:06d}.png'), bbox_inches='tight', pad_inches=0.25, 405 | dpi=param.video_dpi, facecolor='white') 406 | plt.close() 407 | 408 | print_message('INFO', f'Creating the video:', flush=True, log=log_file) 409 | # Apply -vf pad=ceil(iw/2)*2:ceil(ih/2)*2 filter to avoid eight not divisible by 2 (1644x1491) 410 | # error without rescaling. 411 | # The -r before the input means the video will play at that number of the original images per second. 412 | # Have to define -r for both input and output to avoid dropped frames due to different default input and 413 | # and the desired output rate. 414 | command = f'{param.ffmpeg} -loop 1 -r {param.frames_per_second} ' \ 415 | f'-i {os.path.join(scratch_dir, file_tag)}_%06d.png ' \ 416 | f'-c:v libx264 -pix_fmt yuv420p -crf 23 -t {bp_t_total} ' \ 417 | f'-r {param.frames_per_second} ' \ 418 | f'-vf pad=ceil(iw/2)*2:ceil(ih/2)*2 ' \ 419 | f'-y {os.path.join(video_dir, file_tag)}.mp4'.split() 420 | print_message('INFO', f'Creating the video: {command}', flush=True, log=log_file) 421 | subprocess.call(command) 422 | 423 | if param.qtfaststart is not None: 424 | command = f'{param.qtfaststart} {os.path.join(video_dir, file_tag)}.mp4 ' \ 425 | f'{os.path.join(video_dir, file_tag)}_q.mp4'.split() 426 | print_message('INFO', f'QTfaststart the video: {command}', flush=True, log=log_file) 427 | subprocess.call(command) 428 | 429 | # Remove image files if they exist from previous runs. 430 | try: 431 | files_to_remove = glob.glob(f'{os.path.join(scratch_dir, file_tag)}*.png') 432 | for file_index, this_file in enumerate(files_to_remove): 433 | if file_index == 0: 434 | shutil.move(this_file, f'{os.path.join(video_dir, screen_file_tag)}.png') 435 | else: 436 | os.remove(this_file) 437 | except Exception as _er: 438 | print_message('ERR', f'Failed to remove\n {_er}', flush=True, log=log_file) 439 | 440 | return 441 | 442 | 443 | def make_animation(anim_stack_start, anim_stack_end, anim_stack_amp, anim_stack_amp_loc, anim_stack_max, 444 | anim_stack_max_loc, anim_global_max, search_radius, anim_tag='', grid_factor=1): 445 | """Create animation from the stacked traces.""" 446 | 447 | # What type of media is generated? 448 | media = 'video' 449 | 450 | image_count = 0 451 | 452 | coastline_skipped = False 453 | 454 | # Since Basemap takes a while to draw, save it as a pickle file and reload it in the loop. 455 | # File for use by Python's object serialization module pickle. Added time tag to make it unique to this run. 
456 | pickle_file = os.path.join(param.scratch_dir, f'{vn_name}_{int(datetime.now().timestamp())}.pickle') 457 | save_pickle = True 458 | 459 | title = f"{eq_datetime.strftime('%Y-%m-%d %H:%M:%S')} M{eq_magnitude} Z={eq_depth}km\n" \ 460 | f"{vn_name} VNet, {param.trace_filter[param.bp_filter[vn_name]]['label']}" 461 | if anim_tag == 'syn': 462 | anim_tag = f'BP_{anim_tag}' 463 | title = f'{title} (ARF, synthetics)' 464 | else: 465 | anim_tag = f'BP' 466 | 467 | t7 = time() 468 | 469 | # Get grid locations. 470 | latitude, longitude = lib.set_grid(eq_lat, eq_lon, eq_magnitude, grid_factor=grid_factor) 471 | 472 | lon_0 = eq_lon 473 | lat_0 = eq_lat 474 | 475 | # Make the custom color map. 476 | bp_cmap = lib.make_cmap(param.bp_colors, bit=False, log=log_file) 477 | 478 | # Video frame layout. 479 | subplot_columns = 1 480 | subplot_tall_rows = 1 481 | subplot_short_rows = 1 482 | tall_to_short_height = 3 483 | 484 | file_tag = '_'.join([anim_tag, vn_name, lib.file_name_tag(eq_datetime)]) 485 | 486 | # Remove image files if they exist from previous runs. 487 | try: 488 | files_to_remove = glob.glob(f'{os.path.join(scratch_dir, file_tag)}*.png') 489 | for this_file in files_to_remove: 490 | os.remove(this_file) 491 | except Exception as _er: 492 | print_message('ERR', f'Failed to remove\n {_er}', flush=True, log=log_file) 493 | 494 | time_key_list = list(anim_stack_amp.keys()) 495 | 496 | delimiter = ' ' 497 | # Production date time stamp. 498 | production_date = lib.version_timestamp(script_version, search_radius, delimiter=delimiter) 499 | 500 | # The moving dot should show color of amplitude. 501 | normalize = colors.Normalize(vmin=0, vmax=1) 502 | max_peak = -1 503 | for time_key in time_key_list: 504 | # Limit the animation to the actual BP time limits. 505 | if float(time_key) < anim_stack_start or float(time_key) > anim_stack_end: 506 | continue 507 | 508 | fig = plt.figure(figsize=param.video_size, facecolor='white') 509 | ax = plt.subplot2grid((subplot_tall_rows * tall_to_short_height + subplot_short_rows, subplot_columns), 510 | (0, 0), rowspan=tall_to_short_height, colspan=1) 511 | 512 | lat = list() 513 | lon = list() 514 | val = list() 515 | for _index, _val in enumerate(anim_stack_amp[time_key]): 516 | lat_key, lon_key = anim_stack_amp_loc[time_key][_index] 517 | lat.append(float(lat_key)) 518 | lon.append(float(lon_key)) 519 | val.append(_val / anim_global_max) 520 | 521 | # Must be np arrays for grid 522 | lat_list = np.array(lat) 523 | lon_list = np.array(lon) 524 | value_list = np.array(val) 525 | 526 | # Find the min and max of coordinates. 527 | lon_min = lon_list.min() 528 | lat_min = lat_list.min() 529 | lon_max = lon_list.max() 530 | lat_max = lat_list.max() 531 | 532 | # Create the basemap and save it to reuse 533 | if save_pickle: 534 | width = lat_max - lat_min 535 | width = degrees2kilometers(width) * 1000.0 536 | bm = Basemap(width=width, height=width, projection=basemap_projection, 537 | lat_0=lat_0, lon_0=lon_0, resolution=param.basemap_resolution) 538 | 539 | # Avoid areas without coastlines. 
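# Note (added comment): Basemap's drawcoastlines() can fail for map windows that contain no
# coastline segments, so the call below is wrapped in try/except; the warning is printed only
# once (coastline_skipped) and the frame is drawn without coastlines.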
540 | try: 541 | bm.drawcoastlines(color=param.fill_continents_color) 542 | except Exception as ex: 543 | if not coastline_skipped: 544 | print_message('WARN', f'Skipped drawcoastlines:\n{ex}', flush=True, log=log_file) 545 | coastline_skipped = True 546 | pass 547 | 548 | if basemap_countries: 549 | bm.drawcountries(color=param.fill_continents_color) 550 | if basemap_states: 551 | bm.drawstates(color=param.fill_continents_color) 552 | 553 | pickle.dump(bm, open(pickle_file, 'wb'), -1) 554 | save_pickle = False 555 | 556 | else: 557 | # Read pickle back in and plot it again (should be much faster). 558 | bm = pickle.load(open(pickle_file, 'rb')) 559 | 560 | # labels = [left,right,top,bottom]. 561 | bm.drawparallels(np.arange(int(lat_min), int(lat_max), 1), labels=[1, 0, 0, 0], 562 | fontsize=font_size[media]['label'], 563 | linewidth=0.0, 564 | labelstyle='+/-', fmt='%0.0f') 565 | bm.drawmeridians(np.arange(int(lon_min), int(lon_max), 2), labels=[0, 0, 0, 1], rotation=0, 566 | fontsize=font_size[media]['label'], 567 | linewidth=0.0, 568 | labelstyle='+/-', fmt='%0.0f') 569 | 570 | # Avoid areas without coastlines. 571 | try: 572 | bm.drawcoastlines(color=param.fill_continents_color) 573 | except Exception as ex: 574 | if not coastline_skipped: 575 | print_message('WARN', f'Skipped drawcoastlines:\n{ex}', flush=True, log=log_file) 576 | coastline_skipped = True 577 | pass 578 | 579 | if basemap_countries: 580 | bm.drawcountries(color=param.fill_continents_color) 581 | if basemap_states: 582 | bm.drawstates(color=param.fill_continents_color) 583 | 584 | # Read global trenches coordinates. 585 | trench_x, trench_y = lib.read_global_trenches(bmap=bm) 586 | 587 | # Earthquake location in map units. 588 | xpt, ypt = bm(eq_lon, eq_lat) 589 | 590 | # Mark the earthquake location. 591 | bm.plot([xpt], [ypt], marker=earthquakes['marker'], markerfacecolor=earthquakes['color'], 592 | markeredgecolor='white', 593 | markersize=15, label='event') 594 | 595 | # plt.ylabel('Latitude', labelpad=param.ylabel_pad, fontsize=font_size[media]['label']) 596 | # plt.xlabel('Longitude', labelpad=param.xlabel_pad, fontsize=font_size[media]['label']) 597 | 598 | # Plot a vertical distance scale. 599 | scale_deg = kilometer2degrees(param.scalebar_km) 600 | lat0 = lat_min + (lat_max - lat_min) / 4.0 601 | lon0 = lon_max - (lon_max - lon_min) / 10.0 602 | lat1, lon1 = lib.get_location(lat0, lon0, 0, scale_deg) 603 | xs, ys = bm((lon0, lon0), (lat0, lat1)) 604 | 605 | # Use the same _X to ensure scale is vertical. 606 | bm.plot([xs[0], xs[0]], ys, color='black', linewidth=2) 607 | plt.text(xs[0], ys[1], f' {param.scalebar_km} km', fontsize=font_size[media]['legend'], 608 | horizontalalignment='center', rotation=90, 609 | verticalalignment='bottom', 610 | path_effects=[pe.withStroke(linewidth=2, foreground='white')]) 611 | 612 | # Now let's grid the data. Find the number of grid points in each direction. 613 | lon_num = pcolormesh_grid_factor * int(((lon_max - lon_min) / longitude['inc']) + 1) 614 | lat_num = pcolormesh_grid_factor * int(((lat_max - lat_min) / latitude['inc']) + 1) 615 | 616 | # Create a uniform mesh for contouring. First transfer lon, lat to map units (x, y). 617 | x_old, y_old = bm(lon_list, lat_list) 618 | x_new = np.linspace(min(x_old), max(x_old), lon_num) 619 | y_new = np.linspace(min(y_old), max(y_old), lat_num) 620 | 621 | # Basic mesh in map's x, y. 622 | grid_x, grid_y = np.meshgrid(x_new, y_new) 623 | 624 | try: 625 | # Interpolate at the new grid points. 
626 | # Method : {'linear', 'nearest', 'cubic'}, optional. 627 | grid_v = griddata((x_old, y_old), value_list, (grid_x, grid_y), method='cubic') 628 | except Exception as _er: 629 | print_message('ERR', f'Griding failed: {_er}', log=log_file) 630 | sys.exit(1) 631 | 632 | # Create a pseudocolor plot. 633 | bm.pcolormesh(grid_x, grid_y, grid_v, cmap=bp_cmap, alpha=0.7, shading='auto', linewidths=0, 634 | vmin=0, vmax=1) 635 | 636 | # Plot a + at the peak location. 637 | if anim_stack_max[time_key] / anim_global_max < param.peak_marker_max: 638 | _x, _y = bm(anim_stack_max_loc[time_key][1], anim_stack_max_loc[time_key][0]) 639 | bm.scatter(_x, _y, s=param.peak_marker_size, marker=param.peak_marker, facecolor=param.peak_marker_color, 640 | edgecolors=param.peak_marker_color, linewidths=param.peak_marker_lw, 641 | linestyle='None', zorder=1000) 642 | 643 | # Plot the trench. 644 | bm.plot(trench_x, trench_y, color='black', linestyle=param.trench_linestyle, linewidth=0.5) 645 | 646 | # Frame time on the upper right 647 | plt.text(0.95, 0.95, f'{float(time_key):0.1f} sec ({anim_stack_max_loc[time_key][0]:0.2f}, ' 648 | f'{anim_stack_max_loc[time_key][1]:0.2f})', fontsize=font_size[media]['time'], 649 | horizontalalignment='right', verticalalignment='center', 650 | backgroundcolor='white', transform=ax.transAxes) 651 | 652 | plt.title(title, fontsize=font_size[media]['title']) 653 | 654 | # Get info on this subplot so we can align the one below it. 655 | map_bbox = ax.get_position() 656 | 657 | # Plot the beam power. 658 | ax0 = plt.subplot2grid((subplot_tall_rows * tall_to_short_height + subplot_short_rows, subplot_columns), 659 | (3, 0), rowspan=1, colspan=1) 660 | times = list(anim_stack_max.keys()) 661 | times = list(np.array(times, dtype=float)) 662 | values = list(anim_stack_max.values()) 663 | max_value = max(values) 664 | values = np.array(values) / max_value 665 | values = list(values) 666 | ax0.fill_between(times, 0, values, facecolor=param.beam_power_fill_color) 667 | ax0.set_xlim(stack_start, stack_end) 668 | ax0.set_ylim(bottom=0.0) 669 | ax0.set_xlabel('time relative to origin (sec.)') 670 | ax0.set_ylabel('beam power') 671 | ax0.axes.yaxis.set_ticklabels([]) 672 | 673 | # Get info on this subplot so we can align the one above it. 674 | # We always want to adopt the map's width since it is dynamic. 675 | power_bbox = ax0.get_position() 676 | ax0.set_position([map_bbox.x0, power_bbox.y0, map_bbox.width, power_bbox.height]) 677 | 678 | # Plot the logo. 
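# Note (added comment): the logo is added as an OffsetImage wrapped in an AnnotationBbox anchored
# at logo_location (interpreted in logo_coords units) with pixel offsets; frameon=False keeps it borderless.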
679 | logo_x0 = map_bbox.x0 * param.video_dpi
680 | if os.path.isfile(param.logo_image):
681 | image = np.array(Image.open(param.logo_image))
682 | im = OffsetImage(image, zoom=logo_zoom, alpha=logo_alpha, zorder=1000)
683 | b_box = AnnotationBbox(im, xy=logo_location, xycoords=logo_coords, box_alignment=(0.0, 0.0),
684 | boxcoords='offset pixels', frameon=False)
685 | ax0.add_artist(b_box)
686 | 
687 | xytext = (logo_padding, - 1.5 * logo_height)
688 | 
689 | plt.vlines(float(time_key), 0.0, 1.0)
690 | v = float(anim_stack_max[time_key]) / max_value
691 | if v > max_peak:
692 | max_time_frame_file = f'{os.path.join(scratch_dir, file_tag)}_{image_count + 1:06d}.png'
693 | max_peak = v
694 | plt.scatter([float(time_key)], [v], cmap=bp_cmap,
695 | c=[v], s=40, marker='o', edgecolors='k', linewidths=0.3, norm=normalize,
696 | linestyle='None', zorder=1000)
697 | 
698 | image_count += 1
699 | 
700 | plt.annotate(production_date, xy=xytext, xycoords='axes pixels',
701 | xytext=(0, 0),
702 | textcoords='offset pixels', horizontalalignment='left',
703 | fontsize=production_label_font_size,
704 | verticalalignment='center')
705 | plt.savefig(os.path.join(scratch_dir, f'{file_tag}_'
706 | f'{image_count:06d}.png'), bbox_inches='tight', pad_inches=0.25,
707 | dpi=param.video_dpi, facecolor='white')
708 | if timing:
709 | t7 = lib.time_it(t7, f'Image for time {time_key}', log=log_file)
710 | 
711 | plt.close()
712 | 
713 | # Create the video.
714 | try:
715 | print_message('INFO', f'Creating the video:', flush=True, log=log_file)
716 | # Apply -vf pad=ceil(iw/2)*2:ceil(ih/2)*2 filter to avoid height not divisible by 2 (1644x1491)
717 | # error without rescaling.
718 | # The -r before the input means the video will play at that number of the original images per second.
719 | command = f'{param.ffmpeg} -r {param.frames_per_second} ' \
720 | f'-i {os.path.join(scratch_dir, file_tag)}_%06d.png ' \
721 | f'-c:v libx264 -pix_fmt yuv420p -crf 23 -t {bp_t_total} ' \
722 | f'-r {param.frames_per_second} ' \
723 | f'-vf pad=ceil(iw/2)*2:ceil(ih/2)*2 ' \
724 | f'-y {os.path.join(video_dir, file_tag)}.mp4'.split()
725 | print_message('INFO', f'Creating the video: {command}', flush=True, log=log_file)
726 | subprocess.call(command)
727 | shutil.move(max_time_frame_file, f'{os.path.join(video_dir, file_tag)}.png')
728 | 
729 | if param.qtfaststart is not None:
730 | command = f'{param.qtfaststart} {os.path.join(video_dir, file_tag)}.mp4 ' \
731 | f'{os.path.join(video_dir, file_tag)}_q.mp4'.split()
732 | print_message('INFO', f'QTfaststart the video: {command}', flush=True, log=log_file)
733 | subprocess.call(command)
734 | except Exception as _er:
735 | print_message('ERR', f'Command {command} failed\n {_er}', flush=True, log=log_file)
736 | 
737 | # Remove the pickle file.
738 | try:
739 | print_message('INFO', f'Removing the pickle file {pickle_file}', log=log_file)
740 | os.remove(pickle_file)
741 | except Exception as _er:
742 | print_message('ERR', f'Failed to remove\n {_er}', flush=True, log=log_file)
743 | 
744 | # Remove image files if they exist from previous runs.
745 | try: 746 | files_to_remove = glob.glob(f'{os.path.join(scratch_dir, file_tag)}*.png') 747 | for this_file in files_to_remove: 748 | os.remove(this_file) 749 | except Exception as _er: 750 | print_message('ERR', f'Failed to remove\n {_er}', flush=True, log=log_file) 751 | 752 | return 753 | 754 | 755 | """Main code""" 756 | """ 757 | eq_date_time = '2020-07-22T06:12:44' 758 | eq_datetime = UTCDateTime(eq_date_time) 759 | eq_magnitude = 7.8 760 | eq_lat = 55.030 761 | eq_lon = -158.522 762 | eq_depth = 10.0 763 | vn_name = 'AU' 764 | """ 765 | event_id, vn_name, event_mag, event_date_time, event_lon, event_lat, event_depth = \ 766 | (None, None, None, None, None, None, None) 767 | 768 | # Get the input parameters. 769 | try: 770 | options, remainder = getopt.getopt(sys.argv[1:], 'hlva:d:e:f:n:m:s:t:x:y:z:', 771 | ['anim', 'help', 'summary', 'verbose', 'eid=', 'factor=', 'vnet=', 772 | 'emag=', 'decimate=', 'etime=', 'elon=', 'elat=', 'edepth=', 'logscreen=']) 773 | for opt, arg in options: 774 | if opt in ('-h', '--help'): 775 | usage() 776 | sys.exit(2) 777 | elif opt in ('-a', '--anim'): 778 | if arg.strip().lower() == 'true': 779 | create_synthetics = True 780 | create_animation = True 781 | else: 782 | create_synthetics = False 783 | create_animation = False 784 | elif opt in ('-l', '--logscreen'): 785 | log_to_screen = True 786 | elif opt in ('-s', '--summary'): 787 | if arg.strip().lower() == 'true': 788 | create_summary = True 789 | else: 790 | create_summary = False 791 | elif opt in ('-v', '--verbose'): 792 | verbose = True 793 | elif opt in ('-e', '--eid'): 794 | event_id = arg.strip() 795 | elif opt in ('-f', '--factor'): 796 | try: 797 | grid_factor = float(arg.strip()) 798 | except Exception as ex: 799 | usage() 800 | print_message('ERR', f'Invalid factor {arg.strip()}\n' 801 | f'{ex}') 802 | sys.exit(2) 803 | elif opt in ('-n', '--vnet'): 804 | vn_name = arg.strip() 805 | elif opt in ('-m', '--emag'): 806 | try: 807 | event_mag = float(arg.strip()) 808 | except Exception as ex: 809 | usage() 810 | print_message('ERR', f'Invalid magnitude {arg.strip()}\n' 811 | f'{ex}') 812 | sys.exit(2) 813 | # The output sample rate will only exactly match the selected decimation rate if the ratio of original 814 | # to final rate is factorable by 2,3,5 and 7. Otherwise, the closest factorable rate will be chosen. 
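# Note (added comment, illustrative): a 4:1 reduction (2 x 2) can be honored exactly, while a
# ratio with a prime factor larger than 7 (e.g., 11) cannot, so the nearest factorable rate is used instead.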
815 | elif opt in ('-d', '--decimate'): 816 | try: 817 | decimate = float(arg.strip()) 818 | except Exception as ex: 819 | usage() 820 | print_message('ERR', f'Invalid sampling {arg.strip()}\n' 821 | f'{ex}') 822 | sys.exit(2) 823 | elif opt in ('-t', '--etime'): 824 | event_date_time = arg.strip() 825 | event_date_time = event_date_time.replace(' ', 'T') 826 | elif opt in ('-x', '--elon'): 827 | try: 828 | event_lon = float(arg.strip()) 829 | except Exception as ex: 830 | usage() 831 | print_message('ERR', f'Invalid longitude {arg.strip()}\n' 832 | f'{ex}') 833 | sys.exit(2) 834 | elif opt in ('-y', '--elat'): 835 | try: 836 | event_lat = float(arg.strip()) 837 | except Exception as ex: 838 | usage() 839 | print_message('ERR', f'Invalid latitude {arg.strip()}\n' 840 | f'{ex}') 841 | sys.exit(2) 842 | elif opt in ('-z', '--edepth'): 843 | try: 844 | event_depth = arg.strip() 845 | except Exception as ex: 846 | usage() 847 | print_message('ERR', f'Invalid depth {arg.strip()}\n' 848 | f'{ex}') 849 | sys.exit(2) 850 | else: 851 | print_message('WARN', f'option {opt} not recognized and will be ignored!') 852 | except getopt.GetoptError as er: 853 | usage() 854 | print_message('ERR', f'\n\n{60 * "="}\n{er}\n{60 * "="}\n\n', flush=True) 855 | sys.exit(2) 856 | 857 | # Validate parameters. 858 | if event_id is not None and not any([event_mag, event_date_time, event_lon, event_lat, event_depth]): 859 | usage() 860 | print_message('ERR', f'Cannot provide both event ID and event parameter(s)') 861 | sys.exit(2) 862 | elif event_id is None and not all([event_mag, event_date_time, event_lon, event_lat, event_depth]): 863 | usage() 864 | print_message('ERR', f'Must provide either an event ID or all event parameters') 865 | sys.exit(2) 866 | 867 | if event_id is None: 868 | try: 869 | eq_magnitude = float(event_mag) 870 | except Exception as ex: 871 | usage() 872 | print_message('ERR', f'Invalid magnitude {event_mag}\n{ex}') 873 | sys.exit(2) 874 | 875 | try: 876 | eq_lon = float(event_lon) 877 | if eq_lon > 180: 878 | eq_lon -= 360 879 | if 180 < eq_lon or eq_lon < -180: 880 | raise Exception('Longitude must be between -180/180') 881 | except Exception as ex: 882 | usage() 883 | print_message('ERR', f'Invalid longitude {event_lon}\n{ex}') 884 | sys.exit(2) 885 | 886 | try: 887 | eq_lat = float(event_lat) 888 | if 90 < eq_lat or eq_lat < -90: 889 | raise Exception('Latitude must be between -90/90') 890 | except Exception as ex: 891 | usage() 892 | print_message('ERR', f'Invalid latitude {event_lat}\n{ex}') 893 | sys.exit(2) 894 | 895 | try: 896 | eq_depth = float(event_depth) 897 | except Exception as ex: 898 | usage() 899 | print_message('ERR', f'Invalid depth {event_depth}\n{ex}') 900 | sys.exit(2) 901 | 902 | try: 903 | eq_date_time = event_date_time 904 | eq_datetime = UTCDateTime(eq_date_time) 905 | except Exception as ex: 906 | usage() 907 | print_message('ERR', f'Invalid date-time {eq_date_time}\n{ex}') 908 | sys.exit(2) 909 | 910 | # Start logging. 
911 | if log_to_screen: 912 | log_file = sys.stdout 913 | else: 914 | log_dir = lib.mkdir(param.log_dir) 915 | log_file_name = os.path.join(log_dir, script.replace('.py', '')) 916 | log_file_name = f"{log_file_name}_{vn_name}_{datetime.now().strftime('%Y-%m-%d')}" 917 | log_file_name = '.'.join([log_file_name, 'log']) 918 | log_file = open(log_file_name, 'a') 919 | error_file_name = '.'.join([log_file_name, 'err']) 920 | sys.stderr = open(error_file_name, 'a') 921 | 922 | print_message('INFO', f'Event: M{eq_magnitude} at ({eq_lat}, {eq_lon}, {eq_depth}) {eq_date_time}UTC', log=log_file) 923 | if vn_name not in vn.keys(): 924 | usage() 925 | print_message('ERR', f'Invalid virtual network {vn_name}\n' 926 | f'Must be one of {list(vn.keys())}') 927 | sys.exit(2) 928 | 929 | bp_filter = param.bp_filter[vn_name] 930 | max_sta_count = param.virtual_networks[vn_name]['max_sta_count'] 931 | sta_too_close_deg = param.virtual_networks[vn_name]['sta_too_close_deg'] 932 | 933 | if not any([create_animation, create_summary, create_synthetics]): 934 | usage() 935 | print_message('ERR', f'All outputs are off, please turn at least one on ' 936 | f'(create_animation, create_summary, create_synthetics)') 937 | sys.exit(2) 938 | 939 | if timing: 940 | t0 = time() 941 | t1 = lib.time_it(t0, 'START', log=log_file) 942 | 943 | # Get the waveform request interval. 944 | request_start_date_time, request_start_datetime, request_end_date_time, request_end_datetime = \ 945 | lib.get_bp_time_window(eq_date_time, eq_magnitude) 946 | 947 | # Get event's list of bulk requests using fedcatalog. 948 | eq_content = lib.get_fedcatalog_stations(request_start_date_time, request_end_date_time, 949 | eq_lat, eq_lon, eq_min_radius, eq_max_radius, 950 | net=vn[vn_name]['network'], req='text', service='station', log=log_file) 951 | 952 | if eq_content is None: 953 | print_message('ERR', f'get_fedcatalog_stations did not return any station list. Returned {eq_content} ') 954 | sys.exit(1) 955 | 956 | eq_stations = lib.split_fedcatalog_stations(eq_content, log=log_file) 957 | 958 | if eq_content is None: 959 | print_message('ERR', f'Station request for event failed, no stations to request!', log=log_file) 960 | make_insufficient_data_animation(eq_lat, eq_lon, eq_magnitude, None) 961 | make_insufficient_data_animation(eq_lat, eq_lon, eq_magnitude, None, anim_tag='syn') 962 | sys.exit(4) 963 | 964 | print_message('INFO', f'Requesting data from {len(eq_stations)} stations.', log=log_file) 965 | 966 | # Find the azimuth and distance from the earthquake to the center of the continent the BP is made for. 967 | center_dist, center_azim, center_back_azim = gps2dist_azimuth(eq_lat, eq_lon, vn[vn_name]['lat'], vn[vn_name]['lon']) 968 | print_message('INFO', f'Center of the {vn_name} network ({vn[vn_name]["lat"]}, {vn[vn_name]["lon"]}) azimuth is ' 969 | f'{center_azim} degrees.', log=log_file) 970 | 971 | # Distance is the great circle distance in m, convert to km. 972 | center_dist /= 1000.0 973 | 974 | # Before we start, check to see if this network is at a reasonable distance from the event. 975 | center_dist_degrees = kilometer2degrees(center_dist) 976 | if param.vn_check_dist and center_dist_degrees > param.qc_vn_max_distance and \ 977 | vn_name not in param.vn_azimuth_exception_list: 978 | print_message('ERR', f'{vn_name} virtual network is {center_dist_degrees:0.2f} ' 979 | f'degrees from the earthquake location.' 
980 | f' Too far (> {param.qc_vn_max_distance}) for back Projection!', log=log_file) 981 | make_insufficient_data_animation(eq_lat, eq_lon, eq_magnitude, None) 982 | make_insufficient_data_animation(eq_lat, eq_lon, eq_magnitude, None, anim_tag='syn') 983 | sys.exit(4) 984 | 985 | bulk_request_info = dict() 986 | station_coordinates = dict() 987 | chunk = 0 988 | 989 | # Go through each contributing FDSN data center (DC) and clean up the station list and create actual data requests. 990 | # eq_stations is a DC-based dictionary. 991 | request_list = dict() 992 | 993 | for dc in eq_stations: 994 | _service = None 995 | _list = list() 996 | 997 | for _line in eq_stations[dc].bulk: 998 | items = _line.split('|') 999 | _net, _sta, _loc, _chan, _lat, _lon = items[0:6] 1000 | net_sta_key = f'{_net}.{_sta}' 1001 | 1002 | # Do not include, if already in the list: 1003 | if net_sta_key in request_list: 1004 | continue 1005 | 1006 | if _loc.strip() == '': 1007 | _loc = '--' 1008 | _chan = items[3] 1009 | _lat = float(_lat) 1010 | _lon = float(_lon) 1011 | _dist, _azim, _back_azim = gps2dist_azimuth(eq_lat, eq_lon, _lat, _lon) 1012 | _dist /= 1000.0 1013 | _dist = kilometer2degrees(_dist) 1014 | 1015 | # Only accept stations that are within vn_azimuth degrees from the center line, except for selected networks. 1016 | if vn_name not in param.vn_azimuth_exception_list: 1017 | _angle = abs(center_azim - _azim) 1018 | 1019 | # See if the station azimuth is within the range. 1020 | if _angle <= vn_azimuth or _angle >= 360.0 - vn_azimuth: 1021 | if verbose: 1022 | print_message('INFO', f'Station {_net}.{_sta} azimuth {_azim:0.2f} ' 1023 | f'from center {center_azim:0.2f} is ' 1024 | f'{_angle:0.2f} and is within the ' 1025 | f'{vn_azimuth} bounds.', log=log_file) 1026 | else: 1027 | if verbose: 1028 | print_message('WARN', f'Station {_net}.{_sta} azimuth {_azim:0.2f} ' 1029 | f'from center {center_azim:0.2f} is ' 1030 | f'{_angle:0.2f} and is outside the ' 1031 | f'{vn_azimuth} bounds.', log=log_file) 1032 | continue 1033 | 1034 | request_list[net_sta_key] = dict() 1035 | request_list[net_sta_key]['lat'] = _lat 1036 | request_list[net_sta_key]['lon'] = _lon 1037 | 1038 | dense_patch_list = '' 1039 | sparsify = 0 1040 | sta_too_close_deg_inc = param.sta_too_close_deg_inc 1041 | search_radius_edge = lib.case(len(request_list), param.sta_too_close_deg_init) 1042 | print_message('INFO', f'There are {len(request_list)} stations in the virtual network, setting search radius to ' 1043 | f'{search_radius_edge} degrees, increasing by {sta_too_close_deg_inc} degrees', log=log_file) 1044 | 1045 | awhile = ' (this may take a while)' 1046 | 1047 | # Do not set the initial search radius beyond the allowed sta_too_close_deg value for the virtual network. 1048 | if search_radius_edge > sta_too_close_deg: 1049 | search_radius_edge = sta_too_close_deg 1050 | 1051 | if len(request_list) <= max_sta_count: 1052 | print_message('INFO', f'{len(request_list)} stations in the virtual network <= {max_sta_count}', log=log_file) 1053 | else: 1054 | while search_radius_edge <= sta_too_close_deg: 1055 | # Make the network sparse for this search radius. 
1056 | while dense_patch_list is not None:
1057 | sparsify += 1
1058 | print_message('INFO', f'Pre-request sparse iteration #{sparsify}, looking for dense patches '
1059 | f'between {len(request_list)} stations within {search_radius_edge:0.2} degrees'
1060 | f'{awhile}',
1061 | log=log_file)
1062 | awhile = ''
1063 | intra_station_dist, dense_patch_list = lib.find_dense_sta_patch(intra_station_dist, vn_name, request_list,
1064 | search_radius_edge, log=log_file)
1065 | if dense_patch_list is not None:
1066 | for key in dense_patch_list:
1067 | if dense_patch_list[key]:
1068 | print_message('WARN', f'{key} removed to make network sparse.', log=log_file)
1069 | request_list.pop(key)
1070 | if len(request_list) <= max_sta_count:
1071 | print_message('INFO', f'Pre-request sparse done!', log=log_file)
1072 | break
1073 | else:
1074 | print_message('INFO', f'Network {vn_name} with {len(request_list)} stations is sparse within '
1075 | f'{search_radius_edge:0.2} degrees', log=log_file)
1076 | 
1077 | # Is the number of remaining stations still too high?
1078 | if len(request_list) > max_sta_count and (search_radius_edge < sta_too_close_deg or sta_too_close_deg <= 0):
1079 | search_radius_edge += sta_too_close_deg_inc
1080 | sparsify = 0
1081 | 
1082 | # Do not set the search radius beyond the allowed sta_too_close_deg value for the virtual network.
1083 | if search_radius_edge > sta_too_close_deg > 0:
1084 | if search_radius_edge > sta_too_close_deg:
1085 | search_radius_edge = sta_too_close_deg
1086 | print_message('INFO', f'We still have {len(request_list)} > {max_sta_count} '
1087 | f'stations in the virtual network '
1088 | f'{vn_name}, increasing the search radius to {search_radius_edge:0.2} degrees.',
1089 | log=log_file)
1090 | dense_patch_list = ''
1091 | else:
1092 | print_message('INFO', f'Pre-request sparse done!', log=log_file)
1093 | break
1094 | 
1095 | print_message('INFO', f'The search area is set at {search_radius_edge} degrees!', log=log_file)
1096 | if timing:
1097 | t1 = lib.time_it(t1, f'Pre-request sparse', log=log_file)
1098 | 
1099 | dc_service_url = dict()
1100 | for dc in eq_stations:
1101 | _service = None
1102 | _list = list()
1103 | 
1104 | for _line in eq_stations[dc].bulk:
1105 | items = _line.split('|')
1106 | _net, _sta, _loc, _chan, _lat, _lon = items[0:6]
1107 | if _loc.strip() == '':
1108 | _loc = '--'
1109 | _chan = items[3]
1110 | _lat = float(_lat)
1111 | _lon = float(_lon)
1112 | net_sta_key = f'{_net}.{_sta}'
1113 | if net_sta_key not in request_list:
1114 | continue
1115 | 
1116 | # Add the station to the request list.
1117 | _list.append(' '.join([_net, _sta, _loc, _chan, request_start_date_time, request_end_date_time]))
1118 | 
1119 | # Do we know the dataselect service address for this DC? If not, get it using this station.
1120 | if _service is None:
1121 | _service = lib.get_dc_dataselect_url(request_start_date_time, request_end_date_time,
1122 | _net, _sta, _loc, _chan, dc, log=log_file)
1123 | dc_service_url[dc] = _service
1124 | 
1125 | # We got the list for this DC. Clean it up and break it into chunks for easier requests.
1126 | _req = lib.get_request_items(_list) 1127 | print_message('INFO', f'DC {dc}_{chunk}, {len(_req)} channels out of possible {len(_list)}', log=log_file) 1128 | if timing: 1129 | t1 = lib.time_it(t1, f'Station list for {dc}', log=log_file) 1130 | 1131 | chunk_list = list() 1132 | for _item in _req: 1133 | chunk_list.append(_item) 1134 | if len(chunk_list) >= chunk_count: 1135 | 1136 | bulk_request_info[f'{dc}_{chunk}'] = ObjDict({'url': eq_stations[dc].url, 1137 | 'dataselect': _service, 1138 | 'bulk': chunk_list.copy()}) 1139 | chunk += 1 1140 | chunk_list = list() 1141 | 1142 | # Reached the end for this DC. Create request for the remaining, if any. 1143 | if chunk_list: 1144 | bulk_request_info[f'{dc}_{chunk}'] = ObjDict({'url': eq_stations[dc].url, 1145 | 'dataselect': _service, 1146 | 'bulk': chunk_list.copy()}) 1147 | chunk += 1 1148 | chunk_list = list() 1149 | 1150 | # Make data request to each DC. 1151 | trace_list = dict() 1152 | for dc in bulk_request_info: 1153 | data_center = dc.split('_')[0] 1154 | st = None 1155 | if data_center in dc_to_exclude: 1156 | print_message('WARN', f'skipped data from {data_center} because it is in the exclude list', log=log_file) 1157 | continue 1158 | else: 1159 | if verbose: 1160 | print_message('INFO', 'Sending requests for:', log=log_file) 1161 | for line in bulk_request_info[dc].bulk: 1162 | print(f'\t{line}\n', file=log_file) 1163 | print('\n', file=log_file, flush=True) 1164 | 1165 | print('\n', file=log_file, flush=True) 1166 | if len(bulk_request_info[dc].bulk) <= 0: 1167 | print_message('WARN', f'Skipping DC {dc}, no stations to request!\n', log=log_file) 1168 | continue 1169 | 1170 | print_message('INFO', f'Requesting data from {data_center} via ' 1171 | f'{bulk_request_info[dc].dataselect}\n', log=log_file) 1172 | 1173 | # Set the client up for this data center. 1174 | try: 1175 | print_message('INFO', f'Sending {len(bulk_request_info[dc].bulk)} requests', log=log_file) 1176 | client = Client(lib.get_service_url(bulk_request_info, dc)) 1177 | st = client.get_waveforms_bulk(bulk_request_info[dc].bulk, attach_response=True) 1178 | print_message('INFO', f'Received {len(st)} traces', log=log_file) 1179 | except Exception as er: 1180 | if verbose: 1181 | print_message('ERR', f'Request:\n\n{bulk_request_info[dc].bulk}\n\n from {data_center} ' 1182 | f'({bulk_request_info[dc].dataselect}) failed: {er}', log=log_file) 1183 | else: 1184 | print_message('ERR', f'({bulk_request_info[dc].dataselect}) failed: {er}', log=log_file) 1185 | continue 1186 | 1187 | # Fill the missing values with zeros. 1188 | if param.merge_gaps: 1189 | st.merge(fill_value=0.0) 1190 | 1191 | # Work on individual traces in the stream. 1192 | for index, tr in enumerate(st): 1193 | 1194 | # Reject traces with NaN and inf values. 1195 | if True in np.isnan(tr.data) or True in np.isinf(tr.data): 1196 | if verbose: 1197 | print_message('WARN', f'Skipped, channel {tr.stats.channel} of Net.Sta ' 1198 | f'{net_sta_key} from {data_center} ' 1199 | f'because of bad values', log=log_file) 1200 | continue 1201 | 1202 | # Get the individual trace times. 1203 | trace_times = list(tr.times()) 1204 | 1205 | # Calculate trace length based on the time difference between the last and the first samples. 1206 | trace_length = trace_times[-1] - trace_times[0] 1207 | net_sta_key = f'{tr.stats.network}.{tr.stats.station}' 1208 | 1209 | # Ignore the short traces. 
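# Note (added comment): a trace must cover at least 90% of the requested time window
# (min_trace_length below) to be kept.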
1210 | min_trace_length = 0.9 * (request_end_datetime - request_start_datetime) 1211 | if trace_length < min_trace_length: 1212 | if verbose: 1213 | print_message('WARN', f'Skipped, channel {tr.stats.channel} of Net.Sta {net_sta_key} ' 1214 | f'from {data_center} ' 1215 | f'because it is shorter than {min_trace_length} s', log=log_file) 1216 | continue 1217 | 1218 | # Ignore stations with too high of STD before event time. 1219 | if param.std_check: 1220 | std_start = (param.request_time_before - param.std_offset - param.std_window) * tr.stats.sampling_rate 1221 | std_end = (std_start + param.std_window) * tr.stats.sampling_rate 1222 | _tr = tr.copy() 1223 | _tr.detrend() 1224 | _data = _tr.data 1225 | trace_max = max(abs(_data)) 1226 | std_data = _data[int(std_start):int(std_end)] / trace_max 1227 | std = np.std(np.array(std_data)) 1228 | if std > vn[vn_name]['std_max']: 1229 | print_message(f'WARN', 1230 | f'Rejected {data_center}.{net_sta_key} due to high STD of ' 1231 | f'{std:0.4f} > {vn[vn_name]["std_max"]}', log=log_file) 1232 | continue 1233 | 1234 | # Get the station coordinates. Here we assume there is a possibility that station contains gaps, so 1235 | # we may get more than one trace. We will get the station information from the first segment. 1236 | # NOTE: We ignore the second trace 1237 | if net_sta_key in station_coordinates: 1238 | if verbose: 1239 | print_message('WARN', f'Multiple trace, already have data from {net_sta_key} ' 1240 | f'for channel {station_coordinates[net_sta_key][2]}.', log=log_file) 1241 | continue 1242 | 1243 | else: 1244 | if verbose: 1245 | print_message('INFO', f'Getting information for station {tr.stats.network}.{tr.stats.station}.' 1246 | f'{tr.stats.location}.{tr.stats.channel}.', log=log_file) 1247 | try: 1248 | 1249 | inventory = client.get_stations(network=tr.stats.network, station=tr.stats.station, 1250 | location=tr.stats.location, channel=tr.stats.channel, level="station") 1251 | _lon = inventory.networks[0].stations[0].longitude 1252 | _lat = inventory.networks[0].stations[0].latitude 1253 | station_coordinates[net_sta_key] = (_lon, _lat, tr.stats.channel) 1254 | except Exception as er: 1255 | print_message('ERR', f'Request error {data_center}.{net_sta_key} failed: {er}', log=log_file) 1256 | continue 1257 | 1258 | # Make sure seismogram has the P-waves in it. 1259 | _dist, _azim, _back_azim = gps2dist_azimuth(eq_lat, eq_lon, _lat, _lon) 1260 | 1261 | # Delay from predicted travel time. 1262 | tt_cache, phase_delay = lib.get_phase_delay(tt_cache, _dist, eq_depth) 1263 | 1264 | if phase_delay is None: 1265 | if verbose: 1266 | print_message('WARN', f'Skipped, channel {tr.stats.channel} of Net.Sta {net_sta_key} ' 1267 | f'from {data_center} ' 1268 | f'because no P-wave arrival', log=log_file) 1269 | continue 1270 | 1271 | if not lib.has_phase(tr, eq_datetime, phase_delay): 1272 | if verbose: 1273 | print_message('WARN', f'Skipped, channel {tr.stats.channel} of Net.Sta {net_sta_key} ' 1274 | f'from {data_center} ' 1275 | f'because it does not cover the {param.phase} arrival time', log=log_file) 1276 | continue 1277 | 1278 | # Remove the response. 
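# Note (added comment): deconvolve the instrument response so the trace is in physical units
# (param.output_type); the pre-filter and taper help stabilize the deconvolution at the band edges.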
1279 | try: 1280 | if verbose: 1281 | print_message('INFO', f'Removing response from {data_center}.{net_sta_key}', log=log_file) 1282 | tr.remove_response(output=param.output_type, zero_mean=True, taper=True, 1283 | taper_fraction=param.taper_fraction, pre_filt=param.pre_filter) 1284 | except ValueError as er: 1285 | print_message('ERR', f'Removing response from {data_center}.{net_sta_key} failed: {er}', log=log_file) 1286 | continue 1287 | except Exception as er: 1288 | print_message('ERR', f'Removing response from {data_center}.{net_sta_key} failed: {er}', log=log_file) 1289 | continue 1290 | 1291 | # To make sure all is OK after response correction, reject traces with NaN and inf values. 1292 | if True in np.isnan(tr.data) or True in np.isinf(tr.data): 1293 | if verbose: 1294 | print_message('WARN', f'Skipped, channel {tr.stats.channel} of Net.Sta {net_sta_key} ' 1295 | f'from {data_center} ' 1296 | f'because of bad values', log=log_file) 1297 | continue 1298 | 1299 | # Demean the trace (#587). 1300 | tr.detrend(type='demean') 1301 | 1302 | # Taper the trace. 1303 | tr.taper(param.taper_fraction, type='hann') 1304 | 1305 | # Make sure pre-Phase to phase SNR is at least 2 on the unfiltered trace. (#525) 1306 | snr = lib.p_wave_snr(tr, eq_datetime, phase_delay) 1307 | if snr < param.min_snr: 1308 | print_message('WARN', f'Skipped, channel {tr.stats.channel} of Net.Sta {net_sta_key} from {data_center} ' 1309 | f'because its P-wave SNR is {snr:0.2f} < {param.min_snr} ', log=log_file) 1310 | continue 1311 | else: 1312 | print_message('INFO', f'Channel {tr.stats.channel} of Net.Sta {net_sta_key} from {data_center} ' 1313 | f'has acceptable unfiltered P-wave SNR of {snr:0.2f} > {param.min_snr} ', 1314 | log=log_file) 1315 | 1316 | # Make sure all traces have the same sampling (trace_sampling_frequency). 1317 | try: 1318 | if not math.isclose(tr.stats.delta, trace_sampling_frequency): 1319 | tr.resample(trace_sampling_frequency) 1320 | except Exception as ex: 1321 | print_message('ERR', f'Failed to re-sample {data_center}.{net_sta_key} to ' 1322 | f'{param.trace_sampling_frequency}: ' 1323 | f'{ex}', log=log_file) 1324 | continue 1325 | 1326 | # Filter/normalize the trace (#715). 1327 | tr_filter = lib.preprocess_trace(tr, bp_filter, eq_datetime, phase_delay) 1328 | 1329 | # No filter, just normalize and demean. 1330 | tr = lib.preprocess_trace(tr, None, eq_datetime, phase_delay) 1331 | 1332 | # One last time, make sure pre-Phase to phase SNR is at least 1.5 on the filtered trace.(#added) 1333 | if param.min_filtered_snr is not None: 1334 | tr_ = tr_filter 1335 | 1336 | snr = lib.p_wave_snr(tr_, eq_datetime, phase_delay) 1337 | if snr < param.min_filtered_snr: 1338 | print_message('WARN', f'Skipped, channel {tr.stats.channel} of Net.Sta {net_sta_key} ' 1339 | f'from {data_center} ' 1340 | f'because its {bp_filter} filtered P-wave SNR is ' 1341 | f'{snr:0.2f} < {param.min_filtered_snr} ', log=log_file) 1342 | continue 1343 | else: 1344 | print_message('INFO', f'Channel {tr.stats.channel} of Net.Sta {net_sta_key} from {data_center} ' 1345 | f'has {bp_filter} filtered P-wave SNR of ' 1346 | f'{snr:0.2f} > {param.min_filtered_snr} ', log=log_file) 1347 | 1348 | # Phase align traces (af2 is the tapered / filtered and normalized trace). 
1349 | trace_list[net_sta_key] = {'tr_final': tr.copy(), 'tr_filter': tr_filter.copy(), 1350 | 'weight': 0.0, 'dc': data_center, 1351 | 'lat': inventory.networks[0].stations[0].latitude, 1352 | 'lon': inventory.networks[0].stations[0].longitude, 'azim': _azim, 1353 | 'phase_delay': phase_delay} 1354 | if timing: 1355 | t1 = lib.time_it(t1, f'Pre-processed station data for {data_center}', log=log_file) 1356 | 1357 | print_message('INFO', f'Pre-processing done', log=log_file) 1358 | 1359 | # BP parameters. 1360 | stack_start, stack_end, bp_t_offset, bp_t_increment, bp_t_total, t_avg = lib.set_time_parameters(eq_magnitude) 1361 | 1362 | # See if we have enough stations to continue. 1363 | if len(trace_list) < param.min_num_sta: 1364 | if verbose: 1365 | print_message('ERR', f'Only {len(trace_list)} stations remain, must be >= {param.min_num_sta}, ' 1366 | f'will not continue', log=log_file) 1367 | make_insufficient_data_animation(eq_lat, eq_lon, eq_magnitude, search_radius_edge) 1368 | make_insufficient_data_animation(eq_lat, eq_lon, eq_magnitude, search_radius_edge, anim_tag='syn') 1369 | sys.exit(4) 1370 | 1371 | # remove dense patch of stations (#764). 1372 | sparse_title = '' 1373 | sparse_title = ' (sparse)' 1374 | 1375 | dense_patch_list = '' 1376 | sparsify = 0 1377 | 1378 | while dense_patch_list is not None: 1379 | sparsify += 1 1380 | print_message('INFO', f'Sparse iteration #{sparsify} looking for dense patches ' 1381 | f'between {len(trace_list)} stations within {search_radius_edge} degrees', log=log_file) 1382 | intra_station_dist, dense_patch_list = lib.find_dense_sta_patch(intra_station_dist, vn_name, trace_list, 1383 | search_radius_edge, log=log_file) 1384 | if dense_patch_list is not None: 1385 | for key in dense_patch_list: 1386 | if dense_patch_list[key]: 1387 | print_message('WARN', f'{key} removed to make network sparse.', log=log_file) 1388 | trace_list.pop(key) 1389 | else: 1390 | print_message('INFO', f'Network is sparse', log=log_file) 1391 | 1392 | if timing: 1393 | t1 = lib.time_it(t1, f'Sparse', log=log_file) 1394 | 1395 | # Compute the cross-correlation window length (#734). 1396 | cc_window_length = lib.xcorr_window(trace_list, eq_magnitude, eq_datetime) 1397 | if timing: 1398 | t1 = lib.time_it(t1, f'X-corr window set to {cc_window_length:0.2f}s', log=log_file) 1399 | 1400 | # Optimal cross-correlation. 1401 | print_message('INFO', f'Computing optimal MCCC window for {len(trace_list)} stations', log=log_file) 1402 | 1403 | trace_list, nsta, optimal_mccc = lib.find_optimal_mccc_window(eq_datetime, trace_list, vn[vn_name], log=log_file) 1404 | 1405 | if optimal_mccc is None: 1406 | print_message('ERR', f' Insufficient number of traces ({nsta} < {param.min_num_sta}) ' 1407 | f'to set the optimal MCCC window!', log=log_file) 1408 | make_insufficient_data_animation(eq_lat, eq_lon, eq_magnitude, search_radius_edge) 1409 | make_insufficient_data_animation(eq_lat, eq_lon, eq_magnitude, search_radius_edge, anim_tag='syn') 1410 | sys.exit(4) 1411 | 1412 | if timing: 1413 | t1 = lib.time_it(t1, f'Optimal MCC window set ', log=log_file) 1414 | 1415 | # Double cross-correlation. 1416 | #trace_list, double_mccc = lib.mccc_double(eq_datetime, trace_list, cc_window_length) 1417 | #if timing: 1418 | # t1 = lib.time_it(t1, f'Double MCCC done', log=log_file) 1419 | 1420 | # Assign station weights #816. 
1421 | print_message('INFO', f'Assign station weights', log=log_file) 1422 | 1423 | # Compute a weight to each active station based on the distance and azimuth of the neighboring stations. 1424 | intra_station_dist, weight = lib.station_weight(trace_list, vn_name, intra_station_dist) 1425 | weight_count = 0 1426 | for net_sta_key in weight.keys(): 1427 | if weight[net_sta_key] > 0.0: 1428 | weight_count += 1 1429 | 1430 | # See if we have enough stations to continue. 1431 | if weight_count < param.min_num_sta: 1432 | if verbose: 1433 | print_message('WARN', f'Only {weight_count} stations remain, must be >= {param.min_num_sta}, will not continue' 1434 | , log=log_file) 1435 | make_insufficient_data_animation(eq_lat, eq_lon, eq_magnitude, search_radius_edge) 1436 | make_insufficient_data_animation(eq_lat, eq_lon, eq_magnitude, search_radius_edge, anim_tag='syn') 1437 | sys.exit(4) 1438 | 1439 | for net_sta_key in weight.keys(): 1440 | if weight[net_sta_key] > 0.0: 1441 | trace_list[net_sta_key]['weight'] = weight[net_sta_key] 1442 | 1443 | # Apply the station weight to its traces. 1444 | trace_list[net_sta_key]['tr_final'].data = trace_list[net_sta_key]['tr_final'].data * weight[net_sta_key] 1445 | trace_list[net_sta_key]['tr_filter'].data = trace_list[net_sta_key]['tr_filter'].data * weight[net_sta_key] 1446 | else: 1447 | trace_list.pop(net_sta_key) 1448 | if timing: 1449 | t1 = lib.time_it(t1, f'Station weights assigned', log=log_file) 1450 | 1451 | # Cut or zero pad the seismograms so that all of them start at origin time and have the MCCC delay added in. 1452 | # Now, any delay due to change in the source location can be applied to these traces. 1453 | print_message('INFO', f'Trim traces to start {bp_t_offset}s before P and continue for {bp_t_total} seconds.', 1454 | log=log_file) 1455 | 1456 | # Set the trace start and length. 1457 | _t0 = max(bp_t_offset, - param.min_summary_trace_time) 1458 | _t = max(bp_t_total, _t0 + param.max_summary_trace_time) 1459 | for tr_key in trace_list.keys(): 1460 | 1461 | _tr = trace_list[tr_key]['tr_final'].copy() 1462 | _tr_filter = trace_list[tr_key]['tr_filter'].copy() 1463 | 1464 | trace_list[tr_key]['tr_final'] = lib.trace_trimmer(_tr, eq_datetime, _t0, _t, 1465 | trace_list[tr_key]['phase_delay'], 1466 | trace_list[tr_key]['mccc_delay'] 1467 | ) 1468 | trace_list[tr_key]['tr_filter'] = lib.trace_trimmer(_tr_filter, eq_datetime, _t0, _t, 1469 | trace_list[tr_key]['phase_delay'], 1470 | trace_list[tr_key]['mccc_delay'] 1471 | ) 1472 | 1473 | if create_synthetics: 1474 | synthetics = lib.gen_synthetics(trace_list, eq_datetime, vn_name, create_synthetics=create_synthetics) 1475 | # This synthetics should have the same time history as the 1476 | # regular trace, so should follow the same processing steps. 1477 | for tr_key in trace_list.keys(): 1478 | trace_list[tr_key]['tr_syn'] = synthetics[tr_key].copy() 1479 | 1480 | debug = False 1481 | if debug: 1482 | for tr_key in trace_list.keys(): 1483 | trace_1 = trace_list[tr_key]['tr_filter'] 1484 | trace_2 = trace_list[tr_key]['tr_syn'] 1485 | stream = Stream(traces=[trace_1, trace_2]) 1486 | stream.plot() 1487 | 1488 | # Stack traces. 
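# Note (added comment): lib.stacker back-projects the weighted, phase-aligned traces onto the source
# grid; the returned stacks are keyed by grid point and then by time step (stack[grid_key][t_key]),
# which is how the code below iterates over them.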
1489 | if timing: 1490 | t1 = lib.time_it(t1, f'Stack Start', log=log_file) 1491 | if True: 1492 | stack, syn_stack = lib.stacker(trace_list, vn_name, bp_t_offset, t_avg, eq_datetime, eq_lat, eq_lon, eq_magnitude, 1493 | eq_depth, bp_t_increment, bp_t_total, tt_cache, create_synthetics=create_synthetics, 1494 | grid_factor=grid_factor, log=log_file, verbose=verbose) 1495 | if timing: 1496 | t1 = lib.time_it(t1, f'Stack END', log=log_file) 1497 | 1498 | # 1760 1499 | # Raise stacks to the Nth power to do Nth-root stacking 1500 | debug = False 1501 | if debug: 1502 | print_message('DEBUG', f'Stack before Nth power', log=log_file) 1503 | plot_stack(stack) 1504 | print_message('DEBUG', f'Syn stack before Nth power', log=log_file) 1505 | plot_stack(syn_stack) 1506 | stack, syn_stack = lib.stack_root(stack, syn_stack, param.stacking_root, create_synthetics=create_synthetics) 1507 | 1508 | if debug: 1509 | print_message('DEBUG', f'Stack after Nth power', log=log_file) 1510 | plot_stack(stack) 1511 | print_message('DEBUG', f'Syn stack after Nth power', log=log_file) 1512 | plot_stack(syn_stack) 1513 | 1514 | if timing: 1515 | t1 = lib.time_it(t1, f'Stack_root done', log=log_file) 1516 | 1517 | # 1771 1518 | # Average the beams by averaging the beams in a running beam_average_seconds window 1519 | print_message('INFO', f'Averaging the beams', log=log_file) 1520 | stack, syn_stack = lib.hann(stack, syn_stack, stack_start, stack_end, bp_t_increment, 1521 | param.beam_average_seconds, 1522 | create_synthetics=create_synthetics, log=log_file) 1523 | 1524 | if debug: 1525 | print_message('DEBUG', f'Running average', log=log_file) 1526 | plot_stack(stack) 1527 | print_message('DEBUG', f'Syn Running average', log=log_file) 1528 | plot_stack(syn_stack) 1529 | 1530 | if timing: 1531 | t1 = lib.time_it(t1, f'Beam averaging done', log=log_file) 1532 | 1533 | # 1815 1534 | # Time average the stacks. 1535 | print_message('INFO', f'Time averaging the stacked traces and final resampling', log=log_file) 1536 | 1537 | # The output sample rate will only exactly match the selected decimation rate if the ratio of original to final 1538 | # rate is factorable by 2,3,5 and 7. Otherwise, the closest factorable rate will be chosen. 1539 | step = 1 1540 | if decimate is not None: 1541 | step = int(round(decimate / bp_t_increment)) 1542 | 1543 | print_message('INFO', f'Stack decimation every {step} samples', log=log_file) 1544 | # Average the beams and do final averaging using a running t_avg window 1545 | 1546 | stack_final, syn_stack_final = lib.hann(stack, syn_stack, stack_start, stack_end, bp_t_increment, t_avg, 1547 | resamp=step, create_synthetics=create_synthetics, log=log_file) 1548 | 1549 | if debug: 1550 | print_message('DEBUG', f'Time average', log=log_file) 1551 | plot_stack(stack) 1552 | print_message('DEBUG', f'Syn Time average', log=log_file) 1553 | plot_stack(syn_stack) 1554 | 1555 | t1 = lib.time_it(t1, f'Time averaging done', log=log_file) 1556 | 1557 | # 1879 1558 | # Compute power, square the stacks. 
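# Note (added comment): reusing lib.stack_root with an exponent of 2 squares each stacked amplitude,
# converting it to beam power before the peak amplitudes are extracted below.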
1559 | stack_final, syn_stack_final = lib.stack_root(stack_final, syn_stack_final, 2, 1560 | create_synthetics=create_synthetics) 1561 | 1562 | if timing: 1563 | t1 = lib.time_it(t1, f'Stack root done', log=log_file) 1564 | 1565 | stack_amp = dict() 1566 | stack_amp_loc = dict() 1567 | stack_max = dict() 1568 | stack_median = dict() 1569 | stack_max_loc = dict() 1570 | global_max = None 1571 | 1572 | syn_stack_amp = dict() 1573 | syn_stack_amp_loc = dict() 1574 | syn_stack_max = dict() 1575 | syn_stack_max_loc = dict() 1576 | syn_global_max = None 1577 | 1578 | # Record amplitude and location of the stack at each grid point for the given time key. 1579 | amp_tag = f'BP_PeakAmps_{param.trace_filter[param.bp_filter[vn_name]]["label"].replace(" ", "_")}' 1580 | amp_file_tag = '_'.join([amp_tag, vn_name, lib.file_name_tag(eq_datetime)]) 1581 | amp_file_name = f'{amp_file_tag}.txt' 1582 | 1583 | sta_tag = f'BP' 1584 | sta_file_tag = '_'.join([sta_tag, vn_name, lib.file_name_tag(eq_datetime)]) 1585 | sta_list_file_name = f'{sta_file_tag}_stations.txt' 1586 | with open(os.path.join(data_dir, sta_list_file_name), 'w') as fp: 1587 | for _key in trace_list: 1588 | _dc = trace_list[_key]['dc'] 1589 | if _dc not in inventory_list: 1590 | inventory_list[_dc] = list() 1591 | _tr = trace_list[_key]['tr_final'] 1592 | _loc = _tr.stats.location.strip() 1593 | if not _loc: 1594 | _loc = '--' 1595 | inventory_list[_dc].append(f'{_tr.stats.network} {_tr.stats.station} {_loc} ' 1596 | f'{_tr.stats.channel} {request_start_date_time} ' 1597 | f'{request_end_date_time}') 1598 | for _dc in inventory_list: 1599 | fp.write(f'DATACENTER={_dc}\nDATASELECTSERVICE={dc_service_url[_dc]}\n') 1600 | for _sta in inventory_list[_dc]: 1601 | fp.write(f'{_sta}\n') 1602 | 1603 | amp_file = os.path.join(data_dir, amp_file_name) 1604 | fp = open(amp_file, 'w') 1605 | if create_synthetics: 1606 | fp.write(f'{"Time":>10}{"Lat":>10}{"Lon":>10}{"DataPeak":>15}{"DataMedian":>15}{"ARFPeak":>15}\n') 1607 | else: 1608 | fp.write(f'{"Time":>10}{"Lat":>10}{"Lon":>10}{"DataPeak":>15}{"DataMedian":>15}\n') 1609 | 1610 | # Create time slices. 1611 | for grid_key in stack_final: 1612 | for t_key in stack_final[grid_key]: 1613 | if float(t_key) < stack_start or float(t_key) > stack_end: 1614 | continue 1615 | if t_key not in stack_amp: 1616 | stack_amp[t_key] = list() 1617 | stack_amp_loc[t_key] = list() 1618 | 1619 | # Synthetic. 1620 | if create_synthetics: 1621 | if t_key not in syn_stack_amp: 1622 | syn_stack_amp[t_key] = list() 1623 | syn_stack_amp_loc[t_key] = list() 1624 | 1625 | _val = stack_final[grid_key][t_key] 1626 | stack_amp[t_key].append(_val) 1627 | _lat, _lon = grid_key.split('_') 1628 | stack_amp_loc[t_key].append((float(_lat), float(_lon))) 1629 | 1630 | # Synthetic. 1631 | if create_synthetics: 1632 | _val = syn_stack_final[grid_key][t_key] 1633 | syn_stack_amp[t_key].append(_val) 1634 | _lat, _lon = grid_key.split('_') 1635 | syn_stack_amp_loc[t_key].append((float(_lat), float(_lon))) 1636 | 1637 | # Here we scan all the values for a given time key and find the maximum save its value and location. 1638 | for t_key in stack_amp: 1639 | stack_max[t_key] = max(stack_amp[t_key]) 1640 | _index = stack_amp[t_key].index(stack_max[t_key]) 1641 | stack_max_loc[t_key] = stack_amp_loc[t_key][_index] 1642 | stack_median[t_key] = np.median(stack_amp[t_key]) 1643 | 1644 | # Synthetic. 
1645 | if create_synthetics:
1646 | syn_stack_max[t_key] = max(syn_stack_amp[t_key])
1647 | _index = syn_stack_amp[t_key].index(syn_stack_max[t_key])
1648 | syn_stack_max_loc[t_key] = syn_stack_amp_loc[t_key][_index]
1649 | # Find the global maximum.
1650 | global_max = max(stack_max.values())
1651 | 
1652 | global_max_location = None
1653 | for _key, _value in stack_max.items():
1654 | if _value == global_max:
1655 | global_max_location = stack_max_loc[_key]
1656 | break
1657 | 
1658 | # Final QC to make sure the BP for this VN is reasonable. The criterion is a simple check of
1659 | # whether the peak maximum is too far from the event location.
1660 | if global_max_location is not None:
1661 | print_message('INFO', f'Global max location {global_max_location}', log=log_file)
1662 | _dist, _azim, _back_azim = gps2dist_azimuth(eq_lat, eq_lon,
1663 | global_max_location[0],
1664 | global_max_location[1])
1665 | if _dist / 1000.0 > param.peak_offset_max_km:
1666 | print_message('ERR', f'The global max is {_dist / 1000.0:0.2f} km from the event > '
1667 | f'{param.peak_offset_max_km}', log=log_file)
1668 | print_message('ERR', f'The BP peak is too far from the event location, will not continue!', log=log_file)
1669 | make_insufficient_data_animation(eq_lat, eq_lon, eq_magnitude, None)
1670 | make_insufficient_data_animation(eq_lat, eq_lon, eq_magnitude, None, anim_tag='syn')
1671 | sys.exit(4)
1672 | else:
1673 | print_message('INFO', f'The global max is {_dist / 1000.0:0.2f} km from the event < '
1674 | f'{param.peak_offset_max_km}', log=log_file)
1675 | 
1676 | else:
1677 | print_message('WARN', f'Could not locate the global max {global_max:0.2f}', log=log_file)
1678 | 
1679 | # Synthetic.
1680 | if create_synthetics:
1681 | syn_global_max = max(syn_stack_max.values())
1682 | 
1683 | for t_key in stack_max:
1684 | if create_synthetics:
1685 | fp.write(f'{t_key:>10}{stack_max_loc[t_key][0]:>10.3f}{stack_max_loc[t_key][1]:>10.3f}'
1686 | f'{stack_max[t_key]:>15.5e}{stack_median[t_key]:>15.5e}{syn_stack_max[t_key]:>15.5e}\n')
1687 | else:
1688 | fp.write(f'{t_key:>10}{stack_max_loc[t_key][0]:>10.3f}{stack_max_loc[t_key][1]:>10.3f}'
1689 | f'{stack_max[t_key]:>15.5e}{stack_median[t_key]:>15.5e}\n')
1690 | fp.close()
1691 | 
1692 | if timing:
1693 | t1 = lib.time_it(t1, f'Maximum amplitude found', log=log_file)
1694 | 
1695 | # Compute a cumulative stack.
1696 | cumulative_stack = dict()
1697 | cumulative_stack_max = None
1698 | for grid_key in stack_final:
1699 | if grid_key not in cumulative_stack:
1700 | cumulative_stack[grid_key] = 0.0
1701 | for time_key in stack_final[grid_key]:
1702 | cumulative_stack[grid_key] = np.add(cumulative_stack[grid_key], stack_final[grid_key][time_key])
1703 | if cumulative_stack_max is None:
1704 | cumulative_stack_max = abs(cumulative_stack[grid_key])
1705 | else:
1706 | cumulative_stack_max = max(cumulative_stack_max, abs(cumulative_stack[grid_key]))
1707 | if timing:
1708 | t1 = lib.time_it(t1, f'Cumulative stack computed', log=log_file)
1709 | 
1710 | # Create animation for the virtual network, if requested.
1711 | if create_animation: 1712 | print_message('INFO', f'Creating the animation', log=log_file) 1713 | make_animation(stack_start, stack_end, stack_amp, stack_amp_loc, stack_max, stack_max_loc, global_max, 1714 | search_radius_edge, grid_factor=grid_factor) 1715 | if create_synthetics: 1716 | make_animation(stack_start, stack_end, syn_stack_amp, syn_stack_amp_loc, syn_stack_max, syn_stack_max_loc, 1717 | syn_global_max, search_radius_edge, grid_factor=grid_factor, anim_tag='syn') 1718 | 1719 | # Create summary Plot for the virtual network. 1720 | # Initialise the summary plot. 1721 | if create_summary: 1722 | # What type of media is generated? 1723 | media = 'image' 1724 | 1725 | fig = plt.figure(figsize=param.figure_size, facecolor='white') 1726 | fig.subplots_adjust(top=0.8) 1727 | 1728 | # Figure layout. 1729 | subplot_columns = 2 1730 | subplot_tall_rows = 2 1731 | subplot_short_rows = 1 1732 | tall_to_short_height = 3 1733 | 1734 | # 1. Top Left Subplot, stack sum. 1735 | print_message('INFO', f'Stack sum', log=log_file) 1736 | ax7 = plt.subplot2grid((subplot_tall_rows * tall_to_short_height + subplot_short_rows, subplot_columns), 1737 | (0, 0), rowspan=tall_to_short_height, colspan=1) 1738 | 1739 | # Extract grid and values. 1740 | lat = list() 1741 | lon = list() 1742 | val = list() 1743 | 1744 | for grid_key in cumulative_stack: 1745 | _lat, _lon = grid_key.split('_') 1746 | lat.append(float(_lat)) 1747 | lon.append(float(_lon)) 1748 | value = cumulative_stack[grid_key] / cumulative_stack_max 1749 | val.append(value) 1750 | 1751 | # Must be np arrays for grid 1752 | lat_list = np.array(lat) 1753 | lon_list = np.array(lon) 1754 | value_list = np.array(val) 1755 | 1756 | # Find the min and max of coordinates. 1757 | lon_min = lon_list.min() 1758 | lat_min = lat_list.min() 1759 | lon_max = lon_list.max() 1760 | lat_max = lat_list.max() 1761 | 1762 | latitude, longitude = lib.set_grid(eq_lat, eq_lon, eq_magnitude, grid_factor=grid_factor) 1763 | 1764 | # Now let's grid the data. Find the number of grid points in each direction. 
1765 | lon_num = pcolormesh_grid_factor * int(((lon_max - lon_min) / longitude['inc']) + 1) 1766 | lat_num = pcolormesh_grid_factor * int(((lat_max - lat_min) / latitude['inc']) + 1) 1767 | 1768 | width = lat_max - lat_min 1769 | width = degrees2kilometers(width) * 1000.0 1770 | lon_0 = eq_lon 1771 | lat_0 = eq_lat 1772 | bm = Basemap(width=width, height=width, projection=basemap_projection, 1773 | lat_0=lat_0, lon_0=lon_0, resolution=param.basemap_resolution) 1774 | 1775 | coast_alpha = 1 1776 | if param.fill_continents: 1777 | coast_alpha = param.fill_continents_alpha 1778 | bm.fillcontinents(color=param.fill_continents_color, alpha=coast_alpha) 1779 | 1780 | # Network's name on the upper left 1781 | plt.text(0.1, 0.95, vn_name, fontsize=font_size[media]['network'], horizontalalignment='center', 1782 | verticalalignment='center', transform=ax7.transAxes, color='red', 1783 | path_effects=[pe.withStroke(linewidth=2, foreground='white')]) 1784 | 1785 | scalebar_deg = kilometer2degrees(param.scalebar_km) 1786 | _lat0 = lat_min + (lat_max - lat_min) / 4.0 1787 | _lon0 = lon_max - (lon_max - lon_min) / 10.0 1788 | _lat1, _lon1 = lib.get_location(_lat0, _lon0, 0, scalebar_deg) 1789 | _x, _y = bm((_lon0, _lon0), (_lat0, _lat1)) 1790 | print_message('INFO', f'Map scale between: ({_lat0:0.2f}, {_lon0:0.2f}) and ({_lat1:0.2f}, {_lon1:0.2f}) / ' 1791 | f'({_x[0]:0.2f}, {_y[0]:0.2f}) and ({_x[1]:0.2f}, {_y[1]:0.2f})', log=log_file) 1792 | # Use the same _X to ensure scale is vertical. 1793 | bm.plot([_x[0], _x[0]], _y, color='black', linewidth=2) 1794 | plt.text(_x[0], _y[1], f' {param.scalebar_km} km', fontsize=font_size[media]['legend'], 1795 | horizontalalignment='center', rotation=90, 1796 | verticalalignment='bottom', 1797 | path_effects=[pe.withStroke(linewidth=2, foreground='white')]) 1798 | 1799 | # Create a uniform mesh for contouring. First transfer lon, lat to map units (x, y). 1800 | x_old, y_old = bm(lon_list, lat_list) 1801 | x_new = np.linspace(min(x_old), max(x_old), lon_num) 1802 | y_new = np.linspace(min(y_old), max(y_old), lat_num) 1803 | 1804 | # Basic mesh in map's x, y. 1805 | grid_x, grid_y = np.meshgrid(x_new, y_new) 1806 | 1807 | try: 1808 | # Interpolate at the points in lon_new, lat_new. 1809 | # Method : {'linear', 'nearest', 'cubic'}, optional. 1810 | grid_v = griddata((x_old, y_old), value_list, (grid_x, grid_y), method='cubic') 1811 | except Exception as _er: 1812 | print_message('ERR', f'Griding failed: {_er}', log=log_file) 1813 | sys.exit(2) 1814 | 1815 | # Make the custom color map. 1816 | bp_cmap = lib.make_cmap(param.bp_colors, bit=False, log=log_file) 1817 | 1818 | # Mark the earthquake location. 1819 | xpt, ypt = bm(eq_lon, eq_lat) 1820 | bm.plot([xpt], [ypt], marker=earthquakes['marker'], markerfacecolor=earthquakes['color'], markeredgecolor='white', 1821 | markersize=15, label='event') 1822 | 1823 | # Create a pseudocolor plot. 1824 | bm.pcolormesh(grid_x, grid_y, grid_v, cmap=bp_cmap, alpha=0.7, shading='auto', linewidths=0, 1825 | vmin=0, vmax=1) 1826 | 1827 | # Avoid areas without coastlines. 
1828 | try: 1829 | bm.drawcoastlines(color=param.fill_continents_color) 1830 | except Exception as ex: 1831 | if not coastline_skipped: 1832 | print_message('WARN', f'Skipped drawcoastlines:\n{ex}', flush=True, log=log_file) 1833 | coastline_skipped = True 1834 | pass 1835 | 1836 | if basemap_countries: 1837 | bm.drawcountries(color=param.fill_continents_color) 1838 | if basemap_states: 1839 | bm.drawstates(color=param.fill_continents_color) 1840 | 1841 | # labels = [left,right,top,bottom]. 1842 | bm.drawparallels(np.arange(int(lat_min), int(lat_max), 1), labels=[1, 0, 0, 0], fontsize=font_size[media]['label'], 1843 | linewidth=0.0, 1844 | labelstyle='+/-', fmt='%0.0f') 1845 | bm.drawmeridians(np.arange(int(lon_min), int(lon_max), 2), labels=[0, 0, 0, 1], rotation=45, 1846 | fontsize=font_size[media]['label'], 1847 | linewidth=0.0, 1848 | labelstyle='+/-', fmt='%0.0f') 1849 | 1850 | trench_x, trench_y = lib.read_global_trenches(bmap=bm) 1851 | bm.plot(trench_x, trench_y, color='black', linestyle=param.trench_linestyle, linewidth=0.5) 1852 | 1853 | # plt.ylabel('Latitude', labelpad=param.ylabel_pad, fontsize=font_size[media]['label']) 1854 | 1855 | title = f'Back Projection cumulative stack\n{param.trace_filter[param.bp_filter[vn_name]]["label"]}' 1856 | plt.title(title, fontsize=font_size[media]['title']) 1857 | 1858 | # 2. Top Right Subplot, event and station locations. 1859 | print_message('INFO', f'Summary plot /Station', log=log_file) 1860 | ax2 = plt.subplot2grid((subplot_tall_rows * tall_to_short_height + subplot_short_rows, subplot_columns), 1861 | (0, 1), rowspan=tall_to_short_height, colspan=1) 1862 | 1863 | width = param.map_width 1864 | sta_map = Basemap(width=width, height=width, projection=basemap_projection, 1865 | lat_0=lat_0, lon_0=lon_0) 1866 | 1867 | for dist_index, dist in enumerate(distance_circles): 1868 | # Retrieve X and Y radius values using earthquake location as center point, draw dist degrees out. 1869 | lon_1, lat_1 = lib.create_circle(lat_0, lon_0, dist) 1870 | X, Y = sta_map(lon_1, lat_1) 1871 | sta_map.plot(X, Y, marker=None, color='black', linestyle='--', linewidth=1) 1872 | 1873 | label_lat, label_lon = lib.get_location(lat_0, lon_0, 180.0, dist) 1874 | x_l, y_l = sta_map(label_lon, label_lat) 1875 | ax2.annotate(distance_circle_labels[dist_index], (x_l, y_l), xytext=(0, 0), 1876 | textcoords='offset points', c='black', backgroundcolor='white') 1877 | 1878 | sta_x = list() 1879 | sta_y = list() 1880 | print_message('INFO', f'Total {len(trace_list.keys())} stations selected', log=log_file) 1881 | 1882 | for sta_key in trace_list.keys(): 1883 | x, y = sta_map(trace_list[sta_key]['lon'], trace_list[sta_key]['lat']) 1884 | sta_x.append(x) 1885 | sta_y.append(y) 1886 | 1887 | if sta_x: 1888 | # Plot stations.pt 1889 | sta_map.plot(sta_x, sta_y, marker=vn[vn_name]['marker'], 1890 | markerfacecolor=vn[vn_name]['color'], linestyle='None', 1891 | markeredgewidth=0.3, markeredgecolor='None', alpha=1.0, markersize=6, zorder=3, 1892 | label=f'{vn_name}') 1893 | 1894 | # Avoid areas without coastlines. 1895 | try: 1896 | sta_map.drawcoastlines(linewidth=0.5, color=param.sta_map_color) 1897 | except Exception as ex: 1898 | if not coastline_skipped: 1899 | print_message('WARN', f'Skipped drawcoastlines:\n{ex}', flush=True, log=log_file) 1900 | coastline_skipped = True 1901 | pass 1902 | 1903 | # Place a star at the center. 
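# The star below marks the event epicenter on the station map. The two dashed rays that follow appear to
# outline the virtual network's azimuthal coverage: they run from the epicenter out to eq_max_radius at
# bearings center_azim - vn_azimuth and center_azim + vn_azimuth (networks in
# param.vn_azimuth_exception_list keep the default eq_min_radius ray for the first line instead).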
1904 | xpt, ypt = sta_map(lon_0, lat_0) 1905 | sta_map.plot([xpt], [ypt], marker=earthquakes['marker'], markerfacecolor=earthquakes['color'], 1906 | markeredgecolor='white', markersize=15, 1907 | label='event') 1908 | x0, y0 = sta_map(lon_0, lat_0) 1909 | lat_1, lon_1 = lib.get_2nd_point(lat_0, lon_0, eq_min_radius) 1910 | if vn_name not in param.vn_azimuth_exception_list: 1911 | lat_1, lon_1 = lib.get_2nd_point(lat_0, lon_0, eq_max_radius, bearing=center_azim - vn_azimuth) 1912 | x1, y1 = sta_map(lon_1, lat_1) 1913 | sta_map.plot((x0, x1), (y0, y1), '--', color=earthquakes['color'], lw=1) 1914 | 1915 | lat_2, lon_2 = lib.get_2nd_point(lat_0, lon_0, eq_max_radius, bearing=center_azim + vn_azimuth) 1916 | x2, y2 = sta_map(lon_2, lat_2) 1917 | sta_map.plot((x0, x2), (y0, y2), '--', color=earthquakes['color'], lw=1) 1918 | 1919 | # Network's name on the upper left 1920 | plt.text(0.1, 0.95, vn_name, fontsize=font_size[media]['network'], horizontalalignment='center', 1921 | verticalalignment='center', transform=ax2.transAxes, color='red', 1922 | path_effects=[pe.withStroke(linewidth=2, foreground='white')]) 1923 | 1924 | title = f'{eq_date_time.replace("T", " ")}\nM{eq_magnitude} Z={eq_depth}km' 1925 | plt.title(title, fontsize=font_size[media]['title']) 1926 | 1927 | # Save just the portion _inside_ the first axis's boundaries for the animation screen. 1928 | #extent = full_extent(ax7, pad=0.25).transformed(fig.dpi_scale_trans.inverted()) 1929 | #anim_tag = 'BP' 1930 | #file_tag = '_'.join([anim_tag, vn_name, lib.file_name_tag(eq_datetime)]) 1931 | #plot_file_name = f'{file_tag}_screen.png' 1932 | #plot_file_name = os.path.join(param.video_dir, plot_file_name) 1933 | #fig.savefig(plot_file_name, bbox_inches=extent) 1934 | 1935 | # 3. Middle left Subplot, local maxima. 1936 | print_message('INFO', f'Summary plot / Local Maxima', log=log_file) 1937 | 1938 | ax3 = plt.subplot2grid((subplot_tall_rows * tall_to_short_height + subplot_short_rows, subplot_columns), 1939 | (3, 0), rowspan=tall_to_short_height, colspan=1) 1940 | 1941 | width = lat_max - lat_min 1942 | width = degrees2kilometers(width) * 1000.0 1943 | lon_0 = eq_lon 1944 | lat_0 = eq_lat 1945 | bm = Basemap(width=width, height=width, projection=basemap_projection, 1946 | lat_0=lat_0, lon_0=lon_0, resolution=param.basemap_resolution) 1947 | 1948 | coast_alpha = 1 1949 | if param.fill_continents: 1950 | coast_alpha = param.fill_continents_alpha 1951 | bm.fillcontinents(color=param.fill_continents_color, alpha=coast_alpha) 1952 | 1953 | scalebar_deg = kilometer2degrees(param.scalebar_km) 1954 | _lat0 = lat_min + (lat_max - lat_min) / 4.0 1955 | _lon0 = lon_max - (lon_max - lon_min) / 10.0 1956 | _lat1, _lon1 = lib.get_location(_lat0, _lon0, 0, scalebar_deg) 1957 | _x, _y = bm((_lon0, _lon0), (_lat0, _lat1)) 1958 | print_message('INFO', f'Map scale between: ({_lat0:0.2f}, {_lon0:0.2f}) and ({_lat1:0.2f}, {_lon1:0.2f}) / ' 1959 | f'({_x[0]:0.2f}, {_y[0]:0.2f}) and ({_x[1]:0.2f}, {_y[1]:0.2f})', log=log_file) 1960 | # Use the same _X to ensure scale is vertical. 1961 | bm.plot([_x[0], _x[0]], _y, color='black', linewidth=2) 1962 | plt.text(_x[0], _y[1], f' {param.scalebar_km} km', fontsize=font_size[media]['legend'], 1963 | horizontalalignment='center', rotation=90, 1964 | verticalalignment='bottom', 1965 | path_effects=[pe.withStroke(linewidth=2, foreground='white')]) 1966 | 1967 | # Make the custom color map.
1968 | bp_cmap = lib.make_cmap(param.bp_colors, bit=False, log=log_file) 1969 | 1970 | # Mark the earthquake location. 1971 | xpt, ypt = bm(eq_lon, eq_lat) 1972 | zorder = 10000 1973 | bm.plot([xpt], [ypt], marker=earthquakes['marker'], markerfacecolor=earthquakes['color'], markeredgecolor='white', 1974 | markersize=15, label='USGS epicenter', linestyle='None', zorder=zorder) 1975 | bm.plot(trench_x, trench_y, color='black', linestyle=param.trench_linestyle, linewidth=0.5, label='Trench') 1976 | 1977 | # Avoid areas without coastlines. 1978 | try: 1979 | bm.drawcoastlines(color=param.fill_continents_color) 1980 | except Exception as ex: 1981 | if not coastline_skipped: 1982 | print_message('WARN', f'Skipped drawcoastlines:\n{ex}', flush=True, log=log_file) 1983 | coastline_skipped = True 1984 | pass 1985 | 1986 | if basemap_countries: 1987 | bm.drawcountries(color=param.fill_continents_color) 1988 | if basemap_states: 1989 | bm.drawstates(color=param.fill_continents_color) 1990 | 1991 | # Post a note about the dot colors. 1992 | plt.annotate(f'dot: local maxima\ncolor: time {down_arrow}', (1.0, .0), xycoords='axes fraction', 1993 | horizontalalignment='right', xytext=(-10, 10), fontsize=font_size[media]['legend'], 1994 | textcoords='offset points') 1995 | 1996 | # labels = [left,right,top,bottom]. 1997 | bm.drawparallels(np.arange(int(lat_min), int(lat_max), 1), labels=[1, 0, 0, 0], fontsize=font_size[media]['label'], 1998 | linewidth=0.0, 1999 | labelstyle='+/-', fmt='%0.0f') 2000 | 2001 | time_keys = list(stack_max.keys()) 2002 | times = list(np.array(time_keys, dtype=float)) 2003 | values = list(stack_max.values()) 2004 | 2005 | # Find the local maxima and plot those that fall within 50% of the maximum. 2006 | # Note that the return value is a tuple even when data is 1-D. 2007 | maxima_list = argrelextrema(np.array(values), np.greater) 2008 | maxima_list = list(maxima_list[0]) 2009 | maxima_max = max(values) 2010 | maxima_base = maxima_max * param.peak_marker_base_factor 2011 | 2012 | # Before we finish, check to see if this network has too many significant peaks. 2013 | _count = 0 2014 | for _max in maxima_list: 2015 | if values[_max] >= maxima_max * param.qc_max_peak_factor: 2016 | _count += 1 2017 | if _count > param.qc_max_peak_count: 2018 | print_message('ERR', f'{vn_name} virtual network has {_count} peaks above ' 2019 | f'{param.qc_max_peak_factor} x the max. peak of {maxima_max}; ' 2020 | f'{_count} > {param.qc_max_peak_count}', log=log_file) 2021 | make_insufficient_data_animation(eq_lat, eq_lon, eq_magnitude, None) 2022 | make_insufficient_data_animation(eq_lat, eq_lon, eq_magnitude, None, anim_tag='syn') 2023 | sys.exit(4) 2024 | else: 2025 | print_message('INFO', f'{vn_name} virtual network has {_count} peaks above ' 2026 | f'{param.qc_max_peak_factor} x the max. peak of {maxima_max}; ' 2027 | f'{_count} <= {param.qc_max_peak_count}', log=log_file) 2028 | 2029 | print_message('INFO', f'Peak maxima index list: {maxima_list}, maxima base: {maxima_base}', log=log_file) 2030 | x = list() 2031 | y = list() 2032 | s = list() 2033 | c = list() 2034 | z = list() 2035 | count = -1 2036 | for max_index in maxima_list: 2037 | if values[max_index] < maxima_base: 2038 | continue 2039 | t_index = time_keys[max_index] 2040 | _x, _y = bm(stack_max_loc[t_index][1], stack_max_loc[t_index][0]) 2041 | x.append(_x) 2042 | y.append(_y) 2043 | val = values[max_index] / maxima_max 2044 | count += 1 2045 | c.append(count) 2046 | # Scale marker area based on the amplitude size.
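# val is the peak amplitude normalized by the largest local maximum, so the quadratic scaling below
# (marker area proportional to val squared) shrinks secondary peaks quickly relative to the dominant one;
# the z values are only used to sort the plotting order so the largest markers are drawn first and the
# smaller ones stay visible on top.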
2047 | marker_size = param.peak_marker_size_max * val * val 2048 | s.append(marker_size) 2049 | z.append(zorder - int(1000 * val)) 2050 | # Make sure the largest markers (lowest z) are plotted first. 2051 | x = np.array(x) 2052 | y = np.array(y) 2053 | c = np.array(c) 2054 | s = np.array(s) 2055 | z = np.array(z) 2056 | order = np.argsort(z) 2057 | norm = colors.Normalize(vmin=min(c), vmax=max(c)) 2058 | if param.time_colors: 2059 | c = np.array(param.time_colors[0:len(order)]) 2060 | bm.scatter(x[order], y[order], c=c[order], s=s[order], alpha=param.time_alpha, marker='o', linewidths=0, 2061 | edgecolor='black', linewidth=1, norm=norm, zorder=1000) 2062 | else: 2063 | bm.scatter(x[order], y[order], cmap=param.time_cmap, c=c[order], alpha=param.time_alpha, 2064 | s=s[order], marker='o', linewidths=0, edgecolor='black', linewidth=1, 2065 | norm=norm, zorder=1000) 2066 | 2067 | legend = plt.legend(loc='lower left', framealpha=1.0, frameon=False, facecolor='white', edgecolor=None, 2068 | fontsize=font_size[media]['legend']) 2069 | 2070 | # plt.ylabel('Latitude', labelpad=param.ylabel_pad, fontsize=font_size[media]['label']) 2071 | 2072 | # 4. Middle Right Subplot, traces. 2073 | ax4 = plt.subplot2grid((subplot_tall_rows * tall_to_short_height + subplot_short_rows, subplot_columns), 2074 | (3, 1), rowspan=tall_to_short_height, colspan=1) 2075 | 2076 | # Plot normalized and phase-aligned traces. 2077 | # Unfiltered data. 2078 | stacked = None 2079 | stacked_filtered = None 2080 | trace_count = len(trace_list.keys()) 2081 | for tr_key in trace_list.keys(): 2082 | 2083 | _tr = trace_list[tr_key]['tr_final'].copy() 2084 | _tr_filtered = trace_list[tr_key]['tr_filter'].copy() 2085 | 2086 | _tr_times = _tr.times(reftime=eq_datetime) 2087 | 2088 | # Stack with no weight applied. 2089 | if stacked is None: 2090 | stacked = _tr.data.copy() 2091 | stacked_filtered = _tr_filtered.data.copy() 2092 | else: 2093 | # This is to address the issue that in rare cases, there is one sample missing. 2094 | # This is a partial solution for now. 2095 | if len(stacked) < len(_tr.data): 2096 | _tr_copy = _tr.data.copy() 2097 | _tr_filtered_copy = _tr_filtered.data.copy() 2098 | stacked = np.add(stacked, _tr_copy[:len(stacked)]) 2099 | stacked_filtered = np.add(stacked_filtered, _tr_filtered_copy[:len(stacked_filtered)]) 2100 | else: 2101 | stacked = np.add(stacked, _tr.data.copy()) 2102 | stacked_filtered = np.add(stacked_filtered, _tr_filtered.data.copy()) 2103 | # Unstacked raw traces. 2104 | ax4.plot(_tr_times, _tr.data / trace_list[tr_key]['weight'] + 4.0, "k-", lw=0.4, clip_on=True, zorder=2000) 2105 | 2106 | # Unstacked filtered traces. 2107 | ax4.plot(_tr_times, _tr_filtered.data / trace_list[tr_key]['weight'] + 0.0, "k-", lw=0.4, 2108 | clip_on=True, zorder=2000) 2109 | 2110 | trace_end = param.max_summary_trace_time 2111 | 2112 | print_message('INFO', f'Stack time range from {stack_start}s to {trace_end}s', log=log_file) 2113 | ax4.set_xlim(param.min_summary_trace_time, param.max_summary_trace_time) 2114 | ax4.set_ylim(-1, 7) 2115 | plt.tick_params(left=False) 2116 | ax4.set_xlabel('Time (second)') 2117 | ax4.axes.yaxis.set_ticklabels([]) 2118 | 2119 | label_y_position = param.label_y_position 2120 | 2121 | title = param.prefilter_label 2122 | 2123 | # Unstacked raw traces' title. 
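# (All four group labels below are right-aligned near trace_end and stroked in white for legibility; their
# vertical offsets match the trace groups plotted above: unfiltered traces at +4, unfiltered stack at +6,
# filtered traces at 0, filtered stack at +2, with each stack normalized to unit peak amplitude first.)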
2124 | t1 = plt.text(trace_end - 5, label_y_position + 4, title, fontsize=font_size[media]['label'], 2125 | horizontalalignment='right', verticalalignment='center', color='black', 2126 | path_effects=[pe.withStroke(linewidth=2, foreground='white')], zorder=2100) 2127 | 2128 | # Stacked raw traces' title. 2129 | title = f'stacked - {param.prefilter_label}' 2130 | t2 = plt.text(trace_end - 5, label_y_position + 6, title, fontsize=font_size[media]['label'], 2131 | horizontalalignment='right', verticalalignment='center', color='red', 2132 | path_effects=[pe.withStroke(linewidth=2, foreground='white')], zorder=1100) 2133 | 2134 | stacked_max = np.max(np.abs(stacked)) 2135 | stacked /= stacked_max 2136 | 2137 | # Stacked unfiltered trace. 2138 | ax4.plot(_tr_times, stacked + 6.0, "r-", lw=0.4, clip_on=True, zorder=1000) 2139 | 2140 | stacked_max = np.max(np.abs(stacked_filtered)) 2141 | stacked_filtered /= stacked_max 2142 | 2143 | # Stacked filtered trace. 2144 | ax4.plot(_tr_times, stacked_filtered + 2.0, "r-", lw=0.4, clip_on=True, zorder=1000) 2145 | 2146 | # Unstacked filtered traces' title. 2147 | title = f'filtered {param.trace_filter[param.bp_filter[vn_name]]["label"]}' 2148 | t3 = plt.text(trace_end - 5, label_y_position, title, fontsize=font_size[media]['label'], 2149 | horizontalalignment='right', verticalalignment='center', color='black', 2150 | path_effects=[pe.withStroke(linewidth=2, foreground='white')], zorder=2100) 2151 | 2152 | # Stacked filtered trace's title. 2153 | title = f'stacked - {param.trace_filter[param.bp_filter[vn_name]]["label"]}' 2154 | t4 = plt.text(trace_end - 5, label_y_position + 2, title, fontsize=font_size[media]['label'], 2155 | horizontalalignment='right', verticalalignment='center', color='red', 2156 | path_effects=[pe.withStroke(linewidth=2, foreground='white')], zorder=1100) 2157 | 2158 | title = 'Time-shifted and aligned' 2159 | plt.title(title, fontsize=font_size[media]['title']) 2160 | 2161 | # Get info on this subplot so we can set its height the same as the map to the left. 2162 | # We always want to adopt the map's height since it is dynamic. 2163 | map_bb = ax3.get_position() 2164 | trace_bb = ax4.get_position() 2165 | ax4.set_position([trace_bb.x0, trace_bb.y0, trace_bb.width, map_bb.height]) 2166 | 2167 | # 5. Local maxima time. 2168 | ax5 = plt.subplot2grid((subplot_tall_rows * tall_to_short_height + subplot_short_rows, subplot_columns), 2169 | (6, 0), rowspan=1, colspan=1) 2170 | times = list(stack_max.keys()) 2171 | times = list(np.array(times, dtype=float)) 2172 | values = list(stack_max.values()) 2173 | ax5.fill_between(times, 0, values, facecolor=param.beam_power_fill_color) 2174 | 2175 | x = list() 2176 | y = list() 2177 | c = list() 2178 | s = list() 2179 | 2180 | count = -1 2181 | for max_index in maxima_list: 2182 | if values[max_index] < maxima_base: 2183 | continue 2184 | _t = times[max_index] 2185 | x.append(_t) 2186 | y.append(values[max_index]) 2187 | count += 1 2188 | c.append(count) 2189 | s.append(param.peak_marker_size) 2190 | 2191 | # Set the color map range.
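# The dot colors in this time plot reuse the Normalize instance (norm) built for the local-maxima map above,
# so each peak keeps the same color in both subplots; when param.time_colors is set it supplies an explicit
# color list, otherwise the colors are sampled from param.time_cmap.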
2192 | if param.time_colors: 2193 | c = np.array(param.time_colors[0:len(x)]) 2194 | plt.scatter(x, y, c=c, s=s, marker='o', linewidths=0, alpha=param.time_alpha, edgecolor='black', linewidth=1, 2195 | norm=norm, zorder=1000) 2196 | else: 2197 | plt.scatter(x, y, cmap=param.time_cmap, c=c, alpha=param.time_alpha, s=s, marker='o', linewidths=0, 2198 | edgecolor='black', linewidth=1, norm=norm, zorder=1000) 2199 | 2200 | ax5.axes.yaxis.set_ticklabels([]) 2201 | 2202 | # Title for the subplot above and this subplot. 2203 | title = f'{up_arrow} location map / {down_arrow} time plot\nPeak BP stack amplitudes' 2204 | plt.title(title, fontsize=font_size[media]['label']) 2205 | ax5.text(0.05, 0.2, f'dot: local maxima; color: time', horizontalalignment='left', 2206 | fontsize=font_size[media]['legend'], 2207 | verticalalignment='top', transform=ax5.transAxes) 2208 | 2209 | ax5.set_xlim(stack_start, stack_end) 2210 | ax5.set_ylim(bottom=0.0) 2211 | ax5.yaxis.set_ticks_position('none') 2212 | ax5.set_xlabel('time relative to origin (sec.)') 2213 | 2214 | # 6. Lower right, logo, etc. 2215 | ax6 = plt.subplot2grid((subplot_tall_rows * tall_to_short_height + subplot_short_rows, subplot_columns), 2216 | (6, 1), rowspan=1, colspan=1) 2217 | plt.axis('off') 2218 | 2219 | # Plot the logo. 2220 | delimiter = ' ' 2221 | if os.path.isfile(logo_image): 2222 | logo_image = np.array(Image.open(logo_image)) 2223 | im = OffsetImage(logo_image, zoom=logo_zoom, alpha=logo_alpha, zorder=1000) 2224 | b_box = AnnotationBbox(im, xy=logo_location, xycoords=logo_coords, box_alignment=(0.0, 0.0), 2225 | boxcoords='offset pixels', frameon=False) 2226 | 2227 | # Add the AnnotationBbox artist. 2228 | ax6.add_artist(b_box) 2229 | delimiter = '\n' 2230 | 2231 | # Production date time stamp. 2232 | production_date = lib.version_timestamp(script_version, search_radius_edge, delimiter=delimiter) 2233 | ax6.annotate(production_date, xy=logo_location, xycoords='axes pixels', 2234 | xytext=(logo_location[0] + logo_width + logo_padding, logo_location[1] + logo_height / 2.0), 2235 | textcoords='offset pixels', horizontalalignment='left', fontsize=production_label_font_size, 2236 | verticalalignment='center') 2237 | 2238 | # Place the tight_layout after most of the elements are in, so the layout can be configured properly. 2239 | # Info - pad: Padding between the figure edge and the edges of subplots, as a fraction of the font size. 2240 | # Info - h_pad, w_pad : Padding (height/width) between edges of adjacent subplots, as a fraction of the font size. 2241 | # Defaults to pad. 2242 | plt.text(x=0.5, y=0.6, s=f'{vn_name} Virtual Network ({trace_count} stations)\nBack Projection Summary Plot', 2243 | fontsize=font_size[media]['title'], ha='center') 2244 | 2245 | plt.tight_layout(pad=2, h_pad=0, w_pad=0, rect=None) 2246 | 2247 | # Save the figure to a .png file 2248 | _tag = 'BP_summary' 2249 | file_tag = '_'.join([_tag, vn_name, lib.file_name_tag(eq_datetime)]) 2250 | plot_file_name = f'{file_tag}.png' 2251 | plot_file_name = os.path.join(param.image_dir, plot_file_name) 2252 | plt.savefig(plot_file_name, bbox_inches='tight', dpi=param.dpi, facecolor='white') 2253 | print_message('INFO', f'Summary plot for the {vn_name} virtual network saved as {plot_file_name}', log=log_file) 2254 | plt.close() 2255 | 2256 | # Generate a screen image for animations. 2257 | if create_animation: 2258 | for bp_plot_type in ('BP_screen', 'BP_syn_screen'): 2259 | # ax7: screen plots for the video. 2260 | # Video frame layout.
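# Each video frame uses a single-column layout: the map occupies the top three grid rows and a short
# beam-power strip sits beneath it. The 'BP_screen' pass plots the observed cumulative stack, while
# 'BP_syn_screen' rebuilds the cumulative stack from the synthetic (array response) stack, syn_stack_final.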
2261 | subplot_columns = 1 2262 | subplot_tall_rows = 1 2263 | subplot_short_rows = 1 2264 | tall_to_short_height = 3 2265 | 2266 | fig = plt.figure(figsize=param.video_size, facecolor='white') 2267 | ax7 = plt.subplot2grid((subplot_tall_rows * tall_to_short_height + subplot_short_rows, subplot_columns), 2268 | (0, 0), rowspan=tall_to_short_height, colspan=1) 2269 | 2270 | # Extract grid and values. 2271 | lat = list() 2272 | lon = list() 2273 | val = list() 2274 | 2275 | if bp_plot_type == 'BP_syn_screen': 2276 | # Compute a cumulative stack. 2277 | cumulative_stack = dict() 2278 | cumulative_stack_max = None 2279 | for grid_key in syn_stack_final: 2280 | if grid_key not in cumulative_stack: 2281 | cumulative_stack[grid_key] = 0.0 2282 | for time_key in syn_stack_final[grid_key]: 2283 | cumulative_stack[grid_key] = np.add(cumulative_stack[grid_key], 2284 | syn_stack_final[grid_key][time_key]) 2285 | if cumulative_stack_max is None: 2286 | cumulative_stack_max = abs(cumulative_stack[grid_key]) 2287 | else: 2288 | cumulative_stack_max = max(cumulative_stack_max, abs(cumulative_stack[grid_key])) 2289 | 2290 | for grid_key in cumulative_stack: 2291 | _lat, _lon = grid_key.split('_') 2292 | lat.append(float(_lat)) 2293 | lon.append(float(_lon)) 2294 | value = cumulative_stack[grid_key] / cumulative_stack_max 2295 | val.append(value) 2296 | 2297 | # Must be np arrays for grid 2298 | lat_list = np.array(lat) 2299 | lon_list = np.array(lon) 2300 | value_list = np.array(val) 2301 | 2302 | # Find the min and max of coordinates. 2303 | lon_min = lon_list.min() 2304 | lat_min = lat_list.min() 2305 | lon_max = lon_list.max() 2306 | lat_max = lat_list.max() 2307 | 2308 | latitude, longitude = lib.set_grid(eq_lat, eq_lon, eq_magnitude, grid_factor=grid_factor) 2309 | 2310 | # Now let's grid the data. Find the number of grid points in each direction. 2311 | lon_num = pcolormesh_grid_factor * int(((lon_max - lon_min) / longitude['inc']) + 1) 2312 | lat_num = pcolormesh_grid_factor * int(((lat_max - lat_min) / latitude['inc']) + 1) 2313 | 2314 | width = lat_max - lat_min 2315 | width = degrees2kilometers(width) * 1000.0 2316 | lon_0 = eq_lon 2317 | lat_0 = eq_lat 2318 | bm = Basemap(width=width, height=width, projection=basemap_projection, 2319 | lat_0=lat_0, lon_0=lon_0, resolution=param.basemap_resolution) 2320 | 2321 | coast_alpha = 1 2322 | if param.fill_continents: 2323 | coast_alpha = param.fill_continents_alpha 2324 | bm.fillcontinents(color=param.fill_continents_color, alpha=coast_alpha) 2325 | 2326 | # Network's name on the upper left 2327 | plt.text(0.1, 0.95, vn_name, fontsize=font_size[media]['network'], horizontalalignment='center', 2328 | verticalalignment='center', transform=ax7.transAxes, color='red', 2329 | path_effects=[pe.withStroke(linewidth=2, foreground='white')]) 2330 | 2331 | scalebar_deg = kilometer2degrees(param.scalebar_km) 2332 | _lat0 = lat_min + (lat_max - lat_min) / 4.0 2333 | _lon0 = lon_max - (lon_max - lon_min) / 10.0 2334 | _lat1, _lon1 = lib.get_location(_lat0, _lon0, 0, scalebar_deg) 2335 | _x, _y = bm((_lon0, _lon0), (_lat0, _lat1)) 2336 | print_message('INFO', f'Map scale between: ({_lat0:0.2f}, {_lon0:0.2f}) and ({_lat1:0.2f}, {_lon1:0.2f}) / ' 2337 | f'({_x[0]:0.2f}, {_y[0]:0.2f}) and ({_x[1]:0.2f}, {_y[1]:0.2f})', log=log_file) 2338 | # Use the same _X to ensure scale is vertical. 
2339 | bm.plot([_x[0], _x[0]], _y, color='black', linewidth=2) 2340 | plt.text(_x[0], _y[1], f' {param.scalebar_km} km', fontsize=font_size[media]['legend'], 2341 | horizontalalignment='center', rotation=90, 2342 | verticalalignment='bottom', 2343 | path_effects=[pe.withStroke(linewidth=2, foreground='white')]) 2344 | 2345 | # Create a uniform mesh for contouring. First transfer lon, lat to map units (x, y). 2346 | x_old, y_old = bm(lon_list, lat_list) 2347 | x_new = np.linspace(min(x_old), max(x_old), lon_num) 2348 | y_new = np.linspace(min(y_old), max(y_old), lat_num) 2349 | 2350 | # Basic mesh in map's x, y. 2351 | grid_x, grid_y = np.meshgrid(x_new, y_new) 2352 | 2353 | try: 2354 | # Interpolate at the points in lon_new, lat_new. 2355 | # Method : {'linear', 'nearest', 'cubic'}, optional. 2356 | grid_v = griddata((x_old, y_old), value_list, (grid_x, grid_y), method='cubic') 2357 | except Exception as _er: 2358 | print_message('ERR', f'Griding failed: {_er}', log=log_file) 2359 | sys.exit(2) 2360 | 2361 | # Make the custom color map. 2362 | bp_cmap = lib.make_cmap(param.bp_colors, bit=False, log=log_file) 2363 | 2364 | # Mark the earthquake location. 2365 | xpt, ypt = bm(eq_lon, eq_lat) 2366 | bm.plot([xpt], [ypt], marker=earthquakes['marker'], markerfacecolor=earthquakes['color'], 2367 | markeredgecolor='white', 2368 | markersize=15, label='event') 2369 | 2370 | # Create a pseudocolor plot. 2371 | bm.pcolormesh(grid_x, grid_y, grid_v, cmap=bp_cmap, alpha=0.7, shading='auto', linewidths=0, 2372 | vmin=0, vmax=1) 2373 | 2374 | # Avoid areas without coastlines. 2375 | try: 2376 | bm.drawcoastlines(color=param.fill_continents_color) 2377 | except Exception as ex: 2378 | if not coastline_skipped: 2379 | print_message('WARN', f'Skipped drawcoastlines:\n{ex}', flush=True, log=log_file) 2380 | coastline_skipped = True 2381 | pass 2382 | 2383 | if basemap_countries: 2384 | bm.drawcountries(color=param.fill_continents_color) 2385 | if basemap_states: 2386 | bm.drawstates(color=param.fill_continents_color) 2387 | 2388 | # labels = [left,right,top,bottom]. 2389 | bm.drawparallels(np.arange(int(lat_min), int(lat_max), 1), labels=[1, 0, 0, 0], 2390 | fontsize=font_size[media]['label'], 2391 | linewidth=0.0, 2392 | labelstyle='+/-', fmt='%0.0f') 2393 | bm.drawmeridians(np.arange(int(lon_min), int(lon_max), 2), labels=[0, 0, 0, 1], rotation=0, 2394 | fontsize=font_size[media]['label'], 2395 | linewidth=0.0, 2396 | labelstyle='+/-', fmt='%0.0f') 2397 | 2398 | trench_x, trench_y = lib.read_global_trenches(bmap=bm) 2399 | bm.plot(trench_x, trench_y, color='black', linestyle=param.trench_linestyle, linewidth=0.5) 2400 | 2401 | # plt.ylabel('Latitude', labelpad=param.ylabel_pad, fontsize=font_size[media]['label']) 2402 | 2403 | if bp_plot_type == 'BP_screen': 2404 | title = (f"{eq_datetime.strftime('%Y-%m-%d %H:%M:%S')} M{eq_magnitude} Z={eq_depth}km\n" 2405 | f"{param.trace_filter[param.bp_filter[vn_name]]['label']}") 2406 | else: 2407 | title = (f"{eq_datetime.strftime('%Y-%m-%d %H:%M:%S')} M{eq_magnitude} Z={eq_depth}km\n" 2408 | f'Array Response Function (synthetics) {param.trace_filter[param.bp_filter[vn_name]]["label"]}') 2409 | plt.title(title, fontsize=font_size[media]['title']) 2410 | 2411 | # Get info on this subplot so we can align the one below it. 2412 | map_bbox = ax7.get_position() 2413 | 2414 | # Plot the beam power. 
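# The beam-power panel below plots the peak stack amplitude per time step (stack_max, or syn_stack_max for
# the synthetic frame) against time relative to the origin; its width is later matched to the map axes so
# the two panels stay aligned in the video frame.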
2415 | ax8 = plt.subplot2grid((subplot_tall_rows * tall_to_short_height + subplot_short_rows, subplot_columns), 2416 | (3, 0), rowspan=1, colspan=1) 2417 | 2418 | if bp_plot_type == 'BP_syn_screen': 2419 | times = list(syn_stack_max.keys()) 2420 | times = list(np.array(times, dtype=float)) 2421 | values = list(syn_stack_max.values()) 2422 | else: 2423 | times = list(stack_max.keys()) 2424 | times = list(np.array(times, dtype=float)) 2425 | values = list(stack_max.values()) 2426 | max_value = max(values) 2427 | values = np.array(values) / max_value 2428 | values = list(values) 2429 | ax8.fill_between(times, 0, values, facecolor=param.beam_power_fill_color) 2430 | ax8.set_xlim(stack_start, stack_end) 2431 | ax8.set_ylim(bottom=0.0) 2432 | ax8.set_xlabel('time relative to origin (sec.)') 2433 | ax8.set_ylabel('beam power') 2434 | ax8.yaxis.set_ticklabels([]) 2435 | 2436 | # Get info on this subplot so we can align the one above it. 2437 | # We always want to adopt the map's width since it is dynamic. 2438 | power_bbox = ax8.get_position() 2439 | ax8.set_position([map_bbox.x0, power_bbox.y0, map_bbox.width, power_bbox.height]) 2440 | 2441 | # Save the figure to a .png file 2442 | _tag = bp_plot_type 2443 | file_tag = '_'.join([_tag, vn_name, lib.file_name_tag(eq_datetime)]) 2444 | plot_file_name = f'{file_tag}.png' 2445 | plot_file_name = os.path.join(param.video_dir, plot_file_name) 2446 | plt.savefig(plot_file_name, bbox_inches='tight', dpi=param.dpi, facecolor='white') 2447 | print_message('INFO', f'Screen plot for the {vn_name} virtual network saved as {plot_file_name}', 2448 | log=log_file) 2449 | plt.close() 2450 | 2451 | 2452 | --------------------------------------------------------------------------------