├── CONTRIBUTING.md
├── LICENSE
├── README.md
├── toolbox
│   ├── RAPID Tools.CopyDataToServer.pyt.xml
│   ├── RAPID Tools.CreateDischargeMap.pyt.xml
│   ├── RAPID Tools.CreateDischargeTable.pyt.xml
│   ├── RAPID Tools.CreateInflowFileFromECMWFRunoff.pyt.xml
│   ├── RAPID Tools.CreateInflowFileFromWRFHydroRunoff.pyt.xml
│   ├── RAPID Tools.CreateMuskingumParameterFiles.pyt.xml
│   ├── RAPID Tools.CreateNetworkConnectivityFile.pyt.xml
│   ├── RAPID Tools.CreateSubsetFile.pyt.xml
│   ├── RAPID Tools.CreateWeightTableFromECMWFRunoff.pyt.xml
│   ├── RAPID Tools.CreateWeightTableFromWRFGeogrid.pyt.xml
│   ├── RAPID Tools.FlowlineToPoint.pyt.xml
│   ├── RAPID Tools.PublishDischargeMap.pyt.xml
│   ├── RAPID Tools.UpdateDischargeMap.pyt.xml
│   ├── RAPID Tools.UpdateWeightTable.pyt.xml
│   ├── RAPID Tools.pyt
│   ├── RAPID Tools.pyt.xml
│   └── scripts
│       ├── CopyDataToServer.py
│       ├── CreateDischargeMap.py
│       ├── CreateDischargeTable.py
│       ├── CreateInflowFileFromECMWFRunoff.py
│       ├── CreateInflowFileFromWRFHydroRunoff.py
│       ├── CreateMuskingumParameterFiles.py
│       ├── CreateNetworkConnectivityFile.py
│       ├── CreateSubsetFile.py
│       ├── CreateWeightTableFromECMWFRunoff.py
│       ├── CreateWeightTableFromWRFGeogrid.py
│       ├── FlowlineToPoint.py
│       ├── PublishDischargeMap.py
│       ├── UpdateDischargeMap.py
│       ├── UpdateWeightTable.py
│       └── templates
│           ├── FGDB_TimeEnabled.lyr
│           ├── SQL_TimeEnabled.lyr
│           └── template_mxd.mxd
└── toolbox_screenshot.png
/CONTRIBUTING.md:
--------------------------------------------------------------------------------
1 | Esri welcomes contributions from anyone and everyone. Please see our [guidelines for contributing](https://github.com/esri/contributing).
2 |
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | Apache License
2 | Version 2.0, January 2004
3 | http://www.apache.org/licenses/
4 |
5 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
6 |
7 | 1. Definitions.
8 |
9 | "License" shall mean the terms and conditions for use, reproduction,
10 | and distribution as defined by Sections 1 through 9 of this document.
11 |
12 | "Licensor" shall mean the copyright owner or entity authorized by
13 | the copyright owner that is granting the License.
14 |
15 | "Legal Entity" shall mean the union of the acting entity and all
16 | other entities that control, are controlled by, or are under common
17 | control with that entity. For the purposes of this definition,
18 | "control" means (i) the power, direct or indirect, to cause the
19 | direction or management of such entity, whether by contract or
20 | otherwise, or (ii) ownership of fifty percent (50%) or more of the
21 | outstanding shares, or (iii) beneficial ownership of such entity.
22 |
23 | "You" (or "Your") shall mean an individual or Legal Entity
24 | exercising permissions granted by this License.
25 |
26 | "Source" form shall mean the preferred form for making modifications,
27 | including but not limited to software source code, documentation
28 | source, and configuration files.
29 |
30 | "Object" form shall mean any form resulting from mechanical
31 | transformation or translation of a Source form, including but
32 | not limited to compiled object code, generated documentation,
33 | and conversions to other media types.
34 |
35 | "Work" shall mean the work of authorship, whether in Source or
36 | Object form, made available under the License, as indicated by a
37 | copyright notice that is included in or attached to the work
38 | (an example is provided in the Appendix below).
39 |
40 | "Derivative Works" shall mean any work, whether in Source or Object
41 | form, that is based on (or derived from) the Work and for which the
42 | editorial revisions, annotations, elaborations, or other modifications
43 | represent, as a whole, an original work of authorship. For the purposes
44 | of this License, Derivative Works shall not include works that remain
45 | separable from, or merely link (or bind by name) to the interfaces of,
46 | the Work and Derivative Works thereof.
47 |
48 | "Contribution" shall mean any work of authorship, including
49 | the original version of the Work and any modifications or additions
50 | to that Work or Derivative Works thereof, that is intentionally
51 | submitted to Licensor for inclusion in the Work by the copyright owner
52 | or by an individual or Legal Entity authorized to submit on behalf of
53 | the copyright owner. For the purposes of this definition, "submitted"
54 | means any form of electronic, verbal, or written communication sent
55 | to the Licensor or its representatives, including but not limited to
56 | communication on electronic mailing lists, source code control systems,
57 | and issue tracking systems that are managed by, or on behalf of, the
58 | Licensor for the purpose of discussing and improving the Work, but
59 | excluding communication that is conspicuously marked or otherwise
60 | designated in writing by the copyright owner as "Not a Contribution."
61 |
62 | "Contributor" shall mean Licensor and any individual or Legal Entity
63 | on behalf of whom a Contribution has been received by Licensor and
64 | subsequently incorporated within the Work.
65 |
66 | 2. Grant of Copyright License. Subject to the terms and conditions of
67 | this License, each Contributor hereby grants to You a perpetual,
68 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable
69 | copyright license to reproduce, prepare Derivative Works of,
70 | publicly display, publicly perform, sublicense, and distribute the
71 | Work and such Derivative Works in Source or Object form.
72 |
73 | 3. Grant of Patent License. Subject to the terms and conditions of
74 | this License, each Contributor hereby grants to You a perpetual,
75 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable
76 | (except as stated in this section) patent license to make, have made,
77 | use, offer to sell, sell, import, and otherwise transfer the Work,
78 | where such license applies only to those patent claims licensable
79 | by such Contributor that are necessarily infringed by their
80 | Contribution(s) alone or by combination of their Contribution(s)
81 | with the Work to which such Contribution(s) was submitted. If You
82 | institute patent litigation against any entity (including a
83 | cross-claim or counterclaim in a lawsuit) alleging that the Work
84 | or a Contribution incorporated within the Work constitutes direct
85 | or contributory patent infringement, then any patent licenses
86 | granted to You under this License for that Work shall terminate
87 | as of the date such litigation is filed.
88 |
89 | 4. Redistribution. You may reproduce and distribute copies of the
90 | Work or Derivative Works thereof in any medium, with or without
91 | modifications, and in Source or Object form, provided that You
92 | meet the following conditions:
93 |
94 | (a) You must give any other recipients of the Work or
95 | Derivative Works a copy of this License; and
96 |
97 | (b) You must cause any modified files to carry prominent notices
98 | stating that You changed the files; and
99 |
100 | (c) You must retain, in the Source form of any Derivative Works
101 | that You distribute, all copyright, patent, trademark, and
102 | attribution notices from the Source form of the Work,
103 | excluding those notices that do not pertain to any part of
104 | the Derivative Works; and
105 |
106 | (d) If the Work includes a "NOTICE" text file as part of its
107 | distribution, then any Derivative Works that You distribute must
108 | include a readable copy of the attribution notices contained
109 | within such NOTICE file, excluding those notices that do not
110 | pertain to any part of the Derivative Works, in at least one
111 | of the following places: within a NOTICE text file distributed
112 | as part of the Derivative Works; within the Source form or
113 | documentation, if provided along with the Derivative Works; or,
114 | within a display generated by the Derivative Works, if and
115 | wherever such third-party notices normally appear. The contents
116 | of the NOTICE file are for informational purposes only and
117 | do not modify the License. You may add Your own attribution
118 | notices within Derivative Works that You distribute, alongside
119 | or as an addendum to the NOTICE text from the Work, provided
120 | that such additional attribution notices cannot be construed
121 | as modifying the License.
122 |
123 | You may add Your own copyright statement to Your modifications and
124 | may provide additional or different license terms and conditions
125 | for use, reproduction, or distribution of Your modifications, or
126 | for any such Derivative Works as a whole, provided Your use,
127 | reproduction, and distribution of the Work otherwise complies with
128 | the conditions stated in this License.
129 |
130 | 5. Submission of Contributions. Unless You explicitly state otherwise,
131 | any Contribution intentionally submitted for inclusion in the Work
132 | by You to the Licensor shall be under the terms and conditions of
133 | this License, without any additional terms or conditions.
134 | Notwithstanding the above, nothing herein shall supersede or modify
135 | the terms of any separate license agreement you may have executed
136 | with Licensor regarding such Contributions.
137 |
138 | 6. Trademarks. This License does not grant permission to use the trade
139 | names, trademarks, service marks, or product names of the Licensor,
140 | except as required for reasonable and customary use in describing the
141 | origin of the Work and reproducing the content of the NOTICE file.
142 |
143 | 7. Disclaimer of Warranty. Unless required by applicable law or
144 | agreed to in writing, Licensor provides the Work (and each
145 | Contributor provides its Contributions) on an "AS IS" BASIS,
146 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
147 | implied, including, without limitation, any warranties or conditions
148 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
149 | PARTICULAR PURPOSE. You are solely responsible for determining the
150 | appropriateness of using or redistributing the Work and assume any
151 | risks associated with Your exercise of permissions under this License.
152 |
153 | 8. Limitation of Liability. In no event and under no legal theory,
154 | whether in tort (including negligence), contract, or otherwise,
155 | unless required by applicable law (such as deliberate and grossly
156 | negligent acts) or agreed to in writing, shall any Contributor be
157 | liable to You for damages, including any direct, indirect, special,
158 | incidental, or consequential damages of any character arising as a
159 | result of this License or out of the use or inability to use the
160 | Work (including but not limited to damages for loss of goodwill,
161 | work stoppage, computer failure or malfunction, or any and all
162 | other commercial damages or losses), even if such Contributor
163 | has been advised of the possibility of such damages.
164 |
165 | 9. Accepting Warranty or Additional Liability. While redistributing
166 | the Work or Derivative Works thereof, You may choose to offer,
167 | and charge a fee for, acceptance of support, warranty, indemnity,
168 | or other liability obligations and/or rights consistent with this
169 | License. However, in accepting such obligations, You may act only
170 | on Your own behalf and on Your sole responsibility, not on behalf
171 | of any other Contributor, and only if You agree to indemnify,
172 | defend, and hold each Contributor harmless for any liability
173 | incurred by, or claims asserted against, such Contributor by reason
174 | of your accepting any such warranty or additional liability.
175 |
176 | END OF TERMS AND CONDITIONS
177 |
178 | APPENDIX: How to apply the Apache License to your work.
179 |
180 | To apply the Apache License to your work, attach the following
181 | boilerplate notice, with the fields enclosed by brackets "{}"
182 | replaced with your own identifying information. (Don't include
183 | the brackets!) The text should be enclosed in the appropriate
184 | comment syntax for the file format. We also recommend that a
185 | file or class name and description of purpose be included on the
186 | same "printed page" as the copyright notice for easier
187 | identification within third-party archives.
188 |
189 | Copyright 2016 Esri
190 |
191 | Licensed under the Apache License, Version 2.0 (the "License");
192 | you may not use this file except in compliance with the License.
193 | You may obtain a copy of the License at
194 |
195 | http://www.apache.org/licenses/LICENSE-2.0
196 |
197 | Unless required by applicable law or agreed to in writing, software
198 | distributed under the License is distributed on an "AS IS" BASIS,
199 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
200 | See the License for the specific language governing permissions and
201 | limitations under the License.
202 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # RAPID Tools
2 |
3 | This repository houses a Python toolbox of tools for preprocessing inputs and postprocessing outputs of the RAPID (Routing Application for Parallel computation of Discharge) model. To find out more about RAPID, please visit http://rapid-hub.org/.
4 |
5 | ## Getting Started
6 |
7 | * Ensure you have [ArcGIS for Desktop](http://desktop.arcgis.com/en/arcmap/) installed.
8 | * If your version of ArcGIS for Desktop is earlier than 10.3, you must install the [netCDF4 Python package](https://pypi.python.org/pypi/netCDF4). An executable for installing netCDF4-1.0.8 with Python 2.7 is available [here](http://downloads.esri.com/archydro/archydro/Setup/10.2.x/rapid/).
9 | * Download the toolbox and place it in an appropriate folder on your machine. Navigate to the folder in Catalog. If you expand all the toolsets, you will see the following:
10 |
11 | 
12 |
13 | * In order to use the Preprocessing tools, you will need to have the following inputs available:
14 |
15 | * Drainage Line and Catchment feature classes for the watersheds of interest. To learn how to create them, please refer to the workflow of Basic Dendritic Terrain Processing with [ArcHydro](https://geonet.esri.com/thread/105831) tools.
16 | * For [WRF-Hydro](https://www.ral.ucar.edu/projects/wrf_hydro), you will need both its geogrid file and runoff file; for [ECMWF](http://www.ecmwf.int/), you will need only the runoff file.
17 |
18 | ## Issues
19 |
20 | Find a bug or want to request a new feature? Please let us know by [submitting an issue](https://github.com/ArcGIS/RAPID_Tools/issues).
21 |
22 | ## Contributing
23 |
24 | Esri welcomes contributions from anyone and everyone. Please see our [guidelines for contributing](https://github.com/esri/contributing).
25 |
26 | ## Tools
27 |
28 | The tools are organized into three toolsets: one for preprocessing and preparing the input datasets for RAPID; one for postprocessing, especially for visualizing the RAPID output of stream flow; and a third with utilities that assist with the workflow and with sharing the visualization as a web map service.
29 |
30 | ### Preprocessing tools
31 |
32 | * #### Create Connectivity File
33 |
34 | This tool creates the stream network connectivity file based on two fields in the input Drainage Line feature class: HydroID and NextDownID. It does the following (see the sketch after this list):
35 |
36 | 1. Finds the HydroID of each stream.
37 | 2. Counts the total number of its upstream reaches.
38 | 3. Writes the stream HydroID, the total number of its upstream reaches, and the HydroID(s) of all upstream reach(es) into the output file. The records are sorted in ascending order by stream HydroID.
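
A minimal sketch of that bookkeeping in plain Python (the tool itself reads the fields with arcpy cursors); the `(HydroID, NextDownID)` input pairs, the `-1` outlet marker, and the zero padding are assumptions for illustration:

```python
import csv
from collections import defaultdict

def write_connectivity(streams, out_csv):
    """streams: iterable of (HydroID, NextDownID) pairs; assume -1 marks an outlet."""
    streams = list(streams)
    upstreams = defaultdict(list)
    for hydro_id, next_down in streams:
        if next_down != -1:
            upstreams[next_down].append(hydro_id)      # step 2: gather upstream reaches
    width = max([len(v) for v in upstreams.values()] or [0])
    with open(out_csv, "wb") as f:                     # "wb" for csv on Python 2 (ArcMap)
        writer = csv.writer(f)
        for hydro_id, next_down in sorted(streams):    # step 3: ascending HydroID order
            ups = sorted(upstreams.get(hydro_id, []))
            writer.writerow([hydro_id, next_down, len(ups)] + ups + [0] * (width - len(ups)))
```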
39 |
40 | * #### Create Subset File
41 |
42 | This tool writes the HydroIDs of a subset of the stream features. The subset is created by selecting stream features in the input layer.
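
A sketch of the same idea with `arcpy.da.SearchCursor`, which honors an active selection on a feature layer; the `HydroID` field name comes from the tool's documentation, and the rest is illustrative:

```python
import csv
import arcpy

def create_subset_file(drainage_line_layer, out_csv):
    # A selection on the input layer, if any, restricts what the cursor returns.
    hydro_ids = sorted(row[0] for row in arcpy.da.SearchCursor(drainage_line_layer, ["HydroID"]))
    with open(out_csv, "wb") as f:
        writer = csv.writer(f)
        for hydro_id in hydro_ids:
            writer.writerow([hydro_id])
```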
43 |
44 | * #### Create Muskingum Parameters File
45 |
46 | This tool writes the values of the Muskingum parameter fields (Musk_kfac, Musk_k, and Musk_x) into individual parameter files. The three fields can be calculated using the Calculate Muskingum Parameters tool in the [ArcHydro toolbox](https://geonet.esri.com/thread/105831). The records in all files are sorted in ascending order by stream HydroID.
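
A minimal sketch of how the three files could be produced with `arcpy.da.SearchCursor`; sorting the rows once by HydroID keeps all three files in the same order as the connectivity file. The field names are those given above; everything else is illustrative:

```python
import csv
import arcpy

def write_muskingum_files(drainage_line, kfac_csv, k_csv, x_csv):
    fields = ["HydroID", "Musk_kfac", "Musk_k", "Musk_x"]
    rows = sorted(arcpy.da.SearchCursor(drainage_line, fields))  # ascending HydroID
    for path, column in ((kfac_csv, 1), (k_csv, 2), (x_csv, 3)):
        with open(path, "wb") as f:
            writer = csv.writer(f)
            for row in rows:
                writer.writerow([row[column]])  # one value per stream reach
```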
47 |
48 | * #### Create Weight Table From ECMWF/WRF-Hydro Runoff
49 |
50 | This tool creates a table that represents the runoff contribution of the ECMWF/WRF-Hydro computational grid to the catchment. It requires that the input Catchment feature class has a drainage line ID field that corresponds to the HydroID of the Drainage Line feature class. If your input Catchment feature class does not have that field, you can use the Add DrainLnID to Catchment tool in the [ArcHydro toolbox](https://geonet.esri.com/thread/105831). This tool does the following:
51 |
52 | For ECMWF,
53 |
54 | 1. Creates computational grid point feature class based on the latitude and longitude in the ECMWF runoff file.
55 | 2. Creates Thiessen polygons from the computational grid points. The Thiessen polygons represent computational grids.
56 |
57 | For WRF-Hydro,
58 |
59 | 1. Creates a raster based on the spatial resolution, extent, and projection information in the WRF geogrid file.
60 | 2. Creates fishnet polygons and points based on the raster. The polygons and points represent the computational grids and points.
61 |
62 | Then for both,
63 |
64 | 3. Intersects the computational polygons with the input catchments.
65 | 4. Calculates the geodesic area for each intersected polygon.
66 | 5. Calculates the area ratio of each intersected polygon to its corresponding catchment, which is defined as the weight representing the contribution of the computational grid to the catchment (drainage line segment).
67 | 6. Writes the stream ID, the coordinates of the contributing computational grid, the contributing area, the weight, and other attributes into the weight table.
68 |
69 | The records in the weight table are sorted in ascending order by stream ID.
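
Once the intersection polygons and their geodesic areas exist (steps 3-4), the weight in step 5 is an area ratio. A minimal sketch in plain Python; the record layout is an assumption for illustration:

```python
from collections import defaultdict

def compute_weights(records):
    """records: list of dicts with 'stream_id' and 'area_sqm' for each
    catchment-by-grid intersection polygon."""
    total_area = defaultdict(float)
    npoints = defaultdict(int)
    for rec in records:
        total_area[rec["stream_id"]] += rec["area_sqm"]
        npoints[rec["stream_id"]] += 1
    for rec in records:
        rec["weight"] = rec["area_sqm"] / total_area[rec["stream_id"]]  # step 5
        rec["npoints"] = npoints[rec["stream_id"]]
    return sorted(records, key=lambda rec: rec["stream_id"])  # ascending stream ID
```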
70 |
71 | * #### Update Weight Table
72 |
73 | This tool updates the weight table specifically for the scenario in which the Drainage Line and Catchment features are not in a one-to-one relationship. It does the following to the weight table (see the sketch after this list):
74 |
75 | 1. Removes the rows with stream IDs that don’t have a match in the Drainage Line feature class.
76 | 2. Adds rows with stream IDs that are in the Drainage Line but not in the Catchment feature class. In these newly added rows, the contributing area and the weight are set to 0, and the "npoints" (number of points) column, which represents the total number of computational grids contributing to the same catchment, is set to 1.
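
A sketch of both fixes, assuming the weight table has been parsed into per-row dicts (the field names follow the weight-table description above):

```python
def update_weight_table(rows, drainage_ids):
    drainage_ids = set(drainage_ids)
    kept = [row for row in rows if row["stream_id"] in drainage_ids]   # step 1
    matched = set(row["stream_id"] for row in kept)
    for stream_id in sorted(drainage_ids - matched):                   # step 2
        # Zero contribution; npoints of 1 so downstream tools read one row per stream.
        kept.append({"stream_id": stream_id, "area_sqm": 0.0, "weight": 0.0, "npoints": 1})
    return sorted(kept, key=lambda row: row["stream_id"])
```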
77 |
78 | * #### Create Inflow File From ECMWF/WRF-Hydro Runoff
79 |
80 | This tool creates the RAPID inflow file from ECMWF/WRF-Hydro runoff data. It does the following (see the sketch after this list):
81 |
82 | 1. Obtains a list of stream IDs based on the input drainage line feature class.
83 | 2. For each stream feature, obtains the information of all contributing computational grids from the weight table.
84 | 3. Calculates the runoff to each stream feature based on the computational-grid-specific runoff rates from the ECMWF/WRF-Hydro runoff data file and the contributing areas from the weight table.
85 | 4. If a stream ID does not have a corresponding record in the weight table, specifies its runoff as 0.
86 | 5. Writes the runoff data into the inflow file in netCDF format.
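
A sketch of steps 2-5 with the netCDF4 package. The `m3_riv` variable and netCDF3-classic output format come from the tool's metadata, and the `RO` variable layout matches the ECMWF description above; the weight-table parsing, the `rivid` dimension name, and the assumption that runoff is a per-step depth in meters are illustrative (the real tool also de-accumulates ECMWF's cumulative runoff, omitted here):

```python
import netCDF4
import numpy as np

def write_inflow(runoff_nc, weights, out_nc):
    """weights: {stream_id: [(lat_index, lon_index, area_sqm), ...]} from the weight table."""
    src = netCDF4.Dataset(runoff_nc)
    ro = src.variables["RO"][:]                        # runoff depth, dims (time, lat, lon)
    dst = netCDF4.Dataset(out_nc, "w", format="NETCDF3_CLASSIC")
    dst.createDimension("Time", ro.shape[0])
    dst.createDimension("rivid", len(weights))
    m3_riv = dst.createVariable("m3_riv", "f4", ("Time", "rivid"))
    for j, stream_id in enumerate(sorted(weights)):    # ascending stream ID order
        inflow = np.zeros(ro.shape[0])
        for lat_i, lon_i, area in weights[stream_id]:  # an empty list leaves zeros (step 4)
            inflow += ro[:, lat_i, lon_i] * area       # depth (m) x area (m^2) -> volume (m^3)
        m3_riv[:, j] = inflow
    dst.close()
    src.close()
```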
87 |
88 | ### Postprocessing tools
89 |
90 | * #### Create Discharge Table
91 |
92 | This tool creates a discharge table converted from the RAPID discharge file. In the discharge table, each row contains the COMID (an ID field of the [NHDPlus](http://www.horizon-systems.com/nhdplus/NHDPlusV2_home.php) dataset), the time, and the discharge of the stream at that time step. Time in date/time format is calculated from the start date and time input by the user and the time dimension of the stream flow variable in the RAPID discharge file. Attribute indexes are added for the COMID and time fields in the table. The discharge table is saved in a SQL Server geodatabase or a file geodatabase.
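
The time computation is simple arithmetic: row t gets `start + t * interval`. A minimal sketch with netCDF4; the `Qout` and `rivid` names are common in RAPID output files but are assumptions here:

```python
import datetime
import netCDF4

def discharge_rows(rapid_nc, start_datetime, interval_hours):
    """Yield one (COMID, time, discharge) row per stream per time step."""
    nc = netCDF4.Dataset(rapid_nc)
    qout = nc.variables["Qout"]              # dims (time, river)
    comids = nc.variables["rivid"][:]
    for t in range(qout.shape[0]):
        stamp = start_datetime + datetime.timedelta(hours=t * interval_hours)
        for j, comid in enumerate(comids):
            yield int(comid), stamp, float(qout[t, j])
```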
93 |
94 | * #### Create Discharge Map
95 |
96 | This tool creates a map document with time-enabled stream flow layer(s) showing stream- and time-specific discharge amounts. Stream flow can be animated in the discharge map. The tool does the following (see the sketch after this list):
97 |
98 | 1. If the layer information is not specified, copies all stream features that have records in the discharge table into the geodatabase that contains the discharge table. If the layer information is specified, selects stream features for each layer using the definition query stream order >= the specified minimum, then copies the selected stream features into that geodatabase.
99 | 2. Creates a map document and adds all the copied stream feature classes into the map.
100 | 3. For each layer, adds a join to the discharge table based on COMID as the join field.
101 | 4. For each layer, defines the minScale and maxScale based on the user-specified information.
102 | 5. Applies symbology to each layer based on the same template layer file. Note that layers whose data are in a SQL Server geodatabase and layers whose data are in a file geodatabase use different templates.
103 | 6. Updates the time properties for each layer based on the time-enabled template.
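
A condensed sketch of steps 2-6 with the `arcpy.mapping` module that ships with ArcMap; the layer names are illustrative, the `COMID` join field comes from step 3, and scale/time handling is folded into `UpdateLayer` from the time-enabled template:

```python
import arcpy
import arcpy.mapping as mapping

def build_discharge_map(template_mxd, flowline_fcs, discharge_table, template_lyr, out_mxd):
    mxd = mapping.MapDocument(template_mxd)
    df = mapping.ListDataFrames(mxd)[0]
    for i, fc in enumerate(flowline_fcs):                                   # step 2
        layer = arcpy.MakeFeatureLayer_management(fc, "flowlines%d" % i).getOutput(0)
        arcpy.AddJoin_management(layer, "COMID", discharge_table, "COMID")  # step 3
        mapping.AddLayer(df, layer)
    template = mapping.Layer(template_lyr)        # time-enabled template .lyr
    for layer in mapping.ListLayers(mxd, "", df):
        # steps 5-6: take symbology and time properties from the template
        mapping.UpdateLayer(df, layer, template, symbology_only=False)
    mxd.saveACopy(out_mxd)
```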
104 |
105 | * #### Update Discharge Map
106 |
107 | This tool updates an existing map document by applying symbology from a template layer file to the layer(s) in the map document. The tool needs to be run only after the discharge table has been updated.
108 |
109 | ### Utilities Tools
110 |
111 | * #### Flowline to Point
112 |
113 | This tool writes the centroid coordinates of flowlines into a CSV file.
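
A sketch with an `arcpy.da.SearchCursor` and the `SHAPE@` geometry token; the `HydroID` field name is illustrative, and the zero fallback for non-Z geometries follows the tool's documentation:

```python
import csv
import arcpy

def flowline_to_point(drainage_line, out_csv):
    with open(out_csv, "wb") as f:
        writer = csv.writer(f)
        writer.writerow(["HydroID", "POINT_X", "POINT_Y", "POINT_Z"])
        with arcpy.da.SearchCursor(drainage_line, ["HydroID", "SHAPE@"]) as cursor:
            for hydro_id, shape in cursor:
                centroid = shape.centroid
                z = centroid.Z if shape.hasZ else 0.0   # zeros when z-values are not enabled
                writer.writerow([hydro_id, centroid.X, centroid.Y, z])
```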
114 |
115 | * #### Copy Data To Server
116 |
117 | This tool copies the discharge table, the drainage line features, or both to the ArcGIS server machine from the author/publisher machine.
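
A sketch of the copy-plus-index automation described in the tool's metadata; the COMID and TimeValue index fields come from that metadata, while the index names and path handling are illustrative:

```python
import os
import arcpy

def copy_data_to_server(discharge_table, drainage_line_fcs, server_workspace):
    out_table = os.path.join(server_workspace, os.path.basename(discharge_table))
    arcpy.Copy_management(discharge_table, out_table)
    arcpy.AddIndex_management(out_table, "COMID", "comid_idx")   # indexes added separately
    arcpy.AddIndex_management(out_table, "TimeValue", "time_idx")
    for fc in drainage_line_fcs:
        out_fc = os.path.join(server_workspace, os.path.basename(fc))
        arcpy.CopyFeatures_management(fc, out_fc)
        arcpy.AddIndex_management(out_fc, "COMID", "comid_idx")
```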
118 |
119 | * #### Publish Discharge Map
120 |
121 | This tool publishes a discharge map service of stream flow visualization to an ArcGIS server.
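
The publishing flow follows the standard draft-analyze-stage-upload sequence described in the tool's metadata. A minimal sketch; service properties such as KML/WMS enablement are omitted, and the scratch-folder paths are illustrative:

```python
import os
import arcpy

def publish_discharge_map(mxd_path, server_connection, service_name):
    mxd = arcpy.mapping.MapDocument(mxd_path)
    sddraft = os.path.join(arcpy.env.scratchFolder, service_name + ".sddraft")
    sd = os.path.join(arcpy.env.scratchFolder, service_name + ".sd")
    arcpy.mapping.CreateMapSDDraft(mxd, sddraft, service_name,
                                   "ARCGIS_SERVER", server_connection)
    analysis = arcpy.mapping.AnalyzeForSD(sddraft)   # stop on analysis errors
    if analysis["errors"]:
        raise RuntimeError("Analysis errors: %s" % analysis["errors"])
    arcpy.StageService_server(sddraft, sd)           # .sddraft -> .sd
    arcpy.UploadServiceDefinition_server(sd, server_connection)
```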
122 |
123 | ## Licensing
124 | Copyright 2016 Esri
125 |
126 | Licensed under the Apache License, Version 2.0 (the "License");
127 | you may not use this file except in compliance with the License.
128 | You may obtain a copy of the License at
129 |
130 | http://www.apache.org/licenses/LICENSE-2.0
131 |
132 | Unless required by applicable law or agreed to in writing, software
133 | distributed under the License is distributed on an "AS IS" BASIS,
134 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
135 | See the License for the specific language governing permissions and
136 | limitations under the License.
137 |
138 | A copy of the license is available in the repository's [LICENSE](/LICENSE) file.
139 |
140 | [](Esri Tags: ArcGIS Python Toolbox of Preprocessing and Postprocessing for a River Routing Model RAPID)
141 | [](Esri Language: Python)
142 |
--------------------------------------------------------------------------------
/toolbox/RAPID Tools.CopyDataToServer.pyt.xml:
--------------------------------------------------------------------------------
1 | 20150630112430001.0TRUE20150727174100001500000005000ItemDescriptionc:\program files (x86)\arcgis\desktop10.3\Help\gp<DIV STYLE="text-align:Left;"><DIV><DIV><P><SPAN>The input discharge dataset, including the discharge table and/or drainage line features</SPAN></P></DIV></DIV></DIV><DIV STYLE="text-align:Left;"><DIV><DIV><P><SPAN>The input workspace on the server machine where the data copies are made. The workspace can be a SQL Server database or a file geodatabase.</SPAN></P></DIV></DIV></DIV><DIV STYLE="text-align:Left;"><DIV><DIV><P><SPAN>Copies the discharge table and/or the drainage line features to the ArcGIS server machine from the author/publisher machine.</SPAN></P></DIV></DIV></DIV><DIV STYLE="text-align:Left;"><DIV><DIV><UL><LI><P><SPAN>This tool automates copying the discharge table and drainage line features and adding attribute indexes in order to create and register the database on the server machine.</SPAN></P></LI><LI><P><SPAN>The indexed attributes include COMID and TimeValue in Discharge_Table, and COMID in the drainage line feature classes. All indexes are added separately.</SPAN></P></LI><LI><P><SPAN>The workspace on the server machine must be of the same type as the database of the input discharge dataset on the author/publisher machine.</SPAN></P></LI></UL></DIV></DIV></DIV>Copy Data To Server<DIV STYLE="text-align:Left;"><DIV><DIV><P><SPAN>Copies the discharge table and/or the drainage line features to the ArcGIS server machine from the author/publisher machine.</SPAN></P></DIV></DIV></DIV>ArcToolbox Tool
2 |
--------------------------------------------------------------------------------
/toolbox/RAPID Tools.CreateDischargeMap.pyt.xml:
--------------------------------------------------------------------------------
1 | 20150527183402001.0TRUE20150727171913001500000005000ItemDescriptionc:\program files (x86)\arcgis\desktop10.3\Help\gp<DIV STYLE="text-align:Left;"><DIV><DIV><P><SPAN>The input drainage line features that compose the stream network.</SPAN></P></DIV></DIV></DIV><DIV STYLE="text-align:Left;"><DIV><DIV><P><SPAN>The input discharge table created with the Create Discharge Table tool.</SPAN></P></DIV></DIV></DIV><DIV STYLE="text-align:Left;"><DIV><DIV><P><SPAN>The output discharge map in .mxd format. The discharge map document may contain one single layer or multiple layers with scale-dependent mapping of drainage line features.</SPAN></P></DIV></DIV></DIV><DIV STYLE="text-align:Left;"><DIV><DIV><P><SPAN>The input unique ID table previously created together with the input discharge table using the Create Discharge Table tool. The default value is None.</SPAN></P></DIV></DIV></DIV><DIV STYLE="text-align:Left;"><DIV><DIV><P><SPAN>The multi-layer information about the layer name, map scales and the stream orders of drainage lines that are copied into the database and rendered at the corresponding scales. The default value is None and in this case all drainage line features are copied into the database and rendered in a single layer.</SPAN></P></DIV></DIV></DIV><DIV STYLE="text-align:Left;"><DIV><DIV><P><SPAN>Creates a discharge map document for stream flow visualization based on the discharge table and the drainage line feature class.</SPAN></P></DIV></DIV></DIV><DIV STYLE="text-align:Left;"><DIV><DIV><UL><LI><P><SPAN>The input drainage line features are not necessarily in 1-to-1 relationship with those in the input discharge table. Only the matching drainage line features between the two inputs will be displayed in the output discharge map.</SPAN></P></LI><LI><P><SPAN>If the unique ID table is not specified, unique IDs will be extracted from the input discharge table, which causes extra computation time in tool execution.</SPAN></P></LI><LI><P><SPAN>You should specify the reciprocals of the actual scale values as the input scale values. For example, use 2,000,000 when your scale is 1:2,000,000.</SPAN></P></LI></UL></DIV></DIV></DIV>Create Discharge Map<DIV STYLE="text-align:Left;"><DIV><DIV><P><SPAN>Creates a discharge map document for stream flow visualization based on the discharge table and the drainage line feature class.</SPAN></P></DIV></DIV></DIV>ArcToolbox Tool
2 |
--------------------------------------------------------------------------------
/toolbox/RAPID Tools.CreateDischargeTable.pyt.xml:
--------------------------------------------------------------------------------
1 | 20150527183401001.0TRUE20150727171021001500000005000ItemDescriptionc:\program files (x86)\arcgis\desktop10.3\Help\gp<DIV STYLE="text-align:Left;"><DIV><DIV><P><SPAN>The input of RAPID discharge result file in netCDF format.</SPAN></P></DIV></DIV></DIV><DIV STYLE="text-align:Left;"><DIV><DIV><P><SPAN>The input starting date and time of the RAPID discharge result.</SPAN></P></DIV></DIV></DIV><DIV STYLE="text-align:Left;"><DIV><DIV><P><SPAN>The input time interval of flow rates in the RAPID discharge result. The unit is hours.</SPAN></P></DIV></DIV></DIV><DIV STYLE="text-align:Left;"><DIV><DIV><P><SPAN>The output discharge table with stream ID and time as dimensions.</SPAN></P></DIV></DIV></DIV><DIV STYLE="text-align:Left;"><DIV><DIV><P><SPAN>The output table with a list of stream IDs that are unique in the output discharge table. The default value is None.</SPAN></P></DIV></DIV></DIV><DIV STYLE="text-align:Left;"><DIV><DIV><P><SPAN>Creates a RAPID discharge table using stream ID and time as row dimensions from the RAPID discharge file.</SPAN></P></DIV></DIV></DIV><DIV STYLE="text-align:Left;"><DIV><DIV><UL><LI><P><SPAN>The input start date and time can be specified through the calendar window.</SPAN></P></LI><LI><P><SPAN>The output discharge table is automatically renamed Discharge_Table if a different name is specified.</SPAN></P></LI></UL></DIV></DIV></DIV>Create Discharge Table<DIV STYLE="text-align:Left;"><DIV><DIV><P><SPAN>Creates a RAPID discharge table using stream ID and time as row dimensions from the RAPID discharge file.</SPAN></P></DIV></DIV></DIV>ArcToolbox Tool
2 |
--------------------------------------------------------------------------------
/toolbox/RAPID Tools.CreateInflowFileFromECMWFRunoff.pyt.xml:
--------------------------------------------------------------------------------
1 | 20141024163024001.0TRUE20150727170434001500000005000ItemDescriptionc:\program files (x86)\arcgis\desktop10.3\Help\gp<DIV STYLE="text-align:Left;"><DIV><DIV><P><SPAN>The input of ECMWF runoff file in NetCDF format.</SPAN></P></DIV></DIV></DIV><DIV STYLE="text-align:Left;"><DIV><DIV><P><SPAN>The input of the weight table that is previously created with the tool Create Weight Table From ECMWF Runoff. The weight table should be based on the ECMWF runoff file of the same spatial resolution as used in this tool. </SPAN></P></DIV></DIV></DIV><DIV STYLE="text-align:Left;"><DIV><DIV><P><SPAN>The output of RAPID inflow file in NetCDF format.</SPAN></P></DIV></DIV></DIV><DIV STYLE="text-align:Left;"><DIV><DIV><P><SPAN>Time interval of the stream inflow in the RAPID inflow file. The unit is hours. The default value is 6 hr. </SPAN></P></DIV></DIV></DIV><DIV STYLE="text-align:Left;"><DIV><DIV><P><SPAN>Creates RAPID NetCDF input of water inflow based on the European Centre for Medium-Range Weather Forecasts (ECMWF) runoff results and previously created weight table</SPAN></P></DIV></DIV></DIV><DIV STYLE="text-align:Left;"><DIV><DIV><UL><LI><P><SPAN>The input ECMWF runoff file must be of Network Common Data Form (NetCDF) format. It can be any file of Ensembles (1-50), Control (51), and Deterministic (52) forecasts. It must contain variables 'lat', 'lon', 'RO', and 'time'. The 'RO' variable has three dimensions: 'time', 'lat', and 'lon'. The spatial resolution of the ECMWF runoff file must be consistent with the input ECMWF runoff file used for creating the input weight table.</SPAN></P></LI><LI><P><SPAN>The input weight table must be written as comma-separated values (csv) in plain-text form. It is the output of the tool Create Weight Table From ECMWF Runoff. </SPAN></P></LI><LI><P><SPAN>Once the input ECMWF runoff file is specified, there will be options of time interval for outputting the stream inflow in the RAPID inflow file. Available options depend on the input ECMWF runoff file. Low resolution data (Ensemble 1-50 and Control 51) has only a 6 hr time interval. High resolution data (52) has 1 hr, 3 hr, and 6 hr time intervals. The default time interval is 6 hr.</SPAN></P></LI><LI><P><SPAN>The output RAPID inflow file has a m3_riv variable with the dimension of HydroID of the input drainage line features. The HydroID values are in ascending order.</SPAN></P></LI></UL></DIV></DIV></DIV>Create Inflow File From ECMWF Runoff<DIV STYLE="text-align:Left;"><DIV><DIV><P><SPAN>Creates RAPID NetCDF input of water inflow based on the European Centre for Medium-Range Weather Forecasts (ECMWF) runoff results and previously created weight table</SPAN></P></DIV></DIV></DIV>ArcToolbox Tool
2 |
--------------------------------------------------------------------------------
/toolbox/RAPID Tools.CreateInflowFileFromWRFHydroRunoff.pyt.xml:
--------------------------------------------------------------------------------
1 | 20141024163024001.0TRUE20150727170542001500000005000ItemDescriptionc:\program files (x86)\arcgis\desktop10.3\Help\gp<DIV STYLE="text-align:Left;"><DIV><DIV><P><SPAN>The input of WRF-Hydro runoff file in NetCDF format.</SPAN></P></DIV></DIV></DIV><DIV STYLE="text-align:Left;"><DIV><DIV><P><SPAN>The input of the weight table that is previously created with the tool Create Weight Table From WRF Geogrid. The weight table should be based on the WRF geogrid file of the same spatial resolution as the WRF-Hydro runoff file input into this tool.</SPAN></P></DIV></DIV></DIV><DIV STYLE="text-align:Left;"><DIV><DIV><P><SPAN>The output of RAPID inflow file in NetCDF format.</SPAN></P></DIV></DIV></DIV><DIV STYLE="text-align:Left;"><DIV><DIV><P><SPAN>Creates RAPID NetCDF input of water inflow based on the Weather Research and Forecasting Model Hydrological (WRF-Hydro) modeling system output and previously created weight table</SPAN></P></DIV></DIV></DIV><DIV STYLE="text-align:Left;"><DIV><DIV><UL><LI><P><SPAN>The input WRF-Hydro runoff file must be of Network Common Data Form (NetCDF) format. It must contain variables 'SFCRNOFF' (accumulated surface runoff) and 'INTRFLOW' (accumulated interflow runoff) in dimensions of 'Time', 'south_north', and 'west_east'. Its spatial resolution must be consistent with the spatial resolution of the input WRF geogrid file used for creating the weight table.</SPAN></P></LI><LI><P><SPAN>The input weight table must be written as comma-separated values (csv) in plain-text form. It is the output of the tool Create Weight Table From WRF Geogrid. </SPAN></P></LI><LI><P><SPAN>The output RAPID inflow file has a m3_riv variable with the dimension of HydroID of the input drainage line features. The HydroID values are in ascending order.</SPAN></P></LI></UL></DIV></DIV></DIV>Create Inflow File From WRF-Hydro Runoff<DIV STYLE="text-align:Left;"><DIV><DIV><P><SPAN>Creates RAPID NetCDF input of water inflow based on the Weather Research and Forecasting Model Hydrological (WRF-Hydro) modeling system output and previously created weight table</SPAN></P></DIV></DIV></DIV>ArcToolbox Tool
2 |
--------------------------------------------------------------------------------
/toolbox/RAPID Tools.CreateMuskingumParameterFiles.pyt.xml:
--------------------------------------------------------------------------------
1 | 20141024163024001.0TRUE20150727161345001500000005000ItemDescriptionc:\program files (x86)\arcgis\desktop10.3\Help\gp<DIV STYLE="text-align:Left;"><DIV><DIV><P><SPAN>The input drainage line features that compose the stream network. They must have the attributes of HydroID, NextDownID, and Muskingum parameters including Musk_kfac, Musk_k, and Musk_x.</SPAN></P></DIV></DIV></DIV><DIV STYLE="text-align:Left;"><DIV><DIV><P><SPAN>The output CSV file of the Muskingum k factor (in seconds) listed in the same order as in the stream network connectivity file. The values are the first guesses for the parameter k. One value for each stream reach.</SPAN></P></DIV></DIV></DIV><DIV STYLE="text-align:Left;"><DIV><DIV><P><SPAN>The output CSV file of Muskingum parameter k (in seconds) listed in the same order as in the stream network connectivity file. One value for each stream reach.</SPAN></P></DIV></DIV></DIV><DIV STYLE="text-align:Left;"><DIV><DIV><P><SPAN>The output CSV file of Muskingum parameter x listed in the same order as in the stream network connectivity file. The parameter x is dimensionless. One value for each stream reach.</SPAN></P></DIV></DIV></DIV><DIV STYLE="text-align:Left;"><DIV><DIV><P><SPAN>Creates the Muskingum parameter files as inputs into RAPID from the drainage line feature class.</SPAN></P></DIV></DIV></DIV><DIV STYLE="text-align:Left;"><DIV><DIV><UL><LI><P><SPAN>If Musk_kfac, Musk_k, and Musk_x are not in the input drainage line feature class, they can be calculated using the Calculate Muskingum Parameters model in the RAPID_Parameters toolbox.</SPAN></P></LI></UL><P><SPAN /></P></DIV></DIV></DIV>Create Muskingum Parameter Files<DIV STYLE="text-align:Left;"><DIV><DIV><P><SPAN>Creates the Muskingum parameter files as inputs into RAPID from the drainage line feature class.</SPAN></P></DIV></DIV></DIV>ArcToolbox Tool
2 |
--------------------------------------------------------------------------------
/toolbox/RAPID Tools.CreateNetworkConnectivityFile.pyt.xml:
--------------------------------------------------------------------------------
1 | 20141024163024001.0TRUE20150727155508001500000005000ItemDescriptionc:\program files (x86)\arcgis\desktop10.3\Help\gp<DIV STYLE="text-align:Left;"><DIV><DIV><P><SPAN>The input drainage line features that compose the stream network. The feature class must contain the fields of HydroID and NextDownID.</SPAN></P></DIV></DIV></DIV><DIV STYLE="text-align:Left;"><DIV><DIV><P><SPAN>The output CSV file that provides information about the stream network and its connectivity. It is a required input to RAPID. Each reach in the stream network corresponds to an entry in the connectivity file. The entry fields include 1) HydroID of the reach, 2) HydroID of its downstream reach, 3) the number of upstream reaches with a maximum number specified, and the remaining fields of 4) HydroIDs of the upstream reaches. </SPAN></P></DIV></DIV></DIV><DIV STYLE="text-align:Left;"><DIV><DIV><P><SPAN>The maximum number of upstream reaches specified in the output network connectivity file. The default value is the maximum number of upstream reaches of all stream reaches in the input drainage line feature class.</SPAN></P></DIV></DIV></DIV><DIV STYLE="text-align:Left;"><DIV><DIV><P><SPAN>Creates the stream network connectivity file as the input to RAPID from the drainage line feature class.</SPAN></P></DIV></DIV></DIV><DIV STYLE="text-align:Left;"><DIV><DIV><UL><LI><P><SPAN>HydroID is the unique ID of the drainage line features in the input drainage line feature class. If this field is missing, you can either use the Assign HydroID tool in the ArcHydro toolbox or add and calculate the field using another field in the attribute table (e.g. COMID) if the latter has unique values.</SPAN></P></LI><LI><P><SPAN>If NextDownID is missing, you can use the Find Next Downstream Line tool in the ArcHydro toolbox to add the field. More ArcHydro tools such as Assign HydroID, Generate From/To Node for Lines can be used to add the prerequisite fields including HydroID, From_Node, and To_Node.</SPAN></P></LI></UL><P><SPAN /></P></DIV></DIV></DIV>Create Connectivity File<DIV STYLE="text-align:Left;"><DIV><DIV><P><SPAN>Creates the stream network connectivity file as the input to RAPID from the drainage line feature class.</SPAN></P></DIV></DIV></DIV>ArcToolbox Tool
2 |
--------------------------------------------------------------------------------
/toolbox/RAPID Tools.CreateSubsetFile.pyt.xml:
--------------------------------------------------------------------------------
1 | 20141024163024001.0TRUE20150727161741001500000005000ItemDescriptionc:\program files (x86)\arcgis\desktop10.3\Help\gp<DIV STYLE="text-align:Left;"><DIV><DIV><P><SPAN>The input drainage line features or those selected in an input feature layer. If selected, only the selected stream reaches are used when the model runs. Otherwise, all stream reaches are used.</SPAN></P></DIV></DIV></DIV><DIV STYLE="text-align:Left;"><DIV><DIV><P><SPAN>The output CSV file of a list of HydroIDs of stream reaches used when RAPID runs. The HydroIDs are in ascending order.</SPAN></P></DIV></DIV></DIV><DIV STYLE="text-align:Left;"><DIV><DIV><P><SPAN>Creates a CSV file of the list of stream reach HydroIDs in the entire domain defined in the network connectivity file or a smaller basin (subset) within the domain. </SPAN></P></DIV></DIV></DIV><DIV STYLE="text-align:Left;"><DIV><DIV><UL><LI><P><SPAN>The drainage line features of interest can be selected within a layer and input into the tool. If the input is the file path of a drainage line feature class, all features are used.</SPAN></P></LI></UL></DIV></DIV></DIV>Create Subset File<DIV STYLE="text-align:Left;"><DIV><DIV><P><SPAN>Creates a CSV file of the list of stream reach HydroIDs in the entire domain defined in the network connectivity file or a smaller basin (subset) within the domain. </SPAN></P></DIV></DIV></DIV>ArcToolbox Tool
2 |
--------------------------------------------------------------------------------
/toolbox/RAPID Tools.CreateWeightTableFromECMWFRunoff.pyt.xml:
--------------------------------------------------------------------------------
1 | 20141024163024001.0TRUE20150727165701001500000005000ItemDescriptionc:\program files (x86)\arcgis\desktop10.3\Help\gp<DIV STYLE="text-align:Left;"><DIV><DIV><P><SPAN>The input of ECMWF runoff file in NetCDF format. </SPAN></P></DIV></DIV></DIV><DIV STYLE="text-align:Left;"><DIV><DIV><P><SPAN>The input of stream network connectivity file that is created using the Create Connectivity File tool.</SPAN></P></DIV></DIV></DIV><DIV STYLE="text-align:Left;"><DIV><DIV><P><SPAN>The input of catchment features. It must contain an attribute field of unique stream IDs corresponding to the catchments.</SPAN></P></DIV></DIV></DIV><DIV STYLE="text-align:Left;"><DIV><DIV><P><SPAN>The input of stream ID field in the input catchment feature class.</SPAN></P></DIV></DIV></DIV><DIV STYLE="text-align:Left;"><DIV><DIV><P><SPAN>The output weight table in CSV format. It characterizes the spatial relationship between the ECMWF computational grid and the input catchment features. </SPAN></P></DIV></DIV></DIV><DIV STYLE="text-align:Left;"><DIV><DIV><P><SPAN>The output feature class of ECMWF computational grid polygons within a buffered extent of the input catchment features.</SPAN></P></DIV></DIV></DIV><DIV STYLE="text-align:Left;"><DIV><DIV><P><SPAN>The output feature class of ECMWF computational grid points within a buffered extent of the input catchment features. The points are the centers of the computational grid polygons.</SPAN></P></DIV></DIV></DIV><DIV STYLE="text-align:Left;"><DIV><DIV><P><SPAN>Creates weight table based on the European Centre for Medium-Range Weather Forecasts (ECMWF) runoff file and catchment features</SPAN></P></DIV></DIV></DIV><DIV STYLE="text-align:Left;"><DIV><DIV><UL><LI><P><SPAN>The input ECMWF runoff file must be of Network Common Data Form (NetCDF) format. It can be any file of Ensembles (1-50), Control (51), and Deterministic (52) forecasts. It must contain variables 'lat', 'lon', 'RO', and 'time'. The 'RO' variable has three dimensions: 'time', 'lat', and 'lon'.</SPAN></P></LI><LI><P><SPAN>The output weight table is written as comma-separated values (csv) in plain-text form. It is created by overlaying catchments on the computational grid. Fields in this table include the stream ID name (from the input catchment feature class), area_sqm (area in square meters of the catchment portion that is contributed by a computational grid), lon_index (longitude dimension index of the 'RO' variable in the ECMWF runoff dataset), lat_index (latitude dimension index of the 'RO' variable in the ECMWF runoff dataset), npoints (total number of the computational grids that contribute to the stream segment or the catchment), weight (area fraction of the catchment that is contributed by a computational grid), Lon (the longitude of the computational grid center), and Lat (the latitude of the computational grid center)</SPAN></P></LI><LI><P><SPAN>The output of computational grid polygon feature class is within a buffered extent of the input catchment features. It uses the GCS_WGS_1984 as spatial reference. The polygons at the outermost edge of the buffered extent are elongated and thus do not represent the real computational grid. </SPAN></P></LI><LI><P><SPAN>The output of computational grid point feature class is within a buffered extent of the input catchment features. It uses the GCS_WGS_1984 as spatial reference. </SPAN></P></LI><LI><P><SPAN>The functionality of the deprecated Update Weight Table tool is integrated into this tool. 
The output weight table is updated by adding entries for stream reaches that do NOT have corresponding catchments in the input catchment feature class. Runoff values are written as zeros for these stream reaches, and npoints is specified as one for further processing with the Create Inflow File tool. Other fields for these stream reaches are meaningless. </SPAN></P></LI></UL></DIV></DIV></DIV>Create Weight Table From ECMWF Runoff<DIV STYLE="text-align:Left;"><DIV><DIV><P><SPAN>Creates weight table based on the European Centre for Medium-Range Weather Forecasts (ECMWF) runoff file and catchment features</SPAN></P></DIV></DIV></DIV>ArcToolbox Tool
2 |
--------------------------------------------------------------------------------
/toolbox/RAPID Tools.CreateWeightTableFromWRFGeogrid.pyt.xml:
--------------------------------------------------------------------------------
1 | 20141024163024001.0TRUE20150727165749001500000005000ItemDescriptionc:\program files (x86)\arcgis\desktop10.3\Help\gp<DIV STYLE="text-align:Left;"><DIV><DIV><P><SPAN>The input of WRF geogrid file in NetCDF format</SPAN></P></DIV></DIV></DIV><DIV STYLE="text-align:Left;"><DIV><DIV><P><SPAN>The input of stream network connectivity file created using the Create Connectivity File tool.</SPAN></P></DIV></DIV></DIV><DIV STYLE="text-align:Left;"><DIV><DIV><P><SPAN>The input of catchment features. </SPAN><SPAN><SPAN>It must contain an attribute field of unique stream IDs corresponding to the catchments.</SPAN></SPAN></P><P><SPAN /></P></DIV></DIV></DIV><DIV STYLE="text-align:Left;"><DIV><DIV><P><SPAN>The input of stream ID field in the input catchment feature class.</SPAN></P></DIV></DIV></DIV><DIV STYLE="text-align:Left;"><DIV><DIV><P><SPAN>The output of weight table in csv format. It characterizes the spatial relationship between the WRF computational grid and the input catchment features.</SPAN></P></DIV></DIV></DIV><DIV STYLE="text-align:Left;"><DIV><DIV><P><SPAN>The output feature class of WRF computational grid polygons within a buffered extent of the input catchment features.</SPAN></P></DIV></DIV></DIV><DIV STYLE="text-align:Left;"><DIV><DIV><P><SPAN>The output feature class of WRF computational grid points within a buffered extent of the input catchment features. The points are the centers of the computational grid polygons.</SPAN></P><P><SPAN /></P></DIV></DIV></DIV><DIV STYLE="text-align:Left;"><DIV><DIV><P><SPAN>Creates weight table based on the Weather Research and Forecasting Model (WRF) geogrid file and catchment features</SPAN></P></DIV></DIV></DIV><DIV STYLE="text-align:Left;"><DIV><DIV><UL><LI><P><SPAN>The input WRF geogrid file must be of Network Common Data Form (NetCDF) format. It must contain</SPAN><SPAN /><SPAN>global attribute information to define the Lambert Conformal Conic map projection and the corner coordinates of raster data.</SPAN></P></LI><LI><P><SPAN>The output weight table is written as comma-separated values (csv) in plain-text form. It is created by overlaying catchments on the computational grid. Fields in this table include the stream ID name (from the input catchment feature class), area_sqm (area in square meters of the catchment portion that is contributed by a computational grid), west_east (west_east dimension index of the runoff raster in the WRF-Hydro dataset, the index of the westernmost grid is 0), south_north (south_north dimension index of the runoff raster in the WRF-Hydro dataset, the index of the southernmost grid is 0), npoints (total number of the computational grids that contribute to the stream segment or the catchment), weight (area fraction of the catchment that is contributed by a computational grid), Lon (the longitude of the computational grid center), Lat (the latitude of the computational grid center), x (the projected X coordinate of the computational grid center), and y (the projected Y coordinate of the computational grid center)</SPAN></P></LI><LI><P><SPAN>The output of computational grid polygon feature class is within a buffered extent of the input catchment features. It uses the North_America_Lambert_Conformal_Conic as spatial reference. </SPAN></P></LI><LI><P><SPAN>The output of computational grid point feature class is within a buffered extent of the input catchment features. It uses the North_America_Lambert_Conformal_Conic as spatial reference. 
</SPAN></P></LI><LI><P><SPAN>The functionality of the deprecated Update Weight Table tool is integrated into this tool. The output weight table is updated by adding entries for stream reaches that do NOT have corresponding catchments in the input catchment feature class. Runoff values are written as zeros for these stream reaches, and npoints is specified as one for further processing with the Create Inflow File tool. Other fields for these stream reaches are meaningless. </SPAN></P></LI></UL></DIV></DIV></DIV>Create Weight Table From WRF Geogrid<DIV STYLE="text-align:Left;"><DIV><DIV><P><SPAN>Creates weight table based on the Weather Research and Forecasting Model (WRF) geogrid file and catchment features</SPAN></P></DIV></DIV></DIV>ArcToolbox Tool
2 |
--------------------------------------------------------------------------------
/toolbox/RAPID Tools.FlowlineToPoint.pyt.xml:
--------------------------------------------------------------------------------
1 | 20150717140430001.0TRUE20150727174331001500000005000ItemDescriptionc:\program files (x86)\arcgis\desktop10.3\Help\gp<DIV STYLE="text-align:Left;"><DIV><P><SPAN>The input of drainage line features that compose the stream network.</SPAN></P></DIV></DIV><DIV STYLE="text-align:Left;"><DIV><P><SPAN>The output CSV file of drainage line centroid attributes including stream ID, coordinates (POINT_X, POINT_Y) and elevation (POINT_Z). If z-value is not enabled in the drainage line feature class, POINT_Z values will be zero in the file.</SPAN></P></DIV></DIV><DIV STYLE="text-align:Left;"><DIV><DIV><P><SPAN>Writes the centroid coordinates of flowlines into a CSV file</SPAN></P></DIV></DIV></DIV><DIV STYLE="text-align:Left;"><DIV><DIV><UL><LI><P><SPAN>The CSV file of centroid coordinates of flowlines can be used as the input to the script that converts the RAPID output to be CF (Climate and Forecast) compliant by adding metadata to the RAPID output.</SPAN></P></LI></UL></DIV></DIV></DIV>Flowline To Point<DIV STYLE="text-align:Left;"><DIV><DIV><P><SPAN>Writes the centroid coordinates of flowlines into a CSV file</SPAN></P></DIV></DIV></DIV>ArcToolbox Tool
2 |
--------------------------------------------------------------------------------
/toolbox/RAPID Tools.PublishDischargeMap.pyt.xml:
--------------------------------------------------------------------------------
1 | 20150626155034001.0TRUE20150730112252001500000005000ItemDescriptionc:\program files (x86)\arcgis\desktop10.3\Help\gp<DIV STYLE="text-align:Left;"><DIV><DIV><P><SPAN>The input discharge map document that is created using the Create Discharge Map tool or updated using the Update Discharge Map tool.</SPAN></P></DIV></DIV></DIV><DIV STYLE="text-align:Left;"><DIV><P><SPAN>An ArcGIS server connection to which the discharge map service is published. Use the server connection under the GIS Servers node in the Catalog window or a server connection file stored on disk. </SPAN></P></DIV></DIV><DIV STYLE="text-align:Left;"><DIV><P><SPAN>A string that represents the name of the discharge map service. The input service name can only contain alphanumeric characters and underscores. No spaces or special characters are allowed. The name cannot be more than 120 characters in length.</SPAN></P></DIV></DIV><DIV STYLE="text-align:Left;"><DIV><P><SPAN>A string that represents the Item Description Summary. By default, the summary from the ArcMap Map Properties dialog box or Catalog window Item Description dialog box for the input map document will be used. Use this parameter to override the user interface summary, or to provide a summary if one does not exist. The summary provided here will not be persisted in the map document. The default value is None.</SPAN></P></DIV></DIV><DIV STYLE="text-align:Left;"><DIV><P><SPAN>A string that represents the Item Description Tags. By default, the Tags from the ArcMap Map Properties dialog box or Catalog window Item Description dialog box for the map document will be used. Use this parameter to override the user interface tags, or to provide tags if they do not exist. The tags provided here will not be persisted in the map document. The default value is None.</SPAN></P></DIV></DIV><DIV STYLE="text-align:Left;"><DIV><DIV><P><SPAN>Publishes discharge map service to an ArcGIS server for stream flow visualization. </SPAN></P></DIV></DIV></DIV><DIV STYLE="text-align:Left;"><DIV><DIV><UL><LI><P><SPAN>Mapping, KML and WMS are enabled for the discharge map service.</SPAN></P></LI><LI><P><SPAN>During the tool execution, a Service Definition Draft (.sddraft) is created in the scratch workspace. It is then analyzed. If there are no errors, it is converted into a service definition that is finally uploaded to the server.</SPAN></P></LI><LI><P><SPAN>The tool execution will be terminated if there are errors in the analysis report on the .sddraft.</SPAN></P></LI><LI><P><SPAN>If the input discharge map document is created using the Create Discharge Map tool, the map document does not contain any information about the summary or the tags.</SPAN></P></LI></UL></DIV></DIV></DIV>Publish Discharge Map<DIV STYLE="text-align:Left;"><DIV><DIV><P><SPAN>Publishes discharge map service to an ArcGIS server for stream flow visualization. </SPAN></P></DIV></DIV></DIV>ArcToolbox Tool
2 |
--------------------------------------------------------------------------------
/toolbox/RAPID Tools.UpdateDischargeMap.pyt.xml:
--------------------------------------------------------------------------------
1 | 20150527183402001.0TRUE20150730105738001500000005000ItemDescriptionc:\program files (x86)\arcgis\desktop10.3\Help\gp<DIV STYLE="text-align:Left;"><DIV><DIV><P><SPAN>The input discharge map document that needs to be updated. The discharge map document is originally created using the Create Discharge Map tool.</SPAN></P></DIV></DIV></DIV><DIV STYLE="text-align:Left;"><DIV><DIV><P><SPAN>Updates the symbology of layers in a discharge map document for stream flow visualization when the discharge table gets updated.</SPAN></P></DIV></DIV></DIV><DIV STYLE="text-align:Left;"><DIV><DIV><UL><LI><P><SPAN>The discharge table must be updated in the database before running this tool.</SPAN></P></LI><LI><P><SPAN>The template symbology will be applied to all layers whose data source points to the updated discharge table.</SPAN></P></LI><LI><P><SPAN>The Time Extent property in the Time Slider Options of the map document does NOT get updated. The Time Extent must be updated manually. </SPAN></P></LI></UL></DIV></DIV></DIV>Update Discharge Map<DIV STYLE="text-align:Left;"><DIV><DIV><P><SPAN>Updates the symbology of layers in a discharge map document for stream flow visualization when the discharge table gets updated.</SPAN></P></DIV></DIV></DIV>ArcToolbox Tool
2 |
--------------------------------------------------------------------------------
/toolbox/RAPID Tools.UpdateWeightTable.pyt.xml:
--------------------------------------------------------------------------------
1 | 20150512105746001.0TRUE20150727164802001500000005000ItemDescriptionc:\program files (x86)\arcgis\desktop10.3\Help\gp<DIV STYLE="text-align:Left;"><DIV><DIV><P><SPAN>The input weight table created by the Create Weight Table From ECMWF Runoff tool or the Create Weight Table From WRF Geogrid tool</SPAN></P></DIV></DIV></DIV><DIV STYLE="text-align:Left;"><DIV><DIV><P><SPAN>The input stream network connectivity file created by the Create Connectivity File tool.</SPAN></P></DIV></DIV></DIV><DIV STYLE="text-align:Left;"><DIV><DIV><P><SPAN>The output weight table updated by adding entries for stream reaches that do NOT have any corresponding catchments in the catchment feature class. The runoff values are written as zeros for these stream reaches.</SPAN></P></DIV></DIV></DIV><DIV STYLE="text-align:Left;"><DIV><DIV><P><SPAN>Updates the weight table from the stream network connectivity file.</SPAN></P></DIV></DIV></DIV><DIV STYLE="text-align:Left;"><DIV><DIV><UL><LI><P><SPAN>The tool was developed to handle the case in which some drainage line features in the stream network connectivity file do NOT have corresponding catchments in the catchment feature class.</SPAN></P></LI><LI><P><SPAN>The functionality of Update Weight Table has been integrated into the Create Weight Table From ECMWF Runoff and Create Weight Table From WRF Geogrid tools. Thus the Update Weight Table tool will probably be deprecated in the future. </SPAN></P></LI></UL></DIV></DIV></DIV>Update Weight Table<DIV STYLE="text-align:Left;"><DIV><DIV><P><SPAN>Updates the weight table from the stream network connectivity file.</SPAN></P></DIV></DIV></DIV>ArcToolbox Tool
2 |
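A rough sketch of the zero-runoff padding described above, under the assumption that the weight table uses the ECMWF-style header (StreamID, area_sqm, lon_index, lat_index, npoints, weight, Lon, Lat) and that the first column of the connectivity file holds the stream IDs; all file paths are hypothetical:

import csv

# Hypothetical file locations
connectivity_csv = r"C:\data\rapid_connect.csv"
weights_csv = r"C:\data\weights.csv"
updated_csv = r"C:\data\weights_updated.csv"

# Assumption: the first column of the connectivity file is the stream ID
with open(connectivity_csv, "rb") as f:
    connect_ids = [row[0] for row in csv.reader(f)]

with open(weights_csv, "rb") as f:
    rows = list(csv.reader(f))
header, body = rows[0], rows[1:]
covered = set(row[0] for row in body)

# One zero-runoff row (npoints = 1, zero area and weight) per uncovered reach
for sid in connect_ids:
    if sid not in covered:
        body.append([sid, "0", "0", "0", "1", "0", "0", "0"])

with open(updated_csv, "wb") as f:
    csv.writer(f).writerows([header] + body)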
--------------------------------------------------------------------------------
/toolbox/RAPID Tools.pyt:
--------------------------------------------------------------------------------
1 | import os
2 | import sys
3 |
4 | scripts_dir = os.path.join(os.path.dirname(__file__), 'scripts')
5 | sys.path.append(scripts_dir)
6 | # Do not compile .pyc files for the tool modules.
7 | sys.dont_write_bytecode = True
8 |
9 | from CreateNetworkConnectivityFile import CreateNetworkConnectivityFile
10 | from CreateMuskingumParameterFiles import CreateMuskingumParameterFiles
11 | from CreateSubsetFile import CreateSubsetFile
12 | from CreateWeightTableFromWRFGeogrid import CreateWeightTableFromWRFGeogrid
13 | from CreateInflowFileFromWRFHydroRunoff import CreateInflowFileFromWRFHydroRunoff
14 | from CreateWeightTableFromECMWFRunoff import CreateWeightTableFromECMWFRunoff
15 | from CreateInflowFileFromECMWFRunoff import CreateInflowFileFromECMWFRunoff
16 | from UpdateWeightTable import UpdateWeightTable
17 | from CreateDischargeTable import CreateDischargeTable
18 | from CreateDischargeMap import CreateDischargeMap
19 | from CopyDataToServer import CopyDataToServer
20 | from UpdateDischargeMap import UpdateDischargeMap
21 | from PublishDischargeMap import PublishDischargeMap
22 | from FlowlineToPoint import FlowlineToPoint
23 |
24 |
25 |
26 | class Toolbox(object):
27 | def __init__(self):
28 | """Define the toolbox (the name of the toolbox is the name of the
29 | .pyt file)."""
30 |         self.label = "RAPID Tools"
31 |         self.alias = "RAPIDTools"
32 |
33 | # List of tool classes associated with this toolbox
34 | self.tools = [CreateNetworkConnectivityFile,
35 | CreateMuskingumParameterFiles,
36 | CreateSubsetFile,
37 | CreateWeightTableFromWRFGeogrid,
38 | CreateInflowFileFromWRFHydroRunoff,
39 | CreateWeightTableFromECMWFRunoff,
40 | CreateInflowFileFromECMWFRunoff,
41 | UpdateWeightTable,
42 | CreateDischargeTable,
43 | CreateDischargeMap,
44 | CopyDataToServer,
45 | UpdateDischargeMap,
46 | PublishDischargeMap,
47 | FlowlineToPoint]
48 |
49 |
--------------------------------------------------------------------------------
/toolbox/RAPID Tools.pyt.xml:
--------------------------------------------------------------------------------
1 |
2 | 20140731144744001.0TRUE201602291727431500000005000ItemDescriptionc:\program files (x86)\arcgis\desktop10.5\Help\gpRAPID ToolsIncludes seven tools: 1. Create Connectivity File, 2. Create Muskingum Parameter Files, 3. Create Subset File, 4. Create Weight Table From ECMWF Runoff, 5. Create Inflow File from ECMWF Runoff, 6. Create Weight Table from WRF Geogrid, and 7. Create Inflow File from WRF-Hydro Runoff. Toolbox Version 1.0, Updated on Aug 8, 2014. Toolbox Version 2.0, Updated on Oct 24, 2014, deleted the Create Inflow File From ECMWF tool, and added four tools: Create Weight Table From WRF Geogrid, Create Inflow File From WRF-Hydro Runoff, Create Weight Table From ECMWF Runoff, and Create Inflow File From ECMWF Runoff. Toolbox Version 2.1, Updated on Feb 05, 2015, enhancement in Create Weight Table From WRF Geogrid: modified the equation for calculating the central scale factor for MAP_PROJ = 2 according to Kevin Sampson; bug fix in Create Connectivity File: corrected input parameter indices; Create Inflow File From ECMWF Runoff: calculates RAPID inflow from cumulative runoff, and outputs netCDF3-classic as the file format; Create Inflow File From WRF-Hydro Runoff: outputs netCDF3-classic as the file format; Create Weight Table From ECMWF Runoff: enables an input catchment feature class with a spatial reference other than GCS_WGS_1984; and Create Weight Table From WRF Geogrid: enables an output CG Point file name. Toolbox Version 2.2, Updated on Feb 20, 2015, tool redesign in Create Inflow File From ECMWF Runoff and Create Inflow File From WRF-Hydro Runoff: added an input parameter of drainage line features, and output inflow based on the ascending order of all stream IDs in the drainage line feature class; bug fixes in Create Inflow File From WRF-Hydro Runoff: included UGDRNOFF as a component of stream inflow, and calculated inflow assuming that WRF-Hydro runoff variables are cumulative through time. Toolbox Version 2.3, Updated on Apr 28, 2015, bug fixes in Create Inflow File From WRF-Hydro Runoff and Create Inflow File From ECMWF Runoff: handle the case where the first several rows of records in the weight table do not correspond to any drainage line features. Minimum requirements: ArcGIS 10.2 or later. Toolbox Version 2.3.1, Updated on May 12, 2015, added a new tool, Update Weight Table: this tool is temporary and will be merged with the Create Weight Table tools later. Updated on May 24, 2015, bug fix in Update Weight Table: set npoints to 1 in the replacement_row, and fixed the overwriting problem of the replacement_row. Toolbox Version 3.0, Updated on May 29, 2015, added three tools for RAPID postprocessing: Create Discharge Table, Create Discharge Map, and Update Discharge Map; bug fix in Create Inflow File From ECMWF Runoff and Create Inflow File From WRF-Hydro Runoff: the pointer of the weight table went out of range. Toolbox Version 3.1, Updated on June 2, 2015, bug fix in Update Discharge Map: symbology was not updated when a different classification method was used. Toolbox Version 3.2, Updated on June 4, 2015, tool enhancement: integrated the weight table updating functionality into both Create Weight Table tools. Toolbox Version 3.2, Updated on June 09, 2015, tool redesign: removed the input drainage line feature class and computed RAPID inflow based on the stream IDs in the weight table in both Create Inflow File tools.
Toolbox Version 3.2, Updated on July 31, 2015, deprecated the Update Weight Table tool; added new tools, Copy Data To Server, Publish Discharge Map, and Flowline To Point; categorized the tools into three sets, Preprocessing, Postprocessing, and Utilities. Toolbox Version 3.2, Updated on Oct 30, 2015, enhancement in Create Discharge Table: the tool now handles variable/dimension names regardless of case, and variables with dimensions in various orders. Toolbox Version 3.3, Updated on 2/26/2016, fixed a bug and removed a redundant loop in the Create Weight Table From WRF Geogrid tool; updated on 2/29/2016, used a numpy array for faster queries in the Create Network Connectivity tool. Includes three toolsets: RAPID preprocessing, postprocessing, and utility tools. The preprocessing tools prepare spatial and temporal input data for RAPID. The postprocessing tools create a data model and visualize RAPID stream flow output over the stream network. The utility tools help automate the process of publishing the stream flow visualization as a map service.ArcToolbox Toolbox
3 |
--------------------------------------------------------------------------------
/toolbox/scripts/CopyDataToServer.py:
--------------------------------------------------------------------------------
1 | '''-------------------------------------------------------------------------------
2 | Tool Name: CopyDataToServer
3 | Source Name: CopyDataToServer.py
4 | Version: ArcGIS 10.2
5 | License: Apache 2.0
6 | Author: Environmental Systems Research Institute Inc.
7 | Updated by: Environmental Systems Research Institute Inc.
8 | Description: Copy discharge table and/or NHDFlowLines to a workspace on the
9 |              ArcGIS server machine.
10 | History: Initial coding - 06/26/2015, version 1.0
11 | Updated:
12 | -------------------------------------------------------------------------------'''
13 | import os
14 | import arcpy
15 | import xml.dom.minidom as DOM
16 |
17 |
18 | class CopyDataToServer(object):
19 | def __init__(self):
20 | """Define the tool (tool name is the name of the class)."""
21 | self.label = "Copy Data To Server"
22 | self.description = "Copy discharge table and/or the drainage line features to the ArcGIS \
23 | server machine"
24 | self.fields_oi = ["Time", "COMID", "Qout", "TimeValue"]
25 | self.name_ID = "COMID"
26 | self.errorMessages = []
27 | self.canRunInBackground = False
28 | self.category = "Utilities"
29 |
30 | def getParameterInfo(self):
31 | """Define parameter definitions"""
32 | param0 = arcpy.Parameter(name = "in_data",
33 | displayName = "Input Discharge Dataset",
34 | direction = "Input",
35 | parameterType = "Required",
36 | datatype = "GPValueTable")
37 | param0.columns = [['DEType', 'Dataset']]
38 |
39 | param1 = arcpy.Parameter(name = "in_workspace",
40 | displayName = "Input Workspace",
41 | direction = "Input",
42 | parameterType = "Required",
43 | datatype = "DEWorkspace")
44 | param1.filter.list = ["Local Database", "Remote Database"]
45 |
46 | param2 = arcpy.Parameter(name = "out_workspace",
47 | displayName = "Output Workspace",
48 | direction = "Output",
49 | parameterType = "Derived",
50 | datatype = "DEWorkspace")
51 |
52 | params = [param0, param1, param2]
53 |
54 | return params
55 |
56 | def isLicensed(self):
57 | """Set whether tool is licensed to execute."""
58 | return True
59 |
60 | def updateParameters(self, parameters):
61 | """Modify the values and properties of parameters before internal
62 | validation is performed. This method is called whenever a parameter
63 | has been changed."""
64 | return
65 |
66 |
67 | def updateMessages(self, parameters):
68 | """Modify the messages created by internal validation for each tool
69 | parameter. This method is called after internal validation."""
70 | return
71 |
72 | def execute(self, parameters, messages):
73 | """The source code of the tool."""
74 | arcpy.env.overwriteOutput = True
75 |
76 | in_data = parameters[0].value
77 | in_workspace_server = parameters[1].valueAsText
78 |
79 | for row in in_data:
80 | data = row[0]
81 | name = os.path.basename(str(data))
82 | if "Discharge_Table" in name:
83 | outTable = os.path.join(in_workspace_server, name)
84 | # Copy discharge table
85 | arcpy.CopyRows_management(data, outTable, '#')
86 | # Add attribute index to the discharge table
87 | arcpy.AddIndex_management(outTable, self.fields_oi[1], self.fields_oi[1])
88 | arcpy.AddIndex_management(outTable, self.fields_oi[3], self.fields_oi[3])
89 | elif "Flowline_" in name:
90 | outFlowline = os.path.join(in_workspace_server, name)
91 | # Copy flowline feature class
92 | arcpy.CopyFeatures_management(data, outFlowline)
93 | # Add attribute index to the flowline feature class
94 | arcpy.AddIndex_management(outFlowline, self.name_ID, self.name_ID, "UNIQUE", "ASCENDING")
95 | else:
96 |                 arcpy.AddMessage("{0} was not copied because its name contains neither 'Discharge_Table' nor 'Flowline_'".format(name))
97 |
98 |
99 | return
100 |
101 |
--------------------------------------------------------------------------------
/toolbox/scripts/CreateDischargeMap.py:
--------------------------------------------------------------------------------
1 | '''-------------------------------------------------------------------------------
2 | Tool Name: CreateDischargeMap
3 | Source Name: CreateDischargeMap.py
4 | Version: ArcGIS 10.2
5 | License: Apache 2.0
6 | Author: Environmental Systems Research Institute Inc.
7 | Updated by: Environmental Systems Research Institute Inc.
8 | Description: Create a discharge map document.
9 | History: Initial coding - 05/8/2015, version 1.0
10 | Updated: Version 1.0, 06/02/2015, rename the default template layer file names
11 | from FGDB_TimeEnabled_5NaturalBreaks.lyr and SQL_TimeEnabled_5NaturalBreaks.lyr
12 | to FGDB_TimeEnabled.lyr and SQL_TimeEnabled.lyr
13 | Version 1.1, 06/24/2015, add all layers into a group layer named as "AllScales",
14 | which is specified in the template .mxd
15 | Version 1.1, 04/01/2016, deleted unnecessary line of import netCDF4
16 | -------------------------------------------------------------------------------'''
17 | import os
18 | import arcpy
19 | import numpy as NUM
20 | import time
21 |
22 | class CreateDischargeMap(object):
23 | def __init__(self):
24 | """Define the tool (tool name is the name of the class)."""
25 | self.label = "Create Discharge Map"
26 | self.description = "Create a discharge map document for stream flow visualization based on \
27 | the discharge table and the drainage line feature class"
28 | self.name_ID = "COMID"
29 | self.GDBtemplate_layer = os.path.join(os.path.dirname(__file__), "templates", "FGDB_TimeEnabled.lyr")
30 | self.SQLtemplate_layer = os.path.join(os.path.dirname(__file__), "templates", "SQL_TimeEnabled.lyr")
31 | self.template_mxd = os.path.join(os.path.dirname(__file__), "templates", "template_mxd.mxd")
32 | self.name_df = "DischargeMap"
33 | self.field_streamOrder = "StreamOrde"
34 | self.layer_minScale_maxScale_query = {"All": [None, None, None]}
35 | self.canRunInBackground = False
36 | self.category = "Postprocessing"
37 |
38 | def copyFlowlines(self, in_drainage_line, path_database, list_uniqueID):
39 | """Create copies of flowlines based on the layer query definitions"""
40 | # make a feature layer for query selection
41 | name_lyr = "flowlines"
42 | arcpy.MakeFeatureLayer_management(in_drainage_line, name_lyr)
43 |
44 | '''Create the query expression for line features with matching records in the flat table'''
45 | expression_base = self.name_ID + " IN ("
46 | count = len(list_uniqueID)
47 | counter = 1
48 | for each_ID in list_uniqueID:
49 | if counter == count:
50 | expression_base = expression_base + str(each_ID) + ")"
51 | else:
52 | expression_base = expression_base + str(each_ID) + ", "
53 | counter += 1
54 |
55 |
56 | for each_key in self.layer_minScale_maxScale_query.keys():
57 | out_copy = os.path.join(path_database, "Flowline_"+each_key)
58 | pars = self.layer_minScale_maxScale_query[each_key]
59 | query = pars[2]
60 | expression = expression_base
61 | if query is not None:
62 |                 expression = expression_base + " AND " + query
63 |
64 | arcpy.SelectLayerByAttribute_management(name_lyr, "NEW_SELECTION", expression)
65 | arcpy.CopyFeatures_management(name_lyr, out_copy)
66 | arcpy.AddIndex_management(out_copy, self.name_ID, self.name_ID, "UNIQUE", "ASCENDING")
67 |
68 | return
69 |
70 |
71 | def getParameterInfo(self):
72 | """Define parameter definitions"""
73 | param0 = arcpy.Parameter(name = "in_drainage_line",
74 | displayName = "Input Drainage Line Features",
75 | direction = "Input",
76 | parameterType = "Required",
77 | datatype = "GPFeatureLayer")
78 |
79 | param1 = arcpy.Parameter(name = "in_discharge_table",
80 | displayName = "Input Discharge Table",
81 | direction = "Input",
82 | parameterType = "Required",
83 | datatype = "DETable")
84 |
85 | param2 = arcpy.Parameter(name = "out_discharge_map",
86 | displayName = "Output Discharge Map",
87 | direction = "Output",
88 | parameterType = "Required",
89 | datatype = "DEMapDocument"
90 | )
91 |
92 | param3 = arcpy.Parameter(name = "in_unique_ID_table",
93 | displayName = "Input Unique ID Table",
94 | direction = "Input",
95 | parameterType = "Optional",
96 | datatype = "DETable")
97 |
98 | param4 = arcpy.Parameter(name = "in_layer_info",
99 | displayName = "Input Layer Information",
100 | direction = "Input",
101 | parameterType = "Optional",
102 | datatype = "GPValueTable")
103 | param4.columns = [['String', 'Layer Name'], ['Long', 'Minimum Scale'], ['Long', 'Maximum Scale'], ['Long', 'Minimum Stream Order']]
104 |
105 |
106 | params = [param0, param1, param2, param3, param4]
107 |
108 | return params
109 |
110 | def isLicensed(self):
111 | """Set whether tool is licensed to execute."""
112 | return True
113 |
114 | def updateParameters(self, parameters):
115 | """Modify the values and properties of parameters before internal
116 | validation is performed. This method is called whenever a parameter
117 | has been changed."""
118 | '''Add .mxd suffix to the output map document name'''
119 | if parameters[2].altered:
120 | (dirnm, basenm) = os.path.split(parameters[2].valueAsText)
121 | if not basenm.endswith(".mxd"):
122 | parameters[2].value = os.path.join(dirnm, "{}.mxd".format(basenm))
123 | return
124 |
125 | def updateMessages(self, parameters):
126 | """Modify the messages created by internal validation for each tool
127 | parameter. This method is called after internal validation."""
128 | return
129 |
130 | def execute(self, parameters, messages):
131 | """The source code of the tool."""
132 | arcpy.env.overwriteOutput = True
133 |
134 | in_drainage_line = parameters[0].valueAsText
135 | in_flat_table = parameters[1].valueAsText
136 | out_map_document = parameters[2].valueAsText
137 | in_uniqueID_table = parameters[3].valueAsText
138 | in_layer_info = parameters[4].value
139 |
140 |
141 | ''' Obtain a list of unique IDs '''
142 | list_uniqueID = []
143 | if in_uniqueID_table is not None:
144 | arr_uniqueID = arcpy.da.TableToNumPyArray(in_uniqueID_table, self.name_ID)
145 | arr_uniqueID = arr_uniqueID[self.name_ID]
146 | list_uniqueID = list(arr_uniqueID)
147 | else:
148 | arr_ID = arcpy.da.TableToNumPyArray(in_flat_table, self.name_ID)
149 | arr_ID = arr_ID[self.name_ID]
150 | list_uniqueID = list(NUM.unique(arr_ID))
151 |
152 |
153 | ''' Update self.layer_minScale_maxScale_query if user defines the map layer information'''
154 | if in_layer_info is not None:
155 | self.layer_minScale_maxScale_query = {}
156 | for each_list in in_layer_info:
157 | layer_minScale = None
158 | layer_maxScale = None
159 | layer_query = None
160 |
161 | if each_list[1] > 0:
162 | layer_minScale = each_list[1]
163 | if each_list[2] > 0:
164 | layer_maxScale = each_list[2]
165 | if each_list[3] > 0:
166 | layer_query = self.field_streamOrder + " >= " + str(each_list[3])
167 |
168 | key_in_dict = each_list[0]
169 | list_in_dict = [layer_minScale, layer_maxScale, layer_query]
170 | self.layer_minScale_maxScale_query[key_in_dict] = list_in_dict
171 |
172 |
173 | # Get the database path of the flat table
174 | (dirnm, basenm) = os.path.split(in_flat_table)
175 |
176 | '''Copy Flow line features and add attribute index'''
177 | self.copyFlowlines(in_drainage_line, dirnm, list_uniqueID)
178 |
179 | '''Create Map Document'''
180 | mxd = arcpy.mapping.MapDocument(self.template_mxd)
181 | df = arcpy.mapping.ListDataFrames(mxd)[0]
182 | df.name = self.name_df
183 |
184 | mxd.saveACopy(out_map_document)
185 | del mxd, df
186 |
187 | template_lyr = self.GDBtemplate_layer
188 | if not dirnm.endswith('.gdb'):
189 | template_lyr = self.SQLtemplate_layer
190 |
191 |
192 | for each_key in self.layer_minScale_maxScale_query.keys():
193 | mxd = arcpy.mapping.MapDocument(out_map_document)
194 | df = arcpy.mapping.ListDataFrames(mxd)[0]
195 | targetGroupLayer = arcpy.mapping.ListLayers(mxd, "AllScales", df)[0]
196 | lyrFile = arcpy.mapping.Layer(template_lyr)
197 |
198 | out_flowlines = os.path.join(dirnm, "Flowline_"+each_key)
199 | # Create Layer
200 | lyr = arcpy.mapping.Layer(out_flowlines)
201 |
202 | # Add join to layer
203 | arcpy.AddJoin_management(lyr, self.name_ID, in_flat_table, self.name_ID, "KEEP_COMMON")
204 |
205 | # Set min and max scales for layers
206 | minScale = self.layer_minScale_maxScale_query[each_key][0]
207 | maxScale = self.layer_minScale_maxScale_query[each_key][1]
208 |
209 | if minScale is not None:
210 | lyr.minScale = minScale
211 | if maxScale is not None:
212 | lyr.maxScale = maxScale
213 |
214 | # Apply symbology from template
215 | arcpy.ApplySymbologyFromLayer_management(lyr, template_lyr)
216 |
217 | # Add layer
218 | arcpy.mapping.AddLayerToGroup(df, targetGroupLayer, lyr, "BOTTOM")
219 |
220 | mxd.save()
221 | del mxd, df, lyr, targetGroupLayer
222 |
223 | # Update layer time property
224 | for each_key in self.layer_minScale_maxScale_query.keys():
225 | mxd = arcpy.mapping.MapDocument(out_map_document)
226 | df = arcpy.mapping.ListDataFrames(mxd)[0]
227 | lyr = arcpy.mapping.ListLayers(mxd, "Flowline_"+each_key, df)[0]
228 | arcpy.mapping.UpdateLayerTime(df, lyr, lyrFile)
229 |
230 | dft = df.time
231 | dft.startTime = lyr.time.startTime
232 | dft.endTime = lyr.time.endTime
233 |
234 | mxd.save()
235 | del mxd, df, lyr
236 |
237 |
238 | # Add the flat table into map: as a workaround for a potential bug in publishing
239 | mxd = arcpy.mapping.MapDocument(out_map_document)
240 | df = arcpy.mapping.ListDataFrames(mxd)[0]
241 | flat_Table = arcpy.mapping.TableView(in_flat_table)
242 | arcpy.mapping.AddTableView(df, flat_Table)
243 |
244 | mxd.save()
245 | del mxd, df, flat_Table
246 |
247 |
248 |
249 | return
250 |
251 |
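A side note on copyFlowlines above: the selection expression is assembled by per-element string concatenation. An equivalent construction with join(), shown as a sketch rather than as a change to the tool itself, is:

# Hypothetical ID list
list_uniqueID = [101, 102, 103]
expression_base = "COMID IN ({0})".format(
    ", ".join(str(each_ID) for each_ID in list_uniqueID))
# -> 'COMID IN (101, 102, 103)'; for layers filtered by stream order the
# tool then appends an " AND StreamOrde >= n" clause to this base expression.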
--------------------------------------------------------------------------------
/toolbox/scripts/CreateDischargeTable.py:
--------------------------------------------------------------------------------
1 | '''-------------------------------------------------------------------------------
2 | Tool Name: CreateDischargeTable
3 | Source Name: CreateDischargeTable.py
4 | Version: ArcGIS 10.2
5 | License: Apache 2.0
6 | Author: Environmental Systems Research Institute Inc.
7 | Updated by: Environmental Systems Research Institute Inc.
8 | Description: Create a RAPID discharge table using stream ID and time from RAPID
9 | discharge file as row dimensions.
10 | History: Initial coding - 03/25/2015, version 1.0
11 | Updated: Version 1.0, 10/30/2015, handle the situations of different Uppercase
12 | /Lowercase of variable or dimension names, and various sequences of
13 | the dimensions of Qout. Still have a bug in creating UniqueID table.
14 | -------------------------------------------------------------------------------'''
15 | import os
16 | import arcpy
17 | import numpy as NUM
18 | import netCDF4 as NET
19 | import datetime
20 |
21 | class CreateDischargeTable(object):
22 | def __init__(self):
23 | """Define the tool (tool name is the name of the class)."""
24 | self.label = "Create Discharge Table"
25 | self.description = "Create a RAPID discharge table in a file geodatabase \
26 | or SQL server geodatabase using stream ID and time \
27 | from RAPID discharge file as row dimensions of the table"
28 | self.vars_oi = ["COMID", "Qout"]
29 | self.dims_oi = ["Time", "COMID"]
30 | self.fields_oi = ["Time", "COMID", "Qout", "TimeValue"]
31 | self.errorMessages = ["Missing Variable {0}",
32 | "Missing Dimension {0} of Variable {1}"]
33 | self.canRunInBackground = False
34 | self.category = "Postprocessing"
35 |
36 | def validateNC(self, in_nc, messages):
37 | """Check the necessary variables and dimensions in the input netcdf data"""
38 | data_nc = NET.Dataset(in_nc)
39 |
40 | vars = data_nc.variables.keys()
41 | vars_upper = []
42 | for eachvar in vars:
43 | vars_upper.append(eachvar.upper())
44 |
45 | counter = 0
46 | for eachvar_oi in self.vars_oi:
47 | try:
48 | ind = vars_upper.index(eachvar_oi.upper())
49 | # Update the Uppercase/Lowercase of the names of vars of interests
50 | self.vars_oi[counter] = vars[ind]
51 | counter += 1
52 |             except ValueError:
53 | messages.addErrorMessage(self.errorMessages[0].format(eachvar_oi))
54 | raise arcpy.ExecuteError
55 |
56 | dims = data_nc.variables[self.vars_oi[1]].dimensions
57 | dims_upper = []
58 | for eachdim in dims:
59 | dims_upper.append(eachdim.upper())
60 |
61 | counter = 0
62 | for eachdim_oi in self.dims_oi:
63 | try:
64 | ind = dims_upper.index(eachdim_oi.upper())
65 | # Update the Uppercase/Lowercase of the names of dims of interests
66 | self.dims_oi[counter] = dims[ind]
67 | counter += 1
68 |             except ValueError:
69 | messages.addErrorMessage(self.errorMessages[1].format(eachdim_oi, self.vars_oi[1]))
70 | raise arcpy.ExecuteError
71 |
72 | data_nc.close()
73 |
74 | return
75 |
76 | def createFlatTable(self, in_nc, out_table):
77 | """Create discharge table"""
78 | # obtain numpy array from the netCDF data
79 | data_nc = NET.Dataset(in_nc)
80 | comid = data_nc.variables[self.vars_oi[0]][:]
81 | qout = data_nc.variables[self.vars_oi[1]][:]
82 |
83 |
84 | time_size = len(data_nc.dimensions[self.dims_oi[0]]) # to adapt to the changes of Qout dimensions
85 | comid_size = len(data_nc.dimensions[self.dims_oi[1]]) # to adapt to the changes of Qout dimensions
86 | total_size = time_size * comid_size
87 |
88 | qout_arr = qout.reshape(total_size, 1)
89 | time_arr = NUM.repeat(NUM.arange(1,time_size+1), comid_size)
90 | time_arr = time_arr.reshape(total_size, 1)
91 | comid_arr = NUM.tile(comid, time_size)
92 | comid_arr = comid_arr.reshape(total_size, 1)
93 | data_table = NUM.hstack((time_arr, comid_arr, qout_arr))
94 |
95 | # convert to numpy structured array
96 | str_arr = NUM.core.records.fromarrays(data_table.transpose(),
97 | NUM.dtype([(self.fields_oi[0], NUM.int32), (self.fields_oi[1], NUM.int32), (self.fields_oi[2], NUM.float32)]))
98 |
99 | data_nc.close()
100 |
101 | # numpy structured array to table
102 | arcpy.da.NumPyArrayToTable(str_arr, out_table)
103 |
104 | return
105 |
106 | def createUniqueIDTable(self, in_nc, out_table):
107 | """Create a table of unique stream IDs"""
108 | data_nc = NET.Dataset(in_nc)
109 | comid_arr = data_nc.variables[self.vars_oi[0]][:]
110 | comid_size = len(comid_arr)
111 | comid_arr = comid_arr.reshape(comid_size, 1)
112 | arcpy.AddMessage(comid_arr.transpose())
113 | arcpy.AddMessage(self.vars_oi[0])
114 |
115 | #convert to numpy structured array
116 | str_arr = NUM.core.records.fromarrays(comid_arr.transpose(), NUM.dtype([(self.vars_oi[0], NUM.int32)]))
117 |
118 | # numpy structured array to table
119 | arcpy.da.NumPyArrayToTable(str_arr, out_table)
120 |
121 | data_nc.close()
122 |
123 | return
124 |
125 |
126 | def calculateTimeField(self, out_table, start_datetime, time_interval):
127 | """Add & calculate TimeValue field: scripts adapted from TimeTools.pyt developed by N. Noman"""
128 | timeIndexFieldName = self.fields_oi[0]
129 | timeValueFieldName = self.fields_oi[3]
130 | #Add TimeValue field
131 | arcpy.AddField_management(out_table, timeValueFieldName, "DATE", "", "", "", timeValueFieldName, "NULLABLE")
132 | #Calculate TimeValue field
133 | expression = "CalcTimeValue(!" + timeIndexFieldName + "!, '" + start_datetime + "', " + time_interval + ")"
134 | codeBlock = """def CalcTimeValue(timestep, sdatestr, dt):
135 | if (":" in sdatestr):
136 | sdate = datetime.datetime.strptime(sdatestr, '%m/%d/%Y %I:%M:%S %p')
137 | else:
138 | sdate = datetime.datetime.strptime(sdatestr, '%m/%d/%Y')
139 | tv = sdate + datetime.timedelta(hours=(timestep - 1) * dt)
140 | return tv"""
141 |
142 | arcpy.AddMessage("Calculating " + timeValueFieldName + "...")
143 | arcpy.CalculateField_management(out_table, timeValueFieldName, expression, "PYTHON_9.3", codeBlock)
144 |
145 | return
146 |
147 | def getParameterInfo(self):
148 | """Define parameter definitions"""
149 | param0 = arcpy.Parameter(name = "in_RAPID_discharge_file",
150 | displayName = "Input RAPID Discharge File",
151 | direction = "Input",
152 | parameterType = "Required",
153 | datatype = "DEFile")
154 |
155 | param1 = arcpy.Parameter(name = "in_start_date_time",
156 | displayName = "Start Date and Time",
157 | direction = "Input",
158 | parameterType = "Required",
159 | datatype = "GPDate")
160 |
161 | param2 = arcpy.Parameter(name = "in_time_interval",
162 | displayName = "Time Interval in Hour",
163 | direction = "Input",
164 | parameterType = "Required",
165 | datatype = "GPDouble")
166 |
167 | param3 = arcpy.Parameter(name = "out_discharge_table",
168 | displayName = "Output Discharge Table",
169 | direction = "Output",
170 | parameterType = "Required",
171 | datatype = "DETable")
172 |
173 | param4 = arcpy.Parameter(name = "out_unique_ID_table",
174 | displayName = "Output Unique ID Table",
175 | direction = "Output",
176 | parameterType = "Optional",
177 | datatype = "DETable")
178 |
179 | params = [param0, param1, param2, param3, param4]
180 | return params
181 |
182 | def isLicensed(self):
183 | """Set whether tool is licensed to execute."""
184 | return True
185 |
186 | def updateParameters(self, parameters):
187 | """Modify the values and properties of parameters before internal
188 | validation is performed. This method is called whenever a parameter
189 | has been changed."""
190 |         if parameters[3].valueAsText is not None:
191 | (dirnm, basenm) = os.path.split(parameters[3].valueAsText)
192 | parameters[3].value = os.path.join(dirnm, "Discharge_Table")
193 |
194 | return
195 |
196 | def updateMessages(self, parameters):
197 | """Modify the messages created by internal validation for each tool
198 | parameter. This method is called after internal validation."""
199 |
200 | if parameters[0].altered:
201 | in_nc = parameters[0].valueAsText
202 | try:
203 | data_nc = NET.Dataset(in_nc)
204 | data_nc.close()
205 | except Exception as e:
206 | parameters[0].setErrorMessage(e.message)
207 |
208 | return
209 |
210 | def execute(self, parameters, messages):
211 | """The source code of the tool."""
212 | arcpy.env.overwriteOutput = True
213 |
214 | in_nc = parameters[0].valueAsText
215 | start_datetime = parameters[1].valueAsText
216 | time_interval = parameters[2].valueAsText
217 | out_flat_table = parameters[3].valueAsText
218 | out_uniqueID_table = parameters[4].valueAsText
219 |
220 | # validate the netCDF dataset
221 | self.validateNC(in_nc, messages)
222 |
223 | # create flat table based on the netcdf data file
224 | self.createFlatTable(in_nc, out_flat_table)
225 |
226 | # add and calculate TimeValue field
227 | self.calculateTimeField(out_flat_table, start_datetime, time_interval)
228 |
229 | # add attribute indices for COMID and TimeValue
230 | arcpy.AddIndex_management(out_flat_table, self.fields_oi[1], self.fields_oi[1])
231 | arcpy.AddIndex_management(out_flat_table, self.fields_oi[3], self.fields_oi[3])
232 |
233 | # create unique ID table if user defined
234 | arcpy.AddMessage("unique ID table: {0}".format(out_uniqueID_table))
235 | if out_uniqueID_table is not None:
236 | self.createUniqueIDTable(in_nc, out_uniqueID_table)
237 |
238 |
239 | return
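For reference, the repeat/tile flattening performed by createFlatTable can be demonstrated in isolation. The demo below assumes the Qout dimensions are ordered (time, COMID); the tool itself adapts to either ordering:

import numpy as NUM

comid = NUM.array([10, 20, 30])
qout = NUM.array([[1.0, 2.0, 3.0],    # discharge at time step 1
                  [4.0, 5.0, 6.0]])   # discharge at time step 2

time_size, comid_size = qout.shape
# Time index repeats for each stream ID; stream IDs tile for each time step
time_arr = NUM.repeat(NUM.arange(1, time_size + 1), comid_size)  # 1 1 1 2 2 2
comid_arr = NUM.tile(comid, time_size)                           # 10 20 30 10 20 30
qout_arr = qout.reshape(time_size * comid_size)                  # row-major flatten

flat = NUM.column_stack((time_arr, comid_arr, qout_arr))
# flat[0] -> [ 1. 10.  1.]   flat[4] -> [ 2. 20.  5.]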
--------------------------------------------------------------------------------
/toolbox/scripts/CreateInflowFileFromECMWFRunoff.py:
--------------------------------------------------------------------------------
1 | '''-------------------------------------------------------------------------------
2 | Tool Name: CreateInflowFileFromECMWFRunoff
3 | Source Name: CreateInflowFileFromECMWFRunoff.py
4 | Version: ArcGIS 10.2
5 | License: Apache 2.0
6 | Author: Environmental Systems Research Institute Inc.
7 | Updated by: Environmental Systems Research Institute Inc.
8 | Description: Creates RAPID inflow file based on the ECMWF runoff output
9 | and the weight table previously created.
10 | History: Initial coding - 10/21/2014, version 1.0
11 | Updated: Version 1.0, 10/23/2014, modified names of tool and parameters
12 | Version 1.0, 10/28/2014, added data validation
13 | Version 1.0, 10/30/2014, initial version completed
14 | Version 1.1, 11/05/2014, modified the algorithm for extracting runoff
15 | variable from the netcdf dataset to improve computation efficiency
16 | Version 1.2, 02/03/2015, bug fixing - output netcdf3-classic instead
17 | of netcdf4 as the format of RAPID inflow file
18 | Version 1.2, 02/03/2015, bug fixing - calculate inflow assuming that
19 | ECMWF runoff data is cumulative instead of incremental through time
20 | Version 1.3, 02/18/2015, tool redesign - input drainage line features
21 |              and compute RAPID inflow in the ascending order of drainage line ID
22 | Version 1.3, 02/18/2015, use 'HydroID' as one of the dimension names
23 | of m3_riv in the output RAPID inflow file
24 | Version 1.3, 02/20/2015 Enhancement - Added error handling for message
25 | updating of input drainage line features and output .csv weight table
26 | Version 1.3, 04/20/2015 Bug fixing - "HydroID", "NextDownID" (case insensitive) as
27 | the required field names in the input drainage line feature class
28 | Version 1.4, 04/27/2015 Bug fixing (false zero inflows)- To deal with
29 | the first several rows of records in the weight table don't correspond
30 | to any drainage line features (e.g. sink polygon as in Region 12 catchment)
31 | Version 1.4, 05/29/2015 Bug fixing: the pointer of weight table goes out of range
32 | Version 2.0, 06/09/2015 tool redesign - remove input drainage line features and
33 | compute RAPID inflow based on the streamIDs in the weight table given that the
34 | Create Weight Table tool has already taken care of the mismatch between streamIDs
35 | in the drainage line feature class and the catchment feature class.
36 | Version 2.0, 06/10/2015, use streamID in the weight table as the dimension name of
37 | m3_riv in the output RAPID inflow file
38 |
39 | -------------------------------------------------------------------------------'''
40 | import os
41 | import arcpy
42 | import netCDF4 as NET
43 | import numpy as NUM
44 | import csv
45 |
46 | class CreateInflowFileFromECMWFRunoff(object):
47 | def __init__(self):
48 | """Define the tool (tool name is the name of the class)."""
49 | self.label = "Create Inflow File From ECMWF Runoff"
50 | self.description = ("Creates RAPID NetCDF input of water inflow " +
51 | "based on ECMWF runoff results and previously created weight table.")
52 | self.canRunInBackground = False
53 | self.header_wt = ['StreamID', 'area_sqm', 'lon_index', 'lat_index', 'npoints', 'weight', 'Lon', 'Lat']
54 | self.dims_oi = ['lon', 'lat', 'time']
55 | self.vars_oi = ["lon", "lat", "time", "RO"]
56 | self.length_time = {"LowRes": 61, "HighRes": 125}
57 | self.length_time_opt = {"LowRes": 61, "HighRes-1hr": 91, "HighRes-3hr": 49, "HighRes-6hr": 41}
58 | self.errorMessages = ["Missing Variable 'time'",
59 | "Incorrect dimensions in the input ECMWF runoff file.",
60 | "Incorrect variables in the input ECMWF runoff file.",
61 | "Incorrect time variable in the input ECMWF runoff file",
62 | "Incorrect number of columns in the weight table",
63 | "No or incorrect header in the weight table",
64 | "Incorrect sequence of rows in the weight table"]
65 | self.category = "Preprocessing"
66 |
67 |
68 | def dataValidation(self, in_nc, messages):
69 | """Check the necessary dimensions and variables in the input netcdf data"""
70 | data_nc = NET.Dataset(in_nc)
71 |
72 | dims = data_nc.dimensions.keys()
73 | if dims != self.dims_oi:
74 | messages.addErrorMessage(self.errorMessages[1])
75 | raise arcpy.ExecuteError
76 |
77 | vars = data_nc.variables.keys()
78 | if vars != self.vars_oi:
79 | messages.addErrorMessage(self.errorMessages[2])
80 | raise arcpy.ExecuteError
81 |         data_nc.close()
82 | return
83 |
84 |
85 | def dataIdentify(self, in_nc):
86 | """Check if the data is Ensemble 1-51 (low resolution) or 52 (high resolution)"""
87 | data_nc = NET.Dataset(in_nc)
88 | name_time = self.vars_oi[2]
89 | time = data_nc.variables[name_time][:]
90 | diff = NUM.unique(NUM.diff(time))
91 | data_nc.close()
92 | time_interval_highres = NUM.array([1.0,3.0,6.0],dtype=float)
93 | time_interval_lowres = NUM.array([6.0],dtype=float)
94 | if (diff == time_interval_highres).all():
95 | return "HighRes"
96 | elif (diff == time_interval_lowres).all():
97 | return "LowRes"
98 | else:
99 | return None
100 |
101 |
102 | def getParameterInfo(self):
103 | """Define parameter definitions"""
104 | param0 = arcpy.Parameter(name = "in_ECMWF_runoff_file",
105 | displayName = "Input ECMWF Runoff File",
106 | direction = "Input",
107 | parameterType = "Required",
108 | datatype = "DEFile")
109 |
110 | param1 = arcpy.Parameter(name = "in_weight_table",
111 | displayName = "Input Weight Table",
112 | direction = "Input",
113 | parameterType = "Required",
114 | datatype = "DEFile")
115 |
116 | param2 = arcpy.Parameter(name = "out_inflow_file",
117 | displayName = "Output Inflow File",
118 | direction = "Output",
119 | parameterType = "Required",
120 | datatype = "DEFile")
121 |
122 | param3 = arcpy.Parameter(name = "time_interval",
123 | displayName = "Time Interval",
124 | direction = "Input",
125 | parameterType = "Optional",
126 | datatype = "GPString")
127 |
128 | param3.filter.type = "ValueList"
129 | list_intervals = []
130 | param3.filter.list = list_intervals
131 |
132 | params = [param0, param1, param2, param3]
133 |
134 | return params
135 |
136 | def isLicensed(self):
137 | """Set whether tool is licensed to execute."""
138 | return True
139 |
140 | def updateParameters(self, parameters):
141 | """Modify the values and properties of parameters before internal
142 | validation is performed. This method is called whenever a parameter
143 | has been changed."""
144 | if parameters[0].altered:
145 | in_nc = parameters[0].valueAsText
146 | try:
147 | data_nc = NET.Dataset(in_nc)
148 | name_time = self.vars_oi[2]
149 | time = data_nc.variables[name_time][:]
150 | diff = NUM.unique(NUM.diff(time))
151 | max_interval = diff.max()
152 | parameters[3].filter.list = [(str(int(each)) + "hr") for each in diff]
153 | if parameters[3].valueAsText is None:
154 | parameters[3].value = str(int(max_interval)) + "hr"
155 | data_nc.close()
156 | except:
157 | pass
158 |
159 | if parameters[0].altered and parameters[1].altered:
160 | if parameters[2].valueAsText is not None:
161 | (dirnm, basenm) = os.path.split(parameters[2].valueAsText)
162 | if not basenm.endswith(".nc"):
163 | parameters[2].value = os.path.join(dirnm, "{}.nc".format(basenm))
164 |
165 | return
166 |
167 | def updateMessages(self, parameters):
168 | """Modify the messages created by internal validation for each tool
169 | parameter. This method is called after internal validation."""
170 |
171 | if parameters[0].altered:
172 | in_nc = parameters[0].valueAsText
173 | try:
174 | data_nc = NET.Dataset(in_nc)
175 | name_time = self.vars_oi[2]
176 | time = data_nc.variables[name_time][:]
177 | data_nc.close()
178 | except:
179 | parameters[0].setErrorMessage(self.errorMessages[0])
180 |
181 | try:
182 | if parameters[1].altered:
183 | (dirnm, basenm) = os.path.split(parameters[1].valueAsText)
184 | if not basenm.endswith(".csv"):
185 | parameters[1].setErrorMessage("The weight table must be in CSV format")
186 | except Exception as e:
187 | parameters[1].setErrorMessage(e.message)
188 |
189 | return
190 |
191 | def execute(self, parameters, messages):
192 | """The source code of the tool."""
193 |
194 | arcpy.env.overwriteOutput = True
195 |
196 | in_nc = parameters[0].valueAsText
197 | in_weight_table = parameters[1].valueAsText
198 | out_nc = parameters[2].valueAsText
199 | in_time_interval = parameters[3].valueAsText
200 |
201 | # Validate the netcdf dataset
202 | self.dataValidation(in_nc, messages)
203 |
204 | # identify if the input netcdf data is the High Resolution data with three different time intervals
205 | id_data = self.dataIdentify(in_nc)
206 | if id_data is None:
207 | messages.addErrorMessage(self.errorMessages[3])
208 | raise arcpy.ExecuteError
209 |
210 | ''' Read the netcdf dataset'''
211 | data_in_nc = NET.Dataset(in_nc)
212 | time = data_in_nc.variables[self.vars_oi[2]][:]
213 |
214 | # Check the size of time variable in the netcdf data
215 | if len(time) != self.length_time[id_data]:
216 | messages.addErrorMessage(self.errorMessages[3])
217 | raise arcpy.ExecuteError
218 |
219 | ''' Read .csv weight table '''
220 | arcpy.AddMessage("Reading the weight table...")
221 | dict_list = {self.header_wt[0]:[], self.header_wt[1]:[], self.header_wt[2]:[],
222 | self.header_wt[3]:[], self.header_wt[4]:[], self.header_wt[5]:[],
223 | self.header_wt[6]:[], self.header_wt[7]:[]}
224 | streamID = ""
225 | with open(in_weight_table, "rb") as csvfile:
226 | reader = csv.reader(csvfile)
227 | count = 0
228 | for row in reader:
229 | if count == 0:
230 | #check number of columns in the weight table
231 | if len(row) != len(self.header_wt):
232 | messages.addErrorMessage(self.errorMessages[4])
233 | raise arcpy.ExecuteError
234 | #check header
235 | if row[1:len(self.header_wt)] != self.header_wt[1:len(self.header_wt)]:
236 | messages.addErrorMessage(self.errorMessages[5])
237 |                         raise arcpy.ExecuteError
238 | streamID = row[0]
239 | count += 1
240 | else:
241 | for i in range(0,8):
242 | dict_list[self.header_wt[i]].append(row[i])
243 | count += 1
244 |
245 | '''Calculate water inflows'''
246 | arcpy.AddMessage("Calculating water inflows...")
247 |
248 | # Obtain size information
249 | if id_data == "LowRes":
250 | size_time = self.length_time_opt["LowRes"]
251 | else:
252 | if in_time_interval == "1hr":
253 | size_time = self.length_time_opt["HighRes-1hr"]
254 | elif in_time_interval == "3hr":
255 | size_time = self.length_time_opt["HighRes-3hr"]
256 | else:
257 | size_time = self.length_time_opt["HighRes-6hr"]
258 |
259 | size_streamID = len(set(dict_list[self.header_wt[0]]))
260 |
261 | # Create output inflow netcdf data
262 | data_out_nc = NET.Dataset(out_nc, "w", format = "NETCDF3_CLASSIC")
263 | dim_Time = data_out_nc.createDimension('Time', size_time)
264 | dim_RiverID = data_out_nc.createDimension(streamID, size_streamID)
265 | var_m3_riv = data_out_nc.createVariable('m3_riv', 'f4', ('Time', streamID))
266 | data_temp = NUM.empty(shape = [size_time, size_streamID])
267 |
268 |
269 | lon_ind_all = [long(i) for i in dict_list[self.header_wt[2]]]
270 | lat_ind_all = [long(j) for j in dict_list[self.header_wt[3]]]
271 |
272 | # Obtain a subset of runoff data based on the indices in the weight table
273 | min_lon_ind_all = min(lon_ind_all)
274 | max_lon_ind_all = max(lon_ind_all)
275 | min_lat_ind_all = min(lat_ind_all)
276 | max_lat_ind_all = max(lat_ind_all)
277 |
278 |
279 | data_subset_all = data_in_nc.variables[self.vars_oi[3]][:, min_lat_ind_all:max_lat_ind_all+1, min_lon_ind_all:max_lon_ind_all+1]
280 | len_time_subset_all = data_subset_all.shape[0]
281 | len_lat_subset_all = data_subset_all.shape[1]
282 | len_lon_subset_all = data_subset_all.shape[2]
283 | data_subset_all = data_subset_all.reshape(len_time_subset_all, (len_lat_subset_all * len_lon_subset_all))
284 |
285 |
286 | # compute new indices based on the data_subset_all
287 | index_new = []
288 | for r in range(0,count-1):
289 | ind_lat_orig = lat_ind_all[r]
290 | ind_lon_orig = lon_ind_all[r]
291 | index_new.append((ind_lat_orig - min_lat_ind_all)*len_lon_subset_all + (ind_lon_orig - min_lon_ind_all))
292 |
293 | # obtain a new subset of data
294 | data_subset_new = data_subset_all[:,index_new]
295 | pointer = 0
296 | # start compute inflow
297 | len_wt = len(dict_list[self.header_wt[0]])
298 | for s in range(0, size_streamID):
299 | npoints = int(dict_list[self.header_wt[4]][pointer])
300 | # Check if all npoints points correspond to the same streamID
301 | if len(set(dict_list[self.header_wt[0]][pointer : (pointer + npoints)])) != 1:
302 |                 messages.addErrorMessage(self.errorMessages[6])
303 |                 raise arcpy.ExecuteError
304 |
305 | area_sqm_npoints = [float(k) for k in dict_list[self.header_wt[1]][pointer : (pointer + npoints)]]
306 | area_sqm_npoints = NUM.array(area_sqm_npoints)
307 | area_sqm_npoints = area_sqm_npoints.reshape(1, npoints)
308 | data_goal = data_subset_new[:, pointer:(pointer + npoints)]
309 |
310 |             '''IMPORTANT NOTE: runoff variable in ECMWF dataset is cumulative instead of incremental through time'''
311 | # For data with Low Resolution, there's only one time interval 6 hrs
312 | if id_data == "LowRes":
313 | #ro_stream = data_goal * area_sqm_npoints
314 | ro_stream = NUM.concatenate([data_goal[0:1,],
315 | NUM.subtract(data_goal[1:,],data_goal[:-1,])]) * area_sqm_npoints
316 |
317 |             #For data with High Resolution, Hour 0 to 90 (the first 91 time points) is at 1 hr time interval,
318 |             # then Hour 90 to 144 is at 3 hour time interval (18 additional time points), and Hour 144 to 240
319 |             # is at 6 hour time interval (16 additional time points), for 125 time points in total
320 | else:
321 | if in_time_interval == "1hr":
322 | ro_stream = NUM.concatenate([data_goal[0:1,],
323 | NUM.subtract(data_goal[1:91,],data_goal[:90,])]) * area_sqm_npoints
324 | elif in_time_interval == "3hr":
325 | # Hour = 0 is a single data point
326 | ro_3hr_a = data_goal[0:1,]
327 | # calculate time series of 3 hr data from 1 hr data
328 | ro_3hr_b = NUM.subtract(data_goal[3:91:3,],data_goal[:88:3,])
329 | # get the time series of 3 hr data
330 | ro_3hr_c = NUM.subtract(data_goal[91:109,], data_goal[90:108,])
331 | # concatenate all time series
332 | ro_stream = NUM.concatenate([ro_3hr_a, ro_3hr_b, ro_3hr_c]) * area_sqm_npoints
333 | else:
334 | # in_time_interval is "6hr"
335 | # Hour = 0 is a single data point
336 | ro_6hr_a = data_goal[0:1,]
337 | # calculate time series of 6 hr data from 1 hr data
338 | ro_6hr_b = NUM.subtract(data_goal[6:91:6,], data_goal[:85:6,])
339 | # calculate time series of 6 hr data from 3 hr data
340 | ro_6hr_c = NUM.subtract(data_goal[92:109:2,], data_goal[90:107:2,])
341 | # get the time series of 6 hr data
342 | ro_6hr_d = NUM.subtract(data_goal[109:,], data_goal[108:124,])
343 | # concatenate all time series
344 | ro_stream = NUM.concatenate([ro_6hr_a, ro_6hr_b, ro_6hr_c, ro_6hr_d]) * area_sqm_npoints
345 |
346 |
347 | data_temp[:,s] = ro_stream.sum(axis = 1)
348 | pointer += npoints
349 |
350 |
351 | '''Write inflow data'''
352 | arcpy.AddMessage("Writing inflow data...")
353 | var_m3_riv[:] = data_temp
354 | # close the input and output netcdf datasets
355 | data_in_nc.close()
356 | data_out_nc.close()
357 |
358 |
359 | return
360 |
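The cumulative-to-incremental conversion applied repeatedly above can be illustrated on a toy series; the numbers below are made up:

import numpy as NUM

# Hypothetical cumulative runoff for one weight-table point (4 time steps)
cumulative = NUM.array([[0.0], [2.0], [5.0], [9.0]])
incremental = NUM.concatenate([cumulative[0:1, ],
                               NUM.subtract(cumulative[1:, ], cumulative[:-1, ])])
# incremental -> [[0.], [2.], [3.], [4.]]
# Multiplying by the contributing area (m^2) turns these runoff depths into
# per-interval inflow volumes, which are then summed per stream reach.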
--------------------------------------------------------------------------------
/toolbox/scripts/CreateInflowFileFromWRFHydroRunoff.py:
--------------------------------------------------------------------------------
1 | '''-------------------------------------------------------------------------------
2 | Tool Name: CreateInflowFileFromWRFHydroRunoff
3 | Source Name: CreateInflowFileFromWRFHydroRunoff.py
4 | Version: ArcGIS 10.2
5 | License: Apache 2.0
6 | Author: Environmental Systems Research Institute Inc.
7 | Updated by: Environmental Systems Research Institute Inc.
8 | Description: Creates RAPID inflow file based on the WRF_Hydro land model output
9 | and the weight table previously created.
10 | History: Initial coding - 10/17/2014, version 1.0
11 | Updated: Version 1.0, 10/23/2014, modified names of tool and parameters
12 | Version 1.0, 10/28/2014, added data validation
13 | Version 1.1, 11/05/2014, modified the algorithm for extracting runoff
14 | variable from the netcdf dataset to improve computation efficiency
15 | Version 1.1, 02/03/2015, bug fixing - output netcdf3-classic instead
16 | of netcdf4 as the format of RAPID inflow file
17 | Version 1.2, 02/17/2015, tool redesign - input drainage line features
18 |              and compute RAPID inflow in the ascending order of drainage line ID
19 | Version 1.2, 02/17/2015, bug fixing - included UGDRNOFF as a component
20 | of RAPID inflow; calculated inflow assuming that WRF-Hydro runoff variables
21 | are cumulative instead of incremental through time; use 'HydroID' as one of
22 | the dimension names of m3_riv in the output RAPID inflow file
23 | Version 1.2, 04/20/2015 Bug fixing - "HydroID", "NextDownID" (case insensitive) as
24 | the required field names in the input drainage line feature class
25 | Version 1.3, 04/27/2015 Bug fixing (false zero inflows)- To deal with
26 | the first several rows of records in the weight table don't correspond
27 | to any drainage line features (e.g. sink polygon as in Region 12 catchment)
28 | Version 1.4, 05/29/2015 Bug fixing: the pointer of weight table goes out of range
29 | Version 2.0, 06/09/2015 tool redesign - remove input drainage line features and
30 | compute RAPID inflow based on the streamIDs in the weight table given that the
31 | Create Weight Table tool has already taken care of the mismatch between streamIDs
32 | in the drainage line feature class and the catchment feature class.
33 | Version 2.0, 06/10/2015, use streamID in the weight table as the dimension name of
34 | m3_riv in the output RAPID inflow file
35 | -------------------------------------------------------------------------------'''
36 | import os
37 | import arcpy
38 | import netCDF4 as NET
39 | import numpy as NUM
40 | import csv
41 |
42 |
43 | class CreateInflowFileFromWRFHydroRunoff(object):
44 | def __init__(self):
45 | """Define the tool (tool name is the name of the class)."""
46 | self.label = "Create Inflow File From WRF-Hydro Runoff"
47 | self.description = ("Creates RAPID NetCDF input of water inflow based on the WRF-Hydro land" +
48 | " model output and the weight table previously created")
49 | self.canRunInBackground = False
50 | self.header_wt = ['StreamID', 'area_sqm', 'west_east', 'south_north',
51 | 'npoints', 'weight', 'Lon', 'Lat', 'x', 'y']
52 | # According to David Gochis, underground runoff is "a major fraction of total river flow in most places"
53 | self.vars_oi = ['SFCRNOFF', 'INTRFLOW','UGDRNOFF']
54 | self.dims_var = ('Time', 'south_north', 'west_east')
55 | self.errorMessages = ["Incorrect number of columns in the weight table",
56 | "No or incorrect header in the weight table",
57 | "Incorrect sequence of rows in the weight table",
58 | "Missing variable: {0} in the input WRF-Hydro runoff file",
59 | "Incorrect dimensions of variable {0} in the input WRF-Hydro runoff file"]
60 | self.category = "Preprocessing"
61 |
62 | def dataValidation(self, in_nc, messages):
63 | """Check the necessary dimensions and variables in the input netcdf data"""
64 | data_nc = NET.Dataset(in_nc)
65 | vars = data_nc.variables.keys()
66 | for each in self.vars_oi:
67 | if each not in vars:
68 | messages.addErrorMessage(self.errorMessages[3].format(each))
69 | raise arcpy.ExecuteError
70 | else:
71 | dims = data_nc.variables[each].dimensions
72 | if self.dims_var != dims:
73 | messages.addErrorMessage(self.errorMessages[4].format(each))
74 | raise arcpy.ExecuteError
75 |
76 | data_nc.close()
77 |
78 | return
79 |
80 | def getParameterInfo(self):
81 | """Define parameter definitions"""
82 | param0 = arcpy.Parameter(name = "in_WRF_Hydro_runoff_file",
83 | displayName = "Input WRF-Hydro Runoff File",
84 | direction = "Input",
85 | parameterType = "Required",
86 | datatype = "DEFile")
87 |
88 | param1 = arcpy.Parameter(name = "in_weight_table",
89 | displayName = "Input Weight Table",
90 | direction = "Input",
91 | parameterType = "Required",
92 | datatype = "DEFile")
93 |
94 | param2 = arcpy.Parameter(name = "out_inflow_file",
95 | displayName = "Output Inflow File",
96 | direction = "Output",
97 | parameterType = "Required",
98 | datatype = "DEFile")
99 |
100 | params = [param0, param1, param2]
101 |
102 | return params
103 |
104 | def isLicensed(self):
105 | """Set whether tool is licensed to execute."""
106 | return True
107 |
108 | def updateParameters(self, parameters):
109 | """Modify the values and properties of parameters before internal
110 | validation is performed. This method is called whenever a parameter
111 | has been changed."""
112 | if parameters[0].altered and parameters[1].altered:
113 | if parameters[2].valueAsText is not None:
114 | (dirnm, basenm) = os.path.split(parameters[2].valueAsText)
115 | if not basenm.endswith(".nc"):
116 | parameters[2].value = os.path.join(dirnm, "{}.nc".format(basenm))
117 | return
118 |
119 | def updateMessages(self, parameters):
120 | """Modify the messages created by internal validation for each tool
121 | parameter. This method is called after internal validation."""
122 | if parameters[0].altered:
123 | in_nc = parameters[0].valueAsText
124 | try:
125 | data_nc = NET.Dataset(in_nc)
126 | data_nc.close()
127 | except Exception as e:
128 | parameters[0].setErrorMessage(e.message)
129 |
130 | if parameters[1].altered:
131 | (dirnm, basenm) = os.path.split(parameters[1].valueAsText)
132 | if not basenm.endswith(".csv"):
133 | parameters[1].setErrorMessage("The weight table must be in CSV format")
134 |
135 | return
136 |
137 | def execute(self, parameters, messages):
138 | """The source code of the tool."""
139 |
140 | arcpy.env.overwriteOutput = True
141 |
142 | in_nc = parameters[0].valueAsText
143 | in_weight_table = parameters[1].valueAsText
144 |
145 | out_nc = parameters[2].valueAsText
146 |
147 | # Validate the netcdf dataset
148 | self.dataValidation(in_nc, messages)
149 |
150 | '''Read .csv weight table'''
151 | arcpy.AddMessage("Reading the weight table...")
152 | dict_list = {self.header_wt[0]:[], self.header_wt[1]:[], self.header_wt[2]:[],
153 | self.header_wt[3]:[], self.header_wt[4]:[]}
154 | streamID = ""
155 | with open(in_weight_table, "rb") as csvfile:
156 | reader = csv.reader(csvfile)
157 | count = 0
158 | for row in reader:
159 | if count == 0:
160 | #check number of columns in the weight table
161 | if len(row) != len(self.header_wt):
162 | messages.addErrorMessage(self.errorMessages[0])
163 | raise arcpy.ExecuteError
164 | #check header
165 | if row[1:len(self.header_wt)] != self.header_wt[1:len(self.header_wt)]:
166 | messages.addErrorMessage(self.errorMessages[1])
167 |                         raise arcpy.ExecuteError
168 | streamID = row[0]
169 | count += 1
170 | else:
171 | for i in range(0,5):
172 | dict_list[self.header_wt[i]].append(row[i])
173 | count += 1
174 |
175 | '''Calculate water inflows'''
176 | arcpy.AddMessage("Calculating water inflows...")
177 | data_in_nc = NET.Dataset(in_nc)
178 |
179 | # Obtain size information
180 | size_time = data_in_nc.variables[self.vars_oi[0]].shape[0]
181 | size_streamID = len(set(dict_list[self.header_wt[0]]))
182 |
183 | # Create output inflow netcdf data
184 | data_out_nc = NET.Dataset(out_nc, "w", format = "NETCDF3_CLASSIC")
185 | dim_Time = data_out_nc.createDimension('Time', size_time)
186 | dim_RiverID = data_out_nc.createDimension(streamID, size_streamID)
187 | var_m3_riv = data_out_nc.createVariable('m3_riv', 'f4', ('Time', streamID))
188 | data_temp = NUM.empty(shape = [size_time, size_streamID])
189 |
190 |
191 | we_ind_all = [long(i) for i in dict_list[self.header_wt[2]]]
192 | sn_ind_all = [long(j) for j in dict_list[self.header_wt[3]]]
193 |
194 | # Obtain a subset of runoff data based on the indices in the weight table
195 | min_we_ind_all = min(we_ind_all)
196 | max_we_ind_all = max(we_ind_all)
197 | min_sn_ind_all = min(sn_ind_all)
198 | max_sn_ind_all = max(sn_ind_all)
199 |
200 |
201 | data_subset_all = data_in_nc.variables[self.vars_oi[0]][:,min_sn_ind_all:max_sn_ind_all+1, min_we_ind_all:max_we_ind_all+1]/1000 \
202 | + data_in_nc.variables[self.vars_oi[1]][:,min_sn_ind_all:max_sn_ind_all+1, min_we_ind_all:max_we_ind_all+1]/1000 \
203 | + data_in_nc.variables[self.vars_oi[2]][:,min_sn_ind_all:max_sn_ind_all+1, min_we_ind_all:max_we_ind_all+1]/1000
204 | len_time_subset_all = data_subset_all.shape[0]
205 | len_sn_subset_all = data_subset_all.shape[1]
206 | len_we_subset_all = data_subset_all.shape[2]
207 | data_subset_all = data_subset_all.reshape(len_time_subset_all, (len_sn_subset_all * len_we_subset_all))
208 |
209 |
210 | # compute new indices based on the data_subset_all
211 | index_new = []
212 | for r in range(0,count-1):
213 | ind_sn_orig = sn_ind_all[r]
214 | ind_we_orig = we_ind_all[r]
215 | index_new.append((ind_sn_orig - min_sn_ind_all)*len_we_subset_all + (ind_we_orig - min_we_ind_all))
216 |
217 | # obtain a new subset of data
218 | data_subset_new = data_subset_all[:,index_new]
219 |
220 |
221 | # start compute inflow
222 | len_wt = len(dict_list[self.header_wt[0]])
223 | pointer = 0
224 | for s in range(0, size_streamID):
225 | npoints = int(dict_list[self.header_wt[4]][pointer])
226 | # Check if all npoints points correspond to the same streamID
227 | if len(set(dict_list[self.header_wt[0]][pointer : (pointer + npoints)])) != 1:
228 | messages.addErrorMessage(self.errorMessages[2])
229 |                 raise arcpy.ExecuteError
230 |
231 | area_sqm_npoints = [float(k) for k in dict_list[self.header_wt[1]][pointer : (pointer + npoints)]]
232 | area_sqm_npoints = NUM.array(area_sqm_npoints)
233 | area_sqm_npoints = area_sqm_npoints.reshape(1, npoints)
234 | data_goal = data_subset_new[:, pointer:(pointer + npoints)]
235 |
236 |             '''IMPORTANT NOTE: runoff variables in the WRF-Hydro dataset are cumulative through time'''
237 | rnoff_stream = NUM.concatenate([data_goal[0:1,],
238 | NUM.subtract(data_goal[1:,],data_goal[:-1,])]) * area_sqm_npoints
239 | data_temp[:,s] = rnoff_stream.sum(axis = 1)
240 |
241 | pointer += npoints
242 |
243 |
244 | '''Write inflow data'''
245 | arcpy.AddMessage("Writing inflow data...")
246 | var_m3_riv[:] = data_temp
247 | # close the input and output netcdf datasets
248 | data_in_nc.close()
249 | data_out_nc.close()
250 |
251 |
252 | return
--------------------------------------------------------------------------------
/toolbox/scripts/CreateMuskingumParameterFiles.py:
--------------------------------------------------------------------------------
1 | '''-------------------------------------------------------------------------------
2 | Tool Name: CreateMuskingumParameterFiles
3 | Source Name: CreateMuskingumParameterFiles.py
4 | Version: ArcGIS 10.2
5 | License: Apache 2.0
6 | Author: Environmental Systems Research Institute Inc.
7 | Updated by: Environmental Systems Research Institute Inc.
8 | Description: Generates CSV files of the kfac, k and x Muskingum parameters for RAPID,
9 |              based on the length of each river reach and the celerity of the flow wave,
10 |              from the input Drainage Line feature class with Length fields.
11 | History: Initial coding - 07/21/2014, version 1.0
12 | Updated: Version 1.0, 10/23/2014 Added comments about the order of rows in tool output
13 | Version 1.1, 10/24/2014 Modified file and tool names
14 | Version 1.1, 02/19/2015 Enhancement - Added error handling for message updating of
15 | input drainage line features
16 | -------------------------------------------------------------------------------'''
17 | import os
18 | import arcpy
19 | import csv
20 |
21 | class CreateMuskingumParameterFiles(object):
22 | def __init__(self):
23 | """Define the tool (tool name is the name of the class)."""
24 | self.label = "Create Muskingum Parameter Files"
25 |         self.description = "Creates Muskingum Parameters input CSV files for RAPID \
26 |         based on the Drainage Line feature class with HydroID, Musk_kfac, Musk_k and Musk_x fields"
27 | self.canRunInBackground = False
28 | self.category = "Preprocessing"
29 |
30 | def getParameterInfo(self):
31 | """Define parameter definitions"""
32 | in_drainage_line = arcpy.Parameter(
33 | displayName = 'Input Drainage Line Features',
34 | name = 'input_drainage_line_features',
35 | datatype = 'GPFeatureLayer',
36 | parameterType = 'Required',
37 | direction = 'Input')
38 | in_drainage_line.filter.list = ['Polyline']
39 |
40 | out_csv_file1 = arcpy.Parameter(
41 | displayName = 'Output kfac File',
42 | name = 'out_kfac_file',
43 | datatype = 'DEFile',
44 | parameterType = 'Required',
45 | direction = 'Output')
46 |
47 | out_csv_file2 = arcpy.Parameter(
48 | displayName = 'Output k File',
49 | name = 'out_k_file',
50 | datatype = 'DEFile',
51 | parameterType = 'Required',
52 | direction = 'Output')
53 |
54 | out_csv_file3 = arcpy.Parameter(
55 | displayName = 'Output x File',
56 | name = 'out_x_file',
57 | datatype = 'DEFile',
58 | parameterType = 'Required',
59 | direction = 'Output')
60 |
61 | return [in_drainage_line,
62 | out_csv_file1,
63 | out_csv_file2,
64 | out_csv_file3]
65 |
66 | def isLicensed(self):
67 | """Set whether tool is licensed to execute."""
68 | return True
69 |
70 | def updateParameters(self, parameters):
71 | """Modify the values and properties of parameters before internal
72 | validation is performed. This method is called whenever a parameter
73 | has been changed."""
74 | scratchWorkspace = arcpy.env.scratchWorkspace
75 | if not scratchWorkspace:
76 | scratchWorkspace = arcpy.env.scratchGDB
77 |
78 | if parameters[1].valueAsText is not None:
79 | (dirnm, basenm) = os.path.split(parameters[1].valueAsText)
80 | if not basenm.endswith(".csv"):
81 | parameters[1].value = os.path.join(
82 | dirnm, "{}.csv".format(basenm))
83 | else:
84 | parameters[1].value = os.path.join(
85 | scratchWorkspace, "kfac.csv")
86 |
87 | if parameters[2].valueAsText is not None:
88 | (dirnm, basenm) = os.path.split(parameters[2].valueAsText)
89 | if not basenm.endswith(".csv"):
90 | parameters[2].value = os.path.join(
91 | dirnm, "{}.csv".format(basenm))
92 | else:
93 | parameters[2].value = os.path.join(
94 | scratchWorkspace, "k.csv")
95 |
96 | if parameters[3].valueAsText is not None:
97 | (dirnm, basenm) = os.path.split(parameters[3].valueAsText)
98 | if not basenm.endswith(".csv"):
99 | parameters[3].value = os.path.join(
100 | dirnm, "{}.csv".format(basenm))
101 | else:
102 | parameters[3].value = os.path.join(
103 | scratchWorkspace, "x.csv")
104 |
105 | def updateMessages(self, parameters):
106 | """Modify the messages created by internal validation for each tool
107 | parameter. This method is called after internal validation."""
108 | try:
109 | if parameters[0].altered:
110 | field_names = []
111 | fields = arcpy.ListFields(parameters[0].valueAsText)
112 | for field in fields:
113 | field_names.append(field.baseName.upper())
114 | if not ("MUSK_KFAC" in field_names and "MUSK_K" in field_names and "MUSK_X" in field_names):
115 | parameters[0].setErrorMessage("Input Drainage Line must contain Musk_kfac, Musk_k and Musk_x.")
116 | except Exception as e:
117 | parameters[0].setErrorMessage(e.message)
118 |
119 | return
120 |
121 | def execute(self, parameters, messages):
122 | """The source code of the tool."""
123 | in_drainage_line = parameters[0].valueAsText
124 | out_csv_file1 = parameters[1].valueAsText
125 | out_csv_file2 = parameters[2].valueAsText
126 | out_csv_file3 = parameters[3].valueAsText
127 |
128 | fields = ['HydroID', 'Musk_kfac', 'Musk_k', 'Musk_x']
129 |
130 | list_all_kfac = []
131 | list_all_k = []
132 | list_all_x = []
133 |
134 | '''The script line below makes sure that rows in the muskingum parameter
135 |         files are arranged in ascending order of HydroIDs of stream segments'''
136 | for row in sorted(arcpy.da.SearchCursor(in_drainage_line, fields)):
137 | kfac=row[1]
138 | k=row[2]
139 | x=row[3]
140 |
141 | list_all_kfac.append([kfac])
142 | list_all_k.append([k])
143 | list_all_x.append([x])
144 |
145 | with open(out_csv_file1,'wb') as csvfile:
146 | connectwriter = csv.writer(csvfile, dialect='excel')
147 | for row_list in list_all_kfac:
148 | out = row_list
149 | connectwriter.writerow(out)
150 |
151 | with open(out_csv_file2,'wb') as csvfile:
152 | connectwriter = csv.writer(csvfile, dialect='excel')
153 | for row_list in list_all_k:
154 | out = row_list
155 | connectwriter.writerow(out)
156 |
157 | with open(out_csv_file3,'wb') as csvfile:
158 | connectwriter = csv.writer(csvfile, dialect='excel')
159 | for row_list in list_all_x:
160 | out = row_list
161 | connectwriter.writerow(out)
162 |
163 | return
164 |
165 |
--------------------------------------------------------------------------------
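The sorted(arcpy.da.SearchCursor(...)) pattern in CreateMuskingumParameterFiles.py above works because cursor rows come back as tuples and Python sorts tuples lexicographically, so the ordering is driven by the first listed field, HydroID. A small pure-Python sketch with made-up parameter values:

    # rows as (HydroID, Musk_kfac, Musk_k, Musk_x) tuples -- sample values only
    rows = [(30, 0.35, 0.30, 0.3), (10, 0.20, 0.18, 0.3), (20, 0.90, 0.75, 0.3)]

    # sorted() orders the tuples by HydroID, matching the tool's output order
    ordered = sorted(rows)
    # ordered[0][0], ordered[1][0], ordered[2][0] -> 10, 20, 30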
/toolbox/scripts/CreateNetworkConnectivityFile.py:
--------------------------------------------------------------------------------
1 | '''-------------------------------------------------------------------------------
2 | Tool Name: CreateNetworkConnectivityFile
3 | Source Name: CreateNetworkConnectivityFile.py
4 | Version: ArcGIS 10.2
5 | License: Apache 2.0
6 | Author: Environmental Systems Research Institute Inc.
7 | Updated by: Environmental Systems Research Institute Inc.
8 | Description: Generates CSV file of stream network connectivity for RAPID based on
9 | the input Drainage Line feature class with HydroID and NextDownID fields.
10 | History: Initial coding - 07/07/2014, version 1.0
11 | Updated: Version 1.0, 10/23/2014 Added comments about the order of rows in tool output
12 | Version 1.1, 10/24/2014 Modified file and tool names
13 | Version 1.1, 02/03/2015 Bug fixing - input parameter indices
14 | Version 1.1, 02/17/2015 Bug fixing - "HydroID", "NextDownID" in the exact
15 | upper/lower cases as the required field names in input drainage line feature class
16 | Version 1.1, 02/19/2015 Enhancement - Added error handling for message updating of
17 | input drainage line features
18 | Version 1.1, 04/20/2015 Bug fixing - "HydroID", "NextDownID" (case insensitive) as
19 | the required field names in the input drainage line feature class
20 |          Version 2.0, 02/29/2015 - Used numpy arrays instead to make the program faster
21 |          (adapted from Alan D. Snow, US Army ERDC)
22 | -------------------------------------------------------------------------------'''
23 | import os
24 | import arcpy
25 | import csv
26 | import numpy as NUM
27 |
28 | class CreateNetworkConnectivityFile(object):
29 | def __init__(self):
30 | """Define the tool (tool name is the name of the class)."""
31 | self.label = "Create Connectivity File"
32 | self.description = "Creates Network Connectivity input CSV file for RAPID \
33 | based on the Drainage Line feature class with HydroID and NextDownID fields"
34 | self.canRunInBackground = False
35 | self.category = "Preprocessing"
36 |
37 | def getParameterInfo(self):
38 | """Define parameter definitions"""
39 | in_drainage_line = arcpy.Parameter(
40 | displayName = 'Input Drainage Line Features',
41 | name = 'in_drainage_line_features',
42 | datatype = 'GPFeatureLayer',
43 | parameterType = 'Required',
44 | direction = 'Input')
45 | in_drainage_line.filter.list = ['Polyline']
46 |
47 | out_csv_file = arcpy.Parameter(
48 | displayName = 'Output Network Connectivity File',
49 | name = 'out_network_connectivity_file',
50 | datatype = 'DEFile',
51 | parameterType = 'Required',
52 | direction = 'Output')
53 |
54 | in_max_nbr_upstream = arcpy.Parameter(
55 | displayName = 'Maximum Number of Upstream Reaches',
56 | name = 'max_nbr_upstreams',
57 | datatype = 'GPLong',
58 | parameterType = 'Optional',
59 | direction = 'Input')
60 |
61 | return [in_drainage_line,
62 | out_csv_file,
63 | in_max_nbr_upstream]
64 |
65 | def isLicensed(self):
66 | """Set whether tool is licensed to execute."""
67 | return True
68 |
69 | def updateParameters(self, parameters):
70 | """Modify the values and properties of parameters before internal
71 | validation is performed. This method is called whenever a parameter
72 | has been changed."""
73 | if parameters[1].altered:
74 | (dirnm, basenm) = os.path.split(parameters[1].valueAsText)
75 | if not basenm.endswith(".csv"):
76 | parameters[1].value = os.path.join(
77 | dirnm, "{}.csv".format(basenm))
78 | else:
79 | scratchWorkspace = arcpy.env.scratchWorkspace
80 | if not scratchWorkspace:
81 | scratchWorkspace = arcpy.env.scratchGDB
82 | parameters[1].value = os.path.join(
83 | scratchWorkspace, "rapid_connect.csv")
84 | return
85 |
86 | def updateMessages(self, parameters):
87 | """Modify the messages created by internal validation for each tool
88 | parameter. This method is called after internal validation."""
89 | try:
90 | if parameters[0].altered:
91 | field_names = []
92 | fields = arcpy.ListFields(parameters[0].valueAsText)
93 | for field in fields:
94 | field_names.append(field.baseName.upper())
95 | if not ("HYDROID" in field_names and "NEXTDOWNID" in field_names):
96 | parameters[0].setErrorMessage("Input Drainage Line must contain HydroID and NextDownID.")
97 | except Exception as e:
98 | parameters[0].setErrorMessage(e.message)
99 |
100 | if parameters[2].altered:
101 | max_nbr = parameters[2].value
102 |             if (max_nbr < 1 or max_nbr > 12):
103 | parameters[2].setErrorMessage("Input Maximum Number of Upstreams must be within [1, 12]")
104 | return
105 |
106 | def execute(self, parameters, messages):
107 | """The source code of the tool."""
108 | in_drainage_line = parameters[0].valueAsText
109 | out_csv_file = parameters[1].valueAsText
110 | in_max_nbr_upstreams = parameters[2].value
111 |
112 | fields = ['HydroID', 'NextDownID']
113 | stream_id = fields[0]
114 | next_down_id = fields[1]
115 |
116 | list_all = []
117 | max_count_Upstream = 0
118 | '''The script line below makes sure that rows in the output connectivity
119 |         file are arranged in ascending order of HydroIDs of stream segments'''
120 | np_table = arcpy.da.TableToNumPyArray(in_drainage_line, fields)
121 | for hydroid in NUM.sort(np_table[stream_id]):
122 | # find the HydroID of the upstreams
123 | list_upstreamID = np_table[np_table[next_down_id]==hydroid][stream_id]
124 | # count the total number of the upstreams
125 | count_upstream = len(list_upstreamID)
126 | if count_upstream > max_count_Upstream:
127 | max_count_Upstream = count_upstream
128 | nextDownID = np_table[np_table[stream_id]==hydroid][next_down_id][0]
129 | #THIS IS REMOVED DUE TO THE FACT THAT THERE CAN BE STREAMS WITH ID OF ZERO
130 | # # replace the nextDownID with 0 if it equals to -1 (no next downstream)
131 | # if nextDownID == -1:
132 | # nextDownID = 0
133 | # append the list of Stream HydroID, NextDownID, Count of Upstream ID, and HydroID of each Upstream into a larger list
134 | list_all.append(NUM.concatenate([NUM.array([hydroid,nextDownID,count_upstream]),list_upstreamID]))
135 |
136 | # If the input maximum number of upstreams is none, the actual max number of upstreams is used
137 |         if in_max_nbr_upstreams is None:
138 | in_max_nbr_upstreams = max_count_Upstream
139 |
140 | with open(out_csv_file,'wb') as csvfile:
141 | connectwriter = csv.writer(csvfile, dialect='excel')
142 |
143 | for row_list in list_all:
144 |                 out = NUM.concatenate([row_list, NUM.array([0 for i in xrange(int(in_max_nbr_upstreams - row_list[2]))])])
145 | connectwriter.writerow(out.astype(int))
146 |
147 |
148 | return
149 |
150 |
--------------------------------------------------------------------------------
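The connectivity rows built in CreateNetworkConnectivityFile.py above can be reproduced without arcpy: for each HydroID (in ascending order) look up its downstream ID, count the reaches that drain into it, and zero-pad the row to a fixed width. A sketch on a hypothetical four-reach network:

    import numpy as NUM

    hydroid    = NUM.array([1, 2, 3, 4])    # made-up network
    nextdownid = NUM.array([3, 3, 4, -1])   # -1 marks the outlet

    max_upstreams = 2                       # analogous to the optional tool parameter
    rows = []
    for hid in NUM.sort(hydroid):
        upstream = hydroid[nextdownid == hid]        # reaches draining into hid
        down = nextdownid[hydroid == hid][0]         # next reach downstream
        row = [hid, down, len(upstream)] + list(upstream)
        row += [0] * (max_upstreams - len(upstream))  # zero-pad to fixed width
        rows.append(row)
    # rows -> [[1, 3, 0, 0, 0], [2, 3, 0, 0, 0], [3, 4, 2, 1, 2], [4, -1, 1, 3, 0]]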
/toolbox/scripts/CreateSubsetFile.py:
--------------------------------------------------------------------------------
1 | '''-------------------------------------------------------------------------------
2 | Tool Name: CreateSubsetFile
3 | Source Name: CreateSubsetFile.py
4 | Version: ArcGIS 10.2
5 | License: Apache 2.0
6 | Author: Environmental Systems Research Institute Inc.
7 | Updated by: Environmental Systems Research Institute Inc.
8 | Description: Generates CSV file of HydroID river network subset for RAPID based
9 | on the selected features of the input Drainage Line feature class.
10 | History: Initial coding - 07/22/2014, version 1.0
11 | Updated: Version 1.0, 10/23/2014 Added comments about the order of rows in tool output
12 | Version 1.1, 10/24/2014 Modified file and tool names
13 | Version 1.1, 02/19/2015 Enhancement - Added error handling for message updating of
14 | input drainage line features
15 | -------------------------------------------------------------------------------'''
16 | import os
17 | import arcpy
18 | import csv
19 |
20 | class CreateSubsetFile(object):
21 | def __init__(self):
22 | """Define the tool (tool name is the name of the class)."""
23 | self.label = "Create Subset File"
24 | self.description = "Creates CSV file of HydroID river network subset for RAPID\
25 | based on the selected features of the input Drainage Line feature class"
26 | self.canRunInBackground = False
27 | self.category = "Preprocessing"
28 |
29 | def getParameterInfo(self):
30 | """Define parameter definitions"""
31 | in_drainage_line = arcpy.Parameter(
32 | displayName = 'Input Drainage Line Features',
33 | name = 'in_drainage_line_features',
34 | datatype = 'GPFeatureLayer',
35 | parameterType = 'Required',
36 | direction = 'Input')
37 | in_drainage_line.filter.list = ['Polyline']
38 |
39 | out_csv_file = arcpy.Parameter(
40 | displayName = 'Output Subset File',
41 | name = 'out_subset_file',
42 | datatype = 'DEFile',
43 | parameterType = 'Required',
44 | direction = 'Output')
45 |
46 | return [in_drainage_line,
47 | out_csv_file]
48 |
49 | def isLicensed(self):
50 | """Set whether tool is licensed to execute."""
51 | return True
52 |
53 | def updateParameters(self, parameters):
54 | """Modify the values and properties of parameters before internal
55 | validation is performed. This method is called whenever a parameter
56 | has been changed."""
57 | if parameters[1].valueAsText is not None:
58 | (dirnm, basenm) = os.path.split(parameters[1].valueAsText)
59 | if not basenm.endswith(".csv"):
60 | parameters[1].value = os.path.join(
61 | dirnm, "{}.csv".format(basenm))
62 | else:
63 | scratchWorkspace = arcpy.env.scratchWorkspace
64 | if not scratchWorkspace:
65 | scratchWorkspace = arcpy.env.scratchGDB
66 | parameters[1].value = os.path.join(
67 | scratchWorkspace, "riv_bas_id.csv")
68 | return
69 |
70 | def updateMessages(self, parameters):
71 | """Modify the messages created by internal validation for each tool
72 | parameter. This method is called after internal validation."""
73 | try:
74 | if parameters[0].altered:
75 | field_names = []
76 | fields = arcpy.ListFields(parameters[0].valueAsText)
77 | for field in fields:
78 | field_names.append(field.baseName.upper())
79 | if not ("HYDROID" in field_names and "NEXTDOWNID" in field_names):
80 | parameters[0].setErrorMessage("Input Drainage Line must contain HydroID and NextDownID.")
81 | except Exception as e:
82 | parameters[0].setErrorMessage(e.message)
83 |
84 | return
85 |
86 |
87 | def execute(self, parameters, messages):
88 | """The source code of the tool."""
89 | in_drainage_line = parameters[0].valueAsText
90 | out_csv_file = parameters[1].valueAsText
91 |
92 | fields = ['NextDownID', 'HydroID']
93 |
94 | list_all = []
95 |
96 | '''The script line below makes sure that rows in the subset file are
97 |         arranged in descending order of NextDownID of stream segments'''
98 | for row in sorted(arcpy.da.SearchCursor(in_drainage_line, fields), reverse=True):
99 | list_all.append([row[1]])
100 |
101 | with open(out_csv_file,'wb') as csvfile:
102 | connectwriter = csv.writer(csvfile, dialect='excel')
103 | for row_list in list_all:
104 | out = row_list
105 | connectwriter.writerow(out)
106 |
107 | return
108 |
109 |
--------------------------------------------------------------------------------
/toolbox/scripts/CreateWeightTableFromECMWFRunoff.py:
--------------------------------------------------------------------------------
1 | '''-------------------------------------------------------------------------------
2 | Tool Name: CreateWeightTableFromECMWFRunoff
3 | Source Name: CreateWeightTableFromECMWFRunoff.py
4 | Version: ArcGIS 10.2
5 | License: Apache 2.0
6 | Author: Environmental Systems Research Institute Inc.
7 | Updated by: Environmental Systems Research Institute Inc.
8 | Description: Creates a weight table based on the ECMWF runoff file
9 |              and the input catchment features.
10 | History: Initial coding - 10/21/2014, version 1.0
11 | Updated: Version 1.0, 10/23/2014, modified names of tool and parameters
12 | Version 1.0, 10/28/2014, added data validation
13 | Version 1.1, 10/30/2014, added lon_index, lat_index in output weight table
14 | Version 1.1, 11/07/2014, bug fixing - enables input catchment feature class
15 | with spatial reference that is not PCS_WGS_1984.
16 | Version 2.0, 06/04/2015, integrated Update Weight Table (according to Alan Snow, US Army ERDC)
17 | -------------------------------------------------------------------------------'''
18 | import os
19 | import arcpy
20 | import netCDF4 as NET
21 | import numpy as NUM
22 | import csv
23 |
24 | class CreateWeightTableFromECMWFRunoff(object):
25 | def __init__(self):
26 | """Define the tool (tool name is the name of the class)."""
27 | self.label = "Create Weight Table From ECMWF Runoff"
28 | self.description = ("Creates weight table based on the ECMWF Runoff file" +
29 | " and catchment features")
30 | self.canRunInBackground = False
31 | self.dims_oi = [['lon', 'lat', 'time'],
32 | ['longitude', 'latitude', 'time'],
33 | ['lon', 'lat'],
34 | ['longitude', 'latitude']]
35 | self.vars_oi = [["lon", "lat", "time", "RO"],
36 | ["longitude", "latitude", "time", "ro"],
37 | ["lon", "lat"],
38 | ["longitude", "latitude"]]
39 | self.errorMessages = ["Incorrect dimensions in the input ECMWF runoff file.",
40 | "Incorrect variables in the input ECMWF runoff file."]
41 | self.category = "Preprocessing"
42 |
43 | def dataValidation(self, in_nc, messages):
44 | """Check the necessary dimensions and variables in the input netcdf data"""
45 | data_nc = NET.Dataset(in_nc)
46 |
47 | dims = data_nc.dimensions.keys()
48 | if dims not in self.dims_oi:
49 | messages.addErrorMessage(self.errorMessages[0])
50 | raise arcpy.ExecuteError
51 |
52 | vars = data_nc.variables.keys()
53 | if vars not in self.vars_oi:
54 | messages.addErrorMessage(self.errorMessages[1])
55 | raise arcpy.ExecuteError
56 |
57 | return
58 |
59 | def createPolygon(self, lat, lon, extent, out_polygons, scratchWorkspace):
60 | """Create a Thiessen polygon feature class from numpy.ndarray lat and lon
61 | Each polygon represents the area described by the center point
62 | """
63 | buffer = 2 * max(abs(lat[0]-lat[1]),abs(lon[0] - lon[1]))
64 | # Extract the lat and lon within buffered extent (buffer with 2* interval degree)
65 | lat0 = lat[(lat >= (extent.YMin - buffer)) & (lat <= (extent.YMax + buffer))]
66 | lon0 = lon[(lon >= (extent.XMin - buffer)) & (lon <= (extent.XMax + buffer))]
67 | # Spatial reference: GCS_WGS_1984
68 | sr = arcpy.SpatialReference(4326)
69 |
70 | # Create a list of geographic coordinate pairs
71 | pointGeometryList = []
72 | for i in range(len(lon0)):
73 | for j in range(len(lat0)):
74 | point = arcpy.Point()
75 | point.X = float(lon0[i])
76 | point.Y = float(lat0[j])
77 | pointGeometry = arcpy.PointGeometry(point, sr)
78 | pointGeometryList.append(pointGeometry)
79 |
80 | # Create a point feature class with longitude in Point_X, latitude in Point_Y
81 | out_points = os.path.join(scratchWorkspace, 'points_subset')
82 | result2 = arcpy.CopyFeatures_management(pointGeometryList, out_points)
83 | out_points = result2.getOutput(0)
84 | arcpy.AddGeometryAttributes_management(out_points, 'POINT_X_Y_Z_M')
85 |
86 | # Create Thiessen polygon based on the point feature
87 | result3 = arcpy.CreateThiessenPolygons_analysis(out_points, out_polygons, 'ALL')
88 | out_polygons = result3.getOutput(0)
89 |
90 | return out_points, out_polygons
91 |
92 | def csvToList(self, csv_file, delimiter=','):
93 | """
94 | Reads in a CSV file and returns the contents as list,
95 | where every row is stored as a sublist, and each element
96 | in the sublist represents 1 cell in the table.
97 |
98 | """
99 | with open(csv_file, 'rb') as csv_con:
100 | reader = csv.reader(csv_con, delimiter=delimiter)
101 | return list(reader)
102 |
103 | def getParameterInfo(self):
104 | """Define parameter definitions"""
105 | param0 = arcpy.Parameter(name = "in_ECMWF_runoff_file",
106 | displayName = "Input ECMWF Runoff File",
107 | direction = "Input",
108 | parameterType = "Required",
109 | datatype = "DEFile")
110 |
111 | param1 = arcpy.Parameter(name = "in_network_connectivity_file",
112 |                              displayName = "Input Network Connectivity File",
113 | direction = "Input",
114 | parameterType = "Required",
115 | datatype = "DEFile")
116 |
117 | param2 = arcpy.Parameter(name = "in_catchment_features",
118 | displayName = "Input Catchment Features",
119 | direction = "Input",
120 | parameterType = "Required",
121 | datatype = "GPFeatureLayer")
122 |
123 | param2.filter.list = ['Polygon']
124 |
125 |
126 | param3 = arcpy.Parameter(name = "stream_ID",
127 | displayName = "Stream ID",
128 | direction = "Input",
129 | parameterType = "Required",
130 | datatype = "Field"
131 | )
132 | param3.parameterDependencies = ["in_catchment_features"]
133 | param3.filter.list = ['Short', 'Long']
134 |
135 |
136 | param4 = arcpy.Parameter(name="out_weight_table",
137 | displayName="Output Weight Table",
138 | direction="Output",
139 | parameterType="Required",
140 | datatype="DEFile")
141 |
142 | param5 = arcpy.Parameter(name = "out_cg_polygon_feature_class",
143 | displayName = "Output Computational Grid Polygon Feature Class",
144 | direction = "Output",
145 | parameterType = "Optional",
146 | datatype = "DEFeatureClass")
147 |
148 | param6 = arcpy.Parameter(name = "out_cg_point_feature_class",
149 | displayName = "Output Computational Grid Point Feature Class",
150 | direction = "Output",
151 | parameterType = "Optional",
152 | datatype = "DEFeatureClass")
153 |
154 |
155 | params = [param0, param1, param2, param3, param4, param5, param6]
156 |
157 | return params
158 |
159 | def isLicensed(self):
160 | """Set whether tool is licensed to execute."""
161 | return True
162 |
163 | def updateParameters(self, parameters):
164 | """Modify the values and properties of parameters before internal
165 | validation is performed. This method is called whenever a parameter
166 | has been changed."""
167 | if parameters[0].valueAsText is not None and parameters[2].valueAsText is not None \
168 | and parameters[3].valueAsText is not None and parameters[4].valueAsText is None:
169 | scratchWorkspace = arcpy.env.scratchWorkspace
170 | if not scratchWorkspace:
171 | scratchWorkspace = arcpy.env.scratchGDB
172 | parameters[4].value = os.path.join(scratchWorkspace, "Weight_Table.csv")
173 |
174 | if parameters[4].altered:
175 | (dirnm, basenm) = os.path.split(parameters[4].valueAsText)
176 | if not basenm.endswith(".csv"):
177 | parameters[4].value = os.path.join(dirnm, "{}.csv".format(basenm))
178 |
179 | return
180 |
181 | def updateMessages(self, parameters):
182 | """Modify the messages created by internal validation for each tool
183 | parameter. This method is called after internal validation."""
184 | if parameters[0].altered:
185 | in_nc = parameters[0].valueAsText
186 | try:
187 | data_nc = NET.Dataset(in_nc)
188 | data_nc.close()
189 | except Exception as e:
190 | parameters[0].setErrorMessage(e.message)
191 | return
192 |
193 | def find_nearest(self, array, value):
194 | """Gets value in array closest to the value searching for"""
195 | return (NUM.abs(array-value)).argmin()
196 |
197 | def execute(self, parameters, messages):
198 | """The source code of the tool."""
199 | arcpy.env.overwriteOutput = True
200 |
201 | scratchWorkspace = arcpy.env.scratchWorkspace
202 | if not scratchWorkspace:
203 | scratchWorkspace = arcpy.env.scratchGDB
204 |
205 | in_nc = parameters[0].valueAsText
206 | in_rapid_connect_file = parameters[1].valueAsText
207 | in_catchment = parameters[2].valueAsText
208 | streamID = parameters[3].valueAsText
209 | out_WeightTable = parameters[4].valueAsText
210 | out_CGPolygon = parameters[5].valueAsText
211 | out_CGPoint = parameters[6].valueAsText
212 |
213 | # validate the netcdf dataset
214 | self.dataValidation(in_nc, messages)
215 |
216 | # Obtain catchment extent in lat and lon in GCS_WGS_1984
217 | sr_cat = arcpy.Describe(in_catchment).SpatialReference
218 | extent = arcpy.Describe(in_catchment).extent
219 |         if (sr_cat.name != 'GCS_WGS_1984'):
220 |             # Project the catchment envelope to GCS_WGS_1984 before reading
221 |             # its extent; an extent already in GCS_WGS_1984 is used as-is
222 | envelope = os.path.join(scratchWorkspace, 'envelope')
223 | result0 = arcpy.MinimumBoundingGeometry_management(in_catchment, envelope, 'ENVELOPE', 'ALL')
224 | envelope = result0.getOutput(0)
225 | sr_out = arcpy.SpatialReference(4326) # 'GCS_WGS_1984'
226 | envelope_proj = os.path.join(scratchWorkspace,'envelope_proj')
227 | result1 = arcpy.Project_management(envelope, envelope_proj, sr_out)
228 | envelope_proj = result1.getOutput(0)
229 | extent = arcpy.Describe(envelope_proj).extent
230 |
231 |
232 | #Open nc file
233 | """ Variables in the netcdf file 1-51
234 | lat (1D): -89.78 to 89.78 by 0.28 (Size: 640)
235 | lon (1D): 0.0 to 359.72 by 0.28 (Size: 1280)
236 | RO (Geo2D): runoff (3 dimensions)
237 | time (1D): 0 to 360 by 6 (Size: 61)
238 | """
239 | """ Variables in the netcdf file 52 (High Resolution)
240 | lat (1D): -89.89 to 89.89 by 0.14 (Size: 1280)
241 | lon (1D): 0.0 to 359.86 by 0.14 (Size: 2560)
242 | RO (Geo2D): runoff (3 dimensions)
243 | time (1D): 0 to 240 (0 to 90 by 1, 90 to 144 by 3, 144 to 240 by 6) (Size: 125)
244 | """
245 |
246 | data_nc = NET.Dataset(in_nc)
247 |
248 | # Obtain geographic coordinates
249 | variables_list = data_nc.variables.keys()
250 | lat_var = 'lat'
251 | if 'latitude' in variables_list:
252 | lat_var = 'latitude'
253 | lon_var = 'lon'
254 | if 'longitude' in variables_list:
255 | lon_var = 'longitude'
256 | lon = (data_nc.variables[lon_var][:] + 180) % 360 - 180 # convert [0, 360] to [-180, 180]
257 | lat = data_nc.variables[lat_var][:]
258 |
259 | data_nc.close()
260 |
261 | # Create Thiessen polygons based on the points within the extent
262 | arcpy.AddMessage("Generating Thiessen polygons...")
263 | polygon_thiessen = os.path.join(scratchWorkspace,'polygon_thiessen')
264 | result4 = self.createPolygon(lat, lon, extent, polygon_thiessen, scratchWorkspace)
265 | polygon_thiessen = result4[1]
266 |
267 |
268 | # Output Thiessen polygons (computational grid polygons) and CG points if they are specified.
269 | if out_CGPolygon and out_CGPolygon != polygon_thiessen:
270 | arcpy.CopyFeatures_management(polygon_thiessen, out_CGPolygon)
271 | if out_CGPoint and out_CGPoint != result4[0]:
272 | arcpy.CopyFeatures_management(result4[0], out_CGPoint)
273 |
274 |
275 | # Intersect the catchment polygons with the Thiessen polygons
276 | arcpy.AddMessage("Intersecting Thiessen polygons with catchment...")
277 | intersect = os.path.join(scratchWorkspace, 'intersect')
278 | result5 = arcpy.Intersect_analysis([in_catchment, polygon_thiessen], intersect, 'ALL', '#', 'INPUT')
279 | intersect = result5.getOutput(0)
280 |
281 | # Calculate the geodesic area in square meters for each intersected polygon (no need to project if it's not projected yet)
282 | arcpy.AddMessage("Calculating geodesic areas...")
283 | arcpy.AddGeometryAttributes_management(intersect, 'AREA_GEODESIC', '', 'SQUARE_METERS', '')
284 |
285 | # Calculate the total geodesic area of each catchment based on the contributing areas of points
286 | fields = [streamID, 'POINT_X', 'POINT_Y', 'AREA_GEO']
287 | area_arr = arcpy.da.FeatureClassToNumPyArray(intersect, fields)
288 |
289 | arcpy.AddMessage("Writing the weight table...")
290 |         # Get list of COMIDs in rapid_connect file so only those are included in computations
291 | connectivity_table = self.csvToList(in_rapid_connect_file)
292 | streamID_unique_list = [int(row[0]) for row in connectivity_table]
293 |
294 |         # prepare dummy point data for COMIDs that have no intersected area
295 | lon_dummy = area_arr['POINT_X'][0]
296 | lat_dummy = area_arr['POINT_Y'][0]
297 | try:
298 | index_lon_dummy = int(NUM.where(lon == lon_dummy)[0])
299 | except TypeError as ex:
300 | #This happens when near meridian - lon_dummy ~ 0
301 | #arcpy.AddMessage("GRIDID: %s" % streamID_unique)
302 | #arcpy.AddMessage("Old Lon: %s" % lon_dummy)
303 | index_lon_dummy = int(self.find_nearest(lon, lon_dummy))
304 | #arcpy.AddMessage("Lon Index: %s" % index_lon_dummy)
305 | #arcpy.AddMessage("Lon Val: %s" % lon[index_lon_dummy])
306 | pass
307 |
308 | try:
309 | index_lat_dummy= int(NUM.where(lat == lat_dummy)[0])
310 | except TypeError as ex:
311 | #This happens when near equator - lat_dummy ~ 0
312 | #arcpy.AddMessage("GRIDID: %s" % streamID_unique)
313 | #arcpy.AddMessage("Old Lat: %s" % lat_dummy)
314 | index_lat_dummy = int(self.find_nearest(lat, lat_dummy))
315 | #arcpy.AddMessage("Lat Index: %s" % index_lat_dummy)
316 | #arcpy.AddMessage("Lat Val: %s" % lat[index_lat_dummy])
317 | pass
318 |
319 | # Output the weight table
320 | with open(out_WeightTable, 'wb') as csvfile:
321 | connectwriter = csv.writer(csvfile, dialect = 'excel')
322 | #header
323 | connectwriter.writerow([streamID, 'area_sqm', 'lon_index', 'lat_index',
324 | 'npoints', 'lon', 'lat'])
325 |
326 | for streamID_unique in streamID_unique_list:
327 | ind_points = NUM.where(area_arr[streamID]==streamID_unique)[0]
328 | num_ind_points = len(ind_points)
329 |
330 | if num_ind_points <= 0:
331 | # if point not in array, append dummy data for one point of data
332 | # streamID, area_sqm, lon_index, lat_index, npoints
333 | connectwriter.writerow([streamID_unique, 0, index_lon_dummy, index_lat_dummy,
334 | 1, lon_dummy, lat_dummy])
335 | else:
336 | for ind_point in ind_points:
337 | area_geo_each = float(area_arr['AREA_GEO'][ind_point])
338 | lon_each = area_arr['POINT_X'][ind_point]
339 | lat_each = area_arr['POINT_Y'][ind_point]
340 | try:
341 | index_lon_each = int(NUM.where(lon == lon_each)[0])
342 | except TypeError as ex:
343 | #This happens when near meridian - lon_each ~ 0
344 | index_lon_each = int(self.find_nearest(lon, lon_each))
345 | #arcpy.AddMessage("GRIDID: %s" % streamID_unique)
346 | #arcpy.AddMessage("Old Lon: %s" % lon_each)
347 | #arcpy.AddMessage("Lon Index: %s" % index_lon_each)
348 | #arcpy.AddMessage("Lon Val: %s" % lon[index_lon_each])
349 | pass
350 |
351 | try:
352 | index_lat_each = int(NUM.where(lat == lat_each)[0])
353 | except TypeError as ex:
354 | #This happens when near equator - lat_each ~ 0
355 | index_lat_each = int(self.find_nearest(lat, lat_each))
356 | #arcpy.AddMessage("GRIDID: %s" % streamID_unique)
357 | #arcpy.AddMessage("Old Lat: %s" % lat_each)
358 | #arcpy.AddMessage("Lat Index: %s" % index_lat_each)
359 | #arcpy.AddMessage("Lat Val: %s" % lat[index_lat_each])
360 | pass
361 | connectwriter.writerow([streamID_unique, area_geo_each, index_lon_each, index_lat_each,
362 | num_ind_points, lon_each, lat_each])
363 | return
364 |
--------------------------------------------------------------------------------
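Two numeric details in CreateWeightTableFromECMWFRunoff.py above are easy to miss: ECMWF longitudes run 0-360 and are remapped to -180-180 before they can be matched against catchment coordinates, and when an exact NUM.where match fails (near the prime meridian or the equator) the tool falls back to a nearest-neighbor index. A short sketch with sample grid values:

    import numpy as NUM

    lon = NUM.array([0.0, 90.0, 180.0, 270.0, 359.72])   # sample ECMWF-style longitudes
    lon = (lon + 180) % 360 - 180                        # remap [0, 360) to [-180, 180)
    # lon -> [0.0, 90.0, -180.0, -90.0, -0.28]

    def find_nearest(array, value):
        # same logic as the tool's find_nearest: index of the closest element
        return (NUM.abs(array - value)).argmin()

    find_nearest(lon, -90.1)   # -> 3, the index of -90.0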
/toolbox/scripts/FlowlineToPoint.py:
--------------------------------------------------------------------------------
1 | '''-------------------------------------------------------------------------------
2 | Tool Name: FlowlineToPoint
3 | Source Name: FlowlineToPoint.py
4 | Version: ArcGIS 10.2
5 | License: Apache 2.0
6 | Author: Environmental Systems Research Institute Inc.
7 | Updated by: Environmental Systems Research Institute Inc.
8 | Description: Writes the centroid coordinates of flowlines into a CSV file following
9 |              the CF (Climate and Forecast) metadata conventions.
10 | History: Initial coding - 07/15/2015, version 1.0 (Adapted from Alan Snow's
11 | script)
12 | Updated:
13 | -------------------------------------------------------------------------------'''
14 | import arcpy
15 | import csv
16 | from numpy import array, isnan
17 | import os
18 |
19 | class FlowlineToPoint(object):
20 | def __init__(self):
21 | """Define the tool (tool name is the name of the class)."""
22 | self.label = "Flowline To Point"
23 | self.description = ("Write the centroid coordinates of flowlines into a csv file")
24 | self.canRunInBackground = False
25 | self.category = "Utilities"
26 |
27 | def getParameterInfo(self):
28 | """Define parameter definitions"""
29 | in_drainage_line = arcpy.Parameter(
30 | displayName = 'Input Drainage Line Features',
31 | name = 'in_drainage_line_features',
32 | datatype = 'GPFeatureLayer',
33 | parameterType = 'Required',
34 | direction = 'Input')
35 | in_drainage_line.filter.list = ['Polyline']
36 |
37 | param1 = arcpy.Parameter(name = 'out_point_file',
38 | displayName = 'Output Point File',
39 | direction = 'Output',
40 | parameterType = 'Required',
41 | datatype = 'DEFile')
42 |
43 | params = [in_drainage_line, param1]
44 |
45 | return params
46 |
47 | def isLicensed(self):
48 | """Set whether tool is licensed to execute."""
49 | return True
50 |
51 | def updateParameters(self, parameters):
52 | """Modify the values and properties of parameters before internal
53 | validation is performed. This method is called whenever a parameter
54 | has been changed."""
55 | if parameters[1].altered:
56 | (dirnm, basenm) = os.path.split(parameters[1].valueAsText)
57 | if not basenm.endswith(".csv"):
58 | parameters[1].value = os.path.join(dirnm, "{}.csv".format(basenm))
59 |
60 | return
61 |
62 | def updateMessages(self, parameters):
63 | """Modify the messages created by internal validation for each tool
64 | parameter. This method is called after internal validation."""
65 | return
66 |
67 | def execute(self, parameters, messages):
68 | """The source code of the tool."""
69 | arcpy.env.overwriteOutput = True
70 |
71 | # Script arguments
72 | Input_Features = parameters[0].valueAsText
73 | Output_Table = parameters[1].valueAsText
74 | Intermediate_Feature_Points = os.path.join("in_memory","flowline_centroid_points")
75 |
76 | # Process: Feature To Point
77 | arcpy.AddMessage("Converting flowlines to points ...")
78 | arcpy.FeatureToPoint_management(Input_Features, Intermediate_Feature_Points, "CENTROID")
79 |
80 | # Process: Add XY Coordinates
81 | arcpy.AddMessage("Adding XY coordinates to points ...")
82 | arcpy.AddXY_management(Intermediate_Feature_Points)
83 |
84 | # write only desired fields to csv
85 | arcpy.AddMessage("Writing output to csv ...")
86 | original_field_names = [f.name for f in arcpy.ListFields(Intermediate_Feature_Points)]
87 | #COMID,Lat,Lon,Elev_m
88 | actual_field_names = ["", "", "", ""]
89 | for original_field_name in original_field_names:
90 | original_field_name_lower = original_field_name.lower()
91 | if original_field_name_lower == 'comid':
92 | actual_field_names[0] = original_field_name
93 | elif original_field_name_lower == 'hydroid':
94 | if not actual_field_names[0]:
95 | actual_field_names[0] = original_field_name
96 | elif original_field_name_lower == 'point_y':
97 | actual_field_names[1] = original_field_name
98 | elif original_field_name_lower == 'point_x':
99 | actual_field_names[2] = original_field_name
100 | elif original_field_name_lower == 'point_z':
101 | if not actual_field_names[3]:
102 | actual_field_names[3] = original_field_name
103 |
104 |         #check to make sure all required fields exist
105 |         for expected, field_name in zip(['COMID/HydroID', 'POINT_Y', 'POINT_X', 'POINT_Z'], actual_field_names):
106 |             if field_name == "":
107 |                 messages.addErrorMessage("Required field %s not found." % expected)
108 | raise arcpy.ExecuteError
109 |
110 | #print valid field names to table
111 | with open(Output_Table, 'wb') as outfile:
112 | writer = csv.writer(outfile)
113 | writer.writerow(['COMID','Lat','Lon','Elev_m'])
114 | with arcpy.da.SearchCursor(Intermediate_Feature_Points, actual_field_names) as cursor:
115 | for row in cursor:
116 | #make sure all values are valid
117 | np_row = array(row)
118 | np_row[isnan(np_row)] = 0
119 |                     writer.writerow([int(np_row[0]), np_row[1], np_row[2], np_row[3]])
120 |
121 |         arcpy.AddMessage("Any NaN values were replaced with zero. Please check the output for accuracy.")
122 |
123 | return
--------------------------------------------------------------------------------
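The NaN scrub in the cursor loop of FlowlineToPoint.py above copies each row into a float array so isnan can be applied, zeroes any NaN, and writes the cleaned values. A standalone sketch with a hypothetical row:

    from numpy import array, isnan

    row = (1234, 45.0, float('nan'), 250.3)   # hypothetical (COMID, Lat, Lon, Elev_m)
    np_row = array(row, dtype=float)
    np_row[isnan(np_row)] = 0                 # replace NaN with zero before writing
    cleaned = [int(np_row[0]), np_row[1], np_row[2], np_row[3]]
    # cleaned -> [1234, 45.0, 0.0, 250.3]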
/toolbox/scripts/PublishDischargeMap.py:
--------------------------------------------------------------------------------
1 | '''-------------------------------------------------------------------------------
2 | Tool Name: PublishDischargeMap
3 | Source Name: PublishDischargeMap.py
4 | Version: ArcGIS 10.2
5 | License: Apache 2.0
6 | Author: Environmental Systems Research Institute Inc.
7 | Updated by: Environmental Systems Research Institute Inc.
8 | Description: Publishes a discharge map document to an ArcGIS for Server site.
9 | History: Initial coding - 06/26/2015, version 1.0
10 | Updated: 07/31/2015, Added an input parameter to overwrite an existing service
11 | -------------------------------------------------------------------------------'''
12 | import os
13 | import arcpy
14 | import xml.dom.minidom as DOM
15 |
16 |
17 | class PublishDischargeMap(object):
18 | def __init__(self):
19 | """Define the tool (tool name is the name of the class)."""
20 | self.label = "Publish Discharge Map"
21 | self.description = "Publish a discharge map document for stream flow visualization \
22 | to an ArcGIS server"
23 | self.errorMessages = ["Incorrect map document"]
24 | self.canRunInBackground = False
25 | self.category = "Utilities"
26 |
27 | def getParameterInfo(self):
28 | """Define parameter definitions"""
29 | param0 = arcpy.Parameter(name = "in_discharge_map",
30 | displayName = "Input Discharge Map",
31 | direction = "Input",
32 | parameterType = "Required",
33 | datatype = "DEMapDocument"
34 | )
35 |
36 | param1 = arcpy.Parameter(name = "in_connection",
37 | displayName = "Input ArcGIS for Server Connection",
38 | direction = "Input",
39 | parameterType = "Required",
40 | datatype = "DEServerConnection"
41 | )
42 |
43 | param2 = arcpy.Parameter(name = "in_service_name",
44 | displayName = "Input Service Name",
45 | direction = "Input",
46 | parameterType = "Required",
47 | datatype = "GPString")
48 |
49 | param3 = arcpy.Parameter(name = "in_service_summary",
50 | displayName = "Input Service Summary",
51 | direction = "Input",
52 | parameterType = "Optional",
53 | datatype = "GPString")
54 |
55 | param4 = arcpy.Parameter(name = "in_service_tags",
56 | displayName = "Input Service Tags",
57 | direction = "Input",
58 | parameterType = "Optional",
59 | datatype = "GPString")
60 |
61 | param5 = arcpy.Parameter(name = "in_overwrite",
62 | displayName = "Overwrite an existing service",
63 | direction = "Input",
64 | parameterType = "Required",
65 | datatype = "GPBoolean")
66 | param5.value = False
67 |
68 | params = [param0, param1, param2, param3, param4, param5]
69 |
70 | return params
71 |
72 | def isLicensed(self):
73 | """Set whether tool is licensed to execute."""
74 | return True
75 |
76 | def updateParameters(self, parameters):
77 | """Modify the values and properties of parameters before internal
78 | validation is performed. This method is called whenever a parameter
79 | has been changed."""
80 | return
81 |
82 |
83 | def updateMessages(self, parameters):
84 | """Modify the messages created by internal validation for each tool
85 | parameter. This method is called after internal validation."""
86 |         '''Check that the input map document name ends with .mxd'''
87 | if parameters[0].altered:
88 | (dirnm, basenm) = os.path.split(parameters[0].valueAsText)
89 | if not basenm.endswith(".mxd"):
90 | parameters[0].setErrorMessage(self.errorMessages[0])
91 | return
92 |
93 | def execute(self, parameters, messages):
94 | """The source code of the tool."""
95 | arcpy.env.overwriteOutput = True
96 | wrkspc = arcpy.env.scratchWorkspace
97 | if wrkspc is None:
98 | wrkspc = arcpy.env.scratchFolder
99 | else:
100 | if wrkspc.endswith('.gdb') or wrkspc.endswith('.sde') or wrkspc.endswith('.mdb'):
101 | wrkspc = arcpy.env.scratchFolder
102 |
103 | in_map_document = parameters[0].valueAsText
104 | in_connection = parameters[1].valueAsText
105 | in_service_name = parameters[2].valueAsText
106 | in_service_summary = parameters[3].valueAsText
107 | in_service_tags = parameters[4].valueAsText
108 | in_overwrite = parameters[5].value
109 |
110 | # Provide other service details
111 | sddraft = os.path.join(wrkspc, in_service_name + '.sddraft')
112 | sd = os.path.join(wrkspc, in_service_name + '.sd')
113 |
114 |
115 | # Create service definition draft
116 | arcpy.mapping.CreateMapSDDraft(in_map_document, sddraft, in_service_name,
117 | 'ARCGIS_SERVER', in_connection, True, None, in_service_summary, in_service_tags)
118 |
119 |
120 |
121 | # Properties that will be changed in the sddraft xml
122 | soe = 'WMSServer'
123 | # Read the sddraft xml.
124 | doc = DOM.parse(sddraft)
125 | # Find all elements named TypeName. This is where the server object extension (SOE) names are defined.
126 | typeNames = doc.getElementsByTagName("TypeName")
127 |
128 | for typeName in typeNames:
129 | if typeName.firstChild.data == soe:
130 | typeName.parentNode.getElementsByTagName('Enabled')[0].firstChild.data = 'true'
131 |
132 | if in_overwrite == True:
133 | newType = 'esriServiceDefinitionType_Replacement'
134 | descriptions = doc.getElementsByTagName("Type")
135 | for desc in descriptions:
136 | if desc.parentNode.tagName == "SVCManifest":
137 | if desc.hasChildNodes():
138 | desc.firstChild.data = newType
139 |
140 | # Delete the old sddraft
141 | if os.path.isfile(sddraft):
142 | os.remove(sddraft)
143 | # Output the new sddraft
144 | f = open(sddraft, 'w')
145 | doc.writexml(f)
146 | f.close()
147 |
148 | # Analyze the service definition draft
149 | analysis = arcpy.mapping.AnalyzeForSD(sddraft)
150 |
151 | # Print errors, warnings, and messages returned from the analysis
152 | arcpy.AddMessage("The following information was returned during analysis of the MXD:")
153 | for key in ('messages', 'warnings', 'errors'):
154 | arcpy.AddMessage('----' + key.upper() + '---')
155 | vars = analysis[key]
156 | for ((message, code), layerlist) in vars.iteritems():
157 | arcpy.AddMessage(' {0} (CODE {1})'.format(message, code))
158 | arcpy.AddMessage(' applies to:')
159 | for layer in layerlist:
160 | arcpy.AddMessage(layer.name)
161 |
162 | # Stage and upload the service if the sddraft analysis did not contain errors
163 | if analysis['errors'] == {}:
164 | # Execute StageService. This creates the service definition.
165 | arcpy.StageService_server(sddraft, sd)
166 |
167 | # Execute UploadServiceDefinition. This uploads the service definition and publishes the service.
168 | arcpy.UploadServiceDefinition_server(sd, in_connection)
169 | arcpy.AddMessage("Service successfully published")
170 | else:
171 | arcpy.AddMessage("Service could not be published because errors were found during analysis.")
172 |
173 | arcpy.AddMessage(arcpy.GetMessages())
174 |
175 | return
176 |
177 |
--------------------------------------------------------------------------------
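The sddraft edit in PublishDischargeMap.py above is plain DOM surgery: find the TypeName element for the WMSServer extension and flip its sibling Enabled flag. A self-contained sketch against a simplified stand-in fragment (a real .sddraft file is much larger):

    import xml.dom.minidom as DOM

    # simplified stand-in for the relevant part of an .sddraft file
    fragment = ('<SVCExtension><TypeName>WMSServer</TypeName>'
                '<Enabled>false</Enabled></SVCExtension>')
    doc = DOM.parseString(fragment)

    for typeName in doc.getElementsByTagName('TypeName'):
        if typeName.firstChild.data == 'WMSServer':
            typeName.parentNode.getElementsByTagName('Enabled')[0].firstChild.data = 'true'

    # doc.toxml() now contains <Enabled>true</Enabled>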
/toolbox/scripts/UpdateDischargeMap.py:
--------------------------------------------------------------------------------
1 | '''-------------------------------------------------------------------------------
2 | Tool Name: UpdateDischargeMap
3 | Source Name: UpdateDischargeMap.py
4 | Version: ArcGIS 10.2
5 | License: Apache 2.0
6 | Author: Environmental Systems Research Institute Inc.
7 | Updated by: Environmental Systems Research Institute Inc.
8 | Description: Updates a discharge map document.
9 | History: Initial coding - 05/26/2015, version 1.0
10 | Updated: Version 1.0, 06/02/2015 Bug fixing: uses arcpy.mapping.UpdateLayer instead of apply symbology
11 | from layer
12 | Version 1.1, 06/24/2015 Adapted to the group layer in the map document
13 | Version 1.1, 04/01/2016 deleted the lines for importing unnecessary modules
14 | -------------------------------------------------------------------------------'''
15 | import os
16 | import arcpy
17 | import time
18 |
19 | class UpdateDischargeMap(object):
20 | def __init__(self):
21 | """Define the tool (tool name is the name of the class)."""
22 | self.label = "Update Discharge Map"
23 | self.description = "Update a discharge map document for stream flow visualization based on \
24 | the .mxd file and a new discharge table with the same name"
25 | self.GDBtemplate_layer = os.path.join(os.path.dirname(__file__), "templates", "FGDB_TimeEnabled.lyr")
26 | self.SQLtemplate_layer = os.path.join(os.path.dirname(__file__), "templates", "SQL_TimeEnabled.lyr")
27 | self.errorMessages = ["Incorrect map document"]
28 | self.canRunInBackground = False
29 | self.category = "Postprocessing"
30 |
31 | def getParameterInfo(self):
32 | """Define parameter definitions"""
33 | param0 = arcpy.Parameter(name = "in_discharge_map",
34 | displayName = "Input Discharge Map",
35 | direction = "Input",
36 | parameterType = "Required",
37 | datatype = "DEMapDocument"
38 | )
39 |
40 | param1 = arcpy.Parameter(name = "out_discharge_map",
41 | displayName = "Output Discharge Map",
42 | direction = "Output",
43 | parameterType = "Derived",
44 | datatype = "DEMapDocument"
45 | )
46 |
47 | params = [param0, param1]
48 |
49 | return params
50 |
51 | def isLicensed(self):
52 | """Set whether tool is licensed to execute."""
53 | return True
54 |
55 | def updateParameters(self, parameters):
56 | """Modify the values and properties of parameters before internal
57 | validation is performed. This method is called whenever a parameter
58 | has been changed."""
59 | return
60 |
61 |
62 | def updateMessages(self, parameters):
63 | """Modify the messages created by internal validation for each tool
64 | parameter. This method is called after internal validation."""
65 |         '''Check that the input map document name ends with .mxd'''
66 | if parameters[0].altered:
67 | (dirnm, basenm) = os.path.split(parameters[0].valueAsText)
68 | if not basenm.endswith(".mxd"):
69 | parameters[0].setErrorMessage(self.errorMessages[0])
70 | return
71 |
72 | def execute(self, parameters, messages):
73 | """The source code of the tool."""
74 | arcpy.env.overwriteOutput = True
75 |
76 | in_map_document = parameters[0].valueAsText
77 |
78 | '''Update symbology for each layer in the map document'''
79 | mxd = arcpy.mapping.MapDocument(in_map_document)
80 | df = arcpy.mapping.ListDataFrames(mxd)[0]
81 | for lyr in arcpy.mapping.ListLayers(mxd):
82 | if not lyr.isGroupLayer:
83 | (dirnm, basenm) = os.path.split(lyr.dataSource)
84 | template_lyr = self.GDBtemplate_layer
85 | if not dirnm.endswith('.gdb'):
86 | template_lyr = self.SQLtemplate_layer
87 | # Update symbology from template
88 | templateLayer = arcpy.mapping.Layer(template_lyr)
89 | arcpy.mapping.UpdateLayer(df, lyr, templateLayer, True)
90 |
91 | mxd.save()
92 | del mxd, df, templateLayer
93 |
94 | return
95 |
96 |
97 |
--------------------------------------------------------------------------------
/toolbox/scripts/UpdateWeightTable.py:
--------------------------------------------------------------------------------
1 | '''-------------------------------------------------------------------------------
2 | Tool Name: UpdateWeightTable
3 | Source Name: UpdateWeightTable.py
4 | Version: ArcGIS 10.2
5 | License: Apache 2.0
6 | Author: Environmental Systems Research Institute Inc.
7 | Updated by: Environmental Systems Research Institute Inc.
8 | Description: Updates the weight table by comparing its IDs with those in the
9 |              connectivity file. This is a temporary tool to deal with the problem that
10 |              there are more drainage line features than catchment features. Its
11 |              functionality will be integrated into CreateWeightTableFrom***.py, and the
12 |              CreateInflowFile***.py tool will be redesigned.
13 | History: Initial coding - 05/12/2015, version 1.0
14 | Updated: Version 1.0, 05/12/2015, initial coding adapted from Alan Snow's script
15 | "format_weight_table_from_connecitivity.py", bug fixing: the rows appended by
16 | replacement_row in the new_weight_table get overwritten when replacement_row is
17 | updated.
18 | Version 1.0, 05/24/2015, bug fixing: set npoints as 1 in the replacement_row,
19 |              and fixed the overwriting problem of the replacement_row (contributor: Alan Snow)
20 | -------------------------------------------------------------------------------'''
21 | import arcpy
22 | import csv
23 | import operator
24 | import os
25 |
26 |
27 |
28 | class UpdateWeightTable(object):
29 | def __init__(self):
30 | """Define the tool (tool name is the name of the class)."""
31 | self.label = "Update Weight Table"
32 |         self.description = ("Update the weight table from the connectivity file")
33 | self.canRunInBackground = False
34 |         # this tool is grouped under Preprocessing
35 |         self.category = "Preprocessing"
36 |
37 | def csv_to_list(self, csv_file, delimiter=','):
38 | """
39 | Reads in a CSV file and returns the contents as list,
40 | where every row is stored as a sublist, and each element
41 | in the sublist represents 1 cell in the table.
42 |
43 | """
44 | with open(csv_file, 'rb') as csv_con:
45 | reader = csv.reader(csv_con, delimiter=delimiter)
46 | return list(reader)
47 |
48 | def convert_comid_to_int(self, csv_cont):
49 | """
50 |         Converts the first cell (COMID) of each row to int if possible
51 |         (modifies the input CSV content list).
52 |
53 | """
54 | for row in range(len(csv_cont)):
55 | try:
56 | csv_cont[row][0] = int(csv_cont[row][0])
57 | except ValueError:
58 | pass
59 |
60 | def get_comid_list(self, csv_cont):
61 | """
62 |         Returns the list of COMIDs (first column) as ints,
63 |         skipping cells that cannot be converted.
64 |
65 | """
66 | comid_list = []
67 | for row in range(len(csv_cont)):
68 | try:
69 | comid_list.append(int(csv_cont[row][0]))
70 | except ValueError:
71 | pass
72 | return comid_list
73 |
74 | def sort_by_column(self, csv_cont, col, reverse=False):
75 | """
76 |         Sorts CSV contents by column name (if the col argument is of type str)
77 |         or column index (if the col argument is of type int).
78 |
79 | """
80 | header = csv_cont[0]
81 | body = csv_cont[1:]
82 | if isinstance(col, str):
83 | col_index = header.index(col)
84 | else:
85 | col_index = col
86 | body = sorted(body,
87 | key=operator.itemgetter(col_index),
88 | reverse=reverse)
89 | body.insert(0, header)
90 | return body
91 |
92 | def find_comid_weight_table(self, comid, weight_table):
93 | """
94 |         Returns the first row of the weight table matching comid and removes it
95 |
96 | """
97 | for row in range(len(weight_table)):
98 | if weight_table[row][0] == comid:
99 | comid_row = weight_table[row]
100 | weight_table.remove(comid_row)
101 | return comid_row
102 | return None
103 |
104 |
105 | def getParameterInfo(self):
106 | """Define parameter definitions"""
107 | param0 = arcpy.Parameter(name="in_weight_table",
108 | displayName="Input Weight Table",
109 | direction="Input",
110 | parameterType="Required",
111 | datatype="DEFile")
112 |
113 | param1 = arcpy.Parameter(name = 'in_network_connectivity_file',
114 | displayName = 'Input Network Connectivity File',
115 | direction = 'Input',
116 | parameterType = 'Required',
117 | datatype = 'DEFile')
118 |
119 | param2 = arcpy.Parameter(name = "out_weight_table",
120 | displayName = "Output Weight Table",
121 | direction = "Output",
122 | parameterType = "Required",
123 | datatype = "DEFile")
124 |
125 | params = [param0, param1, param2]
126 |
127 | return params
128 |
129 | def isLicensed(self):
130 | """Set whether tool is licensed to execute."""
131 | return True
132 |
133 | def updateParameters(self, parameters):
134 | """Modify the values and properties of parameters before internal
135 | validation is performed. This method is called whenever a parameter
136 | has been changed."""
137 | if parameters[2].altered:
138 | (dirnm, basenm) = os.path.split(parameters[2].valueAsText)
139 | if not basenm.endswith(".csv"):
140 | parameters[2].value = os.path.join(dirnm, "{}.csv".format(basenm))
141 |
142 | return
143 |
144 | def updateMessages(self, parameters):
145 | """Modify the messages created by internal validation for each tool
146 | parameter. This method is called after internal validation."""
147 | return
148 |
149 | def execute(self, parameters, messages):
150 | """The source code of the tool."""
151 | arcpy.env.overwriteOutput = True
152 |
153 | in_WeightTable = parameters[0].valueAsText
154 | in_ConnectivityFile = parameters[1].valueAsText
155 | out_WeightTable = parameters[2].valueAsText
156 |
157 | #get all flowline comids
158 | connectivity = self.csv_to_list(in_ConnectivityFile)
159 | all_flowline_comid = self.get_comid_list(connectivity)
160 |
161 | #get all catchment comids
162 | weight_table = self.csv_to_list(in_WeightTable)
163 | self.convert_comid_to_int(weight_table)
164 |
165 | #FEATUREID,area_sqm,lon_index,lat_index,npoints,weight,Lon,Lat
166 | new_weight_table = weight_table[0:1][:]
167 |
168 | replacement_row = weight_table[1][1:]
169 | #set area_sqm to zero
170 | replacement_row[0] = 0
171 | #set npoints to one
172 | replacement_row[3] = 1
173 |
174 | for comid in all_flowline_comid:
175 | #delete rows in catchment but not in flowline
176 | new_row = self.find_comid_weight_table(comid, weight_table)
177 | row_count = 0
178 | while new_row:
179 | new_weight_table.append(new_row)
180 | new_row = self.find_comid_weight_table(comid, weight_table)
181 | row_count += 1
182 | if row_count <= 0:
183 | #add rows for each flowline not in catchment
184 | new_replacement_row = [comid]
185 | new_replacement_row.extend(replacement_row)
186 | #FEATUREID,area_sqm,lon_index,lat_index,npoints,weight,Lon,Lat
187 | new_weight_table.append(new_replacement_row)
188 |
189 |
190 | #print to file
191 | with open(out_WeightTable, 'wb') as outfile:
192 | writer = csv.writer(outfile)
193 | writer.writerows(new_weight_table)
194 |
195 |
196 |
197 | return
198 |
--------------------------------------------------------------------------------
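The replacement_row bug fixed in version 1.0 (05/24/2015) of UpdateWeightTable.py above is ordinary list aliasing: appending the same list object for every missing COMID means a later edit rewrites rows already appended, while extend() copies the current values into a fresh list. A sketch with made-up values:

    replacement_row = [0, 7, 8, 1]          # shared template row

    # aliasing pitfall: both entries point at the same list object
    bad = [replacement_row, replacement_row]
    replacement_row[0] = 999
    # bad -> [[999, 7, 8, 1], [999, 7, 8, 1]]

    # the fix used above: a fresh list per COMID, extended with copied values
    replacement_row = [0, 7, 8, 1]
    new_row = [42]                          # hypothetical COMID
    new_row.extend(replacement_row)
    replacement_row[0] = 999
    # new_row -> [42, 0, 7, 8, 1]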
/toolbox/scripts/templates/FGDB_TimeEnabled.lyr:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Esri/python-toolbox-for-rapid/05737d48c3f8ecf37aba37fe807b663929d0b63a/toolbox/scripts/templates/FGDB_TimeEnabled.lyr
--------------------------------------------------------------------------------
/toolbox/scripts/templates/SQL_TimeEnabled.lyr:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Esri/python-toolbox-for-rapid/05737d48c3f8ecf37aba37fe807b663929d0b63a/toolbox/scripts/templates/SQL_TimeEnabled.lyr
--------------------------------------------------------------------------------
/toolbox/scripts/templates/template_mxd.mxd:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Esri/python-toolbox-for-rapid/05737d48c3f8ecf37aba37fe807b663929d0b63a/toolbox/scripts/templates/template_mxd.mxd
--------------------------------------------------------------------------------
/toolbox_screenshot.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Esri/python-toolbox-for-rapid/05737d48c3f8ecf37aba37fe807b663929d0b63a/toolbox_screenshot.png
--------------------------------------------------------------------------------