├── .gitignore ├── AUTHORS.rst ├── CHANGELOG.rst ├── CONTRIBUTING.rst ├── LICENSE ├── Makefile ├── README.rst ├── data ├── inputs │ ├── parameters_wolock.csv │ ├── timeseries_wolock.csv │ └── twi_wolock.csv ├── modelconfig.ini └── outputs │ ├── flow_duration_curve.png │ ├── flow_duration_curved_observed_vs_predicted.png │ ├── flow_observed.png │ ├── flow_observed_vs_flow_predicted.png │ ├── flow_predicted.png │ ├── output.csv │ ├── pet.png │ ├── precip_minus_pet.png │ ├── precipitation.png │ ├── report.html │ ├── saturation_deficit_avgs.png │ └── temperature.png ├── docs ├── Makefile ├── authors.rst ├── changelog.rst ├── code.rst ├── conf.py ├── contributing.rst ├── index.rst ├── installation.rst ├── lant-to-wolock-conversion-table.rst ├── make.bat ├── overview.rst ├── readme.rst ├── tutorial.rst └── usage.rst ├── requirements.txt ├── setup.py ├── tests ├── __init__.py ├── conftest.py ├── test_hydrocalcs.py ├── test_modelconfigfile.py ├── test_parametersfile.py ├── test_timeseriesfile.py ├── test_topmodel.py ├── test_twifile.py └── testdata │ ├── timeseries_wolock.csv │ └── twi_wolock.csv └── topmodelpy ├── __init__.py ├── cli.py ├── exceptions.py ├── hydrocalcs.py ├── main.py ├── modelconfigfile.py ├── parametersfile.py ├── plots.py ├── report.py ├── templates └── report_template.html ├── timeseriesfile.py ├── topmodel.py ├── twifile.py └── utils.py /.gitignore: -------------------------------------------------------------------------------- 1 | # Byte-compiled / optimized / DLL files 2 | .cache/ 3 | __pycache__/ 4 | *.py[cod] 5 | *$py.class 6 | *.py.swp 7 | .pytest_cache/ 8 | 9 | # Sphinx documentation 10 | docs/_build 11 | 12 | # Distribution / packaging 13 | *.egg-info/ 14 | 15 | # Virtual environment 16 | venv/ 17 | -------------------------------------------------------------------------------- /AUTHORS.rst: -------------------------------------------------------------------------------- 1 | Authors 2 | ======= 3 | 4 | Development Leads 5 | ----------------- 
6 | 7 | - Jeremiah Lant <jlant@usgs.gov> 8 | 9 | Core Developers 10 | --------------- 11 | 12 | - name 13 | 14 | Contributors 15 | ------------ 16 | 17 | - name 18 | -------------------------------------------------------------------------------- /CHANGELOG.rst: -------------------------------------------------------------------------------- 1 | Changelog 2 | ========= 3 | 4 | Version 0.1.0 5 | -------------------- 6 | (bugfix | feature added | etc, released on yyyy-mm-dd) 7 | 8 | - Add information about changes; i.e., 'first release of topmodelpy' 9 | -------------------------------------------------------------------------------- /CONTRIBUTING.rst: -------------------------------------------------------------------------------- 1 | Contributing 2 | ============ 3 | 4 | Contributions are welcome, and they are greatly appreciated! Every 5 | little bit helps, and credit will always be given. 6 | 7 | You can contribute in many ways: 8 | 9 | Types of Contributions 10 | ---------------------- 11 | 12 | Report Bugs 13 | ~~~~~~~~~~~ 14 | 15 | Report bugs to Jeremiah Lant at jlant@usgs.gov or on the issues page of 16 | the respective online repository where topmodelpy is hosted. 17 | 18 | If you are reporting a bug, please include: 19 | 20 | * Your operating system name and version. 21 | * Any details about your local setup that might be helpful in troubleshooting. 22 | * Detailed steps to reproduce the bug. 23 | 24 | Fix Bugs 25 | ~~~~~~~~ 26 | 27 | Look through the issues page of the respective online repository where topmodelpy is hosted for bugs. 28 | Anything tagged with "bug" is open to whoever wants to implement it. 29 | 30 | Implement Features 31 | ~~~~~~~~~~~~~~~~~~ 32 | 33 | Look through the issues page of the respective online repository where topmodelpy is hosted for features. 34 | Anything tagged with "feature" is open to whoever wants to implement it.
35 | 36 | Write Documentation 37 | ~~~~~~~~~~~~~~~~~~~ 38 | 39 | topmodelpy could always use more documentation, whether as part of the 40 | official topmodelpy docs or in docstrings. 41 | 42 | Submit Feedback 43 | ~~~~~~~~~~~~~~~ 44 | 45 | The best way to send feedback is to email Jeremiah Lant at jlant@usgs.gov or to file an issue on the issues page of 46 | the respective online repository where topmodelpy is hosted. 47 | 48 | If you are proposing a feature: 49 | 50 | * Explain in detail how it would work. 51 | * Keep the scope as narrow as possible, to make it easier to implement. 52 | * Remember that this is a volunteer-driven project, and that contributions 53 | are welcome :) 54 | 55 | Get Started! 56 | ------------ 57 | 58 | Ready to contribute? Here's how to set up `topmodelpy` for local development. 59 | 60 | 1. Fork the `topmodelpy` repo. 61 | 2. Clone your fork locally:: 62 | 63 | $ git clone <url-of-your-fork> 64 | 65 | 3. Install your local copy into a virtual environment using virtualenv_ or conda_. 66 | If you have virtualenvwrapper_ installed, this is how you set up your fork for local development:: 67 | 68 | $ mkvirtualenv topmodelpy 69 | $ cd topmodelpy/ 70 | $ python setup.py develop 71 | 72 | If you have conda_ installed, this is how you set up your fork for local development on Linux or Mac OS X:: 73 | 74 | $ conda create --name topmodelpy 75 | $ source activate topmodelpy 76 | $ cd topmodelpy/ 77 | $ python setup.py develop 78 | 79 | If you have conda_ installed, this is how you set up your fork for local development on Windows:: 80 | 81 | > conda create --name topmodelpy 82 | > activate topmodelpy 83 | > cd topmodelpy/ 84 | > python setup.py develop 85 | 86 | 4. Create a branch for local development:: 87 | 88 | $ git checkout -b name-of-your-bugfix-or-feature 89 | 90 | Now you can make your changes locally. 91 | 92 | 5. When you're done making changes, check that your changes pass flake8_ and the tests, including testing other Python versions with tox_:: 93 | 94 | $ flake8 topmodelpy tests 95 | $ py.test 96 | $ tox 97 | 98 | To get flake8 and tox, just pip install them into your virtualenv. 99 | 100 | 6.
Commit your changes and push your branch to the respective online repository where topmodelpy is hosted:: 101 | 102 | $ git add . 103 | $ git commit -m "Your detailed description of your changes." 104 | $ git push origin name-of-your-bugfix-or-feature 105 | 106 | 7. Submit a pull request through the respective online repository website where topmodelpy is hosted. 107 | 108 | Pull Request Guidelines 109 | ----------------------- 110 | 111 | Before you submit a pull request, check that it meets these guidelines: 112 | 113 | 1. The pull request should include tests. 114 | 2. If the pull request adds functionality, the docs should be updated. Put 115 | your new functionality into a function with a docstring, and add the 116 | feature to the list in README.rst. 117 | 3. The pull request should work for Python 2.6, 2.7, 3.3, and 3.4. 118 | 119 | Tips 120 | ---- 121 | 122 | To run a subset of tests:: 123 | 124 | $ py.test tests/test_<name>.py 125 | 126 | 127 | .. _virtualenv: https://virtualenv.pypa.io/en/latest/ 128 | .. _conda: http://conda.pydata.org/ 129 | .. _virtualenvwrapper: http://virtualenvwrapper.readthedocs.io/en/latest/ 130 | .. _flake8: https://flake8.readthedocs.io/en/latest/ 131 | .. _tox: http://tox.readthedocs.io/en/latest/ 132 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | License 2 | ------- 3 | 4 | This software is licensed under CC0 1.0 (http://creativecommons.org/publicdomain/zero/1.0/) and is in the public domain 5 | because it contains materials that originally came from the U.S. Geological Survey (USGS), an agency of the United 6 | States Department of Interior. For more information, see the official USGS copyright policy 7 | (http://www2.usgs.gov/visual-id/credit_usgs.html#copyright/). 8 | 9 | 10 | Disclaimer 11 | ---------- 12 | 13 | This software is preliminary or provisional and is subject to revision.
It is being provided to meet the need for timely 14 | best science. The software has not received final approval by the U.S. Geological Survey (USGS). No warranty, expressed 15 | or implied, is made by the USGS or the U.S. Government as to the functionality of the software and related material nor 16 | shall the fact of release constitute any such warranty. The software is provided on the condition that neither the USGS 17 | nor the U.S. Government shall be held liable for any damages resulting from the authorized or unauthorized use of the 18 | software. 19 | 20 | The USGS provides no warranty, expressed or implied, as to the correctness of the furnished software or the suitability 21 | for any purpose. The software has been tested, but as with any complex software, there could be undetected errors. Users 22 | who find errors are requested to report them to the USGS. 23 | 24 | References to non-USGS products, trade names, and (or) services are provided for information purposes only and do not 25 | constitute endorsement or warranty, express or implied, by the USGS, U.S. Department of Interior, or U.S. Government, as 26 | to their suitability, content, usefulness, functioning, completeness, or accuracy. 27 | 28 | Although this program has been used by the USGS, no warranty, expressed or implied, is made by the USGS or the United 29 | States Government as to the accuracy and functioning of the program and related program material nor shall the fact of 30 | distribution constitute any such warranty, and no responsibility is assumed by the USGS in connection therewith. 31 | 32 | This software is provided "AS IS." 
-------------------------------------------------------------------------------- /Makefile: -------------------------------------------------------------------------------- 1 | .PHONY: clean clean-pyc 2 | 3 | help: 4 | @echo "clean - remove all test and Python artifacts" 5 | @echo "clean-pyc - remove Python file artifacts" 6 | 7 | clean: clean-pyc 8 | 9 | clean-pyc: 10 | find . -name '*.pyc' -exec rm -f {} \; 11 | find . -name '*.pyo' -exec rm -f {} \; 12 | find . -name '*~' -exec rm -f {} \; 13 | find . -name '__pycache__' -exec rm -fr {} \; 14 | 15 | 16 | -------------------------------------------------------------------------------- /README.rst: -------------------------------------------------------------------------------- 1 | .. image:: https://upload.wikimedia.org/wikipedia/commons/thumb/1/1c/USGS_logo_green.svg/320px-USGS_logo_green.svg.png 2 | :target: http://www.usgs.gov/ 3 | :alt: U.S. Geological Survey logo 4 | 5 | topmodelpy 6 | =============================== 7 | 8 | *topmodelpy* is a rainfall-runoff model that predicts the amount of water 9 | flow in rivers. *topmodelpy* is a command line application written in Python 10 | using Click_, and is a complete conversion of the original rainfall-runoff 11 | model, called Topmodel, from Fortran to Python. The specific version of 12 | Topmodel that *topmodelpy* is based on is the version by David Wolock, 13 | U.S. Geological Survey. Please see the report below for more details: 14 | 15 | Wolock, D.M., "Simulating the variable-source-area concept of 16 | streamflow generation with the watershed model Topmodel", U.S. Geological 17 | Survey, Water-Resources Investigations Report 93-4124, 1993. 18 | 19 | 20 | Features 21 | -------- 22 | 23 | * Written entirely in Python for ease of use and model extension.
24 | 25 | 26 | Example 27 | ------- 28 | 29 | Add a short example 30 | 31 | :: 32 | 33 | show some example code here 34 | 35 | 36 | Documentation 37 | ------------- 38 | 39 | Add a link to the project and code documentation site. 40 | 41 | 42 | Tests 43 | ----- 44 | 45 | A suite of tests was built using `pytest`_. 46 | 47 | To run the test suite, from the command line in the project's root directory:: 48 | 49 | $ py.test tests/ 50 | 51 | 52 | 53 | Requirements 54 | ------------ 55 | 56 | Add requirements and code dependencies. 57 | 58 | 59 | Installation 60 | ------------ 61 | 62 | To install topmodelpy from source: 63 | 64 | 1. Check that you have Python_ installed:: 65 | 66 | $ python --version 67 | 68 | If you do not have Python_ installed, please download the latest version from `Python's download page`_. 69 | 70 | 2. Download topmodelpy from the repository and extract to a directory of your choice. 71 | 72 | Or, if you have git_ installed you can clone the project:: 73 | 74 | $ git clone <repository-url> 75 | 76 | 3. Navigate to the project's root directory where the setup script called `setup.py` is located:: 77 | 78 | $ cd topmodelpy/ 79 | 80 | | The `setup.py` is a Python file that contains information regarding the installation of a Python module/package, and 81 | | usually specifies that the module/package has been packaged and distributed with the standard Python distribution 82 | | package called Distutils_. 83 | 84 | 4. Run `setup.py` with the `install` command:: 85 | 86 | $ python setup.py install 87 | 88 | topmodelpy will now be installed to the standard location for third-party Python modules on your 89 | computer platform. 90 | 91 | For more information regarding installing third-party Python modules, please see `Installing Python Modules`_. 92 | For a description of how installation works including where the module will be installed on your computer platform, 93 | please see `How Installation Works`_.
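The model's input files under ``data/inputs`` (listed in the project tree above) are plain CSV, so they can be inspected without installing topmodelpy itself. Below is a minimal, hypothetical sketch (not code from the package) that assumes only the column layout of ``data/inputs/timeseries_wolock.csv`` and uses the standard library to derive precipitation minus potential evapotranspiration, one of the series the model plots to ``data/outputs/precip_minus_pet.png``:

```python
# Hypothetical sketch: read a timeseries CSV in the layout of
# data/inputs/timeseries_wolock.csv and compute precipitation minus PET.
import csv
import io

# Inline sample with the same columns as the real input file.
sample = (
    "date,temperature (celsius),precipitation (mm/day),"
    "pet (mm/day),flow_observed (mm/day)\n"
    "1980-01-22,-1.9,2.5,0.39,1.18\n"
    "1980-01-23,-0.3,0,0.44,1.02\n"
)

rows = list(csv.DictReader(io.StringIO(sample)))

# Precipitation minus PET, in mm/day, rounded for display.
precip_minus_pet = [
    round(float(r["precipitation (mm/day)"]) - float(r["pet (mm/day)"]), 2)
    for r in rows
]
print(precip_minus_pet)  # [2.11, -0.44]
```

To run it against the real file, replace the inline sample with ``open("data/inputs/timeseries_wolock.csv")``; note the actual file's header has a stray space after ``date,`` that a stricter parser may need to strip.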
94 | 95 | 96 | License 97 | ------- 98 | 99 | This software is licensed under `CC0 1.0`_ and is in the `public domain`_ because it contains materials that originally 100 | came from the `U.S. Geological Survey (USGS)`_, an agency of the `United States Department of Interior`_. For more 101 | information, see the `official USGS copyright policy`_. 102 | 103 | .. image:: http://i.creativecommons.org/p/zero/1.0/88x31.png 104 | :target: http://creativecommons.org/publicdomain/zero/1.0/ 105 | :alt: Creative Commons logo 106 | 107 | 108 | Disclaimer 109 | ---------- 110 | 111 | This software is preliminary or provisional and is subject to revision. It is being provided to meet the need for timely 112 | best science. The software has not received final approval by the U.S. Geological Survey (USGS). No warranty, expressed 113 | or implied, is made by the USGS or the U.S. Government as to the functionality of the software and related material nor 114 | shall the fact of release constitute any such warranty. The software is provided on the condition that neither the USGS 115 | nor the U.S. Government shall be held liable for any damages resulting from the authorized or unauthorized use of the 116 | software. 117 | 118 | The USGS provides no warranty, expressed or implied, as to the correctness of the furnished software or the suitability 119 | for any purpose. The software has been tested, but as with any complex software, there could be undetected errors. Users 120 | who find errors are requested to report them to the USGS. 121 | 122 | References to non-USGS products, trade names, and (or) services are provided for information purposes only and do not 123 | constitute endorsement or warranty, express or implied, by the USGS, U.S. Department of Interior, or U.S. Government, as 124 | to their suitability, content, usefulness, functioning, completeness, or accuracy. 
125 | 126 | Although this program has been used by the USGS, no warranty, expressed or implied, is made by the USGS or the United 127 | States Government as to the accuracy and functioning of the program and related program material nor shall the fact of 128 | distribution constitute any such warranty, and no responsibility is assumed by the USGS in connection therewith. 129 | 130 | This software is provided "AS IS." 131 | 132 | 133 | Author 134 | ------ 135 | 136 | Jeremiah Lant 137 | 138 | 139 | .. _Python: https://www.python.org/ 140 | .. _pytest: http://pytest.org/latest/ 141 | .. _Click: https://click.palletsprojects.com/ 142 | .. _Sphinx: http://sphinx-doc.org/ 143 | .. _public domain: https://en.wikipedia.org/wiki/Public_domain 144 | .. _CC0 1.0: http://creativecommons.org/publicdomain/zero/1.0/ 145 | .. _U.S. Geological Survey: https://www.usgs.gov/ 146 | .. _USGS: https://www.usgs.gov/ 147 | .. _U.S. Geological Survey (USGS): https://www.usgs.gov/ 148 | .. _United States Department of Interior: https://www.doi.gov/ 149 | .. _official USGS copyright policy: http://www.usgs.gov/visual-id/credit_usgs.html#copyright/ 150 | .. _U.S. Geological Survey (USGS) Software User Rights Notice: http://water.usgs.gov/software/help/notice/ 151 | .. _Python's download page: https://www.python.org/downloads/ 152 | .. _git: https://git-scm.com/ 153 | .. _Distutils: https://docs.python.org/3/library/distutils.html 154 | .. _Installing Python Modules: https://docs.python.org/3.5/install/ 155 | .. 
_How Installation Works: https://docs.python.org/3.5/install/#how-installation-works 156 | -------------------------------------------------------------------------------- /data/inputs/parameters_wolock.csv: -------------------------------------------------------------------------------- 1 | name,value,units,description 2 | scaling_parameter,10,millimeters,controls the rate of decline of transmissivity in the soil profile 3 | saturated_hydraulic_conductivity,150,millimeters/day,saturated hydraulic conductivity of the C horizon of the soil 4 | macropore_fraction,0.2,fraction,fraction of precipitation that can bypass the unsaturated subsurface zone and move directly into the saturated subsurface zone 5 | soil_depth_total,1,mm,total soil depth - could be 90 cm for tile depth 6 | soil_depth_ab_horizon,0.5,mm,soil depth of AB horizon - 25% thickness 7 | field_capacity_fraction,0.2,fraction,fraction of soil moisture or water content in the soil after excess water has drained away 8 | latitude,40.5,degrees,mean latitude of basin 9 | basin_area_total,3.07,square kilometers,total basin area 10 | impervious_area_fraction,0.3,fraction,fraction of impervious area of basin 11 | snowmelt_temperature_cutoff,32,degrees fahrenheit,temperature cutoff for snowpack accumulation and snowmelt 12 | snowmelt_rate_coeff,0.06,1/degrees fahrenheit,used to control rate of snow melt 13 | snowmelt_rate_coeff_with_rain,0.007,inches per degree fahrenheit,used to control rate of snow melt when raining 14 | channel_length_max,1.98,kilometers,maximum channel length 15 | channel_velocity_avg,10,kilometers/day,average channel velocity 16 | flow_initial,1,millimeters/day,initial river flow 17 | -------------------------------------------------------------------------------- /data/inputs/timeseries_wolock.csv: -------------------------------------------------------------------------------- 1 | date, temperature (celsius),precipitation (mm/day),pet (mm/day),flow_observed (mm/day) 2 | 
1980-01-22,-1.9,2.5,0.39,1.18 3 | 1980-01-23,-0.3,0,0.44,1.02 4 | 1980-01-24,-1.1,0,0.42,0.66 5 | 1980-01-25,-6.1,0,0.31,0.5 6 | 1980-01-26,-4.7,0,0.34,0.46 7 | 1980-01-27,-4.4,0,0.35,0.4 8 | 1980-01-28,-3.3,0,0.37,0.39 9 | 1980-01-29,-3.1,0,0.38,0.33 10 | 1980-01-30,-4.2,0,0.36,0.28 11 | 1980-01-31,-5.6,0,0.33,0.24 12 | 1980-02-01,-7.2,0,0.3,0.19 13 | 1980-02-02,-9.4,0,0.27,0.2 14 | 1980-02-03,-9.4,0,0.27,0.21 15 | 1980-02-04,-7.2,0,0.31,0.24 16 | 1980-02-05,-6.4,0,0.33,0.27 17 | 1980-02-06,-5.6,0,0.34,0.28 18 | 1980-02-07,-6.7,0,0.33,0.24 19 | 1980-02-08,-2.2,0,0.43,0.25 20 | 1980-02-09,-1.9,0,0.44,0.31 21 | 1980-02-10,-3.6,0,0.4,0.32 22 | 1980-02-11,-4.4,0,0.39,0.28 23 | 1980-02-12,-3.1,2.5,0.42,0.27 24 | 1980-02-13,-4.2,0,0.4,0.24 25 | 1980-02-14,-3.6,0,0.41,0.29 26 | 1980-02-15,-1.1,0,0.49,1.1 27 | 1980-02-16,0.3,12.7,0.54,1.26 28 | 1980-02-17,-4.4,0,0.41,0.64 29 | 1980-02-18,-7.8,0,0.33,0.34 30 | 1980-02-19,-5.3,0,0.39,0.79 31 | 1980-02-20,0,0,0.54,1.02 32 | 1980-02-21,1.1,0,0.59,0.87 33 | 1980-02-22,5.8,10.2,0.79,1.97 34 | 1980-02-23,0.6,0,0.58,1.42 35 | 1980-02-24,4.4,0,0.74,0.78 36 | 1980-02-25,3.9,0,0.72,0.58 37 | 1980-02-26,1.1,0,0.61,0.43 38 | 1980-02-27,-5.6,0,0.41,0.34 39 | 1980-02-28,-5.6,0,0.41,0.34 40 | 1980-02-29,-3.6,0,0.47,0.28 41 | 1980-03-01,-11.4,0,0.29,0.24 42 | 1980-03-02,-9.4,0,0.33,0.24 43 | 1980-03-03,-6.9,0,0.39,0.24 44 | 1980-03-04,-3.6,0,0.48,0.01 45 | 1980-03-05,1.1,2.5,0.65,0.28 46 | 1980-03-06,3.9,0,0.78,0.32 47 | 1980-03-07,1.9,0,0.69,0.23 48 | 1980-03-08,5,7.6,0.84,1.02 49 | 1980-03-09,9.7,0,1.14,1.73 50 | 1980-03-10,3.3,17.8,0.77,3.31 51 | 1980-03-11,4.4,2.5,0.83,7.87 52 | 1980-03-12,-1.7,0,0.57,1.89 53 | 1980-03-13,-2.2,0,0.56,1.1 54 | 1980-03-14,-2.5,15.2,0.56,2.05 55 | 1980-03-15,1.1,0,0.7,3.07 56 | 1980-03-16,-0.8,0,0.63,4.88 57 | 1980-03-17,1.7,7.6,0.74,10.23 58 | 1980-03-18,7.2,5.1,1.05,11.02 59 | 1980-03-19,2.5,0,0.79,2.52 60 | 1980-03-20,4.4,0,0.89,1.57 61 | 1980-03-21,6.1,58.4,1,41.72 62 | 1980-03-22,6.7,2.5,1.05,10.23 
63 | 1980-03-23,1.1,0,0.74,3.38 64 | 1980-03-24,5.3,15.2,0.97,3.15 65 | 1980-03-25,6.1,7.6,1.03,16.53 66 | 1980-03-26,2.2,0,0.81,2.68 67 | 1980-03-27,4.4,0,0.94,1.57 68 | 1980-03-28,6.1,0,1.05,1.18 69 | 1980-03-29,6.1,20.3,1.06,9.45 70 | 1980-03-30,8.1,0,1.21,4.8 71 | 1980-03-31,7.2,38.1,1.15,20.47 72 | 1980-04-01,1.9,0,0.84,15.74 73 | 1980-04-02,5.8,0,1.07,3.38 74 | 1980-04-03,7.2,0,1.18,1.89 75 | 1980-04-04,10.8,17.8,1.49,8.66 76 | 1980-04-05,11.7,0,1.58,2.76 77 | 1980-04-06,7.5,0,1.23,1.42 78 | 1980-04-07,10.3,0,1.47,1.02 79 | 1980-04-08,12.8,0,1.73,0.87 80 | 1980-04-09,14.2,50.8,1.9,24.4 81 | 1980-04-10,10.6,0,1.53,8.66 82 | 1980-04-11,13.9,0,1.89,2.44 83 | 1980-04-12,10.6,0,1.55,1.65 84 | 1980-04-13,15.3,0,2.09,1.26 85 | 1980-04-14,11.9,10.2,1.71,3.54 86 | 1980-04-15,11.7,2.5,1.7,4.09 87 | 1980-04-16,8.9,0,1.44,1.34 88 | 1980-04-17,2.5,0,0.97,0.87 89 | 1980-04-18,6.7,0,1.27,0.72 90 | 1980-04-19,9.7,0,1.54,0.57 91 | 1980-04-20,11.9,0,1.77,0.53 92 | 1980-04-21,16.4,0,2.36,0.5 93 | 1980-04-22,12.8,0,1.9,0.43 94 | 1980-04-23,8.3,0,1.45,0.37 95 | 1980-04-24,13.9,0,2.06,0.32 96 | 1980-04-25,15.6,0,2.3,0.31 97 | 1980-04-26,14.2,0,2.12,0.3 98 | 1980-04-27,12.2,10.2,1.89,0.87 99 | 1980-04-28,10,53.3,1.66,25.98 100 | 1980-04-29,8.6,17.8,1.53,17.32 101 | 1980-04-30,10,2.5,1.68,3.7 102 | 1980-05-01,12.2,0,1.93,2.28 103 | 1980-05-02,14.7,0,2.27,1.26 104 | 1980-05-03,13.9,0,2.17,0.94 105 | 1980-05-04,17.2,0,2.68,0.68 106 | 1980-05-05,15.3,2.5,2.39,0.57 107 | 1980-05-06,19.7,0,3.16,0.49 108 | 1980-05-07,20.8,5.1,3.4,0.68 109 | 1980-05-08,17.2,7.6,2.74,2.83 110 | 1980-05-09,8.3,0,1.58,0.79 111 | 1980-05-10,8.1,0,1.57,0.49 112 | 1980-05-11,11.9,0,2,0.43 113 | 1980-05-12,14.2,12.7,2.32,1.34 114 | 1980-05-13,18.1,12.7,2.96,4.8 115 | 1980-05-14,20.3,0,3.41,1.18 116 | 1980-05-15,15,0,2.47,0.57 117 | 1980-05-16,11.4,0,1.98,0.56 118 | 1980-05-17,14.4,0,2.4,0.43 119 | 1980-05-18,16.1,2.5,2.68,0.76 120 | 1980-05-19,14.4,2.5,2.42,0.37 121 | 1980-05-20,20,2.5,3.44,0.27 122 | 
1980-05-21,17.5,12.7,2.96,1.97 123 | 1980-05-22,12.8,0,2.22,0.72 124 | 1980-05-23,19.7,0,3.42,0.41 125 | 1980-05-24,22.8,0,4.15,0.35 126 | 1980-05-25,22.2,0,4.02,0.34 127 | 1980-05-26,19.2,0,3.35,0.24 128 | 1980-05-27,15.3,0,2.64,0.17 129 | 1980-05-28,15.8,0,2.73,0.23 130 | 1980-05-29,17.2,0,2.98,0.12 131 | 1980-05-30,16.4,0,2.85,0.12 132 | 1980-05-31,18.3,0,3.21,0.17 133 | 1980-06-01,20.3,0,3.65,0.17 134 | 1980-06-02,21.7,10.2,3.99,0.68 135 | 1980-06-03,21.9,5.1,4.05,0.54 136 | 1980-06-04,21.9,0,4.06,0.23 137 | 1980-06-05,15.8,0,2.79,0.12 138 | 1980-06-06,16.1,0,2.85,0.09 139 | 1980-06-07,17.8,0,3.17,0.24 140 | 1980-06-08,21.7,2.5,4.04,0.27 141 | 1980-06-09,15,10.2,2.67,0.74 142 | 1980-06-10,13.1,0,2.38,0.57 143 | 1980-06-11,10.8,0,2.07,0.2 144 | 1980-06-12,12.5,2.5,2.3,0.12 145 | 1980-06-13,15,2.5,2.69,0.09 146 | 1980-06-14,16.9,0,3.03,0.08 147 | 1980-06-15,19.2,0,3.5,0.08 148 | 1980-06-16,21.4,0,4.01,0.19 149 | 1980-06-17,16.9,0,3.04,0.07 150 | 1980-06-18,17.5,0,3.15,0.03 151 | 1980-06-19,16.7,0,3,0.22 152 | 1980-06-20,18.3,0,3.31,0.24 153 | 1980-06-21,17.8,0,3.21,0.09 154 | 1980-06-22,19.4,0,3.55,0.02 155 | 1980-06-23,20.6,0,3.82,0.01 156 | 1980-06-24,22.5,0,4.3,0.01 157 | 1980-06-25,23.9,0,4.69,0 158 | 1980-06-26,24.2,0,4.77,0 159 | 1980-06-27,23.3,2.5,4.51,0.14 160 | 1980-06-28,25.3,0,5.1,0.11 161 | 1980-06-29,22.8,25.4,4.37,0.55 162 | 1980-06-30,23.6,15.2,4.58,2.68 163 | 1980-07-01,19.7,0,3.59,0.54 164 | 1980-07-02,21.7,0,4.06,0.21 165 | 1980-07-03,23.3,0,4.48,0.17 166 | 1980-07-04,21.7,0,4.05,0.13 167 | 1980-07-05,23.3,10.2,4.47,0.61 168 | 1980-07-06,23.1,0,4.4,0.87 169 | 1980-07-07,17.2,0,3.05,0.19 170 | 1980-07-08,19.2,0,3.44,0.17 171 | 1980-07-09,22.5,0,4.21,0.13 172 | 1980-07-10,24.4,0,4.73,0.11 173 | 1980-07-11,25,0,4.9,0.1 174 | 1980-07-12,24.4,0,4.71,0.18 175 | 1980-07-13,20,0,3.57,0.09 176 | 1980-07-14,21.7,0,3.96,0.06 177 | 1980-07-15,23.3,0,4.36,0.12 178 | 1980-07-16,25.6,0,5.01,0.11 179 | 1980-07-17,28.6,2.5,6.02,0.12 180 | 
1980-07-18,25.6,0,4.98,0.07 181 | 1980-07-19,25.8,0,5.02,0.06 182 | 1980-07-20,26.9,0,5.36,0.04 183 | 1980-07-21,31.1,0,6.93,0.06 184 | 1980-07-22,30,10.2,6.45,0.39 185 | 1980-07-23,28.1,15.2,5.71,3.23 186 | 1980-07-24,22.8,0,4.09,0.4 187 | 1980-07-25,21.9,0,3.86,0.24 188 | 1980-07-26,23.9,0,4.35,0.14 189 | 1980-07-27,25.8,0,4.87,0.1 190 | 1980-07-28,26.7,0,5.13,0.08 191 | 1980-07-29,25.8,20.3,4.83,1.57 192 | 1980-07-30,21.9,0,3.77,0.34 193 | 1980-07-31,23.6,0,4.17,0.17 194 | 1980-08-01,24.7,0,4.45,0.15 195 | 1980-08-02,25.6,7.6,4.68,0.13 196 | 1980-08-03,26.4,0,4.89,0.37 197 | 1980-08-04,27.2,2.5,5.12,0.13 198 | 1980-08-05,27.2,0,5.09,0.66 199 | 1980-08-06,28.1,0,5.35,0.49 200 | 1980-08-07,25.6,0,4.56,0.17 201 | 1980-08-08,26.7,0,4.86,0.13 202 | 1980-08-09,28.6,0,5.43,0.11 203 | 1980-08-10,25.8,0,4.54,0.09 204 | 1980-08-11,24.7,5.1,4.22,0.08 205 | 1980-08-12,26.7,0,4.75,0.17 206 | 1980-08-13,23.3,0,3.83,0.11 207 | 1980-08-14,22.5,0,3.62,0.08 208 | 1980-08-15,23.3,2.5,3.78,0.41 209 | 1980-08-16,21.1,0,3.28,0.17 210 | 1980-08-17,18.1,0,2.71,0.08 211 | 1980-08-18,20.6,0,3.14,0.06 212 | 1980-08-19,22.2,0,3.45,0.07 213 | 1980-08-20,23.6,0,3.74,0.15 214 | 1980-08-21,21.9,0,3.34,0.04 215 | 1980-08-22,22.2,0,3.38,0.04 216 | 1980-08-23,21.7,0,3.26,0.06 217 | 1980-08-24,23.1,0,3.53,0.08 218 | 1980-08-25,23.9,2.5,3.69,0.05 219 | 1980-08-26,25.3,0,4,0.04 220 | 1980-08-27,25.6,0,4.04,0.02 221 | 1980-08-28,26.1,0,4.14,0.02 222 | 1980-08-29,26.4,0,4.19,0.02 223 | 1980-08-30,23.9,0,3.57,0.02 224 | 1980-08-31,26.7,0,4.21,0.02 225 | 1980-09-01,27.2,0,4.32,0.02 226 | 1980-09-02,28.6,0,4.68,0.02 227 | 1980-09-03,27.2,0,4.26,0.01 228 | 1980-09-04,22.8,0,3.22,0.01 229 | 1980-09-05,24.2,0,3.49,0.01 230 | 1980-09-06,22.5,0,3.12,0.02 231 | 1980-09-07,24.2,0,3.44,0.01 232 | 1980-09-08,19.4,0,2.53,0.01 233 | 1980-09-09,19.4,0,2.52,0.01 234 | 1980-09-10,20.6,0,2.69,0.01 235 | 1980-09-11,16.9,0,2.12,0.01 236 | 1980-09-12,18.1,0,2.27,0.01 237 | 1980-09-13,20,0,2.54,0.01 238 | 
1980-09-14,21.7,2.5,2.8,0.01 239 | 1980-09-15,22.5,0,2.92,0.02 240 | 1980-09-16,17.5,0,2.12,0.02 241 | 1980-09-17,17.2,25.4,2.07,0.53 242 | 1980-09-18,21.9,22.9,2.75,1.89 243 | 1980-09-19,15.8,0,1.87,0.13 244 | 1980-09-20,16.9,0,1.99,0.09 245 | 1980-09-21,18.9,0,2.23,0.09 246 | 1980-09-22,25,0,3.23,0.09 247 | 1980-09-23,26.4,0,3.5,0.06 248 | 1980-09-24,20.3,0,2.38,0.06 249 | 1980-09-25,16.4,7.6,1.85,0.09 250 | 1980-09-26,13.9,7.6,1.57,0.13 251 | 1980-09-27,14.2,0,1.59,0.04 252 | 1980-09-28,10.3,0,1.24,0.04 253 | 1980-09-29,13.3,0,1.48,0.04 254 | 1980-09-30,14.4,0,1.57,0.05 255 | 1980-10-01,18.1,0,1.96,0.06 256 | 1980-10-02,19.4,20.3,2.11,0.42 257 | 1980-10-03,18.1,10.2,1.93,0.44 258 | 1980-10-04,13.3,0,1.42,1.5 259 | 1980-10-05,11.9,0,1.29,0.28 260 | 1980-10-06,11.7,0,1.27,0.17 261 | 1980-10-07,11.4,0,1.24,0.12 262 | 1980-10-08,10.6,0,1.17,0.1 263 | 1980-10-09,13.1,0,1.35,0.1 264 | 1980-10-10,11.9,0,1.24,0.07 265 | 1980-10-11,10.3,0,1.12,0.11 266 | 1980-10-12,10.6,0,1.13,0.21 267 | 1980-10-13,10,0,1.08,0.19 268 | 1980-10-14,8.3,0,0.96,0.16 269 | 1980-10-15,9.2,0,1.01,0.15 270 | 1980-10-16,10.8,0,1.11,0.15 271 | 1980-10-17,15,0,1.43,0.15 272 | 1980-10-18,17.5,10.2,1.65,0.77 273 | 1980-10-19,19.2,0,1.82,0.46 274 | 1980-10-20,12.2,0,1.17,0.22 275 | 1980-10-21,9.7,0,0.99,0.17 276 | 1980-10-22,10.3,0,1.02,0.17 277 | 1980-10-23,6.9,0,0.82,0.13 278 | 1980-10-24,5,0,0.72,0.07 279 | 1980-10-25,6.7,55.9,0.8,14.17 280 | 1980-10-26,11.7,0,1.08,1.97 281 | 1980-10-27,6.1,0,0.76,0.54 282 | 1980-10-28,7.2,2.5,0.81,0.49 283 | 1980-10-29,6.7,0,0.78,0.28 284 | 1980-10-30,4.4,0,0.67,0.2 285 | 1980-10-31,3.3,0,0.62,0.17 286 | 1980-11-01,5.6,0,0.71,0.17 287 | 1980-11-02,5.8,0,0.71,0.12 288 | 1980-11-03,1.9,0,0.55,0.08 289 | 1980-11-04,3.6,7.6,0.61,0.52 290 | 1980-11-05,7.2,0,0.76,0.31 291 | 1980-11-06,3.6,0,0.6,0.13 292 | 1980-11-07,2.2,0,0.55,0.12 293 | 1980-11-08,12.2,0,1.01,0.16 294 | 1980-11-09,7.5,7.6,0.75,0.79 295 | 1980-11-10,5.8,2.5,0.67,0.46 296 | 1980-11-11,5,0,0.63,0.23 297 | 
1980-11-12,2.5,0,0.54,0.11 298 | 1980-11-13,4.4,0,0.6,0.06 299 | 1980-11-14,8.3,0,0.76,0.06 300 | 1980-11-15,11.4,0,0.92,0.06 301 | 1980-11-16,5,0,0.61,0.08 302 | 1980-11-17,-0.6,0,0.43,0.06 303 | 1980-11-18,-1.7,22.9,0.4,3.46 304 | 1980-11-19,1.4,0,0.48,0.87 305 | 1980-11-20,-1.1,0,0.41,0.46 306 | 1980-11-21,0.3,0,0.45,0.35 307 | 1980-11-22,1.1,2.5,0.47,0.27 308 | 1980-11-23,1.9,0,0.49,0.23 309 | 1980-11-24,2.8,35.6,0.51,8.66 310 | 1980-11-25,8.3,0,0.71,3.78 311 | 1980-11-26,4.4,0,0.56,1.26 312 | 1980-11-27,2.2,0,0.48,0.87 313 | 1980-11-28,1.7,10.2,0.47,2.99 314 | 1980-11-29,3.9,0,0.53,1.65 315 | 1980-11-30,2.5,0,0.49,0.94 316 | 1980-12-01,3.1,0,0.5,0.72 317 | 1980-12-02,5,0,0.56,0.65 318 | 1980-12-03,8.1,0,0.68,0.68 319 | 1980-12-04,-1.9,0,0.36,0.31 320 | 1980-12-05,-2.5,0,0.35,0.31 321 | 1980-12-06,-0.8,0,0.39,0.32 322 | 1980-12-07,2.5,0,0.47,0.31 323 | 1980-12-08,4.4,0,0.53,0.32 324 | 1980-12-09,10.6,0,0.78,0.32 325 | 1980-12-10,4.7,0,0.54,0.37 326 | 1980-12-11,1.4,0,0.44,0.28 327 | 1980-12-12,-4.4,0,0.31,0.26 328 | 1980-12-13,-0.3,0,0.39,0.26 329 | 1980-12-14,0.6,0,0.41,0.23 330 | 1980-12-15,-3.6,0,0.32,0.15 331 | 1980-12-16,-4.4,2.5,0.3,0.32 332 | 1980-12-17,-3.3,2.5,0.33,0.26 333 | 1980-12-18,-6.4,0,0.27,0.2 334 | 1980-12-19,-2.2,0,0.35,0.2 335 | 1980-12-20,-5.6,0,0.28,0.11 336 | 1980-12-21,-10.6,0,0.21,0.09 337 | 1980-12-22,-10.3,0,0.21,0.09 338 | 1980-12-23,-8.9,0,0.23,0.11 339 | 1980-12-24,-5,0,0.29,0.13 340 | 1980-12-25,-9.4,0,0.22,0.24 341 | 1980-12-26,-17.8,0,0.13,0.02 342 | 1980-12-27,-13.1,0,0.18,0.02 343 | 1980-12-28,-9.2,2.5,0.23,0.02 344 | 1980-12-29,0.3,0,0.41,0.03 345 | 1980-12-30,5.8,0,0.57,0.03 346 | 1980-12-31,-3.9,0,0.32,0.03 347 | 1981-01-01,-7.8,0,0.25,0.02 348 | 1981-01-02,-4.4,0,0.31,0.02 349 | 1981-01-03,-5.8,0,0.28,0.02 350 | 1981-01-04,-13.3,0,0.18,0.02 351 | 1981-01-05,-15.6,0,0.15,0.02 352 | 1981-01-06,-7.8,0,0.25,0.02 353 | 1981-01-07,-5.3,2.5,0.29,0.03 354 | 1981-01-08,-6.4,0,0.28,0.03 355 | 1981-01-09,-15,0,0.16,0.03 356 | 
1981-01-10,-13.9,0,0.17,0.03 357 | 1981-01-11,-11.7,0,0.2,0.02 358 | 1981-01-12,-16.1,0,0.15,0.02 359 | 1981-01-13,-18.3,0,0.14,0.02 360 | 1981-01-14,-15.6,0,0.16,0.02 361 | 1981-01-15,-5,0,0.31,0.02 362 | 1981-01-16,-2.5,0,0.37,0.02 363 | 1981-01-17,-5,0,0.32,0.02 364 | 1981-01-18,-10.3,0,0.23,0.02 365 | 1981-01-19,-6.4,0,0.29,0.02 366 | 1981-01-20,1.7,0,0.48,0.02 367 | 1981-01-21,-0.3,0,0.43,0.02 368 | 1981-01-22,-5.6,0,0.31,0.02 369 | 1981-01-23,-3.6,0,0.35,0.02 370 | 1981-01-24,1.7,0,0.5,0.02 371 | 1981-01-25,-0.3,0,0.44,0.02 372 | 1981-01-26,1.9,0,0.51,0.02 373 | 1981-01-27,2.8,0,0.54,0.02 374 | 1981-01-28,2.2,0,0.53,0.02 375 | 1981-01-29,1.1,0,0.49,0.02 376 | 1981-01-30,-4.4,0,0.35,0.02 377 | 1981-01-31,-7.5,0,0.29,0.02 378 | 1981-02-01,-4.7,2.5,0.35,0.03 379 | 1981-02-02,-0.8,35.6,0.45,11.02 380 | 1981-02-03,1.4,0,0.52,5.35 381 | 1981-02-04,-9.2,0,0.27,2.83 382 | 1981-02-05,-7.8,0,0.3,1.89 383 | 1981-02-06,-10.3,0,0.26,1.5 384 | 1981-02-07,-4.2,0,0.38,0.87 385 | 1981-02-08,-0.8,5.1,0.47,2.99 386 | 1981-02-09,-0.3,0,0.49,11.02 387 | 1981-02-10,-5.6,0,0.36,5.59 388 | 1981-02-11,3.3,22.9,0.62,13.38 389 | 1981-02-12,4.2,0,0.66,11.81 390 | 1981-02-13,-8.1,0,0.31,7.24 391 | 1981-02-14,-6.1,0,0.36,3.86 392 | 1981-02-15,-2.8,0,0.44,2.13 393 | 1981-02-16,0.3,0,0.54,1.18 394 | 1981-02-17,3.9,0,0.68,0.59 395 | 1981-02-18,8.9,0,0.93,0.32 396 | 1981-02-19,11.7,5.1,1.11,0.23 397 | 1981-02-20,11.9,22.9,1.14,14.17 398 | 1981-02-21,10,2.5,1.02,10.23 399 | 1981-02-22,11.4,2.5,1.12,9.45 400 | 1981-02-23,7.8,20.3,0.9,11.02 401 | 1981-02-24,6.1,2.5,0.82,14.17 402 | 1981-02-25,8.1,0,0.94,7.87 403 | 1981-02-26,5,0,0.78,5.98 404 | 1981-02-27,4.4,0,0.76,3.46 405 | 1981-02-28,3.6,0,0.72,2.28 406 | 1981-03-01,1.9,0,0.66,2.6 407 | 1981-03-02,3.3,0,0.72,1.73 408 | 1981-03-03,2.2,0,0.68,1.1 409 | 1981-03-04,-3.6,0,0.48,0.68 410 | 1981-03-05,-1.4,2.5,0.55,0.79 411 | 1981-03-06,-1.7,0,0.55,0.94 412 | 1981-03-07,0.3,0,0.62,1.18 413 | 1981-03-08,1.7,0,0.69,2.6 414 | 1981-03-09,5,0,0.85,2.91 
415 | 1981-03-10,4.2,0,0.81,1.73 416 | 1981-03-11,2.8,2.5,0.75,0.94 417 | 1981-03-12,-0.8,0,0.61,0.87 418 | 1981-03-13,2.2,0,0.74,0.71 419 | 1981-03-14,5.6,0,0.92,0.54 420 | 1981-03-15,-1.9,0,0.58,0.41 421 | 1981-03-16,4.2,2.5,0.85,0.56 422 | 1981-03-17,1.7,0,0.74,0.4 423 | 1981-03-18,-3.3,0,0.55,0.3 424 | 1981-03-19,-3.1,0,0.56,0.3 425 | 1981-03-20,-3.6,0,0.54,0.28 426 | 1981-03-21,-3.1,0,0.56,0.26 427 | 1981-03-22,1.9,0,0.78,0.23 428 | 1981-03-23,4.7,0,0.93,0.24 429 | 1981-03-24,2.8,0,0.83,0.22 430 | 1981-03-25,5.3,0,0.98,0.2 431 | 1981-03-26,2.5,0,0.83,0.18 432 | 1981-03-27,5,0,0.98,0.23 433 | 1981-03-28,6.7,0,1.09,0.14 434 | 1981-03-29,8.9,0,1.26,0.16 435 | 1981-03-30,13.9,10.2,1.74,0.76 436 | 1981-03-31,13.9,10.2,1.75,0.76 437 | 1981-04-01,14.7,15.2,1.85,1.73 438 | 1981-04-02,14.2,0,1.81,1.97 439 | 1981-04-03,9.2,0,1.33,0.79 440 | 1981-04-04,14.7,0,1.89,0.58 441 | 1981-04-05,18.6,12.7,2.43,1.34 442 | 1981-04-06,12.8,0,1.71,1.97 443 | 1981-04-07,4.4,0,1.02,1.02 444 | 1981-04-08,8.6,0,1.33,0.61 445 | 1981-04-09,13.9,0,1.86,0.54 446 | 1981-04-10,12.2,0,1.69,0.46 447 | 1981-04-11,12.5,2.5,1.73,0.4 448 | 1981-04-12,15.6,7.6,2.12,0.61 449 | 1981-04-13,8.1,2.5,1.34,0.54 450 | 1981-04-14,8.3,20.3,1.36,6.06 451 | 1981-04-15,6.9,0,1.26,3.86 452 | 1981-04-16,3.1,0,1,1.89 453 | 1981-04-17,9.4,0,1.49,0.87 454 | 1981-04-18,12.8,0,1.85,0.72 455 | 1981-04-19,15.3,0,2.18,0.53 456 | 1981-04-20,12.5,0,1.84,0.46 457 | 1981-04-21,6.1,0,1.25,0.32 458 | 1981-04-22,3.3,0,1.05,0.29 459 | 1981-04-23,8.9,7.6,1.5,0.68 460 | 1981-04-24,10,2.5,1.62,1.42 461 | 1981-04-25,12.5,0,1.9,1.18 462 | 1981-04-26,6.7,0,1.33,1.26 463 | 1981-04-27,10.6,0,1.71,1.26 464 | 1981-04-28,12.5,0,1.93,1.5 465 | 1981-04-29,15.6,0,2.36,1.73 466 | 1981-04-30,15.3,0,2.33,1.1 467 | 1981-05-01,12.2,5.1,1.93,1.02 468 | 1981-05-02,13.9,12.7,2.16,5.27 469 | 1981-05-03,10.6,0,1.77,3.23 470 | 1981-05-04,13.6,0,2.14,1.97 471 | 1981-05-05,17.8,0,2.79,1.57 472 | 1981-05-06,15.8,0,2.48,1.42 473 | 1981-05-07,12.5,0,2.03,0.94 
474 | 1981-05-08,11.4,0,1.91,0.76 475 | 1981-05-09,11.1,0,1.88,0.54 476 | 1981-05-10,12.2,0,2.03,0.39 477 | 1981-05-11,16.4,63.5,2.64,18.11 478 | 1981-05-12,16.4,0,2.66,10.23 479 | 1981-05-13,13.1,0,2.17,2.28 480 | 1981-05-14,14.2,0,2.34,1.34 481 | 1981-05-15,17.5,25.4,2.88,4.8 482 | 1981-05-16,19.4,0,3.26,6.22 483 | 1981-05-17,13.6,0,2.28,1.73 484 | 1981-05-18,13.6,0,2.29,1.18 485 | 1981-05-19,13.3,0,2.26,0.87 486 | 1981-05-20,11.7,0,2.06,0.73 487 | 1981-05-21,14.2,0,2.41,0.61 488 | 1981-05-22,15,0,2.54,0.5 489 | 1981-05-23,17.2,0,2.92,0.41 490 | 1981-05-24,16.9,0,2.88,0.31 491 | 1981-05-25,20.3,0,3.57,0.27 492 | 1981-05-26,22.2,0,4.03,0.2 493 | 1981-05-27,23.1,0,4.28,0.21 494 | 1981-05-28,23.6,0,4.42,0.18 495 | 1981-05-29,21.7,0,3.95,0.25 496 | 1981-05-30,19.7,0,3.5,0.16 497 | 1981-05-31,23.3,0,4.38,0.16 498 | 1981-06-01,17.8,0,3.12,0.02 499 | 1981-06-02,18.1,22.9,3.19,2.76 500 | 1981-06-03,15.3,5.1,2.69,0.64 501 | 1981-06-04,17.2,0,3.03,0.79 502 | 1981-06-05,21.4,0,3.94,0.3 503 | 1981-06-06,22.5,0,4.23,0.24 504 | 1981-06-07,21.1,0,3.89,0.18 505 | 1981-06-08,17.2,0,3.06,0.05 506 | 1981-06-09,20,2.5,3.65,0.59 507 | 1981-06-10,21.9,0,4.11,0.31 508 | 1981-06-11,17.5,0,3.13,0.2 509 | 1981-06-12,20.6,0,3.8,0.03 510 | 1981-06-13,22.2,0,4.2,0.01 511 | 1981-06-14,22.2,17.8,4.21,2.13 512 | 1981-06-15,17.2,2.5,3.09,0.79 513 | 1981-06-16,22.8,0,4.37,0.28 514 | 1981-06-17,27.8,0,5.97,0.18 515 | 1981-06-18,20,0,3.68,0.12 516 | 1981-06-19,21.4,0,4.02,0.1 517 | 1981-06-20,21.9,7.6,4.14,0.7 518 | 1981-06-21,21.1,0,3.94,0.27 519 | 1981-06-22,23.3,2.5,4.52,0.61 520 | 1981-06-23,24.7,0,4.93,0.2 521 | 1981-06-24,19.4,0,3.55,0.13 522 | 1981-06-25,20.8,20.3,3.87,2.76 523 | 1981-06-26,22.8,0,4.38,1.34 524 | 1981-06-27,16.4,0,2.94,0.29 525 | 1981-06-28,19.4,0,3.54,0.17 526 | 1981-06-29,20.8,0,3.86,0.13 527 | 1981-06-30,22.5,0,4.28,0.12 528 | 1981-07-01,22.8,7.6,4.36,0.43 529 | 1981-07-02,23.9,0,4.66,0.24 530 | 1981-07-03,23.9,0,4.65,0.13 531 | 1981-07-04,25,43.2,4.97,7.16 532 | 
1981-07-05,21.1,2.5,3.9,3.23 533 | 1981-07-06,22.2,0,4.16,0.87 534 | 1981-07-07,24.2,0,4.7,0.54 535 | 1981-07-08,26.1,0,5.28,0.39 536 | 1981-07-09,26.9,0,5.54,0.28 537 | 1981-07-10,28.6,0,6.14,0.25 538 | 1981-07-11,23.9,0,4.58,0.28 539 | 1981-07-12,25.6,0,5.07,0.13 540 | 1981-07-13,26.7,15.2,5.41,1.81 541 | 1981-07-14,26.1,0,5.2,0.7 542 | 1981-07-15,21.1,0,3.8,0.13 543 | 1981-07-16,20.6,0,3.68,0.13 544 | 1981-07-17,21.7,0,3.92,0.1 545 | 1981-07-18,23.6,0,4.4,0.09 546 | 1981-07-19,25.3,0,4.87,0.07 547 | 1981-07-20,25.8,25.4,5.01,2.13 548 | 1981-07-21,24.4,7.6,4.57,4.33 549 | 1981-07-22,24.4,0,4.56,0.54 550 | 1981-07-23,20.8,0,3.63,0.2 551 | 1981-07-24,20,0,3.44,0.13 552 | 1981-07-25,20.6,0,3.56,0.1 553 | 1981-07-26,21.9,0,3.84,0.09 554 | 1981-07-27,25.3,22.9,4.72,5.75 555 | 1981-07-28,22.2,0,3.88,0.51 556 | 1981-07-29,20.8,5.1,3.54,0.79 557 | 1981-07-30,19.4,0,3.23,0.17 558 | 1981-07-31,20.3,0,3.4,0.13 559 | 1981-08-01,21.1,0,3.56,0.1 560 | 1981-08-02,21.9,0,3.72,0.08 561 | 1981-08-03,22.8,0,3.91,0.06 562 | 1981-08-04,22.2,0,3.75,0.06 563 | 1981-08-05,24.7,10.2,4.36,2.44 564 | 1981-08-06,23.3,0,3.98,0.19 565 | 1981-08-07,21.9,0,3.63,0.1 566 | 1981-08-08,23.1,15.2,3.88,3.07 567 | 1981-08-09,20,0,3.19,0.79 568 | 1981-08-10,23.6,0,3.96,0.18 569 | 1981-08-11,24.4,0,4.14,0.12 570 | 1981-08-12,25,0,4.28,0.37 571 | 1981-08-13,20.6,0,3.24,0.17 572 | 1981-08-14,22.2,0,3.55,0.11 573 | 1981-08-15,23.9,0,3.92,0.1 574 | 1981-08-16,24.7,0,4.1,0.12 575 | 1981-08-17,20.3,0,3.1,0.08 576 | 1981-08-18,16.9,0,2.5,0.03 577 | 1981-08-19,18.9,0,2.81,0 578 | 1981-08-20,20.3,0,3.04,0 579 | 1981-08-21,18.3,0,2.67,0 580 | 1981-08-22,20.3,0,3.01,0 581 | 1981-08-23,18.1,0,2.61,0 582 | 1981-08-24,20,0,2.91,0 583 | 1981-08-25,19.7,0,2.84,0 584 | 1981-08-26,20,0,2.88,0 585 | 1981-08-27,20,0,2.86,0 586 | 1981-08-28,22.5,0,3.31,0 587 | 1981-08-29,22.5,0,3.29,0 588 | 1981-08-30,23.9,2.5,3.57,0.02 589 | 1981-08-31,23.1,0,3.37,0.13 590 | 1981-09-01,21.9,0,3.11,0.09 591 | 1981-09-02,22.8,0,3.26,0.09 592 
| 1981-09-03,20.3,0,2.78,0.07 593 | 1981-09-04,19.7,0,2.66,0.06 594 | 1981-09-05,18.6,0,2.46,0.06 595 | 1981-09-06,20.6,0,2.77,0.03 596 | 1981-09-07,20.8,0,2.78,0.03 597 | 1981-09-08,20.6,20.3,2.73,4.33 598 | 1981-09-09,19.2,0,2.48,1.89 599 | 1981-09-10,15,0,1.9,0.15 600 | 1981-09-11,15.8,0,1.98,0.12 601 | 1981-09-12,20.6,0,2.65,0.12 602 | 1981-09-13,20.8,0,2.67,0.09 603 | 1981-09-14,20.3,0,2.56,0.08 604 | 1981-09-15,23.6,12.7,3.12,0.51 605 | 1981-09-16,17.2,17.8,2.08,9.45 606 | 1981-09-17,15.3,0,1.84,1.1 607 | 1981-09-18,16.7,0,1.99,3.54 608 | 1981-09-19,16.1,0,1.9,0.79 609 | 1981-09-20,14.2,0,1.68,0.37 610 | 1981-09-21,14.2,0,1.67,0.25 611 | 1981-09-22,14.2,0,1.65,0.28 612 | 1981-09-23,16.7,0,1.92,0.33 613 | 1981-09-24,12.5,0,1.47,0.13 614 | 1981-09-25,11.9,0,1.4,0.09 615 | 1981-09-26,14.4,0,1.62,0.09 616 | 1981-09-27,15.8,0,1.76,0.09 617 | 1981-09-28,18.9,0,2.11,0.09 618 | 1981-09-29,10.6,0,1.25,0.09 619 | 1981-09-30,10,0,1.2,0.06 620 | 1981-10-01,10,10.2,1.19,0.02 621 | 1981-10-02,12.2,5.1,1.35,3.15 622 | 1981-10-03,10.6,0,1.21,0.32 623 | 1981-10-04,7.8,0,1.01,0.13 624 | 1981-10-05,9.4,0,1.11,0.14 625 | 1981-10-06,15.3,10.2,1.59,0.42 626 | 1981-10-07,14.4,0,1.49,1.1 627 | 1981-10-08,11.7,0,1.25,0.18 628 | 1981-10-09,8.3,0,1,0.1 629 | 1981-10-10,8.3,0,0.99,0.05 630 | 1981-10-11,7.5,0,0.94,0.03 631 | 1981-10-12,8.1,0,0.97,0.01 632 | 1981-10-13,5,0,0.79,0 633 | 1981-10-14,6.9,0,0.88,0 634 | 1981-10-15,8.9,0,0.99,0 635 | 1981-10-16,8.3,0,0.95,0 636 | 1981-10-17,11.7,0,1.16,0 637 | 1981-10-18,8.9,20.3,0.97,2.6 638 | 1981-10-19,9.4,0,0.99,1.57 639 | 1981-10-20,3.9,0,0.7,0.28 640 | 1981-10-21,5.6,0,0.77,0.17 641 | 1981-10-22,10.3,0,1.02,0.14 642 | 1981-10-23,12.5,15.2,1.16,1.5 643 | 1981-10-24,7.8,0,0.86,1.34 644 | 1981-10-25,4.2,0,0.69,0.39 645 | 1981-10-26,6.9,10.2,0.8,2.36 646 | 1981-10-27,12.8,33,1.15,11.02 647 | 1981-10-28,15.3,0,1.33,5.51 648 | 1981-10-29,10.8,0,1,1.18 649 | 1981-10-30,6.9,0,0.78,0.68 650 | 1981-10-31,7.5,0,0.8,0.5 651 | 
1981-11-01,8.3,0,0.84,0.41 652 | 1981-11-02,9.2,0,0.88,0.35 653 | 1981-11-03,12.2,0,1.05,0.28 654 | 1981-11-04,9.2,0,0.86,0.26 655 | 1981-11-05,10.8,0,0.95,0.24 656 | 1981-11-06,10.3,12.7,0.91,2.6 657 | 1981-11-07,8.6,0,0.82,0.61 658 | 1981-11-08,6.4,0,0.71,0.43 659 | 1981-11-09,6.9,0,0.72,0.39 660 | 1981-11-10,7.8,0,0.76,0.35 661 | 1981-11-11,3.3,0,0.57,0.3 662 | 1981-11-12,5.6,0,0.65,0.28 663 | 1981-11-13,2.5,0,0.54,0.24 664 | 1981-11-14,3.9,0,0.58,0.24 665 | 1981-11-15,10,10.2,0.84,0.87 666 | 1981-11-16,9.7,0,0.82,1.34 667 | 1981-11-17,5.8,0,0.64,0.57 668 | 1981-11-18,6.1,2.5,0.65,0.51 669 | 1981-11-19,9.4,0,0.79,0.39 670 | 1981-11-20,10.3,15.2,0.83,1.42 671 | 1981-11-21,7.8,2.5,0.71,0.76 672 | 1981-11-22,5,5.1,0.59,0.51 673 | 1981-11-23,3.1,0,0.52,0.43 674 | 1981-11-24,-1.9,0,0.38,0.38 675 | 1981-11-25,-0.8,0,0.41,0.35 676 | 1981-11-26,-1.1,0,0.4,0.29 677 | 1981-11-27,0.3,0,0.43,0.31 678 | 1981-11-28,6.9,0,0.65,0.31 679 | 1981-11-29,2.5,0,0.49,0.26 680 | 1981-11-30,2.8,0,0.5,0.19 681 | 1981-12-01,-1.7,22.9,0.37,3.86 682 | 1981-12-02,2.2,20.3,0.47,13.38 683 | 1981-12-03,6.1,0,0.6,1.89 684 | 1981-12-04,3.1,0,0.5,1.02 685 | 1981-12-05,3.9,0,0.52,0.71 686 | 1981-12-06,2.8,0,0.48,0.53 687 | 1981-12-07,0.8,0,0.43,0.43 688 | 1981-12-08,3.9,5.1,0.52,0.87 689 | 1981-12-09,1.9,0,0.46,0.55 690 | 1981-12-10,-2.5,7.6,0.34,0.36 691 | 1981-12-11,-1.9,0,0.36,0.28 692 | 1981-12-12,0.3,0,0.41,0.25 693 | 1981-12-13,-2.2,2.5,0.35,0.21 694 | 1981-12-14,-2.8,15.2,0.34,0.47 695 | 1981-12-15,-2.5,35.6,0.34,20.47 696 | 1981-12-16,1.4,2.5,0.43,14.17 697 | 1981-12-17,-1.7,0,0.36,4.09 698 | 1981-12-18,-3.1,0,0.33,2.36 699 | 1981-12-19,-5.6,0,0.28,1.5 700 | 1981-12-20,-7.8,0,0.24,1.02 701 | 1981-12-21,-9.2,0,0.23,0.79 702 | 1981-12-22,-7.8,0,0.24,0.76 703 | 1981-12-23,1.1,5.1,0.43,5.35 704 | 1981-12-24,3.9,0,0.51,3.07 705 | 1981-12-25,1.1,0,0.43,1.81 706 | 1981-12-26,-3.1,0,0.33,1.18 707 | 1981-12-27,-1.9,2.5,0.35,1.81 708 | 1981-12-28,-0.3,0,0.39,1.81 709 | 1981-12-29,1.9,0,0.45,1.34 710 
| 1981-12-30,-1.1,0,0.38,0.94 711 | 1981-12-31,-2.8,0,0.34,0.74 712 | -------------------------------------------------------------------------------- /data/inputs/twi_wolock.csv: -------------------------------------------------------------------------------- 1 | bin,twi,proportion,cells 2 | 1,11.6817837,0.003704784,0.370478397 3 | 2,11.21175,0.005604416,0.560441613 4 | 3,10.7417173,0.008413193,0.841319282 5 | 4,10.2716837,0.012517415,1.2517415 6 | 5,9.80165005,0.018429315,1.84293147 7 | 6,9.33161736,0.026795687,2.67956872 8 | 7,8.86158371,0.038372401,3.83724011 9 | 8,8.39155006,0.053925101,5.39251007 10 | 9,7.9215169,0.073987022,7.3987022 11 | 10,7.45148373,0.098363318,9.83633175 12 | 11,6.98145008,0.125227734,12.5227734 13 | 12,6.51141691,0.149650559,14.9650559 14 | 13,6.04138327,0.161592662,16.1592662 15 | 14,5.5713501,0.14437668,14.437668 16 | 15,5.10131645,0.079039745,7.9039745 17 | 16,0,0,0 18 | 17,0,0,0 19 | 18,0,0,0 20 | 19,0,0,0 21 | 20,0,0,0 22 | -------------------------------------------------------------------------------- /data/modelconfig.ini: -------------------------------------------------------------------------------- 1 | # Model configuration file 2 | # ------------------------ 3 | # This file contains all the information necessary to complete a model run: 4 | # - location of input files 5 | # - model specific options/flags for various calculations 6 | # - location to write output files along with output format 7 | # 8 | # This data file is in the standard INI file format, more information 9 | # can be found at https://en.wikipedia.org/wiki/INI_file 10 | # 11 | # This file contains different sections that group together common data 12 | # and/or options to make it easier for the user to find and edit. Each 13 | # section contains key-value pairs. For example: 14 | # 15 | # [Section Name] 16 | # key = value 17 | # 18 | # - Only edit the values, and DO NOT edit the sections or keys. 
19 | # 20 | # - To use a value multiple times, use the following syntax: 21 | # ${Section Name:key} 22 | # 23 | # ------------------------------------------------------------------------- 24 | 25 | # INPUTS 26 | # ------------------------------------------------------------------------- 27 | [Inputs] 28 | # Input directory location, used as a shortcut for the input file paths below 29 | input_dir = /home/jlant/jeremiah/projects/topmodelpy/data/inputs 30 | 31 | # Model parameter data file (*.csv) 32 | parameters_file = ${Inputs:input_dir}/parameters_wolock.csv 33 | 34 | # Climate timeseries data file(s) (*.csv) 35 | timeseries_file = ${Inputs:input_dir}/timeseries_wolock.csv 36 | 37 | # Topographic wetness index (TWI) file(s) (*.csv) 38 | twi_file = ${Inputs:input_dir}/twi_wolock.csv 39 | 40 | # OUTPUTS 41 | # ------------------------------------------------------------------------- 42 | [Outputs] 43 | # Output directory location 44 | output_dir = /home/jlant/jeremiah/projects/topmodelpy/data/outputs 45 | 46 | # Output filename for timeseries of main results (*.csv) 47 | output_filename = output.csv 48 | 49 | # Output filename for timeseries of saturation deficit locals (*.csv) 50 | # Note: This file has the same number of columns as the number of twi bins 51 | output_filename_saturation_deficit_locals = output_saturation_deficit_locals.csv 52 | 53 | # Output filename for timeseries of unsaturated zone storages (*.csv) 54 | # Note: This file has the same number of columns as the number of twi bins 55 | output_filename_unsaturated_zone_storages = output_unsaturated_zone_storages.csv 56 | 57 | # Output filename for timeseries of root zone storages (*.csv) 58 | # Note: This file has the same number of columns as the number of twi bins 59 | output_filename_root_zone_storages = output_root_zone_storages.csv 60 | 61 | # Output html report for timeseries of main results (*.html) 62 | output_report = report.html 63 | 64 | # OPTIONS 65 | #
------------------------------------------------------------------------- 66 | [Options] 67 | # Potential evapotranspiration (PET) calculation option, hamon only right now 68 | option_pet = hamon 69 | 70 | # Snowmelt calculation, yes | no 71 | option_snowmelt = no 72 | 73 | # Write output matrices of 74 | # saturation deficit local (mm) 75 | # root zone storage (mm) 76 | # unsaturated zone storage (mm) 77 | option_write_output_matrices = no 78 | -------------------------------------------------------------------------------- /data/outputs/flow_duration_curve.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/jlant/topmodelpy/77440b5c1fb066435fad78b8b95ba6e7157fb055/data/outputs/flow_duration_curve.png -------------------------------------------------------------------------------- /data/outputs/flow_duration_curved_observed_vs_predicted.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/jlant/topmodelpy/77440b5c1fb066435fad78b8b95ba6e7157fb055/data/outputs/flow_duration_curved_observed_vs_predicted.png -------------------------------------------------------------------------------- /data/outputs/flow_observed.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/jlant/topmodelpy/77440b5c1fb066435fad78b8b95ba6e7157fb055/data/outputs/flow_observed.png -------------------------------------------------------------------------------- /data/outputs/flow_observed_vs_flow_predicted.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/jlant/topmodelpy/77440b5c1fb066435fad78b8b95ba6e7157fb055/data/outputs/flow_observed_vs_flow_predicted.png -------------------------------------------------------------------------------- /data/outputs/flow_predicted.png: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/jlant/topmodelpy/77440b5c1fb066435fad78b8b95ba6e7157fb055/data/outputs/flow_predicted.png -------------------------------------------------------------------------------- /data/outputs/pet.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/jlant/topmodelpy/77440b5c1fb066435fad78b8b95ba6e7157fb055/data/outputs/pet.png -------------------------------------------------------------------------------- /data/outputs/precip_minus_pet.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/jlant/topmodelpy/77440b5c1fb066435fad78b8b95ba6e7157fb055/data/outputs/precip_minus_pet.png -------------------------------------------------------------------------------- /data/outputs/precipitation.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/jlant/topmodelpy/77440b5c1fb066435fad78b8b95ba6e7157fb055/data/outputs/precipitation.png -------------------------------------------------------------------------------- /data/outputs/saturation_deficit_avgs.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/jlant/topmodelpy/77440b5c1fb066435fad78b8b95ba6e7157fb055/data/outputs/saturation_deficit_avgs.png -------------------------------------------------------------------------------- /data/outputs/temperature.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/jlant/topmodelpy/77440b5c1fb066435fad78b8b95ba6e7157fb055/data/outputs/temperature.png -------------------------------------------------------------------------------- /docs/Makefile: -------------------------------------------------------------------------------- 1 | # Makefile for 
Sphinx documentation 2 | # 3 | 4 | # You can set these variables from the command line. 5 | SPHINXOPTS = 6 | SPHINXBUILD = sphinx-build 7 | PAPER = 8 | BUILDDIR = _build 9 | 10 | # User-friendly check for sphinx-build 11 | ifeq ($(shell which $(SPHINXBUILD) >/dev/null 2>&1; echo $$?), 1) 12 | $(error The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx installed, then set the SPHINXBUILD environment variable to point to the full path of the '$(SPHINXBUILD)' executable. Alternatively you can add the directory with the executable to your PATH. If you don't have Sphinx installed, grab it from http://sphinx-doc.org/) 13 | endif 14 | 15 | # Internal variables. 16 | PAPEROPT_a4 = -D latex_paper_size=a4 17 | PAPEROPT_letter = -D latex_paper_size=letter 18 | ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . 19 | # the i18n builder cannot share the environment and doctrees with the others 20 | I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . 21 | 22 | .PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest coverage gettext 23 | 24 | help: 25 | @echo "Please use \`make ' where is one of" 26 | @echo " html to make standalone HTML files" 27 | @echo " dirhtml to make HTML files named index.html in directories" 28 | @echo " singlehtml to make a single large HTML file" 29 | @echo " pickle to make pickle files" 30 | @echo " json to make JSON files" 31 | @echo " htmlhelp to make HTML files and a HTML help project" 32 | @echo " qthelp to make HTML files and a qthelp project" 33 | @echo " applehelp to make an Apple Help Book" 34 | @echo " devhelp to make HTML files and a Devhelp project" 35 | @echo " epub to make an epub" 36 | @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter" 37 | @echo " latexpdf to make LaTeX files and run them through pdflatex" 38 | @echo " latexpdfja to make LaTeX files and run them through platex/dvipdfmx" 39 | 
@echo " text to make text files" 40 | @echo " man to make manual pages" 41 | @echo " texinfo to make Texinfo files" 42 | @echo " info to make Texinfo files and run them through makeinfo" 43 | @echo " gettext to make PO message catalogs" 44 | @echo " changes to make an overview of all changed/added/deprecated items" 45 | @echo " xml to make Docutils-native XML files" 46 | @echo " pseudoxml to make pseudoxml-XML files for display purposes" 47 | @echo " linkcheck to check all external links for integrity" 48 | @echo " doctest to run all doctests embedded in the documentation (if enabled)" 49 | @echo " coverage to run coverage check of the documentation (if enabled)" 50 | 51 | clean: 52 | rm -rf $(BUILDDIR)/* 53 | 54 | html: 55 | $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html 56 | @echo 57 | @echo "Build finished. The HTML pages are in $(BUILDDIR)/html." 58 | 59 | dirhtml: 60 | $(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml 61 | @echo 62 | @echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml." 63 | 64 | singlehtml: 65 | $(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml 66 | @echo 67 | @echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml." 68 | 69 | pickle: 70 | $(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle 71 | @echo 72 | @echo "Build finished; now you can process the pickle files." 73 | 74 | json: 75 | $(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json 76 | @echo 77 | @echo "Build finished; now you can process the JSON files." 78 | 79 | htmlhelp: 80 | $(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp 81 | @echo 82 | @echo "Build finished; now you can run HTML Help Workshop with the" \ 83 | ".hhp project file in $(BUILDDIR)/htmlhelp." 
84 | 85 | qthelp: 86 | $(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp 87 | @echo 88 | @echo "Build finished; now you can run "qcollectiongenerator" with the" \ 89 | ".qhcp project file in $(BUILDDIR)/qthelp, like this:" 90 | @echo "# qcollectiongenerator $(BUILDDIR)/qthelp/topmodelpy.qhcp" 91 | @echo "To view the help file:" 92 | @echo "# assistant -collectionFile $(BUILDDIR)/qthelp/topmodelpy.qhc" 93 | 94 | applehelp: 95 | $(SPHINXBUILD) -b applehelp $(ALLSPHINXOPTS) $(BUILDDIR)/applehelp 96 | @echo 97 | @echo "Build finished. The help book is in $(BUILDDIR)/applehelp." 98 | @echo "N.B. You won't be able to view it unless you put it in" \ 99 | "~/Library/Documentation/Help or install it in your application" \ 100 | "bundle." 101 | 102 | devhelp: 103 | $(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp 104 | @echo 105 | @echo "Build finished." 106 | @echo "To view the help file:" 107 | @echo "# mkdir -p $$HOME/.local/share/devhelp/topmodelpy" 108 | @echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/topmodelpy" 109 | @echo "# devhelp" 110 | 111 | epub: 112 | $(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub 113 | @echo 114 | @echo "Build finished. The epub file is in $(BUILDDIR)/epub." 115 | 116 | latex: 117 | $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex 118 | @echo 119 | @echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex." 120 | @echo "Run \`make' in that directory to run these through (pdf)latex" \ 121 | "(use \`make latexpdf' here to do that automatically)." 122 | 123 | latexpdf: 124 | $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex 125 | @echo "Running LaTeX files through pdflatex..." 126 | $(MAKE) -C $(BUILDDIR)/latex all-pdf 127 | @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex." 128 | 129 | latexpdfja: 130 | $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex 131 | @echo "Running LaTeX files through platex and dvipdfmx..."
132 | $(MAKE) -C $(BUILDDIR)/latex all-pdf-ja 133 | @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex." 134 | 135 | text: 136 | $(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text 137 | @echo 138 | @echo "Build finished. The text files are in $(BUILDDIR)/text." 139 | 140 | man: 141 | $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man 142 | @echo 143 | @echo "Build finished. The manual pages are in $(BUILDDIR)/man." 144 | 145 | texinfo: 146 | $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo 147 | @echo 148 | @echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo." 149 | @echo "Run \`make' in that directory to run these through makeinfo" \ 150 | "(use \`make info' here to do that automatically)." 151 | 152 | info: 153 | $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo 154 | @echo "Running Texinfo files through makeinfo..." 155 | make -C $(BUILDDIR)/texinfo info 156 | @echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo." 157 | 158 | gettext: 159 | $(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale 160 | @echo 161 | @echo "Build finished. The message catalogs are in $(BUILDDIR)/locale." 162 | 163 | changes: 164 | $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes 165 | @echo 166 | @echo "The overview file is in $(BUILDDIR)/changes." 167 | 168 | linkcheck: 169 | $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck 170 | @echo 171 | @echo "Link check complete; look for any errors in the above output " \ 172 | "or in $(BUILDDIR)/linkcheck/output.txt." 173 | 174 | doctest: 175 | $(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest 176 | @echo "Testing of doctests in the sources finished, look at the " \ 177 | "results in $(BUILDDIR)/doctest/output.txt." 
178 | 179 | coverage: 180 | $(SPHINXBUILD) -b coverage $(ALLSPHINXOPTS) $(BUILDDIR)/coverage 181 | @echo "Testing of coverage in the sources finished, look at the " \ 182 | "results in $(BUILDDIR)/coverage/python.txt." 183 | 184 | xml: 185 | $(SPHINXBUILD) -b xml $(ALLSPHINXOPTS) $(BUILDDIR)/xml 186 | @echo 187 | @echo "Build finished. The XML files are in $(BUILDDIR)/xml." 188 | 189 | pseudoxml: 190 | $(SPHINXBUILD) -b pseudoxml $(ALLSPHINXOPTS) $(BUILDDIR)/pseudoxml 191 | @echo 192 | @echo "Build finished. The pseudo-XML files are in $(BUILDDIR)/pseudoxml." 193 | -------------------------------------------------------------------------------- /docs/authors.rst: -------------------------------------------------------------------------------- 1 | .. include:: ../AUTHORS.rst 2 | -------------------------------------------------------------------------------- /docs/changelog.rst: -------------------------------------------------------------------------------- 1 | .. include:: ../CHANGELOG.rst 2 | -------------------------------------------------------------------------------- /docs/code.rst: -------------------------------------------------------------------------------- 1 | Code documentation 2 | ================== 3 | 4 | .. automodule:: topmodelpy.topmodel 5 | :members: 6 | -------------------------------------------------------------------------------- /docs/conf.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # -*- coding: utf-8 -*- 3 | # 4 | # topmodelpy documentation build configuration file, created by 5 | # sphinx-quickstart on Tue Jul 9 22:26:36 2013. 6 | # 7 | # This file is execfile()d with the current directory set to its 8 | # containing dir. 9 | # 10 | # Note that not all possible configuration values are present in this 11 | # autogenerated file. 12 | # 13 | # All configuration values have a default; values that are commented out 14 | # serve to show the default.
15 | 16 | import sys 17 | import os 18 | 19 | # If extensions (or modules to document with autodoc) are in another 20 | # directory, add these directories to sys.path here. If the directory is 21 | # relative to the documentation root, use os.path.abspath to make it 22 | # absolute, like shown here. 23 | #sys.path.insert(0, os.path.abspath('.')) 24 | 25 | # Get the project root dir, which is the parent dir of this 26 | cwd = os.getcwd() 27 | project_root = os.path.dirname(cwd) 28 | 29 | # Insert the project root dir as the first element in the PYTHONPATH. 30 | # This lets us ensure that the source package is imported, and that its 31 | # version is used. 32 | sys.path.insert(0, project_root) 33 | 34 | import topmodelpy 35 | 36 | # -- General configuration --------------------------------------------- 37 | 38 | # If your documentation needs a minimal Sphinx version, state it here. 39 | #needs_sphinx = '1.0' 40 | 41 | # Add any Sphinx extension module names here, as strings. They can be 42 | # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones. 43 | extensions = ['sphinx.ext.autodoc', 'sphinx.ext.viewcode'] 44 | 45 | # Add any paths that contain templates here, relative to this directory. 46 | templates_path = ['_templates'] 47 | 48 | # The suffix of source filenames. 49 | source_suffix = '.rst' 50 | 51 | # The encoding of source files. 52 | #source_encoding = 'utf-8-sig' 53 | 54 | # The master toctree document. 55 | master_doc = 'index' 56 | 57 | # General information about the project. 58 | project = u'topmodelpy' 59 | # copyright = u'2019, Jeremiah Lant' 60 | 61 | # The version info for the project you're documenting, acts as replacement 62 | # for |version| and |release|, also used in various other places throughout 63 | # the built documents. 64 | # 65 | # The short X.Y version. 66 | version = topmodelpy.__version__ 67 | # The full version, including alpha/beta/rc tags. 
68 | release = topmodelpy.__version__ 69 | 70 | # The language for content autogenerated by Sphinx. Refer to documentation 71 | # for a list of supported languages. 72 | #language = None 73 | 74 | # There are two options for replacing |today|: either, you set today to 75 | # some non-false value, then it is used: 76 | #today = '' 77 | # Else, today_fmt is used as the format for a strftime call. 78 | #today_fmt = '%B %d, %Y' 79 | 80 | # List of patterns, relative to source directory, that match files and 81 | # directories to ignore when looking for source files. 82 | exclude_patterns = ['_build'] 83 | 84 | # The reST default role (used for this markup: `text`) to use for all 85 | # documents. 86 | #default_role = None 87 | 88 | # If true, '()' will be appended to :func: etc. cross-reference text. 89 | #add_function_parentheses = True 90 | 91 | # If true, the current module name will be prepended to all description 92 | # unit titles (such as .. function::). 93 | #add_module_names = True 94 | 95 | # If true, sectionauthor and moduleauthor directives will be shown in the 96 | # output. They are ignored by default. 97 | #show_authors = False 98 | 99 | # The name of the Pygments (syntax highlighting) style to use. 100 | pygments_style = 'sphinx' 101 | 102 | # A list of ignored prefixes for module index sorting. 103 | #modindex_common_prefix = [] 104 | 105 | # If true, keep warnings as "system message" paragraphs in the built 106 | # documents. 107 | #keep_warnings = False 108 | 109 | 110 | # -- Options for HTML output ------------------------------------------- 111 | 112 | # The theme to use for HTML and HTML Help pages. See the documentation for 113 | # a list of builtin themes. 114 | html_theme = 'alabaster' 115 | 116 | # Theme options are theme-specific and customize the look and feel of a 117 | # theme further. For a list of options available for each theme, see the 118 | # documentation. 
119 | #html_theme_options = {} 120 | 121 | # Add any paths that contain custom themes here, relative to this directory. 122 | #html_theme_path = [] 123 | 124 | # The name for this set of Sphinx documents. If None, it defaults to 125 | # " v documentation". 126 | #html_title = None 127 | 128 | # A shorter title for the navigation bar. Default is the same as 129 | # html_title. 130 | #html_short_title = None 131 | 132 | # The name of an image file (relative to this directory) to place at the 133 | # top of the sidebar. 134 | #html_logo = None 135 | 136 | # The name of an image file (within the static path) to use as favicon 137 | # of the docs. This file should be a Windows icon file (.ico) being 138 | # 16x16 or 32x32 pixels large. 139 | #html_favicon = None 140 | 141 | # Add any paths that contain custom static files (such as style sheets) 142 | # here, relative to this directory. They are copied after the builtin 143 | # static files, so a file named "default.css" will overwrite the builtin 144 | # "default.css". 145 | #html_static_path = ['_static'] 146 | 147 | # If not '', a 'Last updated on:' timestamp is inserted at every page 148 | # bottom, using the given strftime format. 149 | #html_last_updated_fmt = '%b %d, %Y' 150 | 151 | # If true, SmartyPants will be used to convert quotes and dashes to 152 | # typographically correct entities. 153 | #html_use_smartypants = True 154 | 155 | # Custom sidebar templates, maps document names to template names. 156 | #html_sidebars = {} 157 | 158 | # Additional templates that should be rendered to pages, maps page names 159 | # to template names. 160 | #html_additional_pages = {} 161 | 162 | # If false, no module index is generated. 163 | #html_domain_indices = True 164 | 165 | # If false, no index is generated. 166 | #html_use_index = True 167 | 168 | # If true, the index is split into individual pages for each letter. 169 | #html_split_index = False 170 | 171 | # If true, links to the reST sources are added to the pages. 
172 | #html_show_sourcelink = True 173 | 174 | # If true, "Created using Sphinx" is shown in the HTML footer. 175 | # Default is True. 176 | #html_show_sphinx = True 177 | 178 | # If true, "(C) Copyright ..." is shown in the HTML footer. 179 | # Default is True. 180 | html_show_copyright = False 181 | 182 | # If true, an OpenSearch description file will be output, and all pages 183 | # will contain a tag referring to it. The value of this option 184 | # must be the base URL from which the finished HTML is served. 185 | #html_use_opensearch = '' 186 | 187 | # This is the file name suffix for HTML files (e.g. ".xhtml"). 188 | #html_file_suffix = None 189 | 190 | # Output file base name for HTML help builder. 191 | htmlhelp_basename = 'topmodelpydoc' 192 | 193 | 194 | # -- Options for LaTeX output ------------------------------------------ 195 | 196 | latex_elements = { 197 | # The paper size ('letterpaper' or 'a4paper'). 198 | #'papersize': 'letterpaper', 199 | 200 | # The font size ('10pt', '11pt' or '12pt'). 201 | #'pointsize': '10pt', 202 | 203 | # Additional stuff for the LaTeX preamble. 204 | #'preamble': '', 205 | } 206 | 207 | # Grouping the document tree into LaTeX files. List of tuples 208 | # (source start file, target name, title, author, documentclass 209 | # [howto/manual]). 210 | latex_documents = [ 211 | ('index', 'topmodelpy.tex', 212 | u'topmodelpy Documentation', 213 | u'Jeremiah Lant', 'manual'), 214 | ] 215 | 216 | # The name of an image file (relative to this directory) to place at 217 | # the top of the title page. 218 | #latex_logo = None 219 | 220 | # For "manual" documents, if this is true, then toplevel headings 221 | # are parts, not chapters. 222 | #latex_use_parts = False 223 | 224 | # If true, show page references after internal links. 225 | #latex_show_pagerefs = False 226 | 227 | # If true, show URL addresses after external links. 228 | #latex_show_urls = False 229 | 230 | # Documents to append as an appendix to all manuals. 
231 | #latex_appendices = [] 232 | 233 | # If false, no module index is generated. 234 | #latex_domain_indices = True 235 | 236 | 237 | # -- Options for manual page output ------------------------------------ 238 | 239 | # One entry per manual page. List of tuples 240 | # (source start file, name, description, authors, manual section). 241 | man_pages = [ 242 | ('index', 'topmodelpy', 243 | u'topmodelpy Documentation', 244 | [u'Jeremiah Lant'], 1) 245 | ] 246 | 247 | # If true, show URL addresses after external links. 248 | #man_show_urls = False 249 | 250 | 251 | # -- Options for Texinfo output ---------------------------------------- 252 | 253 | # Grouping the document tree into Texinfo files. List of tuples 254 | # (source start file, target name, title, author, 255 | # dir menu entry, description, category) 256 | texinfo_documents = [ 257 | ('index', 'topmodelpy', 258 | u'topmodelpy Documentation', 259 | u'Jeremiah Lant', 260 | 'topmodelpy', 261 | 'One line description of project.', 262 | 'Miscellaneous'), 263 | ] 264 | 265 | # Documents to append as an appendix to all manuals. 266 | #texinfo_appendices = [] 267 | 268 | # If false, no module index is generated. 269 | #texinfo_domain_indices = True 270 | 271 | # How to display URL addresses: 'footnote', 'no', or 'inline'. 272 | #texinfo_show_urls = 'footnote' 273 | 274 | # If true, do not generate a @detailmenu in the "Top" node's menu. 275 | #texinfo_no_detailmenu = False 276 | -------------------------------------------------------------------------------- /docs/contributing.rst: -------------------------------------------------------------------------------- 1 | .. include:: ../CONTRIBUTING.rst 2 | -------------------------------------------------------------------------------- /docs/index.rst: -------------------------------------------------------------------------------- 1 | .. 
documentation master file, created by 2 | sphinx-quickstart 3 | You can adapt this file completely to your liking, but it should at least 4 | contain the root `toctree` directive. 5 | 6 | Welcome to topmodelpy's documentation! 7 | =========================================================== 8 | 9 | topmodelpy is a command line interface for a rainfall-runoff model, 10 | called Topmodel, that predicts the amount of water flow in rivers. 11 | 12 | 13 | Basics 14 | ------- 15 | 16 | .. toctree:: 17 | :maxdepth: 2 18 | 19 | readme 20 | overview 21 | installation 22 | usage 23 | tutorial 24 | 25 | Code Documentation 26 | ------------------ 27 | 28 | .. toctree:: 29 | :maxdepth: 2 30 | 31 | code 32 | 33 | Project Info 34 | ------------ 35 | 36 | .. toctree:: 37 | :maxdepth: 2 38 | 39 | contributing 40 | authors 41 | changelog 42 | 43 | 44 | 45 | Index 46 | ----- 47 | 48 | * :ref:`genindex` 49 | * :ref:`modindex` 50 | * :ref:`search` 51 | -------------------------------------------------------------------------------- /docs/installation.rst: -------------------------------------------------------------------------------- 1 | Installation 2 | ============ 3 | 4 | To install topmodelpy from source: 5 | 6 | 1. Check that you have Python_ installed:: 7 | 8 | $ python --version 9 | 10 | If you do not have Python_ installed, please download the latest version from `Python's download page`_ 11 | 12 | 2. Download topmodelpy from the repository and extract to a directory of your choice. 13 | 14 | Or, if you have git_ installed you can clone the project:: 15 | 16 | $ git clone 17 | 18 | 3. 
Navigate to the project's root directory where the setup script called `setup.py` is located:: 19 | 20 | $ cd topmodelpy/ 21 | 22 | | The `setup.py` script contains the information needed to install the Python module/package, 23 | | and it typically builds and installs the module/package using Distutils_, the standard 24 | | Python distribution utilities. 25 | 26 | 4. Run `setup.py` with the `install` command:: 27 | 28 | $ python setup.py install 29 | 30 | topmodelpy will now be installed to the standard location for third-party Python modules on your computer platform. 31 | 32 | For more information regarding installing third-party Python modules, please see `Installing Python Modules`_. 33 | For a description of how installation works, including where the module will be installed on your computer platform, please see `How Installation Works`_. 34 | 35 | 36 | .. _Python: https://www.python.org/ 37 | .. _Python's download page: https://www.python.org/downloads/ 38 | .. _git: https://git-scm.com/ 39 | .. _Distutils: https://docs.python.org/3/library/distutils.html 40 | .. _Installing Python Modules: https://docs.python.org/3.5/install/ 41 | .. _How Installation Works: https://docs.python.org/3.5/install/#how-installation-works 42 | -------------------------------------------------------------------------------- /docs/lant-to-wolock-conversion-table.rst: -------------------------------------------------------------------------------- 1 | Table 1. Lookup table between variable names used in this code versus variable 2 | names used in Wolock's Fortran code.
3 | 4 | ========================================== ===================== ================================================================ ============================ ======================= 5 | Lant Python Code Wolock Fortran Code Definition Units Notes 6 | ========================================== ===================== ================================================================ ============================ ======================= 7 | timestep_daily_fraction DT Fraction of a daily timestep fraction 8 | scaling_parameter SZM Scaling parameter based on soil properties millimeters 9 | saturated_hydraulic_conductivity CONMEAN Saturated hydraulic conductivity of the C horizon of the soil millimeters/day 10 | macropore_fraction PMAC Fraction of precipitation bypassing the soil zone fraction 11 | soil_depth_total ZTOT Total soil depth (zone AB + zone C) meters 12 | soil_depth_ab_horizon ZAB Depth of AB horizon meters 13 | field_capacity_fraction THFC Field capacity of soil fraction 14 | latitude XLAT Latitude degrees 15 | basin_area_total ATOT Total watershed area square kilometers 16 | impervious_area_fraction PIMP Fraction of impervious area fraction 17 | snowmelt_temperature_cutoff TCUT Temperature cutoff for snowpack accumulation and snowmelt degrees Fahrenheit 18 | snowmelt_rate_coeff SNOPROP Snowmelt parameter inches/degree Fahrenheit 19 | snowmelt_rate_with_rain_coeff RAINPRO Rain-induced snowmelt parameter 1/degree Fahrenheit 20 | channel_length_max DMAX Maximum channel length kilometers 21 | channel_velocity_avg SUBV Channel velocity kilometers/day 22 | channel_travel_time NTW Channel travel time days 23 | flow_initial Q0 Initial flow millimeters/day 24 | twi_values ST ln(a/tanB) values ln(meters) 25 | twi_saturated_areas AC Saturated land-surface area in watershed fraction 26 | twi_mean TL Mean of ln(a/tanB) distribution ln(meters) 27 | num_twi_increments NAC Number of twi increments or bins --- 28 | precip PP,P,PPT Precipitation rate millimeters/day 29 | 
pet PET Potential evapotranspiration rate millimeters/day millimeters in Wolock 30 | precip_available PPTPET Precipitation - Potential evapotranspiration millimeters/day 31 | num_timesteps NSTEPS Number of timesteps; number of precipitation data values --- 32 | flow_predicted QPRED Total predicted flow millimeters/day 33 | soil_depth_roots ZROOT Root-zone depth meters 34 | soil_depth_c_horizon ZAB Depth of C horizon meters 35 | vertical_drainage_flux_initial U0 Initial vertical drainage flux millimeters/day CONMEAN*TSTEP in Wolock 36 | transmissivity_saturated_max TRANS Maximum saturated transmissivity square millimeters/day 37 | flow_subsurface_max SZQ Maximum subsurface flow rate millimeters/day 38 | root_zone_storage_max SRMAX Maximum root-zone storage millimeters 39 | saturation_deficit_avg S Watershed average saturation deficit millimeters 40 | vertical_drainage_flux UZ Vertical drainage flux; upper soil to saturated subsurface millimeters/day 41 | unsaturated_zone_storage SUZ Soil water available for drainage millimeters 42 | root_zone_storage SRZ Root zone storage millimeters 43 | saturation_deficit_local SD Saturation deficit at location x millimeters 44 | precip_for_evaporation EPC The negative of precip minus pet millimeters/day -PPTPET in Wolock 45 | precip_for_recharge PP The positive of precip minus pet millimeters/day PPTPET in Wolock 46 | precip_excesses EX Precip in excess of pet and field capacity storage millimeters array 47 | precip_excess PPEX Precip in excess of pet and field capacity storage millimeters float 48 | flow_predicted_overland QOF Predicted overland flow millimeters/day 49 | flow_predicted_vertical_drainage_flux QUZ Predicted vertical drainage flux millimeters/day units?
(UZ*AC(IA)) 50 | flow_predicted_subsurface QB Predicted subsurface flow millimeters/day 51 | flow_predicted_impervious_area -- Predicted impervious area flow millimeters/day in Lant 52 | flow_predicted_total QPRED Total predicted flow millimeters/day 53 | flow_predicted_stream QQ Flow delivered to stream channel millimeters/day 54 | ========================================== ===================== ================================================================ ============================ ======================= 55 | -------------------------------------------------------------------------------- /docs/make.bat: -------------------------------------------------------------------------------- 1 | @ECHO OFF 2 | 3 | REM Command file for Sphinx documentation 4 | 5 | if "%SPHINXBUILD%" == "" ( 6 | set SPHINXBUILD=sphinx-build 7 | ) 8 | set BUILDDIR=_build 9 | set ALLSPHINXOPTS=-d %BUILDDIR%/doctrees %SPHINXOPTS% . 10 | set I18NSPHINXOPTS=%SPHINXOPTS% . 11 | if NOT "%PAPER%" == "" ( 12 | set ALLSPHINXOPTS=-D latex_paper_size=%PAPER% %ALLSPHINXOPTS% 13 | set I18NSPHINXOPTS=-D latex_paper_size=%PAPER% %I18NSPHINXOPTS% 14 | ) 15 | 16 | if "%1" == "" goto help 17 | 18 | if "%1" == "help" ( 19 | :help 20 | echo.Please use `make ^` where ^ is one of 21 | echo. html to make standalone HTML files 22 | echo. dirhtml to make HTML files named index.html in directories 23 | echo. singlehtml to make a single large HTML file 24 | echo. pickle to make pickle files 25 | echo. json to make JSON files 26 | echo. htmlhelp to make HTML files and a HTML help project 27 | echo. qthelp to make HTML files and a qthelp project 28 | echo. devhelp to make HTML files and a Devhelp project 29 | echo. epub to make an epub 30 | echo. latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter 31 | echo. text to make text files 32 | echo. man to make manual pages 33 | echo. texinfo to make Texinfo files 34 | echo. gettext to make PO message catalogs 35 | echo. 
changes to make an overview over all changed/added/deprecated items 36 | echo. xml to make Docutils-native XML files 37 | echo. pseudoxml to make pseudoxml-XML files for display purposes 38 | echo. linkcheck to check all external links for integrity 39 | echo. doctest to run all doctests embedded in the documentation if enabled 40 | echo. coverage to run coverage check of the documentation if enabled 41 | goto end 42 | ) 43 | 44 | if "%1" == "clean" ( 45 | for /d %%i in (%BUILDDIR%\*) do rmdir /q /s %%i 46 | del /q /s %BUILDDIR%\* 47 | goto end 48 | ) 49 | 50 | 51 | REM Check if sphinx-build is available and fallback to Python version if any 52 | %SPHINXBUILD% 2> nul 53 | if errorlevel 9009 goto sphinx_python 54 | goto sphinx_ok 55 | 56 | :sphinx_python 57 | 58 | set SPHINXBUILD=python -m sphinx.__init__ 59 | %SPHINXBUILD% 2> nul 60 | if errorlevel 9009 ( 61 | echo. 62 | echo.The 'sphinx-build' command was not found. Make sure you have Sphinx 63 | echo.installed, then set the SPHINXBUILD environment variable to point 64 | echo.to the full path of the 'sphinx-build' executable. Alternatively you 65 | echo.may add the Sphinx directory to PATH. 66 | echo. 67 | echo.If you don't have Sphinx installed, grab it from 68 | echo.http://sphinx-doc.org/ 69 | exit /b 1 70 | ) 71 | 72 | :sphinx_ok 73 | 74 | 75 | if "%1" == "html" ( 76 | %SPHINXBUILD% -b html %ALLSPHINXOPTS% %BUILDDIR%/html 77 | if errorlevel 1 exit /b 1 78 | echo. 79 | echo.Build finished. The HTML pages are in %BUILDDIR%/html. 80 | goto end 81 | ) 82 | 83 | if "%1" == "dirhtml" ( 84 | %SPHINXBUILD% -b dirhtml %ALLSPHINXOPTS% %BUILDDIR%/dirhtml 85 | if errorlevel 1 exit /b 1 86 | echo. 87 | echo.Build finished. The HTML pages are in %BUILDDIR%/dirhtml. 88 | goto end 89 | ) 90 | 91 | if "%1" == "singlehtml" ( 92 | %SPHINXBUILD% -b singlehtml %ALLSPHINXOPTS% %BUILDDIR%/singlehtml 93 | if errorlevel 1 exit /b 1 94 | echo. 95 | echo.Build finished. The HTML pages are in %BUILDDIR%/singlehtml. 
96 | goto end 97 | ) 98 | 99 | if "%1" == "pickle" ( 100 | %SPHINXBUILD% -b pickle %ALLSPHINXOPTS% %BUILDDIR%/pickle 101 | if errorlevel 1 exit /b 1 102 | echo. 103 | echo.Build finished; now you can process the pickle files. 104 | goto end 105 | ) 106 | 107 | if "%1" == "json" ( 108 | %SPHINXBUILD% -b json %ALLSPHINXOPTS% %BUILDDIR%/json 109 | if errorlevel 1 exit /b 1 110 | echo. 111 | echo.Build finished; now you can process the JSON files. 112 | goto end 113 | ) 114 | 115 | if "%1" == "htmlhelp" ( 116 | %SPHINXBUILD% -b htmlhelp %ALLSPHINXOPTS% %BUILDDIR%/htmlhelp 117 | if errorlevel 1 exit /b 1 118 | echo. 119 | echo.Build finished; now you can run HTML Help Workshop with the ^ 120 | .hhp project file in %BUILDDIR%/htmlhelp. 121 | goto end 122 | ) 123 | 124 | if "%1" == "qthelp" ( 125 | %SPHINXBUILD% -b qthelp %ALLSPHINXOPTS% %BUILDDIR%/qthelp 126 | if errorlevel 1 exit /b 1 127 | echo. 128 | echo.Build finished; now you can run "qcollectiongenerator" with the ^ 129 | .qhcp project file in %BUILDDIR%/qthelp, like this: 130 | echo.^> qcollectiongenerator %BUILDDIR%\qthelp\topmodelpy.qhcp 131 | echo.To view the help file: 132 | echo.^> assistant -collectionFile %BUILDDIR%\qthelp\topmodelpy.qhc 133 | goto end 134 | ) 135 | 136 | if "%1" == "devhelp" ( 137 | %SPHINXBUILD% -b devhelp %ALLSPHINXOPTS% %BUILDDIR%/devhelp 138 | if errorlevel 1 exit /b 1 139 | echo. 140 | echo.Build finished. 141 | goto end 142 | ) 143 | 144 | if "%1" == "epub" ( 145 | %SPHINXBUILD% -b epub %ALLSPHINXOPTS% %BUILDDIR%/epub 146 | if errorlevel 1 exit /b 1 147 | echo. 148 | echo.Build finished. The epub file is in %BUILDDIR%/epub. 149 | goto end 150 | ) 151 | 152 | if "%1" == "latex" ( 153 | %SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex 154 | if errorlevel 1 exit /b 1 155 | echo. 156 | echo.Build finished; the LaTeX files are in %BUILDDIR%/latex.
157 | goto end 158 | ) 159 | 160 | if "%1" == "latexpdf" ( 161 | %SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex 162 | cd %BUILDDIR%/latex 163 | make all-pdf 164 | cd %~dp0 165 | echo. 166 | echo.Build finished; the PDF files are in %BUILDDIR%/latex. 167 | goto end 168 | ) 169 | 170 | if "%1" == "latexpdfja" ( 171 | %SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex 172 | cd %BUILDDIR%/latex 173 | make all-pdf-ja 174 | cd %~dp0 175 | echo. 176 | echo.Build finished; the PDF files are in %BUILDDIR%/latex. 177 | goto end 178 | ) 179 | 180 | if "%1" == "text" ( 181 | %SPHINXBUILD% -b text %ALLSPHINXOPTS% %BUILDDIR%/text 182 | if errorlevel 1 exit /b 1 183 | echo. 184 | echo.Build finished. The text files are in %BUILDDIR%/text. 185 | goto end 186 | ) 187 | 188 | if "%1" == "man" ( 189 | %SPHINXBUILD% -b man %ALLSPHINXOPTS% %BUILDDIR%/man 190 | if errorlevel 1 exit /b 1 191 | echo. 192 | echo.Build finished. The manual pages are in %BUILDDIR%/man. 193 | goto end 194 | ) 195 | 196 | if "%1" == "texinfo" ( 197 | %SPHINXBUILD% -b texinfo %ALLSPHINXOPTS% %BUILDDIR%/texinfo 198 | if errorlevel 1 exit /b 1 199 | echo. 200 | echo.Build finished. The Texinfo files are in %BUILDDIR%/texinfo. 201 | goto end 202 | ) 203 | 204 | if "%1" == "gettext" ( 205 | %SPHINXBUILD% -b gettext %I18NSPHINXOPTS% %BUILDDIR%/locale 206 | if errorlevel 1 exit /b 1 207 | echo. 208 | echo.Build finished. The message catalogs are in %BUILDDIR%/locale. 209 | goto end 210 | ) 211 | 212 | if "%1" == "changes" ( 213 | %SPHINXBUILD% -b changes %ALLSPHINXOPTS% %BUILDDIR%/changes 214 | if errorlevel 1 exit /b 1 215 | echo. 216 | echo.The overview file is in %BUILDDIR%/changes. 217 | goto end 218 | ) 219 | 220 | if "%1" == "linkcheck" ( 221 | %SPHINXBUILD% -b linkcheck %ALLSPHINXOPTS% %BUILDDIR%/linkcheck 222 | if errorlevel 1 exit /b 1 223 | echo. 224 | echo.Link check complete; look for any errors in the above output ^ 225 | or in %BUILDDIR%/linkcheck/output.txt. 
226 | goto end 227 | ) 228 | 229 | if "%1" == "doctest" ( 230 | %SPHINXBUILD% -b doctest %ALLSPHINXOPTS% %BUILDDIR%/doctest 231 | if errorlevel 1 exit /b 1 232 | echo. 233 | echo.Testing of doctests in the sources finished, look at the ^ 234 | results in %BUILDDIR%/doctest/output.txt. 235 | goto end 236 | ) 237 | 238 | if "%1" == "coverage" ( 239 | %SPHINXBUILD% -b coverage %ALLSPHINXOPTS% %BUILDDIR%/coverage 240 | if errorlevel 1 exit /b 1 241 | echo. 242 | echo.Testing of coverage in the sources finished, look at the ^ 243 | results in %BUILDDIR%/coverage/python.txt. 244 | goto end 245 | ) 246 | 247 | if "%1" == "xml" ( 248 | %SPHINXBUILD% -b xml %ALLSPHINXOPTS% %BUILDDIR%/xml 249 | if errorlevel 1 exit /b 1 250 | echo. 251 | echo.Build finished. The XML files are in %BUILDDIR%/xml. 252 | goto end 253 | ) 254 | 255 | if "%1" == "pseudoxml" ( 256 | %SPHINXBUILD% -b pseudoxml %ALLSPHINXOPTS% %BUILDDIR%/pseudoxml 257 | if errorlevel 1 exit /b 1 258 | echo. 259 | echo.Build finished. The pseudo-XML files are in %BUILDDIR%/pseudoxml. 260 | goto end 261 | ) 262 | 263 | :end 264 | -------------------------------------------------------------------------------- /docs/overview.rst: -------------------------------------------------------------------------------- 1 | Overview 2 | ======== 3 | 4 | Add an overview of the project. 5 | -------------------------------------------------------------------------------- /docs/readme.rst: -------------------------------------------------------------------------------- 1 | .. include:: ../README.rst 2 | -------------------------------------------------------------------------------- /docs/tutorial.rst: -------------------------------------------------------------------------------- 1 | Tutorial 2 | ======== 3 | 4 | Add a tutorial. 
5 | -------------------------------------------------------------------------------- /docs/usage.rst: -------------------------------------------------------------------------------- 1 | Usage 2 | ===== 3 | 4 | Add details on how to use the project, i.e. 5 | 6 | To use topmodelpy from the command line:: 7 | 8 | $ topmodelpy --help 9 | 10 | or 11 | 12 | To use topmodelpy in a project:: 13 | 14 | import topmodelpy 15 | -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | atomicwrites==1.3.0 2 | attrs==19.1.0 3 | Click==7.0 4 | cycler==0.10.0 5 | entrypoints==0.3 6 | flake8==3.7.7 7 | Jinja2==2.10.1 8 | kiwisolver==1.0.1 9 | MarkupSafe==1.1.1 10 | matplotlib==3.0.3 11 | mccabe==0.6.1 12 | more-itertools==6.0.0 13 | mpld3==0.3 14 | numpy==1.16.2 15 | pandas==0.24.1 16 | pkg-resources==0.0.0 17 | pluggy==0.9.0 18 | py==1.8.0 19 | pycodestyle==2.5.0 20 | pyflakes==2.1.1 21 | pyparsing==2.3.1 22 | pytest==4.3.0 23 | python-dateutil==2.8.0 24 | pytz==2018.9 25 | scipy==1.2.1 26 | six==1.12.0 27 | -e git+git@github.com:jlant/topmodelpy.git@067cdea4d6bc272d24e224b4c12d36cef915899f#egg=topmodelpy 28 | -------------------------------------------------------------------------------- /setup.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # -*- coding: utf-8 -*- 3 | 4 | try: 5 | from setuptools import setup 6 | except ImportError: 7 | from distutils.core import setup 8 | 9 | 10 | with open("README.rst") as readme_file: 11 | readme = readme_file.read() 12 | 13 | with open("CHANGELOG.rst") as changelog_file: 14 | changelog = changelog_file.read() 15 | 16 | with open("LICENSE") as license_file: 17 | license = license_file.read() 18 | 19 | 20 | requirements = [ 21 | "click", 22 | "numpy", 23 | "pandas", 24 | ] 25 | 26 | test_requirements = [ 27 | "pytest", 28 | ] 29 | 30 | 31 | setup( 32 
| name="topmodelpy", 33 | version="0.1", 34 | description=("A rainfall-runoff model that predicts the amount " 35 | "of water flow in rivers."), 36 | long_description=readme + "\n\n" + changelog, 37 | author="Jeremiah Lant", 38 | author_email="jlant@usgs.gov", 39 | url="", 40 | packages=[ 41 | "topmodelpy", 42 | ], 43 | package_dir={"topmodelpy": "topmodelpy"}, 44 | entry_points={"console_scripts": ["topmodelpy = topmodelpy.cli:main"]}, 45 | include_package_data=True, 46 | install_requires=requirements, 47 | license=license, 48 | zip_safe=False, 49 | keywords="topmodelpy", 50 | classifiers=[ 51 | "Development Status :: 2 - Pre-Alpha", 52 | "Intended Audience :: Developers", 53 | "License :: CC0 1.0 Universal (CC0 1.0) Public Domain Dedication", 54 | "Natural Language :: English", 55 | "Programming Language :: Python :: 3", 56 | "Programming Language :: Python :: 3.3", 57 | "Programming Language :: Python :: 3.4", 58 | "Programming Language :: Python :: 3.5", 59 | "Programming Language :: Python :: 3.6", 60 | ], 61 | test_suite="tests", 62 | tests_require=test_requirements 63 | ) 64 | -------------------------------------------------------------------------------- /tests/__init__.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | -------------------------------------------------------------------------------- /tests/conftest.py: -------------------------------------------------------------------------------- 1 | """Test configurations and fixtures.""" 2 | 3 | from configparser import ConfigParser, ExtendedInterpolation 4 | import os 5 | import numpy as np 6 | import pandas as pd 7 | import pytest 8 | from pathlib import Path 9 | 10 | 11 | @pytest.fixture(scope="module") 12 | def timeseries_wolock(): 13 | """Return a Pandas dataframe of timeseries test data 14 | from Dave Wolock's Topmodel version. Test data contains 15 | input values along with model output values in a single file. 
16 | """ 17 | 18 | fname = os.path.join(os.path.dirname(__file__), 19 | "testdata/timeseries_wolock.csv") 20 | data = pd.read_csv(fname, 21 | names=["date", 22 | "temperature", 23 | "precipitation", 24 | "pet", 25 | "precip_minus_pet", 26 | "flow_observed", 27 | "flow_predicted"], 28 | header=0) # skip header 29 | 30 | return data 31 | 32 | 33 | @pytest.fixture(scope="module") 34 | def twi_wolock(): 35 | """Return a Pandas dataframe of twi test data 36 | from Dave Wolock's Topmodel version""" 37 | 38 | fname = os.path.join(os.path.dirname(__file__), 39 | "testdata/twi_wolock.csv") 40 | data = pd.read_csv(fname, 41 | names=["bin", 42 | "twi", 43 | "proportion", 44 | "cells"], 45 | header=0) # skip header 46 | 47 | return data 48 | 49 | 50 | @pytest.fixture(scope="module") 51 | def twi_weighted_mean_wolock(): 52 | """Return the weighted mean of the twi test data 53 | from Dave Wolock's Topmodel version""" 54 | 55 | fname = os.path.join(os.path.dirname(__file__), 56 | "testdata/twi_wolock.csv") 57 | data = pd.read_csv(fname, 58 | names=["bin", 59 | "twi", 60 | "proportion", 61 | "cells"], 62 | header=0) # skip header 63 | 64 | twi_weighted_mean = ( 65 | np.sum(data["twi"].values * data["proportion"].values) 66 | / np.sum(data["proportion"].values) 67 | ) 68 | 69 | return twi_weighted_mean 70 | 71 | 72 | @pytest.fixture(scope="module") 73 | def parameters_wolock(): 74 | """Return a dictionary of parameter test data 75 | from Dave Wolock's Topmodel version""" 76 | 77 | data = { 78 | "scaling_parameter": 10, 79 | "saturated_hydraulic_conductivity": 150, 80 | "macropore_fraction": 0.2, 81 | "soil_depth_total": 1, 82 | "soil_depth_ab_horizon": 0.5, 83 | "field_capacity_fraction": 0.2, 84 | "latitude": 40.5, 85 | "basin_area_total": 3.07, 86 | "impervious_area_fraction": 0.3, 87 | "snowmelt_temperature_cutoff": 32, 88 | "snowmelt_rate_coeff": 0.06, 89 | "snowmelt_rate_coeff_with_rain": 0.007, 90 | "channel_length_max": 1.98, 91 | "channel_velocity_avg": 10, 92 | 
"flow_initial": 1 93 | } 94 | 95 | return data 96 | 97 | 98 | @pytest.fixture(scope="module") 99 | def modelconfig_obj(): 100 | config = ConfigParser(interpolation=ExtendedInterpolation()) 101 | config["Inputs"] = { 102 | "input_dir": Path.cwd(), 103 | "parameters_file": "${Inputs:input_dir}/parameters.csv", 104 | "timeseries_file": "${Inputs:input_dir}/timeseries.csv", 105 | "twi_file": "${Inputs:input_dir}/twi.csv", 106 | } 107 | 108 | config["Outputs"] = { 109 | "output_dir": Path.cwd(), 110 | } 111 | 112 | config["Options"] = { 113 | "option_pet": "hamon", 114 | "option_snowmelt": "yes", 115 | } 116 | 117 | return config 118 | 119 | 120 | @pytest.fixture(scope="module") 121 | def modelconfig_obj_invalid_sections(): 122 | return ConfigParser(interpolation=ExtendedInterpolation()) 123 | 124 | 125 | @pytest.fixture(scope="module") 126 | def modelconfig_obj_invalid_filepath(): 127 | config = ConfigParser(interpolation=ExtendedInterpolation()) 128 | config["Inputs"] = { 129 | "input_dir": Path.cwd(), 130 | "parameters_file": "some/bad/filepath/", 131 | "timeseries_file": "${Inputs:input_dir}/timeseries.csv", 132 | "twi_file": "${Inputs:input_dir}/twi.csv", 133 | } 134 | 135 | config["Outputs"] = { 136 | "output_dir": "", 137 | } 138 | 139 | config["Options"] = { 140 | "option_pet": "hamon", 141 | "option_snowmelt": "yes", 142 | } 143 | 144 | return config 145 | 146 | 147 | @pytest.fixture(scope="module") 148 | def modelconfig_obj_invalid_options(): 149 | config = ConfigParser(interpolation=ExtendedInterpolation()) 150 | config["Inputs"] = { 151 | "input_dir": Path.cwd(), 152 | "parameters_file": "${Inputs:input_dir}/parameters.csv", 153 | "timeseries_file": "${Inputs:input_dir}/timeseries.csv", 154 | "twi_file": "${Inputs:input_dir}/twi.csv", 155 | } 156 | 157 | config["Outputs"] = { 158 | "output_dir": Path.cwd(), 159 | } 160 | 161 | config["Options"] = { 162 | "option_pet": "yes", 163 | "option_snowmelt": "hello", 164 | } 165 | 166 | return config 167 | 168 | 169 
| @pytest.fixture(scope="module") 170 | def parameters_file(): 171 | return ("""name,value,units,description 172 | scaling_parameter,10,millimeters,a description 173 | saturated_hydraulic_conductivity,150,millimeters/day,a description 174 | macropore_fraction,0.2,fraction,a description 175 | soil_depth_total,1,meters,a description 176 | soil_depth_ab_horizon,0.7,meters,a description 177 | field_capacity_fraction,0.2,fraction,a description 178 | latitude,40.5,degrees,a description 179 | basin_area_total,3.5,square kilometers,a description 180 | impervious_area_fraction,0.3,fraction,a description 181 | snowmelt_temperature_cutoff,-32,degrees fahrenheit,a description 182 | snowmelt_rate_coeff,0.06,1/degrees fahrenheit,a description 183 | snowmelt_rate_coeff_with_rain,7E-3,inches per degree fahrenheit,a description 184 | channel_length_max,5,kilometers,a description 185 | channel_velocity_avg,19,kilometers/day,a description 186 | flow_initial,1,millimeters/day,a description 187 | """) 188 | 189 | 190 | @pytest.fixture(scope="module") 191 | def parameters_file_invalid_header(): 192 | return ("""names,val,unit,descr 193 | scaling_parameter,10,millimeters,a description 194 | saturated_hydraulic_conductivity,150,millimeters/day,a description 195 | macropore_fraction,0.2,fraction,a description 196 | soil_depth_total,1,meters,a description 197 | soil_depth_ab_horizon,0.7,meters,a description 198 | field_capacity_fraction,0.2,fraction,a description 199 | latitude,40.5,degrees,a description 200 | basin_area_total,3.5,square kilometers,a description 201 | impervious_area_fraction,0.3,fraction,a description 202 | snowmelt_temperature_cutoff,-32,degrees fahrenheit,a description 203 | snowmelt_rate_coeff,0.06,1/degrees fahrenheit,a description 204 | snowmelt_rate_coeff_with_rain,7E-3,inches per degree fahrenheit,a description 205 | channel_length_max,5,kilometers,a description 206 | channel_velocity_avg,19,kilometers/day,a description 207 | flow_initial,1,millimeters/day,a 
description 208 | """) 209 | 210 | 211 | @pytest.fixture(scope="module") 212 | def timeseries_file(): 213 | return ("""date,temperature (celsius),precipitation (mm/day),pet (mm/day),flow_observed (mm/day) 214 | 2019-01-01,1.0,2.0,3.0,4.0 215 | 2019-01-02,1.1,2.1,3.1,4.1 216 | 2019-01-03,1.2,2.2,3.2,4.2 217 | 2019-01-04,1.3,2.3,3.3,4.3 218 | 2019-01-05,1.4,2.4,3.4,4.4 219 | """) 220 | 221 | 222 | @pytest.fixture(scope="module") 223 | def timeseries_file_invalid_header(): 224 | return ("""date,temperature (fahrenheit),precipitation (mm/day),pet (mm/day),flow observed (mm/day) 225 | 2019-01-01,1.0,2.0,3.0,4.0 226 | 2019-01-02,1.1,2.1,3.1,4.1 227 | 2019-01-03,1.2,2.2,3.2,4.2 228 | 2019-01-04,1.3,2.3,3.3,4.3 229 | 2019-01-05,1.4,2.4,3.4,4.4 230 | """) 231 | 232 | 233 | @pytest.fixture(scope="module") 234 | def timeseries_file_missing_dates(): 235 | return ("""date,temperature (celsius),precipitation (mm/day),pet (mm/day),flow_observed (mm/day) 236 | 2019-01-01,1.0,2.0,3.0,4.0 237 | ,1.1,2.1,3.1,4.1 238 | 2019-01-03,1.2,2.2,3.2,4.2 239 | 2019-01-04,1.3,2.3,3.3,4.3 240 | ,1.4,2.4,3.4,4.4 241 | """) 242 | 243 | 244 | @pytest.fixture(scope="module") 245 | def timeseries_file_missing_values(): 246 | return ("""date,temperature (celsius),precipitation (mm/day),pet (mm/day),flow_observed (mm/day) 247 | 2019-01-01,1.0,2.0,3.0, 248 | 2019-01-02,1.1,2.1,3.1,4.1 249 | 2019-01-03,,2.2,3.2,4.2 250 | 2019-01-04,1.3,2.3,3.3,4.3 251 | 2019-01-05,1.4,2.4,3.4,4.4 252 | """) 253 | 254 | 255 | @pytest.fixture(scope="module") 256 | def timeseries_file_invalid_timestep(): 257 | return ("""date,temperature (celsius),precipitation (mm/day),pet (mm/day),flow_observed (mm/day) 258 | 2019-01-01,1.0,2.0,3.0,4.0 259 | 2019-02-01,1.1,2.1,3.1,4.1 260 | 2019-03-01,1,2.2,3.2,4.2 261 | 2019-04-01,1.3,2.3,3.3,4.3 262 | 2019-05-01,1.4,2.4,3.4,4.4 263 | """) 264 | 265 | 266 | @pytest.fixture(scope="module") 267 | def twi_file(): 268 | return ("""bin,twi,proportion,cells 269 | 1,0.02,0.10,10 270 | 
2,0.03,0.15,15 271 | 3,0.04,0.25,25 272 | 4,0.05,0.30,30 273 | 5,0.06,0.20,20 274 | """) 275 | 276 | 277 | @pytest.fixture(scope="module") 278 | def twi_file_invalid_header(): 279 | return ("""bin,value,proportion,cells 280 | 1,0.02,0.10,10 281 | 2,0.03,0.15,15 282 | 3,0.04,0.25,25 283 | 4,0.05,0.30,30 284 | 5,0.06,0.20,20 285 | """) 286 | 287 | 288 | @pytest.fixture(scope="module") 289 | def twi_file_missing_values(): 290 | return ("""bin,twi,proportion,cells 291 | 1,0.02,0.10, 292 | 2,0.03,0.15,15 293 | 3,0.04,0.25,25 294 | 4,0.05,0.30,30 295 | 5,,0.20,20 296 | """) 297 | 298 | 299 | @pytest.fixture(scope="module") 300 | def twi_file_invalid_proportion(): 301 | return ("""bin,twi,proportion,cells 302 | 1,0.02,0.10,10 303 | 2,0.03,0.15,15 304 | 3,0.04,0.25,25 305 | 4,0.05,0.30,30 306 | 5,0.06,0.22,20 307 | """) 308 | 309 | 310 | @pytest.fixture(scope="module") 311 | def observed_data(): 312 | return np.array([55.7, 62.0, 65.5, 64.7, 61.1]) 313 | 314 | 315 | @pytest.fixture(scope="module") 316 | def modeled_data(): 317 | return np.array([55.5, 62.1, 65.3, 64.4, 61.2]) 318 | -------------------------------------------------------------------------------- /tests/test_hydrocalcs.py: -------------------------------------------------------------------------------- 1 | """Tests for hydrocalcs module.""" 2 | 3 | from datetime import datetime 4 | import numpy as np 5 | 6 | from topmodelpy import hydrocalcs 7 | 8 | 9 | def test_pet(parameters_wolock, 10 | timeseries_wolock): 11 | 12 | dates = np.array(timeseries_wolock["date"].values, 13 | dtype=np.datetime64).astype(datetime) 14 | actual = hydrocalcs.pet( 15 | dates=dates, 16 | temperatures=timeseries_wolock["temperature"].values, 17 | latitude=parameters_wolock["latitude"], 18 | method="hamon" 19 | ) 20 | 21 | # The absolute tolerance is set because the calculation done in topmodelpy 22 | # is a little different from the calculation done in Wolock's version.
23 | np.testing.assert_allclose(actual, 24 | timeseries_wolock["pet"].values, 25 | atol=1.5) 26 | 27 | 28 | def test_absolute_error(observed_data, modeled_data): 29 | 30 | expected = np.array([0.2, -0.1, 0.2, 0.3, -0.1]) 31 | actual = hydrocalcs.absolute_error(observed_data, modeled_data) 32 | 33 | np.testing.assert_allclose(actual, expected) 34 | 35 | 36 | def test_mean_squared_error(observed_data, modeled_data): 37 | 38 | expected = 0.038 39 | actual = hydrocalcs.mean_squared_error(observed_data, modeled_data) 40 | 41 | np.testing.assert_allclose(actual, expected) 42 | 43 | 44 | def test_relative_error(observed_data, modeled_data): 45 | 46 | expected = np.array([0.00359066, -0.0016129, 0.00305344, 0.00463679, -0.00163666]) 47 | actual = hydrocalcs.relative_error(observed_data, modeled_data) 48 | 49 | np.testing.assert_allclose(actual, expected, atol=1e-8) 50 | 51 | 52 | def test_percent_error(observed_data, modeled_data): 53 | 54 | expected = np.array([0.359066, -0.16129, 0.305344, 0.463679, -0.163666]) 55 | actual = hydrocalcs.percent_error(observed_data, modeled_data) 56 | 57 | np.testing.assert_allclose(actual, expected, atol=1e-6) 58 | 59 | 60 | def test_percent_difference(observed_data, modeled_data): 61 | 62 | expected = np.array([0.35971223, -0.16116035, 0.3058104, 0.464756, -0.1635323]) 63 | actual = hydrocalcs.percent_difference(observed_data, modeled_data) 64 | 65 | np.testing.assert_allclose(actual, expected, atol=1e-6) 66 | 67 | 68 | def test_r_squared(observed_data, modeled_data): 69 | 70 | expected = 0.99768587638100936 71 | actual = hydrocalcs.r_squared(observed_data, modeled_data) 72 | 73 | np.testing.assert_allclose(actual, expected) 74 | 75 | 76 | def test_nash_sutcliffe(observed_data, modeled_data): 77 | 78 | expected = 0.99682486631016043 79 | actual = hydrocalcs.nash_sutcliffe(observed_data, modeled_data) 80 | 81 | np.testing.assert_allclose(actual, expected) 82 | 
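The metric tests above pin the expected values down exactly. As a point of reference, a minimal numpy sketch that reproduces those expected values is shown below; this is an illustration consistent with the test data, not necessarily the package's actual `hydrocalcs` implementation:

```python
import numpy as np

def absolute_error(observed, modeled):
    # Signed residuals: positive when the model under-predicts.
    return observed - modeled

def mean_squared_error(observed, modeled):
    # Mean of the squared residuals.
    return np.mean((observed - modeled) ** 2)

def nash_sutcliffe(observed, modeled):
    # 1 minus the ratio of the residual sum of squares to the sum of
    # squared deviations of the observations from their mean;
    # 1.0 indicates a perfect fit.
    residuals = observed - modeled
    deviations = observed - np.mean(observed)
    return 1 - np.sum(residuals ** 2) / np.sum(deviations ** 2)

# The same fixture arrays used by the tests.
observed = np.array([55.7, 62.0, 65.5, 64.7, 61.1])
modeled = np.array([55.5, 62.1, 65.3, 64.4, 61.2])
```

With these arrays, `mean_squared_error` gives 0.038 and `nash_sutcliffe` gives 0.99682486631016043, matching the expected values asserted in the tests above.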
-------------------------------------------------------------------------------- /tests/test_modelconfigfile.py: -------------------------------------------------------------------------------- 1 | """Tests for modelconfigfile module.""" 2 | 3 | import pytest 4 | from pathlib import Path, PurePath 5 | 6 | from topmodelpy.exceptions import (ModelConfigFileErrorInvalidSection, 7 | ModelConfigFileErrorInvalidFilePath, 8 | ModelConfigFileErrorInvalidOption) 9 | from topmodelpy import modelconfigfile 10 | 11 | 12 | def test_modelconfig_obj(modelconfig_obj): 13 | expected_sections = ["Inputs", "Outputs", "Options"] 14 | expected_input_dir = Path.cwd() 15 | expected_parameters_file = PurePath(expected_input_dir).joinpath( 16 | "parameters.csv") 17 | expected_timeseries_file = PurePath(expected_input_dir).joinpath( 18 | "timeseries.csv") 19 | expected_twi_file = PurePath(expected_input_dir).joinpath( 20 | "twi.csv") 21 | expected_output_dir = Path.cwd() 22 | expected_option_pet = "hamon" 23 | expected_option_snowmelt = True 24 | 25 | actual_sections = modelconfig_obj.sections() 26 | actual_input_dir = modelconfig_obj["Inputs"]["input_dir"] 27 | actual_parameters_file = modelconfig_obj["Inputs"]["parameters_file"] 28 | actual_timeseries_file = modelconfig_obj["Inputs"]["timeseries_file"] 29 | actual_twi_file = modelconfig_obj["Inputs"]["twi_file"] 30 | actual_output_dir = modelconfig_obj["Outputs"]["output_dir"] 31 | actual_option_pet = modelconfig_obj["Options"]["option_pet"] 32 | actual_option_snowmelt = modelconfig_obj["Options"].getboolean("option_snowmelt") 33 | 34 | assert actual_sections == expected_sections 35 | assert actual_input_dir == str(expected_input_dir) 36 | assert actual_parameters_file == str(expected_parameters_file) 37 | assert actual_timeseries_file == str(expected_timeseries_file) 38 | assert actual_twi_file == str(expected_twi_file) 39 | assert actual_output_dir == str(expected_output_dir) 40 | assert actual_option_pet == expected_option_pet 41 | assert
actual_option_snowmelt == expected_option_snowmelt 42 | 43 | 44 | def test_modelconfig_obj_no_sections(modelconfig_obj_invalid_sections): 45 | with pytest.raises(ModelConfigFileErrorInvalidSection) as err: 46 | modelconfigfile.check_config_sections(modelconfig_obj_invalid_sections) 47 | 48 | assert "Invalid section" in str(err.value) 49 | 50 | 51 | def test_modelconfig_obj_invalid_filepath(modelconfig_obj_invalid_filepath): 52 | with pytest.raises(ModelConfigFileErrorInvalidFilePath) as err: 53 | modelconfigfile.check_config_filepaths(modelconfig_obj_invalid_filepath) 54 | 55 | assert "Invalid file path" in str(err.value) 56 | 57 | 58 | def test_modelconfig_obj_invalid_options(modelconfig_obj_invalid_options): 59 | with pytest.raises(ModelConfigFileErrorInvalidOption) as err: 60 | modelconfigfile.check_config_options(modelconfig_obj_invalid_options) 61 | 62 | assert "Invalid option" in str(err.value) 63 | -------------------------------------------------------------------------------- /tests/test_parametersfile.py: -------------------------------------------------------------------------------- 1 | """Tests for parametersfile module.""" 2 | 3 | from io import StringIO 4 | import pytest 5 | 6 | from topmodelpy.exceptions import (ParametersFileErrorInvalidHeader, 7 | ParametersFileErrorInvalidScalingParameter, 8 | ParametersFileErrorInvalidLatitude, 9 | ParametersFileErrorInvalidSoilDepthTotal, 10 | ParametersFileErrorInvalidSoilDepthAB, 11 | ParametersFileErrorInvalidFieldCapacity, 12 | ParametersFileErrorInvalidMacropore, 13 | ParametersFileErrorInvalidImperviousArea,) 14 | from topmodelpy import parametersfile 15 | 16 | 17 | def test_parameters_file_read_in(parameters_file): 18 | expected = { 19 | "scaling_parameter": { 20 | "value": 10, 21 | "units": "millimeters", 22 | "description": "a description", 23 | }, 24 | } 25 | filestream = StringIO(parameters_file) 26 | actual = parametersfile.read_in(filestream) 27 | 28 | assert 
isinstance(actual["scaling_parameter"]["value"], float) 29 | assert actual["scaling_parameter"]["value"] == ( 30 | expected["scaling_parameter"]["value"]) 31 | assert actual["scaling_parameter"]["units"] == ( 32 | expected["scaling_parameter"]["units"]) 33 | assert actual["scaling_parameter"]["description"] == ( 34 | expected["scaling_parameter"]["description"]) 35 | 36 | 37 | def test_parameters_file_invalid_header(parameters_file_invalid_header): 38 | filestream = StringIO(parameters_file_invalid_header) 39 | 40 | with pytest.raises(ParametersFileErrorInvalidHeader) as err: 41 | parametersfile.read_in(filestream) 42 | 43 | assert "Invalid header" in str(err.value) 44 | 45 | 46 | def test_parameters_file_invalid_scaling_parameter(): 47 | invalid_value = 0 48 | with pytest.raises(ParametersFileErrorInvalidScalingParameter) as err: 49 | parametersfile.check_scaling_parameter(invalid_value) 50 | 51 | assert "Invalid scaling parameter" in str(err.value) 52 | 53 | 54 | def test_parameters_file_invalid_latitude(): 55 | invalid_value = -1 56 | with pytest.raises(ParametersFileErrorInvalidLatitude) as err: 57 | parametersfile.check_latitude(invalid_value) 58 | 59 | assert "Invalid latitude" in str(err.value) 60 | 61 | 62 | def test_parameters_file_invalid_soil_depth_total(): 63 | invalid_value = 0 64 | with pytest.raises(ParametersFileErrorInvalidSoilDepthTotal) as err: 65 | parametersfile.check_soil_depth_total(invalid_value) 66 | 67 | assert "Invalid soil depth total" in str(err.value) 68 | 69 | 70 | def test_parameters_file_invalid_soil_depth_ab_horizon_lt_zero(): 71 | invalid_value = 0 72 | soil_depth_total = 1 73 | with pytest.raises(ParametersFileErrorInvalidSoilDepthAB) as err: 74 | parametersfile.check_soil_depth_ab_horizon(invalid_value, soil_depth_total) 75 | 76 | assert "Invalid soil depth ab horizon" in str(err.value) 77 | 78 | 79 | def test_parameters_file_invalid_soil_depth_ab_horizon_gt_soil_depth_total(): 80 | invalid_value = 10 81 | soil_depth_total = 1 
82 | with pytest.raises(ParametersFileErrorInvalidSoilDepthAB) as err: 83 | parametersfile.check_soil_depth_ab_horizon(invalid_value, soil_depth_total) 84 | 85 | assert "Invalid soil depth ab horizon" in str(err.value) 86 | 87 | 88 | def test_parameters_file_invalid_field_capacity_fraction(): 89 | invalid_value = 2 90 | with pytest.raises(ParametersFileErrorInvalidFieldCapacity) as err: 91 | parametersfile.check_field_capacity(invalid_value) 92 | 93 | assert "Invalid field capacity" in str(err.value) 94 | 95 | 96 | def test_parameters_file_invalid_macropore_fraction(): 97 | invalid_value = 2 98 | with pytest.raises(ParametersFileErrorInvalidMacropore) as err: 99 | parametersfile.check_macropore(invalid_value) 100 | 101 | assert "Invalid macropore" in str(err.value) 102 | 103 | 104 | def test_parameters_file_invalid_impervious_area_fraction(): 105 | invalid_value = 2 106 | with pytest.raises(ParametersFileErrorInvalidImperviousArea) as err: 107 | parametersfile.check_impervious_area(invalid_value) 108 | 109 | assert "Invalid impervious area" in str(err.value) 110 | -------------------------------------------------------------------------------- /tests/test_timeseriesfile.py: -------------------------------------------------------------------------------- 1 | """Tests for timeseriesfile module.""" 2 | 3 | from datetime import datetime 4 | from io import StringIO 5 | import numpy as np 6 | import pandas as pd 7 | import pytest 8 | 9 | from topmodelpy.exceptions import (TimeseriesFileErrorInvalidHeader, 10 | TimeseriesFileErrorMissingValues, 11 | TimeseriesFileErrorMissingDates, 12 | TimeseriesFileErrorInvalidTimestep) 13 | from topmodelpy import timeseriesfile 14 | 15 | 16 | def test_timeseries_file_read_in(timeseries_file): 17 | expected = pd.DataFrame({ 18 | "date": np.array([ 19 | datetime(2019, 1, 1, 0, 0), 20 | datetime(2019, 1, 2, 0, 0), 21 | datetime(2019, 1, 3, 0, 0), 22 | datetime(2019, 1, 4, 0, 0), 23 | datetime(2019, 1, 5, 0, 0)]), 24 | "temperature":
np.array([1.0, 1.1, 1.2, 1.3, 1.4]), 25 | "precipitation": np.array([2.0, 2.1, 2.2, 2.3, 2.4]), 26 | "pet": np.array([3.0, 3.1, 3.2, 3.3, 3.4]), 27 | "flow_observed": np.array([4.0, 4.1, 4.2, 4.3, 4.4]), 28 | }) 29 | filestream = StringIO(timeseries_file) 30 | actual = timeseriesfile.read_in(filestream) 31 | 32 | np.testing.assert_allclose(actual["precipitation"], 33 | expected["precipitation"]) 34 | np.testing.assert_allclose(actual["temperature"], 35 | expected["temperature"]) 36 | np.testing.assert_allclose(actual["pet"], 37 | expected["pet"]) 38 | np.testing.assert_allclose(actual["flow_observed"], 39 | expected["flow_observed"]) 40 | assert actual.dtypes.all() == "float64" 41 | assert isinstance(actual.index, pd.DatetimeIndex) 42 | assert (actual.index[1] - actual.index[0]).days == 1 43 | 44 | 45 | def test_timeseries_file_invalid_header(timeseries_file_invalid_header): 46 | filestream = StringIO(timeseries_file_invalid_header) 47 | 48 | with pytest.raises(TimeseriesFileErrorInvalidHeader) as err: 49 | timeseriesfile.read_in(filestream) 50 | 51 | assert "Invalid header" in str(err.value) 52 | 53 | 54 | def test_timeseries_file_missing_dates(timeseries_file_missing_dates): 55 | filestream = StringIO(timeseries_file_missing_dates) 56 | 57 | with pytest.raises(TimeseriesFileErrorMissingDates) as err: 58 | timeseriesfile.read_in(filestream) 59 | 60 | assert "Missing dates" in str(err.value) 61 | 62 | 63 | def test_timeseries_file_missing_values(timeseries_file_missing_values): 64 | filestream = StringIO(timeseries_file_missing_values) 65 | 66 | with pytest.raises(TimeseriesFileErrorMissingValues) as err: 67 | timeseriesfile.read_in(filestream) 68 | 69 | assert "Missing values" in str(err.value) 70 | 71 | 72 | def test_timeseries_file_invalid_timestep(timeseries_file_invalid_timestep): 73 | filestream = StringIO(timeseries_file_invalid_timestep) 74 | 75 | with pytest.raises(TimeseriesFileErrorInvalidTimestep) as err: 76 | timeseriesfile.read_in(filestream) 77 | 78 
| print(err.value) 79 | assert "Invalid timestep" in str(err.value) 80 | -------------------------------------------------------------------------------- /tests/test_topmodel.py: -------------------------------------------------------------------------------- 1 | """Test Topmodel class.""" 2 | 3 | import numpy as np 4 | 5 | from topmodelpy.topmodel import Topmodel 6 | 7 | 8 | def test_topmodel_init(parameters_wolock, 9 | timeseries_wolock, 10 | twi_wolock, 11 | twi_weighted_mean_wolock): 12 | """Test Topmodel class initialization""" 13 | 14 | # Initialize Topmodel 15 | topmodel = Topmodel( 16 | scaling_parameter=parameters_wolock["scaling_parameter"], 17 | saturated_hydraulic_conductivity=( 18 | parameters_wolock["saturated_hydraulic_conductivity"] 19 | ), 20 | macropore_fraction=parameters_wolock["macropore_fraction"], 21 | soil_depth_total=parameters_wolock["soil_depth_total"], 22 | soil_depth_ab_horizon=parameters_wolock["soil_depth_ab_horizon"], 23 | field_capacity_fraction=parameters_wolock["field_capacity_fraction"], 24 | latitude=parameters_wolock["latitude"], 25 | basin_area_total=parameters_wolock["basin_area_total"], 26 | impervious_area_fraction=parameters_wolock["impervious_area_fraction"], 27 | twi_values=twi_wolock["twi"].values, 28 | twi_saturated_areas=twi_wolock["proportion"].values, 29 | twi_mean=twi_weighted_mean_wolock, 30 | precip_available=timeseries_wolock["precip_minus_pet"].values, 31 | flow_initial=1, 32 | soil_depth_roots=1, 33 | timestep_daily_fraction=1 34 | ) 35 | 36 | assert(topmodel.scaling_parameter == 37 | parameters_wolock["scaling_parameter"]) 38 | assert(topmodel.saturated_hydraulic_conductivity == 39 | parameters_wolock["saturated_hydraulic_conductivity"]) 40 | assert(topmodel.macropore_fraction == 41 | parameters_wolock["macropore_fraction"]) 42 | assert(topmodel.soil_depth_total == 43 | parameters_wolock["soil_depth_total"]) 44 | assert(topmodel.soil_depth_ab_horizon == 45 | parameters_wolock["soil_depth_ab_horizon"]) 46 | 
assert(topmodel.field_capacity_fraction == 47 | parameters_wolock["field_capacity_fraction"]) 48 | assert(topmodel.latitude == 49 | parameters_wolock["latitude"]) 50 | assert(topmodel.basin_area_total == 51 | parameters_wolock["basin_area_total"]) 52 | assert(topmodel.impervious_area_fraction == 53 | parameters_wolock["impervious_area_fraction"]) 54 | np.testing.assert_allclose(topmodel.twi_values, 55 | twi_wolock["twi"].values) 56 | np.testing.assert_allclose(topmodel.twi_saturated_areas, 57 | twi_wolock["proportion"].values) 58 | assert(topmodel.twi_mean == 59 | twi_weighted_mean_wolock) 60 | np.testing.assert_allclose(topmodel.precip_available, 61 | timeseries_wolock["precip_minus_pet"].values) 62 | assert(topmodel.flow_initial == 1) 63 | assert(topmodel.soil_depth_roots == 1) 64 | assert(topmodel.timestep_daily_fraction == 1) 65 | 66 | 67 | def test_topmodel_run(parameters_wolock, 68 | timeseries_wolock, 69 | twi_wolock, 70 | twi_weighted_mean_wolock): 71 | """Test Topmodel run with input data from Dave Wolock's Topmodel version. 72 | Note: 73 | This version and Wolock's version produce predicted flow values that 74 | are close (max difference is 0.047 or 4.7%). However, there are small 75 | differences likely due to slightly different twi mean values and 76 | floating point differences from rounding. This test allows for a 77 | small relative tolerance (rtol) of 0.05 for assertions.
78 | 79 | Reference: 80 | Python Relative tolerance (rtol) vs Absolute tolerance (atol) 81 | https://www.python.org/dev/peps/pep-0485/#defaults 82 | """ 83 | 84 | # Initialize Topmodel 85 | topmodel = Topmodel( 86 | scaling_parameter=parameters_wolock["scaling_parameter"], 87 | saturated_hydraulic_conductivity=( 88 | parameters_wolock["saturated_hydraulic_conductivity"] 89 | ), 90 | macropore_fraction=parameters_wolock["macropore_fraction"], 91 | soil_depth_total=parameters_wolock["soil_depth_total"], 92 | soil_depth_ab_horizon=parameters_wolock["soil_depth_ab_horizon"], 93 | field_capacity_fraction=parameters_wolock["field_capacity_fraction"], 94 | latitude=parameters_wolock["latitude"], 95 | basin_area_total=parameters_wolock["basin_area_total"], 96 | impervious_area_fraction=parameters_wolock["impervious_area_fraction"], 97 | twi_values=twi_wolock["twi"].values, 98 | twi_saturated_areas=twi_wolock["proportion"].values, 99 | twi_mean=twi_weighted_mean_wolock, 100 | precip_available=timeseries_wolock["precip_minus_pet"].values, 101 | flow_initial=1, 102 | timestep_daily_fraction=1, 103 | soil_depth_roots=1 104 | ) 105 | 106 | topmodel.run() 107 | 108 | diff = (topmodel.flow_predicted 109 | - timeseries_wolock["flow_predicted"].values) 110 | print("Difference between Lant and Wolock: {}".format(diff)) 111 | print("Max difference: {}".format(max(diff))) 112 | print("The difference occurred at index: {}".format(np.where(diff == 113 | max(diff)))) 114 | np.testing.assert_allclose(topmodel.flow_predicted, 115 | timeseries_wolock["flow_predicted"].values, 116 | rtol=0.05) 117 | -------------------------------------------------------------------------------- /tests/test_twifile.py: -------------------------------------------------------------------------------- 1 | """Tests for twi module.""" 2 | 3 | from datetime import datetime 4 | from io import StringIO 5 | import numpy as np 6 | import pandas as pd 7 | import pytest 8 | 9 | from topmodelpy.exceptions import 
(TwiFileErrorInvalidHeader, 10 | TwiFileErrorMissingValues, 11 | TwiFileErrorInvalidProportion) 12 | from topmodelpy import twifile 13 | 14 | 15 | def test_twi_file_read_in(twi_file): 16 | expected = pd.DataFrame({ 17 | "bin": np.array([1, 2, 3, 4, 5]), 18 | "twi": np.array([0.02, 0.03, 0.04, 0.05, 0.06]), 19 | "proportion": np.array([0.10, 0.15, 0.25, 0.30, 0.20]), 20 | "cells": np.array([10, 15, 25, 30, 20]), 21 | }) 22 | filestream = StringIO(twi_file) 23 | actual = twifile.read_in(filestream) 24 | 25 | np.testing.assert_allclose(actual["bin"], 26 | expected["bin"]) 27 | np.testing.assert_allclose(actual["twi"], 28 | expected["twi"]) 29 | np.testing.assert_allclose(actual["proportion"], 30 | expected["proportion"]) 31 | np.testing.assert_allclose(actual["cells"], 32 | expected["cells"]) 33 | assert actual.dtypes.all() == "float64" 34 | 35 | 36 | def test_twi_file_invalid_header(twi_file_invalid_header): 37 | filestream = StringIO(twi_file_invalid_header) 38 | 39 | with pytest.raises(TwiFileErrorInvalidHeader) as err: 40 | twifile.read_in(filestream) 41 | 42 | assert "Invalid header" in str(err.value) 43 | 44 | 45 | def test_twi_file_missing_values(twi_file_missing_values): 46 | filestream = StringIO(twi_file_missing_values) 47 | 48 | with pytest.raises(TwiFileErrorMissingValues) as err: 49 | twifile.read_in(filestream) 50 | 51 | assert "Missing values" in str(err.value) 52 | 53 | 54 | def test_twi_file_invalid_proportion(twi_file_invalid_proportion): 55 | filestream = StringIO(twi_file_invalid_proportion) 56 | 57 | with pytest.raises(TwiFileErrorInvalidProportion) as err: 58 | twifile.read_in(filestream) 59 | 60 | assert "Invalid sum of proportion" in str(err.value) 61 | -------------------------------------------------------------------------------- /tests/testdata/twi_wolock.csv: -------------------------------------------------------------------------------- 1 | bin,twi,proportion,cells 2 | 1,11.6817837,0.003704784,0.370478397 3 | 
2,11.21175,0.005604416,0.560441613 4 | 3,10.7417173,0.008413193,0.841319282 5 | 4,10.2716837,0.012517415,1.2517415 6 | 5,9.80165005,0.018429315,1.84293147 7 | 6,9.33161736,0.026795687,2.67956872 8 | 7,8.86158371,0.038372401,3.83724011 9 | 8,8.39155006,0.053925101,5.39251007 10 | 9,7.9215169,0.073987022,7.3987022 11 | 10,7.45148373,0.098363318,9.83633175 12 | 11,6.98145008,0.125227734,12.5227734 13 | 12,6.51141691,0.149650559,14.9650559 14 | 13,6.04138327,0.161592662,16.1592662 15 | 14,5.5713501,0.14437668,14.437668 16 | 15,5.10131645,0.079039745,7.9039745 17 | 16,0,0,0 18 | 17,0,0,0 19 | 18,0,0,0 20 | 19,0,0,0 21 | 20,0,0,0 22 | -------------------------------------------------------------------------------- /topmodelpy/__init__.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | """ 3 | topmodelpy 4 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 5 | is a command line interface for a rainfall-runoff model that predicts the amount of water flow in rivers. 
6 | 7 | Nutshell 8 | -------- 9 | Here is a small example of using topmodelpy 10 | 11 | :authors: 2019 by Jeremiah Lant, see AUTHORS 12 | :license: CC0 1.0, see LICENSE file for details 13 | """ 14 | 15 | __version__ = "0.1.0" 16 | -------------------------------------------------------------------------------- /topmodelpy/cli.py: -------------------------------------------------------------------------------- 1 | """Main `topmodelpy` command line interface""" 2 | 3 | 4 | import click 5 | import sys 6 | 7 | from topmodelpy.main import topmodelpy 8 | 9 | 10 | class Options: 11 | def __init__(self): 12 | self.verbose = False 13 | self.show = False 14 | 15 | 16 | # Create a decorator to pass options to each command 17 | pass_options = click.make_pass_decorator(Options, ensure=True) 18 | 19 | 20 | @click.group() 21 | @click.option("-v", "--verbose", is_flag=True, 22 | help="Print model run details.") 23 | @click.option("-s", "--show", is_flag=True, 24 | help="Show output plots.") 25 | @click.pass_context 26 | def main(ctx, verbose, show): 27 | """Topmodelpy is a command line tool for a rainfall-runoff 28 | model that predicts the amount of water flow in rivers. 29 | """ 30 | ctx.obj = options = Options()  # store flags on the context so @pass_options finds them 31 | options.verbose, options.show = verbose, show 32 | 33 | 34 | @main.command() 35 | @click.argument("configfile", type=click.Path(exists=True)) 36 | @pass_options 37 | def run(options, configfile): 38 | """Run Topmodel with a model configuration file. 39 | 40 | The model configuration file contains the specifications for a model run. 41 | This command takes in the path to the model configuration file.
42 | """ 43 | try: 44 | click.echo("Running model...") 45 | topmodelpy(configfile, options) 46 | click.echo("Finished!") 47 | click.echo("Output saved as specified in the model config file.") 48 | except Exception as err: 49 | click.echo(err) 50 | sys.exit(1) 51 | 52 | if options.verbose: 53 | click.echo("Verbose on") 54 | if options.show: 55 | click.echo("Show on") 56 | 57 | 58 | @main.command() 59 | @pass_options 60 | def runexample(options): 61 | try: 62 | click.echo("Run example") 63 | except Exception as err: 64 | click.echo(err) 65 | 66 | if options.verbose: 67 | click.echo("Verbose on") 68 | if options.show: 69 | click.echo("Show on") 70 | -------------------------------------------------------------------------------- /topmodelpy/exceptions.py: -------------------------------------------------------------------------------- 1 | """Exceptions for topmodelpy 2 | """ 3 | 4 | 5 | class TopmodelpyException(Exception): 6 | """ 7 | Base exception class. All custom exceptions subclass this class. 8 | """ 9 | def __str__(self): 10 | return self.message 11 | 12 | 13 | class ModelConfigFileErrorInvalidSection(TopmodelpyException): 14 | """ 15 | Raised when a model config file does not contain required sections. 16 | """ 17 | def __init__(self, invalid_sections, valid_sections): 18 | self.message = ( 19 | "Error with model config file.\n" 20 | "Invalid section(s):\n" 21 | " {}\n" 22 | "Valid sections are:\n" 23 | " {}\n".format(invalid_sections, valid_sections) 24 | ) 25 | 26 | 27 | class ModelConfigFileErrorInvalidFilePath(TopmodelpyException): 28 | """ 29 | Raised when a model config file does not contain valid file paths. 30 | """ 31 | def __init__(self, filepath): 32 | self.message = ( 33 | "Error with model config file.\n" 34 | "Invalid file path:\n" 35 | " {}".format(filepath) 36 | ) 37 | 38 | 39 | class ModelConfigFileErrorInvalidOption(TopmodelpyException): 40 | """ 41 | Raised when a model config file does not contain valid options.
42 | """ 43 | def __init__(self, invalid_options, valid_options): 44 | self.message = ( 45 | "Error with model config file.\n" 46 | "Invalid option(s):\n" 47 | " option_pet = {pet}\n" 48 | " option_snowmelt = {snowmelt}\n" 49 | "".format(**invalid_options) 50 | ) 51 | self.message = self.message + ( 52 | "Valid options (contained in each respective list):\n" 53 | " option_pet = {pet}\n" 54 | " option_snowmelt = {snowmelt}\n" 55 | "".format(**valid_options) 56 | ) 57 | 58 | 59 | class ParametersFileErrorInvalidHeader(TopmodelpyException): 60 | """ 61 | Raised when a file is not a properly formatted parameters csv file. 62 | """ 63 | def __init__(self, invalid_header, valid_header): 64 | self.message = ( 65 | "Error with parameters file.\n" 66 | "Invalid header:\n" 67 | " {}\n" 68 | "Valid header:\n" 69 | " {}\n" 70 | "".format(invalid_header, valid_header) 71 | ) 72 | 73 | 74 | class ParametersFileErrorInvalidScalingParameter(TopmodelpyException): 75 | """ 76 | Raised when a file is not a properly formatted parameters csv file. 77 | """ 78 | def __init__(self, invalid_value): 79 | self.message = ( 80 | "Error with parameters file.\n" 81 | "Invalid scaling parameter:\n" 82 | " {}\n" 83 | "Valid scaling parameter:\n" 84 | " scaling_parameter > 0\n" 85 | "".format(invalid_value) 86 | ) 87 | 88 | 89 | class ParametersFileErrorInvalidLatitude(TopmodelpyException): 90 | """ 91 | Raised when a file is not a properly formatted parameters csv file. 92 | """ 93 | def __init__(self, invalid_value): 94 | self.message = ( 95 | "Error with parameters file.\n" 96 | "Invalid latitude:\n" 97 | " {}\n" 98 | "Valid latitude:\n" 99 | " 0 <= latitude <= 90\n" 100 | "".format(invalid_value) 101 | ) 102 | 103 | 104 | class ParametersFileErrorInvalidSoilDepthTotal(TopmodelpyException): 105 | """ 106 | Raised when a file is not a properly formatted parameters csv file. 
107 | """ 108 | def __init__(self, invalid_value): 109 | self.message = ( 110 | "Error with parameters file.\n" 111 | "Invalid soil depth total:\n" 112 | " {}\n" 113 | "Valid soil depth total:\n" 114 | " soil_depth_total > 0\n" 115 | "".format(invalid_value) 116 | ) 117 | 118 | 119 | class ParametersFileErrorInvalidSoilDepthAB(TopmodelpyException): 120 | """ 121 | Raised when a file is not a properly formatted parameters csv file. 122 | """ 123 | def __init__(self, invalid_value, soil_depth_total): 124 | self.message = ( 125 | "Error with parameters file.\n" 126 | "Invalid soil depth ab horizon:\n" 127 | " {}\n" 128 | "Valid soil depth ab horizon:\n" 129 | " soil_depth_ab_horizon > 0\n" 130 | " soil_depth_ab_horizon < {} (soil_depth_total)\n" 131 | "".format(invalid_value, soil_depth_total) 132 | ) 133 | 134 | 135 | class ParametersFileErrorInvalidFieldCapacity(TopmodelpyException): 136 | """ 137 | Raised when a file is not a properly formatted parameters csv file. 138 | """ 139 | def __init__(self, invalid_value): 140 | self.message = ( 141 | "Error with parameters file.\n" 142 | "Invalid field capacity:\n" 143 | " {}\n" 144 | "Valid field capacity:\n" 145 | " 0 <= field_capacity_fraction <=1\n" 146 | "".format(invalid_value) 147 | ) 148 | 149 | 150 | class ParametersFileErrorInvalidMacropore(TopmodelpyException): 151 | """ 152 | Raised when a file is not a properly formatted parameters csv file. 153 | """ 154 | def __init__(self, invalid_value): 155 | self.message = ( 156 | "Error with parameters file.\n" 157 | "Invalid macropore:\n" 158 | " {}\n" 159 | "Valid macropore:\n" 160 | " 0 <= macropore_fraction <=1\n" 161 | "".format(invalid_value) 162 | ) 163 | 164 | 165 | class ParametersFileErrorInvalidImperviousArea(TopmodelpyException): 166 | """ 167 | Raised when a file is not a properly formatted parameters csv file. 
168 | """ 169 | def __init__(self, invalid_value): 170 | self.message = ( 171 | "Error with parameters file.\n" 172 | "Invalid impervious area:\n" 173 | " {}\n" 174 | "Valid impervious area:\n" 175 | " 0 <= impervious_area_fraction <=1\n" 176 | "".format(invalid_value) 177 | ) 178 | 179 | 180 | class TimeseriesFileErrorInvalidHeader(TopmodelpyException): 181 | """ 182 | Raised when a file is not a properly formatted timeseries csv file. 183 | """ 184 | def __init__(self, invalid_header, valid_header): 185 | self.message = ( 186 | "Error with timeseries file.\n" 187 | "Invalid header:\n" 188 | " {}\n" 189 | "Valid header:\n" 190 | " {}\n" 191 | "".format(invalid_header, valid_header) 192 | ) 193 | 194 | 195 | class TimeseriesFileErrorMissingValues(TopmodelpyException): 196 | """ 197 | Raised when a file is not a properly formatted timeseries csv file. 198 | """ 199 | def __init__(self, missing_values): 200 | self.message = ( 201 | "Error with timeseries file.\n" 202 | "Missing values:\n" 203 | " {}\n" 204 | "".format(missing_values) 205 | ) 206 | 207 | 208 | class TimeseriesFileErrorMissingDates(TopmodelpyException): 209 | """ 210 | Raised when a file is not a properly formatted timeseries csv file. 211 | """ 212 | def __init__(self, timestamps_near_missing): 213 | self.message = ( 214 | "Error with timeseries file.\n" 215 | "Missing dates near:\n" 216 | " {}\n" 217 | "".format(timestamps_near_missing) 218 | ) 219 | 220 | 221 | class TimeseriesFileErrorInvalidTimestep(TopmodelpyException): 222 | """ 223 | Raised when a file is not a properly formatted timeseries csv file.
224 | """ 225 | def __init__(self, invalid_timestep): 226 | self.message = ( 227 | "Error with timeseries file.\n" 228 | "Invalid timestep:\n" 229 | " {}\n" 230 | "Valid timestep:\n" 231 | " timestep <= 1 (sub-daily or daily)\n" 232 | "".format(invalid_timestep) 233 | ) 234 | 235 | 236 | class TwiFileErrorInvalidHeader(TopmodelpyException): 237 | """ 238 | Raised when a file is not a properly formatted twi csv file. 239 | """ 240 | def __init__(self, invalid_header, valid_header): 241 | self.message = ( 242 | "Error with twi file.\n" 243 | "Invalid header:\n" 244 | " {}\n" 245 | "Valid header:\n" 246 | " {}\n" 247 | "".format(invalid_header, valid_header) 248 | ) 249 | 250 | 251 | class TwiFileErrorMissingValues(TopmodelpyException): 252 | """ 253 | Raised when a file is not a properly formatted twi csv file. 254 | """ 255 | def __init__(self, missing_values): 256 | self.message = ( 257 | "Error with twi file.\n" 258 | "Missing values:\n" 259 | " {}\n" 260 | "".format(missing_values) 261 | ) 262 | 263 | 264 | class TwiFileErrorInvalidProportion(TopmodelpyException): 265 | """ 266 | Raised when a file is not a properly formatted twi csv file. 267 | """ 268 | def __init__(self, invalid_proportion): 269 | self.message = ( 270 | "Error with twi file.\n" 271 | "Invalid sum of proportion column:\n" 272 | " {}\n" 273 | "Valid sum of proportion column:\n" 274 | " 1.0\n" 275 | "".format(invalid_proportion) 276 | ) 277 | -------------------------------------------------------------------------------- /topmodelpy/hydrocalcs.py: -------------------------------------------------------------------------------- 1 | """ 2 | Module of hydrologic calculations. 3 | 4 | References: 5 | 6 | Engineering and Design - Runoff from Snowmelt 7 | U.S. 
Army Corps of Engineers 8 | Engineering Manual 1110-2-1406 9 | https://www.wcc.nrcs.usda.gov/ftpref/wntsc/H&H/snow/COEemSnowmeltRunoff.pdf 10 | """ 11 | 12 | import numpy as np 13 | from scipy import stats 14 | 15 | 16 | def pet(dates, temperatures, latitude, method="hamon"): 17 | """Calculate potential evapotranspiration for various methods. 18 | 19 | :param dates: An array of python datetimes 20 | :type dates: numpy.ndarray 21 | :param temperatures: An array of temperatures, in degrees Celsius 22 | :type temperatures: numpy.ndarray 23 | :param latitude: A latitude, in decimal degrees 24 | :type latitude: float 25 | :return pet: array of pet values, in millimeters per day 26 | :rtype pet: numpy.ndarray 27 | """ 28 | if method.lower() == "hamon": 29 | return pet_hamon(dates, temperatures, latitude) 30 | 31 | 32 | def pet_hamon(dates, temperatures, latitude): 33 | """Calculate the amount of potential evapotranspiration in millimeters 34 | per day using the Hamon equation. 35 | 36 | :param dates: An array of python datetimes 37 | :type dates: numpy.ndarray 38 | :param temperatures: An array of temperatures, in degrees Celsius 39 | :type temperatures: numpy.ndarray 40 | :param latitude: A latitude, in decimal degrees 41 | :type latitude: float 42 | :return pet: array of pet values, in millimeters per day 43 | :rtype pet: numpy.ndarray 44 | 45 | ..
note:: 46 | Equation: 47 | 48 | (1) PET = 0.1651 * Ld * RHOSAT * KPEC 49 | 50 | where 51 | 52 | PET daily potential evapotranspiration (mm/day) 53 | Ld daytime length (daylight hours), time from sunrise 54 | to sunset in multiples of 12 hours (hours) 55 | RHOSAT saturated vapor density at the daily mean air 56 | temperature (T), g/m**3 57 | KPEC calibration coefficient, dimensionless 58 | set to 1.2 for southeastern United States 59 | 60 | Sub-equations: 61 | 62 | (2) RHOSAT = 216.7 * ESAT / (T + 273.3) 63 | 64 | (3) ESAT = 6.108 * EXP(17.26939 * T / (T + 237.3)) 65 | 66 | where 67 | 68 | T daily mean air temperature, (celsius) 69 | ESAT saturated vapor pressure at the given T, (mb) 70 | 71 | (4) Ld = (w / 15) * 2 72 | 73 | where 74 | 75 | w the sunset hour angle (degrees); Earth moves 15° per hour 76 | 77 | (5) w = arccos(-1 * tan(declination) * tan(latitude)) 78 | 79 | where 80 | 81 | latitude angle distance of a place north or south of the 82 | earth's equator (radians) 83 | 84 | declination angle between the Sun's rays and the equatorial 85 | plane (radians); the declination of the Earth is 86 | the angular distance at solar noon between the 87 | Sun and the Equator, north-positive 88 | 89 | (6) declination = 23.45° * sin((360 * (284 + N) / 365)) 90 | 91 | where 92 | 93 | N number of days after January 1 (the Julian Day) 94 | 95 | References: 96 | - Lu et al. (2005). A comparison of six potential evapotranspiration 97 | methods for regional use in the southeastern United States. 98 | 99 | - Brock, Thomas. (1981). Calculating solar radiation for ecological 100 | studies. Ecological Modelling.
101 | 102 | - https://en.wikipedia.org/wiki/Position_of_the_Sun 103 | """ 104 | if len(dates) != len(temperatures): 105 | raise IndexError( 106 | "Length of dates: {}\n" 107 | "Length of temperatures: {}\n" 108 | "Lengths of dates and temperatures must be equal" 109 | "".format(len(dates), len(temperatures)) 110 | ) 111 | 112 | DEG2RAD = np.pi/180 113 | RAD2DEG = 180/np.pi 114 | CALIBCOEFF = 1.2 115 | 116 | pet = [] 117 | for date, temperature in zip(dates, temperatures): 118 | 119 | # Declination 120 | # Note: using python datetimes which have .timetuple() method 121 | day_num = date.timetuple().tm_yday 122 | angle = 360 * ((284 + day_num) / 365) * DEG2RAD 123 | declination = (23.45 * DEG2RAD) * np.sin(angle) 124 | 125 | # calculate sunset hour angle in degrees (w) 126 | sunset_hour_angle = ( 127 | np.arccos(-1 * np.tan(declination) * np.tan(latitude * DEG2RAD)) 128 | * RAD2DEG 129 | ) 130 | 131 | # calculate daytime length in 12 hour unit (Ld) 132 | daytime_length = abs((sunset_hour_angle / 15) * 2) / 12 133 | 134 | # calculate saturated vapor pressure (ESAT) 135 | saturated_vapor_pressure = ( 136 | 6.108 * np.exp((17.26939 * temperature) / (temperature + 237.3)) 137 | ) 138 | 139 | # calculate saturated vapor density (RHOSAT) 140 | saturated_vapor_density = ( 141 | (216.7 * saturated_vapor_pressure) / (temperature + 273.3) 142 | ) 143 | 144 | # calculate potential evapotranspiration 145 | potential_evapotranspiration = ( 146 | 0.1651 * daytime_length * saturated_vapor_density * CALIBCOEFF 147 | ) 148 | 149 | # add to list 150 | pet.append(potential_evapotranspiration) 151 | 152 | # convert list to numpy array 153 | pet = np.array(pet) 154 | 155 | return pet 156 | 157 | 158 | def snowmelt(precipitation, 159 | temperatures, 160 | temperature_cutoff, 161 | snowmelt_rate_coeff_with_rain, 162 | snowmelt_rate_coeff, 163 | timestep_daily_fraction): 164 | """Snow melt routine.
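A standalone sketch of the snowpack accounting this routine performs (an assumption-laden simplification: only the degree-day branch is shown, while the full routine also switches to a rain-on-snow equation when precipitation is nonzero and scales melt by the timestep fraction). Cold steps bank precipitation as snowpack; warm steps melt at most the available snowpack and add it to that step's precipitation:

```python
import numpy as np

def snowmelt_sketch(precip_mm, temps_f, cutoff=32.0, melt_coeff=0.06):
    # mm -> inches, matching the unit handling in the routine below
    precip_in = np.asarray(precip_mm) / 25.4
    snowpack = 0.0
    adjusted = []
    for temp, precip in zip(temps_f, precip_in):
        if temp >= cutoff:
            # degree-day melt, capped at the snowpack actually available
            melt = min(melt_coeff * (temp - cutoff), snowpack)
            snowpack -= melt
            adjusted.append(precip + melt)
        else:
            # too cold to melt: bank the precipitation as snowpack
            snowpack += precip
            adjusted.append(0.0)
    return np.array(adjusted) * 25.4  # inches -> mm

adjusted = snowmelt_sketch([10.0, 0.0], [20.0, 40.0])
# step 1 (20 F): all 10 mm is banked as snowpack, adjusted precip is 0
# step 2 (40 F): the banked snow melts, adjusted precip is ~10 mm
```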
165 | 166 | :param precipitation: Precipitation rates, in millimeters per day 167 | :type precipitation: numpy.ndarray 168 | :param temperatures: Temperatures, in degrees Fahrenheit 169 | :type temperatures: numpy.ndarray 170 | :param temperature_cutoff: Temperature when melt begins, 171 | in degrees Fahrenheit 172 | :type temperature_cutoff: float 173 | :param snowmelt_rate_coeff_with_rain: Snowmelt coefficient when raining, 174 | 1/degrees Fahrenheit 175 | :type snowmelt_rate_coeff_with_rain: float 176 | :param snowmelt_rate_coeff: Snowmelt rate coefficient (often variable), 177 | in inches per degree Fahrenheit 178 | :type snowmelt_rate_coeff: float 179 | :param timestep_daily_fraction: Model timestep as a fraction of a day 180 | :type timestep_daily_fraction: float 181 | :return: Tuple of arrays of adjusted precipitation, snowmelt, 182 | and snowpack values, each array is in millimeters per day 183 | :rtype: Tuple 184 | 185 | """ 186 | precip_inches = precipitation / 25.4 # mm to inches 187 | 188 | snowprecip = [] 189 | snowmelts = [] 190 | snowpacks = [] 191 | 192 | snowmelt = 0 193 | snowpack = 0 194 | for temp, precip_inch in zip(temperatures, precip_inches): 195 | 196 | # If temp is high enough then there is snowmelt, 197 | # calculate amount of snowmelt, else no snowmelt, 198 | # snowpack accumulates by full precip amount 199 | if temp >= temperature_cutoff: 200 | # If it is raining, calculate snowmelt with rain, 201 | # else calculate snowmelt without rain 202 | if precip_inch > 0: 203 | snowmelt = snowmelt_rain_on_snow_heavily_forested( 204 | precip_inch, 205 | temp, 206 | temperature_cutoff, 207 | snowmelt_rate_coeff_with_rain 208 | ) 209 | 210 | # adjust daily snowmelt to same timestep as precip and temp 211 | snowmelt = snowmelt * timestep_daily_fraction 212 | 213 | else: 214 | snowmelt = snowmelt_temperature_index( 215 | temp, 216 | temperature_cutoff, 217 | snowmelt_rate_coeff 218 | ) 219 | 220 | # adjust daily snowmelt to same timestep as precip 
and temp 221 | snowmelt = snowmelt * timestep_daily_fraction 222 | 223 | # If there is more snowmelt than snowpack available to melt, then 224 | # limit the snowmelt to the amount of snowpack available 225 | if snowmelt >= snowpack: 226 | snowmelt = snowpack 227 | 228 | # Remove the amount of snowmelt from the snowpack, 229 | # and add the amount of snowmelt to the current precip amount 230 | snowpack = snowpack - snowmelt 231 | precip_inch = precip_inch + snowmelt 232 | 233 | # If temp is too cold for melting, then add the precip (snow) amount 234 | # to the snow pack and assume no water infiltrates on cold days by 235 | # setting precip to zero 236 | else: 237 | snowpack = snowpack + precip_inch 238 | precip_inch = 0 239 | 240 | snowprecip.append(precip_inch) 241 | snowmelts.append(snowmelt) 242 | snowpacks.append(snowpack) 243 | 244 | snowprecip = np.array(snowprecip) * 25.4 # inches to mm 245 | snowmelts = np.array(snowmelts) * 25.4 # inches to mm 246 | snowpacks = np.array(snowpacks) * 25.4 # inches to mm 247 | 248 | return snowprecip, snowmelts, snowpacks 249 | 250 | 251 | def snowmelt_rain_on_snow_heavily_forested(precipitation, 252 | temperatures, 253 | temperature_cutoff=32.0, 254 | rain_melt_coeff=0.007): 255 | """Calculate the amount of snowmelt for rain-on-snow situations in 256 | heavily forested areas (mean canopy cover greater than 80%) using a 257 | generalized equation from the reference below. This 258 | snowmelt calculation is from the family of energy budget 259 | solutions.
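A quick worked check of Equation 5-20 (the equation in the note below) with illustrative numbers and the default coefficients:

```python
import numpy as np

# Eq. 5-20: M = (0.074 + 0.007 * P_r) * (T_a - 32) + 0.05
def melt_rain_on_snow_forested(precip_in, temps_f,
                               temperature_cutoff=32.0,
                               rain_melt_coeff=0.007):
    return ((0.074 + rain_melt_coeff * precip_in)
            * (temps_f - temperature_cutoff) + 0.05)

# one inch/day of rain on snow at 42 F:
melt = melt_rain_on_snow_forested(np.array([1.0]), np.array([42.0]))
# (0.074 + 0.007 * 1) * (42 - 32) + 0.05 = 0.86 inches/day
```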
260 | 261 | :param precipitation: Precipitation rates, in inches/day 262 | :type precipitation: numpy.ndarray 263 | :param temperatures: Temperatures, in degrees Fahrenheit 264 | :type temperatures: numpy.ndarray 265 | :param temperature_cutoff: Temperature when melt begins, 266 | in degrees Fahrenheit 267 | :type temperature_cutoff: float 268 | :param rain_melt_coeff: Snowmelt coefficient when raining, 269 | 1/degrees Fahrenheit 270 | :type rain_melt_coeff: float 271 | :return snowmelt: Snowmelt values, in inches per day 272 | :rtype snowmelt: numpy.ndarray 273 | 274 | .. note:: 275 | 276 | Equation: 277 | 278 | M = (0.074 + 0.007 * P_r) * (T_a - 32) + 0.05 279 | 280 | where 281 | 282 | M snowmelt, inches/day 283 | P_r rate of precipitation, inches/day 284 | T_a temperature of saturated air, at 3 meters (10 ft) level, 285 | degrees Fahrenheit 286 | 287 | Reference: 288 | 289 | Engineering and Design - Runoff from Snowmelt 290 | U.S. Army Corps of Engineers 291 | Engineering Manual 1110-2-1406 292 | Chapter 5-3. Generalized Equations, Rain-on-Snow Situations, Equation 5-20 293 | https://www.wcc.nrcs.usda.gov/ftpref/wntsc/H&H/snow/COEemSnowmeltRunoff.pdf 294 | """ 295 | return ( 296 | (0.074 + rain_melt_coeff * precipitation) 297 | * (temperatures - temperature_cutoff) + 0.05 298 | ) 299 | 300 | 301 | def snowmelt_rain_on_snow_open_to_partly_forested(precipitation, 302 | temperatures, 303 | winds, 304 | temperature_cutoff=32.0, 305 | rain_melt_coeff=0.007, 306 | basin_wind_coeff=0.5): 307 | """Calculate the amount of snowmelt for rain-on-snow situations in 308 | open to partly forested areas (mean canopy 309 | cover between 10% and 80%) using a generalized 310 | equation. Snowmelt calculation is from the family 311 | of energy budget solutions.
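A worked check of Equation 5-19 (the equation in the note below), which adds a wind term to the forested variant; numbers are illustrative:

```python
import numpy as np

# Eq. 5-19: M = (0.029 + 0.0084 * k * v + 0.007 * P_r) * (T_a - 32) + 0.09
def melt_rain_on_snow_open(precip_in, temps_f, winds_mph,
                           temperature_cutoff=32.0,
                           rain_melt_coeff=0.007,
                           basin_wind_coeff=0.5):
    return ((0.029
             + 0.0084 * basin_wind_coeff * winds_mph
             + rain_melt_coeff * precip_in)
            * (temps_f - temperature_cutoff) + 0.09)

# one inch/day of rain at 42 F with a 10 mph wind, half-exposed basin:
melt = melt_rain_on_snow_open(np.array([1.0]),
                              np.array([42.0]),
                              np.array([10.0]))
# (0.029 + 0.042 + 0.007) * 10 + 0.09 = 0.87 inches/day
```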
312 | 313 | :param precipitation: Precipitation rates, in inches/day 314 | :type precipitation: numpy.ndarray 315 | :param temperatures: Temperatures of saturated air, 316 | in degrees Fahrenheit 317 | :type temperatures: numpy.ndarray 318 | :param winds: Winds, in miles per hour 319 | :type winds: numpy.ndarray 320 | :param temperature_cutoff: Temperature when melt begins, 321 | in degrees Fahrenheit 322 | :type temperature_cutoff: float 323 | :param rain_melt_coeff: Snowmelt coefficient when raining, 324 | 1/degrees Fahrenheit 325 | :type rain_melt_coeff: float 326 | :param basin_wind_coeff: Basin wind exposure coefficient, fraction 327 | :type basin_wind_coeff: float 328 | :return snowmelt: Snowmelt values, in inches per day 329 | :rtype snowmelt: numpy.ndarray 330 | 331 | .. note:: 332 | 333 | Equation: 334 | 335 | M = (0.029 + 0.0084 * k * v + 0.007 * P_r) * (T_a - 32) + 0.09 336 | 337 | where 338 | 339 | M snowmelt, inches/day 340 | k basin wind exposure coefficient, unitless 341 | v wind velocity, miles/hour 342 | P_r rate of precipitation, inches/day 343 | T_a temperature of saturated air, at 3 meters (10 ft) level, 344 | degrees Fahrenheit 345 | 346 | Reference: 347 | 348 | Engineering and Design - Runoff from Snowmelt 349 | U.S. Army Corps of Engineers 350 | Engineering Manual 1110-2-1406 351 | Chapter 5-3. Generalized Equations, Rain-on-Snow Situations, Equation 5-19 352 | https://www.wcc.nrcs.usda.gov/ftpref/wntsc/H&H/snow/COEemSnowmeltRunoff.pdf 353 | """ 354 | return ( 355 | (0.029 356 | + (0.0084 * basin_wind_coeff * winds) 357 | + (rain_melt_coeff * precipitation)) 358 | * (temperatures - temperature_cutoff) + 0.09 359 | ) 360 | 361 | 362 | def snowmelt_temperature_index(temperatures, 363 | temperature_cutoff=32.0, 364 | melt_rate_coeff=0.06): 365 | """Calculate the amount of snowmelt using a temperature index method, 366 | also called degree-day method. This method has its limitations as noted 367 | in the reference. 
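The degree-day arithmetic is small enough to check by hand. Note that the raw equation goes negative below the cutoff temperature; the snowmelt routine above only calls this function when the temperature is at or above the cutoff:

```python
import numpy as np

# degree-day melt: M = C_m * (T_a - T_b)
melt_rate_coeff = 0.06           # inches per degree Fahrenheit per period
temperatures = np.array([40.0])  # degrees Fahrenheit
melt = melt_rate_coeff * (temperatures - 32.0)
# 0.06 * (40 - 32) = 0.48 inches per period
```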
368 | 369 | :param temperatures: Temperatures, in degrees Fahrenheit 370 | :type temperatures: numpy.ndarray 371 | :param temperature_cutoff: Temperature when melt begins, 372 | in degrees Fahrenheit 373 | :type temperature_cutoff: float 374 | :param melt_rate_coeff: Snowmelt rate coefficient (often variable), 375 | in inches per degree Fahrenheit 376 | :type melt_rate_coeff: float 377 | :return snowmelt: Snowmelt values, in inches per day 378 | :rtype snowmelt: numpy.ndarray 379 | 380 | .. note:: 381 | 382 | Equation: 383 | 384 | M = C_m * (T_a - T_b) 385 | 386 | where 387 | 388 | M snowmelt, inches per period 389 | C_m melt-rate coefficient, inches/(degree Fahrenheit/period) 390 | T_a air temperature, degrees Fahrenheit 391 | T_b base temperature, degrees Fahrenheit 392 | 393 | Reference: 394 | 395 | Engineering and Design - Runoff from Snowmelt 396 | U.S. Army Corps of Engineers 397 | Engineering Manual 1110-2-1406 398 | Chapter 6-1, Equation 6-1 399 | https://www.wcc.nrcs.usda.gov/ftpref/wntsc/H&H/snow/COEemSnowmeltRunoff.pdf 400 | """ 401 | return melt_rate_coeff * (temperatures - temperature_cutoff) 402 | 403 | 404 | def weighted_mean(values, weights): 405 | """Calculate the weighted mean. 406 | 407 | :param values: Array of values 408 | :type values: numpy.ndarray 409 | :param weights: Array of weights 410 | :type weights: numpy.ndarray 411 | :rtype: float 412 | """ 413 | weighted_mean = (values * weights).sum() / weights.sum() 414 | 415 | return weighted_mean 416 | 417 | 418 | def absolute_error(observed, modeled): 419 | """Calculate the absolute error between two arrays.
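For example (note that the "absolute error" computed here is the signed difference, not its magnitude), with the mean squared error built on top of it as the module does:

```python
import numpy as np

observed = np.array([3.0, 4.0, 5.0])
modeled = np.array([2.5, 4.0, 6.0])

error = observed - modeled  # signed differences: [0.5, 0.0, -1.0]
mse = np.mean(error ** 2)   # mean squared error: (0.25 + 0 + 1) / 3
```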
420 | 421 | :param observed: Array of observed data 422 | :type observed: numpy.ndarray 423 | :param modeled: Array of modeled data 424 | :type modeled: numpy.ndarray 425 | :rtype: numpy.ndarray 426 | """ 427 | error = observed - modeled 428 | 429 | return error 430 | 431 | 432 | def mean_squared_error(observed, modeled): 433 | """Calculate the mean squared error between two arrays. 434 | 435 | :param observed: Array of observed data 436 | :type observed: numpy.ndarray 437 | :param modeled: Array of modeled data 438 | :type modeled: numpy.ndarray 439 | :rtype: float 440 | """ 441 | error = absolute_error(observed, modeled) 442 | mse = np.mean(error**2) 443 | 444 | return mse 445 | 446 | 447 | def relative_error(observed, modeled): 448 | """Calculate the relative error between two arrays. 449 | 450 | :param observed: Array of observed data 451 | :type observed: numpy.ndarray 452 | :param modeled: Array of modeled data 453 | :type modeled: numpy.ndarray 454 | :rtype: numpy.ndarray 455 | """ 456 | error = absolute_error(observed, modeled) / observed 457 | 458 | return error 459 | 460 | 461 | def percent_error(observed, modeled): 462 | """Calculate the percent error between two arrays. 463 | 464 | :param observed: Array of observed data 465 | :type observed: numpy.ndarray 466 | :param modeled: Array of modeled data 467 | :type modeled: numpy.ndarray 468 | :rtype: numpy.ndarray 469 | """ 470 | error = relative_error(observed, modeled) * 100 471 | 472 | return error 473 | 474 | 475 | def percent_difference(observed, modeled): 476 | """Calculate the percent difference between two arrays.
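Unlike percent error, which divides by the observed value, percent difference divides by the mean of the two values; a small worked example:

```python
import numpy as np

observed = np.array([4.0])
modeled = np.array([5.0])

# difference relative to the mean of the two values, as a percentage
mean = np.mean((observed, modeled), axis=0)         # [4.5]
percent_diff = ((modeled - observed) / mean) * 100  # [22.22...]
```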
477 | 478 | :param observed: Array of observed data 479 | :type observed: numpy.ndarray 480 | :param modeled: Array of modeled data 481 | :type modeled: numpy.ndarray 482 | :rtype: numpy.ndarray 483 | """ 484 | mean = np.mean((observed, modeled), axis=0) 485 | percent_diff = ((modeled - observed) / mean) * 100 486 | 487 | return percent_diff 488 | 489 | 490 | def r_squared(observed, modeled): 491 | """Calculate the Coefficient of Determination. Used to indicate how well 492 | data points fit a line or curve. Uses numpy.corrcoef for computation. 493 | 494 | :param observed: Array of observed data 495 | :type observed: numpy.ndarray 496 | :param modeled: Array of modeled data 497 | :type modeled: numpy.ndarray 498 | :rtype: float 499 | """ 500 | r = np.corrcoef(observed, modeled)[0, 1] 501 | coefficient = r**2 502 | 503 | return coefficient 504 | 505 | 506 | def nash_sutcliffe(observed, modeled): 507 | """Calculate the Nash-Sutcliffe model efficiency coefficient. 508 | Used to assess the predictive power of hydrological models. 509 | 510 | E = 1 - sum((observed - modeled)**2) / sum((observed - mean_observed)**2) 511 | 512 | :param observed: Array of observed data 513 | :type observed: numpy.ndarray 514 | :param modeled: Array of modeled data 515 | :type modeled: numpy.ndarray 516 | :rtype: float 517 | """ 518 | mean_observed = np.mean(observed) 519 | numerator = np.sum((observed - modeled) ** 2) 520 | denominator = np.sum((observed - mean_observed)**2) 521 | coefficient = 1 - (numerator/denominator) 522 | 523 | return coefficient 524 | 525 | 526 | def flow_duration(values): 527 | """Calculate the exceedance probabilities for a set of values for use in 528 | plotting a flow duration curve.
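A small worked example of the exceedance-probability calculation (the function below uses scipy's rankdata so ties are averaged; for distinct values the ascending ranks are simply 1..n, which is what this sketch assumes):

```python
import numpy as np

values = np.array([5.0, 1.0, 3.0])
values_sorted = np.sort(values)  # [1., 3., 5.]

# ascending ranks reversed, so the largest flow gets the
# smallest exceedance probability
ranks = np.arange(1, len(values) + 1)[::-1]  # [3, 2, 1]
probabilities = ranks / (len(values) + 1) * 100
# [75., 50., 25.]: the smallest flow is equaled or exceeded 75%
# of the time, the largest only 25%
```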
529 | 530 | :param values: Array of flow values 531 | :type values: numpy.ndarray 532 | 533 | :return: Tuple of probabilities, sorted values 534 | :rtype: tuple 535 | """ 536 | # Sort the values 537 | values_sorted = np.sort(values) 538 | 539 | # Rank data from smallest to largest 540 | ranks = stats.rankdata(values_sorted, method="average") 541 | 542 | # Reverse the order 543 | ranks = ranks[::-1] 544 | 545 | # Compute the exceedance probabilities 546 | probabilities = ( 547 | [(ranks[i] / (len(values) + 1)) * 100 for i in range(len(values))] 548 | ) 549 | 550 | return probabilities, values_sorted 551 | -------------------------------------------------------------------------------- /topmodelpy/main.py: -------------------------------------------------------------------------------- 1 | """Main module that runs topmodelpy. 2 | 3 | This module contains functionality that: 4 | - Read model configuration file 5 | - Read all input files 6 | - Preprocess input data 7 | - Calculate the timestep daily fraction 8 | - Calculate pet if not in timeseries 9 | - Calculate adjusted precipitation from snowmelt 10 | - Calculate the twi weighted mean 11 | - Run Topmodel 12 | - Postprocess results 13 | - Write output *.csv file of results 14 | - Plot output 15 | """ 16 | import pandas as pd 17 | from pathlib import PurePath 18 | from topmodelpy import (hydrocalcs, 19 | modelconfigfile, 20 | parametersfile, 21 | timeseriesfile, 22 | twifile, 23 | plots, 24 | report) 25 | from topmodelpy.topmodel import Topmodel 26 | 27 | 28 | def topmodelpy(configfile, options): 29 | """Read inputs, preprocess data, run Topmodel, postprocess 30 | results, write *.csv output files, and make plots.
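For reference, a minimal model configuration consistent with the sections and keys read elsewhere in this package (the file paths here are illustrative; see data/modelconfig.ini for the project's actual file):

```ini
[Inputs]
input_dir = data/inputs
parameters_file = ${input_dir}/parameters_wolock.csv
timeseries_file = ${input_dir}/timeseries_wolock.csv
twi_file = ${input_dir}/twi_wolock.csv

[Outputs]
output_dir = data/outputs
output_filename = output.csv
output_report = report.html

[Options]
option_pet = hamon
option_snowmelt = yes
option_write_output_matrices = no
```

The `${input_dir}` references work because the config reader uses configparser's ExtendedInterpolation.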
31 | 32 | :param configfile: The file path to the model config file that 33 | contains model specifications 34 | :type configfile: string 35 | :param options: The options sent from the cli 36 | :type options: Click.obj 37 | """ 38 | config_data = modelconfigfile.read(configfile) 39 | parameters, timeseries, twi = read_input_files(config_data) 40 | 41 | preprocessed_data = preprocess(config_data, parameters, timeseries, twi) 42 | topmodel_data = run_topmodel(parameters, twi, preprocessed_data) 43 | postprocess(config_data, timeseries, preprocessed_data, topmodel_data) 44 | 45 | 46 | def read_input_files(configdata): 47 | """Read input files from model configuration file. 48 | 49 | Returns a tuple of: 50 | dictionary from parameters file 51 | pandas.DataFrame from timeseries file 52 | pandas.DataFrame from twi file 53 | 54 | :param configdata: A ConfigParser object that behaves much like a dictionary. 55 | :type configdata: ConfigParser 56 | :return: Tuple of parameters dict, timeseries dataframe, twi dataframe 57 | :rtype: tuple 58 | """ 59 | parameters = parametersfile.read(configdata["Inputs"]["parameters_file"]) 60 | timeseries = timeseriesfile.read(configdata["Inputs"]["timeseries_file"]) 61 | twi = twifile.read(configdata["Inputs"]["twi_file"]) 62 | 63 | return parameters, timeseries, twi 64 | 65 | 66 | def preprocess(config_data, parameters, timeseries, twi): 67 | """Preprocess data for topmodel run. 68 | 69 | Calculate timestep daily fraction, usually 1 for daily timesteps 70 | - 1 day = 86400 seconds 71 | Calculate pet if pet is not in timeseries dataframe 72 | Calculate snowmelt and adjusted precipitation from snowmelt routine 73 | - Snowmelt routine requires temperatures in Fahrenheit. 74 | - The temperature cutoff from the parameters dict is in Fahrenheit. 75 | - snowprecip is the adjusted precipitation from snowmelt. 76 | - The snowmelt and snowpack variables are not used at this time. 77 | Calculate the difference between the adjusted precip and pet for Topmodel.
78 | Calculate the weighted twi mean for Topmodel. 79 | 80 | :param parameters: The parameters for the model. 81 | :type parameters: Dict 82 | :param timeseries: A dataframe of all the timeseries data. 83 | :type timeseries: Pandas.DataFrame 84 | :param twi: A dataframe of all the twi data. 85 | :type twi: Pandas.DataFrame 86 | :return preprocessed_data: A dict of the calculated variables from 87 | preprocessing. 88 | :rtype: dict 89 | """ 90 | # Calculate the daily timestep as a fraction 91 | timestep_daily_fraction = ( 92 | (timeseries.index[1] - timeseries.index[0]).total_seconds() / 86400.0 93 | ) 94 | 95 | # Get pet as a numpy array from the input timeseries if it exists, 96 | # otherwise calculate it. 97 | if "pet" in timeseries.columns: 98 | pet = timeseries["pet"].to_numpy() * timestep_daily_fraction 99 | else: 100 | pet = hydrocalcs.pet( 101 | dates=timeseries.index.to_pydatetime(), 102 | temperatures=timeseries["temperature"].to_numpy(), 103 | latitude=parameters["latitude"]["value"], 104 | method="hamon" 105 | ) 106 | pet = pet * timestep_daily_fraction 107 | 108 | # If snowmelt option is turned on, then compute snowmelt and the difference 109 | # between the adjusted precip with pet. 110 | # Otherwise, just compute the difference between the original precip with 111 | # pet. 
112 | snowprecip, snowmelt, snowpack = None, None, None 113 | if config_data["Options"].getboolean("option_snowmelt"): 114 | # Calculate the adjusted precipitation based on snowmelt 115 | # Note: snowmelt function needs temperatures in Fahrenheit 116 | snowprecip, snowmelt, snowpack = hydrocalcs.snowmelt( 117 | timeseries["precipitation"].to_numpy(), 118 | timeseries["temperature"].to_numpy() * (9/5) + 32, 119 | parameters["snowmelt_temperature_cutoff"]["value"], 120 | parameters["snowmelt_rate_coeff_with_rain"]["value"], 121 | parameters["snowmelt_rate_coeff"]["value"], 122 | timestep_daily_fraction 123 | ) 124 | 125 | # Calculate the difference between the adjusted precip (snowprecip) 126 | # and pet. 127 | precip_minus_pet = snowprecip - pet 128 | else: 129 | # Calculate the difference between the original precip and pet 130 | precip_minus_pet = timeseries["precipitation"].to_numpy() - pet 131 | 132 | # Calculate the twi weighted mean 133 | twi_weighted_mean = hydrocalcs.weighted_mean(values=twi["twi"], 134 | weights=twi["proportion"]) 135 | 136 | # Return a dict of calculated data 137 | preprocessed_data = { 138 | "timestep_daily_fraction": timestep_daily_fraction, 139 | "pet": pet, 140 | "precip_minus_pet": precip_minus_pet, 141 | "snowprecip": snowprecip, 142 | "snowmelt": snowmelt, 143 | "snowpack": snowpack, 144 | "twi_weighted_mean": twi_weighted_mean, 145 | } 146 | 147 | return preprocessed_data 148 | 149 | 150 | def run_topmodel(parameters, twi, preprocessed_data): 151 | """Run Topmodel. 152 | 153 | :param parameters: The parameters for the model. 154 | :type parameters: Dict 155 | :param twi: A dataframe of all the twi data. 156 | :type twi: Pandas.DataFrame 157 | :param preprocessed_data: A dict of the calculated variables from 158 | preprocessing. 
159 | :type preprocessed_data: dict 160 | :return topmodel_data: A dict of relevant data results from Topmodel 161 | :rtype: dict 162 | """ 163 | # Initialize Topmodel 164 | topmodel = Topmodel( 165 | scaling_parameter=parameters["scaling_parameter"]["value"], 166 | saturated_hydraulic_conductivity=( 167 | parameters["saturated_hydraulic_conductivity"]["value"] 168 | ), 169 | macropore_fraction=parameters["macropore_fraction"]["value"], 170 | soil_depth_total=parameters["soil_depth_total"]["value"], 171 | soil_depth_ab_horizon=parameters["soil_depth_ab_horizon"]["value"], 172 | field_capacity_fraction=parameters["field_capacity_fraction"]["value"], 173 | latitude=parameters["latitude"]["value"], 174 | basin_area_total=parameters["basin_area_total"]["value"], 175 | impervious_area_fraction=parameters["impervious_area_fraction"]["value"], 176 | flow_initial=parameters["flow_initial"]["value"], 177 | twi_values=twi["twi"].to_numpy(), 178 | twi_saturated_areas=twi["proportion"].to_numpy(), 179 | twi_mean=preprocessed_data["twi_weighted_mean"], 180 | precip_available=preprocessed_data["precip_minus_pet"], 181 | timestep_daily_fraction=preprocessed_data["timestep_daily_fraction"] 182 | ) 183 | 184 | # Run Topmodel 185 | topmodel.run() 186 | 187 | # Return a dict of relevant calculated values 188 | topmodel_data = { 189 | "flow_predicted": topmodel.flow_predicted, 190 | "saturation_deficit_avgs": topmodel.saturation_deficit_avgs, 191 | "saturation_deficit_locals": topmodel.saturation_deficit_locals, 192 | "unsaturated_zone_storages": topmodel.unsaturated_zone_storages, 193 | "root_zone_storages": topmodel.root_zone_storages, 194 | } 195 | 196 | return topmodel_data 197 | 198 | 199 | def postprocess(config_data, timeseries, preprocessed_data, topmodel_data): 200 | """Postprocess data for output.
201 | 202 | Output csv files 203 | Plot timeseries 204 | """ 205 | # Get output timeseries data 206 | output_df = get_output_dataframe(timeseries, 207 | preprocessed_data, 208 | topmodel_data) 209 | 210 | # Get output comparison stats 211 | output_comparison_data = get_comparison_data(output_df) 212 | 213 | # Write output data 214 | write_output_csv(df=output_df, 215 | filename=PurePath( 216 | config_data["Outputs"]["output_dir"], 217 | config_data["Outputs"]["output_filename"])) 218 | 219 | # Write output data matrices 220 | if config_data["Options"].getboolean("option_write_output_matrices"): 221 | write_output_matrices_csv(config_data, timeseries, topmodel_data) 222 | 223 | # Plot output data 224 | plot_output_data(df=output_df, 225 | comparison_data=output_comparison_data, 226 | path=config_data["Outputs"]["output_dir"]) 227 | 228 | # Write report of output data 229 | write_output_report(df=output_df, 230 | comparison_data=output_comparison_data, 231 | filename=PurePath( 232 | config_data["Outputs"]["output_dir"], 233 | config_data["Outputs"]["output_report"])) 234 | 235 | 236 | def get_output_dataframe(timeseries, preprocessed_data, topmodel_data): 237 | """Get the output data of interest. 238 | 239 | Returns a Pandas Dataframe of all output data of interest. 240 | """ 241 | output_data = {} 242 | if preprocessed_data["snowprecip"] is not None: 243 | output_data["snowprecip"] = preprocessed_data["snowprecip"] 244 | 245 | if "pet" not in timeseries.columns: 246 | output_data["pet"] = preprocessed_data["pet"] 247 | 248 | output_data["precip_minus_pet"] = preprocessed_data["precip_minus_pet"] 249 | 250 | output_data["flow_predicted"] = topmodel_data["flow_predicted"] 251 | output_data["saturation_deficit_avgs"] = topmodel_data["saturation_deficit_avgs"] 252 | 253 | output_df = timeseries.assign(**output_data) 254 | 255 | return output_df 256 | 257 | 258 | def get_comparison_data(output_df): 259 | """Get comparison statistics.
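The Nash-Sutcliffe statistic computed here can be sketched in isolation:

```python
import numpy as np

observed = np.array([1.0, 2.0, 3.0, 4.0])
modeled = np.array([1.0, 2.0, 3.0, 5.0])

# E = 1 - sum((obs - mod)**2) / sum((obs - mean(obs))**2)
numerator = np.sum((observed - modeled) ** 2)              # 1.0
denominator = np.sum((observed - np.mean(observed)) ** 2)  # 5.0
nse = 1 - numerator / denominator                          # 0.8
# 1.0 is a perfect fit; values at or below 0 mean the model predicts
# no better than the mean of the observations
```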
260 | 261 | Return a dictionary of descriptive statistics and, if output data contains 262 | an observed flow, the Nash-Sutcliffe statistic. 263 | """ 264 | output_comparison_data = {} 265 | if "flow_observed" in output_df.columns: 266 | output_comparison_data["nash_sutcliffe"] = ( 267 | hydrocalcs.nash_sutcliffe( 268 | observed=output_df["flow_observed"].to_numpy(), 269 | modeled=output_df["flow_predicted"].to_numpy()) 270 | ) 271 | output_comparison_data["absolute_error"] = ( 272 | hydrocalcs.absolute_error( 273 | observed=output_df["flow_observed"].to_numpy(), 274 | modeled=output_df["flow_predicted"].to_numpy()) 275 | ) 276 | output_comparison_data["mean_squared_error"] = ( 277 | hydrocalcs.mean_squared_error( 278 | observed=output_df["flow_observed"].to_numpy(), 279 | modeled=output_df["flow_predicted"].to_numpy()) 280 | ) 281 | 282 | return output_comparison_data 283 | 284 | 285 | def write_output_csv(df, filename): 286 | """Write output timeseries to csv file. 287 | 288 | Rename columns to include units before saving the csv. 289 | """ 290 | df = df.rename(columns={ 291 | "temperature": "temperature (celsius)", 292 | "precipitation": "precipitation (mm/day)", 293 | "pet": "pet (mm/day)", 294 | "precip_minus_pet": "precip_minus_pet (mm/day)", 295 | "flow_observed": "flow_observed (mm/day)", 296 | "flow_predicted": "flow_predicted (mm/day)", 297 | "saturation_deficit_avgs": "saturation_deficit_avgs (mm/day)", 298 | "snowprecip": "snowprecip (mm/day)", 299 | }) 300 | df.to_csv(filename, 301 | float_format="%.2f") 302 | 303 | 304 | def write_output_matrices_csv(config_data, timeseries, topmodel_data): 305 | """Write output matrices. 306 | 307 | Matrices are of size: len(timeseries) x len(twi_bins) 308 | 309 | The following are the matrices saved.
310 | saturation_deficit_locals 311 | unsaturated_zone_storages 312 | root_zone_storages 313 | """ 314 | num_cols = topmodel_data["saturation_deficit_locals"].shape[1] 315 | header = ["bin_{}".format(i) for i in range(1, num_cols+1)] 316 | 317 | saturation_deficit_locals_df = ( 318 | pd.DataFrame(topmodel_data["saturation_deficit_locals"], 319 | index=timeseries.index) 320 | ) 321 | 322 | unsaturated_zone_storages_df = ( 323 | pd.DataFrame(topmodel_data["unsaturated_zone_storages"], 324 | index=timeseries.index) 325 | ) 326 | 327 | root_zone_storages_df = ( 328 | pd.DataFrame(topmodel_data["root_zone_storages"], 329 | index=timeseries.index) 330 | ) 331 | 332 | saturation_deficit_locals_df.to_csv( 333 | PurePath( 334 | config_data["Outputs"]["output_dir"], 335 | config_data["Outputs"]["output_filename_saturation_deficit_locals"] 336 | ), 337 | float_format="%.2f", 338 | header=header, 339 | ) 340 | 341 | unsaturated_zone_storages_df.to_csv( 342 | PurePath( 343 | config_data["Outputs"]["output_dir"], 344 | config_data["Outputs"]["output_filename_unsaturated_zone_storages"] 345 | ), 346 | float_format="%.2f", 347 | header=header, 348 | ) 349 | 350 | root_zone_storages_df.to_csv( 351 | PurePath( 352 | config_data["Outputs"]["output_dir"], 353 | config_data["Outputs"]["output_filename_root_zone_storages"] 354 | ), 355 | float_format="%.2f", 356 | header=header, 357 | ) 358 | 359 | 360 | def plot_output_data(df, comparison_data, path): 361 | """Plot output timeseries.""" 362 | for key, series in df.iteritems(): 363 | filename = PurePath(path, "{}.png".format(key.split(" ")[0])) 364 | plots.plot_timeseries( 365 | dates=df.index.to_pydatetime(), 366 | values=series.values, 367 | mean=series.mean(), 368 | median=series.median(), 369 | mode=series.mode()[0], 370 | max=series.max(), 371 | min=series.min(), 372 | label="{} (mm/day)".format(key), 373 | filename=filename) 374 | 375 | plots.plot_flow_duration_curve( 376 | values=df["flow_predicted"].to_numpy(), 377 | 
label="flow_predicted (mm/day)", 378 | filename=PurePath(path, "flow_duration_curve.png")) 379 | 380 | if "flow_observed" in df.columns: 381 | plots.plot_timeseries_comparison( 382 | dates=df.index.to_pydatetime(), 383 | observed=df["flow_observed"].to_numpy(), 384 | modeled=df["flow_predicted"].to_numpy(), 385 | absolute_error=comparison_data["absolute_error"], 386 | nash_sutcliffe=comparison_data["nash_sutcliffe"], 387 | mean_squared_error=comparison_data["mean_squared_error"], 388 | label="flow (mm/day)", 389 | filename=PurePath(path, "flow_observed_vs_flow_predicted.png")) 390 | 391 | plots.plot_flow_duration_curve_comparison( 392 | observed=df["flow_observed"].to_numpy(), 393 | modeled=df["flow_predicted"].to_numpy(), 394 | label="flow (mm/day)", 395 | filename=PurePath(path, "flow_duration_curved_observed_vs_predicted.png")) 396 | 397 | 398 | def write_output_report(df, comparison_data, filename): 399 | """Write an html web page with interactive plots.""" 400 | plots_html_data = {} 401 | for key, value in df.iteritems(): 402 | plots_html_data[key] = plots.plot_timeseries_html( 403 | dates=df.index.to_pydatetime(), 404 | values=value, 405 | label="{} (mm/day)".format(key)) 406 | 407 | flow_duration_curve_data = { 408 | "flow_duration_curve_html": plots.plot_flow_duration_curve_html( 409 | values=df["flow_predicted"].to_numpy(), 410 | label="flow_predicted (mm/day)") 411 | } 412 | 413 | if comparison_data: 414 | comparison_plot_html = plots.plot_timeseries_comparison_html( 415 | dates=df.index.to_pydatetime(), 416 | observed=df["flow_observed"].to_numpy(), 417 | modeled=df["flow_predicted"].to_numpy(), 418 | absolute_error=comparison_data["absolute_error"], 419 | label="flow (mm/day)") 420 | comparison_data.update({"comparison_plot_html": comparison_plot_html}) 421 | 422 | flow_duration_curve_comparison_html = ( 423 | plots.plot_flow_duration_curve_comparison_html( 424 | observed=df["flow_observed"].to_numpy(), 425 | modeled=df["flow_predicted"].to_numpy(), 426 | label="flow (mm/day)") 427 | ) 428 | flow_duration_curve_data.update( 429 | {"flow_duration_curve_comparison_html": flow_duration_curve_comparison_html} 430 | ) 431 | 432 | report.save(df=df, 433 | plots=plots_html_data, 434 | comparison_data=comparison_data, 435 | flow_duration_curve_data=flow_duration_curve_data, 436 | filename=filename) 437 | -------------------------------------------------------------------------------- /topmodelpy/modelconfigfile.py: -------------------------------------------------------------------------------- 1 | """Module for reading a model configuration file. 2 | 3 | The configuration file is in the standard INI file format. The file contains 4 | three sections: 5 | 1. Inputs - section for model input files 6 | 2. Outputs - section for model output files 7 | 3. Options - section for model options 8 | """ 9 | 10 | from configparser import ConfigParser, ExtendedInterpolation 11 | from pathlib import Path 12 | 13 | from .exceptions import (ModelConfigFileErrorInvalidSection, 14 | ModelConfigFileErrorInvalidFilePath, 15 | ModelConfigFileErrorInvalidOption) 16 | 17 | 18 | def read(filepath): 19 | """Read model config file. 20 | 21 | Read and process the model configuration file that is in the standard INI 22 | data format. 23 | 24 | :param filepath: A valid file path 25 | :type filepath: string 26 | :return: A ConfigParser object that behaves much like a dictionary.
27 | :rtype: ConfigParser 28 | """ 29 | filepath = Path(filepath) 30 | try: 31 | config = ConfigParser(interpolation=ExtendedInterpolation()) 32 | config.read(filepath) 33 | check_config(config) 34 | except ModelConfigFileErrorInvalidSection as err: 35 | print(err) 36 | except ModelConfigFileErrorInvalidFilePath as err: 37 | print(err) 38 | except ModelConfigFileErrorInvalidOption as err: 39 | print(err) 40 | except Exception as err: 41 | print(err) 42 | 43 | return config 44 | 45 | 46 | def check_config(config): 47 | """Check that the config file has valid sections, file paths, and options. 48 | 49 | :param config: A ConfigParser object that behaves much like a dictionary. 50 | :type config: ConfigParser 51 | """ 52 | check_config_sections(config) 53 | check_config_filepaths(config) 54 | check_config_options(config) 55 | 56 | 57 | def check_config_sections(config): 58 | """Check that the config file has the expected sections.""" 59 | valid_sections = ["Inputs", "Outputs", "Options"] 60 | sections = config.sections() 61 | if sections != valid_sections: 62 | raise ModelConfigFileErrorInvalidSection(sections, valid_sections) 63 | 64 | 65 | def check_config_filepaths(config): 66 | """Check that all the filepaths are valid.""" 67 | for section in config.sections(): 68 | for key in config[section]: 69 | value = config[section][key] 70 | if value and (key.endswith("dir") or key.endswith("file")): 71 | filepath = Path(value) 72 | if not filepath.exists(): 73 | raise ModelConfigFileErrorInvalidFilePath(value) 74 | 75 | 76 | def check_config_options(config): 77 | """Check that all the options are valid.""" 78 | valid_options = { 79 | "pet": ["hamon"], 80 | "snowmelt": ["yes", "no"], 81 | } 82 | 83 | options = { 84 | "pet": config["Options"]["option_pet"].lower().strip(), 85 | "snowmelt": ( 86 | config["Options"]["option_snowmelt"].lower().strip() 87 | ), 88 | } 89 | 90 | for key in valid_options: 91 | if options[key] not in
valid_options[key]: 92 | raise ModelConfigFileErrorInvalidOption(options, valid_options) 93 | -------------------------------------------------------------------------------- /topmodelpy/parametersfile.py: -------------------------------------------------------------------------------- 1 | """Module that contains functions to read a parameters file in csv format.""" 2 | 3 | import csv 4 | 5 | from .exceptions import (ParametersFileErrorInvalidHeader, 6 | ParametersFileErrorInvalidScalingParameter, 7 | ParametersFileErrorInvalidLatitude, 8 | ParametersFileErrorInvalidSoilDepthTotal, 9 | ParametersFileErrorInvalidSoilDepthAB, 10 | ParametersFileErrorInvalidFieldCapacity, 11 | ParametersFileErrorInvalidMacropore, 12 | ParametersFileErrorInvalidImperviousArea,) 13 | 14 | 15 | def read(filepath): 16 | """Read data file. 17 | Open file and create a file object to process with 18 | read_in(filestream). 19 | 20 | :param filepath: File path of data file. 21 | :type filepath: string 22 | :return data: A dict that contains all the data from the file. 23 | :rtype: dict 24 | """ 25 | try: 26 | with open(filepath) as f: 27 | data = read_in(f) 28 | check_data(data) 29 | return data 30 | except (ParametersFileErrorInvalidHeader, 31 | ParametersFileErrorInvalidScalingParameter, 32 | ParametersFileErrorInvalidLatitude, 33 | ParametersFileErrorInvalidSoilDepthTotal, 34 | ParametersFileErrorInvalidSoilDepthAB, 35 | ParametersFileErrorInvalidFieldCapacity, 36 | ParametersFileErrorInvalidMacropore, 37 | ParametersFileErrorInvalidImperviousArea,) as err: 38 | print(err) 39 | 40 | 41 | def read_in(filestream): 42 | """Read and process a filestream. 43 | Read and process a filestream of a comma-delimited parameter file. 44 | This function takes a filestream of text as input which allows for 45 | cleaner unit testing. 46 | 47 | :param filestream: A filestream of text. 48 | :type filestream: _io.TextIOWrapper 49 | :return data: A dict that contains all the data from the file.
50 | :rtype: dict 51 | """ 52 | fnames = ["name", "value", "units", "description"] 53 | reader = csv.DictReader(filestream, fieldnames=fnames) 54 | header = next(reader) 55 | header_list = [val.lower().strip() for val in header.values()] 56 | check_header(header_list, fnames) 57 | 58 | data = {} 59 | for row in reader: 60 | name = row["name"].lower().strip() 61 | data[name] = { 62 | "value": float(row["value"].strip()), 63 | "units": row["units"].lower().strip(), 64 | "description": row["description"].strip(), 65 | } 66 | return data 67 | 68 | 69 | def check_header(header, valid_header): 70 | """Check that column names in header line match what is expected. 71 | 72 | :param header: Header found in file. 73 | :type header: list 74 | :param valid_header: Valid header that is expected. 75 | :type valid_header: list 76 | """ 77 | if not header == valid_header: 78 | raise ParametersFileErrorInvalidHeader(header, valid_header) 79 | 80 | 81 | def check_data(data): 82 | """Check that all data values from the file are valid. 83 | 84 | :param data: A dict that contains all the data from the file. 85 | :type data: dict 86 | """ 87 | check_scaling_parameter(data["scaling_parameter"]["value"]) 88 | check_latitude(data["latitude"]["value"]) 89 | check_soil_depth_total(data["soil_depth_total"]["value"]) 90 | check_soil_depth_ab_horizon(data["soil_depth_ab_horizon"]["value"], 91 | data["soil_depth_total"]["value"]) 92 | check_field_capacity(data["field_capacity_fraction"]["value"]) 93 | check_macropore(data["macropore_fraction"]["value"]) 94 | check_impervious_area(data["impervious_area_fraction"]["value"]) 95 | 96 | 97 | def check_scaling_parameter(value): 98 | """Check that the scaling parameter value is valid. 99 | Valid scaling parameter value is: 100 | scaling_parameter > 0 101 | 102 | :param value: scaling parameter value. 
103 | :type value: float 104 | """ 105 | if not value > 0: 106 | raise ParametersFileErrorInvalidScalingParameter(value) 107 | 108 | 109 | def check_latitude(value): 110 | """Check that the latitude value is valid. 111 | Valid latitude values are: 112 | 90 >= latitude >= 0 113 | 114 | :param value: latitude value. 115 | :type value: float 116 | """ 117 | if not 0 <= value <= 90: 118 | raise ParametersFileErrorInvalidLatitude(value) 119 | 120 | 121 | def check_soil_depth_total(value): 122 | """Check that the soil_depth_total value is valid. 123 | Valid soil depth total values are: 124 | soil_depth_total > 0 125 | 126 | :param value: soil depth total value. 127 | :type value: float 128 | """ 129 | if not value > 0: 130 | raise ParametersFileErrorInvalidSoilDepthTotal(value) 131 | 132 | 133 | def check_soil_depth_ab_horizon(value, soil_depth_total): 134 | """Check that the soil_depth_ab_horizon value is valid. 135 | Valid soil depth of AB horizon values are: 136 | soil_depth_ab_horizon > 0 137 | soil_depth_ab_horizon < soil_depth_total 138 | 139 | :param value: soil depth ab horizon value. 140 | :type value: float 141 | """ 142 | if not 0 < value < soil_depth_total: 143 | raise ParametersFileErrorInvalidSoilDepthAB(value, soil_depth_total) 144 | 145 | 146 | def check_field_capacity(value): 147 | """Check that the field capacity fraction value is valid. 148 | Valid field capacity fraction values are: 149 | 0 <= field_capacity <= 1 150 | 151 | :param value: field capacity value. 152 | :type value: float 153 | """ 154 | if not 0 <= value <= 1: 155 | raise ParametersFileErrorInvalidFieldCapacity(value) 156 | 157 | 158 | def check_macropore(value): 159 | """Check that the macropore value is valid. 160 | Valid macropore fraction values are: 161 | 0 <= macropore <= 1 162 | 163 | :param value: macropore value.
164 | :type value: float 165 | """ 166 | if not 0 <= value <= 1: 167 | raise ParametersFileErrorInvalidMacropore(value) 168 | 169 | 170 | def check_impervious_area(value): 171 | """Check that the impervious_area value is valid. 172 | Valid impervious_area fraction values are: 173 | 0 <= impervious_area_fraction <= 1 174 | 175 | :param value: impervious area value. 176 | :type value: float 177 | """ 178 | if not 0 <= value <= 1: 179 | raise ParametersFileErrorInvalidImperviousArea(value) 180 | -------------------------------------------------------------------------------- /topmodelpy/plots.py: -------------------------------------------------------------------------------- 1 | """Module of functions for generating plots. 2 | 3 | """ 4 | 5 | import matplotlib.pyplot as plt 6 | import matplotlib.dates as mdates 7 | import mpld3 8 | from pandas.plotting import register_matplotlib_converters 9 | 10 | from topmodelpy import hydrocalcs 11 | 12 | 13 | # Register datetime converters for plotting pandas data with matplotlib 14 | register_matplotlib_converters() 15 | 16 | COLORS = { 17 | "temperature": "orange", 18 | "precipitation": "navy", 19 | "pet": "green", 20 | "precip_minus_pet": "darkgreen", 21 | "flow_observed": "darkblue", 22 | "flow_predicted": "blue", 23 | "saturation_deficit_avgs": "gray", 24 | } 25 | 26 | 27 | class MousePositionDatePlugin(mpld3.plugins.PluginBase): 28 | """Plugin for displaying mouse position with a datetime x axis.""" 29 | 30 | JAVASCRIPT = """ 31 | mpld3.register_plugin("mousepositiondate", MousePositionDatePlugin); 32 | MousePositionDatePlugin.prototype = Object.create(mpld3.Plugin.prototype); 33 | MousePositionDatePlugin.prototype.constructor = MousePositionDatePlugin; 34 | MousePositionDatePlugin.prototype.requiredProps = []; 35 | MousePositionDatePlugin.prototype.defaultProps = { 36 | fontsize: 12, 37 | xfmt: "%Y-%m-%d %H:%M:%S", 38 | yfmt: ".2f" 39 | }; 40 | function MousePositionDatePlugin(fig, props) { 41 | mpld3.Plugin.call(this, fig, props); 42 | } 43 |
MousePositionDatePlugin.prototype.draw = function() { 44 | var fig = this.fig; 45 | var xfmt = d3.time.format(this.props.xfmt); 46 | var yfmt = d3.format(this.props.yfmt); 47 | var coords = fig.canvas.append("text").attr("class", "mpld3-coordinates").style("text-anchor", "end").style("font-size", this.props.fontsize).attr("x", this.fig.width - 5).attr("y", this.fig.height - 5); 48 | for (var i = 0; i < this.fig.axes.length; i++) { 49 | var update_coords = function() { 50 | var ax = fig.axes[i]; 51 | return function() { 52 | var pos = d3.mouse(this); 53 | x = ax.xdom.invert(pos[0]); 54 | y = ax.ydom.invert(pos[1]); 55 | coords.text("(" + xfmt(x) + ", " + yfmt(y) + ")"); 56 | }; 57 | }(); 58 | fig.axes[i].baseaxes.on("mousemove", update_coords).on("mouseout", function() { 59 | coords.text(""); 60 | }); 61 | } 62 | }; 63 | """ 64 | 65 | def __init__(self, fontsize=14, xfmt="%Y-%m-%d %H:%M:%S", yfmt=".2f"): 66 | self.dict_ = { 67 | "type": "mousepositiondate", 68 | "fontsize": fontsize, 69 | "xfmt": xfmt, 70 | "yfmt": yfmt 71 | } 72 | 73 | 74 | def plot_timeseries_html(dates, values, label): 75 | """Return an html string of the figure""" 76 | 77 | fig, ax = plt.subplots(subplot_kw=dict(facecolor="#EEEEEE")) 78 | fig.set_size_inches(10, 6) 79 | 80 | colorstr = "k" 81 | for key, value in COLORS.items(): 82 | if key in label: 83 | colorstr = value 84 | 85 | label = label.replace("_", " ").capitalize() 86 | 87 | ax.grid(color="white", linestyle="solid") 88 | ax.set_title("{}".format(label), fontsize=20) 89 | 90 | ax.plot(dates, values, color=colorstr, linewidth=2) 91 | 92 | # Connect plugin 93 | mpld3.plugins.connect(fig, MousePositionDatePlugin()) 94 | 95 | return mpld3.fig_to_html(fig) 96 | 97 | 98 | def plot_timeseries_comparison_html(dates, observed, modeled, absolute_error, label): 99 | """Return an html string of the figure""" 100 | 101 | fig, axes = plt.subplots(2, 1, sharex=True, subplot_kw=dict(facecolor="#EEEEEE")) 102 | fig.set_size_inches(10, 8) 103 | 104 | # 
Connect plugin 105 | mpld3.plugins.connect(fig, MousePositionDatePlugin()) 106 | 107 | # Plot comparison on first row 108 | axes[0].grid(True) 109 | axes[0].set_title("Observed flow vs. Modeled flow") 110 | axes[0].set_xlabel("Date") 111 | axes[0].set_ylabel(label) 112 | 113 | # Explicitly using matplotlibs new default color palette (blue and orange) 114 | axes[0].plot(dates, observed, linewidth=2, color="#1f77b4", 115 | label="Observed") 116 | axes[0].plot(dates, modeled, linewidth=2, color="#ff7f0e", 117 | label="Modeled") 118 | 119 | # Legend 120 | handles, labels = axes[0].get_legend_handles_labels() 121 | legend = axes[0].legend(handles, labels, fancybox=True) 122 | legend.get_frame().set_alpha(0.5) 123 | 124 | axes[1].grid(True) 125 | axes[1].set_title("Absolute Error: Observed - Modeled") 126 | axes[1].set_xlabel("Date") 127 | axes[1].set_ylabel("Error (mm/day)") 128 | 129 | axes[1].plot(dates, absolute_error, linewidth=2, color="black") 130 | 131 | # Rotate and align the tick labels so they look better 132 | fig.autofmt_xdate() 133 | axes[1].fmt_xdata = mdates.DateFormatter("%Y-%m-%d") 134 | 135 | return mpld3.fig_to_html(fig) 136 | 137 | 138 | def plot_flow_duration_curve_html(values, label): 139 | """Return an html string of the figure""" 140 | 141 | fig, ax = plt.subplots(subplot_kw=dict(facecolor="#EEEEEE")) 142 | fig.set_size_inches(10, 6) 143 | 144 | label = label.replace("_", " ").capitalize() 145 | 146 | ax.grid(color="white", linestyle="solid") 147 | ax.set_title("Flow Duration Curve: Observed vs. 
Modeled", fontsize=20) 148 | ax.set_xlabel("Exceedance Probability (%)") 149 | ax.set_ylabel(label) 150 | ax.set_yscale("log") 151 | 152 | probabilities, values_sorted = hydrocalcs.flow_duration(values) 153 | 154 | ax.plot(probabilities, values_sorted, linewidth=2) 155 | 156 | # Connect plugin 157 | mpld3.plugins.connect(fig, MousePositionDatePlugin()) 158 | 159 | return mpld3.fig_to_html(fig) 160 | 161 | 162 | def plot_flow_duration_curve_comparison_html(observed, modeled, label): 163 | """Plot flow duration curve.""" 164 | 165 | fig, ax = plt.subplots(subplot_kw=dict(facecolor="#EEEEEE")) 166 | fig.set_size_inches(10, 6) 167 | 168 | label = label.replace("_", " ").capitalize() 169 | 170 | ax.grid(color="white", linestyle="solid") 171 | ax.set_title("Flow Duration Curve: Observed vs. Modeled", fontsize=20) 172 | ax.set_xlabel("Exceedance Probability (%)") 173 | ax.set_ylabel(label) 174 | ax.set_yscale("log") 175 | 176 | observed_prob, observed_sorted = hydrocalcs.flow_duration(observed) 177 | modeled_prob, modeled_sorted = hydrocalcs.flow_duration(modeled) 178 | 179 | # Explicitly using matplotlibs new default color palette (blue and orange) 180 | ax.plot(observed_prob, observed_sorted, linewidth=2, color="#1f77b4", 181 | label="Observed") 182 | ax.plot(modeled_prob, modeled_sorted, linewidth=2, color="#ff7f0e", 183 | label="Modeled") 184 | 185 | # Legend 186 | handles, labels = ax.get_legend_handles_labels() 187 | legend = ax.legend(handles, labels, fancybox=True) 188 | legend.get_frame().set_alpha(0.5) 189 | 190 | # Connect plugin 191 | mpld3.plugins.connect(fig, MousePositionDatePlugin()) 192 | 193 | return mpld3.fig_to_html(fig) 194 | 195 | 196 | def plot_timeseries(dates, 197 | values, 198 | mean, 199 | median, 200 | mode, 201 | max, 202 | min, 203 | label, 204 | filename): 205 | """Plot timeseries.""" 206 | fig, ax = plt.subplots() 207 | fig.set_size_inches(12, 10) 208 | 209 | colorstr = "k" 210 | for key, value in COLORS.items(): 211 | if key in label: 212 
| colorstr = value 213 | 214 | label = label.replace("_", " ").capitalize() 215 | 216 | ax.grid() 217 | ax.set_title(label) 218 | ax.set_xlabel("Date") 219 | ax.set_ylabel(label) 220 | 221 | ax.plot(dates, values, linewidth=2, color=colorstr) 222 | 223 | # Rotate and align the tick labels so they look better 224 | fig.autofmt_xdate() 225 | ax.fmt_xdata = mdates.DateFormatter("%Y-%m-%d") 226 | 227 | # Add text of descriptive stats to figure 228 | text = ( 229 | "Mean = {0:.2f}\n" 230 | "Median = {1:.2f}\n" 231 | "Mode = {2:.2f}\n" 232 | "Max = {3:.2f}\n" 233 | "Min = {4:.2f}" 234 | "".format(mean, median, mode, max, min) 235 | ) 236 | 237 | patch_properties = { 238 | "boxstyle": "round", 239 | "facecolor": "white", 240 | "alpha": 0.5 241 | } 242 | 243 | ax.text(0.05, 244 | 0.95, 245 | text, 246 | transform=ax.transAxes, 247 | fontsize=14, 248 | verticalalignment="top", 249 | horizontalalignment="left", 250 | bbox=patch_properties) 251 | 252 | plt.savefig(filename, format="png") 253 | plt.close() 254 | 255 | 256 | def plot_timeseries_comparison(dates, 257 | observed, 258 | modeled, 259 | absolute_error, 260 | nash_sutcliffe, 261 | mean_squared_error, 262 | label, 263 | filename): 264 | """Plot difference between timeseries.""" 265 | 266 | fig, axes = plt.subplots(2, 1, sharex=True) 267 | fig.set_size_inches(12, 10) 268 | 269 | label = label.replace("_", " ").capitalize() 270 | 271 | # Plot comparison on first row 272 | axes[0].grid(True) 273 | axes[0].set_title("Observed flow vs. 
Modeled flow") 274 | axes[0].set_xlabel("Date") 275 | axes[0].set_ylabel(label) 276 | 277 | # Explicitly using matplotlibs new default color palette (blue and orange) 278 | axes[0].plot(dates, observed, linewidth=2, color="#1f77b4", 279 | label="Observed") 280 | axes[0].plot(dates, modeled, linewidth=2, color="#ff7f0e", 281 | label="Modeled") 282 | 283 | # Legend 284 | handles, labels = axes[0].get_legend_handles_labels() 285 | legend = axes[0].legend(handles, labels, fancybox=True) 286 | legend.get_frame().set_alpha(0.5) 287 | 288 | # Add text of stats to figure 289 | text_nash_sutcliffe = ( 290 | "Nash-Sutcliffe = {:.2f}" 291 | "".format(nash_sutcliffe) 292 | ) 293 | 294 | patch_properties = { 295 | "boxstyle": "round", 296 | "facecolor": "white", 297 | "alpha": 0.5 298 | } 299 | 300 | axes[0].text(0.05, 301 | 0.95, 302 | text_nash_sutcliffe, 303 | transform=axes[0].transAxes, 304 | fontsize=14, 305 | verticalalignment="top", 306 | horizontalalignment="left", 307 | bbox=patch_properties) 308 | 309 | # Plot absolute error on second row 310 | axes[1].grid(True) 311 | axes[1].set_title("Absolute Error: Observed - Modeled") 312 | axes[1].set_xlabel("Date") 313 | axes[1].set_ylabel("Error (mm/day)") 314 | 315 | axes[1].plot(dates, absolute_error, linewidth=2, color="black") 316 | 317 | # Rotate and align the tick labels so they look better 318 | fig.autofmt_xdate() 319 | axes[1].fmt_xdata = mdates.DateFormatter("%Y-%m-%d") 320 | 321 | # Add text of stats to figure 322 | text_mse = ( 323 | "Mean Squared Error = {:.2f}" 324 | "".format(mean_squared_error) 325 | ) 326 | 327 | axes[1].text(0.05, 328 | 0.95, 329 | text_mse, 330 | transform=axes[1].transAxes, 331 | fontsize=14, 332 | verticalalignment="top", 333 | horizontalalignment="left", 334 | bbox=patch_properties) 335 | 336 | plt.savefig(filename, format="png") 337 | plt.close() 338 | 339 | 340 | def plot_flow_duration_curve(values, label, filename): 341 | """Plot flow duration curve.""" 342 | fig, ax = plt.subplots() 
343 | fig.set_size_inches(12, 10) 344 | 345 | label = label.replace("_", " ").capitalize() 346 | 347 | ax.grid() 348 | ax.set_title("Flow Duration Curve") 349 | ax.set_xlabel("Exceedance Probability (%)") 350 | ax.set_ylabel(label) 351 | ax.set_yscale("log") 352 | 353 | probabilities, values_sorted = hydrocalcs.flow_duration(values) 354 | 355 | ax.plot(probabilities, values_sorted, linewidth=2) 356 | 357 | plt.savefig(filename, format="png") 358 | plt.close() 359 | 360 | 361 | def plot_flow_duration_curve_comparison(observed, modeled, label, filename): 362 | """Plot flow duration curve.""" 363 | fig, ax = plt.subplots() 364 | fig.set_size_inches(12, 10) 365 | 366 | label = label.replace("_", " ").capitalize() 367 | 368 | ax.grid() 369 | ax.set_title("Flow Duration Curve: Observed vs. Modeled") 370 | ax.set_xlabel("Exceedance Probability (%)") 371 | ax.set_ylabel(label) 372 | ax.set_yscale("log") 373 | 374 | observed_prob, observed_sorted = hydrocalcs.flow_duration(observed) 375 | modeled_prob, modeled_sorted = hydrocalcs.flow_duration(modeled) 376 | 377 | # Explicitly using matplotlibs new default color palette (blue and orange) 378 | ax.plot(observed_prob, observed_sorted, linewidth=2, color="#1f77b4", 379 | label="Observed") 380 | ax.plot(modeled_prob, modeled_sorted, linewidth=2, color="#ff7f0e", 381 | label="Modeled") 382 | 383 | # Legend 384 | handles, labels = ax.get_legend_handles_labels() 385 | legend = ax.legend(handles, labels, fancybox=True) 386 | legend.get_frame().set_alpha(0.5) 387 | 388 | plt.savefig(filename, format="png") 389 | plt.close() 390 | -------------------------------------------------------------------------------- /topmodelpy/report.py: -------------------------------------------------------------------------------- 1 | """Module that contains functions to render an html file. 
2 | 3 | """ 4 | 5 | from jinja2 import PackageLoader, Environment 6 | 7 | 8 | def render_report(df, plots, comparison_data, flow_duration_curve_data): 9 | """Render an html page of the model output data. 10 | 11 | :param df: A dataframe of the model output data 12 | :type df: pandas.DataFrame 13 | :param plots: Html strings of interactive plots, keyed by variable name 14 | :type plots: dictionary 15 | """ 16 | loader = PackageLoader("topmodelpy", "templates") 17 | env = Environment(loader=loader) 18 | template = env.get_template("report_template.html") 19 | 20 | return template.render(df=df, 21 | plots=plots, 22 | comparison_data=comparison_data, 23 | flow_duration_curve_data=flow_duration_curve_data) 24 | 25 | 26 | def save(df, plots, comparison_data, flow_duration_curve_data, filename): 27 | """Save the rendered report as an html file. 28 | 29 | :param df: A dataframe of the model output data 30 | :type df: pandas.DataFrame 31 | :param filename: The full path with filename of the output file to write 32 | :type filename: string 33 | """ 34 | with open(filename, "w") as f: 35 | f.write( 36 | render_report(df, plots, comparison_data, flow_duration_curve_data) 37 | ) 38 | -------------------------------------------------------------------------------- /topmodelpy/templates/report_template.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | topmodelpy output 12 | 13 | 14 |
15 |
16 |
17 |

Outputs

18 |
    19 | {% for column in df.columns %} 20 |
  1. {{ column }}
  2. 21 | {% endfor %} 22 |
23 |
24 |
25 |
26 |
27 | {% if comparison_data %} 28 |

Comparison Plot

29 |
{{ comparison_data["comparison_plot_html"] }}
30 |
Descriptive Statistics
31 | 32 | 33 | 34 | 35 | 36 | 37 | 38 | 39 | 40 | 41 | 42 | 43 | 44 | 45 | 46 | 47 | 48 |
StatisticValue
Nash-Sutcliffe{{ "{0:0.2f}".format(comparison_data["nash_sutcliffe"]) }}
Mean Squared Error{{ "{0:0.2f}".format(comparison_data["mean_squared_error"]) }}
49 |

Flow Duration Curve Comparison

50 |
{{ flow_duration_curve_data["flow_duration_curve_comparison_html"] }}
51 | {% endif %} 52 |
53 |
54 |
55 |
56 |

Interactive Plots

57 | {% for key, value in plots.items() %} 58 |
{{ value }}
59 |
Descriptive Statistics
60 | 61 | 62 | 63 | 64 | 65 | 66 | 67 | 68 | 69 | 70 | 71 | 72 | 73 | 74 | 75 | 76 | 77 | 78 | 79 | 80 | 81 | 82 | 83 | 84 | 85 | 86 | 87 | 88 | 89 |
StatisticValue
Mean{{ "{0:0.2f}".format(df[key].mean()) }}
Median{{ "{0:0.2f}".format(df[key].median()) }}
Mode{{ "{0:0.2f}".format(df[key].mode().values[0]) }}
Max{{ "{0:0.2f}".format(df[key].max()) }}
Min{{ "{0:0.2f}".format(df[key].min()) }}
90 | {% endfor %} 91 |
92 |
93 |
94 |
95 |

Flow Duration Curve

96 |
{{ flow_duration_curve_data["flow_duration_curve_html"] }}
97 |
98 |
99 |
100 | 101 | 102 | 103 | 104 | 105 | 106 | 107 | 108 | -------------------------------------------------------------------------------- /topmodelpy/timeseriesfile.py: -------------------------------------------------------------------------------- 1 | """Module that contains functions to read a timeseries file in csv format.""" 2 | 3 | import numpy as np 4 | import pandas as pd 5 | 6 | from .exceptions import (TimeseriesFileErrorInvalidHeader, 7 | TimeseriesFileErrorMissingDates, 8 | TimeseriesFileErrorMissingValues, 9 | TimeseriesFileErrorInvalidTimestep) 10 | 11 | 12 | def read(filepath): 13 | """Read data file. 14 | Open file and create a file object to process with 15 | read_in(filestream). 16 | 17 | :param filepath: File path to data file. 18 | :type filepath: string 19 | :return data: A dataframe of all the timeseries data. 20 | :rtype: pandas.DataFrame 21 | """ 22 | try: 23 | with open(filepath, "r") as f: 24 | data = read_in(f) 25 | return data 26 | except TimeseriesFileErrorInvalidHeader as err: 27 | print(err) 28 | except TimeseriesFileErrorMissingDates as err: 29 | print(err) 30 | except TimeseriesFileErrorMissingValues as err: 31 | print(err) 32 | except Exception as err: 33 | print(err) 34 | 35 | 36 | def read_in(filestream): 37 | """Read and process a filestream. 38 | Read and process a filestream of a comma-delimited timeseries file. 39 | This function takes a filestream of text as input which allows for 40 | cleaner unit testing. 41 | 42 | :param filestream: A filestream of text. 43 | :type filestream: _io.TextIOWrapper 44 | :return data: A dataframe of all the timeseries data. 45 | :rtype: pandas.DataFrame 46 | """ 47 | column_short_names = { 48 | "temperature (celsius)": "temperature", 49 | "precipitation (mm/day)": "precipitation", 50 | "pet (mm/day)": "pet", 51 | "flow_observed (mm/day)": "flow_observed", 52 | } 53 | 54 | data = pd.read_csv(filestream, index_col=0, parse_dates=True, dtype=float) 55 | data.columns = data.columns.str.strip() 56 | check_header(data.columns.values.tolist(), list(column_short_names)) 57 | check_missing_dates(data) 58 | check_missing_values(data) 59 | check_timestep(data) 60 | data.rename(columns=column_short_names, inplace=True) 61 | 62 | return data 63 | 64 | 65 | def check_header(header, valid_header): 66 | """Check that column names in header line match what is expected. 67 | 68 | :param header: Header found in file. 69 | :type header: list 70 | :param valid_header: Valid header that is expected. 71 | :type valid_header: list 72 | """ 73 | for item in header: 74 | if item not in valid_header: 75 | raise TimeseriesFileErrorInvalidHeader(header, valid_header) 76 | 77 | 78 | def check_missing_dates(data): 79 | """Check for any missing dates.""" 80 | if data.index.isna().any(): 81 | missing_indices = np.where(data.index.isna())[0] 82 | timestamps_near_missing = data.index[missing_indices - 1] 83 | raise TimeseriesFileErrorMissingDates(timestamps_near_missing.values) 84 | 85 | 86 | def check_missing_values(data): 87 | """Check for any missing data values.""" 88 | if data.isna().values.any(): 89 | missing_values = data[data.isna().any(axis=1)] 90 | raise TimeseriesFileErrorMissingValues(missing_values) 91 | 92 | 93 | def check_timestep(data): 94 | """Check that the timestep is 1 day or less.""" 95 | timestep = (data.index[1] - data.index[0]).days 96 | if timestep > 1: 97 | raise TimeseriesFileErrorInvalidTimestep(timestep) 98 | -------------------------------------------------------------------------------- /topmodelpy/twifile.py: -------------------------------------------------------------------------------- 1 | """Module that contains functions to read a twi file in csv format.""" 2 | 3 | import numpy as np 4 | import pandas as pd 5 | 6 | from .exceptions import (TwiFileErrorInvalidHeader, 7 | TwiFileErrorMissingValues, 8 | TwiFileErrorInvalidProportion) 9 | 10 | 11 | def read(filepath): 12 | """Read data file. 13 | Open file and create a file object to process with 14 | read_in(filestream). 15 | 16 | :param filepath: File path to data file. 17 | :type filepath: string 18 | :return data: A dataframe of all the twi data. 19 | :rtype: pandas.DataFrame 20 | """ 21 | try: 22 | with open(filepath, "r") as f: 23 | data = read_in(f) 24 | return data 25 | except TwiFileErrorInvalidHeader as err: 26 | print(err) 27 | except TwiFileErrorMissingValues as err: 28 | print(err) 29 | except TwiFileErrorInvalidProportion as err: 30 | print(err) 31 | except Exception as err: 32 | print(err) 33 | 34 | 35 | def read_in(filestream): 36 | """Read and process a filestream. 37 | Read and process a filestream of a comma-delimited twi file. 38 | This function takes a filestream of text as input which allows for 39 | cleaner unit testing. 40 | 41 | :param filestream: A filestream of text. 42 | :type filestream: _io.TextIOWrapper 43 | :return data: A dataframe of all the twi data. 44 | :rtype: pandas.DataFrame 45 | """ 46 | column_names = { 47 | "bin": "bin", 48 | "twi": "twi", 49 | "proportion": "proportion", 50 | "cells": "cells", 51 | } 52 | 53 | data = pd.read_csv(filestream, dtype=float) 54 | data.columns = data.columns.str.strip() 55 | check_header(data.columns.values.tolist(), list(column_names)) 56 | check_missing_values(data) 57 | check_proportion(data) 58 | data.rename(columns=column_names, inplace=True) 59 | 60 | return data 61 | 62 | 63 | def check_header(header, valid_header): 64 | """Check that column names in header line match what is expected. 65 | 66 | :param header: Header found in file.
67 | :type header: list 68 | :param valid_header: Valid header that is expected. 69 | :type valid_header: list 70 | """ 71 | for item in header: 72 | if item not in valid_header: 73 | raise TwiFileErrorInvalidHeader(header, valid_header) 74 | 75 | 76 | def check_missing_values(data): 77 | """Check if any data is missing. 78 | 79 | :param data: Pandas DataFrame containing data. 80 | :type data: pandas.DataFrame 81 | """ 82 | if data.isnull().values.any(): 83 | missing_values = data[data.isna().any(axis=1)] 84 | raise TwiFileErrorMissingValues(missing_values) 85 | 86 | 87 | def check_proportion(data): 88 | """Check that the sum of proportion column is close to 1.0. 89 | rtol=1e-02 means that computed sum should be within 1% of 1.0 90 | 91 | :param data: Pandas DataFrame containing data. 92 | :type data: pandas.DataFrame 93 | """ 94 | if not np.isclose(data["proportion"].sum(), 1.0, rtol=1e-02): 95 | raise TwiFileErrorInvalidProportion(data["proportion"].sum()) 96 | -------------------------------------------------------------------------------- /topmodelpy/utils.py: -------------------------------------------------------------------------------- 1 | """Utility functions 2 | 3 | """ 4 | 5 | import numpy as np 6 | 7 | 8 | def nans(shape, dtype=float): 9 | """Return an array filled with nan values. 10 | 11 | :param shape: A tuple for the shape of the array 12 | :type shape: tuple 13 | :returns: numpy.ndarray 14 | """ 15 | array = np.empty(shape, dtype) 16 | array[:] = np.nan 17 | 18 | return array 19 | --------------------------------------------------------------------------------
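The `nans` helper in utils.py exists to preallocate result arrays that a model run then fills in timestep by timestep; any slot that is never written stays `nan`, which makes gaps easy to spot. A minimal sketch of that usage (the `flow` array and the fill loop are illustrative, not taken from the source):

```python
import numpy as np


def nans(shape, dtype=float):
    """Return an array of the given shape filled with nan values."""
    array = np.empty(shape, dtype)
    array[:] = np.nan
    return array


# Preallocate a 5-step result array, then fill only the first 3 steps;
# the unfilled steps remain nan.
flow = nans((5,))
for i in range(3):
    flow[i] = i * 2.0

print(int(np.isnan(flow).sum()))  # 2 steps were never filled
```

Preallocating with `nan` rather than zeros distinguishes "not yet computed" from a legitimate value of 0.0, which matters for quantities like flow that can validly be zero.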