├── .gitattributes ├── .gitignore ├── .travis.yml ├── CODE_OF_CONDUCT.md ├── CONTRIBUTING.md ├── Examples ├── AirPassengers.csv ├── Example_pyMannKendall.ipynb ├── daily-total-female-births.csv └── shampoo.csv ├── LICENSE.txt ├── MANIFEST.in ├── Paper ├── Hussain et al [2019] - pyMannKendall a python package for non parametric Mann Kendall family of trend tests.pdf ├── paper.bib └── paper.md ├── README.md ├── pymannkendall ├── __init__.py ├── _version.py └── pymannkendall.py ├── requirements.txt ├── setup.cfg ├── setup.py ├── tests ├── __init__.py └── test_pymannkendall.py └── versioneer.py /.gitattributes: -------------------------------------------------------------------------------- 1 | pymannkendall/_version.py export-subst 2 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | # Byte-compiled / optimized / DLL files 2 | __pycache__/ 3 | *.py[cod] 4 | *$py.class 5 | 6 | # C extensions 7 | *.so 8 | 9 | # Distribution / packaging 10 | .Python 11 | Dev/ 12 | build/ 13 | develop-eggs/ 14 | dist/ 15 | downloads/ 16 | eggs/ 17 | .eggs/ 18 | lib/ 19 | lib64/ 20 | parts/ 21 | sdist/ 22 | var/ 23 | wheels/ 24 | *.egg-info/ 25 | .installed.cfg 26 | *.egg 27 | MANIFEST 28 | 29 | # PyInstaller 30 | # Usually these files are written by a python script from a template 31 | # before PyInstaller builds the exe, so as to inject date/other infos into it. 32 | *.manifest 33 | *.spec 34 | 35 | # Installer logs 36 | pip-log.txt 37 | pip-delete-this-directory.txt 38 | 39 | # Unit test / coverage reports 40 | htmlcov/ 41 | .tox/ 42 | .coverage 43 | .coverage.* 44 | .cache 45 | nosetests.xml 46 | coverage.xml 47 | *.cover 48 | .hypothesis/ 49 | .pytest_cache/ 50 | 51 | # Translations 52 | *.mo 53 | *.pot 54 | 55 | # Django stuff: 56 | *.log 57 | local_settings.py 58 | db.sqlite3 59 | 60 | # Flask stuff: 61 | instance/ 62 | .webassets-cache 63 | 64 | # Scrapy stuff: 65 | .scrapy 66 | 67 | # Sphinx documentation 68 | docs/_build/ 69 | 70 | # PyBuilder 71 | target/ 72 | 73 | # Jupyter Notebook 74 | .ipynb_checkpoints 75 | # *.ipynb 76 | 77 | # pyenv 78 | .python-version 79 | 80 | # celery beat schedule file 81 | celerybeat-schedule 82 | 83 | # SageMath parsed files 84 | *.sage.py 85 | 86 | # Environments 87 | .env 88 | .venv 89 | env/ 90 | venv/ 91 | ENV/ 92 | env.bak/ 93 | venv.bak/ 94 | 95 | # Spyder project settings 96 | .spyderproject 97 | .spyproject 98 | 99 | # Rope project settings 100 | .ropeproject 101 | 102 | # mkdocs documentation 103 | /site 104 | 105 | # mypy 106 | .mypy_cache/ 107 | -------------------------------------------------------------------------------- /.travis.yml: -------------------------------------------------------------------------------- 1 | language: python 2 | cache: pip 3 | python: 4 | - "2.7" 5 | - "3.4" 6 | - "3.5" 7 | - "3.6" 8 | - "3.7" 9 | - "3.8" 10 | - "3.9" 11 | install: pip install -r requirements.txt 12 | script: pytest -v -------------------------------------------------------------------------------- /CODE_OF_CONDUCT.md: -------------------------------------------------------------------------------- 1 | # Contributor Covenant Code of Conduct 2 | 3 | ## Our Pledge 4 | 5 | In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation. 6 | 7 | ## Our Standards 8 | 9 | Examples of behavior that contributes to creating a positive environment include: 10 | 11 | * Using welcoming and inclusive language 12 | * Being respectful of differing viewpoints and experiences 13 | * Gracefully accepting constructive criticism 14 | * Focusing on what is best for the community 15 | * Showing empathy towards other community members 16 | 17 | Examples of unacceptable behavior by participants include: 18 | 19 | * The use of sexualized language or imagery and unwelcome sexual attention or advances 20 | * Trolling, insulting/derogatory comments, and personal or political attacks 21 | * Public or private harassment 22 | * Publishing others' private information, such as a physical or electronic address, without explicit permission 23 | * Other conduct which could reasonably be considered inappropriate in a professional setting 24 | 25 | ## Our Responsibilities 26 | 27 | Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior. 28 | 29 | Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful. 30 | 31 | ## Scope 32 | 33 | This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers. 34 | 35 | ## Enforcement 36 | 37 | Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at mmhs013@gmail.com. The project team will review and investigate all complaints, and will respond in a way that it deems appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately. 38 | 39 | Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership. 40 | 41 | ## Attribution 42 | 43 | This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4, available at [http://contributor-covenant.org/version/1/4][version] 44 | 45 | [homepage]: http://contributor-covenant.org 46 | [version]: http://contributor-covenant.org/version/1/4/ -------------------------------------------------------------------------------- /CONTRIBUTING.md: -------------------------------------------------------------------------------- 1 | # Contributing to pyMannKendall 2 | 3 | First of all, thanks for considering contributing to `pyMannKendall`! 👍 It's people like you that make it rewarding for us to work on `pyMannKendall`. 4 | 5 | `pyMannKendall` is an open source project, maintained by publicly funded academic researchers and released under the [MIT](https://github.com/mmhs013/pyMannKendall/blob/master/LICENSE.txt) licence. 6 | 7 | [repo]: https://github.com/mmhs013/pyMannKendall 8 | [issues]: https://github.com/mmhs013/pyMannKendall/issues 9 | [new_issue]: https://github.com/mmhs013/pyMannKendall/issues/new 10 | [code_of_conduct]: https://github.com/mmhs013/pyMannKendall/blob/master/CODE_OF_CONDUCT.md 11 | 12 | [citation]: https://doi.org/10.21105/joss.01556 13 | [demo_notebook]: https://github.com/mmhs013/pyMannKendall/blob/master/Examples/Example_pyMannKendall.ipynb 14 | 15 | ## Code of conduct 16 | 17 | Please note that this project is released with a [Contributor Code of Conduct][code_of_conduct]. By participating in this project you agree to abide by its terms. 18 | 19 | ## How you can contribute 20 | 21 | There are several ways you can contribute to this project. If you want to know more about why and how to contribute to open source projects like this one, see this [Open Source Guide](https://opensource.guide/how-to-contribute/). 22 | 23 | ### Share the love ❤️ 24 | 25 | Think `pyMannKendall` is useful? Let others discover it, by telling them in person, via Twitter or a blog post. 26 | 27 | Using `pyMannKendall` for a paper you are writing? Please [cite it][citation]. 28 | 29 | ### Ask a question ⁉️ 30 | 31 | Using `pyMannKendall` and got stuck? Browse the [readme ][repo] and the [demo notebook][demo_notebook] to see if you can find a solution. 32 | 33 | Still stuck? Post your question as an [issue on GitHub][new_issue]. 34 | 35 | While we cannot offer user support, we'll try to do our best to address it, as questions often lead to better documentation or the discovery of bugs. 36 | 37 | Want to ask a question in private? Contact the package maintainer by email : mmhs013@gmail.com 38 | 39 | ### Propose an idea 💡 40 | 41 | Have an idea for a new `pyMannKendall` feature? Take a look at the [issue list][issues] to see if it isn't included or suggested yet. If not, suggest your idea as an [issue on GitHub][new_issue]. While we can't promise to implement your idea, it helps to: 42 | 43 | * Explain in detail how it would work. 44 | * Keep the scope as narrow as possible. 45 | 46 | See below if you want to contribute code for your idea as well. 47 | 48 | ### Report a bug 🐛 49 | 50 | Using `pyMannKendall` and discovered a bug? That's annoying! Don't let others have the same experience and report it as an [issue on GitHub][new_issue] so we can fix it. A good bug report makes it easier for us to do so, so please include: 51 | 52 | * Your operating system name and version (e.g. Mac OS 10.13.6). 53 | * Any details about your local setup that might be helpful in troubleshooting. 54 | * Detailed steps to reproduce the bug. 55 | 56 | ### Improve the documentation 📖 57 | 58 | Noticed a typo on the website? Think a function could use a better example? Good documentation makes all the difference, so your help to improve it is very welcome! 59 | 60 | 1. Fork [this repo][repo] and clone it to your computer. To learn more about this process, see [this guide](https://guides.github.com/activities/forking/). 61 | 2. Edit the README.md file and submit a pull request. We will review your changes and include the fix in the next release. 62 | 63 | ### Contribute code 📝 64 | 65 | Care to fix bugs or implement new functionality for `pyMannKendall`? Awesome! 👏 Have a look at the [issue list][issues] and leave a comment on the things you want to work on. See also the development guidelines below. 66 | 67 | ## Development guidelines 68 | 69 | We try to follow the [GitHub flow](https://guides.github.com/introduction/flow/) for development and the [PEP 8](https://www.python.org/dev/peps/pep-0008/) style Guide for Python Code. 70 | 71 | 1. Fork [this repo][repo] and clone it to your computer. To learn more about this process, see [this guide](https://guides.github.com/activities/forking/). 72 | 73 | 2. If you have forked and cloned the project before and it has been a while since you worked on it, [pull changes from the original repo](https://help.github.com/articles/merging-an-upstream-repository-into-your-fork/) to your clone by using `git pull upstream master`. 74 | 75 | 3. Make your changes and test the modified code. 76 | 77 | 4. Commit and push your changes. 78 | 79 | 5. Submit a [pull request](https://guides.github.com/activities/forking/#making-a-pull-request). 80 | 81 | 82 | 83 | --- 84 | 85 | This file was adapted from a template created by [peterdesmet](https://gist.github.com/peterdesmet/e90a1b0dc17af6c12daf6e8b2f044e7c). 86 | -------------------------------------------------------------------------------- /Examples/AirPassengers.csv: -------------------------------------------------------------------------------- 1 | Month,#Passengers 2 | 1949-01,112 3 | 1949-02,118 4 | 1949-03,132 5 | 1949-04,129 6 | 1949-05,121 7 | 1949-06,135 8 | 1949-07,148 9 | 1949-08,148 10 | 1949-09,136 11 | 1949-10,119 12 | 1949-11,104 13 | 1949-12,118 14 | 1950-01,115 15 | 1950-02,126 16 | 1950-03,141 17 | 1950-04,135 18 | 1950-05,125 19 | 1950-06,149 20 | 1950-07,170 21 | 1950-08,170 22 | 1950-09,158 23 | 1950-10,133 24 | 1950-11,114 25 | 1950-12,140 26 | 1951-01,145 27 | 1951-02,150 28 | 1951-03,178 29 | 1951-04,163 30 | 1951-05,172 31 | 1951-06,178 32 | 1951-07,199 33 | 1951-08,199 34 | 1951-09,184 35 | 1951-10,162 36 | 1951-11,146 37 | 1951-12,166 38 | 1952-01,171 39 | 1952-02,180 40 | 1952-03,193 41 | 1952-04,181 42 | 1952-05,183 43 | 1952-06,218 44 | 1952-07,230 45 | 1952-08,242 46 | 1952-09,209 47 | 1952-10,191 48 | 1952-11,172 49 | 1952-12,194 50 | 1953-01,196 51 | 1953-02,196 52 | 1953-03,236 53 | 1953-04,235 54 | 1953-05,229 55 | 1953-06,243 56 | 1953-07,264 57 | 1953-08,272 58 | 1953-09,237 59 | 1953-10,211 60 | 1953-11,180 61 | 1953-12,201 62 | 1954-01,204 63 | 1954-02,188 64 | 1954-03,235 65 | 1954-04,227 66 | 1954-05,234 67 | 1954-06,264 68 | 1954-07,302 69 | 1954-08,293 70 | 1954-09,259 71 | 1954-10,229 72 | 1954-11,203 73 | 1954-12,229 74 | 1955-01,242 75 | 1955-02,233 76 | 1955-03,267 77 | 1955-04,269 78 | 1955-05,270 79 | 1955-06,315 80 | 1955-07,364 81 | 1955-08,347 82 | 1955-09,312 83 | 1955-10,274 84 | 1955-11,237 85 | 1955-12,278 86 | 1956-01,284 87 | 1956-02,277 88 | 1956-03,317 89 | 1956-04,313 90 | 1956-05,318 91 | 1956-06,374 92 | 1956-07,413 93 | 1956-08,405 94 | 1956-09,355 95 | 1956-10,306 96 | 1956-11,271 97 | 1956-12,306 98 | 1957-01,315 99 | 1957-02,301 100 | 1957-03,356 101 | 1957-04,348 102 | 1957-05,355 103 | 1957-06,422 104 | 1957-07,465 105 | 1957-08,467 106 | 1957-09,404 107 | 1957-10,347 108 | 1957-11,305 109 | 1957-12,336 110 | 1958-01,340 111 | 1958-02,318 112 | 1958-03,362 113 | 1958-04,348 114 | 1958-05,363 115 | 1958-06,435 116 | 1958-07,491 117 | 1958-08,505 118 | 1958-09,404 119 | 1958-10,359 120 | 1958-11,310 121 | 1958-12,337 122 | 1959-01,360 123 | 1959-02,342 124 | 1959-03,406 125 | 1959-04,396 126 | 1959-05,420 127 | 1959-06,472 128 | 1959-07,548 129 | 1959-08,559 130 | 1959-09,463 131 | 1959-10,407 132 | 1959-11,362 133 | 1959-12,405 134 | 1960-01,417 135 | 1960-02,391 136 | 1960-03,419 137 | 1960-04,461 138 | 1960-05,472 139 | 1960-06,535 140 | 1960-07,622 141 | 1960-08,606 142 | 1960-09,508 143 | 1960-10,461 144 | 1960-11,390 145 | 1960-12,432 146 | -------------------------------------------------------------------------------- /Examples/daily-total-female-births.csv: -------------------------------------------------------------------------------- 1 | "Date","Births" 2 | "1959-01-01",35 3 | "1959-01-02",32 4 | "1959-01-03",30 5 | "1959-01-04",31 6 | "1959-01-05",44 7 | "1959-01-06",29 8 | "1959-01-07",45 9 | "1959-01-08",43 10 | "1959-01-09",38 11 | "1959-01-10",27 12 | "1959-01-11",38 13 | "1959-01-12",33 14 | "1959-01-13",55 15 | "1959-01-14",47 16 | "1959-01-15",45 17 | "1959-01-16",37 18 | "1959-01-17",50 19 | "1959-01-18",43 20 | "1959-01-19",41 21 | "1959-01-20",52 22 | "1959-01-21",34 23 | "1959-01-22",53 24 | "1959-01-23",39 25 | "1959-01-24",32 26 | "1959-01-25",37 27 | "1959-01-26",43 28 | "1959-01-27",39 29 | "1959-01-28",35 30 | "1959-01-29",44 31 | "1959-01-30",38 32 | "1959-01-31",24 33 | "1959-02-01",23 34 | "1959-02-02",31 35 | "1959-02-03",44 36 | "1959-02-04",38 37 | "1959-02-05",50 38 | "1959-02-06",38 39 | "1959-02-07",51 40 | "1959-02-08",31 41 | "1959-02-09",31 42 | "1959-02-10",51 43 | "1959-02-11",36 44 | "1959-02-12",45 45 | "1959-02-13",51 46 | "1959-02-14",34 47 | "1959-02-15",52 48 | "1959-02-16",47 49 | "1959-02-17",45 50 | "1959-02-18",46 51 | "1959-02-19",39 52 | "1959-02-20",48 53 | "1959-02-21",37 54 | "1959-02-22",35 55 | "1959-02-23",52 56 | "1959-02-24",42 57 | "1959-02-25",45 58 | "1959-02-26",39 59 | "1959-02-27",37 60 | "1959-02-28",30 61 | "1959-03-01",35 62 | "1959-03-02",28 63 | "1959-03-03",45 64 | "1959-03-04",34 65 | "1959-03-05",36 66 | "1959-03-06",50 67 | "1959-03-07",44 68 | "1959-03-08",39 69 | "1959-03-09",32 70 | "1959-03-10",39 71 | "1959-03-11",45 72 | "1959-03-12",43 73 | "1959-03-13",39 74 | "1959-03-14",31 75 | "1959-03-15",27 76 | "1959-03-16",30 77 | "1959-03-17",42 78 | "1959-03-18",46 79 | "1959-03-19",41 80 | "1959-03-20",36 81 | "1959-03-21",45 82 | "1959-03-22",46 83 | "1959-03-23",43 84 | "1959-03-24",38 85 | "1959-03-25",34 86 | "1959-03-26",35 87 | "1959-03-27",56 88 | "1959-03-28",36 89 | "1959-03-29",32 90 | "1959-03-30",50 91 | "1959-03-31",41 92 | "1959-04-01",39 93 | "1959-04-02",41 94 | "1959-04-03",47 95 | "1959-04-04",34 96 | "1959-04-05",36 97 | "1959-04-06",33 98 | "1959-04-07",35 99 | "1959-04-08",38 100 | "1959-04-09",38 101 | "1959-04-10",34 102 | "1959-04-11",53 103 | "1959-04-12",34 104 | "1959-04-13",34 105 | "1959-04-14",38 106 | "1959-04-15",35 107 | "1959-04-16",32 108 | "1959-04-17",42 109 | "1959-04-18",34 110 | "1959-04-19",46 111 | "1959-04-20",30 112 | "1959-04-21",46 113 | "1959-04-22",45 114 | "1959-04-23",54 115 | "1959-04-24",34 116 | "1959-04-25",37 117 | "1959-04-26",35 118 | "1959-04-27",40 119 | "1959-04-28",42 120 | "1959-04-29",58 121 | "1959-04-30",51 122 | "1959-05-01",32 123 | "1959-05-02",35 124 | "1959-05-03",38 125 | "1959-05-04",33 126 | "1959-05-05",39 127 | "1959-05-06",47 128 | "1959-05-07",38 129 | "1959-05-08",52 130 | "1959-05-09",30 131 | "1959-05-10",34 132 | "1959-05-11",40 133 | "1959-05-12",35 134 | "1959-05-13",42 135 | "1959-05-14",41 136 | "1959-05-15",42 137 | "1959-05-16",38 138 | "1959-05-17",24 139 | "1959-05-18",34 140 | "1959-05-19",43 141 | "1959-05-20",36 142 | "1959-05-21",55 143 | "1959-05-22",41 144 | "1959-05-23",45 145 | "1959-05-24",41 146 | "1959-05-25",37 147 | "1959-05-26",43 148 | "1959-05-27",39 149 | "1959-05-28",33 150 | "1959-05-29",43 151 | "1959-05-30",40 152 | "1959-05-31",38 153 | "1959-06-01",45 154 | "1959-06-02",46 155 | "1959-06-03",34 156 | "1959-06-04",35 157 | "1959-06-05",48 158 | "1959-06-06",51 159 | "1959-06-07",36 160 | "1959-06-08",33 161 | "1959-06-09",46 162 | "1959-06-10",42 163 | "1959-06-11",48 164 | "1959-06-12",34 165 | "1959-06-13",41 166 | "1959-06-14",35 167 | "1959-06-15",40 168 | "1959-06-16",34 169 | "1959-06-17",30 170 | "1959-06-18",36 171 | "1959-06-19",40 172 | "1959-06-20",39 173 | "1959-06-21",45 174 | "1959-06-22",38 175 | "1959-06-23",47 176 | "1959-06-24",33 177 | "1959-06-25",30 178 | "1959-06-26",42 179 | "1959-06-27",43 180 | "1959-06-28",41 181 | "1959-06-29",41 182 | "1959-06-30",59 183 | "1959-07-01",43 184 | "1959-07-02",45 185 | "1959-07-03",38 186 | "1959-07-04",37 187 | "1959-07-05",45 188 | "1959-07-06",42 189 | "1959-07-07",57 190 | "1959-07-08",46 191 | "1959-07-09",51 192 | "1959-07-10",41 193 | "1959-07-11",47 194 | "1959-07-12",26 195 | "1959-07-13",35 196 | "1959-07-14",44 197 | "1959-07-15",41 198 | "1959-07-16",42 199 | "1959-07-17",36 200 | "1959-07-18",45 201 | "1959-07-19",45 202 | "1959-07-20",45 203 | "1959-07-21",47 204 | "1959-07-22",38 205 | "1959-07-23",42 206 | "1959-07-24",35 207 | "1959-07-25",36 208 | "1959-07-26",39 209 | "1959-07-27",45 210 | "1959-07-28",43 211 | "1959-07-29",47 212 | "1959-07-30",36 213 | "1959-07-31",41 214 | "1959-08-01",50 215 | "1959-08-02",39 216 | "1959-08-03",41 217 | "1959-08-04",46 218 | "1959-08-05",64 219 | "1959-08-06",45 220 | "1959-08-07",34 221 | "1959-08-08",38 222 | "1959-08-09",44 223 | "1959-08-10",48 224 | "1959-08-11",46 225 | "1959-08-12",44 226 | "1959-08-13",37 227 | "1959-08-14",39 228 | "1959-08-15",44 229 | "1959-08-16",45 230 | "1959-08-17",33 231 | "1959-08-18",44 232 | "1959-08-19",38 233 | "1959-08-20",46 234 | "1959-08-21",46 235 | "1959-08-22",40 236 | "1959-08-23",39 237 | "1959-08-24",44 238 | "1959-08-25",48 239 | "1959-08-26",50 240 | "1959-08-27",41 241 | "1959-08-28",42 242 | "1959-08-29",51 243 | "1959-08-30",41 244 | "1959-08-31",44 245 | "1959-09-01",38 246 | "1959-09-02",68 247 | "1959-09-03",40 248 | "1959-09-04",42 249 | "1959-09-05",51 250 | "1959-09-06",44 251 | "1959-09-07",45 252 | "1959-09-08",36 253 | "1959-09-09",57 254 | "1959-09-10",44 255 | "1959-09-11",42 256 | "1959-09-12",53 257 | "1959-09-13",42 258 | "1959-09-14",34 259 | "1959-09-15",40 260 | "1959-09-16",56 261 | "1959-09-17",44 262 | "1959-09-18",53 263 | "1959-09-19",55 264 | "1959-09-20",39 265 | "1959-09-21",59 266 | "1959-09-22",55 267 | "1959-09-23",73 268 | "1959-09-24",55 269 | "1959-09-25",44 270 | "1959-09-26",43 271 | "1959-09-27",40 272 | "1959-09-28",47 273 | "1959-09-29",51 274 | "1959-09-30",56 275 | "1959-10-01",49 276 | "1959-10-02",54 277 | "1959-10-03",56 278 | "1959-10-04",47 279 | "1959-10-05",44 280 | "1959-10-06",43 281 | "1959-10-07",42 282 | "1959-10-08",45 283 | "1959-10-09",50 284 | "1959-10-10",48 285 | "1959-10-11",43 286 | "1959-10-12",40 287 | "1959-10-13",59 288 | "1959-10-14",41 289 | "1959-10-15",42 290 | "1959-10-16",51 291 | "1959-10-17",49 292 | "1959-10-18",45 293 | "1959-10-19",43 294 | "1959-10-20",42 295 | "1959-10-21",38 296 | "1959-10-22",47 297 | "1959-10-23",38 298 | "1959-10-24",36 299 | "1959-10-25",42 300 | "1959-10-26",35 301 | "1959-10-27",28 302 | "1959-10-28",44 303 | "1959-10-29",36 304 | "1959-10-30",45 305 | "1959-10-31",46 306 | "1959-11-01",48 307 | "1959-11-02",49 308 | "1959-11-03",43 309 | "1959-11-04",42 310 | "1959-11-05",59 311 | "1959-11-06",45 312 | "1959-11-07",52 313 | "1959-11-08",46 314 | "1959-11-09",42 315 | "1959-11-10",40 316 | "1959-11-11",40 317 | "1959-11-12",45 318 | "1959-11-13",35 319 | "1959-11-14",35 320 | "1959-11-15",40 321 | "1959-11-16",39 322 | "1959-11-17",33 323 | "1959-11-18",42 324 | "1959-11-19",47 325 | "1959-11-20",51 326 | "1959-11-21",44 327 | "1959-11-22",40 328 | "1959-11-23",57 329 | "1959-11-24",49 330 | "1959-11-25",45 331 | "1959-11-26",49 332 | "1959-11-27",51 333 | "1959-11-28",46 334 | "1959-11-29",44 335 | "1959-11-30",52 336 | "1959-12-01",45 337 | "1959-12-02",32 338 | "1959-12-03",46 339 | "1959-12-04",41 340 | "1959-12-05",34 341 | "1959-12-06",33 342 | "1959-12-07",36 343 | "1959-12-08",49 344 | "1959-12-09",43 345 | "1959-12-10",43 346 | "1959-12-11",34 347 | "1959-12-12",39 348 | "1959-12-13",35 349 | "1959-12-14",52 350 | "1959-12-15",47 351 | "1959-12-16",52 352 | "1959-12-17",39 353 | "1959-12-18",40 354 | "1959-12-19",42 355 | "1959-12-20",42 356 | "1959-12-21",53 357 | "1959-12-22",39 358 | "1959-12-23",40 359 | "1959-12-24",38 360 | "1959-12-25",44 361 | "1959-12-26",34 362 | "1959-12-27",37 363 | "1959-12-28",52 364 | "1959-12-29",48 365 | "1959-12-30",55 366 | "1959-12-31",50 -------------------------------------------------------------------------------- /Examples/shampoo.csv: -------------------------------------------------------------------------------- 1 | "Month","Sales" 2 | "1-01",266.0 3 | "1-02",145.9 4 | "1-03",183.1 5 | "1-04",119.3 6 | "1-05",180.3 7 | "1-06",168.5 8 | "1-07",231.8 9 | "1-08",224.5 10 | "1-09",192.8 11 | "1-10",122.9 12 | "1-11",336.5 13 | "1-12",185.9 14 | "2-01",194.3 15 | "2-02",149.5 16 | "2-03",210.1 17 | "2-04",273.3 18 | "2-05",191.4 19 | "2-06",287.0 20 | "2-07",226.0 21 | "2-08",303.6 22 | "2-09",289.9 23 | "2-10",421.6 24 | "2-11",264.5 25 | "2-12",342.3 26 | "3-01",339.7 27 | "3-02",440.4 28 | "3-03",315.9 29 | "3-04",439.3 30 | "3-05",401.3 31 | "3-06",437.4 32 | "3-07",575.5 33 | "3-08",407.6 34 | "3-09",682.0 35 | "3-10",475.3 36 | "3-11",581.3 37 | "3-12",646.9 -------------------------------------------------------------------------------- /LICENSE.txt: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2019 Md. Manjurul Hussain Shourov 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | -------------------------------------------------------------------------------- /MANIFEST.in: -------------------------------------------------------------------------------- 1 | include versioneer.py 2 | include pymannkendall/_version.py 3 | -------------------------------------------------------------------------------- /Paper/Hussain et al [2019] - pyMannKendall a python package for non parametric Mann Kendall family of trend tests.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Coder2cdb/pyMannKendall/c2be737a199a694e481d98677e0e2c2c5d21b89d/Paper/Hussain et al [2019] - pyMannKendall a python package for non parametric Mann Kendall family of trend tests.pdf -------------------------------------------------------------------------------- /Paper/paper.bib: -------------------------------------------------------------------------------- 1 | @book{hipel1994time, 2 | title={Time series modelling of water resources and environmental systems}, 3 | author={Hipel, Keith W and McLeod, A Ian}, 4 | volume={45}, 5 | year={1994}, 6 | publisher={Elsevier} 7 | } 8 | @article{mann1945nonparametric, 9 | title={Nonparametric tests against trend}, 10 | author={Mann, Henry B}, 11 | journal={Econometrica: Journal of the Econometric Society}, 12 | pages={245--259}, 13 | year={1945}, 14 | publisher={JSTOR}, 15 | doi={10.2307/1907187} 16 | } 17 | @article{kendall1975rank, 18 | title={Rank correlation measures}, 19 | author={Kendall, MG}, 20 | journal={Charles Griffin, London}, 21 | volume={202}, 22 | pages={15}, 23 | year={1975} 24 | } 25 | @article{bari2016analysis, 26 | title={Analysis of seasonal and annual rainfall trends in the northern region of {Bangladesh}}, 27 | author={Bari, Sheikh Hefzul and Rahman, M Tauhid Ur and Hoque, Muhammad Azizul and Hussain, Md Manjurul}, 28 | journal={Atmospheric Research}, 29 | volume={176}, 30 | pages={148--158}, 31 | year={2016}, 32 | publisher={Elsevier}, 33 | doi={10.1016/j.atmosres.2016.02.008} 34 | } 35 | @article{hirsch1982techniques, 36 | title={Techniques of trend analysis for monthly water quality data}, 37 | author={Hirsch, Robert M and Slack, James R and Smith, Richard A}, 38 | journal={Water resources research}, 39 | volume={18}, 40 | number={1}, 41 | pages={107--121}, 42 | year={1982}, 43 | publisher={Wiley Online Library}, 44 | doi={10.1029/WR018i001p00107} 45 | } 46 | @article{hamed1998modified, 47 | title={A modified {Mann}--{Kendall} trend test for autocorrelated data}, 48 | author={Hamed, Khaled H and Rao, A Ramachandra}, 49 | journal={Journal of hydrology}, 50 | volume={204}, 51 | number={1-4}, 52 | pages={182--196}, 53 | year={1998}, 54 | publisher={Elsevier}, 55 | doi={10.1016/S0022-1694(97)00125-X} 56 | } 57 | @article{cox1955some, 58 | title={Some quick sign tests for trend in location and dispersion}, 59 | author={Cox, David Roxbee and Stuart, Alan}, 60 | journal={Biometrika}, 61 | volume={42}, 62 | number={1/2}, 63 | pages={80--95}, 64 | year={1955}, 65 | publisher={JSTOR}, 66 | doi={10.2307/2333424} 67 | } 68 | @article{yue2004mann, 69 | title={The {Mann}--{Kendall} test modified by effective sample size to detect trend in serially correlated hydrological series}, 70 | author={Yue, Sheng and Wang, ChunYuan}, 71 | journal={Water resources management}, 72 | volume={18}, 73 | number={3}, 74 | pages={201--218}, 75 | year={2004}, 76 | publisher={Springer}, 77 | doi={10.1023/B:WARM.0000043140.61082.60} 78 | } 79 | @article{yue2002applicability, 80 | title={Applicability of prewhitening to eliminate the influence of serial correlation on the {Mann}--{Kendall} test}, 81 | author={Yue, Sheng and Wang, Chun Yuan}, 82 | journal={Water resources research}, 83 | volume={38}, 84 | number={6}, 85 | pages={4--1}, 86 | year={2002}, 87 | publisher={Wiley Online Library}, 88 | doi={10.1029/2001WR000861} 89 | } 90 | @article{yue2002influence, 91 | title={The influence of autocorrelation on the ability to detect trend in hydrological series}, 92 | author={Yue, Sheng and Pilon, Paul and Phinney, Bob and Cavadias, George}, 93 | journal={Hydrological processes}, 94 | volume={16}, 95 | number={9}, 96 | pages={1807--1829}, 97 | year={2002}, 98 | publisher={Wiley Online Library}, 99 | doi={10.1002/hyp.1095} 100 | } 101 | @article{helsel2006regional, 102 | title={Regional {Kendall} test for trend}, 103 | author={Helsel, Dennis R and Frans, Lonna M}, 104 | journal={Environmental science \& technology}, 105 | volume={40}, 106 | number={13}, 107 | pages={4066--4073}, 108 | year={2006}, 109 | publisher={ACS Publications}, 110 | doi={10.1021/es051650b} 111 | } 112 | @book{hipel1994time, 113 | title={Time series modelling of water resources and environmental systems}, 114 | author={Hipel, Keith W and McLeod, A Ian}, 115 | volume={45}, 116 | year={1994}, 117 | publisher={Elsevier} 118 | } 119 | @article{libiseller2002performance, 120 | title={Performance of partial {Mann}--{Kendall} tests for trend detection in the presence of covariates}, 121 | author={Libiseller, Claudia and Grimvall, Anders}, 122 | journal={Environmetrics: The official journal of the International Environmetrics Society}, 123 | volume={13}, 124 | number={1}, 125 | pages={71--84}, 126 | year={2002}, 127 | publisher={Wiley Online Library}, 128 | doi={10.1002/env.507} 129 | } 130 | @inproceedings{theil1950rank, 131 | title={A rank-invariant method of linear and polynominal regression analysis (Parts 1-3)}, 132 | author={Theil, H}, 133 | booktitle={Ned. Akad. Wetensch. Proc. Ser. A}, 134 | volume={53}, 135 | pages={1397--1412}, 136 | year={1950} 137 | } 138 | @article{sen1968estimates, 139 | title={Estimates of the regression coefficient based on {Kendall}'s tau}, 140 | author={Sen, Pranab Kumar}, 141 | journal={Journal of the American statistical association}, 142 | volume={63}, 143 | number={324}, 144 | pages={1379--1389}, 145 | year={1968}, 146 | publisher={Taylor \& Francis Group}, 147 | doi={10.1080/01621459.1968.10480934} 148 | } 149 | -------------------------------------------------------------------------------- /Paper/paper.md: -------------------------------------------------------------------------------- 1 | --- 2 | title: 'pyMannKendall: a python package for non parametric Mann Kendall family of trend tests.' 3 | tags: 4 | - mann kendall 5 | - modified mann kendall 6 | - sen's slope 7 | authors: 8 | - name: Md. Manjurul Hussain 9 | orcid: 0000-0002-5361-0633 10 | affiliation: 1 11 | - name: Ishtiak Mahmud 12 | orcid: 0000-0002-4753-5403 13 | affiliation: 2 14 | affiliations: 15 | - name: Institute of Water and Flood Management, Bangladesh University of Engineering and Technology, Dhaka, Bangladesh 16 | index: 1 17 | - name: Shahjalal University of Science and Technology, Sylhet, Bangladesh 18 | index: 2 19 | date: 30 June 2019 20 | bibliography: paper.bib 21 | --- 22 | 23 | # Summary 24 | 25 | Trend analysis is one of the most important measurements in studying time series data. Both parametric and non-parametric tests are commonly used in trend analysis. Parametric tests require data to be independent and normally distributed. On the other hand, non-parametric trend tests require only that the data be independent and can tolerate outliers in the data [@hamed1998modified]. However, parametric tests are more powerful than nonparametric ones. 26 | 27 | The Mann–Kendall trend test [@mann1945nonparametric; @kendall1975rank] is a widely used non-parametric tests to detect significant trends in time series. However, the original Mann-Kendall test didn't consider serial correlation or seasonality effects [@bari2016analysis; @hirsch1982techniques]. But, in many real situations, the observed data are autocorrelated and this autocorrelation will result in misinterpretation of trend tests results [@hamed1998modified; @cox1955some]. Contrariwise, water quality, hydrologic, as well as climatic and other natural time series also have seasonality. To overcome those limitations of original Mann-Kendall test, various modified Mann-Kendall test have been developed. 28 | 29 | Again, Python is one of the widely used tools for data analysis. A large number of data analysis and research tools are also developed using Python. But, till now, there is no Mann-Kendall trend relation Python package available. ``pyMannKendall`` package fills this gap. 30 | 31 | ``pyMannKendall`` is written in pure Python and uses a vectorization approach to increase its performance. Currently, this package has 11 Mann-Kendall Tests and 2 Sen’s slope estimator functions. Brief description of the functions are below: 32 | 33 | 1. **Original Mann-Kendall test (*original_test*):** Original Mann-Kendall test [@mann1945nonparametric; @kendall1975rank] is a nonparametric test, which does not consider serial correlation or seasonal effects. 34 | 35 | 2. **Hamed and Rao Modified MK Test (*hamed_rao_modification_test*):** This modified MK test was proposed by @hamed1998modified to address serial autocorrelation issues. They suggested a variance correction approach to improve trend analysis. Users can consider first n significant lag by insert lag number in this function. By default, it considered all significant lags. 36 | 37 | 3. **Yue and Wang Modified MK Test (*yue_wang_modification_test*):** This is also a variance correction method for considered serial autocorrelation proposed by @yue2004mann. Users can also set their desired significant number of lags for the calculation. 38 | 39 | 4. **Modified MK test using Pre-Whitening method (*pre_whitening_modification_test*):** This test was suggested by @yue2002applicability to use Pre-Whitening the time series before the application of trend test. 40 | 41 | 5. **Modified MK test using Trend free Pre-Whitening method (*trend_free_pre_whitening_modification_test*):** This test was also proposed by @yue2002influence to remove trend components and then Pre-Whitening the time series before application of trend test. 42 | 43 | 6. **Multivariate MK Test (*multivariate_test*):** This is an MK test for multiple parameters proposed by @hirsch1982techniques. They used this method for seasonal MK tests, where they considered every month as a parameter. 44 | 45 | 7. **Seasonal MK Test (*seasonal_test*):** For seasonal time series data, @hirsch1982techniques proposed this test to calculate the seasonal trend. 46 | 47 | 8. **Regional MK Test (*regional_test*):** Based on the proposed seasonal MK test of @hirsch1982techniques, @helsel2006regional suggest a regional MK test to calculate the overall trend on a regional scale. 48 | 49 | 9. **Correlated Multivariate MK Test (*correlated_multivariate_test*):** This multivariate MK test was proposed by @hipel1994time for where the parameters are correlated. 50 | 51 | 10. **Correlated Seasonal MK Test (*correlated_seasonal_test*):** This method was proposed by @hipel1994time, for when time series significantly correlate with the preceding one or more months/seasons. 52 | 53 | 11. **Partial MK Test (*partial_test*):** In a real event, many factors affect the main studied response parameter, which can bias the trend results. To overcome this problem, @libiseller2002performance proposed this partial mk test. It required two parameters as input, where one is the response parameter and other is an independent parameter. 54 | 55 | 12. **Theil-sen's Slope Estimator (*sens_slope*):** This method was proposed by @theil1950rank and @sen1968estimates to estimate the magnitude of the monotonic trend. 56 | 57 | 13. **Seasonal sen's Slope Estimator (*seasonal_sens_slope*):** This method was proposed by @hipel1994time to estimate the magnitude of the monotonic trend, when data has seasonal effects. 58 | 59 | 60 | `pyMannKendall` is a non-parametric Mann-Kendall trend analysis package implemented in pure Python, which brings together almost all types of Mann-Kendall tests, which might help researchers to check Mann-Kendall trends in Python. 61 | 62 | # References 63 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # pyMannKendall 2 | [![Build Status](https://travis-ci.com/mmhs013/pyMannKendall.svg?branch=master)](https://travis-ci.com/mmhs013/pyMannKendall) 3 | [![PyPI](https://img.shields.io/pypi/v/pymannkendall.svg)](https://pypi.org/project/pymannkendall/) 4 | [![PyPI - License](https://img.shields.io/pypi/l/pymannkendall.svg)](https://pypi.org/project/pymannkendall/) 5 | [![PyPI - Status](https://img.shields.io/pypi/status/pymannkendall.svg)](https://pypi.org/project/pymannkendall/) 6 | [![PyPI - Python Version](https://img.shields.io/pypi/pyversions/pymannkendall.svg)](https://pypi.org/project/pymannkendall/) 7 | 8 | [![Downloads](https://pepy.tech/badge/pymannkendall)](https://pepy.tech/project/pymannkendall) 9 | [![Conda](https://img.shields.io/conda/dn/conda-forge/pymannkendall?label=conda-downloads)](https://anaconda.org/conda-forge/pymannkendall) 10 | 11 | [![Google Scholar](https://github.com/mmhs013/Citation_Parser/blob/main/images/gs_pymk_cite.svg?raw=true)](https://scholar.google.com/scholar?q=pyMannKendall%3A+a+python+package+for+non+parametric+Mann+Kendall+family+of+trend+tests.) 12 | [![Researchgate](https://github.com/mmhs013/Citation_Parser/blob/main/images/rg_pymk_cite.svg?raw=true)](https://www.researchgate.net/publication/334688255_pyMannKendall_a_python_package_for_non_parametric_Mann_Kendall_family_of_trend_tests) 13 | 14 | [![status](http://joss.theoj.org/papers/14903dbd55343be89105112e585d262a/status.svg)](http://joss.theoj.org/papers/14903dbd55343be89105112e585d262a) 15 | [![DOI](https://zenodo.org/badge/174495388.svg)](https://zenodo.org/badge/latestdoi/174495388) 16 | 17 | ## What is the Mann-Kendall Test ? 18 | The Mann-Kendall Trend Test (sometimes called the MK test) is used to analyze time series data for consistently increasing or decreasing trends (monotonic trends). It is a non-parametric test, which means it works for all distributions (i.e. data doesn't have to meet the assumption of normality), but data should have no serial correlation. If the data has a serial correlation, it could affect in significant level (p-value). It could lead to misinterpretation. To overcome this problem, researchers proposed several modified Mann-Kendall tests (Hamed and Rao Modified MK Test, Yue and Wang Modified MK Test, Modified MK test using Pre-Whitening method, etc.). Seasonal Mann-Kendall test also developed to remove the effect of seasonality. 19 | 20 | Mann-Kendall Test is a powerful trend test, so several others modified Mann-Kendall tests like Multivariate MK Test, Regional MK Test, Correlated MK test, Partial MK Test, etc. were developed for the spacial condition. `pyMannkendal` is a pure Python implementation of non-parametric Mann-Kendall trend analysis, which bring together almost all types of Mann-Kendall Test. Currently, this package has 11 Mann-Kendall Tests and 2 sen's slope estimator function. Brief description of functions are below: 21 | 22 | 1. **Original Mann-Kendall test (*original_test*):** Original Mann-Kendall test is a nonparametric test, which does not consider serial correlation or seasonal effects. 23 | 24 | 2. **Hamed and Rao Modified MK Test (*hamed_rao_modification_test*):** This modified MK test proposed by *Hamed and Rao (1998)* to address serial autocorrelation issues. They suggested a variance correction approach to improve trend analysis. User can consider first n significant lag by insert lag number in this function. By default, it considered all significant lags. 25 | 26 | 3. **Yue and Wang Modified MK Test (*yue_wang_modification_test*):** This is also a variance correction method for considered serial autocorrelation proposed by *Yue, S., & Wang, C. Y. (2004)*. User can also set their desired significant n lags for the calculation. 27 | 28 | 4. **Modified MK test using Pre-Whitening method (*pre_whitening_modification_test*):** This test suggested by *Yue and Wang (2002)* to using Pre-Whitening the time series before the application of trend test. 29 | 30 | 5. **Modified MK test using Trend free Pre-Whitening method (*trend_free_pre_whitening_modification_test*):** This test also proposed by *Yue and Wang (2002)* to remove trend component and then Pre-Whitening the time series before application of trend test. 31 | 32 | 6. **Multivariate MK Test (*multivariate_test*):** This is an MK test for multiple parameters proposed by *Hirsch (1982)*. He used this method for seasonal mk test, where he considered every month as a parameter. 33 | 34 | 7. **Seasonal MK Test (*seasonal_test*):** For seasonal time series data, *Hirsch, R.M., Slack, J.R. and Smith, R.A. (1982)* proposed this test to calculate the seasonal trend. 35 | 36 | 8. **Regional MK Test (*regional_test*):** Based on*Hirsch (1982)* proposed seasonal mk test, *Helsel, D.R. and Frans, L.M., (2006)* suggest regional mk test to calculate the overall trend in a regional scale. 37 | 38 | 9. **Correlated Multivariate MK Test (*correlated_multivariate_test*):** This multivariate mk test proposed by *Hipel (1994)* where the parameters are correlated. 39 | 40 | 10. **Correlated Seasonal MK Test (*correlated_seasonal_test*):** This method proposed by *Hipel (1994)* used, when time series significantly correlated with the preceding one or more months/seasons. 41 | 42 | 11. **Partial MK Test (*partial_test*):** In a real event, many factors are affecting the main studied response parameter, which can bias the trend results. To overcome this problem, *Libiseller (2002)* proposed this partial mk test. It required two parameters as input, where, one is response parameter and other is an independent parameter. 43 | 44 | 12. **Theil-Sen's Slope Estimator (*sens_slope*):** This method proposed by *Theil (1950)* and *Sen (1968)* to estimate the magnitude of the monotonic trend. Intercept is calculate using *Conover, W.J. (1980)* method. 45 | 46 | 13. **Seasonal Theil-Sen's Slope Estimator (*seasonal_sens_slope*):** This method proposed by *Hipel (1994)* to estimate the magnitude of the monotonic trend, when data has seasonal effects. Intercept is calculate using *Conover, W.J. (1980)* method. 47 | 48 | ## Function details: 49 | 50 | All Mann-Kendall test functions have almost similar input parameters. Those are: 51 | 52 | - **x**: a vector (list, numpy array or pandas series) data 53 | - **alpha**: significance level (0.05 is the default) 54 | - **lag**: No. of First Significant Lags (Only available in hamed_rao_modification_test and yue_wang_modification_test) 55 | - **period**: seasonal cycle. For monthly data it is 12, weekly data it is 52 (Only available in seasonal tests) 56 | 57 | And all Mann-Kendall tests return a named tuple which contained: 58 | 59 | - **trend**: tells the trend (increasing, decreasing or no trend) 60 | - **h**: True (if trend is present) or False (if the trend is absence) 61 | - **p**: p-value of the significance test 62 | - **z**: normalized test statistics 63 | - **Tau**: Kendall Tau 64 | - **s**: Mann-Kendal's score 65 | - **var_s**: Variance S 66 | - **slope**: Theil-Sen estimator/slope 67 | - **intercept**: intercept of Kendall-Theil Robust Line, for seasonal test, full period cycle consider as unit time step 68 | 69 | sen's slope function required data vector. seasonal sen's slope also has optional input period, which by the default value is 12. Both sen's slope function return only slope value. 70 | 71 | ## Dependencies 72 | 73 | For the installation of `pyMannKendall`, the following packages are required: 74 | - [numpy](https://www.numpy.org/) 75 | - [scipy](https://www.scipy.org/) 76 | 77 | ## Installation 78 | 79 | You can install `pyMannKendall` using pip. For Linux users 80 | 81 | ```python 82 | sudo pip install pymannkendall 83 | ``` 84 | 85 | or, for Windows user 86 | 87 | ```python 88 | pip install pymannkendall 89 | ``` 90 | 91 | or, you can use conda 92 | ```python 93 | conda install -c conda-forge pymannkendall 94 | ``` 95 | 96 | or you can clone the repo and install it: 97 | 98 | ```bash 99 | git clone https://github.com/mmhs013/pymannkendall 100 | cd pymannkendall 101 | python setup.py install 102 | ``` 103 | 104 | ## Tests 105 | 106 | `pyMannKendall` is automatically tested using `pytest` package on each commit [here](https://travis-ci.org/mmhs013/pyMannKendall/), but the tests can be manually run: 107 | 108 | ``` 109 | pytest -v 110 | ``` 111 | 112 | ## Usage 113 | 114 | A quick example of `pyMannKendall` usage is given below. Several more examples are provided [here](https://github.com/mmhs013/pyMannKendall/blob/master/Examples/Example_pyMannKendall.ipynb). 115 | 116 | ```python 117 | import numpy as np 118 | import pymannkendall as mk 119 | 120 | # Data generation for analysis 121 | data = np.random.rand(360,1) 122 | 123 | result = mk.original_test(data) 124 | print(result) 125 | ``` 126 | Output are like this: 127 | ```python 128 | Mann_Kendall_Test(trend='no trend', h=False, p=0.9507221701045581, z=0.06179991635055463, Tau=0.0021974620860414733, s=142.0, var_s=5205500.0, slope=1.0353584906597959e-05, intercept=0.5232692553379981) 129 | ``` 130 | Whereas, the output is a named tuple, so you can call by name for specific result: 131 | ```python 132 | print(result.slope) 133 | ``` 134 | or, you can directly unpack your results like this: 135 | ```python 136 | trend, h, p, z, Tau, s, var_s, slope, intercept = mk.original_test(data) 137 | ``` 138 | 139 | ## Citation 140 | 141 | [![Google Scholar](https://github.com/mmhs013/Citation_Parser/blob/main/images/gs_pymk_cite.svg?raw=true)](https://scholar.google.com/scholar?q=pyMannKendall%3A+a+python+package+for+non+parametric+Mann+Kendall+family+of+trend+tests.) 142 | [![Researchgate](https://github.com/mmhs013/Citation_Parser/blob/main/images/rg_pymk_cite.svg?raw=true)](https://www.researchgate.net/publication/334688255_pyMannKendall_a_python_package_for_non_parametric_Mann_Kendall_family_of_trend_tests) 143 | 144 | If you publish results for which you used `pyMannKendall`, please give credit by citing [Hussain et al., (2019)](https://doi.org/10.21105/joss.01556): 145 | 146 | > Hussain et al., (2019). pyMannKendall: a python package for non parametric Mann Kendall family of trend tests.. Journal of Open Source Software, 4(39), 1556, https://doi.org/10.21105/joss.01556 147 | 148 | 149 | ``` 150 | @article{Hussain2019pyMannKendall, 151 | journal = {Journal of Open Source Software}, 152 | doi = {10.21105/joss.01556}, 153 | issn = {2475-9066}, 154 | number = {39}, 155 | publisher = {The Open Journal}, 156 | title = {pyMannKendall: a python package for non parametric Mann Kendall family of trend tests.}, 157 | url = {http://dx.doi.org/10.21105/joss.01556}, 158 | volume = {4}, 159 | author = {Hussain, Md. and Mahmud, Ishtiak}, 160 | pages = {1556}, 161 | date = {2019-07-25}, 162 | year = {2019}, 163 | month = {7}, 164 | day = {25}, 165 | } 166 | ``` 167 | 168 | ## Contributions 169 | 170 | `pyMannKendall` is a community project and welcomes contributions. Additional information can be found in the [contribution guidelines](https://github.com/mmhs013/pyMannKendall/blob/master/CONTRIBUTING.md). 171 | 172 | 173 | ## Code of Conduct 174 | 175 | `pyMannKendall` wishes to maintain a positive community. Additional details can be found in the [Code of Conduct](https://github.com/mmhs013/pyMannKendall/blob/master/CODE_OF_CONDUCT.md). 176 | 177 | 178 | ## References 179 | 180 | 1. Bari, S. H., Rahman, M. T. U., Hoque, M. A., & Hussain, M. M. (2016). Analysis of seasonal and annual rainfall trends in the northern region of Bangladesh. *Atmospheric Research*, 176, 148-158. doi:[10.1016/j.atmosres.2016.02.008](https://doi.org/10.1016/j.atmosres.2016.02.008) 181 | 182 | 2. Conover, W.J., (1980). Some methods based on ranks (Chapter 5), [Practical nonparametric statistics (2nd Ed.)](https://www.wiley.com/en-us/Practical+Nonparametric+Statistics%2C+3rd+Edition-p-9780471160687), *John Wiley and Sons*. 183 | 184 | 3. Cox, D. R., & Stuart, A. (1955). Some quick sign tests for trend in location and dispersion. *Biometrika*, 42(1/2), 80-95. doi:[10.2307/2333424](https://doi.org/10.2307/2333424) 185 | 186 | 4. Hamed, K. H., & Rao, A. R. (1998). A modified Mann-Kendall trend test for autocorrelated data. *Journal of hydrology*, 204(1-4), 182-196. doi:[10.1016/S0022-1694(97)00125-X](https://doi.org/10.1016/S0022-1694(97)00125-X) 187 | 188 | 5. Helsel, D. R., & Frans, L. M. (2006). Regional Kendall test for trend. *Environmental science & technology*, 40(13), 4066-4073. doi:[10.1021/es051650b](https://doi.org/10.1021/es051650b) 189 | 190 | 6. Hipel, K. W., & McLeod, A. I. (1994). Time series modelling of water resources and environmental systems (Vol. 45). Elsevier. 191 | 192 | 7. Hirsch, R. M., Slack, J. R., & Smith, R. A. (1982). Techniques of trend analysis for monthly water quality data. *Water resources research*, 18(1), 107-121. doi:[10.1029/WR018i001p00107](https://doi.org/10.1029/WR018i001p00107) 193 | 194 | 8. Jacquelin Dietz, E., (1987). A comparison of robust estimators in simple linear regression: A comparison of robust estimators. Communications in Statistics-Simulation and Computation, 16(4), pp.1209-1227. doi: [10.1080/03610918708812645](https://doi.org/10.1080/03610918708812645) 195 | 196 | 9. Kendall, M. (1975). Rank correlation measures. *Charles Griffin*, London, 202, 15. 197 | 198 | 10. Libiseller, C., & Grimvall, A. (2002). Performance of partial Mann-Kendall tests for trend detection in the presence of covariates. *Environmetrics: The official journal of the International Environmetrics Society*, 13(1), 71-84. doi:[10.1002/env.507](https://doi.org/1010.1002/env.507) 199 | 200 | 11. Mann, H. B. (1945). Nonparametric tests against trend. *Econometrica: Journal of the Econometric Society*, 245-259. doi:[10.2307/1907187](https://doi.org/10.2307/1907187) 201 | 202 | 12. Sen, P. K. (1968). Estimates of the regression coefficient based on Kendall's tau. *Journal of the American statistical association*, 63(324), 1379-1389. doi:[10.1080/01621459.1968.10480934](https://doi.org/10.1080/01621459.1968.10480934) 203 | 204 | 13. Theil, H. (1950). A rank-invariant method of linear and polynominal regression analysis (parts 1-3). In *Ned. Akad. Wetensch. Proc. Ser. A* (Vol. 53, pp. 1397-1412). 205 | 206 | 14. Yue, S., & Wang, C. (2004). The Mann-Kendall test modified by effective sample size to detect trend in serially correlated hydrological series. *Water resources management*, 18(3), 201-218. doi:[10.1023/B:WARM.0000043140.61082.60](https://doi.org/10.1023/B:WARM.0000043140.61082.60) 207 | 208 | 15. Yue, S., & Wang, C. Y. (2002). Applicability of prewhitening to eliminate the influence of serial correlation on the Mann-Kendall test. *Water resources research*, 38(6), 4-1. doi:[10.1029/2001WR000861](https://doi.org/10.1029/2001WR000861) 209 | 210 | 16. Yue, S., Pilon, P., Phinney, B., & Cavadias, G. (2002). The influence of autocorrelation on the ability to detect trend in hydrological series. *Hydrological processes*, 16(9), 1807-1829. doi:[10.1002/hyp.1095](https://doi.org/10.1002/hyp.1095) 211 | 212 | -------------------------------------------------------------------------------- /pymannkendall/__init__.py: -------------------------------------------------------------------------------- 1 | from .pymannkendall import sens_slope, seasonal_sens_slope, original_test, hamed_rao_modification_test, yue_wang_modification_test, pre_whitening_modification_test, trend_free_pre_whitening_modification_test, multivariate_test, seasonal_test, regional_test, correlated_multivariate_test, correlated_seasonal_test, partial_test 2 | 3 | __all__ = [sens_slope, seasonal_sens_slope, original_test, hamed_rao_modification_test, yue_wang_modification_test, pre_whitening_modification_test, trend_free_pre_whitening_modification_test, multivariate_test, seasonal_test, regional_test, correlated_multivariate_test, correlated_seasonal_test, partial_test] 4 | 5 | from ._version import get_versions 6 | __version__ = get_versions()['version'] 7 | del get_versions -------------------------------------------------------------------------------- /pymannkendall/_version.py: -------------------------------------------------------------------------------- 1 | 2 | # This file helps to compute a version number in source trees obtained from 3 | # git-archive tarball (such as those provided by githubs download-from-tag 4 | # feature). Distribution tarballs (built by setup.py sdist) and build 5 | # directories (produced by setup.py build) will contain a much shorter file 6 | # that just contains the computed version number. 7 | 8 | # This file is released into the public domain. Generated by 9 | # versioneer-0.18 (https://github.com/warner/python-versioneer) 10 | 11 | """Git implementation of _version.py.""" 12 | 13 | import errno 14 | import os 15 | import re 16 | import subprocess 17 | import sys 18 | 19 | 20 | def get_keywords(): 21 | """Get the keywords needed to look up the version information.""" 22 | # these strings will be replaced by git during git-archive. 23 | # setup.py/versioneer.py will grep for the variable names, so they must 24 | # each be defined on a line of their own. _version.py will just call 25 | # get_keywords(). 26 | git_refnames = " (HEAD -> master)" 27 | git_full = "c2be737a199a694e481d98677e0e2c2c5d21b89d" 28 | git_date = "2021-06-26 00:11:48 +0600" 29 | keywords = {"refnames": git_refnames, "full": git_full, "date": git_date} 30 | return keywords 31 | 32 | 33 | class VersioneerConfig: 34 | """Container for Versioneer configuration parameters.""" 35 | 36 | 37 | def get_config(): 38 | """Create, populate and return the VersioneerConfig() object.""" 39 | # these strings are filled in when 'setup.py versioneer' creates 40 | # _version.py 41 | cfg = VersioneerConfig() 42 | cfg.VCS = "git" 43 | cfg.style = "pep440" 44 | cfg.tag_prefix = "v" 45 | cfg.parentdir_prefix = "pymannkendall-" 46 | cfg.versionfile_source = "pymannkendall/_version.py" 47 | cfg.verbose = False 48 | return cfg 49 | 50 | 51 | class NotThisMethod(Exception): 52 | """Exception raised if a method is not valid for the current scenario.""" 53 | 54 | 55 | LONG_VERSION_PY = {} 56 | HANDLERS = {} 57 | 58 | 59 | def register_vcs_handler(vcs, method): # decorator 60 | """Decorator to mark a method as the handler for a particular VCS.""" 61 | def decorate(f): 62 | """Store f in HANDLERS[vcs][method].""" 63 | if vcs not in HANDLERS: 64 | HANDLERS[vcs] = {} 65 | HANDLERS[vcs][method] = f 66 | return f 67 | return decorate 68 | 69 | 70 | def run_command(commands, args, cwd=None, verbose=False, hide_stderr=False, 71 | env=None): 72 | """Call the given command(s).""" 73 | assert isinstance(commands, list) 74 | p = None 75 | for c in commands: 76 | try: 77 | dispcmd = str([c] + args) 78 | # remember shell=False, so use git.cmd on windows, not just git 79 | p = subprocess.Popen([c] + args, cwd=cwd, env=env, 80 | stdout=subprocess.PIPE, 81 | stderr=(subprocess.PIPE if hide_stderr 82 | else None)) 83 | break 84 | except EnvironmentError: 85 | e = sys.exc_info()[1] 86 | if e.errno == errno.ENOENT: 87 | continue 88 | if verbose: 89 | print("unable to run %s" % dispcmd) 90 | print(e) 91 | return None, None 92 | else: 93 | if verbose: 94 | print("unable to find command, tried %s" % (commands,)) 95 | return None, None 96 | stdout = p.communicate()[0].strip() 97 | if sys.version_info[0] >= 3: 98 | stdout = stdout.decode() 99 | if p.returncode != 0: 100 | if verbose: 101 | print("unable to run %s (error)" % dispcmd) 102 | print("stdout was %s" % stdout) 103 | return None, p.returncode 104 | return stdout, p.returncode 105 | 106 | 107 | def versions_from_parentdir(parentdir_prefix, root, verbose): 108 | """Try to determine the version from the parent directory name. 109 | 110 | Source tarballs conventionally unpack into a directory that includes both 111 | the project name and a version string. We will also support searching up 112 | two directory levels for an appropriately named parent directory 113 | """ 114 | rootdirs = [] 115 | 116 | for i in range(3): 117 | dirname = os.path.basename(root) 118 | if dirname.startswith(parentdir_prefix): 119 | return {"version": dirname[len(parentdir_prefix):], 120 | "full-revisionid": None, 121 | "dirty": False, "error": None, "date": None} 122 | else: 123 | rootdirs.append(root) 124 | root = os.path.dirname(root) # up a level 125 | 126 | if verbose: 127 | print("Tried directories %s but none started with prefix %s" % 128 | (str(rootdirs), parentdir_prefix)) 129 | raise NotThisMethod("rootdir doesn't start with parentdir_prefix") 130 | 131 | 132 | @register_vcs_handler("git", "get_keywords") 133 | def git_get_keywords(versionfile_abs): 134 | """Extract version information from the given file.""" 135 | # the code embedded in _version.py can just fetch the value of these 136 | # keywords. When used from setup.py, we don't want to import _version.py, 137 | # so we do it with a regexp instead. This function is not used from 138 | # _version.py. 139 | keywords = {} 140 | try: 141 | f = open(versionfile_abs, "r") 142 | for line in f.readlines(): 143 | if line.strip().startswith("git_refnames ="): 144 | mo = re.search(r'=\s*"(.*)"', line) 145 | if mo: 146 | keywords["refnames"] = mo.group(1) 147 | if line.strip().startswith("git_full ="): 148 | mo = re.search(r'=\s*"(.*)"', line) 149 | if mo: 150 | keywords["full"] = mo.group(1) 151 | if line.strip().startswith("git_date ="): 152 | mo = re.search(r'=\s*"(.*)"', line) 153 | if mo: 154 | keywords["date"] = mo.group(1) 155 | f.close() 156 | except EnvironmentError: 157 | pass 158 | return keywords 159 | 160 | 161 | @register_vcs_handler("git", "keywords") 162 | def git_versions_from_keywords(keywords, tag_prefix, verbose): 163 | """Get version information from git keywords.""" 164 | if not keywords: 165 | raise NotThisMethod("no keywords at all, weird") 166 | date = keywords.get("date") 167 | if date is not None: 168 | # git-2.2.0 added "%cI", which expands to an ISO-8601 -compliant 169 | # datestamp. However we prefer "%ci" (which expands to an "ISO-8601 170 | # -like" string, which we must then edit to make compliant), because 171 | # it's been around since git-1.5.3, and it's too difficult to 172 | # discover which version we're using, or to work around using an 173 | # older one. 174 | date = date.strip().replace(" ", "T", 1).replace(" ", "", 1) 175 | refnames = keywords["refnames"].strip() 176 | if refnames.startswith("$Format"): 177 | if verbose: 178 | print("keywords are unexpanded, not using") 179 | raise NotThisMethod("unexpanded keywords, not a git-archive tarball") 180 | refs = set([r.strip() for r in refnames.strip("()").split(",")]) 181 | # starting in git-1.8.3, tags are listed as "tag: foo-1.0" instead of 182 | # just "foo-1.0". If we see a "tag: " prefix, prefer those. 183 | TAG = "tag: " 184 | tags = set([r[len(TAG):] for r in refs if r.startswith(TAG)]) 185 | if not tags: 186 | # Either we're using git < 1.8.3, or there really are no tags. We use 187 | # a heuristic: assume all version tags have a digit. The old git %d 188 | # expansion behaves like git log --decorate=short and strips out the 189 | # refs/heads/ and refs/tags/ prefixes that would let us distinguish 190 | # between branches and tags. By ignoring refnames without digits, we 191 | # filter out many common branch names like "release" and 192 | # "stabilization", as well as "HEAD" and "master". 193 | tags = set([r for r in refs if re.search(r'\d', r)]) 194 | if verbose: 195 | print("discarding '%s', no digits" % ",".join(refs - tags)) 196 | if verbose: 197 | print("likely tags: %s" % ",".join(sorted(tags))) 198 | for ref in sorted(tags): 199 | # sorting will prefer e.g. "2.0" over "2.0rc1" 200 | if ref.startswith(tag_prefix): 201 | r = ref[len(tag_prefix):] 202 | if verbose: 203 | print("picking %s" % r) 204 | return {"version": r, 205 | "full-revisionid": keywords["full"].strip(), 206 | "dirty": False, "error": None, 207 | "date": date} 208 | # no suitable tags, so version is "0+unknown", but full hex is still there 209 | if verbose: 210 | print("no suitable tags, using unknown + full revision id") 211 | return {"version": "0+unknown", 212 | "full-revisionid": keywords["full"].strip(), 213 | "dirty": False, "error": "no suitable tags", "date": None} 214 | 215 | 216 | @register_vcs_handler("git", "pieces_from_vcs") 217 | def git_pieces_from_vcs(tag_prefix, root, verbose, run_command=run_command): 218 | """Get version from 'git describe' in the root of the source tree. 219 | 220 | This only gets called if the git-archive 'subst' keywords were *not* 221 | expanded, and _version.py hasn't already been rewritten with a short 222 | version string, meaning we're inside a checked out source tree. 223 | """ 224 | GITS = ["git"] 225 | if sys.platform == "win32": 226 | GITS = ["git.cmd", "git.exe"] 227 | 228 | out, rc = run_command(GITS, ["rev-parse", "--git-dir"], cwd=root, 229 | hide_stderr=True) 230 | if rc != 0: 231 | if verbose: 232 | print("Directory %s not under git control" % root) 233 | raise NotThisMethod("'git rev-parse --git-dir' returned error") 234 | 235 | # if there is a tag matching tag_prefix, this yields TAG-NUM-gHEX[-dirty] 236 | # if there isn't one, this yields HEX[-dirty] (no NUM) 237 | describe_out, rc = run_command(GITS, ["describe", "--tags", "--dirty", 238 | "--always", "--long", 239 | "--match", "%s*" % tag_prefix], 240 | cwd=root) 241 | # --long was added in git-1.5.5 242 | if describe_out is None: 243 | raise NotThisMethod("'git describe' failed") 244 | describe_out = describe_out.strip() 245 | full_out, rc = run_command(GITS, ["rev-parse", "HEAD"], cwd=root) 246 | if full_out is None: 247 | raise NotThisMethod("'git rev-parse' failed") 248 | full_out = full_out.strip() 249 | 250 | pieces = {} 251 | pieces["long"] = full_out 252 | pieces["short"] = full_out[:7] # maybe improved later 253 | pieces["error"] = None 254 | 255 | # parse describe_out. It will be like TAG-NUM-gHEX[-dirty] or HEX[-dirty] 256 | # TAG might have hyphens. 257 | git_describe = describe_out 258 | 259 | # look for -dirty suffix 260 | dirty = git_describe.endswith("-dirty") 261 | pieces["dirty"] = dirty 262 | if dirty: 263 | git_describe = git_describe[:git_describe.rindex("-dirty")] 264 | 265 | # now we have TAG-NUM-gHEX or HEX 266 | 267 | if "-" in git_describe: 268 | # TAG-NUM-gHEX 269 | mo = re.search(r'^(.+)-(\d+)-g([0-9a-f]+)$', git_describe) 270 | if not mo: 271 | # unparseable. Maybe git-describe is misbehaving? 272 | pieces["error"] = ("unable to parse git-describe output: '%s'" 273 | % describe_out) 274 | return pieces 275 | 276 | # tag 277 | full_tag = mo.group(1) 278 | if not full_tag.startswith(tag_prefix): 279 | if verbose: 280 | fmt = "tag '%s' doesn't start with prefix '%s'" 281 | print(fmt % (full_tag, tag_prefix)) 282 | pieces["error"] = ("tag '%s' doesn't start with prefix '%s'" 283 | % (full_tag, tag_prefix)) 284 | return pieces 285 | pieces["closest-tag"] = full_tag[len(tag_prefix):] 286 | 287 | # distance: number of commits since tag 288 | pieces["distance"] = int(mo.group(2)) 289 | 290 | # commit: short hex revision ID 291 | pieces["short"] = mo.group(3) 292 | 293 | else: 294 | # HEX: no tags 295 | pieces["closest-tag"] = None 296 | count_out, rc = run_command(GITS, ["rev-list", "HEAD", "--count"], 297 | cwd=root) 298 | pieces["distance"] = int(count_out) # total number of commits 299 | 300 | # commit date: see ISO-8601 comment in git_versions_from_keywords() 301 | date = run_command(GITS, ["show", "-s", "--format=%ci", "HEAD"], 302 | cwd=root)[0].strip() 303 | pieces["date"] = date.strip().replace(" ", "T", 1).replace(" ", "", 1) 304 | 305 | return pieces 306 | 307 | 308 | def plus_or_dot(pieces): 309 | """Return a + if we don't already have one, else return a .""" 310 | if "+" in pieces.get("closest-tag", ""): 311 | return "." 312 | return "+" 313 | 314 | 315 | def render_pep440(pieces): 316 | """Build up version string, with post-release "local version identifier". 317 | 318 | Our goal: TAG[+DISTANCE.gHEX[.dirty]] . Note that if you 319 | get a tagged build and then dirty it, you'll get TAG+0.gHEX.dirty 320 | 321 | Exceptions: 322 | 1: no tags. git_describe was just HEX. 0+untagged.DISTANCE.gHEX[.dirty] 323 | """ 324 | if pieces["closest-tag"]: 325 | rendered = pieces["closest-tag"] 326 | if pieces["distance"] or pieces["dirty"]: 327 | rendered += plus_or_dot(pieces) 328 | rendered += "%d.g%s" % (pieces["distance"], pieces["short"]) 329 | if pieces["dirty"]: 330 | rendered += ".dirty" 331 | else: 332 | # exception #1 333 | rendered = "0+untagged.%d.g%s" % (pieces["distance"], 334 | pieces["short"]) 335 | if pieces["dirty"]: 336 | rendered += ".dirty" 337 | return rendered 338 | 339 | 340 | def render_pep440_pre(pieces): 341 | """TAG[.post.devDISTANCE] -- No -dirty. 342 | 343 | Exceptions: 344 | 1: no tags. 0.post.devDISTANCE 345 | """ 346 | if pieces["closest-tag"]: 347 | rendered = pieces["closest-tag"] 348 | if pieces["distance"]: 349 | rendered += ".post.dev%d" % pieces["distance"] 350 | else: 351 | # exception #1 352 | rendered = "0.post.dev%d" % pieces["distance"] 353 | return rendered 354 | 355 | 356 | def render_pep440_post(pieces): 357 | """TAG[.postDISTANCE[.dev0]+gHEX] . 358 | 359 | The ".dev0" means dirty. Note that .dev0 sorts backwards 360 | (a dirty tree will appear "older" than the corresponding clean one), 361 | but you shouldn't be releasing software with -dirty anyways. 362 | 363 | Exceptions: 364 | 1: no tags. 0.postDISTANCE[.dev0] 365 | """ 366 | if pieces["closest-tag"]: 367 | rendered = pieces["closest-tag"] 368 | if pieces["distance"] or pieces["dirty"]: 369 | rendered += ".post%d" % pieces["distance"] 370 | if pieces["dirty"]: 371 | rendered += ".dev0" 372 | rendered += plus_or_dot(pieces) 373 | rendered += "g%s" % pieces["short"] 374 | else: 375 | # exception #1 376 | rendered = "0.post%d" % pieces["distance"] 377 | if pieces["dirty"]: 378 | rendered += ".dev0" 379 | rendered += "+g%s" % pieces["short"] 380 | return rendered 381 | 382 | 383 | def render_pep440_old(pieces): 384 | """TAG[.postDISTANCE[.dev0]] . 385 | 386 | The ".dev0" means dirty. 387 | 388 | Eexceptions: 389 | 1: no tags. 0.postDISTANCE[.dev0] 390 | """ 391 | if pieces["closest-tag"]: 392 | rendered = pieces["closest-tag"] 393 | if pieces["distance"] or pieces["dirty"]: 394 | rendered += ".post%d" % pieces["distance"] 395 | if pieces["dirty"]: 396 | rendered += ".dev0" 397 | else: 398 | # exception #1 399 | rendered = "0.post%d" % pieces["distance"] 400 | if pieces["dirty"]: 401 | rendered += ".dev0" 402 | return rendered 403 | 404 | 405 | def render_git_describe(pieces): 406 | """TAG[-DISTANCE-gHEX][-dirty]. 407 | 408 | Like 'git describe --tags --dirty --always'. 409 | 410 | Exceptions: 411 | 1: no tags. HEX[-dirty] (note: no 'g' prefix) 412 | """ 413 | if pieces["closest-tag"]: 414 | rendered = pieces["closest-tag"] 415 | if pieces["distance"]: 416 | rendered += "-%d-g%s" % (pieces["distance"], pieces["short"]) 417 | else: 418 | # exception #1 419 | rendered = pieces["short"] 420 | if pieces["dirty"]: 421 | rendered += "-dirty" 422 | return rendered 423 | 424 | 425 | def render_git_describe_long(pieces): 426 | """TAG-DISTANCE-gHEX[-dirty]. 427 | 428 | Like 'git describe --tags --dirty --always -long'. 429 | The distance/hash is unconditional. 430 | 431 | Exceptions: 432 | 1: no tags. HEX[-dirty] (note: no 'g' prefix) 433 | """ 434 | if pieces["closest-tag"]: 435 | rendered = pieces["closest-tag"] 436 | rendered += "-%d-g%s" % (pieces["distance"], pieces["short"]) 437 | else: 438 | # exception #1 439 | rendered = pieces["short"] 440 | if pieces["dirty"]: 441 | rendered += "-dirty" 442 | return rendered 443 | 444 | 445 | def render(pieces, style): 446 | """Render the given version pieces into the requested style.""" 447 | if pieces["error"]: 448 | return {"version": "unknown", 449 | "full-revisionid": pieces.get("long"), 450 | "dirty": None, 451 | "error": pieces["error"], 452 | "date": None} 453 | 454 | if not style or style == "default": 455 | style = "pep440" # the default 456 | 457 | if style == "pep440": 458 | rendered = render_pep440(pieces) 459 | elif style == "pep440-pre": 460 | rendered = render_pep440_pre(pieces) 461 | elif style == "pep440-post": 462 | rendered = render_pep440_post(pieces) 463 | elif style == "pep440-old": 464 | rendered = render_pep440_old(pieces) 465 | elif style == "git-describe": 466 | rendered = render_git_describe(pieces) 467 | elif style == "git-describe-long": 468 | rendered = render_git_describe_long(pieces) 469 | else: 470 | raise ValueError("unknown style '%s'" % style) 471 | 472 | return {"version": rendered, "full-revisionid": pieces["long"], 473 | "dirty": pieces["dirty"], "error": None, 474 | "date": pieces.get("date")} 475 | 476 | 477 | def get_versions(): 478 | """Get version information or return default if unable to do so.""" 479 | # I am in _version.py, which lives at ROOT/VERSIONFILE_SOURCE. If we have 480 | # __file__, we can work backwards from there to the root. Some 481 | # py2exe/bbfreeze/non-CPython implementations don't do __file__, in which 482 | # case we can only use expanded keywords. 483 | 484 | cfg = get_config() 485 | verbose = cfg.verbose 486 | 487 | try: 488 | return git_versions_from_keywords(get_keywords(), cfg.tag_prefix, 489 | verbose) 490 | except NotThisMethod: 491 | pass 492 | 493 | try: 494 | root = os.path.realpath(__file__) 495 | # versionfile_source is the relative path from the top of the source 496 | # tree (where the .git directory might live) to this file. Invert 497 | # this to find the root from __file__. 498 | for i in cfg.versionfile_source.split('/'): 499 | root = os.path.dirname(root) 500 | except NameError: 501 | return {"version": "0+unknown", "full-revisionid": None, 502 | "dirty": None, 503 | "error": "unable to find root of source tree", 504 | "date": None} 505 | 506 | try: 507 | pieces = git_pieces_from_vcs(cfg.tag_prefix, root, verbose) 508 | return render(pieces, cfg.style) 509 | except NotThisMethod: 510 | pass 511 | 512 | try: 513 | if cfg.parentdir_prefix: 514 | return versions_from_parentdir(cfg.parentdir_prefix, root, verbose) 515 | except NotThisMethod: 516 | pass 517 | 518 | return {"version": "0+unknown", "full-revisionid": None, 519 | "dirty": None, 520 | "error": "unable to compute version", "date": None} 521 | -------------------------------------------------------------------------------- /pymannkendall/pymannkendall.py: -------------------------------------------------------------------------------- 1 | """ 2 | Created on 05 March 2018 3 | Update on 28 May 2021 4 | @author: Md. Manjurul Hussain Shourov 5 | version: 1.4.2 6 | Approach: Vectorisation 7 | Citation: Hussain et al., (2019). pyMannKendall: a python package for non parametric Mann Kendall family of trend tests.. Journal of Open Source Software, 4(39), 1556, https://doi.org/10.21105/joss.01556 8 | """ 9 | 10 | from __future__ import division 11 | import numpy as np 12 | from scipy.stats import norm, rankdata 13 | from collections import namedtuple 14 | 15 | 16 | # Supporting Functions 17 | # Data Preprocessing 18 | def __preprocessing(x): 19 | x = np.asarray(x).astype(float) 20 | dim = x.ndim 21 | 22 | if dim == 1: 23 | c = 1 24 | 25 | elif dim == 2: 26 | (n, c) = x.shape 27 | 28 | if c == 1: 29 | dim = 1 30 | x = x.flatten() 31 | 32 | else: 33 | print('Please check your dataset.') 34 | 35 | return x, c 36 | 37 | 38 | # Missing Values Analysis 39 | def __missing_values_analysis(x, method = 'skip'): 40 | if method.lower() == 'skip': 41 | if x.ndim == 1: 42 | x = x[~np.isnan(x)] 43 | 44 | else: 45 | x = x[~np.isnan(x).any(axis=1)] 46 | 47 | n = len(x) 48 | 49 | return x, n 50 | 51 | 52 | # ACF Calculation 53 | def __acf(x, nlags): 54 | y = x - x.mean() 55 | n = len(x) 56 | d = n * np.ones(2 * n - 1) 57 | 58 | acov = (np.correlate(y, y, 'full') / d)[n - 1:] 59 | 60 | return acov[:nlags+1]/acov[0] 61 | 62 | 63 | # vectorization approach to calculate mk score, S 64 | def __mk_score(x, n): 65 | s = 0 66 | 67 | demo = np.ones(n) 68 | for k in range(n-1): 69 | s = s + np.sum(demo[k+1:n][x[k+1:n] > x[k]]) - np.sum(demo[k+1:n][x[k+1:n] < x[k]]) 70 | 71 | return s 72 | 73 | 74 | # original Mann-Kendal's variance S calculation 75 | def __variance_s(x, n): 76 | # calculate the unique data 77 | unique_x = np.unique(x) 78 | g = len(unique_x) 79 | 80 | # calculate the var(s) 81 | if n == g: # there is no tie 82 | var_s = (n*(n-1)*(2*n+5))/18 83 | 84 | else: # there are some ties in data 85 | tp = np.zeros(unique_x.shape) 86 | demo = np.ones(n) 87 | 88 | for i in range(g): 89 | tp[i] = np.sum(demo[x == unique_x[i]]) 90 | 91 | var_s = (n*(n-1)*(2*n+5) - np.sum(tp*(tp-1)*(2*tp+5)))/18 92 | 93 | return var_s 94 | 95 | 96 | # standardized test statistic Z 97 | def __z_score(s, var_s): 98 | if s > 0: 99 | z = (s - 1)/np.sqrt(var_s) 100 | elif s == 0: 101 | z = 0 102 | elif s < 0: 103 | z = (s + 1)/np.sqrt(var_s) 104 | 105 | return z 106 | 107 | 108 | # calculate the p_value 109 | def __p_value(z, alpha): 110 | # two tail test 111 | p = 2*(1-norm.cdf(abs(z))) 112 | h = abs(z) > norm.ppf(1-alpha/2) 113 | 114 | if (z < 0) and h: 115 | trend = 'decreasing' 116 | elif (z > 0) and h: 117 | trend = 'increasing' 118 | else: 119 | trend = 'no trend' 120 | 121 | return p, h, trend 122 | 123 | 124 | def __R(x): 125 | n = len(x) 126 | R = [] 127 | 128 | for j in range(n): 129 | i = np.arange(n) 130 | s = np.sum(np.sign(x[j] - x[i])) 131 | R.extend([(n + 1 + s)/2]) 132 | 133 | return np.asarray(R) 134 | 135 | 136 | def __K(x,z): 137 | n = len(x) 138 | K = 0 139 | 140 | for i in range(n-1): 141 | j = np.arange(i,n) 142 | K = K + np.sum(np.sign((x[j] - x[i]) * (z[j] - z[i]))) 143 | 144 | return K 145 | 146 | 147 | # Original Sens Estimator 148 | def __sens_estimator(x): 149 | idx = 0 150 | n = len(x) 151 | d = np.ones(int(n*(n-1)/2)) 152 | 153 | for i in range(n-1): 154 | j = np.arange(i+1,n) 155 | d[idx : idx + len(j)] = (x[j] - x[i]) / (j - i) 156 | idx = idx + len(j) 157 | 158 | return d 159 | 160 | 161 | def sens_slope(x): 162 | """ 163 | This method proposed by Theil (1950) and Sen (1968) to estimate the magnitude of the monotonic trend. Intercept calculated using Conover, W.J. (1980) method. 164 | Input: 165 | x: a one dimensional vector (list, numpy array or pandas series) data 166 | Output: 167 | slope: Theil-Sen estimator/slope 168 | intercept: intercept of Kendall-Theil Robust Line 169 | Examples 170 | -------- 171 | >>> import numpy as np 172 | >>> import pymannkendall as mk 173 | >>> x = np.random.rand(120) 174 | >>> slope,intercept = mk.sens_slope(x) 175 | """ 176 | res = namedtuple('Sens_Slope_Test', ['slope','intercept']) 177 | x, c = __preprocessing(x) 178 | # x, n = __missing_values_analysis(x, method = 'skip') 179 | n = len(x) 180 | slope = np.nanmedian(__sens_estimator(x)) 181 | intercept = np.nanmedian(x) - np.median(np.arange(n)[~np.isnan(x.flatten())]) * slope # or median(x) - (n-1)/2 *slope 182 | 183 | return res(slope, intercept) 184 | 185 | 186 | def seasonal_sens_slope(x_old, period=12): 187 | """ 188 | This method proposed by Hipel (1994) to estimate the magnitude of the monotonic trend, when data has seasonal effects. Intercept calculated using Conover, W.J. (1980) method. 189 | Input: 190 | x: a vector (list, numpy array or pandas series) data 191 | period: seasonal cycle. For monthly data it is 12, weekly data it is 52 (12 is the default) 192 | Output: 193 | slope: Theil-Sen estimator/slope 194 | intercept: intercept of Kendall-Theil Robust Line, where full period cycle consider as unit time step 195 | Examples 196 | -------- 197 | >>> import numpy as np 198 | >>> import pymannkendall as mk 199 | >>> x = np.random.rand(120) 200 | >>> slope,intercept = mk.seasonal_sens_slope(x, 12) 201 | """ 202 | res = namedtuple('Seasonal_Sens_Slope_Test', ['slope','intercept']) 203 | x, c = __preprocessing(x_old) 204 | n = len(x) 205 | 206 | if x.ndim == 1: 207 | if np.mod(n,period) != 0: 208 | x = np.pad(x,(0,period - np.mod(n,period)), 'constant', constant_values=(np.nan,)) 209 | 210 | x = x.reshape(int(len(x)/period),period) 211 | 212 | # x, n = __missing_values_analysis(x, method = 'skip') 213 | d = [] 214 | 215 | for i in range(period): 216 | d.extend(__sens_estimator(x[:,i])) 217 | 218 | slope = np.nanmedian(np.asarray(d)) 219 | intercept = np.nanmedian(x_old) - np.median(np.arange(x_old.size)[~np.isnan(x_old.flatten())]) / period * slope 220 | 221 | return res(slope, intercept) 222 | 223 | 224 | def original_test(x_old, alpha = 0.05): 225 | """ 226 | This function checks the Mann-Kendall (MK) test (Mann 1945, Kendall 1975, Gilbert 1987). 227 | Input: 228 | x: a vector (list, numpy array or pandas series) data 229 | alpha: significance level (0.05 default) 230 | Output: 231 | trend: tells the trend (increasing, decreasing or no trend) 232 | h: True (if trend is present) or False (if trend is absence) 233 | p: p-value of the significance test 234 | z: normalized test statistics 235 | Tau: Kendall Tau 236 | s: Mann-Kendal's score 237 | var_s: Variance S 238 | slope: Theil-Sen estimator/slope 239 | intercept: intercept of Kendall-Theil Robust Line 240 | Examples 241 | -------- 242 | >>> import numpy as np 243 | >>> import pymannkendall as mk 244 | >>> x = np.random.rand(1000) 245 | >>> trend,h,p,z,tau,s,var_s,slope,intercept = mk.original_test(x,0.05) 246 | """ 247 | res = namedtuple('Mann_Kendall_Test', ['trend', 'h', 'p', 'z', 'Tau', 's', 'var_s', 'slope', 'intercept']) 248 | x, c = __preprocessing(x_old) 249 | x, n = __missing_values_analysis(x, method = 'skip') 250 | 251 | s = __mk_score(x, n) 252 | var_s = __variance_s(x, n) 253 | Tau = s/(.5*n*(n-1)) 254 | 255 | z = __z_score(s, var_s) 256 | p, h, trend = __p_value(z, alpha) 257 | slope, intercept = sens_slope(x_old) 258 | 259 | return res(trend, h, p, z, Tau, s, var_s, slope, intercept) 260 | 261 | def hamed_rao_modification_test(x_old, alpha = 0.05, lag=None): 262 | """ 263 | This function checks the Modified Mann-Kendall (MK) test using Hamed and Rao (1998) method. 264 | Input: 265 | x: a vector (list, numpy array or pandas series) data 266 | alpha: significance level (0.05 default) 267 | lag: No. of First Significant Lags (default None, You can use 3 for considering first 3 lags, which also proposed by Hamed and Rao(1998)) 268 | Output: 269 | trend: tells the trend (increasing, decreasing or no trend) 270 | h: True (if trend is present) or False (if trend is absence) 271 | p: p-value of the significance test 272 | z: normalized test statistics 273 | Tau: Kendall Tau 274 | s: Mann-Kendal's score 275 | var_s: Variance S 276 | slope: Theil-Sen estimator/slope 277 | intercept: intercept of Kendall-Theil Robust Line 278 | Examples 279 | -------- 280 | >>> import numpy as np 281 | >>> import pymannkendall as mk 282 | >>> x = np.random.rand(1000) 283 | >>> trend,h,p,z,tau,s,var_s,slope,intercept = mk.hamed_rao_modification_test(x,0.05) 284 | """ 285 | res = namedtuple('Modified_Mann_Kendall_Test_Hamed_Rao_Approach', ['trend', 'h', 'p', 'z', 'Tau', 's', 'var_s', 'slope', 'intercept']) 286 | x, c = __preprocessing(x_old) 287 | x, n = __missing_values_analysis(x, method = 'skip') 288 | 289 | s = __mk_score(x, n) 290 | var_s = __variance_s(x, n) 291 | Tau = s/(.5*n*(n-1)) 292 | 293 | # Hamed and Rao (1998) variance correction 294 | if lag is None: 295 | lag = n 296 | else: 297 | lag = lag + 1 298 | 299 | # detrending 300 | # x_detrend = x - np.multiply(range(1,n+1), np.median(x)) 301 | slope, intercept = sens_slope(x_old) 302 | x_detrend = x - np.arange(1,n+1) * slope 303 | I = rankdata(x_detrend) 304 | 305 | # account for autocorrelation 306 | acf_1 = __acf(I, nlags=lag-1) 307 | interval = norm.ppf(1 - alpha / 2) / np.sqrt(n) 308 | upper_bound = 0 + interval 309 | lower_bound = 0 - interval 310 | 311 | sni = 0 312 | for i in range(1,lag): 313 | if (acf_1[i] <= upper_bound and acf_1[i] >= lower_bound): 314 | sni = sni 315 | else: 316 | sni += (n-i) * (n-i-1) * (n-i-2) * acf_1[i] 317 | 318 | n_ns = 1 + (2 / (n * (n-1) * (n-2))) * sni 319 | var_s = var_s * n_ns 320 | 321 | z = __z_score(s, var_s) 322 | p, h, trend = __p_value(z, alpha) 323 | 324 | return res(trend, h, p, z, Tau, s, var_s, slope, intercept) 325 | 326 | def yue_wang_modification_test(x_old, alpha = 0.05, lag=None): 327 | """ 328 | Input: This function checks the Modified Mann-Kendall (MK) test using Yue and Wang (2004) method. 329 | x: a vector (list, numpy array or pandas series) data 330 | alpha: significance level (0.05 default) 331 | lag: No. of First Significant Lags (default None, You can use 1 for considering first 1 lags, which also proposed by Yue and Wang (2004)) 332 | Output: 333 | trend: tells the trend (increasing, decreasing or no trend) 334 | h: True (if trend is present) or False (if trend is absence) 335 | p: p-value of the significance test 336 | z: normalized test statistics 337 | Tau: Kendall Tau 338 | s: Mann-Kendal's score 339 | var_s: Variance S 340 | slope: Theil-Sen estimator/slope 341 | intercept: intercept of Kendall-Theil Robust Line 342 | Examples 343 | -------- 344 | >>> import numpy as np 345 | >>> import pymannkendall as mk 346 | >>> x = np.random.rand(1000) 347 | >>> trend,h,p,z,tau,s,var_s,slope,intercept = mk.yue_wang_modification_test(x,0.05) 348 | """ 349 | res = namedtuple('Modified_Mann_Kendall_Test_Yue_Wang_Approach', ['trend', 'h', 'p', 'z', 'Tau', 's', 'var_s', 'slope', 'intercept']) 350 | x, c = __preprocessing(x_old) 351 | x, n = __missing_values_analysis(x, method = 'skip') 352 | 353 | s = __mk_score(x, n) 354 | var_s = __variance_s(x, n) 355 | Tau = s/(.5*n*(n-1)) 356 | 357 | # Yue and Wang (2004) variance correction 358 | if lag is None: 359 | lag = n 360 | else: 361 | lag = lag + 1 362 | 363 | # detrending 364 | slope, intercept = sens_slope(x_old) 365 | x_detrend = x - np.arange(1,n+1) * slope 366 | 367 | # account for autocorrelation 368 | acf_1 = __acf(x_detrend, nlags=lag-1) 369 | idx = np.arange(1,lag) 370 | sni = np.sum((1 - idx/n) * acf_1[idx]) 371 | 372 | n_ns = 1 + 2 * sni 373 | var_s = var_s * n_ns 374 | 375 | z = __z_score(s, var_s) 376 | p, h, trend = __p_value(z, alpha) 377 | 378 | return res(trend, h, p, z, Tau, s, var_s, slope, intercept) 379 | 380 | def pre_whitening_modification_test(x_old, alpha = 0.05): 381 | """ 382 | This function checks the Modified Mann-Kendall (MK) test using Pre-Whitening method proposed by Yue and Wang (2002). 383 | Input: 384 | x: a vector (list, numpy array or pandas series) data 385 | alpha: significance level (0.05 default) 386 | Output: 387 | trend: tells the trend (increasing, decreasing or no trend) 388 | h: True (if trend is present) or False (if trend is absence) 389 | p: p-value of the significance test 390 | z: normalized test statistics 391 | s: Mann-Kendal's score 392 | var_s: Variance S 393 | slope: Theil-Sen estimator/slope 394 | intercept: intercept of Kendall-Theil Robust Line 395 | Examples 396 | -------- 397 | >>> import numpy as np 398 | >>> import pymannkendall as mk 399 | >>> x = np.random.rand(1000) 400 | >>> trend,h,p,z,tau,s,var_s,slope,intercept = mk.pre_whitening_modification_test(x,0.05) 401 | """ 402 | res = namedtuple('Modified_Mann_Kendall_Test_PreWhitening_Approach', ['trend', 'h', 'p', 'z', 'Tau', 's', 'var_s', 'slope', 'intercept']) 403 | 404 | x, c = __preprocessing(x_old) 405 | x, n = __missing_values_analysis(x, method = 'skip') 406 | 407 | # PreWhitening 408 | acf_1 = __acf(x, nlags=1)[1] 409 | a = range(0, n-1) 410 | b = range(1, n) 411 | x = x[b] - x[a]*acf_1 412 | n = len(x) 413 | 414 | s = __mk_score(x, n) 415 | var_s = __variance_s(x, n) 416 | Tau = s/(.5*n*(n-1)) 417 | 418 | z = __z_score(s, var_s) 419 | p, h, trend = __p_value(z, alpha) 420 | slope, intercept = sens_slope(x_old) 421 | 422 | return res(trend, h, p, z, Tau, s, var_s, slope, intercept) 423 | 424 | def trend_free_pre_whitening_modification_test(x_old, alpha = 0.05): 425 | """ 426 | This function checks the Modified Mann-Kendall (MK) test using the trend-free Pre-Whitening method proposed by Yue and Wang (2002). 427 | Input: 428 | x: a vector (list, numpy array or pandas series) data 429 | alpha: significance level (0.05 default) 430 | Output: 431 | trend: tells the trend (increasing, decreasing or no trend) 432 | h: True (if trend is present) or False (if trend is absence) 433 | p: p-value of the significance test 434 | z: normalized test statistics 435 | s: Mann-Kendal's score 436 | var_s: Variance S 437 | slope: Theil-Sen estimator/slope 438 | intercept: intercept of Kendall-Theil Robust Line 439 | Examples 440 | -------- 441 | >>> import numpy as np 442 | >>> import pymannkendall as mk 443 | >>> x = np.random.rand(1000) 444 | >>> trend,h,p,z,tau,s,var_s,slope,intercept = mk.trend_free_pre_whitening_modification_test(x,0.05) 445 | """ 446 | res = namedtuple('Modified_Mann_Kendall_Test_Trend_Free_PreWhitening_Approach', ['trend', 'h', 'p', 'z', 'Tau', 's', 'var_s', 'slope', 'intercept']) 447 | 448 | x, c = __preprocessing(x_old) 449 | x, n = __missing_values_analysis(x, method = 'skip') 450 | 451 | # detrending 452 | slope, intercept = sens_slope(x_old) 453 | x_detrend = x - np.arange(1,n+1) * slope 454 | 455 | # PreWhitening 456 | acf_1 = __acf(x_detrend, nlags=1)[1] 457 | a = range(0, n-1) 458 | b = range(1, n) 459 | x = x_detrend[b] - x_detrend[a]*acf_1 460 | 461 | n = len(x) 462 | x = x + np.arange(1,n+1) * slope 463 | 464 | s = __mk_score(x, n) 465 | var_s = __variance_s(x, n) 466 | Tau = s/(.5*n*(n-1)) 467 | 468 | z = __z_score(s, var_s) 469 | p, h, trend = __p_value(z, alpha) 470 | slope, intercept = sens_slope(x_old) 471 | 472 | return res(trend, h, p, z, Tau, s, var_s, slope, intercept) 473 | 474 | 475 | def multivariate_test(x_old, alpha = 0.05): 476 | """ 477 | This function checks the Multivariate Mann-Kendall (MK) test, which is originally proposed by R. M. Hirsch and J. R. Slack (1984) for the seasonal Mann-Kendall test. Later this method also used Helsel (2006) for Regional Mann-Kendall test. 478 | Input: 479 | x: a matrix of data 480 | alpha: significance level (0.05 default) 481 | Output: 482 | trend: tells the trend (increasing, decreasing or no trend) 483 | h: True (if trend is present) or False (if trend is absence) 484 | p: p-value of the significance test 485 | z: normalized test statistics 486 | Tau: Kendall Tau 487 | s: Mann-Kendal's score 488 | var_s: Variance S 489 | slope: Theil-Sen estimator/slope 490 | intercept: intercept of Kendall-Theil Robust Line 491 | Examples 492 | -------- 493 | >>> import numpy as np 494 | >>> import pymannkendall as mk 495 | >>> x = np.random.rand(1000) 496 | >>> trend,h,p,z,tau,s,var_s,slope,intercept = mk.multivariate_test(x,0.05) 497 | """ 498 | res = namedtuple('Multivariate_Mann_Kendall_Test', ['trend', 'h', 'p', 'z', 'Tau', 's', 'var_s', 'slope', 'intercept']) 499 | s = 0 500 | var_s = 0 501 | denom = 0 502 | 503 | x, c = __preprocessing(x_old) 504 | # x, n = __missing_values_analysis(x, method = 'skip') # It makes all column at the same size 505 | 506 | for i in range(c): 507 | if c == 1: 508 | x_new, n = __missing_values_analysis(x, method = 'skip') # It makes all column at deferent size 509 | else: 510 | x_new, n = __missing_values_analysis(x[:,i], method = 'skip') # It makes all column at deferent size 511 | 512 | s = s + __mk_score(x_new, n) 513 | var_s = var_s + __variance_s(x_new, n) 514 | denom = denom + (.5*n*(n-1)) 515 | 516 | Tau = s/denom 517 | 518 | z = __z_score(s, var_s) 519 | p, h, trend = __p_value(z, alpha) 520 | 521 | slope, intercept = seasonal_sens_slope(x_old, period = c) 522 | 523 | return res(trend, h, p, z, Tau, s, var_s, slope, intercept) 524 | 525 | 526 | def seasonal_test(x_old, period = 12, alpha = 0.05): 527 | """ 528 | This function checks the Seasonal Mann-Kendall (MK) test (Hirsch, R. M., Slack, J. R. 1984). 529 | Input: 530 | x: a vector of data 531 | period: seasonal cycle. For monthly data it is 12, weekly data it is 52 (12 is the default) 532 | alpha: significance level (0.05 is the default) 533 | Output: 534 | trend: tells the trend (increasing, decreasing or no trend) 535 | h: True (if trend is present) or False (if trend is absence) 536 | p: p-value of the significance test 537 | z: normalized test statistics 538 | Tau: Kendall Tau 539 | s: Mann-Kendal's score 540 | var_s: Variance S 541 | slope: Theil-Sen estimator/slope 542 | intercept: intercept of Kendall-Theil Robust Line, where full period cycle consider as unit time step 543 | Examples 544 | -------- 545 | >>> import numpy as np 546 | >>> import pymannkendall as mk 547 | >>> x = np.random.rand(1000) 548 | >>> trend,h,p,z,tau,s,var_s,slope,intercept = mk.seasonal_test(x,0.05) 549 | """ 550 | res = namedtuple('Seasonal_Mann_Kendall_Test', ['trend', 'h', 'p', 'z', 'Tau', 's', 'var_s', 'slope', 'intercept']) 551 | x, c = __preprocessing(x_old) 552 | n = len(x) 553 | 554 | if x.ndim == 1: 555 | if np.mod(n,period) != 0: 556 | x = np.pad(x,(0,period - np.mod(n,period)), 'constant', constant_values=(np.nan,)) 557 | 558 | x = x.reshape(int(len(x)/period),period) 559 | 560 | trend, h, p, z, Tau, s, var_s, slope, intercept = multivariate_test(x, alpha = alpha) 561 | 562 | return res(trend, h, p, z, Tau, s, var_s, slope, intercept) 563 | 564 | 565 | def regional_test(x_old, alpha = 0.05): 566 | """ 567 | This function checks the Regional Mann-Kendall (MK) test (Helsel 2006). 568 | Input: 569 | x: a matrix of data 570 | alpha: significance level (0.05 default) 571 | Output: 572 | trend: tells the trend (increasing, decreasing or no trend) 573 | h: True (if trend is present) or False (if trend is absence) 574 | p: p-value of the significance test 575 | z: normalized test statistics 576 | Tau: Kendall Tau 577 | s: Mann-Kendal's score 578 | var_s: Variance S 579 | slope: Theil-Sen estimator/slope 580 | intercept: intercept of Kendall-Theil Robust Line 581 | Examples 582 | -------- 583 | >>> import numpy as np 584 | >>> import pymannkendall as mk 585 | >>> x = np.random.rand(1000,5) # here consider 5 station/location where every station have 1000 data 586 | >>> trend,h,p,z,tau,s,var_s,slope,intercept = mk.regional_test(x,0.05) 587 | """ 588 | res = namedtuple('Regional_Mann_Kendall_Test', ['trend', 'h', 'p', 'z', 'Tau', 's', 'var_s', 'slope', 'intercept']) 589 | 590 | trend, h, p, z, Tau, s, var_s, slope, intercept = multivariate_test(x_old) 591 | 592 | return res(trend, h, p, z, Tau, s, var_s, slope, intercept) 593 | 594 | 595 | def correlated_multivariate_test(x_old, alpha = 0.05): 596 | """ 597 | This function checks the Correlated Multivariate Mann-Kendall (MK) test (Libiseller and Grimvall (2002)). 598 | Input: 599 | x: a matrix of data 600 | alpha: significance level (0.05 default) 601 | Output: 602 | trend: tells the trend (increasing, decreasing or no trend) 603 | h: True (if trend is present) or False (if trend is absence) 604 | p: p-value of the significance test 605 | z: normalized test statistics 606 | Tau: Kendall Tau 607 | s: Mann-Kendal's score 608 | var_s: Variance S 609 | slope: Theil-Sen estimator/slope 610 | intercept: intercept of Kendall-Theil Robust Line 611 | Examples 612 | -------- 613 | >>> import numpy as np 614 | >>> import pymannkendall as mk 615 | >>> x = np.random.rand(1000, 2) 616 | >>> trend,h,p,z,tau,s,var_s,slope,intercept = mk.correlated_multivariate_test(x,0.05) 617 | """ 618 | res = namedtuple('Correlated_Multivariate_Mann_Kendall_Test', ['trend', 'h', 'p', 'z', 'Tau', 's', 'var_s', 'slope', 'intercept']) 619 | x, c = __preprocessing(x_old) 620 | x, n = __missing_values_analysis(x, method = 'skip') 621 | 622 | s = 0 623 | denom = 0 624 | 625 | for i in range(c): 626 | s = s + __mk_score(x[:,i], n) 627 | denom = denom + (.5*n*(n-1)) 628 | 629 | Tau = s/denom 630 | 631 | Gamma = np.ones([c,c]) 632 | 633 | for i in range(1,c): 634 | for j in range(i): 635 | k = __K(x[:,i], x[:,j]) 636 | ri = __R(x[:,i]) 637 | rj = __R(x[:,j]) 638 | Gamma[i,j] = (k + 4 * np.sum(ri * rj) - n*(n+1)**2)/3 639 | Gamma[j,i] = Gamma[i,j] 640 | 641 | for i in range(c): 642 | k = __K(x[:,i], x[:,i]) 643 | ri = __R(x[:,i]) 644 | rj = __R(x[:,i]) 645 | Gamma[i,i] = (k + 4 * np.sum(ri * rj) - n*(n+1)**2)/3 646 | 647 | 648 | var_s = np.sum(Gamma) 649 | 650 | z = s / np.sqrt(var_s) 651 | 652 | p, h, trend = __p_value(z, alpha) 653 | slope, intercept = seasonal_sens_slope(x_old, period=c) 654 | 655 | return res(trend, h, p, z, Tau, s, var_s, slope, intercept) 656 | 657 | 658 | def correlated_seasonal_test(x_old, period = 12 ,alpha = 0.05): 659 | """ 660 | This function checks the Correlated Seasonal Mann-Kendall (MK) test (Hipel [1994] ). 661 | Input: 662 | x: a matrix of data 663 | period: seasonal cycle. For monthly data it is 12, weekly data it is 52 (12 is default) 664 | alpha: significance level (0.05 default) 665 | Output: 666 | trend: tells the trend (increasing, decreasing or no trend) 667 | h: True (if trend is present) or False (if trend is absence) 668 | p: p-value of the significance test 669 | z: normalized test statistics 670 | Tau: Kendall Tau 671 | s: Mann-Kendal's score 672 | var_s: Variance S 673 | slope: Theil-Sen estimator/slope 674 | intercept: intercept of Kendall-Theil Robust Line, where full period cycle consider as unit time step 675 | Examples 676 | -------- 677 | >>> import numpy as np 678 | >>> import pymannkendall as mk 679 | >>> x = np.random.rand(1000) 680 | >>> trend,h,p,z,tau,s,var_s,slope,intercept = mk.correlated_seasonal_test(x,0.05) 681 | """ 682 | res = namedtuple('Correlated_Seasonal_Mann_Kendall_test', ['trend', 'h', 'p', 'z', 'Tau', 's', 'var_s', 'slope', 'intercept']) 683 | x, c = __preprocessing(x_old) 684 | 685 | n = len(x) 686 | 687 | if x.ndim == 1: 688 | if np.mod(n,period) != 0: 689 | x = np.pad(x,(0,period - np.mod(n,period)), 'constant', constant_values=(np.nan,)) 690 | 691 | x = x.reshape(int(len(x)/period),period) 692 | 693 | trend, h, p, z, Tau, s, var_s, slope, intercept = correlated_multivariate_test(x) 694 | 695 | return res(trend, h, p, z, Tau, s, var_s, slope, intercept) 696 | 697 | 698 | def partial_test(x_old, alpha = 0.05): 699 | """ 700 | This function checks the Partial Mann-Kendall (MK) test (Libiseller and Grimvall (2002)). 701 | Input: 702 | x: a matrix with 2 columns 703 | alpha: significance level (0.05 default) 704 | Output: 705 | trend: tells the trend (increasing, decreasing or no trend) 706 | h: True (if trend is present) or False (if trend is absence) 707 | p: p-value of the significance test 708 | z: normalized test statistics 709 | Tau: Kendall Tau 710 | s: Mann-Kendal's score 711 | var_s: Variance S 712 | slope: Theil-Sen estimator/slope 713 | Examples 714 | -------- 715 | >>> import numpy as np 716 | >>> import pymannkendall as mk 717 | >>> x = np.random.rand(1000, 2) 718 | >>> trend,h,p,z,tau,s,var_s,slope,intercept = mk.partial_test(x,0.05) 719 | """ 720 | res = namedtuple('Partial_Mann_Kendall_Test', ['trend', 'h', 'p', 'z', 'Tau', 's', 'var_s', 'slope', 'intercept']) 721 | 722 | x_proc, c = __preprocessing(x_old) 723 | x_proc, n = __missing_values_analysis(x_proc, method = 'skip') 724 | 725 | if c != 2: 726 | raise ValueError('Partial Mann Kendall test required two parameters/columns. Here column no ' + str(c) + ' is not equal to 2.') 727 | 728 | x = x_proc[:,0] 729 | y = x_proc[:,1] 730 | 731 | x_score = __mk_score(x, n) 732 | y_score = __mk_score(y, n) 733 | 734 | k = __K(x, y) 735 | rx = __R(x) 736 | ry = __R(y) 737 | 738 | sigma = (k + 4 * np.sum(rx * ry) - n*(n+1)**2)/3 739 | rho = sigma / (n*(n-1)*(2*n+5)/18) 740 | 741 | s = x_score - rho * y_score 742 | var_s = (1 - rho**2) * (n*(n-1)*(2*n+5))/18 743 | 744 | Tau = x_score/(.5*n*(n-1)) 745 | 746 | z = s / np.sqrt(var_s) 747 | 748 | p, h, trend = __p_value(z, alpha) 749 | slope, intercept = sens_slope(x_old[:,0]) 750 | 751 | return res(trend, h, p, z, Tau, s, var_s, slope, intercept) 752 | -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | numpy 2 | scipy 3 | pytest -------------------------------------------------------------------------------- /setup.cfg: -------------------------------------------------------------------------------- 1 | [versioneer] 2 | VCS = git 3 | style = pep440 4 | versionfile_source = pymannkendall/_version.py 5 | versionfile_build = pymannkendall/_version.py 6 | tag_prefix = v 7 | parentdir_prefix = pymannkendall- -------------------------------------------------------------------------------- /setup.py: -------------------------------------------------------------------------------- 1 | #! /usr/bin/env python 2 | # -*- coding: utf-8 -*- 3 | 4 | from setuptools import setup 5 | import versioneer 6 | 7 | __author__ = "Md. Manjurul Hussain Shourov" 8 | __version__ = versioneer.get_version() 9 | __email__ = "mmhs013@gmail.com" 10 | __license__ = "MIT" 11 | __copyright__ = "Copyright Md. Manjurul Hussain Shourov (2019)" 12 | 13 | with open("README.md", "r") as fh: 14 | long_description = fh.read() 15 | 16 | setup( 17 | name = "pymannkendall", 18 | version = __version__, 19 | cmdclass=versioneer.get_cmdclass(), 20 | author = __author__, 21 | author_email = __email__, 22 | description = ("A python package for non-parametric Mann-Kendall family of trend tests."), 23 | long_description = long_description, 24 | long_description_content_type = "text/markdown", 25 | url = "https://github.com/mmhs013/pymannkendall", 26 | packages = ["pymannkendall"], 27 | license = __license__, 28 | install_requires = ["numpy", "scipy"], 29 | classifiers = [ 30 | "Programming Language :: Python :: 2.7", 31 | "Programming Language :: Python :: 3.4", 32 | "Programming Language :: Python :: 3.5", 33 | "Programming Language :: Python :: 3.6", 34 | "Programming Language :: Python :: 3.7", 35 | "Programming Language :: Python :: 3.8", 36 | "Programming Language :: Python :: 3.9", 37 | "License :: OSI Approved :: MIT License", 38 | "Intended Audience :: Science/Research", 39 | "Operating System :: OS Independent", 40 | "Topic :: Scientific/Engineering", 41 | "Development Status :: 5 - Production/Stable" 42 | ] 43 | ) -------------------------------------------------------------------------------- /tests/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Coder2cdb/pyMannKendall/c2be737a199a694e481d98677e0e2c2c5d21b89d/tests/__init__.py -------------------------------------------------------------------------------- /tests/test_pymannkendall.py: -------------------------------------------------------------------------------- 1 | # In this unit test file, we check all functions with randomly generated No trendy, trendy, arbitrary data. Those results are compared with R package - modifiedmk, fume, rkt, trend. 2 | 3 | import os 4 | import pytest 5 | import numpy as np 6 | import pymannkendall as mk 7 | 8 | @pytest.fixture 9 | def NoTrendData(): 10 | # Generate 360 random value with the same number 11 | NoTrendData = np.ones(360)*np.random.randint(10) 12 | return NoTrendData 13 | 14 | @pytest.fixture 15 | def NoTrend2dData(): 16 | # Generate 2 dimensional 360 random value with same number 17 | NoTrend2dData = np.ones((360,2))*np.random.randint(10) 18 | return NoTrend2dData 19 | 20 | @pytest.fixture 21 | def TrendData(): 22 | # Generate random 360 trendy data with approx. slope 1 23 | TrendData = np.arange(360).astype(np.float) + np.random.rand(360)/10**13 24 | return TrendData 25 | 26 | @pytest.fixture 27 | def arbitrary_1d_data(): 28 | # Generate arbitrary 360 data 29 | arbitrary_1d_data = np.array([ 32., 20., 25., 189., 240., 193., 379., 278., 301., 0., 0., 30 | 82., 0., 4., np.nan, np.nan, 121., 234., 360., 262., 120., 30., 31 | 11., 1., 7., 3., 31., 31., 355., 102., 248., 274., 308., 32 | np.nan, 5., 26., 11., 16., 6., 48., 388., 539., 431., 272., 33 | 404., 186., 0., 2., 0., 4., 1., 54., 272., 459., 235., 34 | 164., 365., 135., 2., np.nan, np.nan, 4., 0., 128., 210., 163., 35 | 446., 225., 462., 467., 19., 13., 0., 3., 17., 132., 178., 36 | 338., 525., 623., 145., 31., 19., 3., 0., 29., 25., 87., 37 | 259., 756., 486., 180., 292., 43., 92., 1., 0., 16., 2., 38 | 0., 130., 253., 594., 111., 273., 30., 0., 4., 0., 27., 39 | 24., 41., 292., 378., 499., 265., 320., 227., 4., 0., 4., 40 | 14., 8., 48., 416., 240., 404., 207., 733., 105., 0., 112., 41 | 0., 14., 0., 30., 140., 202., 289., 159., 424., 106., 3., 42 | 0., 65., 3., 14., 58., 268., 466., 432., 266., 240., 95., 43 | 1., 0., 10., 26., 4., 114., 94., 289., 173., 208., 263., 44 | 156., 5., 0., 16., 16., 14., 0., 111., 475., 534., 432., 45 | 471., 117., 70., 1., 3., 28., 7., 401., 184., 283., 338., 46 | 171., 335., 176., 0., 0., 10., 11., 9., 140., 102., 208., 47 | 298., 245., 220., 29., 2., 27., 10., 13., 26., 84., 143., 48 | 367., 749., 563., 283., 353., 10., 0., 0., 0., 0., 9., 49 | 246., 265., 343., 429., 168., 133., 17., 0., 18., 35., 76., 50 | 158., 272., 250., 190., 289., 466., 84., 0., 0., 0., 0., 51 | 0., 22., 217., 299., 185., 115., 344., 203., 8., np.nan, np.nan, 52 | 0., 5., 284., 123., 254., 476., 496., 326., 27., 20., 0., 53 | 4., 53., 72., 113., 214., 364., 219., 220., 156., 264., 0., 54 | 13., 0., 0., 45., 90., 137., 638., 529., 261., 206., 251., 55 | 0., 0., 5., 9., 58., 72., 138., 130., 471., 328., 356., 56 | 523., 0., 1., 0., 0., 12., 143., 193., 184., 192., 138., 57 | 174., 69., 1., 0., 0., 18., 25., 28., 92., 732., 320., 58 | 256., 302., 131., 15., 0., 27., 0., 22., 20., 213., 393., 59 | 474., 374., 109., 159., 0., 0., 0., 3., 3., 49., 205., 60 | 128., 194., 570., 169., 89., 0., 0., 0., 0., 0., 26., 61 | 185., 286., 92., 225., 244., 190., 3., 20.]) 62 | return arbitrary_1d_data 63 | 64 | @pytest.fixture 65 | def arbitrary_2d_data(): 66 | # Generate arbitrary 80, 2 dimensional data 67 | arbitrary_2d_data = np.array([[ 490., 458.], [ 540., 469.], [ 220., 4630.], [ 390., 321.], [ 450., 541.], 68 | [ 230., 1640.], [ 360., 1060.], [ 460., 264.], [ 430., 665.], [ 430., 680.], 69 | [ 620., 650.], [ 460., np.nan], [ 450., 380.], [ 580., 325.], [ 350., 1020.], 70 | [ 440., 460.], [ 530., 583.], [ 380., 777.], [ 440., 1230.], [ 430., 565.], 71 | [ 680., 533.], [ 250., 4930.], [np.nan, 3810.], [ 450., 469.], [ 500., 473.], 72 | [ 510., 593.], [ 490., 500.], [ 700., 266.], [ 420., 495.], [ 710., 245.], 73 | [ 430., 736.], [ 410., 508.], [ 700., 578.], [ 260., 4590.], [ 260., 4670.], 74 | [ 500., 503.], [ 450., 469.], [ 500., 314.], [ 620., 432.], [ 670., 279.], 75 | [np.nan, 542.], [ 470., 499.], [ 370., 741.], [ 410., 569.], [ 540., 360.], 76 | [ 550., 513.], [ 220., 3910.], [ 460., 364.], [ 390., 472.], [ 550., 245.], 77 | [ 320., np.nan], [ 570., 224.], [ 480., 342.], [ 520., 732.], [ 620., 240.], 78 | [ 520., 472.], [ 430., 679.], [ 400., 1080.], [ 430., 920.], [ 490., 488.], 79 | [ 560., np.nan], [ 370., 595.], [ 460., 295.], [ 390., 542.], [ 330., 1500.], 80 | [ 350., 1080.], [ 480., 334.], [ 390., 423.], [ 500., 216.], [ 410., 366.], 81 | [ 470., 750.], [ 280., 1260.], [ 510., 223.], [np.nan, 462.], [ 310., 7640.], 82 | [ 230., 2340.], [ 470., 239.], [ 330., 1400.], [ 320., 3070.], [ 500., 244.]]) 83 | return arbitrary_2d_data 84 | 85 | def test_sens_slope(NoTrendData, TrendData, arbitrary_1d_data): 86 | # check with no trend data 87 | NoTrendRes = mk.sens_slope(NoTrendData) 88 | assert NoTrendRes.slope == 0.0 89 | 90 | # check with trendy data 91 | TrendRes = mk.sens_slope(TrendData) 92 | assert TrendRes.slope == 1.0 93 | assert round(TrendRes.intercept) == 0.0 94 | 95 | result = mk.sens_slope(arbitrary_1d_data) 96 | assert result.slope == -0.006369426751592357 97 | assert result.intercept == 96.15286624203821 98 | 99 | def test_seasonal_sens_slope(NoTrendData, TrendData, arbitrary_1d_data): 100 | # check with no trend data 101 | NoTrendRes = mk.seasonal_sens_slope(NoTrendData) 102 | assert NoTrendRes.slope == 0.0 103 | 104 | # check with trendy data 105 | TrendRes = mk.seasonal_sens_slope(TrendData) 106 | assert TrendRes.slope == 12.0 107 | assert round(TrendRes.intercept) == 0.0 108 | 109 | result = mk.seasonal_sens_slope(arbitrary_1d_data) 110 | assert result.slope == -0.08695652173913043 111 | assert result.intercept == 96.31159420289855 112 | 113 | def test_original_test(NoTrendData, TrendData, arbitrary_1d_data): 114 | # check with no trend data 115 | NoTrendRes = mk.original_test(NoTrendData) 116 | assert NoTrendRes.trend == 'no trend' 117 | assert NoTrendRes.h == False 118 | assert NoTrendRes.p == 1.0 119 | assert NoTrendRes.z == 0 120 | assert NoTrendRes.Tau == 0.0 121 | assert NoTrendRes.s == 0.0 122 | assert NoTrendRes.var_s == 0.0 123 | 124 | # check with trendy data 125 | TrendRes = mk.original_test(TrendData) 126 | assert TrendRes.trend == 'increasing' 127 | assert TrendRes.h == True 128 | assert TrendRes.p == 0.0 129 | assert TrendRes.Tau == 1.0 130 | assert TrendRes.s == 64620.0 131 | 132 | # check with arbitrary data 133 | result = mk.original_test(arbitrary_1d_data) 134 | assert result.trend == 'no trend' 135 | assert result.h == False 136 | assert result.p == 0.37591058740506833 137 | assert result.z == -0.8854562842589916 138 | assert result.Tau == -0.03153167653875869 139 | assert result.s == -1959.0 140 | assert result.var_s == 4889800.333333333 141 | 142 | def test_hamed_rao_modification_test(NoTrendData, TrendData, arbitrary_1d_data): 143 | # check with no trend data 144 | NoTrendRes = mk.hamed_rao_modification_test(NoTrendData) 145 | assert NoTrendRes.trend == 'no trend' 146 | assert NoTrendRes.h == False 147 | assert NoTrendRes.p == 1.0 148 | assert NoTrendRes.z == 0 149 | assert NoTrendRes.Tau == 0.0 150 | assert NoTrendRes.s == 0.0 151 | 152 | # check with trendy data 153 | TrendRes = mk.hamed_rao_modification_test(TrendData) 154 | assert TrendRes.trend == 'increasing' 155 | assert TrendRes.h == True 156 | assert TrendRes.p == 0.0 157 | assert TrendRes.Tau == 1.0 158 | assert TrendRes.s == 64620.0 159 | 160 | # check with arbitrary data 161 | result = mk.hamed_rao_modification_test(arbitrary_1d_data) 162 | assert result.trend == 'decreasing' 163 | assert result.h == True 164 | assert result.p == 0.00012203829241275166 165 | assert result.z == -3.8419950613710894 166 | assert result.Tau == -0.03153167653875869 167 | assert result.s == -1959.0 168 | assert result.var_s == 259723.81316716125 169 | 170 | def test_hamed_rao_modification_test_lag3(NoTrendData, TrendData, arbitrary_1d_data): 171 | # check with no trend data 172 | NoTrendRes = mk.hamed_rao_modification_test(NoTrendData, lag=3) 173 | assert NoTrendRes.trend == 'no trend' 174 | assert NoTrendRes.h == False 175 | assert NoTrendRes.p == 1.0 176 | assert NoTrendRes.z == 0 177 | assert NoTrendRes.Tau == 0.0 178 | assert NoTrendRes.s == 0.0 179 | 180 | # check with trendy data 181 | TrendRes = mk.hamed_rao_modification_test(TrendData, lag=3) 182 | assert TrendRes.trend == 'increasing' 183 | assert TrendRes.h == True 184 | assert TrendRes.p == 0.0 185 | assert TrendRes.Tau == 1.0 186 | assert TrendRes.s == 64620.0 187 | 188 | # check with arbitrary data 189 | result = mk.hamed_rao_modification_test(arbitrary_1d_data, lag=3) 190 | assert result.trend == 'no trend' 191 | assert result.h == False 192 | assert result.p == 0.6037112685123898 193 | assert result.z == -0.5190709455046154 194 | assert result.Tau == -0.03153167653875869 195 | assert result.s == -1959.0 196 | assert result.var_s == 14228919.889368296 197 | 198 | def test_yue_wang_modification_test(NoTrendData, TrendData, arbitrary_1d_data): 199 | # check with no trend data 200 | NoTrendRes = mk.yue_wang_modification_test(NoTrendData) 201 | assert NoTrendRes.trend == 'no trend' 202 | assert NoTrendRes.h == False 203 | assert NoTrendRes.p == 1.0 204 | assert NoTrendRes.z == 0 205 | assert NoTrendRes.Tau == 0.0 206 | assert NoTrendRes.s == 0.0 207 | 208 | # check with trendy data 209 | TrendRes = mk.yue_wang_modification_test(TrendData) 210 | assert TrendRes.trend == 'increasing' 211 | assert TrendRes.h == True 212 | assert TrendRes.p == 0.0 213 | assert TrendRes.Tau == 1.0 214 | assert TrendRes.s == 64620.0 215 | 216 | # check with arbitrary data 217 | result = mk.yue_wang_modification_test(arbitrary_1d_data) 218 | assert result.trend == 'decreasing' 219 | assert result.h == True 220 | np.testing.assert_allclose(result.p, 0.008401398144858296) 221 | np.testing.assert_allclose(result.z, -2.6354977553857504) 222 | assert result.Tau == -0.03153167653875869 223 | assert result.s == -1959.0 224 | np.testing.assert_allclose(result.var_s, 551950.4269211816) 225 | 226 | def test_yue_wang_modification_test_lag1(NoTrendData, TrendData, arbitrary_1d_data): 227 | # check with no trend data 228 | NoTrendRes = mk.yue_wang_modification_test(NoTrendData, lag=1) 229 | assert NoTrendRes.trend == 'no trend' 230 | assert NoTrendRes.h == False 231 | assert NoTrendRes.p == 1.0 232 | assert NoTrendRes.z == 0 233 | assert NoTrendRes.Tau == 0.0 234 | assert NoTrendRes.s == 0.0 235 | 236 | # check with trendy data 237 | TrendRes = mk.yue_wang_modification_test(TrendData, lag=1) 238 | assert TrendRes.trend == 'increasing' 239 | assert TrendRes.h == True 240 | assert TrendRes.p == 0.0 241 | assert TrendRes.Tau == 1.0 242 | assert TrendRes.s == 64620.0 243 | 244 | # check with arbitrary data 245 | result = mk.yue_wang_modification_test(arbitrary_1d_data, lag=1) 246 | assert result.trend == 'no trend' 247 | assert result.h == False 248 | np.testing.assert_allclose(result.p, 0.5433112864060043) 249 | np.testing.assert_allclose(result.z, -0.6078133313683783) 250 | assert result.Tau == -0.03153167653875869 251 | assert result.s == -1959.0 252 | np.testing.assert_allclose(result.var_s, 10377313.384506395) 253 | 254 | def test_pre_whitening_modification_test(NoTrendData, TrendData, arbitrary_1d_data): 255 | # check with no trend data 256 | NoTrendRes = mk.pre_whitening_modification_test(NoTrendData) 257 | assert NoTrendRes.trend == 'no trend' 258 | assert NoTrendRes.h == False 259 | assert NoTrendRes.p == 1.0 260 | assert NoTrendRes.z == 0 261 | assert NoTrendRes.Tau == 0.0 262 | 263 | # check with trendy data 264 | TrendRes = mk.pre_whitening_modification_test(TrendData) 265 | assert TrendRes.trend == 'increasing' 266 | assert TrendRes.h == True 267 | assert TrendRes.p == 0.0 268 | 269 | # check with arbitrary data 270 | result = mk.pre_whitening_modification_test(arbitrary_1d_data) 271 | assert result.trend == 'no trend' 272 | assert result.h == False 273 | assert result.p == 0.9212742990272651 274 | assert result.z == -0.09882867695903437 275 | assert result.Tau == -0.003545066045066045 276 | assert result.s == -219.0 277 | assert result.var_s == 4865719.0 278 | 279 | def test_trend_free_pre_whitening_modification_test(NoTrendData, TrendData, arbitrary_1d_data): 280 | # check with no trend data 281 | NoTrendRes = mk.trend_free_pre_whitening_modification_test(NoTrendData) 282 | assert NoTrendRes.trend == 'no trend' 283 | assert NoTrendRes.h == False 284 | assert NoTrendRes.p == 1.0 285 | assert NoTrendRes.z == 0 286 | assert NoTrendRes.Tau == 0.0 287 | 288 | # check with trendy data 289 | TrendRes = mk.trend_free_pre_whitening_modification_test(TrendData) 290 | assert TrendRes.trend == 'increasing' 291 | assert TrendRes.h == True 292 | assert TrendRes.p == 0.0 293 | assert TrendRes.Tau == 1.0 294 | 295 | # check with arbitrary data 296 | result = mk.trend_free_pre_whitening_modification_test(arbitrary_1d_data) 297 | assert result.trend == 'no trend' 298 | assert result.h == False 299 | assert result.p == 0.7755465706913385 300 | assert result.z == -0.28512735834365455 301 | assert result.Tau == -0.010198135198135198 302 | assert result.s == -630.0 303 | assert result.var_s == 4866576.0 304 | 305 | def test_seasonal_test(NoTrendData, TrendData, arbitrary_1d_data): 306 | # check with no trend data 307 | NoTrendRes = mk.seasonal_test(NoTrendData, period=12) 308 | assert NoTrendRes.trend == 'no trend' 309 | assert NoTrendRes.h == False 310 | assert NoTrendRes.p == 1.0 311 | assert NoTrendRes.z == 0 312 | assert NoTrendRes.Tau == 0.0 313 | assert NoTrendRes.s == 0.0 314 | 315 | # check with trendy data 316 | TrendRes = mk.seasonal_test(TrendData, period=12) 317 | assert TrendRes.trend == 'increasing' 318 | assert TrendRes.h == True 319 | assert TrendRes.p == 0.0 320 | assert TrendRes.Tau == 1.0 321 | assert TrendRes.s == 5220.0 322 | 323 | # check with arbitrary data 324 | result = mk.seasonal_test(arbitrary_1d_data, period=12) 325 | assert result.trend == 'decreasing' 326 | assert result.h == True 327 | assert result.p == 0.03263834596177739 328 | assert result.z == -2.136504114534638 329 | assert result.Tau == -0.0794979079497908 330 | assert result.s == -399.0 331 | assert result.var_s == 34702.333333333336 332 | 333 | def test_regional_test(NoTrend2dData,arbitrary_2d_data): 334 | # check with no trend data 335 | NoTrendRes = mk.regional_test(NoTrend2dData) 336 | assert NoTrendRes.trend == 'no trend' 337 | assert NoTrendRes.h == False 338 | assert NoTrendRes.p == 1.0 339 | assert NoTrendRes.z == 0 340 | assert NoTrendRes.Tau == 0.0 341 | assert NoTrendRes.s == 0.0 342 | assert NoTrendRes.var_s == 0.0 343 | assert NoTrendRes.slope == 0.0 344 | 345 | # check with arbitrary data 346 | result = mk.regional_test(arbitrary_2d_data) 347 | assert result.trend == 'no trend' 348 | assert result.h == False 349 | assert result.p == 0.2613018311185482 350 | assert result.z == -1.1233194854000186 351 | assert result.Tau == -0.06185919343814081 352 | assert result.s == -362.0 353 | assert result.var_s == 103278.0 354 | assert result.slope == -0.680446465481604 355 | 356 | def test_correlated_multivariate_test(NoTrend2dData,arbitrary_2d_data): 357 | # check with no trend data 358 | NoTrendRes = mk.correlated_multivariate_test(NoTrend2dData) 359 | assert NoTrendRes.trend == 'no trend' 360 | assert NoTrendRes.h == False 361 | assert NoTrendRes.Tau == 0.0 362 | assert NoTrendRes.s == 0.0 363 | assert NoTrendRes.var_s == 0.0 364 | assert NoTrendRes.slope == 0.0 365 | 366 | # check with arbitrary data 367 | result = mk.correlated_multivariate_test(arbitrary_2d_data) 368 | assert result.trend == 'no trend' 369 | assert result.h == False 370 | assert result.p == 0.05777683185903615 371 | assert result.z == -1.8973873659119118 372 | assert result.Tau == -0.05868196964087375 373 | assert result.s == -317.0 374 | assert result.var_s == 27913.000000000007 375 | assert result.slope == -0.680446465481604 376 | 377 | def test_correlated_seasonal_test(NoTrendData, TrendData, arbitrary_1d_data): 378 | # check with no trend data 379 | NoTrendRes = mk.correlated_seasonal_test(NoTrendData, period=12) 380 | assert NoTrendRes.trend == 'no trend' 381 | assert NoTrendRes.h == False 382 | assert NoTrendRes.Tau == 0.0 383 | assert NoTrendRes.s == 0.0 384 | 385 | # check with trendy data 386 | TrendRes = mk.correlated_seasonal_test(TrendData, period=12) 387 | assert TrendRes.trend == 'increasing' 388 | assert TrendRes.h == True 389 | assert round(TrendRes.p) == 0.0 390 | assert TrendRes.Tau == 1.0 391 | assert TrendRes.s == 5220.0 392 | 393 | # check with arbitrary data 394 | result = mk.correlated_seasonal_test(arbitrary_1d_data, period=12) 395 | assert result.trend == 'no trend' 396 | assert result.h == False 397 | assert result.p == 0.06032641537423844 398 | assert result.z == -1.878400366918792 399 | assert result.Tau == -0.10054347826086957 400 | assert result.s == -333.0 401 | assert result.var_s == 31427.666666666664 402 | 403 | def test_partial_test(NoTrend2dData,arbitrary_2d_data): 404 | # check with no trend data 405 | NoTrendRes = mk.partial_test(NoTrend2dData) 406 | assert NoTrendRes.trend == 'no trend' 407 | assert NoTrendRes.h == False 408 | assert NoTrendRes.p == 1.0 409 | assert NoTrendRes.z == 0 410 | assert NoTrendRes.Tau == 0.0 411 | assert NoTrendRes.s == 0.0 412 | assert NoTrendRes.var_s == 5205500.0 413 | 414 | # check with arbitrary data 415 | result = mk.partial_test(arbitrary_2d_data) 416 | assert result.trend == 'no trend' 417 | assert result.h == False 418 | assert result.p == 0.06670496348739152 419 | assert result.z == -1.8336567432191642 420 | assert result.Tau == -0.07552758237689744 421 | assert result.s == -282.53012319329804 422 | assert result.var_s == 23740.695506142725 423 | assert result.slope == -0.5634920634920635 424 | assert result.intercept == 471.9761904761905 -------------------------------------------------------------------------------- /versioneer.py: -------------------------------------------------------------------------------- 1 | 2 | # Version: 0.18 3 | 4 | """The Versioneer - like a rocketeer, but for versions. 5 | 6 | The Versioneer 7 | ============== 8 | 9 | * like a rocketeer, but for versions! 10 | * https://github.com/warner/python-versioneer 11 | * Brian Warner 12 | * License: Public Domain 13 | * Compatible With: python2.6, 2.7, 3.2, 3.3, 3.4, 3.5, 3.6, and pypy 14 | * [![Latest Version] 15 | (https://pypip.in/version/versioneer/badge.svg?style=flat) 16 | ](https://pypi.python.org/pypi/versioneer/) 17 | * [![Build Status] 18 | (https://travis-ci.org/warner/python-versioneer.png?branch=master) 19 | ](https://travis-ci.org/warner/python-versioneer) 20 | 21 | This is a tool for managing a recorded version number in distutils-based 22 | python projects. The goal is to remove the tedious and error-prone "update 23 | the embedded version string" step from your release process. Making a new 24 | release should be as easy as recording a new tag in your version-control 25 | system, and maybe making new tarballs. 26 | 27 | 28 | ## Quick Install 29 | 30 | * `pip install versioneer` to somewhere to your $PATH 31 | * add a `[versioneer]` section to your setup.cfg (see below) 32 | * run `versioneer install` in your source tree, commit the results 33 | 34 | ## Version Identifiers 35 | 36 | Source trees come from a variety of places: 37 | 38 | * a version-control system checkout (mostly used by developers) 39 | * a nightly tarball, produced by build automation 40 | * a snapshot tarball, produced by a web-based VCS browser, like github's 41 | "tarball from tag" feature 42 | * a release tarball, produced by "setup.py sdist", distributed through PyPI 43 | 44 | Within each source tree, the version identifier (either a string or a number, 45 | this tool is format-agnostic) can come from a variety of places: 46 | 47 | * ask the VCS tool itself, e.g. "git describe" (for checkouts), which knows 48 | about recent "tags" and an absolute revision-id 49 | * the name of the directory into which the tarball was unpacked 50 | * an expanded VCS keyword ($Id$, etc) 51 | * a `_version.py` created by some earlier build step 52 | 53 | For released software, the version identifier is closely related to a VCS 54 | tag. Some projects use tag names that include more than just the version 55 | string (e.g. "myproject-1.2" instead of just "1.2"), in which case the tool 56 | needs to strip the tag prefix to extract the version identifier. For 57 | unreleased software (between tags), the version identifier should provide 58 | enough information to help developers recreate the same tree, while also 59 | giving them an idea of roughly how old the tree is (after version 1.2, before 60 | version 1.3). Many VCS systems can report a description that captures this, 61 | for example `git describe --tags --dirty --always` reports things like 62 | "0.7-1-g574ab98-dirty" to indicate that the checkout is one revision past the 63 | 0.7 tag, has a unique revision id of "574ab98", and is "dirty" (it has 64 | uncommitted changes. 65 | 66 | The version identifier is used for multiple purposes: 67 | 68 | * to allow the module to self-identify its version: `myproject.__version__` 69 | * to choose a name and prefix for a 'setup.py sdist' tarball 70 | 71 | ## Theory of Operation 72 | 73 | Versioneer works by adding a special `_version.py` file into your source 74 | tree, where your `__init__.py` can import it. This `_version.py` knows how to 75 | dynamically ask the VCS tool for version information at import time. 76 | 77 | `_version.py` also contains `$Revision$` markers, and the installation 78 | process marks `_version.py` to have this marker rewritten with a tag name 79 | during the `git archive` command. As a result, generated tarballs will 80 | contain enough information to get the proper version. 81 | 82 | To allow `setup.py` to compute a version too, a `versioneer.py` is added to 83 | the top level of your source tree, next to `setup.py` and the `setup.cfg` 84 | that configures it. This overrides several distutils/setuptools commands to 85 | compute the version when invoked, and changes `setup.py build` and `setup.py 86 | sdist` to replace `_version.py` with a small static file that contains just 87 | the generated version data. 88 | 89 | ## Installation 90 | 91 | See [INSTALL.md](./INSTALL.md) for detailed installation instructions. 92 | 93 | ## Version-String Flavors 94 | 95 | Code which uses Versioneer can learn about its version string at runtime by 96 | importing `_version` from your main `__init__.py` file and running the 97 | `get_versions()` function. From the "outside" (e.g. in `setup.py`), you can 98 | import the top-level `versioneer.py` and run `get_versions()`. 99 | 100 | Both functions return a dictionary with different flavors of version 101 | information: 102 | 103 | * `['version']`: A condensed version string, rendered using the selected 104 | style. This is the most commonly used value for the project's version 105 | string. The default "pep440" style yields strings like `0.11`, 106 | `0.11+2.g1076c97`, or `0.11+2.g1076c97.dirty`. See the "Styles" section 107 | below for alternative styles. 108 | 109 | * `['full-revisionid']`: detailed revision identifier. For Git, this is the 110 | full SHA1 commit id, e.g. "1076c978a8d3cfc70f408fe5974aa6c092c949ac". 111 | 112 | * `['date']`: Date and time of the latest `HEAD` commit. For Git, it is the 113 | commit date in ISO 8601 format. This will be None if the date is not 114 | available. 115 | 116 | * `['dirty']`: a boolean, True if the tree has uncommitted changes. Note that 117 | this is only accurate if run in a VCS checkout, otherwise it is likely to 118 | be False or None 119 | 120 | * `['error']`: if the version string could not be computed, this will be set 121 | to a string describing the problem, otherwise it will be None. It may be 122 | useful to throw an exception in setup.py if this is set, to avoid e.g. 123 | creating tarballs with a version string of "unknown". 124 | 125 | Some variants are more useful than others. Including `full-revisionid` in a 126 | bug report should allow developers to reconstruct the exact code being tested 127 | (or indicate the presence of local changes that should be shared with the 128 | developers). `version` is suitable for display in an "about" box or a CLI 129 | `--version` output: it can be easily compared against release notes and lists 130 | of bugs fixed in various releases. 131 | 132 | The installer adds the following text to your `__init__.py` to place a basic 133 | version in `YOURPROJECT.__version__`: 134 | 135 | from ._version import get_versions 136 | __version__ = get_versions()['version'] 137 | del get_versions 138 | 139 | ## Styles 140 | 141 | The setup.cfg `style=` configuration controls how the VCS information is 142 | rendered into a version string. 143 | 144 | The default style, "pep440", produces a PEP440-compliant string, equal to the 145 | un-prefixed tag name for actual releases, and containing an additional "local 146 | version" section with more detail for in-between builds. For Git, this is 147 | TAG[+DISTANCE.gHEX[.dirty]] , using information from `git describe --tags 148 | --dirty --always`. For example "0.11+2.g1076c97.dirty" indicates that the 149 | tree is like the "1076c97" commit but has uncommitted changes (".dirty"), and 150 | that this commit is two revisions ("+2") beyond the "0.11" tag. For released 151 | software (exactly equal to a known tag), the identifier will only contain the 152 | stripped tag, e.g. "0.11". 153 | 154 | Other styles are available. See [details.md](details.md) in the Versioneer 155 | source tree for descriptions. 156 | 157 | ## Debugging 158 | 159 | Versioneer tries to avoid fatal errors: if something goes wrong, it will tend 160 | to return a version of "0+unknown". To investigate the problem, run `setup.py 161 | version`, which will run the version-lookup code in a verbose mode, and will 162 | display the full contents of `get_versions()` (including the `error` string, 163 | which may help identify what went wrong). 164 | 165 | ## Known Limitations 166 | 167 | Some situations are known to cause problems for Versioneer. This details the 168 | most significant ones. More can be found on Github 169 | [issues page](https://github.com/warner/python-versioneer/issues). 170 | 171 | ### Subprojects 172 | 173 | Versioneer has limited support for source trees in which `setup.py` is not in 174 | the root directory (e.g. `setup.py` and `.git/` are *not* siblings). The are 175 | two common reasons why `setup.py` might not be in the root: 176 | 177 | * Source trees which contain multiple subprojects, such as 178 | [Buildbot](https://github.com/buildbot/buildbot), which contains both 179 | "master" and "slave" subprojects, each with their own `setup.py`, 180 | `setup.cfg`, and `tox.ini`. Projects like these produce multiple PyPI 181 | distributions (and upload multiple independently-installable tarballs). 182 | * Source trees whose main purpose is to contain a C library, but which also 183 | provide bindings to Python (and perhaps other langauges) in subdirectories. 184 | 185 | Versioneer will look for `.git` in parent directories, and most operations 186 | should get the right version string. However `pip` and `setuptools` have bugs 187 | and implementation details which frequently cause `pip install .` from a 188 | subproject directory to fail to find a correct version string (so it usually 189 | defaults to `0+unknown`). 190 | 191 | `pip install --editable .` should work correctly. `setup.py install` might 192 | work too. 193 | 194 | Pip-8.1.1 is known to have this problem, but hopefully it will get fixed in 195 | some later version. 196 | 197 | [Bug #38](https://github.com/warner/python-versioneer/issues/38) is tracking 198 | this issue. The discussion in 199 | [PR #61](https://github.com/warner/python-versioneer/pull/61) describes the 200 | issue from the Versioneer side in more detail. 201 | [pip PR#3176](https://github.com/pypa/pip/pull/3176) and 202 | [pip PR#3615](https://github.com/pypa/pip/pull/3615) contain work to improve 203 | pip to let Versioneer work correctly. 204 | 205 | Versioneer-0.16 and earlier only looked for a `.git` directory next to the 206 | `setup.cfg`, so subprojects were completely unsupported with those releases. 207 | 208 | ### Editable installs with setuptools <= 18.5 209 | 210 | `setup.py develop` and `pip install --editable .` allow you to install a 211 | project into a virtualenv once, then continue editing the source code (and 212 | test) without re-installing after every change. 213 | 214 | "Entry-point scripts" (`setup(entry_points={"console_scripts": ..})`) are a 215 | convenient way to specify executable scripts that should be installed along 216 | with the python package. 217 | 218 | These both work as expected when using modern setuptools. When using 219 | setuptools-18.5 or earlier, however, certain operations will cause 220 | `pkg_resources.DistributionNotFound` errors when running the entrypoint 221 | script, which must be resolved by re-installing the package. This happens 222 | when the install happens with one version, then the egg_info data is 223 | regenerated while a different version is checked out. Many setup.py commands 224 | cause egg_info to be rebuilt (including `sdist`, `wheel`, and installing into 225 | a different virtualenv), so this can be surprising. 226 | 227 | [Bug #83](https://github.com/warner/python-versioneer/issues/83) describes 228 | this one, but upgrading to a newer version of setuptools should probably 229 | resolve it. 230 | 231 | ### Unicode version strings 232 | 233 | While Versioneer works (and is continually tested) with both Python 2 and 234 | Python 3, it is not entirely consistent with bytes-vs-unicode distinctions. 235 | Newer releases probably generate unicode version strings on py2. It's not 236 | clear that this is wrong, but it may be surprising for applications when then 237 | write these strings to a network connection or include them in bytes-oriented 238 | APIs like cryptographic checksums. 239 | 240 | [Bug #71](https://github.com/warner/python-versioneer/issues/71) investigates 241 | this question. 242 | 243 | 244 | ## Updating Versioneer 245 | 246 | To upgrade your project to a new release of Versioneer, do the following: 247 | 248 | * install the new Versioneer (`pip install -U versioneer` or equivalent) 249 | * edit `setup.cfg`, if necessary, to include any new configuration settings 250 | indicated by the release notes. See [UPGRADING](./UPGRADING.md) for details. 251 | * re-run `versioneer install` in your source tree, to replace 252 | `SRC/_version.py` 253 | * commit any changed files 254 | 255 | ## Future Directions 256 | 257 | This tool is designed to make it easily extended to other version-control 258 | systems: all VCS-specific components are in separate directories like 259 | src/git/ . The top-level `versioneer.py` script is assembled from these 260 | components by running make-versioneer.py . In the future, make-versioneer.py 261 | will take a VCS name as an argument, and will construct a version of 262 | `versioneer.py` that is specific to the given VCS. It might also take the 263 | configuration arguments that are currently provided manually during 264 | installation by editing setup.py . Alternatively, it might go the other 265 | direction and include code from all supported VCS systems, reducing the 266 | number of intermediate scripts. 267 | 268 | 269 | ## License 270 | 271 | To make Versioneer easier to embed, all its code is dedicated to the public 272 | domain. The `_version.py` that it creates is also in the public domain. 273 | Specifically, both are released under the Creative Commons "Public Domain 274 | Dedication" license (CC0-1.0), as described in 275 | https://creativecommons.org/publicdomain/zero/1.0/ . 276 | 277 | """ 278 | 279 | from __future__ import print_function 280 | try: 281 | import configparser 282 | except ImportError: 283 | import ConfigParser as configparser 284 | import errno 285 | import json 286 | import os 287 | import re 288 | import subprocess 289 | import sys 290 | 291 | 292 | class VersioneerConfig: 293 | """Container for Versioneer configuration parameters.""" 294 | 295 | 296 | def get_root(): 297 | """Get the project root directory. 298 | 299 | We require that all commands are run from the project root, i.e. the 300 | directory that contains setup.py, setup.cfg, and versioneer.py . 301 | """ 302 | root = os.path.realpath(os.path.abspath(os.getcwd())) 303 | setup_py = os.path.join(root, "setup.py") 304 | versioneer_py = os.path.join(root, "versioneer.py") 305 | if not (os.path.exists(setup_py) or os.path.exists(versioneer_py)): 306 | # allow 'python path/to/setup.py COMMAND' 307 | root = os.path.dirname(os.path.realpath(os.path.abspath(sys.argv[0]))) 308 | setup_py = os.path.join(root, "setup.py") 309 | versioneer_py = os.path.join(root, "versioneer.py") 310 | if not (os.path.exists(setup_py) or os.path.exists(versioneer_py)): 311 | err = ("Versioneer was unable to run the project root directory. " 312 | "Versioneer requires setup.py to be executed from " 313 | "its immediate directory (like 'python setup.py COMMAND'), " 314 | "or in a way that lets it use sys.argv[0] to find the root " 315 | "(like 'python path/to/setup.py COMMAND').") 316 | raise VersioneerBadRootError(err) 317 | try: 318 | # Certain runtime workflows (setup.py install/develop in a setuptools 319 | # tree) execute all dependencies in a single python process, so 320 | # "versioneer" may be imported multiple times, and python's shared 321 | # module-import table will cache the first one. So we can't use 322 | # os.path.dirname(__file__), as that will find whichever 323 | # versioneer.py was first imported, even in later projects. 324 | me = os.path.realpath(os.path.abspath(__file__)) 325 | me_dir = os.path.normcase(os.path.splitext(me)[0]) 326 | vsr_dir = os.path.normcase(os.path.splitext(versioneer_py)[0]) 327 | if me_dir != vsr_dir: 328 | print("Warning: build in %s is using versioneer.py from %s" 329 | % (os.path.dirname(me), versioneer_py)) 330 | except NameError: 331 | pass 332 | return root 333 | 334 | 335 | def get_config_from_root(root): 336 | """Read the project setup.cfg file to determine Versioneer config.""" 337 | # This might raise EnvironmentError (if setup.cfg is missing), or 338 | # configparser.NoSectionError (if it lacks a [versioneer] section), or 339 | # configparser.NoOptionError (if it lacks "VCS="). See the docstring at 340 | # the top of versioneer.py for instructions on writing your setup.cfg . 341 | setup_cfg = os.path.join(root, "setup.cfg") 342 | parser = configparser.SafeConfigParser() 343 | with open(setup_cfg, "r") as f: 344 | parser.readfp(f) 345 | VCS = parser.get("versioneer", "VCS") # mandatory 346 | 347 | def get(parser, name): 348 | if parser.has_option("versioneer", name): 349 | return parser.get("versioneer", name) 350 | return None 351 | cfg = VersioneerConfig() 352 | cfg.VCS = VCS 353 | cfg.style = get(parser, "style") or "" 354 | cfg.versionfile_source = get(parser, "versionfile_source") 355 | cfg.versionfile_build = get(parser, "versionfile_build") 356 | cfg.tag_prefix = get(parser, "tag_prefix") 357 | if cfg.tag_prefix in ("''", '""'): 358 | cfg.tag_prefix = "" 359 | cfg.parentdir_prefix = get(parser, "parentdir_prefix") 360 | cfg.verbose = get(parser, "verbose") 361 | return cfg 362 | 363 | 364 | class NotThisMethod(Exception): 365 | """Exception raised if a method is not valid for the current scenario.""" 366 | 367 | 368 | # these dictionaries contain VCS-specific tools 369 | LONG_VERSION_PY = {} 370 | HANDLERS = {} 371 | 372 | 373 | def register_vcs_handler(vcs, method): # decorator 374 | """Decorator to mark a method as the handler for a particular VCS.""" 375 | def decorate(f): 376 | """Store f in HANDLERS[vcs][method].""" 377 | if vcs not in HANDLERS: 378 | HANDLERS[vcs] = {} 379 | HANDLERS[vcs][method] = f 380 | return f 381 | return decorate 382 | 383 | 384 | def run_command(commands, args, cwd=None, verbose=False, hide_stderr=False, 385 | env=None): 386 | """Call the given command(s).""" 387 | assert isinstance(commands, list) 388 | p = None 389 | for c in commands: 390 | try: 391 | dispcmd = str([c] + args) 392 | # remember shell=False, so use git.cmd on windows, not just git 393 | p = subprocess.Popen([c] + args, cwd=cwd, env=env, 394 | stdout=subprocess.PIPE, 395 | stderr=(subprocess.PIPE if hide_stderr 396 | else None)) 397 | break 398 | except EnvironmentError: 399 | e = sys.exc_info()[1] 400 | if e.errno == errno.ENOENT: 401 | continue 402 | if verbose: 403 | print("unable to run %s" % dispcmd) 404 | print(e) 405 | return None, None 406 | else: 407 | if verbose: 408 | print("unable to find command, tried %s" % (commands,)) 409 | return None, None 410 | stdout = p.communicate()[0].strip() 411 | if sys.version_info[0] >= 3: 412 | stdout = stdout.decode() 413 | if p.returncode != 0: 414 | if verbose: 415 | print("unable to run %s (error)" % dispcmd) 416 | print("stdout was %s" % stdout) 417 | return None, p.returncode 418 | return stdout, p.returncode 419 | 420 | 421 | LONG_VERSION_PY['git'] = ''' 422 | # This file helps to compute a version number in source trees obtained from 423 | # git-archive tarball (such as those provided by githubs download-from-tag 424 | # feature). Distribution tarballs (built by setup.py sdist) and build 425 | # directories (produced by setup.py build) will contain a much shorter file 426 | # that just contains the computed version number. 427 | 428 | # This file is released into the public domain. Generated by 429 | # versioneer-0.18 (https://github.com/warner/python-versioneer) 430 | 431 | """Git implementation of _version.py.""" 432 | 433 | import errno 434 | import os 435 | import re 436 | import subprocess 437 | import sys 438 | 439 | 440 | def get_keywords(): 441 | """Get the keywords needed to look up the version information.""" 442 | # these strings will be replaced by git during git-archive. 443 | # setup.py/versioneer.py will grep for the variable names, so they must 444 | # each be defined on a line of their own. _version.py will just call 445 | # get_keywords(). 446 | git_refnames = "%(DOLLAR)sFormat:%%d%(DOLLAR)s" 447 | git_full = "%(DOLLAR)sFormat:%%H%(DOLLAR)s" 448 | git_date = "%(DOLLAR)sFormat:%%ci%(DOLLAR)s" 449 | keywords = {"refnames": git_refnames, "full": git_full, "date": git_date} 450 | return keywords 451 | 452 | 453 | class VersioneerConfig: 454 | """Container for Versioneer configuration parameters.""" 455 | 456 | 457 | def get_config(): 458 | """Create, populate and return the VersioneerConfig() object.""" 459 | # these strings are filled in when 'setup.py versioneer' creates 460 | # _version.py 461 | cfg = VersioneerConfig() 462 | cfg.VCS = "git" 463 | cfg.style = "%(STYLE)s" 464 | cfg.tag_prefix = "%(TAG_PREFIX)s" 465 | cfg.parentdir_prefix = "%(PARENTDIR_PREFIX)s" 466 | cfg.versionfile_source = "%(VERSIONFILE_SOURCE)s" 467 | cfg.verbose = False 468 | return cfg 469 | 470 | 471 | class NotThisMethod(Exception): 472 | """Exception raised if a method is not valid for the current scenario.""" 473 | 474 | 475 | LONG_VERSION_PY = {} 476 | HANDLERS = {} 477 | 478 | 479 | def register_vcs_handler(vcs, method): # decorator 480 | """Decorator to mark a method as the handler for a particular VCS.""" 481 | def decorate(f): 482 | """Store f in HANDLERS[vcs][method].""" 483 | if vcs not in HANDLERS: 484 | HANDLERS[vcs] = {} 485 | HANDLERS[vcs][method] = f 486 | return f 487 | return decorate 488 | 489 | 490 | def run_command(commands, args, cwd=None, verbose=False, hide_stderr=False, 491 | env=None): 492 | """Call the given command(s).""" 493 | assert isinstance(commands, list) 494 | p = None 495 | for c in commands: 496 | try: 497 | dispcmd = str([c] + args) 498 | # remember shell=False, so use git.cmd on windows, not just git 499 | p = subprocess.Popen([c] + args, cwd=cwd, env=env, 500 | stdout=subprocess.PIPE, 501 | stderr=(subprocess.PIPE if hide_stderr 502 | else None)) 503 | break 504 | except EnvironmentError: 505 | e = sys.exc_info()[1] 506 | if e.errno == errno.ENOENT: 507 | continue 508 | if verbose: 509 | print("unable to run %%s" %% dispcmd) 510 | print(e) 511 | return None, None 512 | else: 513 | if verbose: 514 | print("unable to find command, tried %%s" %% (commands,)) 515 | return None, None 516 | stdout = p.communicate()[0].strip() 517 | if sys.version_info[0] >= 3: 518 | stdout = stdout.decode() 519 | if p.returncode != 0: 520 | if verbose: 521 | print("unable to run %%s (error)" %% dispcmd) 522 | print("stdout was %%s" %% stdout) 523 | return None, p.returncode 524 | return stdout, p.returncode 525 | 526 | 527 | def versions_from_parentdir(parentdir_prefix, root, verbose): 528 | """Try to determine the version from the parent directory name. 529 | 530 | Source tarballs conventionally unpack into a directory that includes both 531 | the project name and a version string. We will also support searching up 532 | two directory levels for an appropriately named parent directory 533 | """ 534 | rootdirs = [] 535 | 536 | for i in range(3): 537 | dirname = os.path.basename(root) 538 | if dirname.startswith(parentdir_prefix): 539 | return {"version": dirname[len(parentdir_prefix):], 540 | "full-revisionid": None, 541 | "dirty": False, "error": None, "date": None} 542 | else: 543 | rootdirs.append(root) 544 | root = os.path.dirname(root) # up a level 545 | 546 | if verbose: 547 | print("Tried directories %%s but none started with prefix %%s" %% 548 | (str(rootdirs), parentdir_prefix)) 549 | raise NotThisMethod("rootdir doesn't start with parentdir_prefix") 550 | 551 | 552 | @register_vcs_handler("git", "get_keywords") 553 | def git_get_keywords(versionfile_abs): 554 | """Extract version information from the given file.""" 555 | # the code embedded in _version.py can just fetch the value of these 556 | # keywords. When used from setup.py, we don't want to import _version.py, 557 | # so we do it with a regexp instead. This function is not used from 558 | # _version.py. 559 | keywords = {} 560 | try: 561 | f = open(versionfile_abs, "r") 562 | for line in f.readlines(): 563 | if line.strip().startswith("git_refnames ="): 564 | mo = re.search(r'=\s*"(.*)"', line) 565 | if mo: 566 | keywords["refnames"] = mo.group(1) 567 | if line.strip().startswith("git_full ="): 568 | mo = re.search(r'=\s*"(.*)"', line) 569 | if mo: 570 | keywords["full"] = mo.group(1) 571 | if line.strip().startswith("git_date ="): 572 | mo = re.search(r'=\s*"(.*)"', line) 573 | if mo: 574 | keywords["date"] = mo.group(1) 575 | f.close() 576 | except EnvironmentError: 577 | pass 578 | return keywords 579 | 580 | 581 | @register_vcs_handler("git", "keywords") 582 | def git_versions_from_keywords(keywords, tag_prefix, verbose): 583 | """Get version information from git keywords.""" 584 | if not keywords: 585 | raise NotThisMethod("no keywords at all, weird") 586 | date = keywords.get("date") 587 | if date is not None: 588 | # git-2.2.0 added "%%cI", which expands to an ISO-8601 -compliant 589 | # datestamp. However we prefer "%%ci" (which expands to an "ISO-8601 590 | # -like" string, which we must then edit to make compliant), because 591 | # it's been around since git-1.5.3, and it's too difficult to 592 | # discover which version we're using, or to work around using an 593 | # older one. 594 | date = date.strip().replace(" ", "T", 1).replace(" ", "", 1) 595 | refnames = keywords["refnames"].strip() 596 | if refnames.startswith("$Format"): 597 | if verbose: 598 | print("keywords are unexpanded, not using") 599 | raise NotThisMethod("unexpanded keywords, not a git-archive tarball") 600 | refs = set([r.strip() for r in refnames.strip("()").split(",")]) 601 | # starting in git-1.8.3, tags are listed as "tag: foo-1.0" instead of 602 | # just "foo-1.0". If we see a "tag: " prefix, prefer those. 603 | TAG = "tag: " 604 | tags = set([r[len(TAG):] for r in refs if r.startswith(TAG)]) 605 | if not tags: 606 | # Either we're using git < 1.8.3, or there really are no tags. We use 607 | # a heuristic: assume all version tags have a digit. The old git %%d 608 | # expansion behaves like git log --decorate=short and strips out the 609 | # refs/heads/ and refs/tags/ prefixes that would let us distinguish 610 | # between branches and tags. By ignoring refnames without digits, we 611 | # filter out many common branch names like "release" and 612 | # "stabilization", as well as "HEAD" and "master". 613 | tags = set([r for r in refs if re.search(r'\d', r)]) 614 | if verbose: 615 | print("discarding '%%s', no digits" %% ",".join(refs - tags)) 616 | if verbose: 617 | print("likely tags: %%s" %% ",".join(sorted(tags))) 618 | for ref in sorted(tags): 619 | # sorting will prefer e.g. "2.0" over "2.0rc1" 620 | if ref.startswith(tag_prefix): 621 | r = ref[len(tag_prefix):] 622 | if verbose: 623 | print("picking %%s" %% r) 624 | return {"version": r, 625 | "full-revisionid": keywords["full"].strip(), 626 | "dirty": False, "error": None, 627 | "date": date} 628 | # no suitable tags, so version is "0+unknown", but full hex is still there 629 | if verbose: 630 | print("no suitable tags, using unknown + full revision id") 631 | return {"version": "0+unknown", 632 | "full-revisionid": keywords["full"].strip(), 633 | "dirty": False, "error": "no suitable tags", "date": None} 634 | 635 | 636 | @register_vcs_handler("git", "pieces_from_vcs") 637 | def git_pieces_from_vcs(tag_prefix, root, verbose, run_command=run_command): 638 | """Get version from 'git describe' in the root of the source tree. 639 | 640 | This only gets called if the git-archive 'subst' keywords were *not* 641 | expanded, and _version.py hasn't already been rewritten with a short 642 | version string, meaning we're inside a checked out source tree. 643 | """ 644 | GITS = ["git"] 645 | if sys.platform == "win32": 646 | GITS = ["git.cmd", "git.exe"] 647 | 648 | out, rc = run_command(GITS, ["rev-parse", "--git-dir"], cwd=root, 649 | hide_stderr=True) 650 | if rc != 0: 651 | if verbose: 652 | print("Directory %%s not under git control" %% root) 653 | raise NotThisMethod("'git rev-parse --git-dir' returned error") 654 | 655 | # if there is a tag matching tag_prefix, this yields TAG-NUM-gHEX[-dirty] 656 | # if there isn't one, this yields HEX[-dirty] (no NUM) 657 | describe_out, rc = run_command(GITS, ["describe", "--tags", "--dirty", 658 | "--always", "--long", 659 | "--match", "%%s*" %% tag_prefix], 660 | cwd=root) 661 | # --long was added in git-1.5.5 662 | if describe_out is None: 663 | raise NotThisMethod("'git describe' failed") 664 | describe_out = describe_out.strip() 665 | full_out, rc = run_command(GITS, ["rev-parse", "HEAD"], cwd=root) 666 | if full_out is None: 667 | raise NotThisMethod("'git rev-parse' failed") 668 | full_out = full_out.strip() 669 | 670 | pieces = {} 671 | pieces["long"] = full_out 672 | pieces["short"] = full_out[:7] # maybe improved later 673 | pieces["error"] = None 674 | 675 | # parse describe_out. It will be like TAG-NUM-gHEX[-dirty] or HEX[-dirty] 676 | # TAG might have hyphens. 677 | git_describe = describe_out 678 | 679 | # look for -dirty suffix 680 | dirty = git_describe.endswith("-dirty") 681 | pieces["dirty"] = dirty 682 | if dirty: 683 | git_describe = git_describe[:git_describe.rindex("-dirty")] 684 | 685 | # now we have TAG-NUM-gHEX or HEX 686 | 687 | if "-" in git_describe: 688 | # TAG-NUM-gHEX 689 | mo = re.search(r'^(.+)-(\d+)-g([0-9a-f]+)$', git_describe) 690 | if not mo: 691 | # unparseable. Maybe git-describe is misbehaving? 692 | pieces["error"] = ("unable to parse git-describe output: '%%s'" 693 | %% describe_out) 694 | return pieces 695 | 696 | # tag 697 | full_tag = mo.group(1) 698 | if not full_tag.startswith(tag_prefix): 699 | if verbose: 700 | fmt = "tag '%%s' doesn't start with prefix '%%s'" 701 | print(fmt %% (full_tag, tag_prefix)) 702 | pieces["error"] = ("tag '%%s' doesn't start with prefix '%%s'" 703 | %% (full_tag, tag_prefix)) 704 | return pieces 705 | pieces["closest-tag"] = full_tag[len(tag_prefix):] 706 | 707 | # distance: number of commits since tag 708 | pieces["distance"] = int(mo.group(2)) 709 | 710 | # commit: short hex revision ID 711 | pieces["short"] = mo.group(3) 712 | 713 | else: 714 | # HEX: no tags 715 | pieces["closest-tag"] = None 716 | count_out, rc = run_command(GITS, ["rev-list", "HEAD", "--count"], 717 | cwd=root) 718 | pieces["distance"] = int(count_out) # total number of commits 719 | 720 | # commit date: see ISO-8601 comment in git_versions_from_keywords() 721 | date = run_command(GITS, ["show", "-s", "--format=%%ci", "HEAD"], 722 | cwd=root)[0].strip() 723 | pieces["date"] = date.strip().replace(" ", "T", 1).replace(" ", "", 1) 724 | 725 | return pieces 726 | 727 | 728 | def plus_or_dot(pieces): 729 | """Return a + if we don't already have one, else return a .""" 730 | if "+" in pieces.get("closest-tag", ""): 731 | return "." 732 | return "+" 733 | 734 | 735 | def render_pep440(pieces): 736 | """Build up version string, with post-release "local version identifier". 737 | 738 | Our goal: TAG[+DISTANCE.gHEX[.dirty]] . Note that if you 739 | get a tagged build and then dirty it, you'll get TAG+0.gHEX.dirty 740 | 741 | Exceptions: 742 | 1: no tags. git_describe was just HEX. 0+untagged.DISTANCE.gHEX[.dirty] 743 | """ 744 | if pieces["closest-tag"]: 745 | rendered = pieces["closest-tag"] 746 | if pieces["distance"] or pieces["dirty"]: 747 | rendered += plus_or_dot(pieces) 748 | rendered += "%%d.g%%s" %% (pieces["distance"], pieces["short"]) 749 | if pieces["dirty"]: 750 | rendered += ".dirty" 751 | else: 752 | # exception #1 753 | rendered = "0+untagged.%%d.g%%s" %% (pieces["distance"], 754 | pieces["short"]) 755 | if pieces["dirty"]: 756 | rendered += ".dirty" 757 | return rendered 758 | 759 | 760 | def render_pep440_pre(pieces): 761 | """TAG[.post.devDISTANCE] -- No -dirty. 762 | 763 | Exceptions: 764 | 1: no tags. 0.post.devDISTANCE 765 | """ 766 | if pieces["closest-tag"]: 767 | rendered = pieces["closest-tag"] 768 | if pieces["distance"]: 769 | rendered += ".post.dev%%d" %% pieces["distance"] 770 | else: 771 | # exception #1 772 | rendered = "0.post.dev%%d" %% pieces["distance"] 773 | return rendered 774 | 775 | 776 | def render_pep440_post(pieces): 777 | """TAG[.postDISTANCE[.dev0]+gHEX] . 778 | 779 | The ".dev0" means dirty. Note that .dev0 sorts backwards 780 | (a dirty tree will appear "older" than the corresponding clean one), 781 | but you shouldn't be releasing software with -dirty anyways. 782 | 783 | Exceptions: 784 | 1: no tags. 0.postDISTANCE[.dev0] 785 | """ 786 | if pieces["closest-tag"]: 787 | rendered = pieces["closest-tag"] 788 | if pieces["distance"] or pieces["dirty"]: 789 | rendered += ".post%%d" %% pieces["distance"] 790 | if pieces["dirty"]: 791 | rendered += ".dev0" 792 | rendered += plus_or_dot(pieces) 793 | rendered += "g%%s" %% pieces["short"] 794 | else: 795 | # exception #1 796 | rendered = "0.post%%d" %% pieces["distance"] 797 | if pieces["dirty"]: 798 | rendered += ".dev0" 799 | rendered += "+g%%s" %% pieces["short"] 800 | return rendered 801 | 802 | 803 | def render_pep440_old(pieces): 804 | """TAG[.postDISTANCE[.dev0]] . 805 | 806 | The ".dev0" means dirty. 807 | 808 | Eexceptions: 809 | 1: no tags. 0.postDISTANCE[.dev0] 810 | """ 811 | if pieces["closest-tag"]: 812 | rendered = pieces["closest-tag"] 813 | if pieces["distance"] or pieces["dirty"]: 814 | rendered += ".post%%d" %% pieces["distance"] 815 | if pieces["dirty"]: 816 | rendered += ".dev0" 817 | else: 818 | # exception #1 819 | rendered = "0.post%%d" %% pieces["distance"] 820 | if pieces["dirty"]: 821 | rendered += ".dev0" 822 | return rendered 823 | 824 | 825 | def render_git_describe(pieces): 826 | """TAG[-DISTANCE-gHEX][-dirty]. 827 | 828 | Like 'git describe --tags --dirty --always'. 829 | 830 | Exceptions: 831 | 1: no tags. HEX[-dirty] (note: no 'g' prefix) 832 | """ 833 | if pieces["closest-tag"]: 834 | rendered = pieces["closest-tag"] 835 | if pieces["distance"]: 836 | rendered += "-%%d-g%%s" %% (pieces["distance"], pieces["short"]) 837 | else: 838 | # exception #1 839 | rendered = pieces["short"] 840 | if pieces["dirty"]: 841 | rendered += "-dirty" 842 | return rendered 843 | 844 | 845 | def render_git_describe_long(pieces): 846 | """TAG-DISTANCE-gHEX[-dirty]. 847 | 848 | Like 'git describe --tags --dirty --always -long'. 849 | The distance/hash is unconditional. 850 | 851 | Exceptions: 852 | 1: no tags. HEX[-dirty] (note: no 'g' prefix) 853 | """ 854 | if pieces["closest-tag"]: 855 | rendered = pieces["closest-tag"] 856 | rendered += "-%%d-g%%s" %% (pieces["distance"], pieces["short"]) 857 | else: 858 | # exception #1 859 | rendered = pieces["short"] 860 | if pieces["dirty"]: 861 | rendered += "-dirty" 862 | return rendered 863 | 864 | 865 | def render(pieces, style): 866 | """Render the given version pieces into the requested style.""" 867 | if pieces["error"]: 868 | return {"version": "unknown", 869 | "full-revisionid": pieces.get("long"), 870 | "dirty": None, 871 | "error": pieces["error"], 872 | "date": None} 873 | 874 | if not style or style == "default": 875 | style = "pep440" # the default 876 | 877 | if style == "pep440": 878 | rendered = render_pep440(pieces) 879 | elif style == "pep440-pre": 880 | rendered = render_pep440_pre(pieces) 881 | elif style == "pep440-post": 882 | rendered = render_pep440_post(pieces) 883 | elif style == "pep440-old": 884 | rendered = render_pep440_old(pieces) 885 | elif style == "git-describe": 886 | rendered = render_git_describe(pieces) 887 | elif style == "git-describe-long": 888 | rendered = render_git_describe_long(pieces) 889 | else: 890 | raise ValueError("unknown style '%%s'" %% style) 891 | 892 | return {"version": rendered, "full-revisionid": pieces["long"], 893 | "dirty": pieces["dirty"], "error": None, 894 | "date": pieces.get("date")} 895 | 896 | 897 | def get_versions(): 898 | """Get version information or return default if unable to do so.""" 899 | # I am in _version.py, which lives at ROOT/VERSIONFILE_SOURCE. If we have 900 | # __file__, we can work backwards from there to the root. Some 901 | # py2exe/bbfreeze/non-CPython implementations don't do __file__, in which 902 | # case we can only use expanded keywords. 903 | 904 | cfg = get_config() 905 | verbose = cfg.verbose 906 | 907 | try: 908 | return git_versions_from_keywords(get_keywords(), cfg.tag_prefix, 909 | verbose) 910 | except NotThisMethod: 911 | pass 912 | 913 | try: 914 | root = os.path.realpath(__file__) 915 | # versionfile_source is the relative path from the top of the source 916 | # tree (where the .git directory might live) to this file. Invert 917 | # this to find the root from __file__. 918 | for i in cfg.versionfile_source.split('/'): 919 | root = os.path.dirname(root) 920 | except NameError: 921 | return {"version": "0+unknown", "full-revisionid": None, 922 | "dirty": None, 923 | "error": "unable to find root of source tree", 924 | "date": None} 925 | 926 | try: 927 | pieces = git_pieces_from_vcs(cfg.tag_prefix, root, verbose) 928 | return render(pieces, cfg.style) 929 | except NotThisMethod: 930 | pass 931 | 932 | try: 933 | if cfg.parentdir_prefix: 934 | return versions_from_parentdir(cfg.parentdir_prefix, root, verbose) 935 | except NotThisMethod: 936 | pass 937 | 938 | return {"version": "0+unknown", "full-revisionid": None, 939 | "dirty": None, 940 | "error": "unable to compute version", "date": None} 941 | ''' 942 | 943 | 944 | @register_vcs_handler("git", "get_keywords") 945 | def git_get_keywords(versionfile_abs): 946 | """Extract version information from the given file.""" 947 | # the code embedded in _version.py can just fetch the value of these 948 | # keywords. When used from setup.py, we don't want to import _version.py, 949 | # so we do it with a regexp instead. This function is not used from 950 | # _version.py. 951 | keywords = {} 952 | try: 953 | f = open(versionfile_abs, "r") 954 | for line in f.readlines(): 955 | if line.strip().startswith("git_refnames ="): 956 | mo = re.search(r'=\s*"(.*)"', line) 957 | if mo: 958 | keywords["refnames"] = mo.group(1) 959 | if line.strip().startswith("git_full ="): 960 | mo = re.search(r'=\s*"(.*)"', line) 961 | if mo: 962 | keywords["full"] = mo.group(1) 963 | if line.strip().startswith("git_date ="): 964 | mo = re.search(r'=\s*"(.*)"', line) 965 | if mo: 966 | keywords["date"] = mo.group(1) 967 | f.close() 968 | except EnvironmentError: 969 | pass 970 | return keywords 971 | 972 | 973 | @register_vcs_handler("git", "keywords") 974 | def git_versions_from_keywords(keywords, tag_prefix, verbose): 975 | """Get version information from git keywords.""" 976 | if not keywords: 977 | raise NotThisMethod("no keywords at all, weird") 978 | date = keywords.get("date") 979 | if date is not None: 980 | # git-2.2.0 added "%cI", which expands to an ISO-8601 -compliant 981 | # datestamp. However we prefer "%ci" (which expands to an "ISO-8601 982 | # -like" string, which we must then edit to make compliant), because 983 | # it's been around since git-1.5.3, and it's too difficult to 984 | # discover which version we're using, or to work around using an 985 | # older one. 986 | date = date.strip().replace(" ", "T", 1).replace(" ", "", 1) 987 | refnames = keywords["refnames"].strip() 988 | if refnames.startswith("$Format"): 989 | if verbose: 990 | print("keywords are unexpanded, not using") 991 | raise NotThisMethod("unexpanded keywords, not a git-archive tarball") 992 | refs = set([r.strip() for r in refnames.strip("()").split(",")]) 993 | # starting in git-1.8.3, tags are listed as "tag: foo-1.0" instead of 994 | # just "foo-1.0". If we see a "tag: " prefix, prefer those. 995 | TAG = "tag: " 996 | tags = set([r[len(TAG):] for r in refs if r.startswith(TAG)]) 997 | if not tags: 998 | # Either we're using git < 1.8.3, or there really are no tags. We use 999 | # a heuristic: assume all version tags have a digit. The old git %d 1000 | # expansion behaves like git log --decorate=short and strips out the 1001 | # refs/heads/ and refs/tags/ prefixes that would let us distinguish 1002 | # between branches and tags. By ignoring refnames without digits, we 1003 | # filter out many common branch names like "release" and 1004 | # "stabilization", as well as "HEAD" and "master". 1005 | tags = set([r for r in refs if re.search(r'\d', r)]) 1006 | if verbose: 1007 | print("discarding '%s', no digits" % ",".join(refs - tags)) 1008 | if verbose: 1009 | print("likely tags: %s" % ",".join(sorted(tags))) 1010 | for ref in sorted(tags): 1011 | # sorting will prefer e.g. "2.0" over "2.0rc1" 1012 | if ref.startswith(tag_prefix): 1013 | r = ref[len(tag_prefix):] 1014 | if verbose: 1015 | print("picking %s" % r) 1016 | return {"version": r, 1017 | "full-revisionid": keywords["full"].strip(), 1018 | "dirty": False, "error": None, 1019 | "date": date} 1020 | # no suitable tags, so version is "0+unknown", but full hex is still there 1021 | if verbose: 1022 | print("no suitable tags, using unknown + full revision id") 1023 | return {"version": "0+unknown", 1024 | "full-revisionid": keywords["full"].strip(), 1025 | "dirty": False, "error": "no suitable tags", "date": None} 1026 | 1027 | 1028 | @register_vcs_handler("git", "pieces_from_vcs") 1029 | def git_pieces_from_vcs(tag_prefix, root, verbose, run_command=run_command): 1030 | """Get version from 'git describe' in the root of the source tree. 1031 | 1032 | This only gets called if the git-archive 'subst' keywords were *not* 1033 | expanded, and _version.py hasn't already been rewritten with a short 1034 | version string, meaning we're inside a checked out source tree. 1035 | """ 1036 | GITS = ["git"] 1037 | if sys.platform == "win32": 1038 | GITS = ["git.cmd", "git.exe"] 1039 | 1040 | out, rc = run_command(GITS, ["rev-parse", "--git-dir"], cwd=root, 1041 | hide_stderr=True) 1042 | if rc != 0: 1043 | if verbose: 1044 | print("Directory %s not under git control" % root) 1045 | raise NotThisMethod("'git rev-parse --git-dir' returned error") 1046 | 1047 | # if there is a tag matching tag_prefix, this yields TAG-NUM-gHEX[-dirty] 1048 | # if there isn't one, this yields HEX[-dirty] (no NUM) 1049 | describe_out, rc = run_command(GITS, ["describe", "--tags", "--dirty", 1050 | "--always", "--long", 1051 | "--match", "%s*" % tag_prefix], 1052 | cwd=root) 1053 | # --long was added in git-1.5.5 1054 | if describe_out is None: 1055 | raise NotThisMethod("'git describe' failed") 1056 | describe_out = describe_out.strip() 1057 | full_out, rc = run_command(GITS, ["rev-parse", "HEAD"], cwd=root) 1058 | if full_out is None: 1059 | raise NotThisMethod("'git rev-parse' failed") 1060 | full_out = full_out.strip() 1061 | 1062 | pieces = {} 1063 | pieces["long"] = full_out 1064 | pieces["short"] = full_out[:7] # maybe improved later 1065 | pieces["error"] = None 1066 | 1067 | # parse describe_out. It will be like TAG-NUM-gHEX[-dirty] or HEX[-dirty] 1068 | # TAG might have hyphens. 1069 | git_describe = describe_out 1070 | 1071 | # look for -dirty suffix 1072 | dirty = git_describe.endswith("-dirty") 1073 | pieces["dirty"] = dirty 1074 | if dirty: 1075 | git_describe = git_describe[:git_describe.rindex("-dirty")] 1076 | 1077 | # now we have TAG-NUM-gHEX or HEX 1078 | 1079 | if "-" in git_describe: 1080 | # TAG-NUM-gHEX 1081 | mo = re.search(r'^(.+)-(\d+)-g([0-9a-f]+)$', git_describe) 1082 | if not mo: 1083 | # unparseable. Maybe git-describe is misbehaving? 1084 | pieces["error"] = ("unable to parse git-describe output: '%s'" 1085 | % describe_out) 1086 | return pieces 1087 | 1088 | # tag 1089 | full_tag = mo.group(1) 1090 | if not full_tag.startswith(tag_prefix): 1091 | if verbose: 1092 | fmt = "tag '%s' doesn't start with prefix '%s'" 1093 | print(fmt % (full_tag, tag_prefix)) 1094 | pieces["error"] = ("tag '%s' doesn't start with prefix '%s'" 1095 | % (full_tag, tag_prefix)) 1096 | return pieces 1097 | pieces["closest-tag"] = full_tag[len(tag_prefix):] 1098 | 1099 | # distance: number of commits since tag 1100 | pieces["distance"] = int(mo.group(2)) 1101 | 1102 | # commit: short hex revision ID 1103 | pieces["short"] = mo.group(3) 1104 | 1105 | else: 1106 | # HEX: no tags 1107 | pieces["closest-tag"] = None 1108 | count_out, rc = run_command(GITS, ["rev-list", "HEAD", "--count"], 1109 | cwd=root) 1110 | pieces["distance"] = int(count_out) # total number of commits 1111 | 1112 | # commit date: see ISO-8601 comment in git_versions_from_keywords() 1113 | date = run_command(GITS, ["show", "-s", "--format=%ci", "HEAD"], 1114 | cwd=root)[0].strip() 1115 | pieces["date"] = date.strip().replace(" ", "T", 1).replace(" ", "", 1) 1116 | 1117 | return pieces 1118 | 1119 | 1120 | def do_vcs_install(manifest_in, versionfile_source, ipy): 1121 | """Git-specific installation logic for Versioneer. 1122 | 1123 | For Git, this means creating/changing .gitattributes to mark _version.py 1124 | for export-subst keyword substitution. 1125 | """ 1126 | GITS = ["git"] 1127 | if sys.platform == "win32": 1128 | GITS = ["git.cmd", "git.exe"] 1129 | files = [manifest_in, versionfile_source] 1130 | if ipy: 1131 | files.append(ipy) 1132 | try: 1133 | me = __file__ 1134 | if me.endswith(".pyc") or me.endswith(".pyo"): 1135 | me = os.path.splitext(me)[0] + ".py" 1136 | versioneer_file = os.path.relpath(me) 1137 | except NameError: 1138 | versioneer_file = "versioneer.py" 1139 | files.append(versioneer_file) 1140 | present = False 1141 | try: 1142 | f = open(".gitattributes", "r") 1143 | for line in f.readlines(): 1144 | if line.strip().startswith(versionfile_source): 1145 | if "export-subst" in line.strip().split()[1:]: 1146 | present = True 1147 | f.close() 1148 | except EnvironmentError: 1149 | pass 1150 | if not present: 1151 | f = open(".gitattributes", "a+") 1152 | f.write("%s export-subst\n" % versionfile_source) 1153 | f.close() 1154 | files.append(".gitattributes") 1155 | run_command(GITS, ["add", "--"] + files) 1156 | 1157 | 1158 | def versions_from_parentdir(parentdir_prefix, root, verbose): 1159 | """Try to determine the version from the parent directory name. 1160 | 1161 | Source tarballs conventionally unpack into a directory that includes both 1162 | the project name and a version string. We will also support searching up 1163 | two directory levels for an appropriately named parent directory 1164 | """ 1165 | rootdirs = [] 1166 | 1167 | for i in range(3): 1168 | dirname = os.path.basename(root) 1169 | if dirname.startswith(parentdir_prefix): 1170 | return {"version": dirname[len(parentdir_prefix):], 1171 | "full-revisionid": None, 1172 | "dirty": False, "error": None, "date": None} 1173 | else: 1174 | rootdirs.append(root) 1175 | root = os.path.dirname(root) # up a level 1176 | 1177 | if verbose: 1178 | print("Tried directories %s but none started with prefix %s" % 1179 | (str(rootdirs), parentdir_prefix)) 1180 | raise NotThisMethod("rootdir doesn't start with parentdir_prefix") 1181 | 1182 | 1183 | SHORT_VERSION_PY = """ 1184 | # This file was generated by 'versioneer.py' (0.18) from 1185 | # revision-control system data, or from the parent directory name of an 1186 | # unpacked source archive. Distribution tarballs contain a pre-generated copy 1187 | # of this file. 1188 | 1189 | import json 1190 | 1191 | version_json = ''' 1192 | %s 1193 | ''' # END VERSION_JSON 1194 | 1195 | 1196 | def get_versions(): 1197 | return json.loads(version_json) 1198 | """ 1199 | 1200 | 1201 | def versions_from_file(filename): 1202 | """Try to determine the version from _version.py if present.""" 1203 | try: 1204 | with open(filename) as f: 1205 | contents = f.read() 1206 | except EnvironmentError: 1207 | raise NotThisMethod("unable to read _version.py") 1208 | mo = re.search(r"version_json = '''\n(.*)''' # END VERSION_JSON", 1209 | contents, re.M | re.S) 1210 | if not mo: 1211 | mo = re.search(r"version_json = '''\r\n(.*)''' # END VERSION_JSON", 1212 | contents, re.M | re.S) 1213 | if not mo: 1214 | raise NotThisMethod("no version_json in _version.py") 1215 | return json.loads(mo.group(1)) 1216 | 1217 | 1218 | def write_to_version_file(filename, versions): 1219 | """Write the given version number to the given _version.py file.""" 1220 | os.unlink(filename) 1221 | contents = json.dumps(versions, sort_keys=True, 1222 | indent=1, separators=(",", ": ")) 1223 | with open(filename, "w") as f: 1224 | f.write(SHORT_VERSION_PY % contents) 1225 | 1226 | print("set %s to '%s'" % (filename, versions["version"])) 1227 | 1228 | 1229 | def plus_or_dot(pieces): 1230 | """Return a + if we don't already have one, else return a .""" 1231 | if "+" in pieces.get("closest-tag", ""): 1232 | return "." 1233 | return "+" 1234 | 1235 | 1236 | def render_pep440(pieces): 1237 | """Build up version string, with post-release "local version identifier". 1238 | 1239 | Our goal: TAG[+DISTANCE.gHEX[.dirty]] . Note that if you 1240 | get a tagged build and then dirty it, you'll get TAG+0.gHEX.dirty 1241 | 1242 | Exceptions: 1243 | 1: no tags. git_describe was just HEX. 0+untagged.DISTANCE.gHEX[.dirty] 1244 | """ 1245 | if pieces["closest-tag"]: 1246 | rendered = pieces["closest-tag"] 1247 | if pieces["distance"] or pieces["dirty"]: 1248 | rendered += plus_or_dot(pieces) 1249 | rendered += "%d.g%s" % (pieces["distance"], pieces["short"]) 1250 | if pieces["dirty"]: 1251 | rendered += ".dirty" 1252 | else: 1253 | # exception #1 1254 | rendered = "0+untagged.%d.g%s" % (pieces["distance"], 1255 | pieces["short"]) 1256 | if pieces["dirty"]: 1257 | rendered += ".dirty" 1258 | return rendered 1259 | 1260 | 1261 | def render_pep440_pre(pieces): 1262 | """TAG[.post.devDISTANCE] -- No -dirty. 1263 | 1264 | Exceptions: 1265 | 1: no tags. 0.post.devDISTANCE 1266 | """ 1267 | if pieces["closest-tag"]: 1268 | rendered = pieces["closest-tag"] 1269 | if pieces["distance"]: 1270 | rendered += ".post.dev%d" % pieces["distance"] 1271 | else: 1272 | # exception #1 1273 | rendered = "0.post.dev%d" % pieces["distance"] 1274 | return rendered 1275 | 1276 | 1277 | def render_pep440_post(pieces): 1278 | """TAG[.postDISTANCE[.dev0]+gHEX] . 1279 | 1280 | The ".dev0" means dirty. Note that .dev0 sorts backwards 1281 | (a dirty tree will appear "older" than the corresponding clean one), 1282 | but you shouldn't be releasing software with -dirty anyways. 1283 | 1284 | Exceptions: 1285 | 1: no tags. 0.postDISTANCE[.dev0] 1286 | """ 1287 | if pieces["closest-tag"]: 1288 | rendered = pieces["closest-tag"] 1289 | if pieces["distance"] or pieces["dirty"]: 1290 | rendered += ".post%d" % pieces["distance"] 1291 | if pieces["dirty"]: 1292 | rendered += ".dev0" 1293 | rendered += plus_or_dot(pieces) 1294 | rendered += "g%s" % pieces["short"] 1295 | else: 1296 | # exception #1 1297 | rendered = "0.post%d" % pieces["distance"] 1298 | if pieces["dirty"]: 1299 | rendered += ".dev0" 1300 | rendered += "+g%s" % pieces["short"] 1301 | return rendered 1302 | 1303 | 1304 | def render_pep440_old(pieces): 1305 | """TAG[.postDISTANCE[.dev0]] . 1306 | 1307 | The ".dev0" means dirty. 1308 | 1309 | Eexceptions: 1310 | 1: no tags. 0.postDISTANCE[.dev0] 1311 | """ 1312 | if pieces["closest-tag"]: 1313 | rendered = pieces["closest-tag"] 1314 | if pieces["distance"] or pieces["dirty"]: 1315 | rendered += ".post%d" % pieces["distance"] 1316 | if pieces["dirty"]: 1317 | rendered += ".dev0" 1318 | else: 1319 | # exception #1 1320 | rendered = "0.post%d" % pieces["distance"] 1321 | if pieces["dirty"]: 1322 | rendered += ".dev0" 1323 | return rendered 1324 | 1325 | 1326 | def render_git_describe(pieces): 1327 | """TAG[-DISTANCE-gHEX][-dirty]. 1328 | 1329 | Like 'git describe --tags --dirty --always'. 1330 | 1331 | Exceptions: 1332 | 1: no tags. HEX[-dirty] (note: no 'g' prefix) 1333 | """ 1334 | if pieces["closest-tag"]: 1335 | rendered = pieces["closest-tag"] 1336 | if pieces["distance"]: 1337 | rendered += "-%d-g%s" % (pieces["distance"], pieces["short"]) 1338 | else: 1339 | # exception #1 1340 | rendered = pieces["short"] 1341 | if pieces["dirty"]: 1342 | rendered += "-dirty" 1343 | return rendered 1344 | 1345 | 1346 | def render_git_describe_long(pieces): 1347 | """TAG-DISTANCE-gHEX[-dirty]. 1348 | 1349 | Like 'git describe --tags --dirty --always -long'. 1350 | The distance/hash is unconditional. 1351 | 1352 | Exceptions: 1353 | 1: no tags. HEX[-dirty] (note: no 'g' prefix) 1354 | """ 1355 | if pieces["closest-tag"]: 1356 | rendered = pieces["closest-tag"] 1357 | rendered += "-%d-g%s" % (pieces["distance"], pieces["short"]) 1358 | else: 1359 | # exception #1 1360 | rendered = pieces["short"] 1361 | if pieces["dirty"]: 1362 | rendered += "-dirty" 1363 | return rendered 1364 | 1365 | 1366 | def render(pieces, style): 1367 | """Render the given version pieces into the requested style.""" 1368 | if pieces["error"]: 1369 | return {"version": "unknown", 1370 | "full-revisionid": pieces.get("long"), 1371 | "dirty": None, 1372 | "error": pieces["error"], 1373 | "date": None} 1374 | 1375 | if not style or style == "default": 1376 | style = "pep440" # the default 1377 | 1378 | if style == "pep440": 1379 | rendered = render_pep440(pieces) 1380 | elif style == "pep440-pre": 1381 | rendered = render_pep440_pre(pieces) 1382 | elif style == "pep440-post": 1383 | rendered = render_pep440_post(pieces) 1384 | elif style == "pep440-old": 1385 | rendered = render_pep440_old(pieces) 1386 | elif style == "git-describe": 1387 | rendered = render_git_describe(pieces) 1388 | elif style == "git-describe-long": 1389 | rendered = render_git_describe_long(pieces) 1390 | else: 1391 | raise ValueError("unknown style '%s'" % style) 1392 | 1393 | return {"version": rendered, "full-revisionid": pieces["long"], 1394 | "dirty": pieces["dirty"], "error": None, 1395 | "date": pieces.get("date")} 1396 | 1397 | 1398 | class VersioneerBadRootError(Exception): 1399 | """The project root directory is unknown or missing key files.""" 1400 | 1401 | 1402 | def get_versions(verbose=False): 1403 | """Get the project version from whatever source is available. 1404 | 1405 | Returns dict with two keys: 'version' and 'full'. 1406 | """ 1407 | if "versioneer" in sys.modules: 1408 | # see the discussion in cmdclass.py:get_cmdclass() 1409 | del sys.modules["versioneer"] 1410 | 1411 | root = get_root() 1412 | cfg = get_config_from_root(root) 1413 | 1414 | assert cfg.VCS is not None, "please set [versioneer]VCS= in setup.cfg" 1415 | handlers = HANDLERS.get(cfg.VCS) 1416 | assert handlers, "unrecognized VCS '%s'" % cfg.VCS 1417 | verbose = verbose or cfg.verbose 1418 | assert cfg.versionfile_source is not None, \ 1419 | "please set versioneer.versionfile_source" 1420 | assert cfg.tag_prefix is not None, "please set versioneer.tag_prefix" 1421 | 1422 | versionfile_abs = os.path.join(root, cfg.versionfile_source) 1423 | 1424 | # extract version from first of: _version.py, VCS command (e.g. 'git 1425 | # describe'), parentdir. This is meant to work for developers using a 1426 | # source checkout, for users of a tarball created by 'setup.py sdist', 1427 | # and for users of a tarball/zipball created by 'git archive' or github's 1428 | # download-from-tag feature or the equivalent in other VCSes. 1429 | 1430 | get_keywords_f = handlers.get("get_keywords") 1431 | from_keywords_f = handlers.get("keywords") 1432 | if get_keywords_f and from_keywords_f: 1433 | try: 1434 | keywords = get_keywords_f(versionfile_abs) 1435 | ver = from_keywords_f(keywords, cfg.tag_prefix, verbose) 1436 | if verbose: 1437 | print("got version from expanded keyword %s" % ver) 1438 | return ver 1439 | except NotThisMethod: 1440 | pass 1441 | 1442 | try: 1443 | ver = versions_from_file(versionfile_abs) 1444 | if verbose: 1445 | print("got version from file %s %s" % (versionfile_abs, ver)) 1446 | return ver 1447 | except NotThisMethod: 1448 | pass 1449 | 1450 | from_vcs_f = handlers.get("pieces_from_vcs") 1451 | if from_vcs_f: 1452 | try: 1453 | pieces = from_vcs_f(cfg.tag_prefix, root, verbose) 1454 | ver = render(pieces, cfg.style) 1455 | if verbose: 1456 | print("got version from VCS %s" % ver) 1457 | return ver 1458 | except NotThisMethod: 1459 | pass 1460 | 1461 | try: 1462 | if cfg.parentdir_prefix: 1463 | ver = versions_from_parentdir(cfg.parentdir_prefix, root, verbose) 1464 | if verbose: 1465 | print("got version from parentdir %s" % ver) 1466 | return ver 1467 | except NotThisMethod: 1468 | pass 1469 | 1470 | if verbose: 1471 | print("unable to compute version") 1472 | 1473 | return {"version": "0+unknown", "full-revisionid": None, 1474 | "dirty": None, "error": "unable to compute version", 1475 | "date": None} 1476 | 1477 | 1478 | def get_version(): 1479 | """Get the short version string for this project.""" 1480 | return get_versions()["version"] 1481 | 1482 | 1483 | def get_cmdclass(): 1484 | """Get the custom setuptools/distutils subclasses used by Versioneer.""" 1485 | if "versioneer" in sys.modules: 1486 | del sys.modules["versioneer"] 1487 | # this fixes the "python setup.py develop" case (also 'install' and 1488 | # 'easy_install .'), in which subdependencies of the main project are 1489 | # built (using setup.py bdist_egg) in the same python process. Assume 1490 | # a main project A and a dependency B, which use different versions 1491 | # of Versioneer. A's setup.py imports A's Versioneer, leaving it in 1492 | # sys.modules by the time B's setup.py is executed, causing B to run 1493 | # with the wrong versioneer. Setuptools wraps the sub-dep builds in a 1494 | # sandbox that restores sys.modules to it's pre-build state, so the 1495 | # parent is protected against the child's "import versioneer". By 1496 | # removing ourselves from sys.modules here, before the child build 1497 | # happens, we protect the child from the parent's versioneer too. 1498 | # Also see https://github.com/warner/python-versioneer/issues/52 1499 | 1500 | cmds = {} 1501 | 1502 | # we add "version" to both distutils and setuptools 1503 | from distutils.core import Command 1504 | 1505 | class cmd_version(Command): 1506 | description = "report generated version string" 1507 | user_options = [] 1508 | boolean_options = [] 1509 | 1510 | def initialize_options(self): 1511 | pass 1512 | 1513 | def finalize_options(self): 1514 | pass 1515 | 1516 | def run(self): 1517 | vers = get_versions(verbose=True) 1518 | print("Version: %s" % vers["version"]) 1519 | print(" full-revisionid: %s" % vers.get("full-revisionid")) 1520 | print(" dirty: %s" % vers.get("dirty")) 1521 | print(" date: %s" % vers.get("date")) 1522 | if vers["error"]: 1523 | print(" error: %s" % vers["error"]) 1524 | cmds["version"] = cmd_version 1525 | 1526 | # we override "build_py" in both distutils and setuptools 1527 | # 1528 | # most invocation pathways end up running build_py: 1529 | # distutils/build -> build_py 1530 | # distutils/install -> distutils/build ->.. 1531 | # setuptools/bdist_wheel -> distutils/install ->.. 1532 | # setuptools/bdist_egg -> distutils/install_lib -> build_py 1533 | # setuptools/install -> bdist_egg ->.. 1534 | # setuptools/develop -> ? 1535 | # pip install: 1536 | # copies source tree to a tempdir before running egg_info/etc 1537 | # if .git isn't copied too, 'git describe' will fail 1538 | # then does setup.py bdist_wheel, or sometimes setup.py install 1539 | # setup.py egg_info -> ? 1540 | 1541 | # we override different "build_py" commands for both environments 1542 | if "setuptools" in sys.modules: 1543 | from setuptools.command.build_py import build_py as _build_py 1544 | else: 1545 | from distutils.command.build_py import build_py as _build_py 1546 | 1547 | class cmd_build_py(_build_py): 1548 | def run(self): 1549 | root = get_root() 1550 | cfg = get_config_from_root(root) 1551 | versions = get_versions() 1552 | _build_py.run(self) 1553 | # now locate _version.py in the new build/ directory and replace 1554 | # it with an updated value 1555 | if cfg.versionfile_build: 1556 | target_versionfile = os.path.join(self.build_lib, 1557 | cfg.versionfile_build) 1558 | print("UPDATING %s" % target_versionfile) 1559 | write_to_version_file(target_versionfile, versions) 1560 | cmds["build_py"] = cmd_build_py 1561 | 1562 | if "cx_Freeze" in sys.modules: # cx_freeze enabled? 1563 | from cx_Freeze.dist import build_exe as _build_exe 1564 | # nczeczulin reports that py2exe won't like the pep440-style string 1565 | # as FILEVERSION, but it can be used for PRODUCTVERSION, e.g. 1566 | # setup(console=[{ 1567 | # "version": versioneer.get_version().split("+", 1)[0], # FILEVERSION 1568 | # "product_version": versioneer.get_version(), 1569 | # ... 1570 | 1571 | class cmd_build_exe(_build_exe): 1572 | def run(self): 1573 | root = get_root() 1574 | cfg = get_config_from_root(root) 1575 | versions = get_versions() 1576 | target_versionfile = cfg.versionfile_source 1577 | print("UPDATING %s" % target_versionfile) 1578 | write_to_version_file(target_versionfile, versions) 1579 | 1580 | _build_exe.run(self) 1581 | os.unlink(target_versionfile) 1582 | with open(cfg.versionfile_source, "w") as f: 1583 | LONG = LONG_VERSION_PY[cfg.VCS] 1584 | f.write(LONG % 1585 | {"DOLLAR": "$", 1586 | "STYLE": cfg.style, 1587 | "TAG_PREFIX": cfg.tag_prefix, 1588 | "PARENTDIR_PREFIX": cfg.parentdir_prefix, 1589 | "VERSIONFILE_SOURCE": cfg.versionfile_source, 1590 | }) 1591 | cmds["build_exe"] = cmd_build_exe 1592 | del cmds["build_py"] 1593 | 1594 | if 'py2exe' in sys.modules: # py2exe enabled? 1595 | try: 1596 | from py2exe.distutils_buildexe import py2exe as _py2exe # py3 1597 | except ImportError: 1598 | from py2exe.build_exe import py2exe as _py2exe # py2 1599 | 1600 | class cmd_py2exe(_py2exe): 1601 | def run(self): 1602 | root = get_root() 1603 | cfg = get_config_from_root(root) 1604 | versions = get_versions() 1605 | target_versionfile = cfg.versionfile_source 1606 | print("UPDATING %s" % target_versionfile) 1607 | write_to_version_file(target_versionfile, versions) 1608 | 1609 | _py2exe.run(self) 1610 | os.unlink(target_versionfile) 1611 | with open(cfg.versionfile_source, "w") as f: 1612 | LONG = LONG_VERSION_PY[cfg.VCS] 1613 | f.write(LONG % 1614 | {"DOLLAR": "$", 1615 | "STYLE": cfg.style, 1616 | "TAG_PREFIX": cfg.tag_prefix, 1617 | "PARENTDIR_PREFIX": cfg.parentdir_prefix, 1618 | "VERSIONFILE_SOURCE": cfg.versionfile_source, 1619 | }) 1620 | cmds["py2exe"] = cmd_py2exe 1621 | 1622 | # we override different "sdist" commands for both environments 1623 | if "setuptools" in sys.modules: 1624 | from setuptools.command.sdist import sdist as _sdist 1625 | else: 1626 | from distutils.command.sdist import sdist as _sdist 1627 | 1628 | class cmd_sdist(_sdist): 1629 | def run(self): 1630 | versions = get_versions() 1631 | self._versioneer_generated_versions = versions 1632 | # unless we update this, the command will keep using the old 1633 | # version 1634 | self.distribution.metadata.version = versions["version"] 1635 | return _sdist.run(self) 1636 | 1637 | def make_release_tree(self, base_dir, files): 1638 | root = get_root() 1639 | cfg = get_config_from_root(root) 1640 | _sdist.make_release_tree(self, base_dir, files) 1641 | # now locate _version.py in the new base_dir directory 1642 | # (remembering that it may be a hardlink) and replace it with an 1643 | # updated value 1644 | target_versionfile = os.path.join(base_dir, cfg.versionfile_source) 1645 | print("UPDATING %s" % target_versionfile) 1646 | write_to_version_file(target_versionfile, 1647 | self._versioneer_generated_versions) 1648 | cmds["sdist"] = cmd_sdist 1649 | 1650 | return cmds 1651 | 1652 | 1653 | CONFIG_ERROR = """ 1654 | setup.cfg is missing the necessary Versioneer configuration. You need 1655 | a section like: 1656 | 1657 | [versioneer] 1658 | VCS = git 1659 | style = pep440 1660 | versionfile_source = src/myproject/_version.py 1661 | versionfile_build = myproject/_version.py 1662 | tag_prefix = 1663 | parentdir_prefix = myproject- 1664 | 1665 | You will also need to edit your setup.py to use the results: 1666 | 1667 | import versioneer 1668 | setup(version=versioneer.get_version(), 1669 | cmdclass=versioneer.get_cmdclass(), ...) 1670 | 1671 | Please read the docstring in ./versioneer.py for configuration instructions, 1672 | edit setup.cfg, and re-run the installer or 'python versioneer.py setup'. 1673 | """ 1674 | 1675 | SAMPLE_CONFIG = """ 1676 | # See the docstring in versioneer.py for instructions. Note that you must 1677 | # re-run 'versioneer.py setup' after changing this section, and commit the 1678 | # resulting files. 1679 | 1680 | [versioneer] 1681 | #VCS = git 1682 | #style = pep440 1683 | #versionfile_source = 1684 | #versionfile_build = 1685 | #tag_prefix = 1686 | #parentdir_prefix = 1687 | 1688 | """ 1689 | 1690 | INIT_PY_SNIPPET = """ 1691 | from ._version import get_versions 1692 | __version__ = get_versions()['version'] 1693 | del get_versions 1694 | """ 1695 | 1696 | 1697 | def do_setup(): 1698 | """Main VCS-independent setup function for installing Versioneer.""" 1699 | root = get_root() 1700 | try: 1701 | cfg = get_config_from_root(root) 1702 | except (EnvironmentError, configparser.NoSectionError, 1703 | configparser.NoOptionError) as e: 1704 | if isinstance(e, (EnvironmentError, configparser.NoSectionError)): 1705 | print("Adding sample versioneer config to setup.cfg", 1706 | file=sys.stderr) 1707 | with open(os.path.join(root, "setup.cfg"), "a") as f: 1708 | f.write(SAMPLE_CONFIG) 1709 | print(CONFIG_ERROR, file=sys.stderr) 1710 | return 1 1711 | 1712 | print(" creating %s" % cfg.versionfile_source) 1713 | with open(cfg.versionfile_source, "w") as f: 1714 | LONG = LONG_VERSION_PY[cfg.VCS] 1715 | f.write(LONG % {"DOLLAR": "$", 1716 | "STYLE": cfg.style, 1717 | "TAG_PREFIX": cfg.tag_prefix, 1718 | "PARENTDIR_PREFIX": cfg.parentdir_prefix, 1719 | "VERSIONFILE_SOURCE": cfg.versionfile_source, 1720 | }) 1721 | 1722 | ipy = os.path.join(os.path.dirname(cfg.versionfile_source), 1723 | "__init__.py") 1724 | if os.path.exists(ipy): 1725 | try: 1726 | with open(ipy, "r") as f: 1727 | old = f.read() 1728 | except EnvironmentError: 1729 | old = "" 1730 | if INIT_PY_SNIPPET not in old: 1731 | print(" appending to %s" % ipy) 1732 | with open(ipy, "a") as f: 1733 | f.write(INIT_PY_SNIPPET) 1734 | else: 1735 | print(" %s unmodified" % ipy) 1736 | else: 1737 | print(" %s doesn't exist, ok" % ipy) 1738 | ipy = None 1739 | 1740 | # Make sure both the top-level "versioneer.py" and versionfile_source 1741 | # (PKG/_version.py, used by runtime code) are in MANIFEST.in, so 1742 | # they'll be copied into source distributions. Pip won't be able to 1743 | # install the package without this. 1744 | manifest_in = os.path.join(root, "MANIFEST.in") 1745 | simple_includes = set() 1746 | try: 1747 | with open(manifest_in, "r") as f: 1748 | for line in f: 1749 | if line.startswith("include "): 1750 | for include in line.split()[1:]: 1751 | simple_includes.add(include) 1752 | except EnvironmentError: 1753 | pass 1754 | # That doesn't cover everything MANIFEST.in can do 1755 | # (http://docs.python.org/2/distutils/sourcedist.html#commands), so 1756 | # it might give some false negatives. Appending redundant 'include' 1757 | # lines is safe, though. 1758 | if "versioneer.py" not in simple_includes: 1759 | print(" appending 'versioneer.py' to MANIFEST.in") 1760 | with open(manifest_in, "a") as f: 1761 | f.write("include versioneer.py\n") 1762 | else: 1763 | print(" 'versioneer.py' already in MANIFEST.in") 1764 | if cfg.versionfile_source not in simple_includes: 1765 | print(" appending versionfile_source ('%s') to MANIFEST.in" % 1766 | cfg.versionfile_source) 1767 | with open(manifest_in, "a") as f: 1768 | f.write("include %s\n" % cfg.versionfile_source) 1769 | else: 1770 | print(" versionfile_source already in MANIFEST.in") 1771 | 1772 | # Make VCS-specific changes. For git, this means creating/changing 1773 | # .gitattributes to mark _version.py for export-subst keyword 1774 | # substitution. 1775 | do_vcs_install(manifest_in, cfg.versionfile_source, ipy) 1776 | return 0 1777 | 1778 | 1779 | def scan_setup_py(): 1780 | """Validate the contents of setup.py against Versioneer's expectations.""" 1781 | found = set() 1782 | setters = False 1783 | errors = 0 1784 | with open("setup.py", "r") as f: 1785 | for line in f.readlines(): 1786 | if "import versioneer" in line: 1787 | found.add("import") 1788 | if "versioneer.get_cmdclass()" in line: 1789 | found.add("cmdclass") 1790 | if "versioneer.get_version()" in line: 1791 | found.add("get_version") 1792 | if "versioneer.VCS" in line: 1793 | setters = True 1794 | if "versioneer.versionfile_source" in line: 1795 | setters = True 1796 | if len(found) != 3: 1797 | print("") 1798 | print("Your setup.py appears to be missing some important items") 1799 | print("(but I might be wrong). Please make sure it has something") 1800 | print("roughly like the following:") 1801 | print("") 1802 | print(" import versioneer") 1803 | print(" setup( version=versioneer.get_version(),") 1804 | print(" cmdclass=versioneer.get_cmdclass(), ...)") 1805 | print("") 1806 | errors += 1 1807 | if setters: 1808 | print("You should remove lines like 'versioneer.VCS = ' and") 1809 | print("'versioneer.versionfile_source = ' . This configuration") 1810 | print("now lives in setup.cfg, and should be removed from setup.py") 1811 | print("") 1812 | errors += 1 1813 | return errors 1814 | 1815 | 1816 | if __name__ == "__main__": 1817 | cmd = sys.argv[1] 1818 | if cmd == "setup": 1819 | errors = do_setup() 1820 | errors += scan_setup_py() 1821 | if errors: 1822 | sys.exit(1) 1823 | --------------------------------------------------------------------------------