├── .gitignore ├── .travis.yml ├── AUTHORS.rst ├── CONTRIBUTING.rst ├── HISTORY.rst ├── LICENSE ├── MANIFEST.in ├── Makefile ├── Pipfile ├── Pipfile.lock ├── README.rst ├── docs ├── Makefile ├── authors.rst ├── conf.py ├── contributing.rst ├── elm.rst ├── history.rst ├── index.rst ├── installation.rst ├── make.bat ├── readme.rst └── usage.rst ├── elm ├── __init__.py ├── elmk.cfg ├── elmk.py ├── elmr.cfg ├── elmr.py └── mltools.py ├── requirements.txt ├── setup.cfg ├── setup.py ├── tests ├── data │ ├── boston.data │ ├── diabetes.data │ └── iris.data ├── test_classification.py ├── test_regression.py └── test_xor.py └── tox.ini /.gitignore: -------------------------------------------------------------------------------- 1 | *.py[cod] 2 | 3 | # C extensions 4 | *.so 5 | 6 | # Packages 7 | *.eggs 8 | *.egg 9 | *.egg-info 10 | dist 11 | build 12 | eggs 13 | parts 14 | bin 15 | var 16 | sdist 17 | develop-eggs 18 | .installed.cfg 19 | lib 20 | lib64 21 | 22 | # Installer logs 23 | pip-log.txt 24 | 25 | # Unit test / coverage reports 26 | .coverage 27 | .tox 28 | nosetests.xml 29 | htmlcov 30 | 31 | # Translations 32 | *.mo 33 | 34 | # Mr Developer 35 | .mr.developer.cfg 36 | .project 37 | .pydevproject 38 | 39 | # Complexity 40 | output/*.html 41 | output/*/index.html 42 | 43 | # Sphinx 44 | docs/_build 45 | 46 | # Spyder3 editor 47 | .editorconfig 48 | .spyderworkspace 49 | .spyderproject 50 | 51 | # Dev environment 52 | env/ 53 | .idea/ 54 | *.bak -------------------------------------------------------------------------------- /.travis.yml: -------------------------------------------------------------------------------- 1 | # Config file for automatic testing at travis-ci.org 2 | 3 | language: python 4 | 5 | python: 6 | - "2.7" 7 | - "3.4" 8 | 9 | # command to install dependencies, e.g. pip install -r requirements.txt --use-mirrors 10 | install: pip install -r requirements.txt 11 | 12 | # command to run tests, e.g. python setup.py test 13 | script: python setup.py test 14 | -------------------------------------------------------------------------------- /AUTHORS.rst: -------------------------------------------------------------------------------- 1 | ======= 2 | Credits 3 | ======= 4 | 5 | Development Lead 6 | ---------------- 7 | 8 | * Augusto Almeida 9 | 10 | Contributors 11 | ------------ 12 | 13 | None yet. Why not be the first? 14 | -------------------------------------------------------------------------------- /CONTRIBUTING.rst: -------------------------------------------------------------------------------- 1 | ============ 2 | Contributing 3 | ============ 4 | 5 | Contributions are welcome, and they are greatly appreciated! Every 6 | little bit helps, and credit will always be given. 7 | 8 | You can contribute in many ways: 9 | 10 | Types of Contributions 11 | ---------------------- 12 | 13 | Report Bugs 14 | ~~~~~~~~~~~ 15 | 16 | Report bugs at https://github.com/acba/elm/issues. 17 | 18 | If you are reporting a bug, please include: 19 | 20 | * Your operating system name and version. 21 | * Any details about your local setup that might be helpful in troubleshooting. 22 | * Detailed steps to reproduce the bug. 23 | 24 | Fix Bugs 25 | ~~~~~~~~ 26 | 27 | Look through the GitHub issues for bugs. Anything tagged with "bug" 28 | is open to whoever wants to implement it. 29 | 30 | Implement Features 31 | ~~~~~~~~~~~~~~~~~~ 32 | 33 | Look through the GitHub issues for features. Anything tagged with "feature" 34 | is open to whoever wants to implement it. 35 | 36 | Write Documentation 37 | ~~~~~~~~~~~~~~~~~~~ 38 | 39 | Python Extreme Learning Machine (ELM) could always use more documentation, whether as part of the 40 | official Python Extreme Learning Machine (ELM) docs, in docstrings, or even on the web in blog posts, 41 | articles, and such. 42 | 43 | Submit Feedback 44 | ~~~~~~~~~~~~~~~ 45 | 46 | The best way to send feedback is to file an issue at https://github.com/acba/elm/issues. 47 | 48 | If you are proposing a feature: 49 | 50 | * Explain in detail how it would work. 51 | * Keep the scope as narrow as possible, to make it easier to implement. 52 | * Remember that this is a volunteer-driven project, and that contributions 53 | are welcome :) 54 | 55 | Get Started! 56 | ------------ 57 | 58 | Ready to contribute? Here's how to set up `elm` for local development. 59 | 60 | 1. Fork the `elm` repo on GitHub. 61 | 2. Clone your fork locally:: 62 | 63 | $ git clone git@github.com:your_name_here/elm.git 64 | 65 | 3. Install your local copy into a virtualenv. Assuming you have virtualenvwrapper installed, this is how you set up your fork for local development:: 66 | 67 | $ mkvirtualenv elm 68 | $ cd elm/ 69 | $ python setup.py develop 70 | 71 | 4. Create a branch for local development:: 72 | 73 | $ git checkout -b name-of-your-bugfix-or-feature 74 | 75 | Now you can make your changes locally. 76 | 77 | 5. When you're done making changes, check that your changes pass flake8 and the tests, including testing other Python versions with tox:: 78 | 79 | $ flake8 elm tests 80 | $ python setup.py test 81 | $ tox 82 | 83 | To get flake8 and tox, just pip install them into your virtualenv. 84 | 85 | 6. Commit your changes and push your branch to GitHub:: 86 | 87 | $ git add . 88 | $ git commit -m "Your detailed description of your changes." 89 | $ git push origin name-of-your-bugfix-or-feature 90 | 91 | 7. Submit a pull request through the GitHub website. 92 | 93 | Pull Request Guidelines 94 | ----------------------- 95 | 96 | Before you submit a pull request, check that it meets these guidelines: 97 | 98 | 1. The pull request should include tests. 99 | 2. If the pull request adds functionality, the docs should be updated. Put 100 | your new functionality into a function with a docstring, and add the 101 | feature to the list in README.rst. 102 | 3. The pull request should work for Python 2.6, 2.7, 3.3, and 3.4, and for PyPy. Check 103 | https://travis-ci.org/acba/elm/pull_requests 104 | and make sure that the tests pass for all supported Python versions. 105 | 106 | Tips 107 | ---- 108 | 109 | To run a subset of tests:: 110 | 111 | $ python -m unittest tests.test_elm 112 | -------------------------------------------------------------------------------- /HISTORY.rst: -------------------------------------------------------------------------------- 1 | .. :changelog: 2 | 3 | History 4 | ------- 5 | 6 | 0.1.0 (2015-02-03) 7 | ------------------ 8 | 9 | * First release on PyPI. 10 | 11 | 0.1.1 (2015-02-10) 12 | ------------------ 13 | 14 | * Fixed some package issues. 15 | * Added Python 2.7 support 16 | * Added extra parameter to search_param method. 17 | 18 | 0.1.2 (2017-09-19) 19 | ------------------ 20 | 21 | * Fixed a bug when using search_param. 22 | * Optunity solver is available when using search_param. -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | Copyright (c) 2015, Augusto Almeida 2 | All rights reserved. 3 | 4 | Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 5 | 6 | * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 7 | 8 | * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 9 | 10 | * Neither the name of Python Extreme Learning Machine (ELM) nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. 11 | 12 | THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 13 | -------------------------------------------------------------------------------- /MANIFEST.in: -------------------------------------------------------------------------------- 1 | include AUTHORS.rst 2 | include CONTRIBUTING.rst 3 | include HISTORY.rst 4 | include LICENSE 5 | include README.rst 6 | include requirements.txt 7 | 8 | recursive-include tests * 9 | recursive-exclude * __pycache__ 10 | recursive-exclude * *.py[co] 11 | 12 | recursive-include docs *.rst conf.py Makefile make.bat 13 | recursive-include elm *.cfg 14 | -------------------------------------------------------------------------------- /Makefile: -------------------------------------------------------------------------------- 1 | .PHONY: clean-pyc clean-build docs clean 2 | 3 | help: 4 | @echo "clean - remove all build, test, coverage and Python artifacts" 5 | @echo "clean-build - remove build artifacts" 6 | @echo "clean-pyc - remove Python file artifacts" 7 | @echo "clean-test - remove test and coverage artifacts" 8 | @echo "lint - check style with flake8" 9 | @echo "test - run tests quickly with the default Python" 10 | @echo "test-all - run tests on every Python version with tox" 11 | @echo "coverage - check code coverage quickly with the default Python" 12 | @echo "docs - generate Sphinx HTML documentation, including API docs" 13 | @echo "release - package and upload a release" 14 | @echo "dist - package" 15 | @echo "install - install the package to the active Python's site-packages" 16 | 17 | clean: clean-build clean-pyc clean-test 18 | 19 | clean-build: 20 | rm -fr build/ 21 | rm -fr dist/ 22 | rm -fr .eggs/ 23 | find . -name '*.egg-info' -exec rm -fr {} + 24 | find . -name '*.egg' -exec rm -f {} + 25 | 26 | clean-pyc: 27 | find . -name '*.pyc' -exec rm -f {} + 28 | find . -name '*.pyo' -exec rm -f {} + 29 | find . -name '*~' -exec rm -f {} + 30 | find . -name '__pycache__' -exec rm -fr {} + 31 | 32 | clean-test: 33 | rm -fr .tox/ 34 | rm -f .coverage 35 | rm -fr htmlcov/ 36 | 37 | lint: 38 | flake8 elm tests 39 | 40 | test: 41 | python setup.py test 42 | 43 | test-all: 44 | tox 45 | 46 | coverage: 47 | coverage run --source elm setup.py test 48 | coverage report -m 49 | coverage html 50 | open htmlcov/index.html 51 | 52 | docs: 53 | rm -f docs/elm.rst 54 | rm -f docs/modules.rst 55 | sphinx-apidoc -o docs/ elm 56 | $(MAKE) -C docs clean 57 | $(MAKE) -C docs html 58 | open docs/_build/html/index.html 59 | 60 | release-test: 61 | twine upload --repository-url https://test.pypi.org/legacy/ dist/* 62 | 63 | release: 64 | twine upload dist/* 65 | 66 | dist: clean 67 | python setup.py sdist 68 | python setup.py bdist_wheel 69 | ls -l dist 70 | 71 | install: clean 72 | python setup.py install 73 | -------------------------------------------------------------------------------- /Pipfile: -------------------------------------------------------------------------------- 1 | [[source]] 2 | url = "https://pypi.org/simple" 3 | verify_ssl = true 4 | name = "pypi" 5 | 6 | [packages] 7 | numpy = "==1.15.4" 8 | deap = "==1.2.2" 9 | sphinxcontrib-napoleon = "==0.7" 10 | Optunity = "==1.1.1" 11 | sphinx_rtd_theme = "==0.4.2" 12 | 13 | [dev-packages] 14 | twine = "*" 15 | 16 | [requires] 17 | python_version = "3.6" 18 | -------------------------------------------------------------------------------- /Pipfile.lock: -------------------------------------------------------------------------------- 1 | { 2 | "_meta": { 3 | "hash": { 4 | "sha256": "aedf1ef43ef65f3bbcda97b36218c25dfb95cca18f18a6815a3ef6b810ece555" 5 | }, 6 | "pipfile-spec": 6, 7 | "requires": { 8 | "python_version": "3.6" 9 | }, 10 | "sources": [ 11 | { 12 | "name": "pypi", 13 | "url": "https://pypi.org/simple", 14 | "verify_ssl": true 15 | } 16 | ] 17 | }, 18 | "default": { 19 | "alabaster": { 20 | "hashes": [ 21 | "sha256:446438bdcca0e05bd45ea2de1668c1d9b032e1a9154c2c259092d77031ddd359", 22 | "sha256:a661d72d58e6ea8a57f7a86e37d86716863ee5e92788398526d58b26a4e4dc02" 23 | ], 24 | "version": "==0.7.12" 25 | }, 26 | "babel": { 27 | "hashes": [ 28 | "sha256:6778d85147d5d85345c14a26aada5e478ab04e39b078b0745ee6870c2b5cf669", 29 | "sha256:8cba50f48c529ca3fa18cf81fa9403be176d374ac4d60738b839122dfaaa3d23" 30 | ], 31 | "version": "==2.6.0" 32 | }, 33 | "certifi": { 34 | "hashes": [ 35 | "sha256:339dc09518b07e2fa7eda5450740925974815557727d6bd35d319c1524a04a4c", 36 | "sha256:6d58c986d22b038c8c0df30d639f23a3e6d172a05c3583e766f4c0b785c0986a" 37 | ], 38 | "version": "==2018.10.15" 39 | }, 40 | "chardet": { 41 | "hashes": [ 42 | "sha256:84ab92ed1c4d4f16916e05906b6b75a6c0fb5db821cc65e70cbd64a3e2a5eaae", 43 | "sha256:fc323ffcaeaed0e0a02bf4d117757b98aed530d9ed4531e3e15460124c106691" 44 | ], 45 | "version": "==3.0.4" 46 | }, 47 | "deap": { 48 | "hashes": [ 49 | "sha256:95c63e66d755ec206c80fdb2908851c0bef420ee8651ad7be4f0578e9e909bcf" 50 | ], 51 | "index": "pypi", 52 | "version": "==1.2.2" 53 | }, 54 | "docutils": { 55 | "hashes": [ 56 | "sha256:02aec4bd92ab067f6ff27a38a38a41173bf01bed8f89157768c1573f53e474a6", 57 | "sha256:51e64ef2ebfb29cae1faa133b3710143496eca21c530f3f71424d77687764274", 58 | "sha256:7a4bd47eaf6596e1295ecb11361139febe29b084a87bf005bf899f9a42edc3c6" 59 | ], 60 | "version": "==0.14" 61 | }, 62 | "idna": { 63 | "hashes": [ 64 | "sha256:156a6814fb5ac1fc6850fb002e0852d56c0c8d2531923a51032d1b70760e186e", 65 | "sha256:684a38a6f903c1d71d6d5fac066b58d7768af4de2b832e426ec79c30daa94a16" 66 | ], 67 | "version": "==2.7" 68 | }, 69 | "imagesize": { 70 | "hashes": [ 71 | "sha256:3f349de3eb99145973fefb7dbe38554414e5c30abd0c8e4b970a7c9d09f3a1d8", 72 | "sha256:f3832918bc3c66617f92e35f5d70729187676313caa60c187eb0f28b8fe5e3b5" 73 | ], 74 | "version": "==1.1.0" 75 | }, 76 | "jinja2": { 77 | "hashes": [ 78 | "sha256:74c935a1b8bb9a3947c50a54766a969d4846290e1e788ea44c1392163723c3bd", 79 | "sha256:f84be1bb0040caca4cea721fcbbbbd61f9be9464ca236387158b0feea01914a4" 80 | ], 81 | "version": "==2.10" 82 | }, 83 | "markupsafe": { 84 | "hashes": [ 85 | "sha256:048ef924c1623740e70204aa7143ec592504045ae4429b59c30054cb31e3c432", 86 | "sha256:130f844e7f5bdd8e9f3f42e7102ef1d49b2e6fdf0d7526df3f87281a532d8c8b", 87 | "sha256:19f637c2ac5ae9da8bfd98cef74d64b7e1bb8a63038a3505cd182c3fac5eb4d9", 88 | "sha256:1b8a7a87ad1b92bd887568ce54b23565f3fd7018c4180136e1cf412b405a47af", 89 | "sha256:1c25694ca680b6919de53a4bb3bdd0602beafc63ff001fea2f2fc16ec3a11834", 90 | "sha256:1f19ef5d3908110e1e891deefb5586aae1b49a7440db952454b4e281b41620cd", 91 | "sha256:1fa6058938190ebe8290e5cae6c351e14e7bb44505c4a7624555ce57fbbeba0d", 92 | "sha256:31cbb1359e8c25f9f48e156e59e2eaad51cd5242c05ed18a8de6dbe85184e4b7", 93 | "sha256:3e835d8841ae7863f64e40e19477f7eb398674da6a47f09871673742531e6f4b", 94 | "sha256:4e97332c9ce444b0c2c38dd22ddc61c743eb208d916e4265a2a3b575bdccb1d3", 95 | "sha256:525396ee324ee2da82919f2ee9c9e73b012f23e7640131dd1b53a90206a0f09c", 96 | "sha256:52b07fbc32032c21ad4ab060fec137b76eb804c4b9a1c7c7dc562549306afad2", 97 | "sha256:52ccb45e77a1085ec5461cde794e1aa037df79f473cbc69b974e73940655c8d7", 98 | "sha256:5c3fbebd7de20ce93103cb3183b47671f2885307df4a17a0ad56a1dd51273d36", 99 | "sha256:5e5851969aea17660e55f6a3be00037a25b96a9b44d2083651812c99d53b14d1", 100 | "sha256:5edfa27b2d3eefa2210fb2f5d539fbed81722b49f083b2c6566455eb7422fd7e", 101 | "sha256:7d263e5770efddf465a9e31b78362d84d015cc894ca2c131901a4445eaa61ee1", 102 | "sha256:83381342bfc22b3c8c06f2dd93a505413888694302de25add756254beee8449c", 103 | "sha256:857eebb2c1dc60e4219ec8e98dfa19553dae33608237e107db9c6078b1167856", 104 | "sha256:98e439297f78fca3a6169fd330fbe88d78b3bb72f967ad9961bcac0d7fdd1550", 105 | "sha256:bf54103892a83c64db58125b3f2a43df6d2cb2d28889f14c78519394feb41492", 106 | "sha256:d9ac82be533394d341b41d78aca7ed0e0f4ba5a2231602e2f05aa87f25c51672", 107 | "sha256:e982fe07ede9fada6ff6705af70514a52beb1b2c3d25d4e873e82114cf3c5401", 108 | "sha256:edce2ea7f3dfc981c4ddc97add8a61381d9642dc3273737e756517cc03e84dd6", 109 | "sha256:efdc45ef1afc238db84cb4963aa689c0408912a0239b0721cb172b4016eb31d6", 110 | "sha256:f137c02498f8b935892d5c0172560d7ab54bc45039de8805075e19079c639a9c", 111 | "sha256:f82e347a72f955b7017a39708a3667f106e6ad4d10b25f237396a7115d8ed5fd", 112 | "sha256:fb7c206e01ad85ce57feeaaa0bf784b97fa3cad0d4a5737bc5295785f5c613a1" 113 | ], 114 | "version": "==1.1.0" 115 | }, 116 | "numpy": { 117 | "hashes": [ 118 | "sha256:0df89ca13c25eaa1621a3f09af4c8ba20da849692dcae184cb55e80952c453fb", 119 | "sha256:154c35f195fd3e1fad2569930ca51907057ae35e03938f89a8aedae91dd1b7c7", 120 | "sha256:18e84323cdb8de3325e741a7a8dd4a82db74fde363dce32b625324c7b32aa6d7", 121 | "sha256:1e8956c37fc138d65ded2d96ab3949bd49038cc6e8a4494b1515b0ba88c91565", 122 | "sha256:23557bdbca3ccbde3abaa12a6e82299bc92d2b9139011f8c16ca1bb8c75d1e95", 123 | "sha256:24fd645a5e5d224aa6e39d93e4a722fafa9160154f296fd5ef9580191c755053", 124 | "sha256:36e36b6868e4440760d4b9b44587ea1dc1f06532858d10abba98e851e154ca70", 125 | "sha256:3d734559db35aa3697dadcea492a423118c5c55d176da2f3be9c98d4803fc2a7", 126 | "sha256:416a2070acf3a2b5d586f9a6507bb97e33574df5bd7508ea970bbf4fc563fa52", 127 | "sha256:4a22dc3f5221a644dfe4a63bf990052cc674ef12a157b1056969079985c92816", 128 | "sha256:4d8d3e5aa6087490912c14a3c10fbdd380b40b421c13920ff468163bc50e016f", 129 | "sha256:4f41fd159fba1245e1958a99d349df49c616b133636e0cf668f169bce2aeac2d", 130 | "sha256:561ef098c50f91fbac2cc9305b68c915e9eb915a74d9038ecf8af274d748f76f", 131 | "sha256:56994e14b386b5c0a9b875a76d22d707b315fa037affc7819cda08b6d0489756", 132 | "sha256:73a1f2a529604c50c262179fcca59c87a05ff4614fe8a15c186934d84d09d9a5", 133 | "sha256:7da99445fd890206bfcc7419f79871ba8e73d9d9e6b82fe09980bc5bb4efc35f", 134 | "sha256:99d59e0bcadac4aa3280616591fb7bcd560e2218f5e31d5223a2e12a1425d495", 135 | "sha256:a4cc09489843c70b22e8373ca3dfa52b3fab778b57cf81462f1203b0852e95e3", 136 | "sha256:a61dc29cfca9831a03442a21d4b5fd77e3067beca4b5f81f1a89a04a71cf93fa", 137 | "sha256:b1853df739b32fa913cc59ad9137caa9cc3d97ff871e2bbd89c2a2a1d4a69451", 138 | "sha256:b1f44c335532c0581b77491b7715a871d0dd72e97487ac0f57337ccf3ab3469b", 139 | "sha256:b261e0cb0d6faa8fd6863af26d30351fd2ffdb15b82e51e81e96b9e9e2e7ba16", 140 | "sha256:c857ae5dba375ea26a6228f98c195fec0898a0fd91bcf0e8a0cae6d9faf3eca7", 141 | "sha256:cf5bb4a7d53a71bb6a0144d31df784a973b36d8687d615ef6a7e9b1809917a9b", 142 | "sha256:db9814ff0457b46f2e1d494c1efa4111ca089e08c8b983635ebffb9c1573361f", 143 | "sha256:df04f4bad8a359daa2ff74f8108ea051670cafbca533bb2636c58b16e962989e", 144 | "sha256:ecf81720934a0e18526177e645cbd6a8a21bb0ddc887ff9738de07a1df5c6b61", 145 | "sha256:edfa6fba9157e0e3be0f40168eb142511012683ac3dc82420bee4a3f3981b30e" 146 | ], 147 | "index": "pypi", 148 | "version": "==1.15.4" 149 | }, 150 | "optunity": { 151 | "hashes": [ 152 | "sha256:28c6a9104872c580f46648c7fd1f4a2b39e10593c1eb31f61f6da1acd275f07c", 153 | "sha256:a83618dd37e014c5993e8877749e0ee17864d24783f19f5ebdeedb5525c0a65b" 154 | ], 155 | "index": "pypi", 156 | "version": "==1.1.1" 157 | }, 158 | "packaging": { 159 | "hashes": [ 160 | "sha256:0886227f54515e592aaa2e5a553332c73962917f2831f1b0f9b9f4380a4b9807", 161 | "sha256:f95a1e147590f204328170981833854229bb2912ac3d5f89e2a8ccd2834800c9" 162 | ], 163 | "version": "==18.0" 164 | }, 165 | "pockets": { 166 | "hashes": [ 167 | "sha256:109eb91588e9cf722de98c98d300e1c5896e877f5704dc61176fa09686ca635b", 168 | "sha256:21a2405543c439ac091453ed187f558cf5294d3f85f15310f214ad4de057e0af" 169 | ], 170 | "version": "==0.7.2" 171 | }, 172 | "pygments": { 173 | "hashes": [ 174 | "sha256:78f3f434bcc5d6ee09020f92ba487f95ba50f1e3ef83ae96b9d5ffa1bab25c5d", 175 | "sha256:dbae1046def0efb574852fab9e90209b23f556367b5a320c0bcb871c77c3e8cc" 176 | ], 177 | "version": "==2.2.0" 178 | }, 179 | "pyparsing": { 180 | "hashes": [ 181 | "sha256:40856e74d4987de5d01761a22d1621ae1c7f8774585acae358aa5c5936c6c90b", 182 | "sha256:f353aab21fd474459d97b709e527b5571314ee5f067441dc9f88e33eecd96592" 183 | ], 184 | "version": "==2.3.0" 185 | }, 186 | "pytz": { 187 | "hashes": [ 188 | "sha256:31cb35c89bd7d333cd32c5f278fca91b523b0834369e757f4c5641ea252236ca", 189 | "sha256:8e0f8568c118d3077b46be7d654cc8167fa916092e28320cde048e54bfc9f1e6" 190 | ], 191 | "version": "==2018.7" 192 | }, 193 | "requests": { 194 | "hashes": [ 195 | "sha256:65b3a120e4329e33c9889db89c80976c5272f56ea92d3e74da8a463992e3ff54", 196 | "sha256:ea881206e59f41dbd0bd445437d792e43906703fff75ca8ff43ccdb11f33f263" 197 | ], 198 | "version": "==2.20.1" 199 | }, 200 | "six": { 201 | "hashes": [ 202 | "sha256:70e8a77beed4562e7f14fe23a786b54f6296e34344c23bc42f07b15018ff98e9", 203 | "sha256:832dc0e10feb1aa2c68dcc57dbb658f1c7e65b9b61af69048abc87a2db00a0eb" 204 | ], 205 | "version": "==1.11.0" 206 | }, 207 | "snowballstemmer": { 208 | "hashes": [ 209 | "sha256:919f26a68b2c17a7634da993d91339e288964f93c274f1343e3bbbe2096e1128", 210 | "sha256:9f3bcd3c401c3e862ec0ebe6d2c069ebc012ce142cce209c098ccb5b09136e89" 211 | ], 212 | "version": "==1.2.1" 213 | }, 214 | "sphinx": { 215 | "hashes": [ 216 | "sha256:120732cbddb1b2364471c3d9f8bfd4b0c5b550862f99a65736c77f970b142aea", 217 | "sha256:b348790776490894e0424101af9c8413f2a86831524bd55c5f379d3e3e12ca64" 218 | ], 219 | "version": "==1.8.2" 220 | }, 221 | "sphinx-rtd-theme": { 222 | "hashes": [ 223 | "sha256:02f02a676d6baabb758a20c7a479d58648e0f64f13e07d1b388e9bb2afe86a09", 224 | "sha256:d0f6bc70f98961145c5b0e26a992829363a197321ba571b31b24ea91879e0c96" 225 | ], 226 | "index": "pypi", 227 | "version": "==0.4.2" 228 | }, 229 | "sphinxcontrib-napoleon": { 230 | "hashes": [ 231 | "sha256:407382beed396e9f2d7f3043fad6afda95719204a1e1a231ac865f40abcbfcf8", 232 | "sha256:711e41a3974bdf110a484aec4c1a556799eb0b3f3b897521a018ad7e2db13fef" 233 | ], 234 | "index": "pypi", 235 | "version": "==0.7" 236 | }, 237 | "sphinxcontrib-websupport": { 238 | "hashes": [ 239 | "sha256:68ca7ff70785cbe1e7bccc71a48b5b6d965d79ca50629606c7861a21b206d9dd", 240 | "sha256:9de47f375baf1ea07cdb3436ff39d7a9c76042c10a769c52353ec46e4e8fc3b9" 241 | ], 242 | "version": "==1.1.0" 243 | }, 244 | "urllib3": { 245 | "hashes": [ 246 | "sha256:61bf29cada3fc2fbefad4fdf059ea4bd1b4a86d2b6d15e1c7c0b582b9752fe39", 247 | "sha256:de9529817c93f27c8ccbfead6985011db27bd0ddfcdb2d86f3f663385c6a9c22" 248 | ], 249 | "version": "==1.24.1" 250 | } 251 | }, 252 | "develop": { 253 | "bleach": { 254 | "hashes": [ 255 | "sha256:48d39675b80a75f6d1c3bdbffec791cf0bbbab665cf01e20da701c77de278718", 256 | "sha256:73d26f018af5d5adcdabf5c1c974add4361a9c76af215fe32fdec8a6fc5fb9b9" 257 | ], 258 | "version": "==3.0.2" 259 | }, 260 | "certifi": { 261 | "hashes": [ 262 | "sha256:339dc09518b07e2fa7eda5450740925974815557727d6bd35d319c1524a04a4c", 263 | "sha256:6d58c986d22b038c8c0df30d639f23a3e6d172a05c3583e766f4c0b785c0986a" 264 | ], 265 | "version": "==2018.10.15" 266 | }, 267 | "chardet": { 268 | "hashes": [ 269 | "sha256:84ab92ed1c4d4f16916e05906b6b75a6c0fb5db821cc65e70cbd64a3e2a5eaae", 270 | "sha256:fc323ffcaeaed0e0a02bf4d117757b98aed530d9ed4531e3e15460124c106691" 271 | ], 272 | "version": "==3.0.4" 273 | }, 274 | "docutils": { 275 | "hashes": [ 276 | "sha256:02aec4bd92ab067f6ff27a38a38a41173bf01bed8f89157768c1573f53e474a6", 277 | "sha256:51e64ef2ebfb29cae1faa133b3710143496eca21c530f3f71424d77687764274", 278 | "sha256:7a4bd47eaf6596e1295ecb11361139febe29b084a87bf005bf899f9a42edc3c6" 279 | ], 280 | "version": "==0.14" 281 | }, 282 | "idna": { 283 | "hashes": [ 284 | "sha256:156a6814fb5ac1fc6850fb002e0852d56c0c8d2531923a51032d1b70760e186e", 285 | "sha256:684a38a6f903c1d71d6d5fac066b58d7768af4de2b832e426ec79c30daa94a16" 286 | ], 287 | "version": "==2.7" 288 | }, 289 | "pkginfo": { 290 | "hashes": [ 291 | "sha256:5878d542a4b3f237e359926384f1dde4e099c9f5525d236b1840cf704fa8d474", 292 | "sha256:a39076cb3eb34c333a0dd390b568e9e1e881c7bf2cc0aee12120636816f55aee" 293 | ], 294 | "version": "==1.4.2" 295 | }, 296 | "pygments": { 297 | "hashes": [ 298 | "sha256:78f3f434bcc5d6ee09020f92ba487f95ba50f1e3ef83ae96b9d5ffa1bab25c5d", 299 | "sha256:dbae1046def0efb574852fab9e90209b23f556367b5a320c0bcb871c77c3e8cc" 300 | ], 301 | "version": "==2.2.0" 302 | }, 303 | "readme-renderer": { 304 | "hashes": [ 305 | "sha256:bb16f55b259f27f75f640acf5e00cf897845a8b3e4731b5c1a436e4b8529202f", 306 | "sha256:c8532b79afc0375a85f10433eca157d6b50f7d6990f337fa498c96cd4bfc203d" 307 | ], 308 | "version": "==24.0" 309 | }, 310 | "requests": { 311 | "hashes": [ 312 | "sha256:65b3a120e4329e33c9889db89c80976c5272f56ea92d3e74da8a463992e3ff54", 313 | "sha256:ea881206e59f41dbd0bd445437d792e43906703fff75ca8ff43ccdb11f33f263" 314 | ], 315 | "version": "==2.20.1" 316 | }, 317 | "requests-toolbelt": { 318 | "hashes": [ 319 | "sha256:42c9c170abc2cacb78b8ab23ac957945c7716249206f90874651971a4acff237", 320 | "sha256:f6a531936c6fa4c6cfce1b9c10d5c4f498d16528d2a54a22ca00011205a187b5" 321 | ], 322 | "version": "==0.8.0" 323 | }, 324 | "six": { 325 | "hashes": [ 326 | "sha256:70e8a77beed4562e7f14fe23a786b54f6296e34344c23bc42f07b15018ff98e9", 327 | "sha256:832dc0e10feb1aa2c68dcc57dbb658f1c7e65b9b61af69048abc87a2db00a0eb" 328 | ], 329 | "version": "==1.11.0" 330 | }, 331 | "tqdm": { 332 | "hashes": [ 333 | "sha256:3c4d4a5a41ef162dd61f1edb86b0e1c7859054ab656b2e7c7b77e7fbf6d9f392", 334 | "sha256:5b4d5549984503050883bc126280b386f5f4ca87e6c023c5d015655ad75bdebb" 335 | ], 336 | "version": "==4.28.1" 337 | }, 338 | "twine": { 339 | "hashes": [ 340 | "sha256:7d89bc6acafb31d124e6e5b295ef26ac77030bf098960c2a4c4e058335827c5c", 341 | "sha256:fad6f1251195f7ddd1460cb76d6ea106c93adb4e56c41e0da79658e56e547d2c" 342 | ], 343 | "index": "pypi", 344 | "version": "==1.12.1" 345 | }, 346 | "urllib3": { 347 | "hashes": [ 348 | "sha256:61bf29cada3fc2fbefad4fdf059ea4bd1b4a86d2b6d15e1c7c0b582b9752fe39", 349 | "sha256:de9529817c93f27c8ccbfead6985011db27bd0ddfcdb2d86f3f663385c6a9c22" 350 | ], 351 | "version": "==1.24.1" 352 | }, 353 | "webencodings": { 354 | "hashes": [ 355 | "sha256:a0af1213f3c2226497a97e2b3aa01a7e4bee4f403f95be16fc9acd2947514a78", 356 | "sha256:b36a1c245f2d304965eb4e0a82848379241dc04b865afcc4aab16748587e1923" 357 | ], 358 | "version": "==0.5.1" 359 | } 360 | } 361 | } 362 | -------------------------------------------------------------------------------- /README.rst: -------------------------------------------------------------------------------- 1 | ===================================== 2 | Python Extreme Learning Machine (ELM) 3 | ===================================== 4 | 5 | .. image:: https://badge.fury.io/py/elm.png 6 | :target: http://badge.fury.io/py/elm 7 | 8 | .. image:: https://travis-ci.org/acba/elm.png?branch=master 9 | :target: https://travis-ci.org/acba/elm 10 | 11 | .. image:: https://pypip.in/d/elm/badge.png 12 | :target: https://pypi.python.org/pypi/elm 13 | 14 | 15 | Python Extreme Learning Machine (ELM) is a machine learning technique used for classification/regression tasks. 16 | 17 | * Free software: BSD license 18 | * Documentation: https://elm.readthedocs.org. 19 | 20 | Features 21 | -------- 22 | 23 | * ELM Kernel 24 | * ELM Random Neurons 25 | * MLTools 26 | 27 | 28 | 29 | .. image:: https://img.shields.io/badge/Donate-PayPal-green.svg 30 | :target: https://www.paypal.com/cgi-bin/webscr?cmd=_donations&business=QKX5XYS8EYJLA¤cy_code=BRL&source=url 31 | -------------------------------------------------------------------------------- /docs/Makefile: -------------------------------------------------------------------------------- 1 | # Makefile for Sphinx documentation 2 | # 3 | 4 | # You can set these variables from the command line. 5 | SPHINXOPTS = 6 | SPHINXBUILD = sphinx-build 7 | PAPER = 8 | BUILDDIR = _build 9 | 10 | # User-friendly check for sphinx-build 11 | ifeq ($(shell which $(SPHINXBUILD) >/dev/null 2>&1; echo $$?), 1) 12 | $(error The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx installed, then set the SPHINXBUILD environment variable to point to the full path of the '$(SPHINXBUILD)' executable. Alternatively you can add the directory with the executable to your PATH. If you don't have Sphinx installed, grab it from http://sphinx-doc.org/) 13 | endif 14 | 15 | # Internal variables. 16 | PAPEROPT_a4 = -D latex_paper_size=a4 17 | PAPEROPT_letter = -D latex_paper_size=letter 18 | ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . 19 | # the i18n builder cannot share the environment and doctrees with the others 20 | I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . 21 | 22 | .PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest gettext 23 | 24 | help: 25 | @echo "Please use \`make ' where is one of" 26 | @echo " html to make standalone HTML files" 27 | @echo " dirhtml to make HTML files named index.html in directories" 28 | @echo " singlehtml to make a single large HTML file" 29 | @echo " pickle to make pickle files" 30 | @echo " json to make JSON files" 31 | @echo " htmlhelp to make HTML files and a HTML help project" 32 | @echo " qthelp to make HTML files and a qthelp project" 33 | @echo " devhelp to make HTML files and a Devhelp project" 34 | @echo " epub to make an epub" 35 | @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter" 36 | @echo " latexpdf to make LaTeX files and run them through pdflatex" 37 | @echo " latexpdfja to make LaTeX files and run them through platex/dvipdfmx" 38 | @echo " text to make text files" 39 | @echo " man to make manual pages" 40 | @echo " texinfo to make Texinfo files" 41 | @echo " info to make Texinfo files and run them through makeinfo" 42 | @echo " gettext to make PO message catalogs" 43 | @echo " changes to make an overview of all changed/added/deprecated items" 44 | @echo " xml to make Docutils-native XML files" 45 | @echo " pseudoxml to make pseudoxml-XML files for display purposes" 46 | @echo " linkcheck to check all external links for integrity" 47 | @echo " doctest to run all doctests embedded in the documentation (if enabled)" 48 | 49 | clean: 50 | rm -rf $(BUILDDIR)/* 51 | 52 | html: 53 | $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html 54 | @echo 55 | @echo "Build finished. The HTML pages are in $(BUILDDIR)/html." 56 | 57 | dirhtml: 58 | $(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml 59 | @echo 60 | @echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml." 61 | 62 | singlehtml: 63 | $(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml 64 | @echo 65 | @echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml." 66 | 67 | pickle: 68 | $(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle 69 | @echo 70 | @echo "Build finished; now you can process the pickle files." 71 | 72 | json: 73 | $(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json 74 | @echo 75 | @echo "Build finished; now you can process the JSON files." 76 | 77 | htmlhelp: 78 | $(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp 79 | @echo 80 | @echo "Build finished; now you can run HTML Help Workshop with the" \ 81 | ".hhp project file in $(BUILDDIR)/htmlhelp." 82 | 83 | qthelp: 84 | $(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp 85 | @echo 86 | @echo "Build finished; now you can run "qcollectiongenerator" with the" \ 87 | ".qhcp project file in $(BUILDDIR)/qthelp, like this:" 88 | @echo "# qcollectiongenerator $(BUILDDIR)/qthelp/elm.qhcp" 89 | @echo "To view the help file:" 90 | @echo "# assistant -collectionFile $(BUILDDIR)/qthelp/elm.qhc" 91 | 92 | devhelp: 93 | $(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp 94 | @echo 95 | @echo "Build finished." 96 | @echo "To view the help file:" 97 | @echo "# mkdir -p $$HOME/.local/share/devhelp/elm" 98 | @echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/elm" 99 | @echo "# devhelp" 100 | 101 | epub: 102 | $(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub 103 | @echo 104 | @echo "Build finished. The epub file is in $(BUILDDIR)/epub." 105 | 106 | latex: 107 | $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex 108 | @echo 109 | @echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex." 110 | @echo "Run \`make' in that directory to run these through (pdf)latex" \ 111 | "(use \`make latexpdf' here to do that automatically)." 112 | 113 | latexpdf: 114 | $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex 115 | @echo "Running LaTeX files through pdflatex..." 116 | $(MAKE) -C $(BUILDDIR)/latex all-pdf 117 | @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex." 118 | 119 | latexpdfja: 120 | $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex 121 | @echo "Running LaTeX files through platex and dvipdfmx..." 122 | $(MAKE) -C $(BUILDDIR)/latex all-pdf-ja 123 | @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex." 124 | 125 | text: 126 | $(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text 127 | @echo 128 | @echo "Build finished. The text files are in $(BUILDDIR)/text." 129 | 130 | man: 131 | $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man 132 | @echo 133 | @echo "Build finished. The manual pages are in $(BUILDDIR)/man." 134 | 135 | texinfo: 136 | $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo 137 | @echo 138 | @echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo." 139 | @echo "Run \`make' in that directory to run these through makeinfo" \ 140 | "(use \`make info' here to do that automatically)." 141 | 142 | info: 143 | $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo 144 | @echo "Running Texinfo files through makeinfo..." 145 | make -C $(BUILDDIR)/texinfo info 146 | @echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo." 147 | 148 | gettext: 149 | $(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale 150 | @echo 151 | @echo "Build finished. The message catalogs are in $(BUILDDIR)/locale." 152 | 153 | changes: 154 | $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes 155 | @echo 156 | @echo "The overview file is in $(BUILDDIR)/changes." 157 | 158 | linkcheck: 159 | $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck 160 | @echo 161 | @echo "Link check complete; look for any errors in the above output " \ 162 | "or in $(BUILDDIR)/linkcheck/output.txt." 163 | 164 | doctest: 165 | $(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest 166 | @echo "Testing of doctests in the sources finished, look at the " \ 167 | "results in $(BUILDDIR)/doctest/output.txt." 168 | 169 | xml: 170 | $(SPHINXBUILD) -b xml $(ALLSPHINXOPTS) $(BUILDDIR)/xml 171 | @echo 172 | @echo "Build finished. The XML files are in $(BUILDDIR)/xml." 173 | 174 | pseudoxml: 175 | $(SPHINXBUILD) -b pseudoxml $(ALLSPHINXOPTS) $(BUILDDIR)/pseudoxml 176 | @echo 177 | @echo "Build finished. The pseudo-XML files are in $(BUILDDIR)/pseudoxml." 178 | -------------------------------------------------------------------------------- /docs/authors.rst: -------------------------------------------------------------------------------- 1 | .. include:: ../AUTHORS.rst 2 | -------------------------------------------------------------------------------- /docs/conf.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # -*- coding: utf-8 -*- 3 | # 4 | # elm documentation build configuration file, created by 5 | # sphinx-quickstart on Tue Jul 9 22:26:36 2013. 6 | # 7 | # This file is execfile()d with the current directory set to its 8 | # containing dir. 9 | # 10 | # Note that not all possible configuration values are present in this 11 | # autogenerated file. 12 | # 13 | # All configuration values have a default; values that are commented out 14 | # serve to show the default. 15 | 16 | import sys 17 | import os 18 | 19 | # If extensions (or modules to document with autodoc) are in another 20 | # directory, add these directories to sys.path here. If the directory is 21 | # relative to the documentation root, use os.path.abspath to make it 22 | # absolute, like shown here. 23 | #sys.path.insert(0, os.path.abspath('.')) 24 | 25 | # Get the project root dir, which is the parent dir of this 26 | cwd = os.getcwd() 27 | project_root = os.path.dirname(cwd) 28 | 29 | # Insert the project root dir as the first element in the PYTHONPATH. 30 | # This lets us ensure that the source package is imported, and that its 31 | # version is used. 32 | sys.path.insert(0, project_root) 33 | 34 | import elm 35 | import sphinx_rtd_theme 36 | 37 | # -- General configuration --------------------------------------------- 38 | 39 | # If your documentation needs a minimal Sphinx version, state it here. 40 | #needs_sphinx = '1.0' 41 | 42 | # Add any Sphinx extension module names here, as strings. They can be 43 | # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones. 44 | extensions = ['sphinx.ext.autodoc', 'sphinx.ext.viewcode', 45 | 'sphinxcontrib.napoleon', 'sphinx.ext.mathjax'] 46 | 47 | 48 | # Napoleon settings 49 | napoleon_google_docstring = True 50 | napoleon_numpy_docstring = False 51 | napoleon_include_private_with_doc = False 52 | napoleon_include_special_with_doc = True 53 | napoleon_use_admonition_for_examples = False 54 | napoleon_use_admonition_for_notes = False 55 | napoleon_use_admonition_for_references = False 56 | napoleon_use_ivar = True 57 | napoleon_use_param = True 58 | napoleon_use_rtype = True 59 | 60 | # Add any paths that contain templates here, relative to this directory. 61 | templates_path = ['_templates'] 62 | 63 | # The suffix of source filenames. 64 | source_suffix = '.rst' 65 | 66 | # The encoding of source files. 67 | #source_encoding = 'utf-8-sig' 68 | 69 | # The master toctree document. 70 | master_doc = 'index' 71 | 72 | # General information about the project. 73 | project = u'Python Extreme Learning Machine (ELM)' 74 | copyright = u'2015, Augusto Almeida' 75 | 76 | # The version info for the project you're documenting, acts as replacement 77 | # for |version| and |release|, also used in various other places throughout 78 | # the built documents. 79 | # 80 | # The short X.Y version. 81 | version = elm.__version__ 82 | # The full version, including alpha/beta/rc tags. 83 | release = elm.__version__ 84 | 85 | # The language for content autogenerated by Sphinx. Refer to documentation 86 | # for a list of supported languages. 87 | #language = None 88 | 89 | # There are two options for replacing |today|: either, you set today to 90 | # some non-false value, then it is used: 91 | #today = '' 92 | # Else, today_fmt is used as the format for a strftime call. 93 | #today_fmt = '%B %d, %Y' 94 | 95 | # List of patterns, relative to source directory, that match files and 96 | # directories to ignore when looking for source files. 97 | exclude_patterns = ['_build'] 98 | 99 | # The reST default role (used for this markup: `text`) to use for all 100 | # documents. 101 | #default_role = None 102 | 103 | # If true, '()' will be appended to :func: etc. cross-reference text. 104 | #add_function_parentheses = True 105 | 106 | # If true, the current module name will be prepended to all description 107 | # unit titles (such as .. function::). 108 | #add_module_names = True 109 | 110 | # If true, sectionauthor and moduleauthor directives will be shown in the 111 | # output. They are ignored by default. 112 | #show_authors = False 113 | 114 | # The name of the Pygments (syntax highlighting) style to use. 115 | pygments_style = 'sphinx' 116 | 117 | # A list of ignored prefixes for module index sorting. 118 | #modindex_common_prefix = [] 119 | 120 | # If true, keep warnings as "system message" paragraphs in the built 121 | # documents. 122 | #keep_warnings = False 123 | 124 | 125 | # -- Options for HTML output ------------------------------------------- 126 | 127 | # The theme to use for HTML and HTML Help pages. See the documentation for 128 | # a list of builtin themes. 129 | html_theme = 'sphinx_rtd_theme' 130 | 131 | # Theme options are theme-specific and customize the look and feel of a 132 | # theme further. For a list of options available for each theme, see the 133 | # documentation. 134 | #html_theme_options = {} 135 | 136 | # Add any paths that contain custom themes here, relative to this directory. 137 | html_theme_path = [sphinx_rtd_theme.get_html_theme_path()] 138 | 139 | # The name for this set of Sphinx documents. If None, it defaults to 140 | # " v documentation". 141 | #html_title = None 142 | 143 | # A shorter title for the navigation bar. Default is the same as 144 | # html_title. 145 | #html_short_title = None 146 | 147 | # The name of an image file (relative to this directory) to place at the 148 | # top of the sidebar. 149 | #html_logo = None 150 | 151 | # The name of an image file (within the static path) to use as favicon 152 | # of the docs. This file should be a Windows icon file (.ico) being 153 | # 16x16 or 32x32 pixels large. 154 | #html_favicon = None 155 | 156 | # Add any paths that contain custom static files (such as style sheets) 157 | # here, relative to this directory. They are copied after the builtin 158 | # static files, so a file named "default.css" will overwrite the builtin 159 | # "default.css". 160 | html_static_path = ['_static'] 161 | 162 | # If not '', a 'Last updated on:' timestamp is inserted at every page 163 | # bottom, using the given strftime format. 164 | #html_last_updated_fmt = '%b %d, %Y' 165 | 166 | # If true, SmartyPants will be used to convert quotes and dashes to 167 | # typographically correct entities. 168 | #html_use_smartypants = True 169 | 170 | # Custom sidebar templates, maps document names to template names. 171 | #html_sidebars = {} 172 | 173 | # Additional templates that should be rendered to pages, maps page names 174 | # to template names. 175 | #html_additional_pages = {} 176 | 177 | # If false, no module index is generated. 178 | #html_domain_indices = True 179 | 180 | # If false, no index is generated. 181 | #html_use_index = True 182 | 183 | # If true, the index is split into individual pages for each letter. 184 | #html_split_index = False 185 | 186 | # If true, links to the reST sources are added to the pages. 187 | #html_show_sourcelink = True 188 | 189 | # If true, "Created using Sphinx" is shown in the HTML footer. 190 | # Default is True. 191 | #html_show_sphinx = True 192 | 193 | # If true, "(C) Copyright ..." is shown in the HTML footer. 194 | # Default is True. 195 | #html_show_copyright = True 196 | 197 | # If true, an OpenSearch description file will be output, and all pages 198 | # will contain a tag referring to it. The value of this option 199 | # must be the base URL from which the finished HTML is served. 200 | #html_use_opensearch = '' 201 | 202 | # This is the file name suffix for HTML files (e.g. ".xhtml"). 203 | #html_file_suffix = None 204 | 205 | # Output file base name for HTML help builder. 206 | htmlhelp_basename = 'elmdoc' 207 | 208 | 209 | # -- Options for LaTeX output ------------------------------------------ 210 | 211 | latex_elements = { 212 | # The paper size ('letterpaper' or 'a4paper'). 213 | #'papersize': 'letterpaper', 214 | 215 | # The font size ('10pt', '11pt' or '12pt'). 216 | #'pointsize': '10pt', 217 | 218 | # Additional stuff for the LaTeX preamble. 219 | #'preamble': '', 220 | } 221 | 222 | # Grouping the document tree into LaTeX files. List of tuples 223 | # (source start file, target name, title, author, documentclass 224 | # [howto/manual]). 225 | latex_documents = [ 226 | ('index', 'elm.tex', 227 | u'Python Extreme Learning Machine (ELM) Documentation', 228 | u'Augusto Almeida', 'manual'), 229 | ] 230 | 231 | # The name of an image file (relative to this directory) to place at 232 | # the top of the title page. 233 | #latex_logo = None 234 | 235 | # For "manual" documents, if this is true, then toplevel headings 236 | # are parts, not chapters. 237 | #latex_use_parts = False 238 | 239 | # If true, show page references after internal links. 240 | #latex_show_pagerefs = False 241 | 242 | # If true, show URL addresses after external links. 243 | #latex_show_urls = False 244 | 245 | # Documents to append as an appendix to all manuals. 246 | #latex_appendices = [] 247 | 248 | # If false, no module index is generated. 249 | #latex_domain_indices = True 250 | 251 | 252 | # -- Options for manual page output ------------------------------------ 253 | 254 | # One entry per manual page. List of tuples 255 | # (source start file, name, description, authors, manual section). 256 | man_pages = [ 257 | ('index', 'elm', 258 | u'Python Extreme Learning Machine (ELM) Documentation', 259 | [u'Augusto Almeida'], 1) 260 | ] 261 | 262 | # If true, show URL addresses after external links. 263 | #man_show_urls = False 264 | 265 | 266 | # -- Options for Texinfo output ---------------------------------------- 267 | 268 | # Grouping the document tree into Texinfo files. List of tuples 269 | # (source start file, target name, title, author, 270 | # dir menu entry, description, category) 271 | texinfo_documents = [ 272 | ('index', 'elm', 273 | u'Python Extreme Learning Machine (ELM) Documentation', 274 | u'Augusto Almeida', 275 | 'elm', 276 | 'One line description of project.', 277 | 'Miscellaneous'), 278 | ] 279 | 280 | # Documents to append as an appendix to all manuals. 281 | #texinfo_appendices = [] 282 | 283 | # If false, no module index is generated. 284 | #texinfo_domain_indices = True 285 | 286 | # How to display URL addresses: 'footnote', 'no', or 'inline'. 287 | #texinfo_show_urls = 'footnote' 288 | 289 | # If true, do not generate a @detailmenu in the "Top" node's menu. 290 | #texinfo_no_detailmenu = False 291 | -------------------------------------------------------------------------------- /docs/contributing.rst: -------------------------------------------------------------------------------- 1 | .. include:: ../CONTRIBUTING.rst 2 | -------------------------------------------------------------------------------- /docs/elm.rst: -------------------------------------------------------------------------------- 1 | elm package 2 | =========== 3 | 4 | :mod:`elm.elmk` Module 5 | ----------------------- 6 | 7 | .. automodule:: elm.elmk 8 | :members: 9 | :special-members: __init__ 10 | :undoc-members: 11 | :show-inheritance: 12 | 13 | :mod:`elm.elmr` Module 14 | ---------------------- 15 | 16 | .. automodule:: elm.elmr 17 | :members: 18 | :special-members: __init__ 19 | :undoc-members: 20 | :show-inheritance: 21 | 22 | :mod:`elm.mltools` Module 23 | ------------------------- 24 | 25 | .. automodule:: elm.mltools 26 | :members: 27 | :show-inheritance: 28 | 29 | -------------------------------------------------------------------------------- /docs/history.rst: -------------------------------------------------------------------------------- 1 | .. include:: ../HISTORY.rst 2 | -------------------------------------------------------------------------------- /docs/index.rst: -------------------------------------------------------------------------------- 1 | .. elm documentation master file, created by 2 | sphinx-quickstart on Tue Jul 9 22:26:36 2013. 3 | You can adapt this file completely to your liking, but it should at least 4 | contain the root `toctree` directive. 5 | 6 | elm: A Python Extreme Learning Machine 7 | ====================================== 8 | 9 | Basics: 10 | ------- 11 | 12 | .. toctree:: 13 | :maxdepth: 2 14 | 15 | readme 16 | installation 17 | usage 18 | 19 | 20 | API Reference 21 | ------------- 22 | 23 | .. toctree:: 24 | :maxdepth: 2 25 | 26 | elm 27 | 28 | 29 | Project Info 30 | ------------ 31 | 32 | .. toctree:: 33 | :maxdepth: 2 34 | 35 | contributing 36 | authors 37 | history 38 | 39 | 40 | Indices and tables 41 | ================== 42 | 43 | * :ref:`genindex` 44 | * :ref:`modindex` 45 | * :ref:`search` 46 | 47 | -------------------------------------------------------------------------------- /docs/installation.rst: -------------------------------------------------------------------------------- 1 | ============ 2 | Installation 3 | ============ 4 | 5 | At the command line:: 6 | 7 | $ pip install elm 8 | 9 | Or, if you have virtualenv installed:: 10 | 11 | $ virtualenv venv 12 | $ source venv/bin/activate 13 | $ pip install elm 14 | 15 | 16 | .. note:: 17 | If you found an error while using :func:`ELMKernel.search_param` or 18 | :func:`ELMRandom.search_param`, probably is because **pip** installed an 19 | outdated version of **optunity** (currently their pip package and github are not synced). 20 | 21 | To fix it, do:: 22 | 23 | # Download package 24 | $ pip install -d . elm 25 | # Unzip it 26 | $ tar -xf elm*.tar.gz 27 | # Install requirements.txt 28 | $ cd elm*; pip install -r requirements.txt 29 | -------------------------------------------------------------------------------- /docs/make.bat: -------------------------------------------------------------------------------- 1 | @ECHO OFF 2 | 3 | REM Command file for Sphinx documentation 4 | 5 | if "%SPHINXBUILD%" == "" ( 6 | set SPHINXBUILD=sphinx-build 7 | ) 8 | set BUILDDIR=_build 9 | set ALLSPHINXOPTS=-d %BUILDDIR%/doctrees %SPHINXOPTS% . 10 | set I18NSPHINXOPTS=%SPHINXOPTS% . 11 | if NOT "%PAPER%" == "" ( 12 | set ALLSPHINXOPTS=-D latex_paper_size=%PAPER% %ALLSPHINXOPTS% 13 | set I18NSPHINXOPTS=-D latex_paper_size=%PAPER% %I18NSPHINXOPTS% 14 | ) 15 | 16 | if "%1" == "" goto help 17 | 18 | if "%1" == "help" ( 19 | :help 20 | echo.Please use `make ^` where ^ is one of 21 | echo. html to make standalone HTML files 22 | echo. dirhtml to make HTML files named index.html in directories 23 | echo. singlehtml to make a single large HTML file 24 | echo. pickle to make pickle files 25 | echo. json to make JSON files 26 | echo. htmlhelp to make HTML files and a HTML help project 27 | echo. qthelp to make HTML files and a qthelp project 28 | echo. devhelp to make HTML files and a Devhelp project 29 | echo. epub to make an epub 30 | echo. latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter 31 | echo. text to make text files 32 | echo. man to make manual pages 33 | echo. texinfo to make Texinfo files 34 | echo. gettext to make PO message catalogs 35 | echo. changes to make an overview over all changed/added/deprecated items 36 | echo. xml to make Docutils-native XML files 37 | echo. pseudoxml to make pseudoxml-XML files for display purposes 38 | echo. linkcheck to check all external links for integrity 39 | echo. doctest to run all doctests embedded in the documentation if enabled 40 | goto end 41 | ) 42 | 43 | if "%1" == "clean" ( 44 | for /d %%i in (%BUILDDIR%\*) do rmdir /q /s %%i 45 | del /q /s %BUILDDIR%\* 46 | goto end 47 | ) 48 | 49 | 50 | %SPHINXBUILD% 2> nul 51 | if errorlevel 9009 ( 52 | echo. 53 | echo.The 'sphinx-build' command was not found. Make sure you have Sphinx 54 | echo.installed, then set the SPHINXBUILD environment variable to point 55 | echo.to the full path of the 'sphinx-build' executable. Alternatively you 56 | echo.may add the Sphinx directory to PATH. 57 | echo. 58 | echo.If you don't have Sphinx installed, grab it from 59 | echo.http://sphinx-doc.org/ 60 | exit /b 1 61 | ) 62 | 63 | if "%1" == "html" ( 64 | %SPHINXBUILD% -b html %ALLSPHINXOPTS% %BUILDDIR%/html 65 | if errorlevel 1 exit /b 1 66 | echo. 67 | echo.Build finished. The HTML pages are in %BUILDDIR%/html. 68 | goto end 69 | ) 70 | 71 | if "%1" == "dirhtml" ( 72 | %SPHINXBUILD% -b dirhtml %ALLSPHINXOPTS% %BUILDDIR%/dirhtml 73 | if errorlevel 1 exit /b 1 74 | echo. 75 | echo.Build finished. The HTML pages are in %BUILDDIR%/dirhtml. 76 | goto end 77 | ) 78 | 79 | if "%1" == "singlehtml" ( 80 | %SPHINXBUILD% -b singlehtml %ALLSPHINXOPTS% %BUILDDIR%/singlehtml 81 | if errorlevel 1 exit /b 1 82 | echo. 83 | echo.Build finished. The HTML pages are in %BUILDDIR%/singlehtml. 84 | goto end 85 | ) 86 | 87 | if "%1" == "pickle" ( 88 | %SPHINXBUILD% -b pickle %ALLSPHINXOPTS% %BUILDDIR%/pickle 89 | if errorlevel 1 exit /b 1 90 | echo. 91 | echo.Build finished; now you can process the pickle files. 92 | goto end 93 | ) 94 | 95 | if "%1" == "json" ( 96 | %SPHINXBUILD% -b json %ALLSPHINXOPTS% %BUILDDIR%/json 97 | if errorlevel 1 exit /b 1 98 | echo. 99 | echo.Build finished; now you can process the JSON files. 100 | goto end 101 | ) 102 | 103 | if "%1" == "htmlhelp" ( 104 | %SPHINXBUILD% -b htmlhelp %ALLSPHINXOPTS% %BUILDDIR%/htmlhelp 105 | if errorlevel 1 exit /b 1 106 | echo. 107 | echo.Build finished; now you can run HTML Help Workshop with the ^ 108 | .hhp project file in %BUILDDIR%/htmlhelp. 109 | goto end 110 | ) 111 | 112 | if "%1" == "qthelp" ( 113 | %SPHINXBUILD% -b qthelp %ALLSPHINXOPTS% %BUILDDIR%/qthelp 114 | if errorlevel 1 exit /b 1 115 | echo. 116 | echo.Build finished; now you can run "qcollectiongenerator" with the ^ 117 | .qhcp project file in %BUILDDIR%/qthelp, like this: 118 | echo.^> qcollectiongenerator %BUILDDIR%\qthelp\elm.qhcp 119 | echo.To view the help file: 120 | echo.^> assistant -collectionFile %BUILDDIR%\qthelp\elm.ghc 121 | goto end 122 | ) 123 | 124 | if "%1" == "devhelp" ( 125 | %SPHINXBUILD% -b devhelp %ALLSPHINXOPTS% %BUILDDIR%/devhelp 126 | if errorlevel 1 exit /b 1 127 | echo. 128 | echo.Build finished. 129 | goto end 130 | ) 131 | 132 | if "%1" == "epub" ( 133 | %SPHINXBUILD% -b epub %ALLSPHINXOPTS% %BUILDDIR%/epub 134 | if errorlevel 1 exit /b 1 135 | echo. 136 | echo.Build finished. The epub file is in %BUILDDIR%/epub. 137 | goto end 138 | ) 139 | 140 | if "%1" == "latex" ( 141 | %SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex 142 | if errorlevel 1 exit /b 1 143 | echo. 144 | echo.Build finished; the LaTeX files are in %BUILDDIR%/latex. 145 | goto end 146 | ) 147 | 148 | if "%1" == "latexpdf" ( 149 | %SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex 150 | cd %BUILDDIR%/latex 151 | make all-pdf 152 | cd %BUILDDIR%/.. 153 | echo. 154 | echo.Build finished; the PDF files are in %BUILDDIR%/latex. 155 | goto end 156 | ) 157 | 158 | if "%1" == "latexpdfja" ( 159 | %SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex 160 | cd %BUILDDIR%/latex 161 | make all-pdf-ja 162 | cd %BUILDDIR%/.. 163 | echo. 164 | echo.Build finished; the PDF files are in %BUILDDIR%/latex. 165 | goto end 166 | ) 167 | 168 | if "%1" == "text" ( 169 | %SPHINXBUILD% -b text %ALLSPHINXOPTS% %BUILDDIR%/text 170 | if errorlevel 1 exit /b 1 171 | echo. 172 | echo.Build finished. The text files are in %BUILDDIR%/text. 173 | goto end 174 | ) 175 | 176 | if "%1" == "man" ( 177 | %SPHINXBUILD% -b man %ALLSPHINXOPTS% %BUILDDIR%/man 178 | if errorlevel 1 exit /b 1 179 | echo. 180 | echo.Build finished. The manual pages are in %BUILDDIR%/man. 181 | goto end 182 | ) 183 | 184 | if "%1" == "texinfo" ( 185 | %SPHINXBUILD% -b texinfo %ALLSPHINXOPTS% %BUILDDIR%/texinfo 186 | if errorlevel 1 exit /b 1 187 | echo. 188 | echo.Build finished. The Texinfo files are in %BUILDDIR%/texinfo. 189 | goto end 190 | ) 191 | 192 | if "%1" == "gettext" ( 193 | %SPHINXBUILD% -b gettext %I18NSPHINXOPTS% %BUILDDIR%/locale 194 | if errorlevel 1 exit /b 1 195 | echo. 196 | echo.Build finished. The message catalogs are in %BUILDDIR%/locale. 197 | goto end 198 | ) 199 | 200 | if "%1" == "changes" ( 201 | %SPHINXBUILD% -b changes %ALLSPHINXOPTS% %BUILDDIR%/changes 202 | if errorlevel 1 exit /b 1 203 | echo. 204 | echo.The overview file is in %BUILDDIR%/changes. 205 | goto end 206 | ) 207 | 208 | if "%1" == "linkcheck" ( 209 | %SPHINXBUILD% -b linkcheck %ALLSPHINXOPTS% %BUILDDIR%/linkcheck 210 | if errorlevel 1 exit /b 1 211 | echo. 212 | echo.Link check complete; look for any errors in the above output ^ 213 | or in %BUILDDIR%/linkcheck/output.txt. 214 | goto end 215 | ) 216 | 217 | if "%1" == "doctest" ( 218 | %SPHINXBUILD% -b doctest %ALLSPHINXOPTS% %BUILDDIR%/doctest 219 | if errorlevel 1 exit /b 1 220 | echo. 221 | echo.Testing of doctests in the sources finished, look at the ^ 222 | results in %BUILDDIR%/doctest/output.txt. 223 | goto end 224 | ) 225 | 226 | if "%1" == "xml" ( 227 | %SPHINXBUILD% -b xml %ALLSPHINXOPTS% %BUILDDIR%/xml 228 | if errorlevel 1 exit /b 1 229 | echo. 230 | echo.Build finished. The XML files are in %BUILDDIR%/xml. 231 | goto end 232 | ) 233 | 234 | if "%1" == "pseudoxml" ( 235 | %SPHINXBUILD% -b pseudoxml %ALLSPHINXOPTS% %BUILDDIR%/pseudoxml 236 | if errorlevel 1 exit /b 1 237 | echo. 238 | echo.Build finished. The pseudo-XML files are in %BUILDDIR%/pseudoxml. 239 | goto end 240 | ) 241 | 242 | :end 243 | -------------------------------------------------------------------------------- /docs/readme.rst: -------------------------------------------------------------------------------- 1 | .. include:: ../README.rst 2 | -------------------------------------------------------------------------------- /docs/usage.rst: -------------------------------------------------------------------------------- 1 | ===== 2 | Usage 3 | ===== 4 | 5 | To use Python Extreme Learning Machine (ELM) in a project:: 6 | 7 | import elm 8 | 9 | # download an example dataset from 10 | # https://github.com/acba/elm/tree/develop/tests/data 11 | 12 | 13 | # load dataset 14 | data = elm.read("iris.data") 15 | 16 | # create a classifier 17 | elmk = elm.ELMKernel() 18 | 19 | # search for best parameter for this dataset 20 | # define "kfold" cross-validation method, "accuracy" as a objective function 21 | # to be optimized and perform 10 searching steps. 22 | # best parameters will be saved inside 'elmk' object 23 | elmk.search_param(data, cv="kfold", of="accuracy", eval=10) 24 | 25 | # split data in training and testing sets 26 | # use 80% of dataset to training and shuffle data before splitting 27 | tr_set, te_set = elm.split_sets(data, training_percent=.8, perm=True) 28 | 29 | #train and test 30 | # results are Error objects 31 | tr_result = elmk.train(tr_set) 32 | te_result = elmk.test(te_set) 33 | 34 | print(te_result.get_accuracy) 35 | -------------------------------------------------------------------------------- /elm/__init__.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | 3 | __author__ = 'Augusto Almeida' 4 | __email__ = 'augustocbenvenuto@gmail.com' 5 | __version__ = '0.1.3' 6 | 7 | from .elmk import ELMKernel 8 | from .elmr import ELMRandom 9 | from .mltools import * -------------------------------------------------------------------------------- /elm/elmk.cfg: -------------------------------------------------------------------------------- 1 | # These ranges were created after analysing several IBOVESPA stocks and 2 | # finding the most common parameters 3 | 4 | [DEFAULT] 5 | elmk_c_param_name = ["Regularization Coefficient"] 6 | elmk_c_range = [(-15, 15)] 7 | 8 | [rbf] 9 | kernel_n_param = 1 10 | kernel_param_name = ["gamma"] 11 | kernel_params_range = [(-15, 15)] 12 | 13 | [linear] 14 | kernel_n_param = 0 15 | kernel_params_range = [] 16 | 17 | [poly] 18 | kernel_n_param = 2 19 | kernel_param_name = ["coef0", "degree"] 20 | kernel_params_range = [(-15, 15), (1, 5)] -------------------------------------------------------------------------------- /elm/elmk.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | 3 | """ 4 | This file contains ELMKernel classes and all developed methods. 5 | """ 6 | 7 | # Python2 support 8 | from __future__ import unicode_literals 9 | from __future__ import division 10 | from __future__ import absolute_import 11 | from __future__ import print_function 12 | 13 | from .mltools import * 14 | 15 | import numpy as np 16 | import optunity 17 | import ast 18 | 19 | import sys 20 | if sys.version_info < (3, 0): 21 | import ConfigParser as configparser 22 | else: 23 | import configparser 24 | 25 | 26 | # Find configuration file 27 | from pkg_resources import Requirement, resource_filename 28 | _ELMK_CONFIG = resource_filename(Requirement.parse("elm"), "elm/elmk.cfg") 29 | 30 | 31 | class ELMKernel(MLTools): 32 | """ 33 | A Python implementation of ELM Kernel defined by Huang[1]. 34 | 35 | An ELM is a single-hidden layer feedforward network (SLFN) proposed by 36 | Huang back in 2006, in 2012 the author revised and introduced a new 37 | concept of using kernel functions to his previous work. 38 | 39 | This implementation currently accepts both methods proposed at 2012, 40 | random neurons and kernel functions to estimate classifier/regression 41 | functions. 42 | 43 | Let the dimensionality "d" of the problem be the sum of "t" size (number of 44 | targets per pattern) and "f" size (number of features per pattern). 45 | So, d = t + f 46 | 47 | The data will be set as Pattern = (Target | Features). 48 | 49 | If database has *N* patterns, its size follows *Nxd*. 50 | 51 | 52 | Note: 53 | [1] Paper reference: Huang, 2012, "Extreme Learning Machine for 54 | Regression and Multiclass Classification" 55 | 56 | Attributes: 57 | output_weight (numpy.ndarray): a column vector (*Nx1*) calculated 58 | after training, represent :math:\\beta. 59 | training_patterns (numpy.ndarray): a matrix (*Nxd*) containing all 60 | patterns used for training. 61 | 62 | Need to save all training patterns to perform kernel 63 | calculation at testing and prediction phase. 64 | param_kernel_function (str): kernel function that will be used 65 | for training. 66 | param_c (float): regularization coefficient (*C*) used for training. 67 | param_kernel_params (list of float): kernel function parameters 68 | that will be used for training. 69 | 70 | Other Parameters: 71 | regressor_name (str): The name of classifier/regressor. 72 | available_kernel_functions (list of str): List with all available 73 | kernel functions. 74 | default_param_kernel_function (str): Default kernel function if 75 | not set at class constructor. 76 | default_param_c (float): Default parameter c value if 77 | not set at class constructor. 78 | default_param_kernel_params (list of float): Default kernel 79 | function parameters if not set at class constructor. 80 | 81 | Note: 82 | * **regressor_name**: defaults to "elmk". 83 | * **default_param_kernel_function**: defaults to "rbf". 84 | * **default_param_c**: defaults to 9. 85 | * **default_param_kernel_params**: defaults to [-15]. 86 | 87 | """ 88 | 89 | def __init__(self, params=[]): 90 | """ 91 | Class constructor. 92 | 93 | Arguments: 94 | params (list): first argument (*str*) is an available kernel 95 | function, second argument (*float*) is the coefficient 96 | *C* of regularization and the third and last argument is 97 | a list of arguments for the kernel function. 98 | 99 | Example: 100 | 101 | >>> import elm 102 | >>> params = ["linear", 5, []] 103 | >>> elmk = elm.ELMKernel(params) 104 | 105 | """ 106 | super(self.__class__, self).__init__() 107 | 108 | self.regressor_name = "elmk" 109 | 110 | self.available_kernel_functions = ["rbf", "linear", "poly"] 111 | 112 | self.default_param_kernel_function = "rbf" 113 | self.default_param_c = 9 114 | self.default_param_kernel_params = [-15] 115 | 116 | self.output_weight = [] 117 | self.training_patterns = [] 118 | 119 | # Initialized parameters values 120 | if not params: 121 | self.param_kernel_function = self.default_param_kernel_function 122 | self.param_c = self.default_param_c 123 | self.param_kernel_params = self.default_param_kernel_params 124 | else: 125 | self.param_kernel_function = params[0] 126 | self.param_c = params[1] 127 | self.param_kernel_params = params[2] 128 | 129 | # ######################## 130 | # Private Methods 131 | # ######################## 132 | 133 | def _kernel_matrix(self, training_patterns, kernel_type, kernel_param, 134 | test_patterns=None): 135 | """ Calculate the Omega matrix (kernel matrix). 136 | 137 | If test_patterns is None, then the training Omega matrix will be 138 | calculated. This matrix represents the kernel value from each 139 | pattern of the training matrix with each other. If test_patterns 140 | exists, then the test Omega matrix will be calculated. This 141 | matrix represents the kernel value from each pattern of the 142 | training matrix with the patterns of test matrix. 143 | 144 | Arguments: 145 | training_patterns (numpy.ndarray): A matrix containing the 146 | features from all training patterns. 147 | kernel_type (str): The type of kernel to be used e.g: rbf 148 | kernel_param (list of float): The parameters of the chosen 149 | kernel. 150 | test_patterns (numpy.ndarray): An optional parameter used to 151 | calculate the Omega test matrix. 152 | 153 | Returns: 154 | numpy.ndarray: Omega matrix 155 | 156 | """ 157 | number_training_patterns = training_patterns.shape[0] 158 | 159 | if kernel_type == "rbf": 160 | if test_patterns is None: 161 | temp_omega = np.dot( 162 | np.sum(training_patterns ** 2, axis=1).reshape(-1, 1), 163 | np.ones((1, number_training_patterns))) 164 | 165 | temp_omega = temp_omega + temp_omega.conj().T 166 | 167 | omega = np.exp( 168 | -(2 ** kernel_param[0]) * (temp_omega - 2 * (np.dot( 169 | training_patterns, training_patterns.conj().T)))) 170 | 171 | else: 172 | number_test_patterns = test_patterns.shape[0] 173 | 174 | temp1 = np.dot( 175 | np.sum(training_patterns ** 2, axis=1).reshape(-1, 1), 176 | np.ones((1, number_test_patterns))) 177 | temp2 = np.dot( 178 | np.sum(test_patterns ** 2, axis=1).reshape(-1, 1), 179 | np.ones((1, number_training_patterns))) 180 | temp_omega = temp1 + temp2.conj().T 181 | 182 | omega = \ 183 | np.exp(- (2 ** kernel_param[0]) * 184 | (temp_omega - 2 * np.dot(training_patterns, 185 | test_patterns.conj().T))) 186 | elif kernel_type == "linear": 187 | if test_patterns is None: 188 | omega = np.dot(training_patterns, training_patterns.conj().T) 189 | else: 190 | omega = np.dot(training_patterns, test_patterns.conj().T) 191 | 192 | elif kernel_type == "poly": 193 | # Power a**x is undefined when x is real and 'a' is negative, 194 | # so is necessary to force an integer value 195 | kernel_param[1] = round(kernel_param[1]) 196 | 197 | if test_patterns is None: 198 | temp = np.dot(training_patterns, training_patterns.conj().T)+ kernel_param[0] 199 | 200 | omega = temp ** kernel_param[1] 201 | else: 202 | temp = np.dot(training_patterns, test_patterns.conj().T)+ kernel_param[0] 203 | omega = temp ** kernel_param[1] 204 | 205 | else: 206 | print("Error: Invalid or unavailable kernel function.") 207 | return 208 | 209 | return omega 210 | 211 | def _local_train(self, training_patterns, training_expected_targets, 212 | params): 213 | 214 | # If params not provided, uses initialized parameters values 215 | if not params: 216 | pass 217 | else: 218 | self.param_kernel_function = params[0] 219 | self.param_c = params[1] 220 | self.param_kernel_params = params[2] 221 | 222 | # Need to save all training patterns to perform kernel calculation at 223 | # testing and prediction phase 224 | self.training_patterns = training_patterns 225 | 226 | number_training_patterns = self.training_patterns.shape[0] 227 | 228 | # Training phase 229 | 230 | omega_train = self._kernel_matrix(self.training_patterns, 231 | self.param_kernel_function, 232 | self.param_kernel_params) 233 | 234 | self.output_weight = np.linalg.solve( 235 | (omega_train + np.eye(number_training_patterns) / 236 | (2 ** self.param_c)), 237 | training_expected_targets).reshape(-1, 1) 238 | 239 | training_predicted_targets = np.dot(omega_train, self.output_weight) 240 | 241 | return training_predicted_targets 242 | 243 | def _local_test(self, testing_patterns, testing_expected_targets, 244 | predicting): 245 | 246 | omega_test = self._kernel_matrix(self.training_patterns, 247 | self.param_kernel_function, 248 | self.param_kernel_params, 249 | testing_patterns) 250 | 251 | testing_predicted_targets = np.dot(omega_test.conj().T, 252 | self.output_weight) 253 | 254 | return testing_predicted_targets 255 | 256 | # ######################## 257 | # Public Methods 258 | # ######################## 259 | 260 | def get_available_kernel_functions(self): 261 | """ 262 | Return available kernel functions. 263 | """ 264 | 265 | return self.available_kernel_functions 266 | 267 | def print_parameters(self): 268 | """ 269 | Print parameters values. 270 | """ 271 | 272 | print() 273 | print("Regressor Parameters") 274 | print() 275 | print("Regularization coefficient: ", self.param_c) 276 | print("Kernel Function: ", self.param_kernel_function) 277 | print("Kernel parameters: ", self.param_kernel_params) 278 | print() 279 | print("CV error: ", self.cv_best_rmse) 280 | print() 281 | 282 | def search_param(self, database, dataprocess=None, path_filename=("", ""), 283 | save=False, cv="ts", of="rmse", kf=None, eval=50, op_solver_name="cma-es"): 284 | """ 285 | Search best hyperparameters for classifier/regressor based on 286 | optunity algorithms. 287 | 288 | Arguments: 289 | database (numpy.ndarray): a matrix containing all patterns 290 | that will be used for training/testing at some 291 | cross-validation method. 292 | dataprocess (DataProcess): an object that will pre-process 293 | database before training. Defaults to None. 294 | path_filename (tuple): *TODO*. 295 | save (bool): *TODO*. 296 | cv (str): Cross-validation method. Defaults to "ts". 297 | of (str): Objective function to be minimized at 298 | optunity.minimize. Defaults to "rmse". 299 | kf (list of str): a list of kernel functions to be used by 300 | the search. Defaults to None, this set all available 301 | functions. 302 | eval (int): Number of steps (evaluations) to optunity algorithm. 303 | 304 | 305 | Each set of hyperparameters will perform a cross-validation 306 | method chosen by param cv. 307 | 308 | 309 | Available *cv* methods: 310 | - "ts" :func:`mltools.time_series_cross_validation()` 311 | Perform a time-series cross-validation suggested by Hydman. 312 | 313 | - "kfold" :func:`mltools.kfold_cross_validation()` 314 | Perform a k-fold cross-validation. 315 | 316 | Available *of* function: 317 | - "accuracy", "rmse", "mape", "me". 318 | 319 | 320 | See Also: 321 | http://optunity.readthedocs.org/en/latest/user/index.html 322 | """ 323 | 324 | if kf is None: 325 | search_kernel_functions = self.available_kernel_functions 326 | elif type(kf) is list: 327 | search_kernel_functions = kf 328 | else: 329 | raise Exception("Invalid format for argument 'kf'.") 330 | 331 | print(self.regressor_name) 332 | print("##### Start search #####") 333 | 334 | config = configparser.ConfigParser() 335 | 336 | if sys.version_info < (3, 0): 337 | config.readfp(open(_ELMK_CONFIG)) 338 | else: 339 | config.read_file(open(_ELMK_CONFIG)) 340 | 341 | best_function_error = 99999.9 342 | temp_error = best_function_error 343 | best_param_c = 0 344 | best_param_kernel_function = "" 345 | best_param_kernel_param = [] 346 | for kernel_function in search_kernel_functions: 347 | 348 | if sys.version_info < (3, 0): 349 | elmk_c_range = ast.literal_eval(config.get("DEFAULT", 350 | "elmk_c_range")) 351 | 352 | n_parameters = config.getint(kernel_function, "kernel_n_param") 353 | kernel_p_range = \ 354 | ast.literal_eval(config.get(kernel_function, 355 | "kernel_params_range")) 356 | 357 | else: 358 | kernel_config = config[kernel_function] 359 | 360 | elmk_c_range = ast.literal_eval(kernel_config["elmk_c_range"]) 361 | 362 | n_parameters = int(kernel_config["kernel_n_param"]) 363 | kernel_p_range = \ 364 | ast.literal_eval(kernel_config["kernel_params_range"]) 365 | 366 | param_ranges = [[elmk_c_range[0][0], elmk_c_range[0][1]]] 367 | for param in range(n_parameters): 368 | param_ranges.append([kernel_p_range[param][0], 369 | kernel_p_range[param][1]]) 370 | 371 | def wrapper_0param(param_c): 372 | """ 373 | Wrapper for objective function. 374 | """ 375 | 376 | if cv == "ts": 377 | cv_tr_error, cv_te_error = \ 378 | time_series_cross_validation(self, database, 379 | [kernel_function, 380 | param_c, 381 | list([])], 382 | number_folds=10, 383 | dataprocess=dataprocess) 384 | 385 | elif cv == "kfold": 386 | cv_tr_error, cv_te_error = \ 387 | kfold_cross_validation(self, database, 388 | [kernel_function, 389 | param_c, 390 | list([])], 391 | number_folds=10, 392 | dataprocess=dataprocess) 393 | 394 | else: 395 | raise Exception("Invalid type of cross-validation.") 396 | 397 | if of == "accuracy": 398 | util = 1 / cv_te_error.get_accuracy() 399 | else: 400 | util = cv_te_error.get(of) 401 | 402 | # print("c:", param_c, "util: ", util) 403 | return util 404 | 405 | def wrapper_1param(param_c, param_kernel): 406 | """ 407 | Wrapper for optunity. 408 | """ 409 | 410 | if cv == "ts": 411 | cv_tr_error, cv_te_error = \ 412 | time_series_cross_validation(self, database, 413 | [kernel_function, 414 | param_c, 415 | list([param_kernel])], 416 | number_folds=10, 417 | dataprocess=dataprocess) 418 | 419 | elif cv == "kfold": 420 | cv_tr_error, cv_te_error = \ 421 | kfold_cross_validation(self, database, 422 | [kernel_function, 423 | param_c, 424 | list([param_kernel])], 425 | number_folds=10, 426 | dataprocess=dataprocess) 427 | 428 | else: 429 | raise Exception("Invalid type of cross-validation.") 430 | 431 | if of == "accuracy": 432 | util = 1 / cv_te_error.get_accuracy() 433 | else: 434 | util = cv_te_error.get(of) 435 | 436 | # print("c:", param_c, " gamma:", param_kernel, "util: ", util) 437 | return util 438 | 439 | def wrapper_2param(param_c, param_kernel1, param_kernel2): 440 | """ 441 | Wrapper for optunity. 442 | """ 443 | 444 | if cv == "ts": 445 | cv_tr_error, cv_te_error = \ 446 | time_series_cross_validation(self, database, 447 | [kernel_function, 448 | param_c, 449 | list([param_kernel1, 450 | param_kernel2])], 451 | number_folds=10, 452 | dataprocess=dataprocess) 453 | 454 | elif cv == "kfold": 455 | cv_tr_error, cv_te_error = \ 456 | kfold_cross_validation(self, database, 457 | [kernel_function, 458 | param_c, 459 | list([param_kernel1, 460 | param_kernel2])], 461 | number_folds=10, 462 | dataprocess=dataprocess) 463 | 464 | else: 465 | raise Exception("Invalid type of cross-validation.") 466 | 467 | if of == "accuracy": 468 | util = 1 / cv_te_error.get_accuracy() 469 | else: 470 | util = cv_te_error.get(of) 471 | 472 | # print("c:", param_c, " param1:", param_kernel1, 473 | # " param2:", param_kernel2, "util: ", util) 474 | return util 475 | 476 | if kernel_function == "linear": 477 | optimal_parameters, details, _ = \ 478 | optunity.minimize(wrapper_0param, 479 | solver_name=op_solver_name, 480 | num_evals=eval, 481 | param_c=param_ranges[0]) 482 | 483 | elif kernel_function == "rbf": 484 | optimal_parameters, details, _ = \ 485 | optunity.minimize(wrapper_1param, 486 | solver_name=op_solver_name, 487 | num_evals=eval, 488 | param_c=param_ranges[0], 489 | param_kernel=param_ranges[1]) 490 | 491 | elif kernel_function == "poly": 492 | optimal_parameters, details, _ = \ 493 | optunity.minimize(wrapper_2param, 494 | solver_name=op_solver_name, 495 | num_evals=eval, 496 | param_c=param_ranges[0], 497 | param_kernel1=param_ranges[1], 498 | param_kernel2=param_ranges[2]) 499 | else: 500 | raise Exception("Invalid kernel function.") 501 | 502 | # Save best kernel result 503 | if details[0] < temp_error: 504 | temp_error = details[0] 505 | 506 | if of == "accuracy": 507 | best_function_error = 1 / temp_error 508 | else: 509 | best_function_error = temp_error 510 | 511 | best_param_kernel_function = kernel_function 512 | best_param_c = optimal_parameters["param_c"] 513 | 514 | if best_param_kernel_function == "linear": 515 | best_param_kernel_param = [] 516 | elif best_param_kernel_function == "rbf": 517 | best_param_kernel_param = [optimal_parameters["param_kernel"]] 518 | elif best_param_kernel_function == "poly": 519 | best_param_kernel_param = \ 520 | [optimal_parameters["param_kernel1"], 521 | optimal_parameters["param_kernel2"]] 522 | else: 523 | raise Exception("Invalid kernel function.") 524 | 525 | # print("best: ", best_param_kernel_function, 526 | # best_function_error, best_param_c, best_param_kernel_param) 527 | 528 | if of == "accuracy": 529 | print("Kernel function: ", kernel_function, 530 | " best cv value: ", 1/details[0]) 531 | else: 532 | print("Kernel function: ", kernel_function, 533 | " best cv value: ", details[0]) 534 | 535 | 536 | # MLTools attribute 537 | self.cv_best_rmse = best_function_error 538 | 539 | # ELM attribute 540 | self.param_c = best_param_c 541 | self.param_kernel_function = best_param_kernel_function 542 | self.param_kernel_params = best_param_kernel_param 543 | 544 | print("##### Search complete #####") 545 | self.print_parameters() 546 | 547 | return None 548 | 549 | def train(self, training_matrix, params=[]): 550 | """ 551 | Calculate output_weight values needed to test/predict data. 552 | 553 | If params is provided, this method will use at training phase. 554 | Else, it will use the default value provided at object 555 | initialization. 556 | 557 | Arguments: 558 | training_matrix (numpy.ndarray): a matrix containing all 559 | patterns that will be used for training. 560 | params (list): a list of parameters defined at 561 | :func:`ELMKernel.__init__` 562 | 563 | Returns: 564 | :class:`Error`: training error object containing expected, 565 | predicted targets and all error metrics. 566 | 567 | Note: 568 | Training matrix must have target variables as the first column. 569 | 570 | """ 571 | 572 | return self._ml_train(training_matrix, params) 573 | 574 | def test(self, testing_matrix, predicting=False): 575 | """ 576 | Calculate test predicted values based on previous training. 577 | 578 | Args: 579 | testing_matrix (numpy.ndarray): a matrix containing all 580 | patterns that will be used for testing. 581 | predicting (bool): Don't set. 582 | 583 | Returns: 584 | :class:`Error`: testing error object containing expected, 585 | predicted targets and all error metrics. 586 | 587 | Note: 588 | Testing matrix must have target variables as the first column. 589 | """ 590 | 591 | return self._ml_test(testing_matrix, predicting) 592 | 593 | @copy_doc_of(MLTools._ml_predict) 594 | def predict(self, horizon=1): 595 | # self.__doc__ = self._ml_predict.__doc__ 596 | 597 | return self._ml_predict(horizon) 598 | 599 | @copy_doc_of(MLTools._ml_train_iterative) 600 | def train_iterative(self, database_matrix, params=[], 601 | sliding_window=168, k=1): 602 | # self.__doc__ = self._ml_train_iterative.__doc__ 603 | 604 | return self._ml_train_iterative(database_matrix, params, 605 | sliding_window, k) 606 | 607 | 608 | -------------------------------------------------------------------------------- /elm/elmr.cfg: -------------------------------------------------------------------------------- 1 | # These ranges were created after analysing several IBOVESPA stocks and 2 | # finding the most common parameters 3 | 4 | [DEFAULT] 5 | elmr_c_param_name = ["Regularization Coefficient"] 6 | elmr_c_range = [(-15, 15)] 7 | elmr_neurons = 500 -------------------------------------------------------------------------------- /elm/elmr.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | 3 | """ 4 | This file contains ELMKernel classes and all developed methods. 5 | """ 6 | 7 | # Python2 support 8 | from __future__ import unicode_literals 9 | from __future__ import division 10 | from __future__ import absolute_import 11 | from __future__ import print_function 12 | 13 | from .mltools import * 14 | 15 | import numpy as np 16 | import optunity 17 | import ast 18 | 19 | import sys 20 | if sys.version_info < (3, 0): 21 | import ConfigParser as configparser 22 | else: 23 | import configparser 24 | 25 | try: 26 | from scipy.special import expit 27 | except ImportError: 28 | _SCIPY = 0 29 | else: 30 | _SCIPY = 1 31 | 32 | # Find configuration file 33 | from pkg_resources import Requirement, resource_filename 34 | _ELMR_CONFIG = resource_filename(Requirement.parse("elm"), "elm/elmr.cfg") 35 | 36 | 37 | class ELMRandom(MLTools): 38 | """ 39 | A Python implementation of ELM Random Neurons defined by Huang[1]. 40 | 41 | An ELM is a single-hidden layer feedforward network (SLFN) proposed by 42 | Huang back in 2006, in 2012 the author revised and introduced a new 43 | concept of using kernel functions to his previous work. 44 | 45 | This implementation currently accepts both methods proposed at 2012, 46 | random neurons and kernel functions to estimate classifier/regression 47 | functions. 48 | 49 | Let the dimensionality "d" of the problem be the sum of "t" size (number of 50 | targets per pattern) and "f" size (number of features per pattern). 51 | So, d = t + f 52 | 53 | The data will be set as Pattern = (Target | Features). 54 | 55 | If database has *N* patterns, its size follows *Nxd*. 56 | 57 | 58 | Note: 59 | [1] Paper reference: Huang, 2012, "Extreme Learning Machine for 60 | Regression and Multiclass Classification" 61 | 62 | Attributes: 63 | input_weight (numpy.ndarray): a random matrix (*Lxd-1*) needed 64 | to calculate H(**x**). 65 | output_weight (numpy.ndarray): a column vector (*Nx1*) calculated 66 | after training, represent :math:\\beta. 67 | bias_of_hidden_neurons (numpy.ndarray): a random column vector 68 | (*Lx1*) needed to calculate H(**x**). 69 | param_function (str): function that will be used for training. 70 | param_c (float): regularization coefficient (*C*) used for training. 71 | param_l (list of float): number of neurons that will be used for 72 | training. 73 | param_opt (bool): a boolean used to calculate an optimization 74 | when number of training patterns are much larger than neurons 75 | (N >> L). 76 | 77 | Other Parameters: 78 | regressor_name (str): The name of classifier/regressor. 79 | available_functions (list of str): List with all available 80 | functions. 81 | default_param_function (str): Default function if not set at 82 | class constructor. 83 | default_param_c (float): Default parameter c value if not set at 84 | class constructor. 85 | default_param_l (integer): Default number of neurons if not set at 86 | class constructor. 87 | default_param_opt (bool): Default boolean optimization flag. 88 | 89 | Note: 90 | * **regressor_name**: defaults to "elmr". 91 | * **default_param_function**: defaults to "sigmoid". 92 | * **default_param_c**: defaults to 2 ** -6. 93 | * **default_param_l**: defaults to 500. 94 | * **default_param_opt**: defaults to False. 95 | 96 | """ 97 | 98 | def __init__(self, params=[]): 99 | """ 100 | Class constructor. 101 | 102 | Arguments: 103 | params (list): first argument (*str*) is an available function, 104 | second argument (*float*) is the coefficient *C* of 105 | regularization, the third is the number of hidden neurons 106 | and the last argument is an optimization boolean. 107 | 108 | Example: 109 | 110 | >>> import elm 111 | >>> params = ["sigmoid", 1, 500, False] 112 | >>> elmr = elm.ELMRandom(params) 113 | 114 | """ 115 | super(self.__class__, self).__init__() 116 | 117 | self.available_functions = ["sigmoid", "multiquadric"] 118 | 119 | self.regressor_name = "elmr" 120 | 121 | self.default_param_function = "sigmoid" 122 | self.default_param_c = 2 ** -6 123 | self.default_param_l = 500 124 | self.default_param_opt = False 125 | 126 | self.input_weight = [] 127 | self.output_weight = [] 128 | self.bias_of_hidden_neurons = [] 129 | 130 | # Initialized parameters values 131 | if not params: 132 | self.param_function = self.default_param_function 133 | self.param_c = self.default_param_c 134 | self.param_l = self.default_param_l 135 | self.param_opt = self.default_param_opt 136 | else: 137 | self.param_function = params[0] 138 | self.param_c = params[1] 139 | self.param_l = params[2] 140 | self.param_opt = params[3] 141 | 142 | # ######################## 143 | # Private Methods 144 | # ######################## 145 | 146 | def __set_random_weights(self, number_of_hidden_nodes, 147 | number_of_attributes): 148 | """ 149 | Initialize random values to calculate function 150 | 151 | Arguments: 152 | number_hidden_nodes (int): number of neurons. 153 | number_of_attributes (int): number of features. 154 | 155 | """ 156 | 157 | self.input_weight = np.random.rand(number_of_hidden_nodes, 158 | number_of_attributes) * 2 - 1 159 | 160 | self.bias_of_hidden_neurons = np.random.rand(number_of_hidden_nodes, 1) 161 | 162 | def __map_hidden_layer(self, function_type, number_hidden_nodes, data): 163 | """ 164 | Map argument "data" to the hidden layer feature space. 165 | 166 | Arguments: 167 | function_type (str): function to map input data to feature 168 | space. 169 | number_hidden_nodes (int): number of hidden neurons. 170 | data (numpy.ndarray): data to be mapped to feature space. 171 | 172 | Returns: 173 | numpy.ndarray: mapped data. 174 | 175 | """ 176 | 177 | number_of_data = data.shape[0] 178 | 179 | if function_type == "sigmoid" or function_type == "sig" or \ 180 | function_type == "sin" or function_type == "sine" or \ 181 | function_type == "hardlim" or \ 182 | function_type == "tribas": 183 | 184 | temp = np.dot(self.input_weight, data.conj().T) 185 | bias_matrix = np.tile(self.bias_of_hidden_neurons, 186 | number_of_data) 187 | temp = temp + bias_matrix 188 | 189 | elif function_type == "mtquadric" or function_type == "multiquadric": 190 | temph1 = np.tile(np.sum(data ** 2, axis=1).reshape(-1, 1), 191 | number_hidden_nodes) 192 | 193 | temph2 = \ 194 | np.tile(np.sum(self.input_weight ** 2, axis=1).reshape(-1, 1), 195 | number_of_data) 196 | 197 | temp = temph1 + temph2.conj().T \ 198 | - 2 * np.dot(data, self.input_weight.conj().T) 199 | 200 | temp = temp.conj().T + \ 201 | np.tile(self.bias_of_hidden_neurons ** 2, number_of_data) 202 | 203 | elif function_type == "gaussian" or function_type == "rbf": 204 | temph1 = np.tile(np.sum(data ** 2, axis=1).reshape(-1, 1), 205 | number_hidden_nodes) 206 | 207 | temph2 = \ 208 | np.tile(np.sum(self.input_weight ** 2, axis=1).reshape(-1, 1), 209 | number_of_data) 210 | 211 | temp = temph1 + temph2.conj().T \ 212 | - 2 * np.dot(data, self.input_weight.conj().T) 213 | 214 | temp = \ 215 | np.multiply(temp.conj().T, np.tile(self.bias_of_hidden_neurons, 216 | number_of_data)) 217 | else: 218 | print("Error: Invalid function type") 219 | return 220 | 221 | if function_type == "sigmoid" or function_type == "sig": 222 | if _SCIPY: 223 | h_matrix = expit(temp) 224 | else: 225 | h_matrix = 1 / (1 + np.exp(-temp)) 226 | elif function_type == "sine" or function_type == "sin": 227 | h_matrix = np.sin(temp) 228 | elif function_type == "mtquadric" or function_type == "multiquadric": 229 | h_matrix = np.sqrt(temp) 230 | elif function_type == "gaussian" or function_type == "rbf": 231 | h_matrix = np.exp(temp) 232 | else: 233 | print("Error: Invalid function type") 234 | return 235 | 236 | return h_matrix 237 | 238 | def _local_train(self, training_patterns, training_expected_targets, 239 | params): 240 | 241 | # If params not provided, uses initialized parameters values 242 | if not params: 243 | pass 244 | else: 245 | self.param_function = params[0] 246 | self.param_c = params[1] 247 | self.param_l = params[2] 248 | self.param_opt = params[3] 249 | 250 | number_of_attributes = training_patterns.shape[1] 251 | 252 | self.__set_random_weights(self.param_l, number_of_attributes) 253 | 254 | h_train = self.__map_hidden_layer(self.param_function, self.param_l, 255 | training_patterns) 256 | 257 | # If N >>> L, param_opt should be True 258 | if self.param_opt: 259 | self.output_weight = np.linalg.solve( 260 | (np.eye(h_train.shape[0]) / self.param_c) + 261 | np.dot(h_train, h_train.conj().T), 262 | np.dot(h_train, training_expected_targets)) 263 | 264 | else: 265 | self.output_weight = np.dot(h_train, np.linalg.solve( 266 | ((np.eye(h_train.shape[1]) / self.param_c) + np.dot( 267 | h_train.conj().T, h_train)), 268 | training_expected_targets)) 269 | 270 | training_predicted_targets = np.dot(h_train.conj().T, 271 | self.output_weight) 272 | 273 | return training_predicted_targets 274 | 275 | def _local_test(self, testing_patterns, testing_expected_targets, 276 | predicting): 277 | 278 | h_test = self.__map_hidden_layer(self.param_function, self.param_l, 279 | testing_patterns) 280 | 281 | testing_predicted_targets = np.dot(h_test.conj().T, self.output_weight) 282 | 283 | return testing_predicted_targets 284 | 285 | # ######################## 286 | # Public Methods 287 | # ######################## 288 | 289 | def search_param(self, database, dataprocess=None, path_filename=("", ""), 290 | save=False, cv="ts", of="rmse", f=None, eval=50): 291 | """ 292 | Search best hyperparameters for classifier/regressor based on 293 | optunity algorithms. 294 | 295 | Arguments: 296 | database (numpy.ndarray): a matrix containing all patterns 297 | that will be used for training/testing at some 298 | cross-validation method. 299 | dataprocess (DataProcess): an object that will pre-process 300 | database before training. Defaults to None. 301 | path_filename (tuple): *TODO*. 302 | save (bool): *TODO*. 303 | cv (str): Cross-validation method. Defaults to "ts". 304 | of (str): Objective function to be minimized at 305 | optunity.minimize. Defaults to "rmse". 306 | f (list of str): a list of functions to be used by the 307 | search. Defaults to None, this set all available 308 | functions. 309 | eval (int): Number of steps (evaluations) to optunity algorithm. 310 | 311 | Each set of hyperparameters will perform a cross-validation 312 | method chosen by param cv. 313 | 314 | Available *cv* methods: 315 | - "ts" :func:`mltools.time_series_cross_validation()` 316 | Perform a time-series cross-validation suggested by Hydman. 317 | 318 | - "kfold" :func:`mltools.kfold_cross_validation()` 319 | Perform a k-fold cross-validation. 320 | 321 | Available *of* function: 322 | - "accuracy", "rmse", "mape", "me". 323 | 324 | 325 | See Also: 326 | http://optunity.readthedocs.org/en/latest/user/index.html 327 | """ 328 | 329 | if f is None: 330 | search_functions = self.available_functions 331 | elif type(f) is list: 332 | search_functions = f 333 | else: 334 | raise Exception("Invalid format for argument 'f'.") 335 | 336 | print(self.regressor_name) 337 | print("##### Start search #####") 338 | 339 | config = configparser.ConfigParser() 340 | if sys.version_info < (3, 0): 341 | config.readfp(open(_ELMR_CONFIG)) 342 | else: 343 | config.read_file(open(_ELMR_CONFIG)) 344 | 345 | best_function_error = 99999.9 346 | temp_error = best_function_error 347 | best_param_function = "" 348 | best_param_c = 0 349 | best_param_l = 0 350 | for function in search_functions: 351 | 352 | if sys.version_info < (3, 0): 353 | elmr_c_range = ast.literal_eval(config.get("DEFAULT", 354 | "elmr_c_range")) 355 | 356 | neurons = config.getint("DEFAULT", "elmr_neurons") 357 | 358 | else: 359 | function_config = config["DEFAULT"] 360 | elmr_c_range = ast.literal_eval(function_config["elmr_c_range"]) 361 | neurons = ast.literal_eval(function_config["elmr_neurons"]) 362 | 363 | param_ranges = [[elmr_c_range[0][0], elmr_c_range[0][1]]] 364 | 365 | def wrapper_opt(param_c): 366 | """ 367 | Wrapper for optunity. 368 | """ 369 | 370 | if cv == "ts": 371 | cv_tr_error, cv_te_error = \ 372 | time_series_cross_validation(self, database, 373 | params=[function, 374 | 2 ** param_c, 375 | neurons, 376 | False], 377 | number_folds=10, 378 | dataprocess=dataprocess) 379 | 380 | elif cv == "kfold": 381 | cv_tr_error, cv_te_error = \ 382 | kfold_cross_validation(self, database, 383 | params=[function, 384 | 2 ** param_c, 385 | neurons, 386 | False], 387 | number_folds=10, 388 | dataprocess=dataprocess) 389 | 390 | else: 391 | raise Exception("Invalid type of cross-validation.") 392 | 393 | if of == "accuracy": 394 | util = 1 / cv_te_error.get_accuracy() 395 | else: 396 | util = cv_te_error.get(of) 397 | 398 | # print("c:", param_c, "util: ", util) 399 | return util 400 | 401 | optimal_pars, details, _ = \ 402 | optunity.minimize(wrapper_opt, 403 | solver_name="cma-es", 404 | num_evals=eval, 405 | param_c=param_ranges[0]) 406 | 407 | # Save best function result 408 | if details[0] < temp_error: 409 | temp_error = details[0] 410 | 411 | if of == "accuracy": 412 | best_function_error = 1 / temp_error 413 | else: 414 | best_function_error = temp_error 415 | 416 | best_param_function = function 417 | best_param_c = optimal_pars["param_c"] 418 | best_param_l = neurons 419 | 420 | if of == "accuracy": 421 | print("Function: ", function, 422 | " best cv value: ", 1/details[0]) 423 | else: 424 | print("Function: ", function, 425 | " best cv value: ", details[0]) 426 | 427 | # MLTools Attribute 428 | self.cv_best_rmse = best_function_error 429 | 430 | # elmr Attribute 431 | self.param_function = best_param_function 432 | self.param_c = best_param_c 433 | self.param_l = best_param_l 434 | 435 | print("##### Search complete #####") 436 | self.print_parameters() 437 | 438 | return None 439 | 440 | def print_parameters(self): 441 | """ 442 | Print current parameters. 443 | """ 444 | 445 | print() 446 | print("Regressor Parameters") 447 | print() 448 | print("Regularization coefficient: ", self.param_c) 449 | print("Function: ", self.param_function) 450 | print("Hidden Neurons: ", self.param_l) 451 | print() 452 | print("CV error: ", self.cv_best_rmse) 453 | print("") 454 | print() 455 | 456 | def get_available_functions(self): 457 | """ 458 | Return available functions. 459 | """ 460 | 461 | return self.available_functions 462 | 463 | def train(self, training_matrix, params=[]): 464 | """ 465 | Calculate output_weight values needed to test/predict data. 466 | 467 | If params is provided, this method will use at training phase. 468 | Else, it will use the default value provided at object 469 | initialization. 470 | 471 | Arguments: 472 | training_matrix (numpy.ndarray): a matrix containing all 473 | patterns that will be used for training. 474 | params (list): a list of parameters defined at 475 | :func:`ELMKernel.__init__` 476 | 477 | Returns: 478 | :class:`Error`: training error object containing expected, 479 | predicted targets and all error metrics. 480 | 481 | Note: 482 | Training matrix must have target variables as the first column. 483 | """ 484 | 485 | return self._ml_train(training_matrix, params) 486 | 487 | def test(self, testing_matrix, predicting=False): 488 | """ 489 | Calculate test predicted values based on previous training. 490 | 491 | Args: 492 | testing_matrix (numpy.ndarray): a matrix containing all 493 | patterns that will be used for testing. 494 | predicting (bool): Don't set. 495 | 496 | Returns: 497 | :class:`Error`: testing error object containing expected, 498 | predicted targets and all error metrics. 499 | 500 | Note: 501 | Testing matrix must have target variables as the first column. 502 | """ 503 | 504 | return self._ml_test(testing_matrix, predicting) 505 | 506 | 507 | @copy_doc_of(MLTools._ml_predict) 508 | def predict(self, horizon=1): 509 | 510 | return self._ml_predict(horizon) 511 | 512 | @copy_doc_of(MLTools._ml_train_iterative) 513 | def train_iterative(self, database_matrix, params=[], sliding_window=168, 514 | k=1): 515 | 516 | return self._ml_train_iterative(database_matrix, params, 517 | sliding_window, k) 518 | 519 | 520 | -------------------------------------------------------------------------------- /elm/mltools.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | 3 | """ 4 | This file contains MLTools class and all developed methods. 5 | """ 6 | 7 | # Python2 support 8 | from __future__ import unicode_literals 9 | from __future__ import division 10 | from __future__ import absolute_import 11 | from __future__ import print_function 12 | 13 | 14 | import numpy as np 15 | import pickle 16 | 17 | 18 | class MLTools(object): 19 | """ 20 | A Python implementation of several methods needed for machine learning 21 | classification/regression. 22 | 23 | Attributes: 24 | last_training_pattern (numpy.ndarray): Full path to the package 25 | to test. 26 | has_trained (boolean): package_name str 27 | cv_best_rmse (float): package_name str 28 | 29 | """ 30 | 31 | def __init__(self): 32 | self.last_training_pattern = [] 33 | 34 | self.has_trained = False 35 | 36 | self.cv_best_rmse = "Not cross-validated" 37 | 38 | ################################################# 39 | ########### Methods to be overridden ############ 40 | ################################################# 41 | 42 | def _local_train(self, training_patterns, training_expected_targets, 43 | params): 44 | """ 45 | Should be overridden. 46 | """ 47 | return None 48 | 49 | def _local_test(self, testing_patterns, testing_expected_targets, 50 | predicting): 51 | """ 52 | Should be overridden. 53 | """ 54 | return None 55 | 56 | # ######################## 57 | # Public Methods 58 | # ######################## 59 | 60 | def _ml_search_param(self, database, dataprocess, path_filename, save, 61 | cv, min_f): 62 | """ 63 | Should be overridden. 64 | """ 65 | return None 66 | 67 | def _ml_print_parameters(self): 68 | """ 69 | Should be overridden. 70 | """ 71 | return None 72 | 73 | def _ml_predict(self, horizon=1): 74 | """ 75 | Predict next targets based on previous training. 76 | 77 | Arguments: 78 | horizon (int): number of predictions. 79 | 80 | Returns: 81 | numpy.ndarray: a column vector containing all predicted targets. 82 | """ 83 | 84 | if not self.has_trained: 85 | print("Error: Train before predict.") 86 | return 87 | 88 | # Create first new pattern 89 | new_pattern = np.hstack([self.last_training_pattern[2:], 90 | self.last_training_pattern[0]]) 91 | 92 | # Create a fake target (1) 93 | new_pattern = np.insert(new_pattern, 0, 1).reshape(1, -1) 94 | 95 | predicted_targets = np.zeros((horizon, 1)) 96 | 97 | for t_counter in range(horizon): 98 | te_errors = self.test(new_pattern, predicting=True) 99 | 100 | predicted_value = te_errors.predicted_targets 101 | predicted_targets[t_counter] = predicted_value 102 | 103 | # Create a new pattern including the actual predicted value 104 | new_pattern = np.hstack([new_pattern[0, 2:], 105 | np.squeeze(predicted_value)]) 106 | 107 | # Create a fake target 108 | new_pattern = np.insert(new_pattern, 0, 1).reshape(1, -1) 109 | 110 | return predicted_targets 111 | 112 | def _ml_train(self, training_matrix, params): 113 | """ 114 | wr 115 | 116 | """ 117 | 118 | training_patterns = training_matrix[:, 1:] 119 | training_expected_targets = training_matrix[:, 0] 120 | 121 | training_predicted_targets = \ 122 | self._local_train(training_patterns, 123 | training_expected_targets, 124 | params) 125 | 126 | training_errors = Error(training_expected_targets, 127 | training_predicted_targets, 128 | regressor_name=self.regressor_name) 129 | 130 | # Save last pattern for posterior predictions 131 | self.last_training_pattern = training_matrix[-1, :] 132 | self.has_trained = True 133 | 134 | return training_errors 135 | 136 | def _ml_test(self, testing_matrix, predicting=False): 137 | """ wr 138 | 139 | """ 140 | 141 | testing_patterns = testing_matrix[:, 1:] 142 | testing_expected_targets = testing_matrix[:, 0].reshape(-1, 1) 143 | 144 | testing_predicted_targets = self._local_test(testing_patterns, 145 | testing_expected_targets, 146 | predicting) 147 | 148 | testing_errors = Error(testing_expected_targets, 149 | testing_predicted_targets, 150 | regressor_name=self.regressor_name) 151 | 152 | return testing_errors 153 | 154 | def _ml_train_iterative(self, database_matrix, params=[], 155 | sliding_window=168, k=1): 156 | """ 157 | Training method used by Fred 09 paper. 158 | """ 159 | 160 | # Number of dimension/lags/order 161 | p = database_matrix.shape[1] - 1 162 | 163 | # Amount of training/testing procedures 164 | number_iterations = database_matrix.shape[0] + p - k - sliding_window + 1 165 | print("Number of iterations: ", number_iterations) 166 | 167 | # Training set size 168 | tr_size = sliding_window - p - 1 169 | 170 | # Sum -z_i value to every input pattern, Z = r_t-(p-1)-k 171 | z = database_matrix[0:-k, 1].reshape(-1, 1) * np.ones((1, p)) 172 | database_matrix[k:, 1:] = database_matrix[k:, 1:] - z 173 | 174 | pr_target = [] 175 | ex_target = [] 176 | for i in range(number_iterations): 177 | # Train with sliding window training dataset 178 | self._ml_train(database_matrix[k+i:k+i+tr_size-1, :], params) 179 | 180 | # Predicted target with training_data - z_i ( r_t+1 ) 181 | pr_t = self._ml_predict(horizon=1) 182 | 183 | # Sum z_i value to get r'_t+1 = r_t+1 + z_i 184 | pr_t = pr_t[0][0] + z[i, 0] 185 | pr_target.append(pr_t) 186 | 187 | # Expected target 188 | ex_target.append(database_matrix[k+i+tr_size, 0]) 189 | 190 | pr_result = Error(expected=ex_target, predicted=pr_target) 191 | 192 | return pr_result 193 | 194 | def save_regressor(self, file_name): 195 | """ 196 | Save current classifier/regressor to file_name file. 197 | """ 198 | 199 | try: 200 | # First save all class attributes 201 | 202 | file = file_name 203 | with open(file, 'wb') as f: 204 | pickle.dump(self, f, protocol=pickle.HIGHEST_PROTOCOL) 205 | 206 | except: 207 | print("Error while saving ", file_name) 208 | return 209 | else: 210 | print("Saved model as: ", file_name) 211 | 212 | def load_regressor(self, file_name): 213 | """ 214 | Load classifier/regressor to memory. 215 | """ 216 | 217 | try: 218 | # First load all class attributes 219 | 220 | file = file_name 221 | with open(file, 'rb') as f: 222 | self = pickle.load(f) 223 | 224 | except: 225 | print("Error while loading ", file_name) 226 | return 227 | 228 | return self 229 | 230 | 231 | class Error(object): 232 | """ 233 | Error is a class that saves expected and predicted values to calculate 234 | error metrics. 235 | 236 | Attributes: 237 | regressor_name (str): Deprecated. 238 | expected_targets (numpy.ndarray): array of expected values. 239 | predicted_targets (numpy.ndarray): array of predicted values. 240 | dict_errors (dict): a dictionary containing all calculated errors 241 | and their values. 242 | 243 | """ 244 | 245 | available_error_metrics = ["rmse", "mse", "mae", "me", "mpe", "mape", 246 | "std", "hr", "hr+", "hr-", "accuracy"] 247 | 248 | def __init__(self, expected, predicted, regressor_name=""): 249 | 250 | if type(expected) is list: 251 | expected = np.array(expected) 252 | if type(predicted) is list: 253 | predicted = np.array(predicted) 254 | 255 | expected = expected.flatten() 256 | predicted = predicted.flatten() 257 | 258 | self.regressor_name = regressor_name 259 | self.expected_targets = expected 260 | self.predicted_targets = predicted 261 | 262 | self.dict_errors = {} 263 | for error in self.available_error_metrics: 264 | self.dict_errors[error] = "Not calculated" 265 | 266 | def _calc(self, name, expected, predicted): 267 | """ 268 | a 269 | """ 270 | 271 | if self.dict_errors[name] == "Not calculated": 272 | if name == "mae": 273 | error = expected - predicted 274 | self.dict_errors[name] = np.mean(np.fabs(error)) 275 | 276 | elif name == "me": 277 | error = expected - predicted 278 | self.dict_errors[name] = error.mean() 279 | 280 | elif name == "mse": 281 | error = expected - predicted 282 | self.dict_errors[name] = (error ** 2).mean() 283 | 284 | elif name == "rmse": 285 | error = expected - predicted 286 | self.dict_errors[name] = np.sqrt((error ** 2).mean()) 287 | 288 | elif name == "mpe": 289 | 290 | if np.count_nonzero(expected != 0) == 0: 291 | self.dict_errors[name] = np.nan 292 | else: 293 | # Remove all indexes that have 0, so I can calculate 294 | # relative error 295 | find_zero = expected != 0 296 | _et = np.extract(find_zero, expected) 297 | _pt = np.extract(find_zero, predicted) 298 | 299 | relative_error = (_et - _pt) / _et 300 | 301 | self.dict_errors[name] = 100 * relative_error.mean() 302 | 303 | elif name == "mape": 304 | 305 | if np.count_nonzero(expected != 0) == 0: 306 | self.dict_errors[name] = np.nan 307 | else: 308 | # Remove all indexes that have 0, so I can calculate 309 | # relative error 310 | find_zero = expected != 0 311 | _et = np.extract(find_zero, expected) 312 | _pt = np.extract(find_zero, predicted) 313 | 314 | relative_error = (_et - _pt) / _et 315 | 316 | self.dict_errors[name] = \ 317 | 100 * np.fabs(relative_error).mean() 318 | 319 | elif name == "std": 320 | error = expected - predicted 321 | self.dict_errors[name] = np.std(error) 322 | 323 | elif name == "hr": 324 | _c = expected * predicted 325 | 326 | if np.count_nonzero(_c != 0) == 0: 327 | self.dict_errors[name] = np.nan 328 | else: 329 | self.dict_errors[name] = np.count_nonzero(_c > 0) / \ 330 | np.count_nonzero(_c != 0) 331 | 332 | elif name == "hr+": 333 | _a = expected 334 | _b = predicted 335 | 336 | if np.count_nonzero(_b > 0) == 0: 337 | self.dict_errors[name] = np.nan 338 | else: 339 | self.dict_errors[name] = \ 340 | np.count_nonzero((_a > 0) * (_b > 0)) / \ 341 | np.count_nonzero(_b > 0) 342 | 343 | elif name == "hr-": 344 | _a = expected 345 | _b = predicted 346 | 347 | if np.count_nonzero(_b < 0) == 0: 348 | self.dict_errors[name] = np.nan 349 | else: 350 | self.dict_errors[name] = \ 351 | np.count_nonzero((_a < 0) * (_b < 0)) / \ 352 | np.count_nonzero(_b < 0) 353 | 354 | elif name == "accuracy": 355 | _a = expected.astype(int) 356 | _b = np.round(predicted).astype(int) 357 | 358 | self.dict_errors[name] = np.count_nonzero(_a == _b) / _b.size 359 | 360 | else: 361 | print("Error:", name, 362 | "- Invalid error or not available to calculate.") 363 | return 364 | 365 | def calc_metrics(self): 366 | """ 367 | Calculate all error metrics. 368 | 369 | Available error metrics are "rmse", "mse", "mae", "me", "mpe", 370 | "mape", "std", "hr", "hr+", "hr-" and "accuracy". 371 | 372 | """ 373 | 374 | for error in sorted(self.dict_errors.keys()): 375 | self._calc(error, self.expected_targets, self.predicted_targets) 376 | 377 | def print_errors(self): 378 | """ 379 | Print all errors metrics. 380 | 381 | Note: 382 | For better printing format, install :mod:`prettytable`. 383 | 384 | """ 385 | 386 | self.calc_metrics() 387 | 388 | try: 389 | from prettytable import PrettyTable 390 | 391 | table = PrettyTable(["Error", "Value"]) 392 | table.align["Error"] = "l" 393 | table.align["Value"] = "l" 394 | 395 | for error in sorted(self.dict_errors.keys()): 396 | table.add_row([error, np.around(self.dict_errors[error], decimals=8)]) 397 | 398 | print() 399 | print(table.get_string(sortby="Error")) 400 | print() 401 | 402 | except ImportError: 403 | print("For better table format install 'prettytable' module.") 404 | 405 | print() 406 | for error in sorted(self.dict_errors.keys()): 407 | print(error, np.around(self.dict_errors[error], decimals=8)) 408 | print() 409 | 410 | def print_values(self): 411 | """ 412 | Print expected and predicted values. 413 | """ 414 | 415 | print("Expected: ", self.expected_targets.reshape(1, -1), "\n", 416 | "Predicted: ", self.predicted_targets.reshape(1, -1), "\n") 417 | 418 | def get(self, error): 419 | """ 420 | Calculate and return value of an error. 421 | 422 | Arguments: 423 | error (str): Error to be calculated. 424 | 425 | Returns: 426 | float: value of desired error. 427 | """ 428 | self._calc(error, self.expected_targets, self.predicted_targets) 429 | return self.dict_errors[error] 430 | 431 | def get_std(self): 432 | self._calc("std", self.expected_targets, self.predicted_targets) 433 | return self.dict_errors["std"] 434 | 435 | def get_mae(self): 436 | self._calc("mae", self.expected_targets, self.predicted_targets) 437 | return self.dict_errors["mae"] 438 | 439 | def get_mse(self): 440 | self._calc("mse", self.expected_targets, self.predicted_targets) 441 | return self.dict_errors["mse"] 442 | 443 | def get_rmse(self): 444 | self._calc("rmse", self.expected_targets, self.predicted_targets) 445 | return self.dict_errors["rmse"] 446 | 447 | def get_mpe(self): 448 | self._calc("mpe", self.expected_targets, self.predicted_targets) 449 | return self.dict_errors["mpe"] 450 | 451 | def get_mape(self): 452 | self._calc("mape", self.expected_targets, self.predicted_targets) 453 | return self.dict_errors["mape"] 454 | 455 | def get_me(self): 456 | self._calc("me", self.expected_targets, self.predicted_targets) 457 | return self.dict_errors["me"] 458 | 459 | def get_hr(self): 460 | self._calc("hr", self.expected_targets, self.predicted_targets) 461 | return self.dict_errors["hr"] 462 | 463 | def get_hrm(self): 464 | self._calc("hr-", self.expected_targets, self.predicted_targets) 465 | return self.dict_errors["hr-"] 466 | 467 | def get_hrp(self): 468 | self._calc("hr+", self.expected_targets, self.predicted_targets) 469 | return self.dict_errors["hr+"] 470 | 471 | def get_accuracy(self): 472 | self._calc("accuracy", self.expected_targets, self.predicted_targets) 473 | return self.dict_errors["accuracy"] 474 | 475 | def get_error(self): 476 | return (self.expected_targets - self.predicted_targets).flatten() 477 | 478 | def get_anderson(self): 479 | """ 480 | Anderson-Darling test for data coming from a particular 481 | distribution. 482 | 483 | Returns: 484 | tuple: statistic value, critical values and significance values. 485 | 486 | Note: 487 | Need scipy.stats module to perform Anderson-Darling test. 488 | """ 489 | 490 | try: 491 | from scipy import stats 492 | except ImportError: 493 | raise ImportError("Need 'scipy.stats' module to calculate " 494 | "anderson-darling test.") 495 | 496 | error = (self.expected_targets - self.predicted_targets).flatten() 497 | 498 | # from matplotlib import pyplot as plt 499 | # import matplotlib.mlab as mlab 500 | # 501 | # plt.figure(figsize=(24.0, 12.0)) 502 | # _, bins, _ = plt.hist(error, 50, normed=1) 503 | # _mu = np.mean(error) 504 | # _sigma = np.std(error) 505 | # plt.plot(bins, mlab.normpdf(bins, _mu, _sigma)) 506 | # plt.show() 507 | # plt.close() 508 | 509 | # Calculate Anderson-Darling normality test index 510 | ad_statistic, ad_c, ad_s = stats.anderson(error, "norm") 511 | 512 | return ad_statistic, ad_c, ad_s 513 | 514 | def get_shapiro(self): 515 | """ 516 | Perform the Shapiro-Wilk test for normality. 517 | 518 | Returns: 519 | tuple: statistic value and p-value. 520 | 521 | Note: 522 | Need scipy.stats module to perform Shapiro-Wilk test. 523 | """ 524 | 525 | try: 526 | from scipy import stats 527 | except ImportError: 528 | raise ImportError("Need 'scipy.stats' module to calculate " 529 | "shapiro-wilk test.") 530 | 531 | error = (self.expected_targets - self.predicted_targets).flatten() 532 | 533 | # Calculate Shapiro-Wilk normality index 534 | sw_statistic, sw_p_value = stats.shapiro(error) 535 | 536 | return sw_statistic, sw_p_value 537 | 538 | 539 | class CVError(object): 540 | """ 541 | CVError is a class that saves :class:`Error` objects from all folds 542 | of a cross-validation method. 543 | 544 | Attributes: 545 | fold_errors (list of :class:`Error`): a list of all Error objects 546 | created through cross-validation process. 547 | all_fold_errors (dict): a dictionary containing lists of error 548 | values of all folds. 549 | all_fold_mean_errors (dict): a dictionary containing the mean of 550 | *all_fold_errors* lists. 551 | """ 552 | 553 | def __init__(self, fold_errors): 554 | self.fold_errors = fold_errors 555 | 556 | self.all_fold_errors = {} 557 | self.all_fold_mean_errors = {} 558 | 559 | for error in self.fold_errors[0].available_error_metrics: 560 | self.all_fold_errors[error] = [] 561 | self.all_fold_mean_errors[error] = -99 562 | 563 | self.calc_metrics() 564 | 565 | def calc_metrics(self): 566 | """ 567 | Calculate a folds mean of all error metrics. 568 | 569 | Available error metrics are "rmse", "mse", "mae", "me", "mpe", 570 | "mape", "std", "hr", "hr+", "hr-" and "accuracy". 571 | """ 572 | 573 | for fold in self.fold_errors: 574 | for error in fold.dict_errors: 575 | if fold.dict_errors[error] == "Not calculated": 576 | fold.dict_errors[error] = fold.get(error) 577 | 578 | self.all_fold_errors[error].append(fold.dict_errors[error]) 579 | 580 | for error in sorted(self.all_fold_errors.keys()): 581 | self.all_fold_mean_errors[error] = \ 582 | np.mean(self.all_fold_errors[error]) 583 | 584 | def print_errors(self): 585 | """ 586 | Print a mean of all error through all folds. 587 | """ 588 | 589 | for error in sorted(self.all_fold_errors.keys()): 590 | print(error, " mean:", self.all_fold_mean_errors[error]) 591 | print(self.all_fold_errors[error], "\n") 592 | 593 | print() 594 | 595 | def get(self, error): 596 | return self.all_fold_mean_errors[error] 597 | 598 | def get_rmse(self): 599 | return self.all_fold_mean_errors["rmse"] 600 | 601 | def get_accuracy(self): 602 | return self.all_fold_mean_errors["accuracy"] 603 | 604 | 605 | def read(file_name): 606 | """ 607 | Read data from txt file. 608 | 609 | Arguments: 610 | file_name (str): path and file name. 611 | 612 | Returns: 613 | numpy.ndarray: a matrix containing all read data. 614 | """ 615 | 616 | data = np.loadtxt(file_name) 617 | 618 | return data 619 | 620 | 621 | def write(file_name, data): 622 | """ 623 | Write data to txt file. 624 | 625 | Arguments: 626 | file_name (str): path and file name. 627 | data (numpy.ndarray): data to be written. 628 | 629 | """ 630 | 631 | np.savetxt(file_name, data) 632 | 633 | 634 | def split_sets(data, training_percent=None, n_test_samples=None, perm=False): 635 | """ 636 | Split data matrix into training and test matrices. 637 | 638 | Training matrix size will be set using the training_percent 639 | parameter, so its samples are the firsts samples found at 640 | data matrix, the rest of samples will be testing matrix. 641 | 642 | If neither training_percent or number_test_samples are set, an error 643 | will happen, only one of the parameters can be set at a time. 644 | 645 | Arguments: 646 | data (numpy.ndarray): A matrix containing nxf patterns features. 647 | training_percent (float): An optional parameter used to 648 | calculate the number of patterns of training matrix. 649 | n_test_samples (int): An optional parameter used to set the 650 | number of patterns of testing matrix. 651 | perm (bool): A flag to choose if should permute(shuffle) database 652 | before splitting sets. 653 | 654 | Returns: 655 | tuple: Both training and test matrices. 656 | 657 | """ 658 | 659 | number_of_samples = data.shape[0] 660 | 661 | # Permute data 662 | if perm: 663 | np.random.shuffle(data) 664 | 665 | if n_test_samples is not None: 666 | training_samples = number_of_samples - n_test_samples 667 | elif training_percent is not None: 668 | training_samples = round(number_of_samples * training_percent) 669 | else: 670 | raise Exception("Error: Missing \"training_percent\" or \"numberTestSamples\"" 671 | "parameter.") 672 | 673 | training_matrix = data[0:training_samples, :] 674 | testing_matrix = data[training_samples:, :] 675 | 676 | return training_matrix, testing_matrix 677 | 678 | 679 | def time_series_cross_validation(ml, database, params, number_folds=10, 680 | dataprocess=None): 681 | """ 682 | Performs a k-fold cross-validation on a Time Series as described by 683 | Rob Hyndman. 684 | 685 | See Also: 686 | http://robjhyndman.com/hyndsight/crossvalidation/ 687 | 688 | Arguments: 689 | ml (:class:`ELMKernel` or :class:`ELMRandom`): 690 | database (numpy.ndarray): uses 'data' matrix to perform 691 | cross-validation. 692 | params (list): list of parameters from *ml* to train/test. 693 | number_folds (int): number of folds to be created from training and 694 | testing matrices. 695 | dataprocess (:class:`DataProcess`): an object that will pre-process 696 | database before training. Defaults to None. 697 | 698 | Returns: 699 | tuple: tuple of :class:`CVError` from training and testing. 700 | """ 701 | 702 | if number_folds < 2: 703 | print("Error: Must have at least 2-folds.") 704 | return 705 | 706 | number_patterns = database.shape[0] 707 | fold_size = round(number_patterns / number_folds) 708 | 709 | folds = [] 710 | for k in range(number_folds): 711 | folds.append(database[k * fold_size:(k + 1) * fold_size, :]) 712 | 713 | training_errors = [] 714 | testing_errors = [] 715 | 716 | training_matrix = folds[0] 717 | testing_matrix = [] 718 | for k in range(number_folds - 1): 719 | if k > 0: 720 | training_matrix = \ 721 | np.concatenate((training_matrix, testing_matrix), axis=0) 722 | 723 | testing_matrix = folds[k + 1] 724 | 725 | # If dataprocess is available applies defined processes 726 | if dataprocess is not None: 727 | training_matrix, testing_matrix = \ 728 | dataprocess.auto(training_matrix, testing_matrix) 729 | 730 | tr_error = ml.train(training_matrix, params) 731 | te_error = ml.test(testing_matrix) 732 | 733 | training_errors.append(tr_error) 734 | testing_errors.append(te_error) 735 | 736 | cv_training_error = CVError(training_errors) 737 | cv_testing_error = CVError(testing_errors) 738 | 739 | return cv_training_error, cv_testing_error 740 | 741 | 742 | def kfold_cross_validation(ml, database, params, number_folds=10, 743 | dataprocess=None): 744 | """ 745 | Performs a k-fold cross-validation. 746 | 747 | Arguments: 748 | ml (:class:`ELMKernel` or :class:`ELMRandom`): 749 | database (numpy.ndarray): uses 'data' matrix to perform 750 | cross-validation. 751 | params (list): list of parameters from *ml* to train/test. 752 | number_folds (int): number of folds to be created from training and 753 | testing matrices. 754 | dataprocess (:class:`DataProcess`): an object that will pre-process 755 | database before training. Defaults to None. 756 | 757 | Returns: 758 | tuple: tuple of :class:`CVError` from training and testing. 759 | 760 | """ 761 | 762 | if number_folds < 2: 763 | print("Error: Must have at least 2-folds.") 764 | return 765 | 766 | # Permute patterns 767 | np.random.shuffle(database) 768 | 769 | # Number of dimensions considering only 1 output 770 | n_dim = database.shape[1] - 1 771 | number_patterns = database.shape[0] 772 | fold_size = int(np.ceil(number_patterns / number_folds)) 773 | 774 | folds = [] 775 | for k in range(number_folds): 776 | folds.append(database[k * fold_size: (k + 1) * fold_size, :]) 777 | 778 | training_errors = [] 779 | testing_errors = [] 780 | 781 | for k in range(number_folds): 782 | 783 | # Training matrix is all folds except "k" 784 | training_matrix = \ 785 | np.array(folds[:k] + folds[k+1:-1]).reshape(-1, n_dim + 1) 786 | if k < number_folds - 1: 787 | training_matrix = np.vstack((training_matrix, folds[-1])) 788 | 789 | testing_matrix = folds[k] 790 | 791 | # If dataprocess is available applies defined processes 792 | if dataprocess is not None: 793 | training_matrix, testing_matrix = \ 794 | dataprocess.auto(training_matrix, testing_matrix) 795 | 796 | training_errors.append(ml.train(training_matrix, params)) 797 | testing_errors.append(ml.test(testing_matrix)) 798 | 799 | cv_training_error = CVError(training_errors) 800 | cv_testing_error = CVError(testing_errors) 801 | 802 | return cv_training_error, cv_testing_error 803 | 804 | 805 | def copy_doc_of(fun): 806 | def decorator(f): 807 | f.__doc__ = fun.__doc__ 808 | return f 809 | 810 | return decorator 811 | -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | numpy==1.15.4 2 | deap==1.2.2 3 | optunity==1.1.1 4 | sphinx_rtd_theme==0.4.2 5 | sphinxcontrib-napoleon==0.7 6 | -------------------------------------------------------------------------------- /setup.cfg: -------------------------------------------------------------------------------- 1 | [wheel] 2 | universal = 1 3 | 4 | [pytest] 5 | norecursedirs = .git env .eggs 6 | -------------------------------------------------------------------------------- /setup.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # -*- coding: utf-8 -*- 3 | 4 | import sys 5 | import io 6 | import os 7 | import re 8 | 9 | from setuptools import setup 10 | from setuptools import find_packages 11 | from setuptools.command.test import test as TestCommand 12 | 13 | 14 | def read(*names, **kwargs): 15 | with io.open( 16 | os.path.join(os.path.dirname(__file__), *names), 17 | encoding=kwargs.get("encoding", "utf8") 18 | ) as fp: 19 | return fp.read() 20 | 21 | 22 | def find_version(*file_paths): 23 | version_file = read(*file_paths) 24 | version_match = re.search(r"^__version__ = ['\"]([^'\"]*)['\"]", 25 | version_file, re.M) 26 | if version_match: 27 | return version_match.group(1) 28 | raise RuntimeError("Unable to find version string.") 29 | 30 | 31 | class PyTest(TestCommand): 32 | def finalize_options(self): 33 | TestCommand.finalize_options(self) 34 | self.test_args = ['--strict', '--verbose', '--tb=long', 'tests'] 35 | self.test_suite = True 36 | 37 | def run_tests(self): 38 | import pytest 39 | errno = pytest.main(self.test_args) 40 | sys.exit(errno) 41 | 42 | 43 | with open('README.rst') as readme_file: 44 | readme = readme_file.read() 45 | 46 | with open('HISTORY.rst') as history_file: 47 | history = history_file.read().replace('.. :changelog:', '') 48 | 49 | requirements = [ 50 | "numpy==1.15.4", 51 | "deap==1.2.2", 52 | "optunity==1.1.1" 53 | ] 54 | 55 | setup( 56 | name='elm', 57 | version=find_version("elm/__init__.py"), 58 | description="Python Extreme Learning Machine (ELM) is a machine learning " 59 | "technique used for classification/regression tasks.", 60 | long_description=readme + '\n\n' + history, 61 | author="Augusto Almeida", 62 | author_email='augustocbenvenuto@gmail.com', 63 | url='https://github.com/acba/elm', 64 | packages=find_packages(exclude=['contrib', 'docs', 'tests*']), 65 | package_dir={'elm': 'elm'}, 66 | include_package_data=True, 67 | install_requires=requirements, 68 | license="BSD", 69 | zip_safe=False, 70 | keywords='elm, machine learning, artificial intelligence, ai, regression, \ 71 | regressor, classifier, neural network, extreme learning machine', 72 | classifiers=[ 73 | 'Development Status :: 3 - Alpha', 74 | 'Intended Audience :: Developers', 75 | 'License :: OSI Approved :: BSD License', 76 | 'Natural Language :: English', 77 | 'Programming Language :: Python :: 2.7', 78 | 'Programming Language :: Python :: 3', 79 | 'Programming Language :: Python :: 3.4', 80 | 'Topic :: Software Development', 81 | 'Topic :: Software Development :: Libraries :: Python Modules', 82 | 'Topic :: Scientific/Engineering', 83 | 'Topic :: Scientific/Engineering :: Artificial Intelligence', 84 | 'Topic :: Scientific/Engineering :: Mathematics' 85 | ], 86 | cmdclass={'test': PyTest}, 87 | test_suite='elm.tests', 88 | tests_require='pytest', 89 | extras_require={'testing': ['pytest']} 90 | ) 91 | 92 | -------------------------------------------------------------------------------- /tests/data/iris.data: -------------------------------------------------------------------------------- 1 | 0.000000000000000000e+00 5.099999999999999645e+00 3.500000000000000000e+00 1.399999999999999911e+00 2.000000000000000111e-01 2 | 0.000000000000000000e+00 4.900000000000000355e+00 3.000000000000000000e+00 1.399999999999999911e+00 2.000000000000000111e-01 3 | 0.000000000000000000e+00 4.700000000000000178e+00 3.200000000000000178e+00 1.300000000000000044e+00 2.000000000000000111e-01 4 | 0.000000000000000000e+00 4.599999999999999645e+00 3.100000000000000089e+00 1.500000000000000000e+00 2.000000000000000111e-01 5 | 0.000000000000000000e+00 5.000000000000000000e+00 3.600000000000000089e+00 1.399999999999999911e+00 2.000000000000000111e-01 6 | 0.000000000000000000e+00 5.400000000000000355e+00 3.899999999999999911e+00 1.699999999999999956e+00 4.000000000000000222e-01 7 | 0.000000000000000000e+00 4.599999999999999645e+00 3.399999999999999911e+00 1.399999999999999911e+00 2.999999999999999889e-01 8 | 0.000000000000000000e+00 5.000000000000000000e+00 3.399999999999999911e+00 1.500000000000000000e+00 2.000000000000000111e-01 9 | 0.000000000000000000e+00 4.400000000000000355e+00 2.899999999999999911e+00 1.399999999999999911e+00 2.000000000000000111e-01 10 | 0.000000000000000000e+00 4.900000000000000355e+00 3.100000000000000089e+00 1.500000000000000000e+00 1.000000000000000056e-01 11 | 0.000000000000000000e+00 5.400000000000000355e+00 3.700000000000000178e+00 1.500000000000000000e+00 2.000000000000000111e-01 12 | 0.000000000000000000e+00 4.799999999999999822e+00 3.399999999999999911e+00 1.600000000000000089e+00 2.000000000000000111e-01 13 | 0.000000000000000000e+00 4.799999999999999822e+00 3.000000000000000000e+00 1.399999999999999911e+00 1.000000000000000056e-01 14 | 0.000000000000000000e+00 4.299999999999999822e+00 3.000000000000000000e+00 1.100000000000000089e+00 1.000000000000000056e-01 15 | 0.000000000000000000e+00 5.799999999999999822e+00 4.000000000000000000e+00 1.199999999999999956e+00 2.000000000000000111e-01 16 | 0.000000000000000000e+00 5.700000000000000178e+00 4.400000000000000355e+00 1.500000000000000000e+00 4.000000000000000222e-01 17 | 0.000000000000000000e+00 5.400000000000000355e+00 3.899999999999999911e+00 1.300000000000000044e+00 4.000000000000000222e-01 18 | 0.000000000000000000e+00 5.099999999999999645e+00 3.500000000000000000e+00 1.399999999999999911e+00 2.999999999999999889e-01 19 | 0.000000000000000000e+00 5.700000000000000178e+00 3.799999999999999822e+00 1.699999999999999956e+00 2.999999999999999889e-01 20 | 0.000000000000000000e+00 5.099999999999999645e+00 3.799999999999999822e+00 1.500000000000000000e+00 2.999999999999999889e-01 21 | 0.000000000000000000e+00 5.400000000000000355e+00 3.399999999999999911e+00 1.699999999999999956e+00 2.000000000000000111e-01 22 | 0.000000000000000000e+00 5.099999999999999645e+00 3.700000000000000178e+00 1.500000000000000000e+00 4.000000000000000222e-01 23 | 0.000000000000000000e+00 4.599999999999999645e+00 3.600000000000000089e+00 1.000000000000000000e+00 2.000000000000000111e-01 24 | 0.000000000000000000e+00 5.099999999999999645e+00 3.299999999999999822e+00 1.699999999999999956e+00 5.000000000000000000e-01 25 | 0.000000000000000000e+00 4.799999999999999822e+00 3.399999999999999911e+00 1.899999999999999911e+00 2.000000000000000111e-01 26 | 0.000000000000000000e+00 5.000000000000000000e+00 3.000000000000000000e+00 1.600000000000000089e+00 2.000000000000000111e-01 27 | 0.000000000000000000e+00 5.000000000000000000e+00 3.399999999999999911e+00 1.600000000000000089e+00 4.000000000000000222e-01 28 | 0.000000000000000000e+00 5.200000000000000178e+00 3.500000000000000000e+00 1.500000000000000000e+00 2.000000000000000111e-01 29 | 0.000000000000000000e+00 5.200000000000000178e+00 3.399999999999999911e+00 1.399999999999999911e+00 2.000000000000000111e-01 30 | 0.000000000000000000e+00 4.700000000000000178e+00 3.200000000000000178e+00 1.600000000000000089e+00 2.000000000000000111e-01 31 | 0.000000000000000000e+00 4.799999999999999822e+00 3.100000000000000089e+00 1.600000000000000089e+00 2.000000000000000111e-01 32 | 0.000000000000000000e+00 5.400000000000000355e+00 3.399999999999999911e+00 1.500000000000000000e+00 4.000000000000000222e-01 33 | 0.000000000000000000e+00 5.200000000000000178e+00 4.099999999999999645e+00 1.500000000000000000e+00 1.000000000000000056e-01 34 | 0.000000000000000000e+00 5.500000000000000000e+00 4.200000000000000178e+00 1.399999999999999911e+00 2.000000000000000111e-01 35 | 0.000000000000000000e+00 4.900000000000000355e+00 3.100000000000000089e+00 1.500000000000000000e+00 1.000000000000000056e-01 36 | 0.000000000000000000e+00 5.000000000000000000e+00 3.200000000000000178e+00 1.199999999999999956e+00 2.000000000000000111e-01 37 | 0.000000000000000000e+00 5.500000000000000000e+00 3.500000000000000000e+00 1.300000000000000044e+00 2.000000000000000111e-01 38 | 0.000000000000000000e+00 4.900000000000000355e+00 3.100000000000000089e+00 1.500000000000000000e+00 1.000000000000000056e-01 39 | 0.000000000000000000e+00 4.400000000000000355e+00 3.000000000000000000e+00 1.300000000000000044e+00 2.000000000000000111e-01 40 | 0.000000000000000000e+00 5.099999999999999645e+00 3.399999999999999911e+00 1.500000000000000000e+00 2.000000000000000111e-01 41 | 0.000000000000000000e+00 5.000000000000000000e+00 3.500000000000000000e+00 1.300000000000000044e+00 2.999999999999999889e-01 42 | 0.000000000000000000e+00 4.500000000000000000e+00 2.299999999999999822e+00 1.300000000000000044e+00 2.999999999999999889e-01 43 | 0.000000000000000000e+00 4.400000000000000355e+00 3.200000000000000178e+00 1.300000000000000044e+00 2.000000000000000111e-01 44 | 0.000000000000000000e+00 5.000000000000000000e+00 3.500000000000000000e+00 1.600000000000000089e+00 5.999999999999999778e-01 45 | 0.000000000000000000e+00 5.099999999999999645e+00 3.799999999999999822e+00 1.899999999999999911e+00 4.000000000000000222e-01 46 | 0.000000000000000000e+00 4.799999999999999822e+00 3.000000000000000000e+00 1.399999999999999911e+00 2.999999999999999889e-01 47 | 0.000000000000000000e+00 5.099999999999999645e+00 3.799999999999999822e+00 1.600000000000000089e+00 2.000000000000000111e-01 48 | 0.000000000000000000e+00 4.599999999999999645e+00 3.200000000000000178e+00 1.399999999999999911e+00 2.000000000000000111e-01 49 | 0.000000000000000000e+00 5.299999999999999822e+00 3.700000000000000178e+00 1.500000000000000000e+00 2.000000000000000111e-01 50 | 0.000000000000000000e+00 5.000000000000000000e+00 3.299999999999999822e+00 1.399999999999999911e+00 2.000000000000000111e-01 51 | 1.000000000000000000e+00 7.000000000000000000e+00 3.200000000000000178e+00 4.700000000000000178e+00 1.399999999999999911e+00 52 | 1.000000000000000000e+00 6.400000000000000355e+00 3.200000000000000178e+00 4.500000000000000000e+00 1.500000000000000000e+00 53 | 1.000000000000000000e+00 6.900000000000000355e+00 3.100000000000000089e+00 4.900000000000000355e+00 1.500000000000000000e+00 54 | 1.000000000000000000e+00 5.500000000000000000e+00 2.299999999999999822e+00 4.000000000000000000e+00 1.300000000000000044e+00 55 | 1.000000000000000000e+00 6.500000000000000000e+00 2.799999999999999822e+00 4.599999999999999645e+00 1.500000000000000000e+00 56 | 1.000000000000000000e+00 5.700000000000000178e+00 2.799999999999999822e+00 4.500000000000000000e+00 1.300000000000000044e+00 57 | 1.000000000000000000e+00 6.299999999999999822e+00 3.299999999999999822e+00 4.700000000000000178e+00 1.600000000000000089e+00 58 | 1.000000000000000000e+00 4.900000000000000355e+00 2.399999999999999911e+00 3.299999999999999822e+00 1.000000000000000000e+00 59 | 1.000000000000000000e+00 6.599999999999999645e+00 2.899999999999999911e+00 4.599999999999999645e+00 1.300000000000000044e+00 60 | 1.000000000000000000e+00 5.200000000000000178e+00 2.700000000000000178e+00 3.899999999999999911e+00 1.399999999999999911e+00 61 | 1.000000000000000000e+00 5.000000000000000000e+00 2.000000000000000000e+00 3.500000000000000000e+00 1.000000000000000000e+00 62 | 1.000000000000000000e+00 5.900000000000000355e+00 3.000000000000000000e+00 4.200000000000000178e+00 1.500000000000000000e+00 63 | 1.000000000000000000e+00 6.000000000000000000e+00 2.200000000000000178e+00 4.000000000000000000e+00 1.000000000000000000e+00 64 | 1.000000000000000000e+00 6.099999999999999645e+00 2.899999999999999911e+00 4.700000000000000178e+00 1.399999999999999911e+00 65 | 1.000000000000000000e+00 5.599999999999999645e+00 2.899999999999999911e+00 3.600000000000000089e+00 1.300000000000000044e+00 66 | 1.000000000000000000e+00 6.700000000000000178e+00 3.100000000000000089e+00 4.400000000000000355e+00 1.399999999999999911e+00 67 | 1.000000000000000000e+00 5.599999999999999645e+00 3.000000000000000000e+00 4.500000000000000000e+00 1.500000000000000000e+00 68 | 1.000000000000000000e+00 5.799999999999999822e+00 2.700000000000000178e+00 4.099999999999999645e+00 1.000000000000000000e+00 69 | 1.000000000000000000e+00 6.200000000000000178e+00 2.200000000000000178e+00 4.500000000000000000e+00 1.500000000000000000e+00 70 | 1.000000000000000000e+00 5.599999999999999645e+00 2.500000000000000000e+00 3.899999999999999911e+00 1.100000000000000089e+00 71 | 1.000000000000000000e+00 5.900000000000000355e+00 3.200000000000000178e+00 4.799999999999999822e+00 1.800000000000000044e+00 72 | 1.000000000000000000e+00 6.099999999999999645e+00 2.799999999999999822e+00 4.000000000000000000e+00 1.300000000000000044e+00 73 | 1.000000000000000000e+00 6.299999999999999822e+00 2.500000000000000000e+00 4.900000000000000355e+00 1.500000000000000000e+00 74 | 1.000000000000000000e+00 6.099999999999999645e+00 2.799999999999999822e+00 4.700000000000000178e+00 1.199999999999999956e+00 75 | 1.000000000000000000e+00 6.400000000000000355e+00 2.899999999999999911e+00 4.299999999999999822e+00 1.300000000000000044e+00 76 | 1.000000000000000000e+00 6.599999999999999645e+00 3.000000000000000000e+00 4.400000000000000355e+00 1.399999999999999911e+00 77 | 1.000000000000000000e+00 6.799999999999999822e+00 2.799999999999999822e+00 4.799999999999999822e+00 1.399999999999999911e+00 78 | 1.000000000000000000e+00 6.700000000000000178e+00 3.000000000000000000e+00 5.000000000000000000e+00 1.699999999999999956e+00 79 | 1.000000000000000000e+00 6.000000000000000000e+00 2.899999999999999911e+00 4.500000000000000000e+00 1.500000000000000000e+00 80 | 1.000000000000000000e+00 5.700000000000000178e+00 2.600000000000000089e+00 3.500000000000000000e+00 1.000000000000000000e+00 81 | 1.000000000000000000e+00 5.500000000000000000e+00 2.399999999999999911e+00 3.799999999999999822e+00 1.100000000000000089e+00 82 | 1.000000000000000000e+00 5.500000000000000000e+00 2.399999999999999911e+00 3.700000000000000178e+00 1.000000000000000000e+00 83 | 1.000000000000000000e+00 5.799999999999999822e+00 2.700000000000000178e+00 3.899999999999999911e+00 1.199999999999999956e+00 84 | 1.000000000000000000e+00 6.000000000000000000e+00 2.700000000000000178e+00 5.099999999999999645e+00 1.600000000000000089e+00 85 | 1.000000000000000000e+00 5.400000000000000355e+00 3.000000000000000000e+00 4.500000000000000000e+00 1.500000000000000000e+00 86 | 1.000000000000000000e+00 6.000000000000000000e+00 3.399999999999999911e+00 4.500000000000000000e+00 1.600000000000000089e+00 87 | 1.000000000000000000e+00 6.700000000000000178e+00 3.100000000000000089e+00 4.700000000000000178e+00 1.500000000000000000e+00 88 | 1.000000000000000000e+00 6.299999999999999822e+00 2.299999999999999822e+00 4.400000000000000355e+00 1.300000000000000044e+00 89 | 1.000000000000000000e+00 5.599999999999999645e+00 3.000000000000000000e+00 4.099999999999999645e+00 1.300000000000000044e+00 90 | 1.000000000000000000e+00 5.500000000000000000e+00 2.500000000000000000e+00 4.000000000000000000e+00 1.300000000000000044e+00 91 | 1.000000000000000000e+00 5.500000000000000000e+00 2.600000000000000089e+00 4.400000000000000355e+00 1.199999999999999956e+00 92 | 1.000000000000000000e+00 6.099999999999999645e+00 3.000000000000000000e+00 4.599999999999999645e+00 1.399999999999999911e+00 93 | 1.000000000000000000e+00 5.799999999999999822e+00 2.600000000000000089e+00 4.000000000000000000e+00 1.199999999999999956e+00 94 | 1.000000000000000000e+00 5.000000000000000000e+00 2.299999999999999822e+00 3.299999999999999822e+00 1.000000000000000000e+00 95 | 1.000000000000000000e+00 5.599999999999999645e+00 2.700000000000000178e+00 4.200000000000000178e+00 1.300000000000000044e+00 96 | 1.000000000000000000e+00 5.700000000000000178e+00 3.000000000000000000e+00 4.200000000000000178e+00 1.199999999999999956e+00 97 | 1.000000000000000000e+00 5.700000000000000178e+00 2.899999999999999911e+00 4.200000000000000178e+00 1.300000000000000044e+00 98 | 1.000000000000000000e+00 6.200000000000000178e+00 2.899999999999999911e+00 4.299999999999999822e+00 1.300000000000000044e+00 99 | 1.000000000000000000e+00 5.099999999999999645e+00 2.500000000000000000e+00 3.000000000000000000e+00 1.100000000000000089e+00 100 | 1.000000000000000000e+00 5.700000000000000178e+00 2.799999999999999822e+00 4.099999999999999645e+00 1.300000000000000044e+00 101 | 2.000000000000000000e+00 6.299999999999999822e+00 3.299999999999999822e+00 6.000000000000000000e+00 2.500000000000000000e+00 102 | 2.000000000000000000e+00 5.799999999999999822e+00 2.700000000000000178e+00 5.099999999999999645e+00 1.899999999999999911e+00 103 | 2.000000000000000000e+00 7.099999999999999645e+00 3.000000000000000000e+00 5.900000000000000355e+00 2.100000000000000089e+00 104 | 2.000000000000000000e+00 6.299999999999999822e+00 2.899999999999999911e+00 5.599999999999999645e+00 1.800000000000000044e+00 105 | 2.000000000000000000e+00 6.500000000000000000e+00 3.000000000000000000e+00 5.799999999999999822e+00 2.200000000000000178e+00 106 | 2.000000000000000000e+00 7.599999999999999645e+00 3.000000000000000000e+00 6.599999999999999645e+00 2.100000000000000089e+00 107 | 2.000000000000000000e+00 4.900000000000000355e+00 2.500000000000000000e+00 4.500000000000000000e+00 1.699999999999999956e+00 108 | 2.000000000000000000e+00 7.299999999999999822e+00 2.899999999999999911e+00 6.299999999999999822e+00 1.800000000000000044e+00 109 | 2.000000000000000000e+00 6.700000000000000178e+00 2.500000000000000000e+00 5.799999999999999822e+00 1.800000000000000044e+00 110 | 2.000000000000000000e+00 7.200000000000000178e+00 3.600000000000000089e+00 6.099999999999999645e+00 2.500000000000000000e+00 111 | 2.000000000000000000e+00 6.500000000000000000e+00 3.200000000000000178e+00 5.099999999999999645e+00 2.000000000000000000e+00 112 | 2.000000000000000000e+00 6.400000000000000355e+00 2.700000000000000178e+00 5.299999999999999822e+00 1.899999999999999911e+00 113 | 2.000000000000000000e+00 6.799999999999999822e+00 3.000000000000000000e+00 5.500000000000000000e+00 2.100000000000000089e+00 114 | 2.000000000000000000e+00 5.700000000000000178e+00 2.500000000000000000e+00 5.000000000000000000e+00 2.000000000000000000e+00 115 | 2.000000000000000000e+00 5.799999999999999822e+00 2.799999999999999822e+00 5.099999999999999645e+00 2.399999999999999911e+00 116 | 2.000000000000000000e+00 6.400000000000000355e+00 3.200000000000000178e+00 5.299999999999999822e+00 2.299999999999999822e+00 117 | 2.000000000000000000e+00 6.500000000000000000e+00 3.000000000000000000e+00 5.500000000000000000e+00 1.800000000000000044e+00 118 | 2.000000000000000000e+00 7.700000000000000178e+00 3.799999999999999822e+00 6.700000000000000178e+00 2.200000000000000178e+00 119 | 2.000000000000000000e+00 7.700000000000000178e+00 2.600000000000000089e+00 6.900000000000000355e+00 2.299999999999999822e+00 120 | 2.000000000000000000e+00 6.000000000000000000e+00 2.200000000000000178e+00 5.000000000000000000e+00 1.500000000000000000e+00 121 | 2.000000000000000000e+00 6.900000000000000355e+00 3.200000000000000178e+00 5.700000000000000178e+00 2.299999999999999822e+00 122 | 2.000000000000000000e+00 5.599999999999999645e+00 2.799999999999999822e+00 4.900000000000000355e+00 2.000000000000000000e+00 123 | 2.000000000000000000e+00 7.700000000000000178e+00 2.799999999999999822e+00 6.700000000000000178e+00 2.000000000000000000e+00 124 | 2.000000000000000000e+00 6.299999999999999822e+00 2.700000000000000178e+00 4.900000000000000355e+00 1.800000000000000044e+00 125 | 2.000000000000000000e+00 6.700000000000000178e+00 3.299999999999999822e+00 5.700000000000000178e+00 2.100000000000000089e+00 126 | 2.000000000000000000e+00 7.200000000000000178e+00 3.200000000000000178e+00 6.000000000000000000e+00 1.800000000000000044e+00 127 | 2.000000000000000000e+00 6.200000000000000178e+00 2.799999999999999822e+00 4.799999999999999822e+00 1.800000000000000044e+00 128 | 2.000000000000000000e+00 6.099999999999999645e+00 3.000000000000000000e+00 4.900000000000000355e+00 1.800000000000000044e+00 129 | 2.000000000000000000e+00 6.400000000000000355e+00 2.799999999999999822e+00 5.599999999999999645e+00 2.100000000000000089e+00 130 | 2.000000000000000000e+00 7.200000000000000178e+00 3.000000000000000000e+00 5.799999999999999822e+00 1.600000000000000089e+00 131 | 2.000000000000000000e+00 7.400000000000000355e+00 2.799999999999999822e+00 6.099999999999999645e+00 1.899999999999999911e+00 132 | 2.000000000000000000e+00 7.900000000000000355e+00 3.799999999999999822e+00 6.400000000000000355e+00 2.000000000000000000e+00 133 | 2.000000000000000000e+00 6.400000000000000355e+00 2.799999999999999822e+00 5.599999999999999645e+00 2.200000000000000178e+00 134 | 2.000000000000000000e+00 6.299999999999999822e+00 2.799999999999999822e+00 5.099999999999999645e+00 1.500000000000000000e+00 135 | 2.000000000000000000e+00 6.099999999999999645e+00 2.600000000000000089e+00 5.599999999999999645e+00 1.399999999999999911e+00 136 | 2.000000000000000000e+00 7.700000000000000178e+00 3.000000000000000000e+00 6.099999999999999645e+00 2.299999999999999822e+00 137 | 2.000000000000000000e+00 6.299999999999999822e+00 3.399999999999999911e+00 5.599999999999999645e+00 2.399999999999999911e+00 138 | 2.000000000000000000e+00 6.400000000000000355e+00 3.100000000000000089e+00 5.500000000000000000e+00 1.800000000000000044e+00 139 | 2.000000000000000000e+00 6.000000000000000000e+00 3.000000000000000000e+00 4.799999999999999822e+00 1.800000000000000044e+00 140 | 2.000000000000000000e+00 6.900000000000000355e+00 3.100000000000000089e+00 5.400000000000000355e+00 2.100000000000000089e+00 141 | 2.000000000000000000e+00 6.700000000000000178e+00 3.100000000000000089e+00 5.599999999999999645e+00 2.399999999999999911e+00 142 | 2.000000000000000000e+00 6.900000000000000355e+00 3.100000000000000089e+00 5.099999999999999645e+00 2.299999999999999822e+00 143 | 2.000000000000000000e+00 5.799999999999999822e+00 2.700000000000000178e+00 5.099999999999999645e+00 1.899999999999999911e+00 144 | 2.000000000000000000e+00 6.799999999999999822e+00 3.200000000000000178e+00 5.900000000000000355e+00 2.299999999999999822e+00 145 | 2.000000000000000000e+00 6.700000000000000178e+00 3.299999999999999822e+00 5.700000000000000178e+00 2.500000000000000000e+00 146 | 2.000000000000000000e+00 6.700000000000000178e+00 3.000000000000000000e+00 5.200000000000000178e+00 2.299999999999999822e+00 147 | 2.000000000000000000e+00 6.299999999999999822e+00 2.500000000000000000e+00 5.000000000000000000e+00 1.899999999999999911e+00 148 | 2.000000000000000000e+00 6.500000000000000000e+00 3.000000000000000000e+00 5.200000000000000178e+00 2.000000000000000000e+00 149 | 2.000000000000000000e+00 6.200000000000000178e+00 3.399999999999999911e+00 5.400000000000000355e+00 2.299999999999999822e+00 150 | 2.000000000000000000e+00 5.900000000000000355e+00 3.000000000000000000e+00 5.099999999999999645e+00 1.800000000000000044e+00 151 | -------------------------------------------------------------------------------- /tests/test_classification.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # -*- coding: utf-8 -*- 3 | 4 | """ 5 | test_classification 6 | ---------------------------------- 7 | 8 | Datasets used were from sklearn.datasets 9 | 10 | import numpy as np 11 | from sklearn.datasets import load_boston, load_diabetes, load_iris 12 | 13 | data = load_iris() 14 | data = np.hstack((data["target"].reshape(-1, 1), data["data"])) 15 | np.savetxt("iris.data", data) 16 | 17 | """ 18 | 19 | import elm 20 | import numpy as np 21 | 22 | 23 | def test_elmk_iris(): 24 | 25 | # load dataset 26 | data = elm.read("tests/data/iris.data") 27 | 28 | # create a regressor 29 | elmk = elm.ELMKernel() 30 | 31 | try: 32 | # search for best parameter for this dataset 33 | elmk.search_param(data, cv="kfold", of="accuracy", eval=10) 34 | 35 | # split data in training and testing sets 36 | tr_set, te_set = elm.split_sets(data, training_percent=.8, perm=True) 37 | 38 | #train and test 39 | tr_result = elmk.train(tr_set) 40 | te_result = elmk.test(te_set) 41 | except: 42 | ERROR = 1 43 | else: 44 | ERROR = 0 45 | 46 | assert (ERROR == 0) 47 | 48 | # te_result.predicted_targets = np.round(te_result.predicted_targets) 49 | # assert (te_result.get_accuracy() <= 20) 50 | 51 | 52 | def test_elmr_iris(): 53 | 54 | # load dataset 55 | data = elm.read("tests/data/iris.data") 56 | 57 | # create a regressor 58 | elmr = elm.ELMRandom() 59 | 60 | try: 61 | # search for best parameter for this dataset 62 | elmr.search_param(data, cv="kfold", of="accuracy", eval=10) 63 | 64 | # split data in training and testing sets 65 | tr_set, te_set = elm.split_sets(data, training_percent=.8, perm=True) 66 | 67 | #train and test 68 | tr_result = elmr.train(tr_set) 69 | te_result = elmr.test(te_set) 70 | except: 71 | ERROR = 1 72 | else: 73 | ERROR = 0 74 | 75 | assert (ERROR == 0) 76 | 77 | # assert (te_result.get_rmse() <= 20) 78 | -------------------------------------------------------------------------------- /tests/test_regression.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # -*- coding: utf-8 -*- 3 | 4 | """ 5 | test_regression 6 | ---------------------------------- 7 | 8 | Datasets used were from sklearn.datasets 9 | 10 | import numpy as np 11 | from sklearn.datasets import load_boston, load_diabetes 12 | 13 | data = load_boston() 14 | data = np.hstack((data["target"].reshape(-1, 1), data["data"])) 15 | np.savetxt("boston.data", data) 16 | 17 | data = load_diabetes() 18 | data = np.hstack((data["target"].reshape(-1, 1), data["data"])) 19 | np.savetxt("diabetes.data", data) 20 | 21 | """ 22 | 23 | import elm 24 | 25 | 26 | def test_elmk_boston(): 27 | 28 | # load dataset 29 | data = elm.read("tests/data/boston.data") 30 | 31 | # create a regressor 32 | elmk = elm.ELMKernel() 33 | 34 | try: 35 | # search for best parameter for this dataset 36 | # elmk.search_param(data, cv="kfold", of="rmse") 37 | 38 | # split data in training and testing sets 39 | tr_set, te_set = elm.split_sets(data, training_percent=.8, perm=True) 40 | 41 | #train and test 42 | tr_result = elmk.train(tr_set) 43 | te_result = elmk.test(te_set) 44 | except: 45 | ERROR = 1 46 | else: 47 | ERROR = 0 48 | 49 | assert (ERROR == 0) 50 | # te_result.get_rmse() 51 | # assert (te_result.get_rmse() <= 20) 52 | 53 | 54 | def test_elmk_diabetes(): 55 | 56 | # load dataset 57 | data = elm.read("tests/data/diabetes.data") 58 | 59 | # create a regressor 60 | elmk = elm.ELMKernel() 61 | 62 | try: 63 | # search for best parameter for this dataset 64 | # elmk.search_param(data, cv="kfold", of="rmse") 65 | 66 | # split data in training and testing sets 67 | tr_set, te_set = elm.split_sets(data, training_percent=.8, perm=True) 68 | 69 | #train and test 70 | tr_result = elmk.train(tr_set) 71 | te_result = elmk.test(te_set) 72 | 73 | except: 74 | ERROR = 1 75 | else: 76 | ERROR = 0 77 | 78 | assert (ERROR == 0) 79 | # assert (te_result.get_rmse() <= 70) 80 | 81 | 82 | def test_elmr_boston(): 83 | 84 | # load dataset 85 | data = elm.read("tests/data/boston.data") 86 | 87 | # create a regressor 88 | elmr = elm.ELMRandom() 89 | 90 | try: 91 | # search for best parameter for this dataset 92 | # elmr.search_param(data, cv="kfold", of="rmse") 93 | 94 | # split data in training and testing sets 95 | tr_set, te_set = elm.split_sets(data, training_percent=.8, perm=True) 96 | 97 | #train and test 98 | tr_result = elmr.train(tr_set) 99 | te_result = elmr.test(te_set) 100 | 101 | except: 102 | ERROR = 1 103 | else: 104 | ERROR = 0 105 | 106 | assert (ERROR == 0) 107 | # assert (te_result.get_rmse() <= 20) 108 | 109 | 110 | def test_elmr_diabetes(): 111 | 112 | # load dataset 113 | data = elm.read("tests/data/diabetes.data") 114 | 115 | # create a regressor 116 | elmr = elm.ELMRandom() 117 | 118 | try: 119 | # search for best parameter for this dataset 120 | # elmr.search_param(data, cv="kfold", of="rmse") 121 | 122 | # split data in training and testing sets 123 | tr_set, te_set = elm.split_sets(data, training_percent=.8, perm=True) 124 | 125 | #train and test 126 | tr_result = elmr.train(tr_set) 127 | te_result = elmr.test(te_set) 128 | 129 | except: 130 | ERROR = 1 131 | else: 132 | ERROR = 0 133 | 134 | assert (ERROR == 0) 135 | # assert (te_result.get_rmse() <= 70) 136 | -------------------------------------------------------------------------------- /tests/test_xor.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # -*- coding: utf-8 -*- 3 | 4 | """ 5 | test_xor 6 | ---------------------------------- 7 | 8 | """ 9 | 10 | from elm import ELMKernel, ELMRandom 11 | import numpy as np 12 | 13 | # output | input 14 | DATABASE_XOR = np.array([[-1, -1, -1], 15 | [1, -1, 1], 16 | [1, 1, -1], 17 | [-1, 1, 1] 18 | ]) 19 | 20 | 21 | def test_xor_elmk(): 22 | 23 | elmk = ELMKernel() 24 | 25 | elmk.train(DATABASE_XOR) 26 | te_result = elmk.test(DATABASE_XOR) 27 | predicted = te_result.predicted_targets 28 | 29 | predicted[predicted < 0] = -1 30 | predicted[predicted > 0] = 1 31 | 32 | te_result.predicted_targets = predicted 33 | 34 | assert (te_result.get_accuracy() == 1) 35 | 36 | 37 | def test_xor_elmr(): 38 | 39 | elmr = ELMRandom() 40 | 41 | elmr.train(DATABASE_XOR) 42 | te_result = elmr.test(DATABASE_XOR) 43 | predicted = te_result.predicted_targets 44 | 45 | predicted[predicted < 0] = -1 46 | predicted[predicted > 0] = 1 47 | 48 | te_result.predicted_targets = predicted 49 | assert (te_result.get_accuracy() == 1) -------------------------------------------------------------------------------- /tox.ini: -------------------------------------------------------------------------------- 1 | [tox] 2 | envlist = py27, py34 3 | 4 | [testenv] 5 | commands = {envpython} setup.py test 6 | deps = 7 | pytest 8 | -rrequirements.txt 9 | 10 | setenv= 11 | PYTHONWARNINGS=all 12 | 13 | [pytest] 14 | adopts=--doctest-modules 15 | python_files=*.py 16 | python_functions=test_ 17 | norecursedirs=.tox .git env 18 | 19 | [testenv:py27] 20 | commands= 21 | py.test tests/ --doctest-module 22 | 23 | [testenv:py34] 24 | commands= 25 | py.test tests/ --doctest-module 26 | --------------------------------------------------------------------------------