├── example ├── __init__.py ├── simple_functions │ ├── __init__.py │ ├── img │ │ ├── quick_start.png │ │ ├── sphere_mixed_figure.png │ │ ├── ackley_continuous_figure.png │ │ ├── setcover_discrete_figure.png │ │ ├── sphere_continuous_figure.png │ │ ├── sphere_discrete_order_figure.png │ │ └── ackley_continuous_noisy_figure.png │ ├── README.md │ ├── quick_start.py │ ├── discrete_opt.py │ ├── discrete_with_order_opt.py │ ├── mixed_opt.py │ ├── continuous_opt.py │ ├── opt_under_noise.py │ ├── function_test1.py │ ├── opt_with_stopping_criterion.py │ └── simple_function.py ├── sparse_regression │ ├── __init__.py │ ├── poss_opt.py │ ├── ponss_opt.py │ ├── README.md │ └── sparse_mse.py ├── direct_policy_search_for_gym │ ├── __init__.py │ ├── img │ │ └── direct_policy_search_for_gym.png │ ├── README.md │ ├── run.py │ ├── nn_model.py │ └── gym_task.py ├── sequential_random_embedding │ ├── __init__.py │ ├── img │ │ ├── minimize_sphere_sre.png │ │ └── sphere_continuous_sre.png │ ├── sphere_sre.py │ ├── README.md │ └── continuous_sre_opt.py ├── linear_classifier_using_ramploss │ ├── __init__.py │ ├── img │ │ └── ramploss.png │ ├── README.md │ └── ramploss.py └── parallel_opt │ └── continuous_opt.py ├── zoopt ├── algos │ ├── __init__.py │ ├── opt_algorithms │ │ ├── __init__.py │ │ ├── paretoopt │ │ │ ├── __init__.py │ │ │ ├── pareto_optimization.py │ │ │ └── paretoopt.py │ │ └── racos │ │ │ ├── __init__.py │ │ │ ├── racos_optimization.py │ │ │ └── racos.py │ ├── noise_handling │ │ ├── __init__.py │ │ ├── ponss.py │ │ └── ssracos.py │ └── high_dimensionality_handling │ │ ├── __init__.py │ │ └── sre_optimization.py ├── utils │ ├── __init__.py │ ├── tool_function.py │ └── zoo_global.py ├── __init__.py ├── opt.py ├── exp_opt.py ├── objective.py └── solution.py ├── docs ├── requirements.txt ├── Makefile ├── index.rst ├── make.bat ├── conf.py ├── ZOOpt │ ├── Derivative-Free Optimization.rst │ ├── Quick-Start.rst │ ├── Practical-Parameter-Settings-and-fine-tuning-tricks.rst │ └── A-Brief-Introduction-to-ZOOpt.rst ├── Examples │ ├── Optimize-a-Continuous-Function.rst │ ├── Optimize-a-Function-with-Mixed-Search-Space.rst │ ├── Optimize-a-High-dimensional-Function.rst │ ├── Optimize-a-Noisy-Function.rst │ └── Optimize-a-Discrete-Function.rst └── README.rst ├── setup.cfg ├── requirements.txt ├── img ├── quick_start.png ├── quick_start_cmd.png ├── Distributed_ZOOpt.png ├── sphere_mixed_figure.png ├── sphere_continuous_sre.png ├── ackley_continuous_figure.png ├── setcover_discrete_figure.png ├── sphere_continuous_figure.png ├── direct_policy_search_for_gym.png ├── sphere_discrete_order_figure.png └── ackley_continuous_noisy_figure.png ├── .gitignore ├── .travis.yml ├── test ├── test_parameter.py ├── test_algos │ ├── test_high_dimensionality_handling │ │ └── test_high_dimensionality_handling.py │ ├── test_opt_algorithm │ │ └── test_paretoopt │ │ │ ├── test_paretoopt.py │ │ │ └── sparse_mse.py │ └── test_noise_handling │ │ ├── test_ponss.py │ │ ├── sparse_mse.py │ │ └── test_ssracos.py ├── test_solution.py ├── test_objective.py ├── test_seed.py └── test_dimension.py ├── LICENSE.txt ├── setup.py └── README.md /example/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /zoopt/algos/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /zoopt/utils/__init__.py: 
-------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /docs/requirements.txt: -------------------------------------------------------------------------------- 1 | sphinx 2 | -------------------------------------------------------------------------------- /example/simple_functions/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /example/sparse_regression/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /zoopt/algos/opt_algorithms/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /zoopt/algos/noise_handling/__init__.py: -------------------------------------------------------------------------------- 1 | 2 | -------------------------------------------------------------------------------- /example/direct_policy_search_for_gym/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /example/sequential_random_embedding/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /zoopt/algos/opt_algorithms/paretoopt/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /zoopt/algos/opt_algorithms/racos/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /example/linear_classifier_using_ramploss/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /setup.cfg: -------------------------------------------------------------------------------- 1 | [metadata] 2 | description-file = README.md -------------------------------------------------------------------------------- /zoopt/algos/high_dimensionality_handling/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | numpy 2 | matplotlib 3 | liac-arff 4 | gym 5 | 6 | -------------------------------------------------------------------------------- /img/quick_start.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/polixir/ZOOpt/HEAD/img/quick_start.png -------------------------------------------------------------------------------- /img/quick_start_cmd.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/polixir/ZOOpt/HEAD/img/quick_start_cmd.png -------------------------------------------------------------------------------- /img/Distributed_ZOOpt.png: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/polixir/ZOOpt/HEAD/img/Distributed_ZOOpt.png -------------------------------------------------------------------------------- /img/sphere_mixed_figure.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/polixir/ZOOpt/HEAD/img/sphere_mixed_figure.png -------------------------------------------------------------------------------- /img/sphere_continuous_sre.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/polixir/ZOOpt/HEAD/img/sphere_continuous_sre.png -------------------------------------------------------------------------------- /img/ackley_continuous_figure.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/polixir/ZOOpt/HEAD/img/ackley_continuous_figure.png -------------------------------------------------------------------------------- /img/setcover_discrete_figure.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/polixir/ZOOpt/HEAD/img/setcover_discrete_figure.png -------------------------------------------------------------------------------- /img/sphere_continuous_figure.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/polixir/ZOOpt/HEAD/img/sphere_continuous_figure.png -------------------------------------------------------------------------------- /img/direct_policy_search_for_gym.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/polixir/ZOOpt/HEAD/img/direct_policy_search_for_gym.png -------------------------------------------------------------------------------- /img/sphere_discrete_order_figure.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/polixir/ZOOpt/HEAD/img/sphere_discrete_order_figure.png -------------------------------------------------------------------------------- /img/ackley_continuous_noisy_figure.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/polixir/ZOOpt/HEAD/img/ackley_continuous_noisy_figure.png -------------------------------------------------------------------------------- /example/simple_functions/img/quick_start.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/polixir/ZOOpt/HEAD/example/simple_functions/img/quick_start.png -------------------------------------------------------------------------------- /example/simple_functions/img/sphere_mixed_figure.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/polixir/ZOOpt/HEAD/example/simple_functions/img/sphere_mixed_figure.png -------------------------------------------------------------------------------- /example/linear_classifier_using_ramploss/img/ramploss.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/polixir/ZOOpt/HEAD/example/linear_classifier_using_ramploss/img/ramploss.png -------------------------------------------------------------------------------- /example/simple_functions/img/ackley_continuous_figure.png: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/polixir/ZOOpt/HEAD/example/simple_functions/img/ackley_continuous_figure.png -------------------------------------------------------------------------------- /example/simple_functions/img/setcover_discrete_figure.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/polixir/ZOOpt/HEAD/example/simple_functions/img/setcover_discrete_figure.png -------------------------------------------------------------------------------- /example/simple_functions/img/sphere_continuous_figure.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/polixir/ZOOpt/HEAD/example/simple_functions/img/sphere_continuous_figure.png -------------------------------------------------------------------------------- /example/simple_functions/img/sphere_discrete_order_figure.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/polixir/ZOOpt/HEAD/example/simple_functions/img/sphere_discrete_order_figure.png -------------------------------------------------------------------------------- /example/sequential_random_embedding/img/minimize_sphere_sre.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/polixir/ZOOpt/HEAD/example/sequential_random_embedding/img/minimize_sphere_sre.png -------------------------------------------------------------------------------- /example/simple_functions/img/ackley_continuous_noisy_figure.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/polixir/ZOOpt/HEAD/example/simple_functions/img/ackley_continuous_noisy_figure.png -------------------------------------------------------------------------------- /example/sequential_random_embedding/img/sphere_continuous_sre.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/polixir/ZOOpt/HEAD/example/sequential_random_embedding/img/sphere_continuous_sre.png -------------------------------------------------------------------------------- /example/direct_policy_search_for_gym/img/direct_policy_search_for_gym.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/polixir/ZOOpt/HEAD/example/direct_policy_search_for_gym/img/direct_policy_search_for_gym.png -------------------------------------------------------------------------------- /zoopt/__init__.py: -------------------------------------------------------------------------------- 1 | from zoopt.dimension import Dimension, Dimension2, ValueType 2 | from zoopt.objective import Objective 3 | from zoopt.opt import Opt 4 | from zoopt.parameter import Parameter 5 | from zoopt.solution import Solution 6 | from zoopt.exp_opt import ExpOpt 7 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | *.py[cod] 2 | build 3 | zoopt.egg-info 4 | .idea 5 | /example/direct_policy_search_for_gym/Theano 6 | /example/direct_policy_search_for_gym/test.py 7 | /example/simple_functions/function_test.py 8 | /doc-make 9 | /zoopt/algos/asynchronous_racos/*.py 10 | /example/simple_funcitions/asynchronous_*.py 11 | /build/* 12 | /tutorial/* 
13 | /dist/* 14 | /htmlcov/* 15 | .pytest_cache 16 | .coverage 17 | coverage.xml 18 | .cash 19 | -------------------------------------------------------------------------------- /example/sequential_random_embedding/sphere_sre.py: -------------------------------------------------------------------------------- 1 | """ 2 | This module contains an objective function sphere. 3 | 4 | Author: 5 | Yu-Ren Liu 6 | """ 7 | 8 | 9 | def sphere_sre(solution): 10 | """ 11 | Variant of the sphere function. Dimensions except the first 10 ones have limited impact on the function value. 12 | """ 13 | a = 0 14 | bias = 0.2 15 | x = solution.get_x() 16 | x1 = x[:10] 17 | x2 = x[10:] 18 | value1 = sum([(i-bias)*(i-bias) for i in x1]) 19 | value2 = 1/len(x) * sum([(i-bias)*(i-bias) for i in x2]) 20 | return value1 + value2 21 | 22 | 23 | -------------------------------------------------------------------------------- /zoopt/utils/tool_function.py: -------------------------------------------------------------------------------- 1 | """ 2 | This module contains the class ToolFunction. 3 | 4 | Author: 5 | Yu-Ren Liu, Yang Yu 6 | """ 7 | from __future__ import print_function 8 | import pickle 9 | 10 | 11 | class ToolFunction: 12 | """ 13 | This class defines some tool functions used in the project. 14 | """ 15 | def __init__(self): 16 | pass 17 | 18 | @staticmethod 19 | def log(text): 20 | """ 21 | Output logs in ZOOpt. 22 | 23 | :param text: the text content 24 | :return: no return value 25 | """ 26 | print('[zoopt] '+text) 27 | -------------------------------------------------------------------------------- /.travis.yml: -------------------------------------------------------------------------------- 1 | language: python 2 | python: 3 | - "3.5" 4 | - "3.5-dev" # 3.5 development branch 5 | - "3.6" 6 | - "3.6-dev" # 3.6 development branch 7 | - "3.7" 8 | - "3.7-dev" # 3.7 development branch 9 | - "3.8" 10 | - "3.8-dev" # 3.8 development branch 11 | # command to install dependencies 12 | install: 13 | - pip install -r requirements.txt 14 | - pip install coverage 15 | - pip install codecov 16 | 17 | # command to run tests 18 | script: 19 | - coverage run --source zoopt -m py.test 20 | 21 | branches: 22 | only: 23 | - dev 24 | - master 25 | 26 | 27 | after_success: 28 | - code_cov 29 | -------------------------------------------------------------------------------- /docs/Makefile: -------------------------------------------------------------------------------- 1 | # Minimal makefile for Sphinx documentation 2 | # 3 | 4 | # You can set these variables from the command line. 5 | SPHINXOPTS = 6 | SPHINXBUILD = sphinx-build 7 | SPHINXPROJ = ZOOpt 8 | SOURCEDIR = . 9 | BUILDDIR = build 10 | 11 | # Put it first so that "make" without argument is like "make help". 12 | help: 13 | @$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O) 14 | 15 | .PHONY: help Makefile 16 | 17 | # Catch-all target: route all unknown targets to Sphinx using the new 18 | # "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS). 19 | %: Makefile 20 | @$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O) 21 | -------------------------------------------------------------------------------- /docs/index.rst: -------------------------------------------------------------------------------- 1 | .. include:: README.rst 2 | 3 | .. 
toctree:: 4 | :caption: Tutorial of ZOOpt 5 | :maxdepth: 2 6 | 7 | ZOOpt/Derivative-Free Optimization.rst 8 | ZOOpt/Quick-Start.rst 9 | ZOOpt/A-Brief-Introduction-to-ZOOpt.rst 10 | ZOOpt/Parameters-in-ZOOpt.rst 11 | ZOOpt/Practical-Parameter-Settings-and-fine-tuning-tricks.rst 12 | 13 | .. toctree:: 14 | :caption: Examples 15 | :maxdepth: 2 16 | 17 | Examples/Optimize-a-Continuous-Function.rst 18 | Examples/Optimize-a-Discrete-Function.rst 19 | Examples/Optimize-a-Function-with-Mixed-Search-Space.rst 20 | Examples/Optimize-a-High-dimensional-Function.rst 21 | Examples/Optimize-a-Noisy-Function.rst 22 | 23 | -------------------------------------------------------------------------------- /example/simple_functions/README.md: -------------------------------------------------------------------------------- 1 | # Optimization Examples on Simple Functions 2 | 3 | Several optimization examples are defined in `continuous_opt.py`, `discrete_opt.py`, `discrete_with_order_opt.py`, `mixed_opt.py`, `opt_under_noise.py` and `quick_start.py`. 4 | 5 | Four kinds of simple functions are defined in `simple_function.py`: 6 | 7 | * `sphere`: a convex hyper-sphere function in real dimensions 8 | 9 | * `ackley`: a non-convex Ackley function with many local optima in real dimensions 10 | 11 | * `setcover`: an instance of the NP-hard minimum set cover problem in discrete dimensions 12 | 13 | * `mixed_function`: a sum-of-all-dimension function for mixed real/discrete dimensions 14 | 15 | -------------------------------------------------------------------------------- /test/test_parameter.py: -------------------------------------------------------------------------------- 1 | from zoopt import Parameter 2 | 3 | 4 | class TestParameter(object): 5 | def test_auto_set(self): 6 | par = Parameter(budget=50) 7 | assert par.get_train_size() == 4 and par.get_positive_size() == 1 and par.get_negative_size() == 3 8 | par = Parameter(budget=100) 9 | assert par.get_train_size() == 6 and par.get_positive_size() == 1 and par.get_negative_size() == 5 10 | par = Parameter(budget=1000) 11 | assert par.get_train_size() == 12 and par.get_positive_size() == 2 and par.get_negative_size() == 10 12 | par = Parameter(budget=1001) 13 | assert par.get_train_size() == 22 and par.get_positive_size() == 2 and par.get_negative_size() == 20 14 | 15 | -------------------------------------------------------------------------------- /example/linear_classifier_using_ramploss/README.md: -------------------------------------------------------------------------------- 1 | # Optimization Example on Linear Classifier using Ramp-loss 2 | 3 | Ramp-loss is a robust loss function for classification learning tasks. It is, however, non-convex. This example is derived from the following paper to minimize the ramp-loss. 4 | > Yang Yu, Hong Qian, and Yi-Qi Hu. Derivative-free optimization via classification. In: Proceedings of the 30th AAAI Conference on Artificial Intelligence (AAAI'16), Phoenix, AZ, 2016, pp.2286-2292. 5 | 6 | In `ramploss.py`, a class `RampLoss` is defined to handle the loss function calculation and file reading. You can run this file to get the results of this example. `ionosphere.arff` is an example data set.
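For orientation, here is a minimal sketch of how such an objective is typically wired into ZOOpt. The `RampLoss` method names used below (`loss`, `get_dim`) are assumptions made by analogy with the sparse-regression example, not taken from `ramploss.py`; check that file for the actual interface.

```python
# Hypothetical sketch: the RampLoss interface (loss, get_dim) is assumed by
# analogy with SparseMSE in example/sparse_regression, not read from ramploss.py.
from zoopt import Objective, Parameter, ExpOpt
from ramploss import RampLoss

rl = RampLoss('ionosphere.arff')                       # load the example data set
objective = Objective(func=rl.loss, dim=rl.get_dim())  # minimize the ramp-loss
parameter = Parameter(budget=100 * rl.get_dim().get_size())
solution_list = ExpOpt.min(objective, parameter, repeat=1, plot=True,
                           plot_file="img/ramploss.png")
```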
 7 | 8 | __Package requirement:__ 9 | * liac-arff: https://pypi.python.org/pypi/liac-arff 10 | 11 | -------------------------------------------------------------------------------- /example/sparse_regression/poss_opt.py: -------------------------------------------------------------------------------- 1 | """ 2 | An example of using POSS to optimize a subset selection problem. 3 | """ 4 | 5 | from sparse_mse import SparseMSE 6 | from zoopt import Objective, Parameter, ExpOpt 7 | from math import exp 8 | 9 | if __name__ == '__main__': 10 | # load data file 11 | mse = SparseMSE('sonar.arff') 12 | mse.set_sparsity(8) 13 | 14 | # setup objective 15 | # print(mse.get_dim().get_size()) 16 | objective = Objective(func=mse.loss, dim=mse.get_dim(), constraint=mse.constraint) 17 | parameter = Parameter(algorithm='poss', budget=2 * exp(1) * (mse.get_sparsity() ** 2) * mse.get_dim().get_size(), seed=1) 18 | 19 | # perform sparse regression with constraint |w|_0 <= k 20 | solution_list = ExpOpt.min(objective, parameter, repeat=2, plot=False) 21 | -------------------------------------------------------------------------------- /example/direct_policy_search_for_gym/README.md: -------------------------------------------------------------------------------- 1 | # Optimization Example on Direct Policy Search for Reinforcement Learning 2 | 3 | Direct policy search is a kind of reinforcement learning approach that directly optimizes the parameters of the policy to maximize the total reward. This example is derived from the following paper 4 | > Yi-Qi Hu, Hong Qian, and Yang Yu. Sequential classification-based optimization for direct policy search. In: Proceedings of the 31st AAAI Conference on Artificial Intelligence (AAAI’17), San Francisco, CA, 2017. 5 | 6 | A neural network model implemented in `nn_model.py` is used as the policy. In `gym_task.py`, reinforcement learning tasks in Gym are wrapped in a class as a derivative-free optimization problem. `run.py` provides examples of performing the policy search. 7 | 8 | __Package requirement:__ 9 | * gym: https://gym.openai.com/docs 10 | * numpy: http://www.numpy.org 11 | -------------------------------------------------------------------------------- /example/sequential_random_embedding/README.md: -------------------------------------------------------------------------------- 1 | # Optimization Example on High-dimensional Function using Sequential Random Embedding 2 | 3 | Sequential random embedding is a recently proposed method to solve high-dimensional problems. This example is derived from the following paper to minimize a synthetic function. 4 | > Hong Qian, Yi-Qi Hu and Yang Yu. Derivative-free optimization of high-dimensional non-convex functions by sequential random embeddings. In: Proceedings of the 25th International Joint Conference on Artificial Intelligence (IJCAI'16), New York, NY, 2016, pp.1946-1952. 5 | 6 | In `sphere_sre.py`, a `sphere_sre` function is defined. It is a variant of the sphere function in which the dimensions other than the first 10 have limited impact on the function value. 7 | 8 | In `continuous_sre_opt.py`, a process is defined to minimize the `sphere_sre` function. You can run this file to get the results of this example; the key setup is sketched below.
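The core of `continuous_sre_opt.py` is the `Parameter` configuration that switches on sequential random embedding. A minimal sketch with the same settings as that file:

```python
# Minimal sketch of the SRE setup from continuous_sre_opt.py: a 10000-dimensional
# problem is optimized through 5 sequential random embeddings into a
# 10-dimensional subspace, sharing a single evaluation budget.
from sphere_sre import sphere_sre
from zoopt import Dimension, Objective, Parameter, ExpOpt

dim_size = 10000
dim = Dimension(dim_size, [[-1, 1]] * dim_size, [True] * dim_size)  # real-valued dims
objective = Objective(sphere_sre, dim)
parameter = Parameter(budget=2000, high_dim_handling=True, reducedim=True,
                      num_sre=5,
                      low_dimension=Dimension(10, [[-1, 1]] * 10, [True] * 10))
solution_list = ExpOpt.min(objective, parameter, repeat=5, plot=False)
```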
 9 | -------------------------------------------------------------------------------- /example/parallel_opt/continuous_opt.py: -------------------------------------------------------------------------------- 1 | from zoopt import Objective, Dimension, Solution, Parameter, ExpOpt, Opt 2 | import numpy as np 3 | 4 | 5 | def ackley(solution): 6 | """ 7 | Ackley function for continuous optimization 8 | """ 9 | x = solution.get_x() 10 | bias = 0.2 11 | ave_seq = sum([(i - bias) * (i - bias) for i in x]) / len(x) 12 | ave_cos = sum([np.cos(2.0*np.pi*(i-bias)) for i in x]) / len(x) 13 | value = -20 * np.exp(-0.2 * np.sqrt(ave_seq)) - np.exp(ave_cos) + 20.0 + np.e 14 | return value 15 | 16 | 17 | if __name__ == '__main__': 18 | dim = 100 # dimension 19 | objective = Objective(ackley, Dimension(dim, [[-1, 1]] * dim, [True] * dim)) # setup objective 20 | parameter = Parameter(budget=10000, parallel=True, server_num=3) # optimize in parallel on 3 evaluation servers 21 | sol = Opt.min(objective, parameter) 22 | print(sol.get_x()) 23 | print(sol.get_value()) 24 | -------------------------------------------------------------------------------- /example/simple_functions/quick_start.py: -------------------------------------------------------------------------------- 1 | """ 2 | This file contains an example of how to optimize the continuous Ackley function. 3 | 4 | Author: 5 | Yu-Ren Liu, Xiong-Hui Chen 6 | """ 7 | 8 | from zoopt import Dimension, Objective, Parameter, ExpOpt, Opt, Solution 9 | from simple_function import ackley 10 | 11 | if __name__ == '__main__': 12 | dim = 100 # dimension 13 | objective = Objective(ackley, Dimension(dim, [[-1, 1]] * dim, [True] * dim)) # setup objective 14 | parameter = Parameter(budget=100 * dim, intermediate_result=True, intermediate_freq=1000) 15 | # parameter = Parameter(budget=100 * dim, init_samples=[Solution([0] * 100)]) # init with init_samples 16 | solution = Opt.min(objective, parameter) 17 | solution.print_solution() 18 | solution_list = ExpOpt.min(objective, parameter, repeat=1, plot=True, plot_file="img/quick_start.png") 19 | for solution in solution_list: 20 | x = solution.get_x() 21 | value = solution.get_value() 22 | print(x, value) -------------------------------------------------------------------------------- /docs/make.bat: -------------------------------------------------------------------------------- 1 | @ECHO OFF 2 | 3 | pushd %~dp0 4 | 5 | REM Command file for Sphinx documentation 6 | 7 | if "%SPHINXBUILD%" == "" ( 8 | set SPHINXBUILD=python -msphinx 9 | ) 10 | set SPHINXOPTS= 11 | set SPHINXBUILD=sphinx-build 12 | set SOURCEDIR=. 13 | set BUILDDIR=build 14 | set SPHINXPROJ=ReadtheDocsSphinxTheme 15 | 16 | if "%1" == "" goto help 17 | 18 | %SPHINXBUILD% >NUL 2>NUL 19 | if errorlevel 9009 ( 20 | echo. 21 | echo.The Sphinx module was not found. Make sure you have Sphinx installed, 22 | echo.then set the SPHINXBUILD environment variable to point to the full 23 | echo.path of the 'sphinx-build' executable. Alternatively you may add the 24 | echo.Sphinx directory to PATH. 25 | echo.
26 | echo.If you don't have Sphinx installed, grab it from 27 | echo.http://sphinx-doc.org/ 28 | exit /b 1 29 | ) 30 | 31 | %SPHINXBUILD% -M %1 %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% 32 | goto end 33 | 34 | :help 35 | %SPHINXBUILD% -M help %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% 36 | 37 | :end 38 | popd 39 | -------------------------------------------------------------------------------- /example/sparse_regression/ponss_opt.py: -------------------------------------------------------------------------------- 1 | """ 2 | An example of using PONSS to optimize a noisy subset selection problem. 3 | """ 4 | 5 | from sparse_mse import SparseMSE 6 | from zoopt import Objective, Parameter, ExpOpt 7 | from math import exp 8 | 9 | if __name__ == '__main__': 10 | # load data file 11 | mse = SparseMSE('sonar.arff') 12 | mse.set_sparsity(8) 13 | 14 | # setup objective 15 | objective = Objective(func=mse.loss, dim=mse.get_dim(), constraint=mse.constraint) 16 | # ponss_theta and ponss_b are parameters used in PONSS algorithm and should be provided by users. ponss_theta stands 17 | # for the threshold. ponss_b limits the number of solutions in the population set. 18 | parameter = Parameter(algorithm='poss', noise_handling=True, ponss=True, ponss_theta=0.5, ponss_b=mse.get_k(), 19 | budget=2 * exp(1) * (mse.get_sparsity() ** 2) * mse.get_dim().get_size(), seed=1, 20 | intermediate_result=True, intermediate_freq=100) 21 | 22 | # perform sparse regression with constraint |w|_0 <= k 23 | solution_list = ExpOpt.min(objective, parameter, repeat=1, plot=False) 24 | -------------------------------------------------------------------------------- /example/simple_functions/discrete_opt.py: -------------------------------------------------------------------------------- 1 | """ 2 | This file contains examples of optimizing discrete objective function. 3 | 4 | Author: 5 | Yu-Ren Liu 6 | """ 7 | 8 | from simple_function import SetCover 9 | from zoopt import Dimension, Objective, Parameter, ExpOpt 10 | 11 | 12 | def minimize_setcover_discrete(): 13 | """ 14 | Discrete optimization example of minimizing setcover problem. 15 | 16 | :return: no return value 17 | """ 18 | problem = SetCover() 19 | dim = problem.dim # the dim is prepared by the class 20 | objective = Objective(problem.fx, dim) # form up the objective function 21 | budget = 100 * dim.get_size() # number of calls to the objective function 22 | # if autoset is False, you should define train_size, positive_size, negative_size on your own 23 | parameter = Parameter(budget=budget, autoset=False) 24 | parameter.set_train_size(6) 25 | parameter.set_positive_size(1) 26 | parameter.set_negative_size(5) 27 | 28 | ExpOpt.min(objective, parameter, repeat=10, best_n=5, plot=True, plot_file="img/setcover_discrete_figure.png") 29 | 30 | 31 | if __name__ == '__main__': 32 | minimize_setcover_discrete() 33 | -------------------------------------------------------------------------------- /LICENSE.txt: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | COPYRIGHT 4 | ========= 5 | 6 | Copyright (c) 2017 LAMDA (http://lamda.nju.edu.cn), Nanjing University, China 7 | All rights reserved. 
8 | 9 | LICENSE 10 | ======= 11 | 12 | Permission is hereby granted, free of charge, to any person obtaining a copy 13 | of this software and associated documentation files (the “Software”), to deal 14 | in the Software without restriction, including without limitation the rights 15 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 16 | copies of the Software, and to permit persons to whom the Software is 17 | furnished to do so, subject to the following conditions: 18 | 19 | The above copyright notice and this permission notice shall be included in 20 | all copies or substantial portions of the Software. 21 | 22 | THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 23 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 24 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 25 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 26 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 27 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN 28 | THE SOFTWARE. 29 | -------------------------------------------------------------------------------- /setup.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # coding=utf-8 3 | 4 | from setuptools import setup, find_packages 5 | 6 | setup( 7 | name='zoopt', 8 | version='0.4.2', 9 | description=( 10 | 'A Python Package for Zeroth-Order Optimization' 11 | ), 12 | author='Yang Yu', 13 | author_email='yuy@nju.edu.cn', 14 | maintainer='Yu-Ren Liu, Xiong-Hui Chen, Yi-Qi Hu, Chao Feng, Yang Yu', 15 | license='MIT License', 16 | packages=find_packages(), 17 | platforms=["all"], 18 | url='https://github.com/polixir/ZOOpt', 19 | classifiers=[ 20 | 'Development Status :: 4 - Beta', 21 | 'Operating System :: OS Independent', 22 | 'Intended Audience :: Developers', 23 | 'License :: OSI Approved :: MIT License', 24 | 'Programming Language :: Python', 25 | 'Programming Language :: Python :: Implementation', 26 | 'Programming Language :: Python :: 3', 27 | 'Programming Language :: Python :: 3.5', 28 | 'Programming Language :: Python :: 3.6', 29 | 'Programming Language :: Python :: 3.7', 30 | 'Programming Language :: Python :: 3.8', 31 | 'Topic :: Software Development :: Libraries' 32 | ], 33 | install_requires=[ 34 | 'numpy', 35 | 'matplotlib', 36 | ] 37 | ) 38 | -------------------------------------------------------------------------------- /example/simple_functions/discrete_with_order_opt.py: -------------------------------------------------------------------------------- 1 | """ 2 | This file contains examples of optimizing discrete objective function with ordered search space. 3 | 4 | Author: 5 | Yu-Ren Liu 6 | """ 7 | 8 | from simple_function import sphere_discrete_order 9 | from zoopt import Dimension, Objective, Parameter, ExpOpt 10 | 11 | 12 | def minimize_sphere_discrete_order(): 13 | """ 14 | Discrete optimization example of minimizing the sphere function, which has ordered search space. 
15 | 16 | :return: no return value 17 | """ 18 | dim_size = 100 # dimensions 19 | dim_regs = [[-10, 10]] * dim_size # dimension range 20 | dim_tys = [False] * dim_size # dimension type : integer 21 | dim_order = [True] * dim_size 22 | dim = Dimension(dim_size, dim_regs, dim_tys, order=dim_order) # form up the dimension object 23 | objective = Objective(sphere_discrete_order, dim) # form up the objective function 24 | 25 | # setup algorithm parameters 26 | budget = 10000 # number of calls to the objective function 27 | parameter = Parameter(budget=budget, uncertain_bits=1) 28 | 29 | ExpOpt.min(objective, parameter, repeat=1, plot=True, plot_file="img/sphere_discrete_order_figure.png") 30 | 31 | 32 | if __name__ == '__main__': 33 | minimize_sphere_discrete_order() 34 | -------------------------------------------------------------------------------- /example/sequential_random_embedding/continuous_sre_opt.py: -------------------------------------------------------------------------------- 1 | """ 2 | This module contains an example of optimizing high-dimensional sphere function with sequential random embedding. 3 | 4 | Author: 5 | Yu-Ren Liu 6 | """ 7 | 8 | from sphere_sre import sphere_sre 9 | from zoopt import Dimension, Objective, Parameter, ExpOpt 10 | 11 | 12 | def minimize_sphere_sre(): 13 | """ 14 | Example of minimizing high-dimensional sphere function with sequential random embedding. 15 | 16 | :return: no return value 17 | """ 18 | 19 | dim_size = 10000 # dimensions 20 | dim_regs = [[-1, 1]] * dim_size # dimension range 21 | dim_tys = [True] * dim_size # dimension type : real 22 | dim = Dimension(dim_size, dim_regs, dim_tys) # form up the dimension object 23 | objective = Objective(sphere_sre, dim) # form up the objective function 24 | 25 | # setup algorithm parameters 26 | budget = 2000 # number of calls to the objective function 27 | parameter = Parameter(budget=budget, high_dim_handling=True, reducedim=True, num_sre=5, 28 | low_dimension=Dimension(10, [[-1, 1]] * 10, [True] * 10)) 29 | solution_list = ExpOpt.min(objective, parameter, repeat=5, plot=False, plot_file="img/minimize_sphere_sre.png") 30 | 31 | 32 | if __name__ == "__main__": 33 | minimize_sphere_sre() 34 | -------------------------------------------------------------------------------- /zoopt/utils/zoo_global.py: -------------------------------------------------------------------------------- 1 | """ 2 | This module contains the class Global. 3 | 4 | Author: 5 | Yu-Ren Liu 6 | 7 | Updated by: 8 | Ze-Wen Li 9 | """ 10 | 11 | from random import Random 12 | import numpy as np 13 | 14 | 15 | class Global: 16 | """ 17 | This class defines global variables used in all algorithms. 18 | """ 19 | 20 | def __init__(self): 21 | """ 22 | Initialize rand and precision. 23 | """ 24 | # rand is the random object used by all files 25 | self.precision = 1e-17 26 | self.float_precisions = [] 27 | # rand.seed(100) 28 | 29 | def set_seed(self, seed): 30 | """ 31 | Set random seed. 32 | 33 | :param seed: random seed 34 | :return: no return value 35 | """ 36 | np.random.seed(seed) 37 | return 38 | 39 | def set_precision(self, my_precision): 40 | """ 41 | Set precision, precision is used to judge whether two floats are equal. 
42 | 43 | :param my_precision: precision 44 | :return: no return value 45 | """ 46 | self.precision = my_precision 47 | return 48 | 49 | 50 | gl = Global() 51 | # constants 52 | # pos_inf = np.inf 53 | # neg_inf = -np.inf 54 | # nan = np.nan 55 | pos_inf = float('Inf') 56 | neg_inf = float('-Inf') 57 | nan = float('Nan') 58 | -------------------------------------------------------------------------------- /zoopt/algos/opt_algorithms/paretoopt/pareto_optimization.py: -------------------------------------------------------------------------------- 1 | 2 | """ 3 | The class ParetoOptimization is a wrapper of Pareto optimization methods, even though currently there is only the canonical Pareto optimization method 4 | 5 | Author: 6 | Yu-Ren Liu 7 | 8 | """ 9 | 10 | from zoopt.algos.noise_handling.ponss import PONSS 11 | from zoopt.algos.opt_algorithms.paretoopt.paretoopt import ParetoOpt 12 | 13 | 14 | class ParetoOptimization: 15 | """ 16 | Pareto optimization. 17 | """ 18 | 19 | def __init__(self): 20 | self.__best_solution = None 21 | self.__algorithm = None 22 | 23 | def clear(self): 24 | self.__best_solution = None 25 | self.__algorithm = None 26 | 27 | def opt(self, objective, parameter): 28 | """ 29 | The optimization procedure. 30 | 31 | :param objective: an Objective object 32 | :param parameter: a Parameter object 33 | :return: the best solution 34 | """ 35 | self.clear() 36 | if parameter.get_noise_handling() is True and parameter.get_ponss() is True: 37 | self.__algorithm = PONSS() 38 | else: 39 | self.__algorithm = ParetoOpt() 40 | self.__best_solution = self.__algorithm.opt(objective, parameter) 41 | return self.__best_solution 42 | 43 | def get_best_sol(self): 44 | return self.__best_solution 45 | -------------------------------------------------------------------------------- /test/test_algos/test_high_dimensionality_handling/test_high_dimensionality_handling.py: -------------------------------------------------------------------------------- 1 | from zoopt import Dimension, Objective, Parameter, Opt 2 | 3 | 4 | def sphere_sre(solution): 5 | """ 6 | Variant of the sphere function. Dimensions except the first 10 ones have limited impact on the function value. 7 | """ 8 | a = 0 9 | bias = 0.2 10 | x = solution.get_x() 11 | x1 = x[:10] 12 | x2 = x[10:] 13 | value1 = sum([(i-bias)*(i-bias) for i in x1]) 14 | value2 = 1/len(x) * sum([(i-bias)*(i-bias) for i in x2]) 15 | return value1 + value2 16 | 17 | 18 | class TestHighDimensionalityHandling(object): 19 | def test_performance(self): 20 | dim_size = 10000 # dimensions 21 | dim_regs = [[-1, 1]] * dim_size # dimension range 22 | dim_tys = [True] * dim_size # dimension type : real 23 | dim = Dimension(dim_size, dim_regs, dim_tys) # form up the dimension object 24 | objective = Objective(sphere_sre, dim) # form up the objective function 25 | 26 | # setup algorithm parameters 27 | budget = 2000 # number of calls to the objective function 28 | parameter = Parameter(budget=budget, high_dim_handling=True, reducedim=True, num_sre=5, 29 | low_dimension=Dimension(10, [[-1, 1]] * 10, [True] * 10)) 30 | solution = Opt.min(objective, parameter) 31 | assert solution.get_value() < 0.3 -------------------------------------------------------------------------------- /example/sparse_regression/README.md: -------------------------------------------------------------------------------- 1 | # Optimization Examples on Sparse Regression 2 | 3 | `poss_opt.py` demonstrates the example of sparse regression using Pareto optimization. 
The example is derived from the following paper 4 | > Chao Qian, Yang Yu and Zhi-Hua Zhou. Subset selection by Pareto optimization. In: Advances in Neural Information Processing Systems 28 (NIPS'15), Montreal, Canada, 2015. 5 | 6 | `ponss_opt.py` demonstrates the example of sparse regression using Pareto optimization with a noise-aware strategy. The example is derived from the following paper 7 | > Chao Qian, Jing-Cheng Shi, Yang Yu, Ke Tang, and Zhi-Hua Zhou. Subset selection under noise. In: Advances in Neural Information Processing Systems 30 (NIPS'17), Long Beach, CA, 2017. 8 | 9 | For sparse regression, the objective is to learn a linear classifier _w_ minimizing the mean squared error, while the number of non-zero elements of _w_ should be no larger than _k_, which is a sparsity requirement. 10 | 11 | The objective function can be written as 12 | ``` 13 | min_w mse(w) s.t. ||w||_0 <= k 14 | ``` 15 | 16 | These examples show how to solve this problem using a subset selection algorithm called Pareto optimization, which has a better theoretical guarantee than the greedy algorithm. Details can be found in the above papers. 17 | 18 | __Package requirement:__ 19 | * liac-arff: https://pypi.python.org/pypi/liac-arff 20 | * numpy: http://www.numpy.org -------------------------------------------------------------------------------- /example/simple_functions/mixed_opt.py: -------------------------------------------------------------------------------- 1 | """ 2 | This file contains an example of optimizing a function with a mixed search space (continuous and discrete). 3 | 4 | Author: 5 | Yu-Ren Liu 6 | """ 7 | from simple_function import sphere_mixed 8 | from zoopt import Dimension, Objective, Parameter, ExpOpt 9 | 10 | 11 | # mixed optimization 12 | def minimize_sphere_mixed(): 13 | """ 14 | Mixed optimization example of minimizing the sphere function, which has a mixed search space. 15 | 16 | :return: no return value 17 | """ 18 | 19 | # setup optimization problem 20 | dim_size = 100 21 | dim_regs = [] 22 | dim_tys = [] 23 | # In this example, the search space is discrete if the dimension index is odd; otherwise, the search space 24 | # is continuous.
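# (Concretely: dimension 0 is continuous in [0, 1], dimension 1 takes integer
# values in [0, 100], dimension 2 is continuous again, and so on.)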
25 | for i in range(dim_size): 26 | if i % 2 == 0: 27 | dim_regs.append([0, 1]) 28 | dim_tys.append(True) 29 | else: 30 | dim_regs.append([0, 100]) 31 | dim_tys.append(False) 32 | dim = Dimension(dim_size, dim_regs, dim_tys) 33 | objective = Objective(sphere_mixed, dim) # form up the objective function 34 | budget = 100 * dim_size # number of calls to the objective function 35 | parameter = Parameter(budget=budget) 36 | 37 | solution_list = ExpOpt.min(objective, parameter, repeat=1, plot=True, plot_file="img/sphere_mixed_figure.png") 38 | 39 | if __name__ == '__main__': 40 | minimize_sphere_mixed() 41 | -------------------------------------------------------------------------------- /test/test_algos/test_opt_algorithm/test_paretoopt/test_paretoopt.py: -------------------------------------------------------------------------------- 1 | from zoopt.algos.opt_algorithms.paretoopt.paretoopt import ParetoOpt 2 | from zoopt import Objective, Parameter, Opt 3 | from math import exp 4 | from sparse_mse import SparseMSE 5 | 6 | 7 | class TestParetoOpt(object): 8 | def test_mutation(self): 9 | a = [0, 1, 0, 1] 10 | n = 4 11 | res = ParetoOpt.mutation(a, n) 12 | assert res != a 13 | 14 | # def test_performance(self): 15 | # mse = SparseMSE('test/test_algos/test_opt_algorithm/test_paretoopt/sonar.arff') 16 | # mse.set_sparsity(8) 17 | # objective = Objective(func=mse.loss, dim=mse.get_dim(), constraint=mse.constraint) 18 | # parameter = Parameter(algorithm='poss', 19 | # budget=2 * exp(1) * (mse.get_sparsity() ** 2) * mse.get_dim().get_size(), seed=1) 20 | # solution = Opt.min(objective, parameter) 21 | # assert solution.get_value()[0] < 0.6 22 | # # PONSS 23 | # mse = SparseMSE('test/test_algos/test_opt_algorithm/test_paretoopt/sonar.arff') 24 | # mse.set_sparsity(8) 25 | # objective = Objective(func=mse.loss, dim=mse.get_dim(), constraint=mse.constraint) 26 | # parameter = Parameter(algorithm='poss', noise_handling=True, ponss=True, ponss_theta=0.5, ponss_b=mse.get_k(), 27 | # budget=2 * exp(1) * (mse.get_sparsity() ** 2) * mse.get_dim().get_size(), seed=1, 28 | # intermediate_result=True, intermediate_freq=100) 29 | # solution = Opt.min(objective, parameter) 30 | # assert solution.get_value()[0] < 0.7 31 | -------------------------------------------------------------------------------- /test/test_solution.py: -------------------------------------------------------------------------------- 1 | from zoopt import Solution 2 | 3 | 4 | class TestSolution(object): 5 | def test_is_equal(self): 6 | sol1 = Solution(x=[1, 2, 3]) 7 | sol2 = Solution(x=[1, 3, 4]) 8 | assert sol1.is_equal(sol2) is False 9 | assert sol1.is_equal(sol1) is True 10 | 11 | def test_deep_copy(self): 12 | sol1 = Solution(x=[1, 2, 3]) 13 | sol2 = sol1.deep_copy() 14 | assert sol1.is_equal(sol2) 15 | 16 | def test_exist_equal(self): 17 | sol1 = Solution(x=[1, 2, 3]) 18 | sol2 = Solution(x=[1, 3, 4]) 19 | sol3 = Solution(x=[1, 5, 6]) 20 | sol_set = [sol1, sol2] 21 | assert sol1.exist_equal(sol_set) is True 22 | assert sol3.exist_equal(sol_set) is False 23 | 24 | def test_deep_copy_set(self): 25 | sol1 = Solution(x=[1, 2, 3]) 26 | sol2 = Solution(x=[1, 3, 4]) 27 | sol3 = Solution(x=[1, 5, 6]) 28 | sol_set_1 = [sol1, sol2, sol3] 29 | sol_set_2 = Solution.deep_copy_set(sol_set_1) 30 | if len(sol_set_1) != len(sol_set_2): 31 | assert 0 32 | for i in range(len(sol_set_1)): 33 | if sol_set_1[i].is_equal(sol_set_2[i]) is False: 34 | assert 0 35 | assert 1 36 | 37 | def test_find_maximum_and_minimum(self): 38 | sol1 = Solution(x=[1, 2, 3], 
value=1) 39 | sol2 = Solution(x=[1, 3, 4], value=2) 40 | sol3 = Solution(x=[1, 5, 6], value=3) 41 | sol_set = [sol1, sol2, sol3] 42 | assert sol1.is_equal(Solution.find_minimum(sol_set)[0]) 43 | assert sol3.is_equal(Solution.find_maximum(sol_set)[0]) 44 | 45 | 46 | -------------------------------------------------------------------------------- /test/test_objective.py: -------------------------------------------------------------------------------- 1 | from zoopt import Objective 2 | from zoopt import Parameter 3 | from zoopt import Dimension 4 | from zoopt import Solution 5 | import numpy as np 6 | 7 | 8 | def ackley(solution): 9 | """ 10 | Ackley function for continuous optimization 11 | """ 12 | x = solution.get_x() 13 | bias = 0.2 14 | ave_seq = sum([(i - bias) * (i - bias) for i in x]) / len(x) 15 | ave_cos = sum([np.cos(2.0 * np.pi * (i - bias)) for i in x]) / len(x) 16 | value = -20 * np.exp(-0.2 * np.sqrt(ave_seq)) - np.exp(ave_cos) + 20.0 + np.e 17 | return value 18 | 19 | 20 | class TestObjective(object): 21 | 22 | def test_parameter_set(self): 23 | par = Parameter(budget=1000, noise_handling=True, suppression=True) 24 | assert 1 25 | 26 | def test_eval(self): 27 | dim = 100 28 | obj = Objective(func=ackley, dim=Dimension(dim, [[-1, 1]] * dim, [True] * dim)) 29 | sol = Solution(x=[0.2] * dim) 30 | res = obj.eval(sol) 31 | assert abs(res) <= 1e-7 32 | 33 | def test_resample(self): 34 | dim = 100 35 | obj = Objective(func=ackley, dim=Dimension(dim, [[-1, 1]] * dim, [True] * dim)) 36 | sol = Solution(x=[0.2] * dim) 37 | res = obj.eval(sol) 38 | obj.resample(sol, 3) 39 | assert abs(sol.get_value()) <= 1e-7 40 | sol.set_value(0) 41 | obj.resample_func(sol, 3) 42 | assert abs(sol.get_value()) <= 1e-7 43 | 44 | def test_history_best_so_far(self): 45 | input_data = [0.5, 0.6, 0.4, 0.7, 0.3, 0.2] 46 | output_data = [0.5, 0.5, 0.4, 0.4, 0.3, 0.2] 47 | obj = Objective() 48 | obj.set_history(input_data) 49 | best_history = obj.get_history_bestsofar() 50 | assert best_history == output_data 51 | 52 | -------------------------------------------------------------------------------- /example/simple_functions/continuous_opt.py: -------------------------------------------------------------------------------- 1 | """ 2 | This file contains examples of optimizing continuous objective function. 3 | 4 | Author: 5 | Yu-Ren Liu, Xiong-Hui Chen 6 | """ 7 | 8 | 9 | from simple_function import sphere, ackley 10 | from zoopt import Dimension, Objective, Parameter, ExpOpt 11 | 12 | 13 | def minimize_ackley_continuous(): 14 | """ 15 | Continuous optimization example of minimizing the ackley function. 
 16 | 17 | :return: no return value 18 | """ 19 | dim_size = 100 # dimensions 20 | dim_regs = [[-1, 1]] * dim_size # dimension range 21 | dim_tys = [True] * dim_size # dimension type : real 22 | dim = Dimension(dim_size, dim_regs, dim_tys) # form up the dimension object 23 | 24 | objective = Objective(ackley, dim) # form up the objective function 25 | 26 | budget = 100 * dim_size # number of calls to the objective function 27 | parameter = Parameter(budget=budget) 28 | 29 | solution_list = ExpOpt.min(objective, parameter, repeat=1, plot=True, plot_file="img/ackley_continuous_figure.png") 30 | 31 | 32 | def minimize_sphere_continuous(): 33 | """ 34 | Example of minimizing the sphere function 35 | 36 | :return: no return value 37 | """ 38 | dim_size = 100 39 | # form up the objective function 40 | objective = Objective(sphere, Dimension(dim_size, [[-1, 1]] * dim_size, [True] * dim_size)) 41 | 42 | budget = 100 * dim_size 43 | # if intermediate_result is True, ZOOpt will output the intermediate best solution every intermediate_freq budgets 44 | parameter = Parameter(budget=budget, intermediate_result=True, intermediate_freq=1000) 45 | ExpOpt.min(objective, parameter, repeat=1, plot=True, plot_file="img/sphere_continuous_figure.png") 46 | 47 | 48 | if __name__ == '__main__': 49 | minimize_ackley_continuous() 50 | # minimize_sphere_continuous() 51 | -------------------------------------------------------------------------------- /example/simple_functions/opt_under_noise.py: -------------------------------------------------------------------------------- 1 | """ 2 | This file contains an example of optimizing a function under noise. 3 | 4 | Author: 5 | Xiong-Hui Chen, Yu-Ren Liu 6 | """ 7 | 8 | from simple_function import ackley, ackley_noise_creator 9 | from zoopt import Dimension, Objective, Parameter, ExpOpt, Solution 10 | 11 | 12 | def minimize_ackley_continuous_noisy(): 13 | """ 14 | SSRacos example of minimizing the Ackley function under Gaussian noise 15 | 16 | :return: no return value 17 | """ 18 | ackley_noise_func = ackley_noise_creator(0, 0.1) 19 | dim_size = 100 # dimensions 20 | dim_regs = [[-1, 1]] * dim_size # dimension range 21 | dim_tys = [True] * dim_size # dimension type : real 22 | dim = Dimension(dim_size, dim_regs, dim_tys) # form up the dimension object 23 | objective = Objective(ackley_noise_func, dim) # form up the objective function 24 | budget = 20000 # 200 * dim_size # number of calls to the objective function 25 | # suppression=True means optimize with value suppression, which is a noise-handling method 26 | # resampling=True means optimize with re-sampling, which is another commonly used noise-handling method 27 | # non_update_allowed=200 and resample_times=50 mean that if the best solution doesn't change for 200 budgets, 28 | # the best solution will be re-evaluated 50 times 29 | # balance_rate is a parameter for the exponential weight average of several evaluations of one sample.
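# (Exactly how balance_rate is applied is implemented in
# zoopt/algos/noise_handling/ssracos.py; presumably the retained value is an
# exponential moving average of the previous estimate and each new evaluation.
# That reading is an assumption; check ssracos.py for the actual formula.)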
30 | parameter = Parameter(budget=budget, noise_handling=True, suppression=True, non_update_allowed=200, resample_times=50, balance_rate=0.5) 31 | 32 | # parameter = Parameter(budget=budget, noise_handling=True, resampling=True, resample_times=10) 33 | parameter.set_positive_size(5) 34 | 35 | ExpOpt.min(objective, parameter, repeat=5, plot=False, plot_file="img/ackley_continuous_noisy_figure.png") 36 | 37 | if __name__ == '__main__': 38 | minimize_ackley_continuous_noisy() 39 | -------------------------------------------------------------------------------- /test/test_algos/test_noise_handling/test_ponss.py: -------------------------------------------------------------------------------- 1 | from zoopt import Solution, Objective, Parameter, Opt 2 | from zoopt.algos.noise_handling.ponss import PONSS 3 | from sparse_mse import SparseMSE 4 | from math import exp 5 | 6 | 7 | class TestPONSS(object): 8 | def test_theta_dominate(self): 9 | sol1 = Solution(value=[1, 2]) 10 | sol2 = Solution(value=[4, 2]) 11 | assert PONSS.theta_dominate(2.9, sol1, sol2) is True and PONSS.theta_dominate(3, sol1, sol2) is False 12 | sol3 = Solution(value=[2, 3]) 13 | sol4 = Solution(value=[3, 2]) 14 | assert PONSS.theta_dominate(1, sol1, sol2) is True 15 | 16 | def test_theta_weak_dominate(self): 17 | sol1 = Solution(value=[1, 2]) 18 | sol2 = Solution(value=[4, 2]) 19 | assert PONSS.theta_weak_dominate(3, sol1, sol2) is True 20 | assert PONSS.theta_weak_dominate(3.1, sol1, sol2) is False 21 | 22 | # def test_performance(self): 23 | # # load data file 24 | # mse = SparseMSE('example/sparse_regression/sonar.arff') 25 | # mse.set_sparsity(8) 26 | # 27 | # # setup objective 28 | # objective = Objective(func=mse.loss, dim=mse.get_dim(), constraint=mse.constraint) 29 | # # ponss_theta and ponss_b are parameters used in PONSS algorithm and should be provided by users. ponss_theta stands 30 | # # for the threshold. ponss_b limits the number of solutions in the population set. 31 | # budget = int(2 * exp(1) * (mse.get_sparsity() ** 2) * mse.get_dim().get_size()) 32 | # parameter = Parameter(algorithm='poss', noise_handling=True, ponss=True, ponss_theta=0.5, ponss_b=mse.get_k(), 33 | # budget=budget) 34 | # # perform sparse regression with constraint |w|_0 <= k 35 | # solution = Opt.min(objective, parameter) 36 | # assert solution.get_value()[0] < 0.7 37 | # 38 | # 39 | # if __name__ == '__main__': 40 | # tp = TestPONSS() 41 | # tp.test_performance() -------------------------------------------------------------------------------- /zoopt/opt.py: -------------------------------------------------------------------------------- 1 | """ 2 | This module contains the class Opt. 3 | 4 | Author: 5 | Yu-Ren Liu 6 | """ 7 | from zoopt.algos.opt_algorithms.paretoopt.pareto_optimization import ParetoOptimization 8 | 9 | from zoopt.algos.high_dimensionality_handling.sre_optimization import SequentialRandomEmbedding 10 | from zoopt.algos.opt_algorithms.racos.racos_optimization import RacosOptimization 11 | from zoopt.utils.tool_function import ToolFunction 12 | from zoopt.utils.zoo_global import gl 13 | 14 | 15 | class Opt: 16 | """ 17 | The main entrance of the optimization. 18 | """ 19 | def __init__(self): 20 | return 21 | 22 | @staticmethod 23 | def min(objective, parameter): 24 | """ 25 | Minimization function. 
26 | 27 | :param objective: an Objective object 28 | :param parameter: a Parameter object 29 | :return: the result of the optimization 30 | """ 31 | objective.parameter_set(parameter) 32 | Opt.set_global(parameter) 33 | constraint = objective.get_constraint() 34 | algorithm = parameter.get_algorithm() 35 | if algorithm: 36 | algorithm = algorithm.lower() 37 | result = None 38 | if constraint is not None and ((algorithm is None) or (algorithm == "poss")): 39 | optimizer = ParetoOptimization() 40 | elif constraint is None and ((algorithm is None) or (algorithm == "racos") or (algorithm == "sracos")) or (algorithm == "ssracos"): 41 | optimizer = RacosOptimization() 42 | else: 43 | ToolFunction.log( 44 | "opt.py: No proper algorithm found for %s" % algorithm) 45 | return result 46 | if objective.get_reducedim() is True: 47 | sre = SequentialRandomEmbedding(objective, parameter, optimizer) 48 | result = sre.opt() 49 | else: 50 | result = optimizer.opt(objective, parameter) 51 | result.print_solution() 52 | return result 53 | 54 | @staticmethod 55 | def set_global(parameter): 56 | """ 57 | Set global variables. 58 | 59 | :param parameter: a Parameter object 60 | :return: no return value 61 | """ 62 | 63 | precision = parameter.get_precision() 64 | seed = parameter.get_seed() 65 | if precision: 66 | gl.set_precision(precision) 67 | if seed: 68 | gl.set_seed(seed) 69 | -------------------------------------------------------------------------------- /example/simple_functions/function_test1.py: -------------------------------------------------------------------------------- 1 | """ 2 | This file contains examples of optimizing discrete objective function. 3 | 4 | Author: 5 | Yu-Ren Liu 6 | """ 7 | 8 | from simple_function import SetCover, sphere_discrete_order, ackley 9 | from zoopt import Dimension, Objective, Parameter, ExpOpt, Opt 10 | 11 | 12 | # continuous 13 | # dim = 100 # dimension 14 | # objective = Objective(ackley, Dimension(dim, [[-1, 1]] * dim, [True] * dim)) # setup objective 15 | # parameter = Parameter(budget=100 * dim, parallel=True, server_num=2) 16 | # # parameter = Parameter(budget=100 * dim, init_samples=[Solution([0] * 100)]) # init with init_samples 17 | # solution_list = ExpOpt.min(objective, parameter, repeat=1) 18 | # for solution in solution_list: 19 | # value = solution.get_value() 20 | # assert value < 0.2 21 | # discrete 22 | # setcover 23 | problem = SetCover() 24 | dim = problem.dim # the dim is prepared by the class 25 | objective = Objective(problem.fx, dim) # form up the objective function 26 | budget = 100 * dim.get_size() # number of calls to the objective function 27 | parameter = Parameter(budget=budget, parallel=True, server_num=2, seed=1) 28 | solution_list = ExpOpt.min(objective, parameter, repeat=10) 29 | for solution in solution_list: 30 | value = solution.get_value() 31 | assert value < 2 32 | # # sphere 33 | # dim_size = 100 # dimensions 34 | # dim_regs = [[-10, 10]] * dim_size # dimension range 35 | # dim_tys = [False] * dim_size # dimension type : integer 36 | # dim_order = [True] * dim_size 37 | # dim = Dimension(dim_size, dim_regs, dim_tys, order=dim_order) # form up the dimension object 38 | # objective = Objective(sphere_discrete_order, dim) # form up the objective function 39 | # parameter = Parameter(budget=1000, parallel=True, server_num=1) 40 | # solution_list = ExpOpt.min(objective, parameter, repeat=2, plot=True) 41 | # for solution in solution_list: 42 | # value = solution.get_value() 43 | 44 | # sphere 45 | # dim_size = 100 # dimensions 46 
46 | # dim_regs = [[-10, 10]] * dim_size # dimension range 47 | # dim_tys = [False] * dim_size # dimension type : integer 48 | # dim_order = [True] * dim_size 49 | # dim = Dimension(dim_size, dim_regs, dim_tys, order=dim_order) # form up the dimension object 50 | # objective = Objective(sphere_discrete_order, dim) # form up the objective function 51 | # parameter = Parameter(budget=10000) 52 | # sol = ExpOpt.min(objective, parameter, repeat=5, plot=True) 53 | 54 | -------------------------------------------------------------------------------- /zoopt/algos/high_dimensionality_handling/sre_optimization.py: -------------------------------------------------------------------------------- 1 | """ 2 | This module contains the class SequentialRandomEmbedding. 3 | 4 | Author: 5 | Yu-Ren Liu 6 | """ 7 | 8 | from zoopt.solution import Solution 9 | from zoopt.dimension import Dimension 10 | import numpy as np 11 | import copy 12 | import math 13 | from zoopt.utils.tool_function import ToolFunction 14 | 15 | 16 | class SequentialRandomEmbedding: 17 | """ 18 | Sequential random embedding is implemented in this class. 19 | """ 20 | def __init__(self, objective, parameter, optimizer): 21 | """ 22 | :param objective: an Objective object 23 | :param parameter: a Parameter object 24 | :param optimizer: the optimization algorithm 25 | """ 26 | self.__objective = objective 27 | self.__parameter = parameter 28 | self.__optimizer = optimizer 29 | 30 | def opt(self): 31 | """ 32 | Sequential random embedding optimization. 33 | 34 | :return: the best solution of the optimization 35 | """ 36 | 37 | dim = self.__objective.get_dim() 38 | res = [] 39 | iteration = self.__parameter.get_num_sre() 40 | new_obj = copy.deepcopy(self.__objective) 41 | new_par = copy.deepcopy(self.__parameter) 42 | new_par.set_budget(math.floor(self.__parameter.get_budget()/iteration)) 43 | new_obj.set_last_x(Solution(x=[0])) 44 | for i in range(iteration): 45 | ToolFunction.log('sequential random embedding %d' % i) 46 | new_obj.set_A(np.sqrt(self.__parameter.get_variance_A()) * 47 | np.random.randn(dim.get_size(), self.__parameter.get_low_dimension().get_size())) 48 | new_dim = Dimension.merge_dim(self.__parameter.get_withdraw_alpha(), self.__parameter.get_low_dimension()) 49 | new_obj.set_dim(new_dim) 50 | result = self.__optimizer.opt(new_obj, new_par) 51 | x = result.get_x() 52 | x_origin = x[0] * np.array(new_obj.get_last_x().get_x()) + np.dot(new_obj.get_A(), np.array(x[1:])) 53 | sol = Solution(x=x_origin, value=result.get_value()) 54 | new_obj.set_last_x(sol) 55 | res.append(sol) 56 | best_sol = res[0] 57 | for i in range(len(res)): 58 | if res[i].get_value() < best_sol.get_value(): 59 | best_sol = res[i] 60 | self.__objective.get_history().extend(new_obj.get_history()) 61 | return best_sol 62 | -------------------------------------------------------------------------------- /docs/conf.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | 3 | import sys 4 | import os 5 | import re 6 | 7 | if 'READTHEDOCS' not in os.environ: 8 | sys.path.insert(0, os.path.abspath('..')) 9 | sys.path.append(os.path.abspath('./ZOOpt/')) 10 | 11 | # from sphinx.locale import _ 12 | from sphinx_rtd_theme import __version__ 13 | 14 | 15 | project = u'ZOOpt' 16 | slug = re.sub(r'\W+', '-', project.lower()) 17 | author = u'Yu-Ren Liu, Yi-Qi Hu, Hong Qian, Xiong-Hui Chen, Yang Yu' 18 | version = u'0.3.0' 19 | release = u'0.3.0' 20 | copyright = author 21 | language = 'en' 22 | 23 | extensions = [
24 | 'sphinx.ext.intersphinx', 25 | 'sphinx.ext.autodoc', 26 | 'sphinx.ext.mathjax', 27 | 'sphinx.ext.viewcode', 28 | 'sphinx_rtd_theme', 29 | ] 30 | 31 | templates_path = ['_templates'] 32 | source_suffix = '.rst' 33 | exclude_patterns = [] 34 | # locale_dirs = ['locale/'] 35 | gettext_compact = False 36 | 37 | master_doc = 'index' 38 | suppress_warnings = ['image.nonlocal_uri'] 39 | pygments_style = 'default' 40 | 41 | # intersphinx_mapping = { 42 | # 'rtd': ('https://docs.readthedocs.io/en/latest/', None), 43 | # 'sphinx': ('http://www.sphinx-doc.org/en/stable/', None), 44 | # } 45 | 46 | html_theme = 'sphinx_rtd_theme' 47 | html_theme_options = { 48 | 'display_version': True 49 | } 50 | # html_theme_path = ["../.."] 51 | # html_logo = "demo/static/logo-wordmark-light.svg" 52 | # html_show_sourcelink = True 53 | 54 | htmlhelp_basename = slug 55 | 56 | # latex_documents = [ 57 | # ('index', '{0}.tex'.format(slug), project, author, 'manual'), 58 | # ] 59 | 60 | man_pages = [ 61 | ('index', slug, project, [author], 1) 62 | ] 63 | 64 | texinfo_documents = [ 65 | ('index', slug, project, author, slug, project, 'Miscellaneous'), 66 | ] 67 | 68 | 69 | # Extensions to theme docs 70 | def setup(app): 71 | from sphinx.domains.python import PyField 72 | from sphinx.util.docfields import Field 73 | 74 | app.add_object_type( 75 | 'confval', 76 | 'confval', 77 | objname='configuration value', 78 | indextemplate='pair: %s; configuration value', 79 | doc_field_types=[ 80 | PyField( 81 | 'type', 82 | label=('Type'), 83 | has_arg=False, 84 | names=('type',), 85 | bodyrolename='class' 86 | ), 87 | Field( 88 | 'default', 89 | label=('Default'), 90 | has_arg=False, 91 | names=('default',), 92 | ), 93 | ] 94 | ) 95 | -------------------------------------------------------------------------------- /example/simple_functions/opt_with_stopping_criterion.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | from zoopt import Objective, Parameter, ExpOpt, Dimension, Opt 3 | from simple_function import ackley, sphere 4 | 5 | 6 | class StoppingCriterion: 7 | """ 8 | This class defines a stopping criterion, which is passed to the class Parameter and should implement the 9 | member function check(self, optcontent). 10 | """ 11 | def __init__(self): 12 | self.__best_result = 0 13 | self.__count = 0 14 | self.__total_count = 0 15 | self.__count_limit = 100 16 | 17 | def check(self, optcontent): 18 | """ 19 | This function is invoked at each iteration of the optimization. 20 | The optimization stops early once this function returns True; otherwise, it is unaffected. In this example, 21 | the optimization is stopped if the best result remains unchanged for 100 iterations. 22 | :param optcontent: an instance of the class RacosCommon. Several functions can be invoked to get the contexts of 23 | the optimization, which are listed as follows, 24 | optcontent.get_best_solution(): get the current optimal solution 25 | optcontent.get_data(): get all the solutions contained in the current solution pool 26 | optcontent.get_positive_data(): get positive solutions contained in the current solution pool 27 | optcontent.get_negative_data(): get negative solutions contained in the current solution pool 28 | 29 | :return: True to stop the optimization early, False otherwise.
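Example (an illustrative sketch mirroring the __main__ block below; StoppingCriterion is the class defined in this file, not a ZOOpt built-in):

    objective = Objective(sphere, Dimension(10, [[-1, 1]] * 10, [True] * 10))
    parameter = Parameter(budget=1000, stopping_criterion=StoppingCriterion())
    solution = Opt.min(objective, parameter)  # returns early once check(...) yields True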
30 | """ 31 | self.__total_count += 1 32 | content_best_value = optcontent.get_best_solution().get_value() 33 | if content_best_value == self.__best_result: 34 | self.__count += 1 35 | else: 36 | self.__best_result = content_best_value 37 | self.__count = 0 38 | if self.__count >= self.__count_limit: 39 | print("stopping criterion holds, total_count: %d" % self.__total_count) 40 | return True 41 | else: 42 | return False 43 | 44 | 45 | if __name__ == '__main__': 46 | dim_size = 100 47 | # form up the objective function 48 | objective = Objective(sphere, Dimension(dim_size, [[-1, 1]] * dim_size, [True] * dim_size)) 49 | 50 | budget = 100 * dim_size 51 | # if intermediate_result is True, ZOOpt will output intermediate best solution every intermediate_freq budget 52 | parameter = Parameter(budget=budget, intermediate_result=True, 53 | intermediate_freq=10, stopping_criterion=StoppingCriterion()) 54 | sol = Opt.min(objective, parameter) 55 | sol.print_solution() -------------------------------------------------------------------------------- /docs/ZOOpt/Derivative-Free Optimization.rst: -------------------------------------------------------------------------------- 1 | ----------------------------- 2 | Derivative-Free Optimization 3 | ----------------------------- 4 | 5 | `Optimization `__ 6 | is to approximate the optimal solution **x** \* of a function *f*. 7 | 8 | I assume that readers are aware of 9 | `gradient `__ based 10 | optimization: to find a minimum valued solution of a function, follows 11 | the negative gradient direction, such as the `gradient 12 | descent `__ method. To 13 | apply gradient-based optimization, the function has several 14 | restrictions. It needs to be (almost) differentiable in order to 15 | calculate the gradient. To guarantee that the the minimum point of the 16 | function can be found, the function needs to be (closely) 17 | `convex `__ . 18 | 19 | Let's rethink about why gradients can be followed to do the 20 | optimization. For a convex function, the negative gradient direction 21 | points to the global optimum. In other words, the gradient at a solution 22 | can tell where better solutions are. 23 | 24 | Derivative-free optimization does not rely on the gradient. Note that 25 | the only principle for optimization is, again, collecting the 26 | information about where better solutions are. Derivative-free 27 | optimization methods use sampling to understand the landscape of the 28 | function, and find regions that contain better solutions. 29 | 30 | A typical structure of a derivative-free optimization method is outlined 31 | as follows: 32 | 33 | | 1. starting from the model *D* which is the uniform distribution over 34 | the search space 35 | | 2. samples a set of solutions { *x* :sub:`1`, *x* :sub:`2` ,..., *x* 36 | :sub:`m` } from *D* 37 | | 3. for each solution *x* :sub:`i`, evaluate its function value *f* ( 38 | *x* :sub:`i` ) 39 | | 4. record in the history set *H* the solutions with their function 40 | values 41 | | 5. learn from *H* a new model *D* 42 | | 6. repeat from step 2 until the stop criterion is met 43 | | 7. return the best solution in *H* 44 | 45 | Different derivative-free optimization methods many differ in the way of 46 | learning the model (step 5) and sampling (step 2). 
46 | For example, in 47 | `genetic algorithms `__ 48 | , the (implicit) model is a set of good solutions, and the sampling is 49 | by some variation operators on these solutions; in `Bayesian 50 | optimization `__ 51 | which appears very different from genetic algorithms, the model is 52 | explicitly a regression model (commonly the Gaussian process), the 53 | sampling is by solving an acquisition function; in the 54 | `RACOS `__ 55 | algorithm, which has been implemented in ZOOpt, the model is a hypercube 56 | and the sampling is from the uniform distribution in the hypercube, so 57 | that RACOS is simple enough to have theoretical guarantees and high 58 | practical efficiency. 59 | -------------------------------------------------------------------------------- /zoopt/exp_opt.py: -------------------------------------------------------------------------------- 1 | """ 2 | This module contains the class ExpOpt, which provides an experiment interface for users. 3 | 4 | Author: 5 | Yu-Ren Liu 6 | """ 7 | from zoopt.utils.zoo_global import gl 8 | from zoopt.opt import Opt 9 | import matplotlib.pyplot as plt 10 | from zoopt.utils.tool_function import ToolFunction 11 | import numpy as np 12 | 13 | 14 | class ExpOpt: 15 | """ 16 | The experiment entrance of the optimization. 17 | """ 18 | 19 | def __init__(self): 20 | """ 21 | Initialization. 22 | """ 23 | return 24 | 25 | @staticmethod 26 | def min(objective, parameter, repeat=1, best_n=None, plot=False, plot_file=None): 27 | """ 28 | Minimization function. 29 | 30 | :param objective: an Objective object 31 | :param parameter: a Parameter object 32 | :param repeat: integer, repeat times of the optimization 33 | :param best_n: 34 | integer, ExpOpt.min will print the average value and standard deviation of the best_n optimal results among the 35 | returned solution list. 36 | :param plot: whether to plot the regret curve during the optimization 37 | :param plot_file: the file name to output the figure 38 | (note: the random seed is configured through the Parameter object rather than passed here) 39 | :return: a list of the best solutions found in each repeat 40 | """ 41 | objective.parameter_set(parameter) 42 | ret = [] 43 | if best_n is None: 44 | best_n = repeat 45 | result = [] 46 | for i in range(repeat): 47 | # perform the optimization 48 | solution = Opt.min(objective, parameter) 49 | ret.append(solution) 50 | ToolFunction.log('The best solution is:') 51 | solution.print_solution() 52 | # store the optimization result 53 | result.append(solution.get_value()) 54 | 55 | # for plotting the optimization history 56 | history = np.array(objective.get_history_bestsofar()) # best-so-far trajectory of this repeat 57 | if plot is True: 58 | plt.plot(history) 59 | objective.clean_history() 60 | if plot is True: 61 | if plot_file is not None: 62 | plt.savefig(plot_file) 63 | else: 64 | plt.show() 65 | ExpOpt.result_analysis(result, best_n) 66 | return ret 67 | 68 | @staticmethod 69 | def result_analysis(results, top): 70 | """ 71 | Get the mean value and standard deviation of the best 'top' results.
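For example, results = [3, 1, 2] with top = 2 sorts to [1, 2, 3] and keeps [1, 2], so the logged mean is 1.5 and the standard deviation 0.5.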
72 | 73 | :param results: a list of results 74 | :param top: the number of best results used to calculate the mean value and standard deviation 75 | :return: mean value and standard deviation of the best 'top' results 76 | """ 77 | limit = top if top < len(results) else len(results) 78 | results.sort() 79 | top_k = results[0:limit] 80 | mean_r = np.mean(top_k, axis=0, dtype=np.float64) 81 | std_r = np.std(top_k, axis=0, dtype=np.float64) 82 | if limit <= 1: 83 | ToolFunction.log('Best %d result: %s +- %s' % (limit, mean_r, std_r)) 84 | else: 85 | ToolFunction.log('Best %d results: %s +- %s' % (limit, mean_r, std_r)) 86 | return mean_r, std_r 87 | -------------------------------------------------------------------------------- /zoopt/algos/opt_algorithms/racos/racos_optimization.py: -------------------------------------------------------------------------------- 1 | """ 2 | This module contains the class RacosOptimization, which will choose the optimization algorithm and get the best solution. 3 | 4 | Author: 5 | Yu-Ren Liu 6 | """ 7 | 8 | from zoopt.algos.noise_handling.ssracos import SSRacos 9 | from zoopt.algos.opt_algorithms.racos.racos import Racos 10 | from zoopt.algos.opt_algorithms.racos.sracos import SRacos 11 | from zoopt.algos.opt_algorithms.racos.asracos import ASRacos 12 | 13 | 14 | class RacosOptimization: 15 | """ 16 | This class will choose the optimization algorithm and get the best solution. 17 | """ 18 | 19 | def __init__(self): 20 | """ 21 | Initialization. 22 | """ 23 | self.__best_solution = None 24 | self.__algorithm = None 25 | 26 | def clear(self): 27 | """ 28 | Clear the instance. 29 | 30 | :return: no return value 31 | """ 32 | self.__best_solution = None 33 | self.__algorithm = None 34 | 35 | def opt(self, objective, parameter, strategy='WR'): 36 | """ 37 | This function will choose the optimization algorithm and use it to optimize. 38 | 39 | :param objective: an Objective object 40 | :param parameter: a Parameter object 41 | :param strategy: the replacement strategy, used by SRacos and SSRacos 42 | :return: the best solution 43 | """ 44 | 45 | self.clear() 46 | ub = parameter.get_uncertain_bits() 47 | if ub is None: 48 | ub = self.choose_ub(objective) 49 | if parameter.get_parallel() is True: 50 | self.__best_solution = ASRacos().opt(objective, parameter, strategy, ub) 51 | else: 52 | if parameter.get_sequential(): 53 | if parameter.get_noise_handling() is True and parameter.get_suppression() is True: 54 | self.__algorithm = SSRacos() 55 | else: 56 | self.__algorithm = SRacos() 57 | self.__best_solution = self.__algorithm.opt( 58 | objective, parameter, strategy, ub) 59 | else: 60 | self.__algorithm = Racos() 61 | self.__best_solution = self.__algorithm.opt( 62 | objective, parameter, ub) 63 | return self.__best_solution 64 | 65 | @staticmethod 66 | def choose_ub(objective): 67 | """ 68 | Choose uncertain_bits according to the dimension size automatically.
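For example, a 100-dimensional continuous problem gets ub = 1, while a 100-dimensional discrete one gets ub = 3 (see the thresholds below).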
69 | 70 | :param objective: an Objective object 71 | :return: uncertain bits 72 | """ 73 | dim = objective.get_dim() 74 | dim_size = dim.get_size() 75 | is_discrete = dim.is_discrete() 76 | if is_discrete is False: 77 | if dim_size <= 100: 78 | ub = 1 79 | elif dim_size <= 1000: 80 | ub = 2 81 | else: 82 | ub = 3 83 | else: 84 | if dim_size <= 10: 85 | ub = 1 86 | elif dim_size <= 50: 87 | ub = 2 88 | elif dim_size <= 100: 89 | ub = 3 90 | elif dim_size <= 1000: 91 | ub = 4 92 | else: 93 | ub = 5 94 | return ub 95 | 96 | def get_best_sol(self): 97 | return self.__best_solution 98 | -------------------------------------------------------------------------------- /docs/Examples/Optimize-a-Continuous-Function.rst: -------------------------------------------------------------------------------- 1 | ------------------------------------- 2 | Optimize a Continuous Function 3 | ------------------------------------- 4 | 5 | 6 | In mathematical optimization, the `Ackley 7 | function `__, which has 8 | many local minima, is a non-convex function used as a performance test 9 | problem for optimization algorithms. In two dimensions, it looks like this (from 10 | Wikipedia) 11 | 12 | .. image:: https://upload.wikimedia.org/wikipedia/commons/thumb/9/98/Ackley%27s_function.pdf/page1-400px-Ackley%27s_function.pdf.jpg 13 | 14 | 15 | We define the Ackley function in simple\_function.py for minimization 16 | 17 | .. code:: python 18 | 19 | import numpy as np 20 | 21 | def ackley(solution): 22 | """ 23 | Ackley function for continuous optimization 24 | """ 25 | x = solution.get_x() 26 | bias = 0.2 27 | ave_seq = sum([(i - bias) * (i - bias) for i in x]) / len(x) 28 | ave_cos = sum([np.cos(2.0 * np.pi * (i - bias)) for i in x]) / len(x) 29 | value = -20 * np.exp(-0.2 * np.sqrt(ave_seq)) - np.exp(ave_cos) + 20.0 + np.e 30 | return value 31 | 32 | Then, define corresponding *objective* and *parameter*. 33 | 34 | .. code:: python 35 | 36 | dim_size = 100 # dimensions 37 | dim_regs = [[-1, 1]] * dim_size # dimension range 38 | dim_tys = [True] * dim_size # dimension type : real 39 | dim = Dimension(dim_size, dim_regs, dim_tys) # form up the dimension object 40 | # dim = Dimension2([(ValueType.CONTINUOUS, [-1, 1], 1e-6)]*dim_size) # another way to form up the dimension object 41 | objective = Objective(ackley, dim) # form up the objective function 42 | 43 | .. code:: python 44 | 45 | budget = 100 * dim_size # number of calls to the objective function 46 | parameter = Parameter(budget=budget) 47 | 48 | Finally, optimize this function. 49 | 50 | .. code:: python 51 | 52 | solution_list = ExpOpt.min(objective, parameter, repeat=1, plot=True) 53 | 54 | The whole process is listed below. 55 | 56 | .. code:: python 57 | 58 | from simple_function import ackley 59 | from zoopt import Dimension, ValueType, Dimension2, Objective, Parameter, ExpOpt 60 | 61 | 62 | def minimize_ackley_continuous(): 63 | """ 64 | Continuous optimization example of minimizing the ackley function.
65 | 66 | :return: no return value 67 | """ 68 | dim_size = 100 # dimensions 69 | dim_regs = [[-1, 1]] * dim_size # dimension range 70 | dim_tys = [True] * dim_size # dimension type : real 71 | dim = Dimension(dim_size, dim_regs, dim_tys) # form up the dimension object 72 | # dim = Dimension2([(ValueType.CONTINUOUS, [-1, 1], 1e-6)]*dim_size) # another way to form up the dimension object 73 | objective = Objective(ackley, dim) # form up the objective function 74 | 75 | budget = 100 * dim_size # number of calls to the objective function 76 | parameter = Parameter(budget=budget) 77 | 78 | solution_list = ExpOpt.min(objective, parameter, repeat=1, plot=True) 79 | 80 | if __name__ == '__main__': 81 | minimize_ackley_continuous() 82 | 83 | After a few seconds, the optimization is done. The visualized optimization 84 | progress looks like 85 | 86 | .. image:: https://github.com/eyounx/ZOOpt/blob/dev/img/ackley_continuous_figure.png?raw=true 87 | :width: 500 88 | 89 | More concrete examples are available in the 90 | ``example/simple_functions/continuous_opt.py`` file. 91 | -------------------------------------------------------------------------------- /docs/Examples/Optimize-a-Function-with-Mixed-Search-Space.rst: -------------------------------------------------------------------------------- 1 | --------------------------------------------------------------------------------- 2 | Optimize a Function with Mixed Search Space 3 | --------------------------------------------------------------------------------- 4 | 5 | In some cases, the search space of the problem consists of both 6 | a continuous subspace and a discrete subspace. ZOOpt can solve this kind of 7 | problem easily. 8 | 9 | We define the Sphere function in simple\_function.py for minimization. 10 | 11 | .. code:: python 12 | 13 | def sphere_mixed(solution): 14 | """ 15 | Sphere function for mixed optimization 16 | """ 17 | x = solution.get_x() 18 | value = sum([i*i for i in x]) 19 | return value 20 | 21 | Then, define corresponding *objective* and *parameter*. 22 | 23 | .. code:: python 24 | 25 | dim_size = 100 26 | dim_regs = [] 27 | dim_tys = [] 28 | # In this example, the search space is discrete if this dimension index is odd; otherwise, the search space is continuous. 29 | for i in range(dim_size): 30 | if i % 2 == 0: 31 | dim_regs.append([0, 1]) 32 | dim_tys.append(True) 33 | else: 34 | dim_regs.append([0, 100]) 35 | dim_tys.append(False) 36 | dim = Dimension(dim_size, dim_regs, dim_tys) 37 | # dim = Dimension2([(ValueType.CONTINUOUS, [0, 1], 1e-6), (ValueType.DISCRETE, [0, 100], False)] * (dim_size//2)) 38 | objective = Objective(sphere_mixed, dim) # form up the objective function 39 | 40 | .. code:: python 41 | 42 | budget = 100 * dim_size # number of calls to the objective function 43 | parameter = Parameter(budget=budget) 44 | 45 | Finally, use ZOOpt to optimize. 46 | 47 | .. code:: python 48 | 49 | solution_list = ExpOpt.min(objective, parameter, repeat=1, plot=True) 50 | 51 | The whole process is listed below. 52 | 53 | .. code:: python 54 | 55 | from simple_function import sphere_mixed 56 | from zoopt import Dimension, ValueType, Dimension2, Objective, Parameter, ExpOpt 57 | 58 | 59 | # mixed optimization 60 | def minimize_sphere_mixed(): 61 | """ 62 | Mixed optimization example of minimizing the sphere function, which has a mixed search space.
63 | 64 | :return: no return value 65 | """ 66 | 67 | # setup optimization problem 68 | dim_size = 100 69 | dim_regs = [] 70 | dim_tys = [] 71 | # In this example, the search space is discrete if this dimension index is odd; otherwise, the search space 72 | # is continuous. 73 | for i in range(dim_size): 74 | if i % 2 == 0: 75 | dim_regs.append([0, 1]) 76 | dim_tys.append(True) 77 | else: 78 | dim_regs.append([0, 100]) 79 | dim_tys.append(False) 80 | dim = Dimension(dim_size, dim_regs, dim_tys) 81 | # dim = Dimension2([(ValueType.CONTINUOUS, [0, 1], 1e-6), (ValueType.DISCRETE, [0, 100], False)] * (dim_size//2)) 82 | objective = Objective(sphere_mixed, dim) # form up the objective function 83 | budget = 100 * dim_size # the number of calls to the objective function 84 | parameter = Parameter(budget=budget) 85 | 86 | solution_list = ExpOpt.min(objective, parameter, repeat=1, plot=True) 87 | 88 | if __name__ == '__main__': 89 | minimize_sphere_mixed() 90 | 91 | After a few seconds, the optimization is done. The visualized optimization 92 | progress looks like 93 | 94 | .. image:: https://github.com/eyounx/ZOOpt/blob/dev/img/sphere_mixed_figure.png?raw=true 95 | :width: 500 96 | 97 | More concrete examples are available in the 98 | ``example/simple_functions/mixed_opt.py`` file. 99 | -------------------------------------------------------------------------------- /docs/Examples/Optimize-a-High-dimensional-Function.rst: -------------------------------------------------------------------------------- 1 | ------------------------------------------- 2 | How to Optimize a High-dimensional Function 3 | ------------------------------------------- 4 | 5 | Derivative-free optimization methods are suitable for sophisticated 6 | optimization problems, but are hard to scale to high dimensionality 7 | (e.g., larger than 1,000). 8 | 9 | ZOOpt contains a high-dimensionality handling algorithm called 10 | sequential random embedding (SRE). SRE runs the optimization algorithms 11 | in the low-dimensional space, where the function values of solutions are 12 | evaluated via the embedding into the original high-dimensional space 13 | sequentially. SRE is effective for the class of functions in which all 14 | dimensions may affect the function value but many of them only have a 15 | small bounded effect, and can scale both RACOS and SRACOS (the main 16 | optimization algorithm in ZOOpt) to 100,000-dimensional problems. 17 | 18 | On this page, we will show how to use ZOOpt to optimize a 19 | high-dimensional function. 20 | 21 | We define a variant of the Sphere function in simple\_function.py for 22 | minimization. 23 | 24 | .. code:: python 25 | 26 | def sphere_sre(solution): 27 | """ 28 | Variant of the sphere function. Dimensions except the first 10 have a limited impact on the function value. 29 | """ 30 | a = 0 31 | bias = 0.2 32 | x = solution.get_x() 33 | x1 = x[:10] 34 | x2 = x[10:] 35 | value1 = sum([(i-bias)*(i-bias) for i in x1]) 36 | value2 = 1/len(x) * sum([(i-bias)*(i-bias) for i in x2]) 37 | return value1 + value2 38 | 39 | Then, define corresponding *objective* and *parameter*. 40 | 41 | .. code:: python 42 | 43 | # sre should be set to True 44 | objective = Objective(sphere_sre, dim, sre=True) 45 | 46 | .. 
code:: python 47 | 48 | # num_sre, low_dimension, withdraw_alpha should be set for sequential random embedding 49 | # num_sre means the number of sequential random embeddings 50 | # low_dimension specifies the low-dimensional solution space 51 | parameter = Parameter(budget=budget, high_dimensionality_handling=True, reducedim=True, num_sre=5, low_dimension=Dimension(10, [[-1, 1]] * 10, [True] * 10)) 52 | 53 | Finally, use ZOOpt to optimize. 54 | 55 | .. code:: python 56 | 57 | solution_list = ExpOpt.min(objective, parameter, repeat=1, plot=True) 58 | 59 | The whole process is listed below. 60 | 61 | .. code:: python 62 | 63 | from simple_function import sphere_sre 64 | from zoopt import Dimension, ValueType, Dimension2, Objective, Parameter, ExpOpt 65 | 66 | 67 | def sphere_continuous_sre(): 68 | """ 69 | Example of minimizing a high-dimensional sphere function with sequential random embedding. 70 | 71 | :return: no return value 72 | """ 73 | 74 | dim_size = 10000 # dimensions 75 | dim_regs = [[-1, 1]] * dim_size # dimension range 76 | dim_tys = [True] * dim_size # dimension type : real 77 | dim = Dimension(dim_size, dim_regs, dim_tys) # form up the dimension object 78 | # dim = Dimension2([(ValueType.CONTINUOUS, [-1, 1], 1e-6)]*dim_size) 79 | objective = Objective(sphere_sre, dim) # form up the objective function 80 | 81 | # setup algorithm parameters 82 | budget = 2000 # number of calls to the objective function 83 | parameter = Parameter(budget=budget, high_dimensionality_handling=True, reducedim=True, num_sre=5, low_dimension=Dimension(10, [[-1, 1]] * 10, [True] * 10)) 84 | 85 | solution_list = ExpOpt.min(objective, parameter, repeat=1, plot=True) 86 | 87 | if __name__ == "__main__": 88 | sphere_continuous_sre() 89 | 90 | After a few seconds, the optimization is done. The visualized optimization 91 | progress looks like 92 | 93 | .. image:: https://github.com/eyounx/ZOOpt/blob/dev/img/sphere_continuous_sre.png?raw=true 94 | :width: 500 95 | 96 | More concrete examples are available in the 97 | ``example/sequential_random_embedding/continuous_sre_opt.py`` file. 98 | -------------------------------------------------------------------------------- /test/test_seed.py: -------------------------------------------------------------------------------- 1 | from zoopt.algos.opt_algorithms.racos.racos_common import RacosCommon 2 | from zoopt.algos.opt_algorithms.racos.sracos import SRacos 3 | from zoopt import Solution, Objective, Dimension, Parameter, Opt, ExpOpt 4 | import numpy as np 5 | 6 | 7 | def ackley(solution): 8 | """ 9 | Ackley function for continuous optimization 10 | """ 11 | x = solution.get_x() 12 | bias = 0.2 13 | ave_seq = sum([(i - bias) * (i - bias) for i in x]) / len(x) 14 | ave_cos = sum([np.cos(2.0 * np.pi * (i - bias)) for i in x]) / len(x) 15 | value = -20 * np.exp(-0.2 * np.sqrt(ave_seq)) - np.exp(ave_cos) + 20.0 + np.e 16 | return value 17 | 18 | 19 | def ackley_noise_creator(mu, sigma): 20 | """ 21 | Ackley function under noise 22 | """ 23 | return lambda solution: ackley(solution) + np.random.normal(mu, sigma, 1) 24 | 25 | def sphere_sre(solution): 26 | """ 27 | Variant of the sphere function. Dimensions except the first 10 have a limited impact on the function value.
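In symbols, with bias b = 0.2 and n = len(x): f(x) = sum_{i <= 10} (x_i - b)^2 + (1/n) * sum_{i > 10} (x_i - b)^2.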
28 | """ 29 | a = 0 30 | bias = 0.2 31 | x = solution.get_x() 32 | x1 = x[:10] 33 | x2 = x[10:] 34 | value1 = sum([(i-bias)*(i-bias) for i in x1]) 35 | value2 = 1/len(x) * sum([(i-bias)*(i-bias) for i in x2]) 36 | return value1 + value2 37 | 38 | class TestSeed(object): 39 | # def test_racos(self): 40 | # seed = 1 41 | # dim = 100 # dimension 42 | # objective = Objective(ackley, Dimension(dim, [[-1, 1]] * dim, [True] * dim)) # setup objective 43 | # parameter = Parameter(budget=100 * dim, seed=seed, sequential=False) # init with init_samples 44 | # sol1 = Opt.min(objective, parameter) 45 | # sol2 = Opt.min(objective, parameter) 46 | # assert sol1.get_value() == sol2.get_value() 47 | # 48 | # 49 | # def test_sracos(self): 50 | # seed = 1 51 | # dim = 100 # dimension 52 | # objective = Objective(ackley, Dimension(dim, [[-1, 1]] * dim, [True] * dim)) # setup objective 53 | # parameter = Parameter(budget=100 * dim, seed=seed, sequential=True) # init with init_samples 54 | # sol1 = Opt.min(objective, parameter) 55 | # sol2 = Opt.min(objective, parameter) 56 | # assert sol1.get_value() == sol2.get_value() 57 | # 58 | def test_noisy(self): 59 | ackley_noise_func = ackley_noise_creator(0, 0.1) 60 | dim_size = 100 # dimensions 61 | dim_regs = [[-1, 1]] * dim_size # dimension range 62 | dim_tys = [True] * dim_size # dimension type : real 63 | dim = Dimension(dim_size, dim_regs, dim_tys) # form up the dimension object 64 | objective = Objective(ackley_noise_func, dim) # form up the objective function 65 | budget = 20000 # 20*dim_size # number of calls to the objective function 66 | parameter = Parameter(budget=budget, noise_handling=True, suppression=True, non_update_allowed=200, 67 | resample_times=50, balance_rate=0.5, seed=1) 68 | 69 | # parameter = Parameter(budget=budget, noise_handling=True, resampling=True, resample_times=10) 70 | parameter.set_positive_size(5) 71 | sol1 = Opt.min(objective, parameter) 72 | sol2 = Opt.min(objective, parameter) 73 | assert sol1.get_value() == sol2.get_value() 74 | 75 | def test_high_dim(self): 76 | dim_size = 10000 # dimensions 77 | dim_regs = [[-1, 1]] * dim_size # dimension range 78 | dim_tys = [True] * dim_size # dimension type : real 79 | dim = Dimension(dim_size, dim_regs, dim_tys) # form up the dimension object 80 | objective = Objective(sphere_sre, dim) # form up the objective function 81 | 82 | # setup algorithm parameters 83 | budget = 2000 # number of calls to the objective function 84 | parameter = Parameter(budget=budget, high_dim_handling=True, reducedim=True, num_sre=5, 85 | low_dimension=Dimension(10, [[-1, 1]] * 10, [True] * 10), seed=1) 86 | sol1 = Opt.min(objective, parameter) 87 | sol2 = Opt.min(objective, parameter) 88 | assert sol1.get_value() == sol2.get_value() 89 | 90 | 91 | -------------------------------------------------------------------------------- /example/linear_classifier_using_ramploss/ramploss.py: -------------------------------------------------------------------------------- 1 | """ 2 | this example optimizes a linear classifier using the non-convex ramploss instead of any convex loss function. 3 | 4 | this example requires the liac-arff package to read ARFF file 5 | 6 | Author: 7 | Yu-Ren Liu, Yang Yu 8 | """ 9 | 10 | import arff, codecs 11 | from zoopt import Dimension, Objective, Parameter, ExpOpt 12 | 13 | 14 | class RampLoss: 15 | """ 16 | Define ramploss learning loss function. 
17 | """ 18 | __data = None 19 | __test = None 20 | __ramploss_c = 10 21 | __ramploss_s = -1 22 | __dim_size = 0 23 | 24 | def __init__(self, arfffile): 25 | self.read_data(arfffile) 26 | 27 | def read_data(self, filename): 28 | """ 29 | Read data from file. 30 | 31 | :param filename: Name of the file to read 32 | :return: no return 33 | """ 34 | file_ = codecs.open(filename, 'rb', 'utf-8') 35 | decoder = arff.ArffDecoder() 36 | dataset = decoder.decode(file_.readlines(), encode_nominal=True) 37 | file_.close() 38 | self.__data = dataset['data'] 39 | if self.__data is not None and self.__data[0] is not None: 40 | self.__dim_size = len(self.__data[0]) 41 | 42 | def get_dim_size(self): 43 | return self.__dim_size 44 | 45 | def calc_product(self, weight, j): 46 | """ 47 | Calculate product between the weights and the instance. 48 | 49 | :param weight: weight vector 50 | :param j: the index of the instance 51 | :return: product value 52 | """ 53 | temp_sum = 0 54 | for i in range(len(weight) - 1): 55 | temp_sum += weight[i] * self.__data[j][i] 56 | temp_sum += weight[len(weight) - 1] 57 | return temp_sum 58 | 59 | def calc_h(self, ylfx, st): 60 | """ 61 | Calculate hinge loss. 62 | """ 63 | temp = st - ylfx 64 | if temp > 0: 65 | return temp 66 | else: 67 | return 0 68 | 69 | def calc_regularization(self, weight): 70 | """ 71 | Calculate regularization 72 | """ 73 | temp_sum = 0 74 | for i in range(len(weight)): 75 | temp_sum += weight[i] * weight[i] 76 | return temp_sum 77 | 78 | def trans_label(self, i): 79 | """ 80 | Transform label from 0/1 to -1/+1 81 | """ 82 | if self.__data[i][self.__dim_size - 1] == 1: 83 | return 1 84 | else: 85 | return -1 86 | 87 | # 88 | def eval(self, solution): 89 | """ 90 | Objectve function to calculate the ramploss. 91 | """ 92 | weight = solution.get_x() 93 | H1 = 0 94 | Hs = 0 95 | for i in range(len(self.__data)): 96 | fx = self.calc_product(weight, i) 97 | H1 += self.calc_h(self.trans_label(i) * fx, 1) 98 | Hs += self.calc_h(self.trans_label(i) * fx, self.__ramploss_s) 99 | regularization = self.calc_regularization(weight) 100 | value = regularization / 2 + self.__ramploss_c * H1 - self.__ramploss_c * Hs 101 | return value 102 | 103 | # 104 | def training_error(self, best): 105 | """ 106 | Training error. 107 | """ 108 | wrong = 0.0 109 | for i in range(len(self.__data)): 110 | fx = self.calc_product(best, i) 111 | if fx * self.trans_label(i) <= 0: 112 | wrong += 1 113 | rate = wrong / len(self.__data) 114 | return rate 115 | 116 | def dim(self): 117 | """ 118 | Construct dimension of this problem. 119 | """ 120 | return Dimension(self.__dim_size, [[-10, 10]] * self.__dim_size, [True] * self.__dim_size) 121 | 122 | 123 | if __name__=='__main__': 124 | # read data 125 | loss = RampLoss('ionosphere.arff') 126 | objective = Objective(loss.eval, loss.dim()) 127 | budget = 100 * loss.get_dim_size() 128 | parameter = Parameter(budget=budget) 129 | solution_list = ExpOpt.min(objective, parameter, repeat=1, plot=True, plot_file="img/ramploss.png") -------------------------------------------------------------------------------- /test/test_algos/test_noise_handling/sparse_mse.py: -------------------------------------------------------------------------------- 1 | """ 2 | This module contains the implementation of Sparse MSE problem. 
3 | 4 | Author: 5 | Chao Feng 6 | """ 7 | 8 | import numpy as np 9 | from zoopt import Opt, Parameter, Objective, Dimension, ExpOpt 10 | import codecs 11 | import arff 12 | 13 | 14 | class SparseMSE: 15 | """ 16 | This class implements the Sparse MSE problem. 17 | """ 18 | def __init__(self, filename): 19 | """ 20 | Initialization. 21 | :param filename: filename 22 | """ 23 | data = self.read_data(filename) 24 | self._size = data.shape[1] - 1 25 | self._X = data[:, 0: self._size] 26 | self._Y = data[:, self._size] 27 | self._C = self._X.T * self._X 28 | self._b = self._X.T * self._Y 29 | self._k = 0 30 | self._best_solution = None 31 | 32 | def position(self, s): 33 | """ 34 | This function is to find the index of s where element is 1 35 | return a list of positions 36 | :param s: 37 | :return: a list of index of s where element is 1 38 | """ 39 | n = len(s) 40 | result = [] 41 | for i in range(n): 42 | if s[i] == 1: 43 | result.append(i) 44 | return result 45 | 46 | def constraint(self, solution): 47 | """ 48 | If the constraints are satisfied, the constraint function will return a zero or positive value. Otherwise a 49 | negative value will be returned. 50 | 51 | :param solution: a Solution object 52 | :return: a zero or positive value which means constraints are satisfied, otherwise a negative value 53 | """ 54 | x = solution.get_x() 55 | return self._k-sum(x) 56 | 57 | def set_sparsity(self, k): 58 | self._k = k 59 | 60 | def get_sparsity(self): 61 | return self._k 62 | 63 | def loss(self, solution): 64 | """ 65 | loss function for sparse regression 66 | :param solution: a Solution object 67 | """ 68 | x = solution.get_x() 69 | if sum(x) == 0.0 or sum(x) >= 2.0 * self._k: 70 | return float('inf') 71 | pos = self.position(x) 72 | alpha = (self._C[pos, :])[:, pos] 73 | alpha = alpha.I * self._b[pos, :] 74 | sub = self._Y - self._X[:, pos]*alpha 75 | mse = sub.T*sub / self._Y.shape[0] 76 | return mse[0, 0] 77 | 78 | def get_dim(self): 79 | """ 80 | Construct a Dimension object of this problem. 81 | :return: a dimension object of sparse mse. 82 | """ 83 | dim_regs = [[0, 1]] * self._size 84 | dim_tys = [False] * self._size 85 | return Dimension(self._size, dim_regs, dim_tys) 86 | 87 | def read_data(self, filename): 88 | """ 89 | Read data from file. 
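The file is expected to be in ARFF format; it is decoded with the liac-arff package and then normalized column-wise.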
90 | :param filename: filename 91 | :return: normalized data 92 | """ 93 | file_ = codecs.open(filename, 'rb', 'utf-8') 94 | decoder = arff.ArffDecoder() 95 | dataset = decoder.decode(file_.readlines(), encode_nominal=True) 96 | file_.close() 97 | data = dataset['data'] 98 | return self.normalize_data(np.mat(data)) 99 | 100 | @staticmethod 101 | def normalize_data(data_matrix): 102 | """ 103 | Normalize data to have mean 0 and variance 1 for each column 104 | 105 | :param data_matrix: matrix of all data 106 | :return: normalized data 107 | """ 108 | try: 109 | mat_size = data_matrix.shape 110 | for i in range(0, mat_size[1]): 111 | the_column = data_matrix[:, i] 112 | column_mean = np.mean(the_column) 113 | minus_column = np.mat(the_column-column_mean) 114 | std = np.sqrt(np.transpose(minus_column)*minus_column/mat_size[0]) 115 | data_matrix[:, i] = (the_column-column_mean)/std 116 | return data_matrix 117 | except Exception as e: 118 | print(e) 119 | finally: 120 | pass 121 | 122 | def get_k(self): 123 | return self._k 124 | -------------------------------------------------------------------------------- /test/test_dimension.py: -------------------------------------------------------------------------------- 1 | from zoopt import Dimension, Dimension2, ValueType 2 | 3 | 4 | class TestDimension(object): 5 | def test_judge_match(self): 6 | size = 3 7 | regs = [[1, 5], [-1, 1], [1, 2]] 8 | tys = [True, True, True] 9 | assert Dimension.judge_match(size, regs, tys) == True 10 | tys = [True, True] 11 | assert Dimension.judge_match(size, regs, tys) == False 12 | 13 | def test_merge_dim(self): 14 | dim1 = Dimension(1, [[1, 2]], [True]) 15 | dim2 = Dimension(2, [[1, 2], [2, 3]], [True, True]) 16 | dim3 = Dimension.merge_dim(dim1, dim2) 17 | assert dim3.equal(Dimension(3, [[1, 2], [1, 2], [2, 3]], [True, True, True])) 18 | 19 | def test_set_region(self): 20 | dim = Dimension(2, [[1, 2], [2, 3]], [True, True]) 21 | dim.set_region(1, [-1, 1], True) 22 | assert dim.equal(Dimension(2, [[1, 2], [-1, 1]], [True, True])) 23 | 24 | def test_set_regions(self): 25 | dim = Dimension(2, [[1, 2], [2, 3]], [True, True]) 26 | dim.set_regions([[-1, 1], [-1, 1]], [True, True]) 27 | assert dim.equal(Dimension(2, [[-1, 1], [-1, 1]], [True, True])) 28 | 29 | def test_limited_space(self): 30 | dim1 = Dimension(2, [[-1, 1], [-1, 1]], [True, True]) 31 | limited, number = dim1.limited_space() 32 | assert limited is False and number == 0 33 | dim2 = Dimension(2, [[-1, 1], [-1, 1]], [False, False]) 34 | limited, number = dim2.limited_space() 35 | assert limited is True and number == 9 36 | 37 | def test_deep_copy(self): 38 | dim1 = Dimension(2, [[-1, 1], [-1, 1]], [True, True]) 39 | dim2 = dim1.deep_copy() 40 | assert dim1.equal(dim2) 41 | 42 | def test_copy_region(self): 43 | dim1 = Dimension(2, [[-1, 1], [-1, 1]], [True, True]) 44 | region = dim1.copy_region() 45 | assert region == [[-1, 1], [-1, 1]] 46 | 47 | def test_is_discrete(self): 48 | dim1 = Dimension(2, [[-1, 1], [-1, 1]], [True, False]) 49 | assert dim1.is_discrete() is False 50 | 51 | 52 | class TestDimension2(object): 53 | def test_judge_match(self): 54 | size = 3 55 | regs = [[1, 5], [-1, 1], [1, 2]] 56 | tys = [True, True, True] 57 | assert Dimension2.judge_match(size, regs, tys) == True 58 | tys = [True, True] 59 | assert Dimension2.judge_match(size, regs, tys) == False 60 | 61 | def test_merge_dim(self): 62 | dim_list1 = [(ValueType.CONTINUOUS, [1, 2], 1e-6)] 63 | dim_list2 = [(ValueType.CONTINUOUS, [1, 2], 1e-6), 64 | (ValueType.CONTINUOUS, [2, 3], 1e-6)] 
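        # Note: each Dimension2 entry is a (ValueType, range, extra) tuple: for CONTINUOUS
        # dimensions the third field is the sampling precision (e.g. 1e-6), and for DISCRETE
        # dimensions it marks whether the values are ordered (cf. test_limited_space below,
        # where the precision determines the number of distinct values in the space).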
65 | dim1 = Dimension2(dim_list1) 66 | dim2 = Dimension2(dim_list2) 67 | dim3 = Dimension2.merge_dim(dim1, dim2) 68 | assert dim3.equal(Dimension2([ 69 | (ValueType.CONTINUOUS, [1, 2], 1e-6), 70 | (ValueType.CONTINUOUS, [1, 2], 1e-6), 71 | (ValueType.CONTINUOUS, [2, 3], 1e-6) 72 | ])) 73 | 74 | def test_limited_space(self): 75 | dim1 = Dimension2([(ValueType.CONTINUOUS, [-1, 1], 1e-6), 76 | (ValueType.CONTINUOUS, [-1, 1], 1e-6)]) 77 | limited, number = dim1.limited_space() 78 | assert limited is True and number == 4e12 79 | dim2 = Dimension2([(ValueType.DISCRETE, [-1, 1], False), 80 | (ValueType.DISCRETE, [-1, 1], False)]) 81 | limited, number = dim2.limited_space() 82 | assert limited is True and number == 9 83 | 84 | def test_deep_copy(self): 85 | dim1 = Dimension2([(ValueType.CONTINUOUS, [-1, 1], 1e-6), 86 | (ValueType.CONTINUOUS, [-1, 1], 1e-6)]) 87 | dim2 = dim1.deep_copy() 88 | assert dim1.equal(dim2) 89 | 90 | def test_copy_region(self): 91 | dim1 = Dimension2([(ValueType.CONTINUOUS, [-1, 1], 1e-6), 92 | (ValueType.CONTINUOUS, [-1, 1], 1e-6)]) 93 | region = dim1.copy_region() 94 | assert region == [[-1, 1], [-1, 1]] 95 | 96 | def test_is_discrete(self): 97 | dim1 = Dimension2([(ValueType.CONTINUOUS, [-1, 1], 1e-6), 98 | (ValueType.DISCRETE, [-1, 1], False)]) 99 | assert dim1.is_discrete() is False -------------------------------------------------------------------------------- /example/sparse_regression/sparse_mse.py: -------------------------------------------------------------------------------- 1 | """ 2 | This module contains the implementation of Sparse MSE problem. 3 | 4 | Author: 5 | Chao Feng 6 | """ 7 | 8 | import numpy as np 9 | from zoopt import Opt, Parameter, Objective, Dimension, ExpOpt 10 | import codecs 11 | import arff 12 | 13 | 14 | class SparseMSE: 15 | """ 16 | This class implements the Sparse MSE problem. 17 | """ 18 | _X = 0 19 | _Y = 0 20 | _C = 0 21 | _b = 0 22 | _size = 0 23 | _k = 0 24 | _best_solution = None 25 | 26 | def __init__(self, filename): 27 | """ 28 | Initialization. 29 | :param filename: filename 30 | """ 31 | data = self.read_data(filename) 32 | self._size = np.shape(data)[1] - 1 33 | self._X = data[:, 0: self._size] 34 | self._Y = data[:, self._size] 35 | self._C = self._X.T * self._X 36 | self._b = self._X.T * self._Y 37 | 38 | def position(self, s): 39 | """ 40 | This function is to find the index of s where element is 1 41 | return a list of positions 42 | :param s: 43 | :return: a list of index of s where element is 1 44 | """ 45 | n = len(s) 46 | result = [] 47 | for i in range(n): 48 | if s[i] == 1: 49 | result.append(i) 50 | return result 51 | 52 | def constraint(self, solution): 53 | """ 54 | If the constraints are satisfied, the constraint function will return a zero or positive value. Otherwise a 55 | negative value will be returned. 
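For example, with sparsity k = 8, a selection vector containing six 1-bits gives k - sum(x) = 2 (feasible), while one containing ten 1-bits gives -2 (infeasible).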
56 | 57 | :param solution: a Solution object 58 | :return: a zero or positive value which means constraints are satisfied, otherwise a negative value 59 | """ 60 | x = solution.get_x() 61 | return self._k-sum(x) 62 | 63 | def set_sparsity(self, k): 64 | self._k = k 65 | 66 | def get_sparsity(self): 67 | return self._k 68 | 69 | def loss(self, solution): 70 | """ 71 | loss function for sparse regression 72 | :param solution: a Solution object 73 | """ 74 | x = solution.get_x() 75 | if sum(x) == 0.0 or sum(x) >= 2.0 * self._k: 76 | return float('inf') 77 | pos = self.position(x) 78 | alpha = (self._C[pos, :])[:, pos] 79 | alpha = alpha.I * self._b[pos, :] 80 | sub = self._Y - self._X[:, pos]*alpha 81 | mse = sub.T*sub / np.shape(self._Y)[0] 82 | return mse[0, 0] 83 | 84 | def get_dim(self): 85 | """ 86 | Construct a Dimension object of this problem. 87 | :return: a dimension object of sparse mse. 88 | """ 89 | dim_regs = [[0, 1]] * self._size 90 | dim_tys = [False] * self._size 91 | return Dimension(self._size, dim_regs, dim_tys) 92 | 93 | def read_data(self, filename): 94 | """ 95 | Read data from file. 96 | :param filename: filename 97 | :return: normalized data 98 | """ 99 | file_ = codecs.open(filename, 'rb', 'utf-8') 100 | decoder = arff.ArffDecoder() 101 | dataset = decoder.decode(file_.readlines(), encode_nominal=True) 102 | file_.close() 103 | data = dataset['data'] 104 | return self.normalize_data(np.mat(data)) 105 | 106 | @staticmethod 107 | def normalize_data(data_matrix): 108 | """ 109 | Normalize data to have mean 0 and variance 1 for each column 110 | 111 | :param data_matrix: matrix of all data 112 | :return: normalized data 113 | """ 114 | try: 115 | mat_size = np.shape(data_matrix) 116 | for i in range(0, mat_size[1]): 117 | the_column = data_matrix[:, i] 118 | column_mean = sum(the_column)/mat_size[0] 119 | minus_column = np.mat(the_column-column_mean) 120 | std = np.sqrt(np.transpose(minus_column)*minus_column/mat_size[0]) 121 | data_matrix[:, i] = (the_column-column_mean)/std 122 | return data_matrix 123 | except Exception as e: 124 | print(e) 125 | finally: 126 | pass 127 | 128 | def get_k(self): 129 | return self._k 130 | -------------------------------------------------------------------------------- /test/test_algos/test_opt_algorithm/test_paretoopt/sparse_mse.py: -------------------------------------------------------------------------------- 1 | """ 2 | This module contains the implementation of Sparse MSE problem. 3 | 4 | Author: 5 | Chao Feng 6 | """ 7 | 8 | import numpy as np 9 | from zoopt import Opt, Parameter, Objective, Dimension, ExpOpt 10 | import codecs 11 | import arff 12 | 13 | 14 | class SparseMSE: 15 | """ 16 | This class implements the Sparse MSE problem. 17 | """ 18 | _X = 0 19 | _Y = 0 20 | _C = 0 21 | _b = 0 22 | _size = 0 23 | _k = 0 24 | _best_solution = None 25 | 26 | def __init__(self, filename): 27 | """ 28 | Initialization. 
29 | :param filename: filename 30 | """ 31 | data = self.read_data(filename) 32 | self._size = np.shape(data)[1] - 1 33 | self._X = data[:, 0: self._size] 34 | self._Y = data[:, self._size] 35 | self._C = self._X.T * self._X 36 | self._b = self._X.T * self._Y 37 | 38 | def position(self, s): 39 | """ 40 | Find the indices of s where the element is 1 and 41 | return them as a list of positions. 42 | :param s: a 0/1 list 43 | :return: a list of indices of s where the element is 1 44 | """ 45 | n = len(s) 46 | result = [] 47 | for i in range(n): 48 | if s[i] == 1: 49 | result.append(i) 50 | return result 51 | 52 | def constraint(self, solution): 53 | """ 54 | If the constraints are satisfied, the constraint function will return a zero or positive value. Otherwise a 55 | negative value will be returned. 56 | 57 | :param solution: a Solution object 58 | :return: a zero or positive value which means constraints are satisfied, otherwise a negative value 59 | """ 60 | x = solution.get_x() 61 | return self._k-sum(x) 62 | 63 | def set_sparsity(self, k): 64 | self._k = k 65 | 66 | def get_sparsity(self): 67 | return self._k 68 | 69 | def loss(self, solution): 70 | """ 71 | Loss function for sparse regression. 72 | :param solution: a Solution object 73 | """ 74 | x = solution.get_x() 75 | if sum(x) == 0.0 or sum(x) >= 2.0 * self._k: 76 | return float('inf') 77 | pos = self.position(x) 78 | alpha = (self._C[pos, :])[:, pos] 79 | alpha = alpha.I * self._b[pos, :] 80 | sub = self._Y - self._X[:, pos]*alpha 81 | mse = sub.T*sub / np.shape(self._Y)[0] 82 | return mse[0, 0] 83 | 84 | def get_dim(self): 85 | """ 86 | Construct a Dimension object of this problem. 87 | :return: a dimension object of sparse mse. 88 | """ 89 | dim_regs = [[0, 1]] * self._size 90 | dim_tys = [False] * self._size 91 | return Dimension(self._size, dim_regs, dim_tys) 92 | 93 | def read_data(self, filename): 94 | """ 95 | Read data from file. 96 | :param filename: filename 97 | :return: normalized data 98 | """ 99 | file_ = codecs.open(filename, 'rb', 'utf-8') 100 | decoder = arff.ArffDecoder() 101 | dataset = decoder.decode(file_.readlines(), encode_nominal=True) 102 | file_.close() 103 | data = dataset['data'] 104 | return self.normalize_data(np.mat(data)) 105 | 106 | @staticmethod 107 | def normalize_data(data_matrix): 108 | """ 109 | Normalize data to have mean 0 and variance 1 for each column. 110 | 111 | :param data_matrix: matrix of all data 112 | :return: normalized data 113 | """ 114 | try: 115 | mat_size = np.shape(data_matrix) 116 | for i in range(0, mat_size[1]): 117 | the_column = data_matrix[:, i] 118 | column_mean = sum(the_column)/mat_size[0] 119 | minus_column = np.mat(the_column-column_mean) 120 | std = np.sqrt(np.transpose(minus_column)*minus_column/mat_size[0]) 121 | data_matrix[:, i] = (the_column-column_mean)/std 122 | return data_matrix 123 | except Exception as e: 124 | print(e) 125 | finally: 126 | pass 127 | 128 | def get_k(self): 129 | return self._k 130 | -------------------------------------------------------------------------------- /zoopt/algos/opt_algorithms/racos/racos.py: -------------------------------------------------------------------------------- 1 | """ 2 | This module contains the class Racos, which is an optimization algorithm provided in ZOOpt.
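RACOS keeps a set of positive (good) solutions and a set of negative ones, learns an axis-parallel box that covers the positive solutions and excludes the negative ones, and samples new solutions either from that box or uniformly from the whole search space (see the opt method below).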
3 | 4 | Author: 5 | Yu-Ren Liu 6 | """ 7 | 8 | import time 9 | import numpy as np 10 | from zoopt.algos.opt_algorithms.racos.racos_classification import RacosClassification 11 | from zoopt.algos.opt_algorithms.racos.racos_common import RacosCommon 12 | from zoopt.utils.tool_function import ToolFunction 13 | 14 | 15 | class Racos(RacosCommon): 16 | """ 17 | The class Racos represents the Racos algorithm. It is inherited from RacosCommon. 18 | """ 19 | 20 | def __init__(self): 21 | RacosCommon.__init__(self) 22 | 23 | def opt(self, objective, parameter, ub=1): 24 | """ 25 | Racos optimization. 26 | 27 | :param objective: an Objective object 28 | :param parameter: a Parameter object 29 | :param ub: uncertain bits, which is a parameter of Racos 30 | :return: the best solution of the optimization 31 | """ 32 | self.clear() 33 | self.set_objective(objective) 34 | self.set_parameters(parameter) 35 | self.init_attribute() 36 | stopping_criterion = self._parameter.get_stopping_criterion() 37 | t = int(self._parameter.get_budget() / self._parameter.get_negative_size()) 38 | time_log1 = time.time() 39 | for i in range(t): 40 | j = 0 41 | iteration_num = len(self._negative_data) 42 | sampled_data = self._positive_data + self._negative_data 43 | while j < iteration_num: 44 | if np.random.random() < self._parameter.get_probability(): 45 | classifier = RacosClassification( 46 | self._objective.get_dim(), self._positive_data, self._negative_data, ub) 47 | classifier.mixed_classification() 48 | solution, distinct_flag = self.distinct_sample_classifier(classifier, sampled_data, True, 49 | self._parameter.get_train_size()) 50 | else: 51 | solution, distinct_flag = self.distinct_sample(self._objective.get_dim(), sampled_data) 52 | # panic stop 53 | if solution is None: 54 | return self._best_solution 55 | # if the solution has been sampled before, skip it 56 | if distinct_flag is False: 57 | continue 58 | # evaluate the solution 59 | objective.eval(solution) 60 | # show the best solution 61 | times = i * self._parameter.get_negative_size() + j + 1 62 | self.show_best_solution(parameter.get_intermediate_result(), times, 63 | parameter.get_intermediate_freq()) 64 | self._data.append(solution) 65 | sampled_data.append(solution) 66 | # stopping criterion check 67 | if stopping_criterion.check(self) is True: 68 | return self._best_solution 69 | j += 1 70 | self.selection() 71 | self._best_solution = self._positive_data[0] 72 | # display the expected running time 73 | if i == 4: 74 | time_log2 = time.time() 75 | expected_time = t * (time_log2 - time_log1) / 5 76 | if self._parameter.get_time_budget() is not None: 77 | expected_time = min(expected_time, self._parameter.get_time_budget()) 78 | if expected_time > 5: 79 | m, s = divmod(expected_time, 60) 80 | h, m = divmod(m, 60) 81 | ToolFunction.log('expected remaining running time: %02d:%02d:%02d' % (h, m, s)) 82 | # time budget check 83 | if self._parameter.get_time_budget() is not None: 84 | if time.time() - time_log1 >= self._parameter.get_time_budget(): 85 | ToolFunction.log('time_budget runs out') 86 | return self._best_solution 87 | # terminal_value check 88 | if self._parameter.get_terminal_value() is not None: 89 | if self._best_solution.get_value() <= self._parameter.get_terminal_value(): 90 | ToolFunction.log('terminal function value reached') 91 | return self._best_solution 92 | return self._best_solution 93 | -------------------------------------------------------------------------------- /docs/ZOOpt/Quick-Start.rst: 
-------------------------------------------------------------------------------- 1 | --------------- 2 | Quick Start 3 | --------------- 4 | 5 | ZOOpt is a Python package for `Zeroth-Order Optimization `__. 6 | 7 | Zeroth-order optimization (a.k.a. derivative-free optimization/black-box optimization) does not rely on the gradient of the objective function, but instead, learns from samples of the search space. It is suitable for optimizing functions that are nondifferentiable, with many local minima, or even unknown but only testable. 8 | 9 | ZOOpt implements some state-of-the-art zeroth-order optimization methods and their parallel versions. Users only need to add several keywords to use parallel optimization on a single machine. For large-scale distributed optimization across multiple machines, please refer to `Distributed ZOOpt `__. 10 | 11 | .. contents:: Table of Contents 12 | 13 | Required packages 14 | ----------------- 15 | 16 | This package requires the following packages: 17 | 18 | - Python version 2.7, or 3.5 and above 19 | - ``numpy`` http://www.numpy.org 20 | - ``matplotlib`` http://matplotlib.org/ (optional for plot drawing) 21 | 22 | The easiest way to get these is to use 23 | `pip `__ or the 24 | `conda `__ environment 25 | manager. Typing the following command in your terminal will install all 26 | required packages in your Python environment. 27 | 28 | .. code:: console 29 | 30 | $ conda install numpy matplotlib 31 | 32 | or 33 | 34 | .. code:: console 35 | 36 | $ pip install numpy matplotlib 37 | 38 | Getting and installing ZOOpt 39 | ---------------------------- 40 | 41 | The easiest way to install ZOOpt is to type ``pip install zoopt`` in your 42 | terminal/command line. 43 | 44 | If you want to install ZOOpt from source code, download this project and 45 | run the following commands in your terminal/command line in order. 46 | 47 | .. code:: console 48 | 49 | $ python setup.py build 50 | $ python setup.py install 51 | 52 | A quick example 53 | --------------- 54 | 55 | We define the Ackley function for minimization (note that this function works in arbitrary dimensions, determined by the solution) 56 | 57 | .. code:: python 58 | 59 | import numpy as np 60 | def ackley(solution): 61 | x = solution.get_x() 62 | bias = 0.2 63 | value = -20 * np.exp(-0.2 * np.sqrt(sum([(i - bias) * (i - bias) for i in x]) / len(x))) - \ 64 | np.exp(sum([np.cos(2.0*np.pi*(i-bias)) for i in x]) / len(x)) + 20.0 + np.e 65 | return value 66 | 67 | The Ackley function is a classical function with many local minima. In two dimensions, it looks like this (from Wikipedia) 68 | 69 | .. image:: https://upload.wikimedia.org/wikipedia/commons/thumb/9/98/Ackley%27s_function.pdf/page1-400px-Ackley%27s_function.pdf.jpg 70 | :width: 400px 71 | :alt: Ackley function 72 | 73 | Then, use ZOOpt to optimize a 100-dimension Ackley function: 74 | 75 | .. code:: python
76 | 77 | from zoopt import Dimension, ValueType, Dimension2, Objective, Parameter, Opt, ExpOpt 78 | 79 | dim_size = 100 # dimension 80 | dim = Dimension(dim_size, [[-1, 1]]*dim_size, [True]*dim_size) # or dim = Dimension2([(ValueType.CONTINUOUS, [-1, 1], 1e-6)]*dim_size) 81 | obj = Objective(ackley, dim) 82 | # perform optimization 83 | solution = Opt.min(obj, Parameter(budget=100*dim_size)) 84 | # print the solution 85 | print(solution.get_x(), solution.get_value()) 86 | # parallel optimization for time-consuming tasks 87 | solution = Opt.min(obj, Parameter(budget=100*dim_size, parallel=True, server_num=3)) 88 | 89 | Note that two classes are provided for constructing dimensions; feel free to try either. 90 | After a few seconds, the optimization is done. Then, we can visualize the optimization progress. 91 | 92 | .. code:: python 93 | 94 | import matplotlib.pyplot as plt 95 | plt.plot(obj.get_history_bestsofar()) 96 | plt.savefig('figure.png') 97 | 98 | which looks like 99 | 100 | .. image:: https://github.com/eyounx/ZOOpt/blob/dev/img/quick_start.png?raw=true 101 | :width: 400px 102 | 103 | We can also use ``ExpOpt`` to repeat the optimization for performance analysis, which will calculate the mean and standard deviation of multiple optimization results while automatically visualizing the optimization progress. 104 | 105 | .. code:: python 106 | 107 | solution_list = ExpOpt.min(obj, Parameter(budget=100*dim_size), repeat=3, plot=True, plot_file="progress.png") 108 | for solution in solution_list: 109 | print(solution.get_x(), solution.get_value()) 110 | 111 | More examples are available in the **Example** part. 112 | -------------------------------------------------------------------------------- /zoopt/algos/opt_algorithms/paretoopt/paretoopt.py: -------------------------------------------------------------------------------- 1 | """ 2 | The canonical Pareto optimization. 3 | Running Pareto optimization will use the objective.eval_constraint function. This function makes the solution.get_value() a vector. 4 | The first element of the vector is the objective function value by objective.__func, and the second element is the constraint degree by objective.__constraint. 5 | 6 | Author: 7 | Chao Feng, Yang Yu 8 | """ 9 | 10 | import numpy as np 11 | from copy import deepcopy 12 | import time 13 | from zoopt.utils.tool_function import ToolFunction 14 | 15 | 16 | class ParetoOpt: 17 | 18 | """ 19 | Pareto optimization. 20 | """ 21 | def __init__(self): 22 | pass 23 | 24 | @staticmethod 25 | def mutation(s, n): 26 | """ 27 | Every bit of s will be flipped with probability 1/n. 28 | 29 | :param s: s is a list 30 | :param n: the probability of flipping is set to 1/n 31 | :return: the flipped s 32 | """ 33 | s_temp = deepcopy(s) 34 | threshold = 1.0 / n 35 | flipped = False 36 | for i in range(0, n): 37 | # the probability is 1/n 38 | if np.random.uniform(0, 1) <= threshold: 39 | s_temp[i] = (s[i] + 1) % 2 40 | flipped = True 41 | if not flipped: 42 | mustflip = np.random.randint(0, n) 43 | s_temp[mustflip] = (s[mustflip] + 1) % 2 44 | return s_temp 45 | 46 | def opt(self, objective, parameter): 47 | """ 48 | Pareto optimization.
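Each evaluated solution carries a value vector [f(x), c(x)], where c(x) is the constraint degree; in the update loop below, an offspring is discarded if some member of the population is at least as good on both entries (lower f, higher c) and strictly better on one.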
49 | 50 | :param objective: an Objective object 51 | :param parameter: a Parameter object 52 | :return: the best solution of the optimization 53 | """ 54 | isolationFunc = parameter.get_isolationFunc() 55 | n = objective.get_dim().get_size() 56 | 57 | # initialize the population 58 | sol = objective.construct_solution(np.zeros(n)) 59 | objective.eval_constraint(sol) 60 | 61 | population = [sol] 62 | fitness = [sol.get_value()] 63 | pop_size = 1 64 | # iteration count 65 | t = 0 66 | T = parameter.get_budget() 67 | while t < T: 68 | if t == 0: 69 | time_log1 = time.time() 70 | # randomly choose an individual from the population 71 | s = population[np.random.randint(0, pop_size)] 72 | # every bit will be flipped with probability 1/n 73 | offspring_x = self.mutation(s.get_x(), n) 74 | offspring = objective.construct_solution(offspring_x) 75 | objective.eval_constraint(offspring) 76 | offspring_fit = offspring.get_value() 77 | # now we need to update the population 78 | hasBetter = False 79 | 80 | for i in range(0, pop_size): 81 | if isolationFunc(offspring_x) != isolationFunc(population[i].get_x()): 82 | continue 83 | else: 84 | if (fitness[i][0] < offspring_fit[0] and fitness[i][1] >= offspring_fit[1]) or \ 85 | (fitness[i][0] <= offspring_fit[0] and fitness[i][1] > offspring_fit[1]): 86 | hasBetter = True 87 | break 88 | # there is no better individual than the offspring 89 | if not hasBetter: 90 | Q = [] 91 | Qfit = [] 92 | for j in range(0, pop_size): 93 | if offspring_fit[0] <= fitness[j][0] and offspring_fit[1] >= fitness[j][1]: 94 | continue 95 | else: 96 | Q.append(population[j]) 97 | Qfit.append(fitness[j]) 98 | Q.append(offspring) 99 | Qfit.append(offspring_fit) 100 | # update fitness 101 | fitness = Qfit 102 | # update population 103 | population = Q 104 | t += 1 105 | pop_size = np.shape(fitness)[0] 106 | 107 | # display expected running time 108 | if t == 5: 109 | time_log2 = time.time() 110 | expected_time = T * (time_log2 - time_log1) / 5 111 | if expected_time > 5: 112 | m, s = divmod(expected_time, 60) 113 | h, m = divmod(m, 60) 114 | ToolFunction.log('expected remaining running time: %02d:%02d:%02d' % (h, m, s)) 115 | result_index = -1 116 | max_value = float('inf') 117 | for p in range(0, pop_size): 118 | fitness = population[p].get_value() 119 | if fitness[1] >= 0 and fitness[0] < max_value: 120 | max_value = fitness[0] 121 | result_index = p 122 | return population[result_index] 123 | 124 | 125 | -------------------------------------------------------------------------------- /docs/Examples/Optimize-a-Noisy-Function.rst: -------------------------------------------------------------------------------- 1 | --------------------------------- 2 | Optimize a Noisy Function 3 | --------------------------------- 4 | 5 | Many real-world environments are noisy, where solution evaluations are 6 | inaccurate due to the noise. Noisy evaluation can badly injure 7 | derivative-free optimization, as it may make a worse solution look 8 | better. 9 | 10 | Three noise handling methods are implemented in ZOOpt: re-sampling, 11 | value suppression for ``SRACOS`` (``SSRACOS``), and threshold 12 | selection for ``POSS`` (``PONSS``). 13 | 14 | On this page, we provide examples of how to use the three noise handling 15 | methods in ZOOpt. 16 | 17 | .. contents:: Table of Contents 18 | 19 | Re-sampling and Value Suppression 20 | --------------------------------- 21 | 22 | We define the Ackley function under noise in simple\_function.py for 23 | minimization. 24 | 25 | .. 
code:: python 26 | 27 | import numpy as np 28 | 29 | def ackley(solution): 30 | """ 31 | Ackley function for continuous optimization 32 | """ 33 | x = solution.get_x() 34 | bias = 0.2 35 | ave_seq = sum([(i - bias) * (i - bias) for i in x]) / len(x) 36 | ave_cos = sum([np.cos(2.0*np.pi*(i-bias)) for i in x]) / len(x) 37 | value = -20 * np.exp(-0.2 * np.sqrt(ave_seq)) - np.exp(ave_cos) + 20.0 + np.e 38 | return value 39 | 40 | 41 | def ackley_noise_creator(mu, sigma): 42 | """ 43 | Ackley function under noise 44 | """ 45 | return lambda solution: ackley(solution) + np.random.normal(mu, sigma, 1) 46 | 47 | Then, define a corresponding *objective* object. 48 | 49 | .. code:: python 50 | 51 | ackley_noise_func = ackley_noise_creator(0, 0.1) 52 | dim_size = 100 # dimensions 53 | dim_regs = [[-1, 1]] * dim_size # dimension range 54 | dim_tys = [True] * dim_size # dimension type : real 55 | dim = Dimension(dim_size, dim_regs, dim_tys) # form up the dimension object 56 | # dim = Dimension2([(ValueType.CONTINUOUS, [-1, 1], 1e-6)]*dim_size) # another way to form up the dimension object 57 | objective = Objective(ackley_noise_func, dim) # form up the objective function 58 | 59 | Re-sampling 60 | ~~~~~~~~~~~ 61 | 62 | To use the re-sampling noise handling method, ``noise_handling`` and 63 | ``resampling`` should be set to ``True``. In addition, 64 | ``resample_times`` should be provided by users. 65 | 66 | .. code:: python 67 | 68 | parameter = Parameter(budget=200000, noise_handling=True, resampling=True, resample_times=10) 69 | # This setting is optional 70 | parameter.set_positive_size(5) 71 | 72 | Value Suppression for ``SRACOS`` (``SSRACOS``) 73 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 74 | 75 | To use the ``SSRACOS`` noise handling method, ``noise_handling`` and 76 | ``suppression`` should be set to ``True``. In addition, 77 | ``non_update_allowed``, ``resample_times`` and ``balance_rate`` should 78 | be provided by users. 79 | 80 | .. code:: python 81 | 82 | # non_update_allowed=500 and resample_times=100 mean that if the best solution doesn't change for 500 budgets, the best solution will be re-evaluated 100 times 83 | # balance_rate is a parameter for the exponential weighted average of several evaluations of one sample. 84 | parameter = Parameter(budget=200000, noise_handling=True, suppression=True, non_update_allowed=500, resample_times=100, balance_rate=0.5) 85 | # This setting is optional 86 | parameter.set_positive_size(5) 87 | 88 | Finally, use ``ExpOpt.min`` to optimize this function. 89 | 90 | .. code:: python 91 | 92 | solution_list = ExpOpt.min(objective, parameter, repeat=1, plot=True) 93 | 94 | Threshold Selection for ``POSS`` (``PONSS``) 95 | -------------------------------------------- 96 | 97 | A sparse regression problem is defined in 98 | ``example/sparse_regression/sparse_mse.py``. 99 | 100 | Then, define a corresponding *objective* object. 101 | 102 | .. code:: python 103 | 104 | from sparse_mse import SparseMSE 105 | from zoopt import Objective, Parameter, ExpOpt 106 | from math import exp 107 | 108 | # load data file 109 | mse = SparseMSE('sonar.arff') 110 | mse.set_sparsity(8) 111 | 112 | # setup objective 113 | objective = Objective(func=mse.loss, dim=mse.get_dim(), constraint=mse.constraint) 114 | 115 | To use the ``PONSS`` noise handling method, ``algorithm`` should be set to 116 | ``'poss'``, and ``noise_handling`` and ``ponss`` should be set to ``True``. 117 | In addition, ``ponss_theta`` and ``ponss_b`` should be provided by 118 | users. 119 | 120 | .. 
code:: python 121 | 122 | # ponss_theta and ponss_b are parameters used in the PONSS algorithm and should be provided by users. ponss_theta stands 123 | # for the threshold. ponss_b limits the number of solutions in the population set. 124 | parameter = Parameter(algorithm='poss', noise_handling=True, ponss=True, ponss_theta=0.5, ponss_b=mse.get_k(), 125 | budget=2 * exp(1) * (mse.get_sparsity() ** 2) * mse.get_dim().get_size()) 126 | 127 | Finally, use ``ExpOpt.min`` to optimize this function. 128 | 129 | .. code:: python 130 | 131 | solution_list = ExpOpt.min(objective, parameter, repeat=1, plot=True) 132 | 133 | More concrete examples are available in 134 | ``example/simple_functions/opt_under_noise.py`` and 135 | ``example/sparse_regression/ponss_opt.py``. 136 | -------------------------------------------------------------------------------- /example/direct_policy_search_for_gym/run.py: -------------------------------------------------------------------------------- 1 | """ 2 | The function run_test is defined in this file. You can run this file to get the results of this example. 3 | 4 | Author: 5 | Yu-Ren Liu 6 | """ 7 | from gym_task import GymTask 8 | from zoopt import Dimension, Objective, Parameter, ExpOpt, Opt, Solution 9 | 10 | 11 | def run_test(task_name, layers, in_budget, max_step, repeat, terminal_value=None): 12 | """ 13 | Example of running direct policy search for a gym task. 14 | 15 | :param task_name: gym task name 16 | :param layers: 17 | layer information of the neural network 18 | e.g., [2, 5, 1] means the input layer has 2 neurons, the hidden layer (only one) has 5 and the output layer has 1 19 | :param in_budget: number of calls to the objective function 20 | :param max_step: max step in gym 21 | :param repeat: number of repetitions of the optimization 22 | :param terminal_value: early-stopping value; the algorithm should stop when this value is reached 23 | :return: no return value 24 | """ 25 | gym_task = GymTask(task_name) # choose a task by name 26 | gym_task.new_nnmodel(layers) # construct a neural network 27 | gym_task.set_max_step(max_step) # set max step in gym 28 | 29 | budget = in_budget # number of calls to the objective function 30 | rand_probability = 0.95 # the probability of sampling from the learned model 31 | 32 | # set dimension 33 | dim_size = gym_task.get_w_size() 34 | dim_regs = [[-10, 10]] * dim_size 35 | dim_tys = [True] * dim_size 36 | dim = Dimension(dim_size, dim_regs, dim_tys) 37 | 38 | # form up the objective function 39 | objective = Objective(gym_task.sum_reward, dim) 40 | parameter = Parameter(budget=budget, terminal_value=terminal_value) 41 | parameter.set_probability(rand_probability) 42 | 43 | solution_list = ExpOpt.min(objective, parameter, repeat=repeat, plot=True) 44 | 45 | 46 | def run_test_handlingnoise(task_name, layers, in_budget, max_step, repeat, terminal_value): 47 | """ 48 | Example of running direct policy search for a gym task with noise handling. 
49 | 50 | :param task_name: gym task name 51 | :param layers: 52 | layer information of the neural network 53 | e.g., [2, 5, 1] means the input layer has 2 neurons, the hidden layer (only one) has 5 and the output layer has 1 54 | :param in_budget: number of calls to the objective function 55 | :param max_step: max step in gym 56 | :param repeat: number of repetitions for noise handling 57 | :param terminal_value: early-stopping value; the algorithm should stop when this value is reached 58 | :return: no return value 59 | """ 60 | gym_task = GymTask(task_name) # choose a task by name 61 | gym_task.new_nnmodel(layers) # construct a neural network 62 | gym_task.set_max_step(max_step) # set max step in gym 63 | 64 | budget = in_budget # number of calls to the objective function 65 | rand_probability = 0.95 # the probability of sampling from the learned model 66 | 67 | # set dimension 68 | dim_size = gym_task.get_w_size() 69 | dim_regs = [[-10, 10]] * dim_size 70 | dim_tys = [True] * dim_size 71 | dim = Dimension(dim_size, dim_regs, dim_tys) 72 | # form up the objective function 73 | objective = Objective(gym_task.sum_reward, dim) 74 | # by default, the algorithm is sequential RACOS 75 | parameter = Parameter(budget=budget, autoset=True, 76 | suppression=True, terminal_value=terminal_value) 77 | parameter.set_resample_times(70) 78 | parameter.set_probability(rand_probability) 79 | 80 | solution_list = ExpOpt.min(objective, parameter, repeat=repeat) 81 | 82 | 83 | def test(task_name, layers, max_step, solution): 84 | gym_task = GymTask(task_name) # choose a task by name 85 | gym_task.new_nnmodel(layers) # construct a neural network 86 | gym_task.set_max_step(max_step) # set max step in gym 87 | reward = gym_task.sum_reward(solution) 88 | print(reward) 89 | return reward 90 | 91 | 92 | if __name__ == '__main__': 93 | CartPole_layers = [4, 5, 1] 94 | mountain_car_layers = [2, 5, 1] 95 | acrobot_layers = [6, 5, 3, 1] 96 | halfcheetah_layers = [17, 10, 6] 97 | humanoid_layers = [376, 25, 17] 98 | swimmer_layers = [8, 5, 3, 2] 99 | ant_layers = [111, 15, 8] 100 | hopper_layers = [11, 9, 5, 3] 101 | lunarlander_layers = [8, 5, 3, 1] 102 | 103 | # run_test('CartPole-v0', CartPole_layers, 2000, 500, 1) 104 | solution = Solution(x=[0.9572737226684644, 0.6786734362488325, 3.034275386199532, -1.465937683272493, -2.851881104646097, -1.4061455678150114, 5.406235543363033, -6.525666803912518, 5.509873865601744, -0.2641441560205742, 0.16264240578115619, 7.142612522126051, 7.401277183520886, -8.143118688085988, 1.3939130264981063, -7.288693746967178, 4.370406888883354, 6.996497964270439, -0.506503274716799, 2.7761417375401347, 0.23427516091347123, 7.707963832464561, 6.790387947114599, 1.6543213356897475, 8.549797968853504]) 105 | test("CartPole-v0", CartPole_layers, 500, solution) 106 | # run_test('MountainCar-v0', mountain_car_layers, 2000, 1000, 1) 107 | # run_test_handlingnoise('MountainCar-v0', mountain_car_layers, 1000, 1000, 5, terminal_value=-500) 108 | # run_test('Acrobot-v1', acrobot_layers, 2000, 500, 10) 109 | # If you want to run the following examples, you may need to install more libs (mujoco, Box2D). 
110 | # run_test('HalfCheetah-v1', halfcheetah_layers, 2000, 10000, 10) 111 | # run_test('Humanoid-v1', humanoid_layers, 2000, 50000, 10) 112 | # run_test('Swimmer-v1', swimmer_layers, 2000, 10000, 10) 113 | # run_test('Ant-v1', ant_layers, 2000, 10000, 10) 114 | # run_test('Hopper-v1', hopper_layers, 2000, 10000, 10) 115 | # run_test('LunarLander-v2', lunarlander_layers, 2000, 10000, 10) 116 | -------------------------------------------------------------------------------- /example/direct_policy_search_for_gym/nn_model.py: -------------------------------------------------------------------------------- 1 | """ 2 | This file contains a class which defines a simple neural network model. 3 | 4 | Author: 5 | Yu-Ren Liu 6 | """ 7 | 8 | import numpy as np 9 | import math 10 | 11 | 12 | class ActivationFunction: 13 | """ 14 | This class defines activation functions used in the neural network. 15 | """ 16 | @staticmethod 17 | # scaled sigmoid function, squashing inputs to (-1, 1) 18 | def sigmoid(x): 19 | """ 20 | Scaled sigmoid function. 21 | 22 | :param x: input of the sigmoid function 23 | :return: 2 * sigmoid(x) - 1, element-wise in (-1, 1) 24 | """ 25 | for i in range(len(x)): 26 | if -700 <= x[i] <= 700: 27 | x[i] = (2 / (1 + math.exp(-x[i]))) - 1 # scaled sigmoid: 2 * sigmoid(x) - 1 28 | else: 29 | if x[i] < -700: 30 | x[i] = -1 31 | else: 32 | x[i] = 1 33 | return x 34 | 35 | 36 | class Layer(object): 37 | """ 38 | This class defines one layer of the neural network. 39 | """ 40 | 41 | def __init__(self, in_size, out_size, input_w=None, activation_function=None): 42 | """ 43 | Init function. 44 | 45 | :param in_size: input size of this layer 46 | :param out_size: output size of this layer 47 | :param input_w: initial weight vector 48 | :param activation_function: activation function of this layer 49 | """ 50 | self.__row = in_size 51 | self.__column = out_size 52 | self.__w = [] 53 | self.decode_w(input_w) 54 | self.__activation_function = activation_function 55 | self.__wx_plus_b = 0 56 | self.outputs = 0 57 | 58 | def cal_output(self, inputs): 59 | """ 60 | Forward prop of one layer. In this example, we ignore the bias. 61 | 62 | :param inputs: input of this layer 63 | :return: output of this layer 64 | """ 65 | 66 | self.__wx_plus_b = np.dot(inputs, self.__w) 67 | if self.__activation_function is None: 68 | self.outputs = self.__wx_plus_b 69 | else: 70 | self.outputs = self.__activation_function(self.__wx_plus_b) 71 | return self.outputs 72 | 73 | # 74 | def decode_w(self, w): 75 | """ 76 | The input w is a vector; this function decomposes it into a weight matrix. 77 | 78 | :param w: input weight vector 79 | :return: no return value (the weight matrix is stored in self.__w) 80 | """ 81 | if w is None: 82 | return 83 | interval = self.__column 84 | begin = 0 85 | output = [] 86 | step = int(len(w) / interval) 87 | for i in range(step): 88 | output.append(w[begin: begin + interval]) 89 | begin += interval 90 | self.__w = np.array(output) 91 | return 92 | 93 | def get_row(self): 94 | return self.__row 95 | 96 | def get_column(self): 97 | return self.__column 98 | 99 | 100 | class NNModel: 101 | """ 102 | This class defines a neural network. 103 | """ 104 | def __init__(self): 105 | self.__layers = [] 106 | self.__layer_size = [] 107 | self.__w_size = 0 108 | return 109 | 110 | def construct_nnmodel(self, layers): 111 | """ 112 | This function constructs a neural network from a list. 
113 | 114 | :param layers: 115 | layers is a list, each element is the number of neurons in the corresponding layer 116 | len(layers) is at least 2, including the input layer and the output layer 117 | :return: no return value 118 | """ 119 | self.__layer_size = layers 120 | for i in range(len(layers) - 1): 121 | self.add_layer(layers[i], layers[i + 1], activation_function=ActivationFunction.sigmoid) 122 | self.__w_size += layers[i] * layers[i + 1] 123 | 124 | def add_layer(self, in_size, out_size, input_w=None, activation_function=None): 125 | """ 126 | Add one layer to the neural network. 127 | 128 | :param in_size: input size of this layer 129 | :param out_size: output size of this layer 130 | :param input_w: initial weight vector 131 | :param activation_function: activation function of this layer 132 | :return: no return value 133 | """ 134 | new_layer = Layer(in_size, out_size, input_w, activation_function) 135 | self.__layers.append(new_layer) 136 | return 137 | 138 | # 139 | def decode_w(self, w): 140 | """ 141 | This function decomposes a big vector into several small vectors and assigns them to the weight matrices of the neural network. 142 | In the direct policy search example, the big vector is the concatenation of all flattened weight matrices, and the small 143 | vectors are the individual flattened weight matrices. 144 | 145 | :param w: concatenation of all flattened weight matrices 146 | :return: no return value 147 | """ 148 | begin = 0 149 | for i in range(len(self.__layers)): 150 | length = self.__layers[i].get_row() * self.__layers[i].get_column() 151 | w_temp = w[begin: begin + length] 152 | self.__layers[i].decode_w(w_temp) 153 | begin += length 154 | return 155 | 156 | # output y from input x 157 | def cal_output(self, x): 158 | """ 159 | Forward prop of the neural network. 160 | 161 | :param x: input of the neural network 162 | :return: output of the network 163 | """ 164 | out = x 165 | for i in range(len(self.__layers)): 166 | out = self.__layers[i].cal_output(out) 167 | return out 168 | 169 | def get_w_size(self): 170 | return self.__w_size 171 | -------------------------------------------------------------------------------- /example/simple_functions/simple_function.py: -------------------------------------------------------------------------------- 1 | """ 2 | Objective functions can be implemented in this file. 
3 | 4 | Author: 5 | Yu-Ren Liu 6 | """ 7 | 8 | from zoopt.dimension import Dimension 9 | import numpy as np 10 | 11 | 12 | class SetCover: 13 | """ 14 | set cover problem for discrete optimization 15 | this problem has some extra initialization tasks, thus we define this problem as a class 16 | """ 17 | 18 | def __init__(self): 19 | self.__weight = [0.8356, 0.5495, 0.4444, 0.7269, 0.9960, 0.6633, 0.5062, 0.8429, 0.1293, 0.7355, 20 | 0.7979, 0.2814, 0.7962, 0.1754, 0.0267, 0.9862, 0.1786, 0.5884, 0.6289, 0.3008] 21 | self.__subset = [] 22 | self.__subset.append([0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0]) 23 | self.__subset.append([0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0]) 24 | self.__subset.append([1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0]) 25 | self.__subset.append([0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0]) 26 | self.__subset.append([1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1]) 27 | self.__subset.append([0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0]) 28 | self.__subset.append([0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 0]) 29 | self.__subset.append([0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0]) 30 | self.__subset.append([0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0]) 31 | self.__subset.append([0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1]) 32 | self.__subset.append([0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0]) 33 | self.__subset.append([0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1]) 34 | self.__subset.append([1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1]) 35 | self.__subset.append([1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 1]) 36 | self.__subset.append([0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1]) 37 | self.__subset.append([1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0]) 38 | self.__subset.append([1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1]) 39 | self.__subset.append([0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1]) 40 | self.__subset.append([0, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0]) 41 | self.__subset.append([0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1]) 42 | 43 | def fx(self, solution): 44 | """ 45 | Objective function. 
46 | 47 | :param solution: a Solution object 48 | :return: the value of f(x) 49 | """ 50 | x = solution.get_x() 51 | allweight = 0 52 | countw = 0 53 | for i in range(len(self.__weight)): 54 | allweight += self.__weight[i] 55 | 56 | dims = [] 57 | for i in range(len(self.__subset[0])): 58 | dims.append(False) 59 | 60 | for i in range(len(self.__subset)): 61 | if x[i] == 1: 62 | countw += self.__weight[i] 63 | for j in range(len(self.__subset[i])): 64 | if self.__subset[i][j] == 1: 65 | dims[j] = True 66 | full = True 67 | for i in range(len(dims)): 68 | if dims[i] is False: 69 | full = False 70 | 71 | if full is False: 72 | countw += allweight 73 | 74 | return countw 75 | 76 | @property 77 | def dim(self): 78 | """ 79 | Dimension of the set cover problem. 80 | :return: Dimension instance 81 | """ 82 | dim_size = 20 83 | dim_regs = [[0, 1]] * dim_size 84 | dim_tys = [False] * dim_size 85 | return Dimension(dim_size, dim_regs, dim_tys) 86 | 87 | 88 | def sphere(solution):
 89 | """ 90 | Sphere function for continuous optimization 91 | """ 92 | x = solution.get_x() 93 | value = sum([(i-0.2)*(i-0.2) for i in x]) 94 | return value 95 | 96 | 97 | def sphere_mixed(solution): 98 | """ 99 | Sphere function for mixed optimization 100 | """ 101 | x = solution.get_x() 102 | value = sum([i*i for i in x]) 103 | return value 104 | 105 | 106 | def sphere_discrete_order(solution): 107 | """ 108 | Sphere function for discrete optimization with ordered values 109 | """ 110 | x = solution.get_x() 111 | value = sum([(i-2)*(i-2) for i in x]) 112 | return value 113 | 114 | 115 | def ackley(solution): 116 | """ 117 | Ackley function for continuous optimization 118 | """ 119 | x = solution.get_x() 120 | bias = 0.2 121 | ave_seq = sum([(i - bias) * (i - bias) for i in x]) / len(x) 122 | ave_cos = sum([np.cos(2.0*np.pi*(i-bias)) for i in x]) / len(x) 123 | value = -20 * np.exp(-0.2 * np.sqrt(ave_seq)) - np.exp(ave_cos) + 20.0 + np.e 124 | return value 125 | 126 | 127 | def ackley_noise_creator(mu, sigma): 128 | """ 129 | Ackley function under noise 130 | """ 131 | return lambda solution: ackley(solution) + np.random.normal(mu, sigma, 1) 132 | 133 | 134 | 135 | 136 | 137 | 138 | -------------------------------------------------------------------------------- /zoopt/algos/noise_handling/ponss.py: -------------------------------------------------------------------------------- 1 | """ 2 | This module contains the class PONSS, which is a variant of POSS to solve noisy subset selection problems. 3 | """ 4 | import time 5 | 6 | import numpy as np 7 | 8 | from zoopt.algos.opt_algorithms.paretoopt.paretoopt import ParetoOpt 9 | from zoopt.utils.tool_function import ToolFunction 10 | 11 | 12 | class PONSS(ParetoOpt): 13 | """ 14 | This class implements the PONSS algorithm, which is a variant of POSS to solve noisy subset selection problems. 15 | """ 16 | 17 | def __init__(self): 18 | ParetoOpt.__init__(self) 19 | pass 20 | 21 | def opt(self, objective, parameter): 22 | """ 23 | Pareto optimization under noise. 
24 | 25 | :param objective: an Objective object 26 | :param parameter: a Parameter object 27 | :return: the best solution of the optimization 28 | """ 29 | isolationFunc = parameter.get_isolationFunc() 30 | theta = parameter.get_ponss_theta() 31 | b = parameter.get_ponss_b() 32 | n = objective.get_dim().get_size() 33 | 34 | # initialize the population 35 | sol = objective.construct_solution(np.zeros(n)) 36 | objective.eval_constraint(sol) 37 | 38 | population = [sol] 39 | pop_size = 1 40 | # iteration count 41 | t = 0 42 | T = parameter.get_budget() 43 | while t < T: 44 | if t == 0: 45 | time_log1 = time.time() 46 | # randomly choose an individual from the population 47 | s = population[np.random.randint(0, pop_size)] 48 | # every bit will be flipped with probability 1/n 49 | offspring_x = self.mutation(s.get_x(), n) 50 | offspring = objective.construct_solution(offspring_x) 51 | objective.eval_constraint(offspring) 52 | offspring_fit = offspring.get_value() 53 | # now we need to update the population 54 | has_better = False 55 | 56 | for i in range(0, pop_size): 57 | if isolationFunc(offspring_x) != isolationFunc(population[i].get_x()): 58 | continue 59 | else: 60 | if self.theta_dominate(theta, population[i], offspring): 61 | has_better = True 62 | break 63 | # there is no better individual than the offspring 64 | if not has_better: 65 | P = [] 66 | Q = [] 67 | for j in range(0, pop_size): 68 | if self.theta_weak_dominate(theta, offspring, population[j]): 69 | continue 70 | else: 71 | P.append(population[j]) 72 | P.append(offspring) 73 | population = P 74 | for sol in population: 75 | if sol.get_value()[1] == offspring.get_value()[1]: 76 | Q.append(sol) 77 | if len(Q) == b + 1: 78 | for sol in Q: 79 | population.remove(sol) 80 | j = 0 81 | while j < b: 82 | sols = np.random.choice(Q, 2, replace=False) 83 | Q.remove(sols[0]) 84 | Q.remove(sols[1]) 85 | objective.eval_constraint(sols[0]) 86 | objective.eval_constraint(sols[1]) 87 | if sols[0].get_value()[0] < sols[1].get_value()[0]: 88 | population.append(sols[0]) 89 | Q.append(sols[1]) 90 | else: 91 | population.append(sols[1]) 92 | Q.append(sols[0]) 93 | j += 1 94 | t += 2 95 | t += 1 96 | pop_size = len(population) 97 | 98 | # display expected running time 99 | if t == 5: 100 | time_log2 = time.time() 101 | expected_time = T * (time_log2 - time_log1) / 5 102 | if expected_time > 5: 103 | m, s = divmod(expected_time, 60) 104 | h, m = divmod(m, 60) 105 | ToolFunction.log('expected remaining running time: %02d:%02d:%02d' % (h, m, s)) 106 | 107 | result_index = -1 108 | max_value = float('inf') 109 | for p in range(pop_size): 110 | fitness = population[p].get_value() 111 | if fitness[1] >= 0 and fitness[0] < max_value: 112 | max_value = fitness[0] 113 | result_index = p 114 | return population[result_index] 115 | 116 | @staticmethod 117 | def theta_dominate(theta, solution1, solution2): 118 | """ 119 | Judge whether solution1 theta-dominates solution2. 120 | :param theta: threshold 121 | :param solution1: a Solution object 122 | :param solution2: a Solution object 123 | :return: True or False 124 | """ 125 | fit1 = solution1.get_value() 126 | fit2 = solution2.get_value() 127 | if (fit1[0] + theta < fit2[0] and fit1[1] >= fit2[1]) or (fit1[0] + theta <= fit2[0] and fit1[1] > fit2[1]): 128 | return True 129 | else: 130 | return False 131 | 132 | @staticmethod 133 | def theta_weak_dominate(theta, solution1, solution2): 134 | """ 135 | Judge whether solution1 theta-weakly-dominates solution2. 
136 | :param theta: threshold 137 | :param solution1: a Solution object 138 | :param solution2: a Solution object 139 | :return: True or False 140 | """ 141 | fit1 = solution1.get_value() 142 | fit2 = solution2.get_value() 143 | if fit1[0] + theta <= fit2[0] and fit1[1] >= fit2[1]: 144 | return True 145 | else: 146 | return False 147 | -------------------------------------------------------------------------------- /docs/README.rst: -------------------------------------------------------------------------------- 1 | ZOOpt 2 | ^^^^^^ 3 | 4 | .. image:: https://www.travis-ci.org/eyounx/ZOOpt.svg?branch=master 5 | :target: https://www.travis-ci.org/eyounx/ZOOpt.svg 6 | :alt: Build Status 7 | .. image:: https://img.shields.io/github/license/mashape/apistatus.svg?maxAge=2592000 8 | :target: https://img.shields.io/github/license/mashape/apistatus.svg?maxAge=2592000 9 | :alt: License 10 | .. image:: https://readthedocs.org/projects/zoopt/badge/?version=latest 11 | :target: https://zoopt.readthedocs.io/en/latest/?badge=latest 12 | :alt: Documentation Status 13 | .. image:: https://codecov.io/gh/AlexLiuyuren/ZOOpt/branch/master/graph/badge.svg 14 | :target: https://codecov.io/gh/AlexLiuyuren/ZOOpt 15 | :alt: Code Coverage 16 | 17 | ZOOpt is a Python package for Zeroth-Order Optimization. 18 | 19 | Zeroth-order optimization (a.k.a. derivative-free optimization/black-box optimization) does not rely on the gradient of the objective function, but instead, learns from samples of the search space. It is suitable for optimizing functions that are nondifferentiable, have many local minima, or are even unknown and only testable. 20 | 21 | ZOOpt implements some state-of-the-art zeroth-order optimization methods and their parallel versions. Users only need to add several keywords to use parallel optimization on a single machine. For large-scale distributed optimization across multiple machines, please refer to `Distributed ZOOpt`_. 22 | 23 | .. _Distributed ZOOpt : https://github.com/eyounx/ZOOsrv 24 | 25 | **Citation**: Yu-Ren Liu, Yi-Qi Hu, Hong Qian, Yang Yu, Chao Qian. ZOOpt: Toolbox for Derivative-Free Optimization. SCIENCE CHINA Information Sciences, 2022. CoRR abs/1801.00329 26 | (Features in this article are from version 0.2) 27 | 28 | 29 | Installation 30 | ------------- 31 | 32 | ZOOpt is distributed on PyPI and can be installed with ``pip``: 33 | 34 | .. code:: console 35 | 36 | $ pip install zoopt 37 | 38 | Alternatively, to install ZOOpt from source code, download this project and run the following commands in order in your terminal/command line. 39 | 40 | .. code:: console 41 | 42 | $ python setup.py build 43 | $ python setup.py install 44 | 45 | A simple example 46 | ---------------- 47 | 48 | We define the Ackley function for minimization (note that this function works for an arbitrary number of dimensions, determined by the solution): 49 | 50 | .. code:: python 51 | 52 | import numpy as np 53 | def ackley(solution): 54 | x = solution.get_x() 55 | bias = 0.2 56 | value = -20 * np.exp(-0.2 * np.sqrt(sum([(i - bias) * (i - bias) for i in x]) / len(x))) - \ 57 | np.exp(sum([np.cos(2.0*np.pi*(i-bias)) for i in x]) / len(x)) + 20.0 + np.e 58 | return value 59 | 60 | The Ackley function is a classical function with many local minima. In two dimensions, it looks like this (image from Wikipedia): 61 | 62 | .. 
image:: https://upload.wikimedia.org/wikipedia/commons/thumb/9/98/Ackley%27s_function.pdf/page1-400px-Ackley%27s_function.pdf.jpg 63 | :width: 400px 64 | :alt: Ackley function 65 | 66 | Then, use ZOOpt to optimize a 100-dimensional Ackley function: 67 | 68 | .. code:: python 69 | 70 | from zoopt import Dimension, ValueType, Dimension2, Objective, Parameter, Opt, ExpOpt 71 | 72 | dim_size = 100 # dimension size 73 | dim = Dimension(dim_size, [[-1, 1]]*dim_size, [True]*dim_size) 74 | # dim = Dimension2([(ValueType.CONTINUOUS, [-1, 1], 1e-6)]*dim_size) 75 | obj = Objective(ackley, dim) 76 | # perform optimization 77 | solution = Opt.min(obj, Parameter(budget=100*dim_size)) 78 | # print the solution 79 | print(solution.get_x(), solution.get_value()) 80 | # parallel optimization for time-consuming tasks 81 | solution = Opt.min(obj, Parameter(budget=100*dim_size, parallel=True, server_num=3)) 82 | 83 | After a few seconds, the optimization is done. Then, we can visualize the optimization progress 84 | 85 | .. code:: python 86 | 87 | import matplotlib.pyplot as plt 88 | plt.plot(obj.get_history_bestsofar()) 89 | plt.savefig('figure.png') 90 | 91 | which looks like 92 | 93 | .. image:: https://github.com/eyounx/ZOOpt/blob/dev/img/quick_start.png?raw=true 94 | :width: 400px 95 | 96 | We can also use ``ExpOpt`` to repeat the optimization for performance analysis, which will calculate the mean and standard deviation of multiple optimization results while automatically visualizing the optimization progress. 97 | 98 | .. code:: python 99 | 100 | solution_list = ExpOpt.min(obj, Parameter(budget=100*dim_size), repeat=3, plot=True, plot_file="progress.png") 101 | for solution in solution_list: 102 | print(solution.get_x(), solution.get_value()) 103 | 104 | More examples are available in the **EXAMPLES** part. 105 | 106 | Releases 107 | -------- 108 | `release 0.4`_ 109 | 110 | - Add Dimension2 class, which provides another format to construct dimensions. Unlike the Dimension class, Dimension2 allows users to specify the optimization precision. 111 | - Add SRacosTune class, which is used to suggest/provide trials and process results for Tune (a platform based on RAY for distributed model selection and training). 112 | - Deprecate Python 2 support 113 | 114 | `release 0.3`_ 115 | 116 | - Add a parallel implementation of SRACOS, which accelerates the optimization by asynchronous parallelization. 117 | - Users can now set a customized stopping criterion for the optimization 118 | 119 | `release 0.2`_ 120 | 121 | - Add the noise handling strategies Re-sampling and Value Suppression (AAAI'18), and the subset selection method with noise handling PONSS (NIPS'17) 122 | - Add high-dimensionality handling method Sequential Random Embedding (IJCAI'16) 123 | - Rewrite Pareto optimization method. Bugs fixed. 124 | 125 | `release 0.1`_ 126 | 127 | - Include the general optimization method RACOS (AAAI'16) and Sequential RACOS (AAAI'17), and the subset selection method POSS (NIPS'15). 128 | - The algorithm selection is automatic. See examples in the example folder. - Default parameters work well on many problems, while parameters are fully controllable 129 | - Running speed optimized for Python 130 | 131 | .. _release 0.4: https://github.com/polixir/ZOOpt/releases/tag/v0.4 132 | .. _release 0.3: https://github.com/eyounx/ZOOpt/releases/tag/v0.3 133 | .. _release 0.2: https://github.com/eyounx/ZOOpt/releases/tag/v0.2.1 134 | .. 
_release 0.1: https://github.com/eyounx/ZOOpt/releases/tag/v0.1 135 | -------------------------------------------------------------------------------- /test/test_algos/test_noise_handling/test_ssracos.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | from zoopt import Dimension, Objective, Parameter, Opt, Dimension2, ValueType 3 | 4 | 5 | def ackley(solution): 6 | """ 7 | Ackley function for continuous optimization 8 | """ 9 | x = solution.get_x() 10 | bias = 0.2 11 | ave_seq = sum([(i - bias) * (i - bias) for i in x]) / len(x) 12 | ave_cos = sum([np.cos(2.0*np.pi*(i-bias)) for i in x]) / len(x) 13 | value = -20 * np.exp(-0.2 * np.sqrt(ave_seq)) - np.exp(ave_cos) + 20.0 + np.e 14 | return value 15 | 16 | 17 | def ackley_noise_creator(mu, sigma): 18 | """ 19 | Ackley function under noise 20 | """ 21 | return lambda solution: ackley(solution) + np.random.normal(mu, sigma, 1) 22 | 23 | 24 | class TestSSRacos(object): 25 | def test_performance(self): 26 | ackley_noise_func = ackley_noise_creator(0, 0.1) 27 | dim_size = 100 # dimensions 28 | dim_regs = [[-1, 1]] * dim_size # dimension range 29 | dim_tys = [True] * dim_size # dimension type : real 30 | dim = Dimension(dim_size, dim_regs, dim_tys) # form up the dimension object 31 | objective = Objective(ackley_noise_func, dim) # form up the objective function 32 | budget = 20000 # 20*dim_size # number of calls to the objective function 33 | # suppression=True means optimize with value suppression, which is a noise handling method 34 | # resampling=True means optimize with re-sampling, which is another commonly used noise handling method 35 | # non_update_allowed=500 and resample_times=100 mean that if the best solution doesn't change for 500 budgets, 36 | # the best solution will be re-evaluated 100 times 37 | # balance_rate is a parameter for the exponential weighted average of several evaluations of one sample. 38 | parameter = Parameter(budget=budget, noise_handling=True, suppression=True, non_update_allowed=200, 39 | resample_times=50, balance_rate=0.5) 40 | 41 | # parameter = Parameter(budget=budget, noise_handling=True, resampling=True, resample_times=10) 42 | parameter.set_positive_size(5) 43 | 44 | sol = Opt.min(objective, parameter) 45 | assert sol.get_value() < 4 46 | 47 | def test_resample(self): 48 | ackley_noise_func = ackley_noise_creator(0, 0.1) 49 | dim_size = 100 # dimensions 50 | dim_regs = [[-1, 1]] * dim_size # dimension range 51 | dim_tys = [True] * dim_size # dimension type : real 52 | dim = Dimension(dim_size, dim_regs, dim_tys) # form up the dimension object 53 | objective = Objective(ackley_noise_func, dim) # form up the objective function 54 | budget = 20000 # 20*dim_size # number of calls to the objective function 55 | # suppression=True means optimize with value suppression, which is a noise handling method 56 | # resampling=True means optimize with re-sampling, which is another commonly used noise handling method 57 | # non_update_allowed=500 and resample_times=100 mean that if the best solution doesn't change for 500 budgets, 58 | # the best solution will be re-evaluated 100 times 59 | # balance_rate is a parameter for the exponential weighted average of several evaluations of one sample. 
60 | parameter = Parameter(budget=budget, noise_handling=True, resampling=True, resample_times=10) 61 | 62 | # parameter = Parameter(budget=budget, noise_handling=True, resampling=True, resample_times=10) 63 | parameter.set_positive_size(5) 64 | 65 | sol = Opt.min(objective, parameter) 66 | assert sol.get_value() < 4 67 | 68 | 69 | class TestSSRacos2(object): 70 | def test_performance(self): 71 | ackley_noise_func = ackley_noise_creator(0, 0.1) 72 | dim_size = 100 # dimensions 73 | one_dim = (ValueType.CONTINUOUS, [-1, 1], 1e-6) 74 | dim_list = [one_dim] * dim_size 75 | dim = Dimension2(dim_list) # form up the dimension object 76 | objective = Objective(ackley_noise_func, dim) # form up the objective function 77 | budget = 20000 # 20*dim_size # number of calls to the objective function 78 | # suppression=True means optimize with value suppression, which is a noise handling method 79 | # resampling=True means optimize with re-sampling, which is another commonly used noise handling method 80 | # non_update_allowed=500 and resample_times=100 mean that if the best solution doesn't change for 500 budgets, 81 | # the best solution will be re-evaluated 100 times 82 | # balance_rate is a parameter for the exponential weighted average of several evaluations of one sample. 83 | parameter = Parameter(budget=budget, noise_handling=True, suppression=True, non_update_allowed=200, 84 | resample_times=50, balance_rate=0.5) 85 | 86 | # parameter = Parameter(budget=budget, noise_handling=True, resampling=True, resample_times=10) 87 | parameter.set_positive_size(5) 88 | 89 | sol = Opt.min(objective, parameter) 90 | assert sol.get_value() < 4 91 | 92 | def test_resample(self): 93 | ackley_noise_func = ackley_noise_creator(0, 0.1) 94 | dim_size = 100 # dimensions 95 | one_dim = (ValueType.CONTINUOUS, [-1, 1], 1e-6) 96 | dim_list = [one_dim] * dim_size 97 | dim = Dimension2(dim_list) # form up the dimension object 98 | objective = Objective(ackley_noise_func, dim) # form up the objective function 99 | budget = 20000 # 20*dim_size # number of calls to the objective function 100 | # suppression=True means optimize with value suppression, which is a noise handling method 101 | # resampling=True means optimize with re-sampling, which is another commonly used noise handling method 102 | # non_update_allowed=500 and resample_times=100 mean that if the best solution doesn't change for 500 budgets, 103 | # the best solution will be re-evaluated 100 times 104 | # balance_rate is a parameter for the exponential weighted average of several evaluations of one sample. 
105 | parameter = Parameter(budget=budget, noise_handling=True, resampling=True, resample_times=10) 106 | 107 | # parameter = Parameter(budget=budget, noise_handling=True, resampling=True, resample_times=10) 108 | parameter.set_positive_size(5) 109 | 110 | sol = Opt.min(objective, parameter) 111 | assert sol.get_value() < 4 -------------------------------------------------------------------------------- /zoopt/objective.py: -------------------------------------------------------------------------------- 1 | """ 2 | This module contains the class Objective 3 | 4 | Author: 5 | Yu-Ren Liu, Xiong-Hui Chen 6 | """ 7 | 8 | from zoopt.solution import Solution 9 | from zoopt.utils.zoo_global import pos_inf 10 | from zoopt.utils.tool_function import ToolFunction 11 | import numpy as np 12 | 13 | 14 | class Objective: 15 | """ 16 | This class represents the objective function and its associated variables 17 | """ 18 | def __init__(self, func=None, dim=None, constraint=None, resample_func=None): 19 | """ 20 | Initialization. 21 | 22 | :param func: objective function defined by the user 23 | :param dim: a Dimension object, which describes the search space. 24 | :param constraint: constraint function for POSS 25 | :param resample_func: resample function for SSRacos 26 | :param reducedim: whether to use sequential random embedding 27 | """ 28 | self.__func = func 29 | self.__dim = dim 30 | # the function for inheriting solution attachment 31 | self.__inherit = self.default_inherit 32 | self.__post_inherit = self.default_post_inherit 33 | # the constraint function 34 | self.__constraint = constraint 35 | # the history of optimization 36 | self.__history = [] 37 | 38 | self.__resample_times = 1 39 | self.__resample_func = self.resample_func if resample_func is None else resample_func 40 | self.__balance_rate = 1 41 | # for sequential random embedding 42 | self.__reducedim = False 43 | self.__A = None 44 | self.__last_x = None 45 | 46 | def parameter_set(self, parameter): 47 | """ 48 | Use a Parameter object to set attributes in Objective object. 49 | 50 | :param parameter: a Parameter object 51 | :return: no return 52 | """ 53 | if parameter.get_noise_handling() is True and parameter.get_suppression() is True: 54 | self.__balance_rate = parameter.get_balance_rate() 55 | if parameter.get_noise_handling() is True and parameter.get_resampling() is True: 56 | self.__resample_times = parameter.get_resample_times() 57 | if parameter.get_high_dim_handling() is True and parameter.get_reducedim() is True: 58 | self.__reducedim = True 59 | 60 | def construct_solution(self, x, parent=None): 61 | """ 62 | Construct a solution from x 63 | 64 | :param x: a list 65 | :param parent: the attached structure 66 | :return: solution 67 | """ 68 | new_solution = Solution() 69 | new_solution.set_x(x) 70 | new_solution.set_attach(self.__inherit(parent)) 71 | return new_solution 72 | 73 | def eval(self, solution): 74 | """ 75 | Use the objective function to evaluate a solution. 
76 | 77 | :param solution: 78 | :return: value of fx(evaluation result) will be returned 79 | """ 80 | res = [] 81 | for i in range(self.__resample_times): 82 | if self.__reducedim is False: 83 | val = self.__func(solution) 84 | else: 85 | x = solution.get_x() 86 | x_origin = x[0] * np.array(self.__last_x.get_x()) + np.dot(self.__A, np.array(x[1:])) 87 | val = self.__func(Solution(x=x_origin)) 88 | res.append(val) 89 | self.__history.append(val) 90 | value = sum(res) / float(len(res)) 91 | solution.set_value(value) 92 | solution.set_post_attach(self.__post_inherit()) 93 | return value 94 | 95 | def resample(self, solution, repeat_times): 96 | """ 97 | Resample function for value suppression. 98 | 99 | :param solution: a Solution object 100 | :param repeat_times: repeat times 101 | :return: repeat times 102 | """ 103 | if solution.get_resample_value() is None: 104 | solution.set_resample_value(self.__resample_func(solution, repeat_times)) 105 | solution.set_value((1 - self.__balance_rate) * solution.get_value() + 106 | self.__balance_rate * solution.get_resample_value()) 107 | solution.set_post_attach(self.__post_inherit()) 108 | return repeat_times 109 | else: 110 | return 0 111 | 112 | def resample_func(self, solution, iteration_num): 113 | result = [] 114 | for i in range(iteration_num): 115 | result.append(self.eval(solution)) 116 | return sum(result) * 1.0 / len(result) 117 | 118 | def eval_constraint(self, solution): 119 | solution.set_value( 120 | [self.eval(solution), self.__constraint(solution)]) 121 | solution.set_post_attach(self.__post_inherit()) 122 | 123 | def set_func(self, func): 124 | """ 125 | Set the objective function 126 | :param func: the objective function 127 | :return: no return value 128 | """ 129 | self.__func = func 130 | 131 | def get_func(self): 132 | return self.__func 133 | 134 | def set_dim(self, dim): 135 | self.__dim = dim 136 | 137 | def get_dim(self): 138 | return self.__dim 139 | 140 | def set_inherit_func(self, inherit_func): 141 | self.__inherit = inherit_func 142 | 143 | def set_post_inherit_func(self, inherit_func): 144 | self.__post_inherit = inherit_func 145 | 146 | def get_post_inherit_func(self): 147 | return self.__post_inherit 148 | 149 | def get_inherit_func(self): 150 | return self.__inherit 151 | 152 | def set_constraint(self, constraint): 153 | self.__constraint = constraint 154 | return 155 | 156 | def get_constraint(self): 157 | return self.__constraint 158 | 159 | def set_history(self, history): 160 | self.__history = history 161 | 162 | def get_history(self): 163 | return self.__history 164 | 165 | def get_history_bestsofar(self): 166 | """ 167 | Get the best-so-far history. 168 | """ 169 | history_bestsofar = [] 170 | bestsofar = pos_inf 171 | for i in range(len(self.__history)): 172 | if self.__history[i] < bestsofar: 173 | bestsofar = self.__history[i] 174 | history_bestsofar.append(bestsofar) 175 | return history_bestsofar 176 | 177 | def get_reducedim(self): 178 | return self.__reducedim 179 | 180 | def get_last_x(self): 181 | return self.__last_x 182 | 183 | def get_A(self): 184 | return self.__A 185 | 186 | def set_A(self, A): 187 | self.__A = A 188 | 189 | def set_last_x(self, x): 190 | self.__last_x = x 191 | 192 | def clean_history(self): 193 | """ 194 | clean the optimization history 195 | """ 196 | self.__history = [] 197 | 198 | @staticmethod 199 | def default_inherit(parent=None): 200 | """ 201 | Default inherited function. 
202 | 203 | :param parent: the parent structure 204 | :return: None 205 | """ 206 | return None 207 | 208 | @staticmethod 209 | def default_post_inherit(parent=None): 210 | """ 211 | Default post inherited function. 212 | 213 | :param parent: the parent structure 214 | :return: None 215 | """ 216 | return None 217 | -------------------------------------------------------------------------------- /docs/Examples/Optimize-a-Discrete-Function.rst: -------------------------------------------------------------------------------- 1 | -------------------------------------- 2 | Optimize a Discrete Function 3 | -------------------------------------- 4 | 5 | 6 | The `set cover `__ 7 | problem is a classical question in combinatorics, computer science and 8 | complexity theory. It is one of Karp's 21 NP-complete problems shown to 9 | be NP-complete in 1972. 10 | 11 | We define the SetCover function in fx.py for minimization. 12 | 13 | .. code:: python 14 | 15 | from zoopt.dimension import Dimension, ValueType, Dimension2 16 | 17 | class SetCover: 18 | """ 19 | set cover problem for discrete optimization 20 | this problem has some extra initialization tasks, thus we define this problem as a class 21 | """ 22 | __weight = None 23 | __subset = None 24 | 25 | def __init__(self): 26 | self.__weight = [0.8356, 0.5495, 0.4444, 0.7269, 0.9960, 0.6633, 0.5062, 0.8429, 0.1293, 0.7355, 27 | 0.7979, 0.2814, 0.7962, 0.1754, 0.0267, 0.9862, 0.1786, 0.5884, 0.6289, 0.3008] 28 | self.__subset = [] 29 | self.__subset.append([0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0]) 30 | self.__subset.append([0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0]) 31 | self.__subset.append([1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0]) 32 | self.__subset.append([0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0]) 33 | self.__subset.append([1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1]) 34 | self.__subset.append([0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0]) 35 | self.__subset.append([0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 0]) 36 | self.__subset.append([0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0]) 37 | self.__subset.append([0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0]) 38 | self.__subset.append([0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1]) 39 | self.__subset.append([0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0]) 40 | self.__subset.append([0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1]) 41 | self.__subset.append([1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1]) 42 | self.__subset.append([1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 1]) 43 | self.__subset.append([0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1]) 44 | self.__subset.append([1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0]) 45 | self.__subset.append([1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1]) 46 | self.__subset.append([0, 1, 1, 0, 1, 1, 1, 1, 0, 
1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1]) 47 | self.__subset.append([0, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0]) 48 | self.__subset.append([0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1]) 49 | 50 | def fx(self, solution): 51 | """ 52 | objective function 53 | """ 54 | x = solution.get_x() 55 | allweight = 0 56 | countw = 0 57 | for i in range(len(self.__weight)): 58 | allweight += self.__weight[i] 59 | 60 | dims = [] 61 | for i in range(len(self.__subset[0])): 62 | dims.append(False) 63 | 64 | for i in range(len(self.__subset)): 65 | if x[i] == 1: 66 | countw += self.__weight[i] 67 | for j in range(len(self.__subset[i])): 68 | if self.__subset[i][j] == 1: 69 | dims[j] = True 70 | full = True 71 | for i in range(len(dims)): 72 | if dims[i] is False: 73 | full = False 74 | 75 | if full is False: 76 | countw += allweight 77 | 78 | return countw 79 | 80 | @property 81 | def dim(self): 82 | """ 83 | Dimension of the set cover problem. 84 | :return: Dimension instance 85 | """ 86 | dim_size = 20 87 | dim_regs = [[0, 1]] * dim_size 88 | dim_tys = [False] * dim_size 89 | return Dimension(dim_size, dim_regs, dim_tys) 90 | 91 | @property 92 | def dim2(self): 93 | dim_size = 20 94 | one_dim = (ValueType.DISCRETE, [0, 1], False) 95 | dim_list = [one_dim] * dim_size 96 | return Dimension2(dim_list) 97 | 98 | 99 | Then, define the corresponding *objective* and *parameter*. 100 | 101 | .. code:: python 102 | 103 | problem = SetCover() 104 | dim = problem.dim # the dim is prepared by the class 105 | objective = Objective(problem.fx, dim) # form up the objective function 106 | 107 | .. code:: python 108 | 109 | # autoset=True by default. If autoset is False, you should define train_size, positive_size, negative_size on your own. 110 | parameter = Parameter(budget=budget, autoset=False) 111 | parameter.set_train_size(6) 112 | parameter.set_positive_size(1) 113 | parameter.set_negative_size(5) 114 | 115 | Finally, optimize this function. 116 | 117 | .. code:: python 118 | 119 | ExpOpt.min(objective, parameter, repeat=1, plot=True) 120 | 121 | The whole process is listed below. 122 | 123 | .. code:: python 124 | 125 | from fx import SetCover 126 | from zoopt import Dimension, ValueType, Dimension2, Objective, Parameter, ExpOpt 127 | 128 | 129 | def minimize_setcover_discrete(): 130 | """ 131 | Discrete optimization example of minimizing the set cover problem. 132 | """ 133 | problem = SetCover() 134 | dim = problem.dim # the dim is prepared by the class 135 | # dim = problem.dim2 136 | objective = Objective(problem.fx, dim) # form up the objective function 137 | 138 | budget = 100 * dim.get_size() # number of calls to the objective function 139 | # if autoset is False, you should define train_size, positive_size, negative_size on your own 140 | parameter = Parameter(budget=budget, autoset=False) 141 | parameter.set_train_size(6) 142 | parameter.set_positive_size(1) 143 | parameter.set_negative_size(5) 144 | 145 | ExpOpt.min(objective, parameter, repeat=1, plot=True) 146 | 147 | if __name__ == '__main__': 148 | minimize_setcover_discrete() 149 | 150 | After a few seconds, the optimization is done. The visualized optimization 151 | progress looks like 152 | 153 | .. image:: https://github.com/eyounx/ZOOpt/blob/dev/img/setcover_discrete_figure.png?raw=true 154 | :width: 500 155 | 156 | More concrete examples are available in the 157 | ``example/simple_functions/discrete_opt.py`` file. 
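The same problem can also be optimized with the ``Dimension2`` format defined by the ``dim2`` property above. The following is a minimal sketch (assuming the ``SetCover`` class is saved in ``fx.py`` as before, and that ``Dimension2`` exposes ``get_size()`` like ``Dimension``; the budget is illustrative):

.. code:: python

    from fx import SetCover
    from zoopt import Objective, Parameter, ExpOpt

    problem = SetCover()
    dim = problem.dim2  # Dimension2-based search space
    objective = Objective(problem.fx, dim)
    # autoset=True by default, so train_size/positive_size/negative_size are set automatically
    parameter = Parameter(budget=100 * dim.get_size())
    solution_list = ExpOpt.min(objective, parameter, repeat=1, plot=True)
    for solution in solution_list:
        print(solution.get_x(), solution.get_value())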
158 | -------------------------------------------------------------------------------- /docs/ZOOpt/Practical-Parameter-Settings-and-fine-tuning-tricks.rst: -------------------------------------------------------------------------------- 1 | Practical Parameter Settings and Fine-tuning Tricks 2 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 3 | 4 | .. contents:: Table of Contents 5 | 6 | Practical Parameter Settings 7 | ----------------------------- 8 | 9 | Enable parallel optimization 10 | >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 11 | 12 | .. code:: python 13 | 14 | parameter = Parameter(..., parallel=True, server_num=3, ...) 15 | 16 | Using parallel optimization in ZOOpt is quite simple: just add two keywords to the definition of the parameter. In this example, ZOOpt will start three daemon processes for evaluating solutions. Make sure that server_num is less than the number of available cores of your computer; otherwise, the overhead of competing for computing resources will be high. 17 | 18 | Set seed 19 | >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 20 | 21 | .. code:: python 22 | 23 | parameter = Parameter(..., seed=999, ...) 24 | 25 | Fixing the seed makes the optimization result reproducible. Note that if parallel optimization is enabled, fixing the seed cannot 26 | reproduce the result, because it cannot assure the same sequence of evaluated solutions. 27 | 28 | Specify some initial samples 29 | >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 30 | .. code:: python 31 | 32 | dim_size = 10 33 | sol1 = Solution(x = [0] * dim_size) 34 | sol2 = Solution(x = [1] * dim_size) 35 | parameter = Parameter(..., init_samples=[sol1, sol2], ...) 36 | 37 | In some cases, users already know several good solutions of a problem. ZOOpt can use them as initial samples, which helps accelerate the optimization. Another more common situation is that users want to resume the optimization when the budget runs out. To achieve this, users can use the last result that ZOOpt outputs as an initial sample in the next optimization run. The number of initial samples should not exceed the population size (train_size). 38 | 39 | Print intermediate results 40 | >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 41 | 42 | .. code:: python 43 | 44 | parameter = Parameter(..., intermediate_result=True, intermediate_freq=100, ...) 45 | 46 | ``intermediate_result`` and ``intermediate_freq`` are set for showing 47 | intermediate results during the optimization progress. The procedure 48 | will show the best solution every ``intermediate_freq`` calls to the 49 | objective function if ``intermediate_result=True``. 50 | ``intermediate_freq`` is set to 100 by default. 51 | 52 | In this example, the optimization procedure will print the best solution 53 | every 100 budgets. 54 | 55 | Set population size manually 56 | >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 57 | 58 | .. code:: python 59 | 60 | parameter = Parameter(budget=20000) 61 | parameter.set_train_size(22) 62 | parameter.set_positive_size(2) 63 | 64 | In ZOOpt, the population size is represented by ``train_size``. 65 | ``train_size`` represents the size of the binary classification data 66 | set, which is a component of the optimization algorithms ``RACOS``, ``SRACOS`` and ``SSRACOS``. 67 | ``positive_size`` represents the size of the positive data among all 68 | data. ``negative_size`` is set to ``train_size`` - ``positive_size`` 69 | automatically. There is no need to set it manually. 70 | 71 | In most cases, the default settings work well and there is no need to set 72 | them manually. 
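The settings above can be freely combined in a single ``Parameter`` object. A sketch combining several of them (the values are illustrative, and ``sol1`` and ``sol2`` are the initial samples constructed earlier):

.. code:: python

    parameter = Parameter(budget=20000,
                          parallel=True, server_num=3,  # parallel evaluation
                          init_samples=[sol1, sol2],  # warm start from known good solutions
                          intermediate_result=True, intermediate_freq=100)
    # optional: set the population size manually
    parameter.set_train_size(22)
    parameter.set_positive_size(2)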
73 | 74 | Set the time limit 75 | >>>>>>>>>>>>>>>>>>>>>> 76 | 77 | .. code:: python 78 | 79 | parameter = Parameter(..., time_budget=3600, ...) 80 | 81 | In this example, the time budget is 3600 seconds: if the 82 | running time exceeds 3600 seconds, the optimization procedure will stop early 83 | and return the best solution found so far, regardless of the budget. 84 | 85 | Customize a stopping criterion 86 | >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 87 | 88 | .. code:: python 89 | 90 | class StoppingCriterion: 91 | def __init__(self): 92 | self.__best_result = 0 93 | self.__count = 0 94 | self.__total_count = 0 95 | self.__count_limit = 100 96 | 97 | def check(self, optcontent): 98 | """ 99 | :param optcontent: an instance of the class RacosCommon. Several functions can be invoked to get the contexts of the optimization, which are listed as follows, 100 | optcontent.get_best_solution(): get the current optimal solution 101 | optcontent.get_data(): get all the solutions contained in the current solution pool 102 | optcontent.get_positive_data(): get positive solutions contained in the current solution pool 103 | optcontent.get_negative_data(): get negative solutions contained in the current solution pool 104 | 105 | :return: bool object. 106 | 107 | """ 108 | self.__total_count += 1 109 | content_best_value = optcontent.get_best_solution().get_value() 110 | if content_best_value == self.__best_result: 111 | self.__count += 1 112 | else: 113 | self.__best_result = content_best_value 114 | self.__count = 0 115 | if self.__count >= self.__count_limit: 116 | print("stopping criterion holds, total_count: %d" % self.__total_count) 117 | return True 118 | else: 119 | return False 120 | 121 | parameter = Parameter(budget=20000, stopping_criterion=StoppingCriterion()) 122 | 123 | StoppingCriterion customizes a stopping criterion for the optimization, which is used as an initialization parameter of the class Parameter and should implement a member function ``check(self, optcontent)``. The ``check`` function is invoked at each iteration of the optimization. The optimization stops as soon as this function returns True; otherwise, it continues unaffected. In this example, the optimization will be stopped if the best result remains unchanged for 100 iterations. 124 | 125 | Fine-tuning Tricks 126 | ----------------------------- 127 | As shown in the previous introduction, the number of adjustable parameters in ZOOpt may look scary. However, remember that there is no need to set each parameter manually. ZOOpt's default parameters work well in most cases. In this part, we introduce some advisable fine-tuning tricks to configure the best zeroth-order optimization solver for your task. 128 | 129 | Adjust the uncertain_bits 130 | >>>>>>>>>>>>>>>>>>>>>>>>>> 131 | ``uncertain_bits`` determines how many bits can differ from the present best solution when a new solution is sampled from the learned search space. By default, when the dimension size is less than 50, uncertain_bits equals 1. When the dimension size is between 50 and 1000, 132 | uncertain_bits equals 3; otherwise, uncertain_bits equals 5. We suggest using smaller uncertain_bits at first, especially when the budget is abundant. For example, uncertain_bits can be set to 1 even if the dimension size is larger than 50. 133 | 134 | .. code:: python 135 | 136 | par = Parameter(..., uncertain_bit=1, ...)
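As a quick reference, the default rule described above can be written out as follows (a sketch of the stated behavior, not the library's internal code):

.. code:: python

    def default_uncertain_bits(dim_size):
        # defaults described above: 1 bit below 50 dimensions,
        # 3 bits from 50 up to 1000, and 5 bits beyond that
        if dim_size < 50:
            return 1
        elif dim_size <= 1000:
            return 3
        else:
            return 5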
137 | 138 | Adjust the exploration rate 139 | >>>>>>>>>>>>>>>>>>>>>>>>>>>> 140 | The exploration rate (the probability of sampling from the whole search space) is an important factor in the optimization. By default, it is set to only 1%. This setting helps ZOOpt achieve good results on functions that are locally highly non-convex but have a global trend. For many real-world optimization tasks, however, there is no obvious global trend. We suggest increasing the exploration rate in such cases, e.g., to 10% or 20%. 141 | 142 | .. code:: python 143 | 144 | par = Parameter(..., exploration_rate = 0.2, ...) 145 | -------------------------------------------------------------------------------- /example/direct_policy_search_for_gym/gym_task.py: -------------------------------------------------------------------------------- 1 | """ 2 | The class GymTask sets a gym runtime environment. 3 | 4 | Author: 5 | Yu-Ren Liu 6 | """ 7 | 8 | import gym 9 | from gym.spaces.discrete import Discrete 10 | from nn_model import NNModel 11 | 12 | 13 | class GymTask: 14 | """ 15 | This class sets a gym runtime environment. 16 | """ 17 | 18 | def __init__(self, name): 19 | """ 20 | Init function. 21 | 22 | :param name: gym task name 23 | """ 24 | self.reset_task() 25 | self.__envir = gym.make(name) # gym environment 26 | self.__envir_name = name # environment name 27 | self.__obser_size = self.__envir.observation_space.shape[0] # the number of parameters in observation 28 | self.__obser_up_bound = [] # the upper bound of parameters in observation 29 | self.__obser_low_bound = [] # the lower bound of parameters in observation 30 | self.total_step = 0 # total number of environment steps taken 31 | self.__action_size = None # the number of parameters in action 32 | self.__action_sca = [] # the number of choices of each discrete action dimension, specified by gym 33 | self.__action_type = [] # the type of action, False means discrete 34 | self.__action_low_bound = [] # action lower bound 35 | self.__action_up_bound = [] # action upper bound 36 | # policy model, it's a neural network in this example 37 | self.__policy_model = None 38 | self.__max_step = 0 # maximum number of steps per trajectory 39 | self.__stop_step = 0 # the stop step of the most recent trajectory 40 | 41 | for i in range(self.__obser_size): 42 | self.__obser_low_bound.append( 43 | self.__envir.observation_space.low[i]) 44 | self.__obser_up_bound.append(self.__envir.observation_space.high[i]) 45 | 46 | # if the action space is discrete 47 | if isinstance(self.__envir.action_space, Discrete): 48 | self.__action_size = 1 49 | self.__action_sca = [] 50 | self.__action_type = [] 51 | self.__action_sca.append(self.__envir.action_space.n) 52 | self.__action_type.append(False) 53 | # if the action space is a Box 54 | else: 55 | self.__action_size = self.__envir.action_space.shape[0] 56 | self.__action_type = [] 57 | self.__action_low_bound = [] 58 | self.__action_up_bound = [] 59 | for i in range(self.__action_size): 60 | self.__action_type.append(True) 61 | self.__action_low_bound.append( 62 | self.__envir.action_space.low[i]) 63 | self.__action_up_bound.append( 64 | self.__envir.action_space.high[i]) 65 | 66 | def reset_task(self): 67 | """ 68 | Reset gym runtime environment.
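Called at the start of __init__, so all fields begin from a clean state before the environment is created.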
69 | 70 | :return: no return value 71 | """ 72 | self.__envir = None 73 | self.__envir_name = None 74 | self.__obser_size = None 75 | self.__obser_low_bound = [] 76 | self.__obser_up_bound = [] 77 | self.__action_type = [] 78 | self.__policy_model = None 79 | self.__max_step = 0 80 | 81 | 82 | def transform_action(self, temp_act): 83 | """ 84 | Transform action from neural network output into a true action. 85 | 86 | :param temp_act: output of the neural network 87 | :return: action 88 | """ 89 | action = [] 90 | for i in range(self.__action_size): 91 | # if the action is continuous, rescale the output in [-1, 1] to [low, high] 92 | if self.__action_type[i]: 93 | tmp_act = (temp_act[i]+1)*((self.__action_up_bound[i] - 94 | self.__action_low_bound[i])/2.0)+self.__action_low_bound[i] 95 | action.append(tmp_act) 96 | else: # discrete: map the output in [-1, 1] to an action index 97 | sca = 2.0 / self.__action_sca[0] 98 | start = -1.0 99 | now_value = start + sca 100 | true_act = 0 101 | while now_value <= 1.0: 102 | if temp_act[i] <= now_value: 103 | break 104 | else: 105 | now_value += sca 106 | true_act += 1 107 | if true_act >= self.__action_sca[i]: 108 | true_act = self.__action_sca[i] - 1 109 | action.append(true_act) 110 | if self.__action_size == 1 and self.__action_type[0] is False: 111 | action = action[0] 112 | return action 113 | 114 | def new_nnmodel(self, layers): 115 | """ 116 | Generate a new neural network model 117 | 118 | :param layers: layer information 119 | :return: no return 120 | """ 121 | # initialize NN model as policy 122 | self.__policy_model = NNModel() 123 | self.__policy_model.construct_nnmodel(layers) 124 | 125 | return 126 | 127 | def nn_policy_sample(self, observation): 128 | """ 129 | Generate an action from an observation using the neural network policy 130 | 131 | :param observation: observation is the output of the gym task environment 132 | :return: action to choose 133 | """ 134 | output = self.__policy_model.cal_output(observation) 135 | action = self.transform_action(output) 136 | return action 137 | 138 | def sum_reward(self, solution): 139 | """ 140 | Objective function of RACOS: the summation of rewards in a trajectory 141 | 142 | :param solution: a data structure containing x and fx 143 | :return: value of fx 144 | """ 145 | x = solution.get_x() 146 | sum_re = 0 147 | # reset stop step 148 | self.__stop_step = self.__max_step 149 | # reset nn model weight 150 | self.__policy_model.decode_w(x) 151 | # reset environment 152 | observation = self.__envir.reset() 153 | for i in range(self.__max_step): 154 | action = self.nn_policy_sample(observation) 155 | observation, reward, done, info = self.__envir.step(action) 156 | sum_re += reward 157 | if done: 158 | self.__stop_step = i 159 | break 160 | self.total_step += 1 161 | value = sum_re 162 | name = self.__envir_name 163 | # negate the total reward of reward-maximization tasks so they become minimization problems 164 | if name == 'CartPole-v0' or name == 'CartPole-v1' or name == 'MountainCar-v0' or name == 'Acrobot-v1' or name == 'HalfCheetah-v1' \ 165 | or name == 'Humanoid-v1' or name == 'Swimmer-v1' or name == 'Ant-v1' or name == 'Hopper-v1' \ 166 | or name == 'LunarLander-v2' or name == 'BipedalWalker-v2': 167 | value = -value 168 | return value 169 | 170 | def get_environment(self): 171 | return self.__envir 172 | 173 | def get_environment_name(self): 174 | return self.__envir_name 175 | 176 | def get_observation_size(self): 177 | return self.__obser_size 178 | 179 | def get_observation_low_bound(self, index): 180 | return self.__obser_low_bound[index] 181 | 182 | def get_observation_upbound(self, index): 183 | return self.__obser_up_bound[index] 184 | 185 | def get_action_size(self): 186 |
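# the number of parameters in an action (always 1 for Discrete action spaces)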
return self.__action_size 187 | 188 | def get_action_type(self, index): 189 | return self.__action_type[index] 190 | 191 | def get_stop_step(self): 192 | return self.__stop_step 193 | 194 | def get_w_size(self): 195 | return self.__policy_model.get_w_size() 196 | 197 | def set_max_step(self, ms): 198 | self.__max_step = ms 199 | return 200 | 201 | -------------------------------------------------------------------------------- /zoopt/solution.py: -------------------------------------------------------------------------------- 1 | """ 2 | This module contains the class Solution. 3 | 4 | Author: 5 | Yu-Ren Liu, Xiong-Hui Chen 6 | 7 | Updated by: 8 | Ze-Wen Li 9 | """ 10 | import copy 11 | 12 | import numpy as np 13 | from zoopt.utils.tool_function import ToolFunction 14 | from zoopt.utils.zoo_global import pos_inf, neg_inf, nan, gl 15 | 16 | 17 | class Solution: 18 | """ 19 | A solution encapsulates a solution vector with attached properties, including dimension information, objective value, 20 | and attachments 21 | """ 22 | 23 | def __init__(self, x=[], value=nan, resample_value=None, attach=None, post_attach=None, is_in_possible_solution=False, no=None): 24 | """ 25 | Initialization. 26 | 27 | :param x: a list 28 | :param value: objective value 29 | :param resample_value: re-evaluated value. 30 | This is a meaningful parameter only when using the SSRACOS algorithm. 31 | In the SSRACOS algorithm, the noise reduction result is recorded in this parameter. 32 | :param attach: attached structure. 33 | self.set_attach() will be called after a solution is constructed. 34 | You can define the behavior by rewriting the Objective.__inherit function (which does nothing by default). 35 | See more details in Objective.set_inherit_func() 36 | :param post_attach: the attachment to the solution. 37 | self.set_post_attach() will be called after a solution is evaluated. 38 | You can define the behavior by rewriting the Objective.__post_inherit function (which does nothing by default). 39 | See more details in Objective.set_post_inherit_func() 40 | :param is_in_possible_solution: 41 | This is a meaningful parameter only when using the SSRACOS algorithm. 42 | In the SSRACOS algorithm, a solution is added to the "possible solution list" after being re-sampled. 43 | This parameter marks whether a solution has been added to the "possible solution list". 44 | :param no: 45 | Serial number, used by ASRacos. 46 | """ 47 | self.__x = x 48 | self.__value = value 49 | self.__resample_value = resample_value 50 | self.__attach = attach 51 | self.__post_attach = post_attach 52 | self.__is_in_possible_solution = is_in_possible_solution 53 | self.__no = no 54 | return 55 | 56 | @property 57 | def is_in_possible_solution(self): 58 | return self.__is_in_possible_solution 59 | 60 | @is_in_possible_solution.setter 61 | def is_in_possible_solution(self, value): 62 | self.__is_in_possible_solution = value 63 | 64 | def deep_copy(self): 65 | """ 66 | Deep copy this solution. The attachment, resample value, and post-attachment are deep-copied when present.
67 | 68 | :return: a new solution 69 | """ 70 | x = [] 71 | for x_i in self.__x: 72 | x.append(x_i) 73 | value = self.__value 74 | attach = None if self.__attach is None else copy.deepcopy(self.__attach) 75 | resample_value = None if self.__resample_value is None else copy.deepcopy(self.__resample_value) 76 | post_attach = None if self.__post_attach is None else copy.deepcopy(self.__post_attach) 77 | return Solution(x, value, resample_value, attach, post_attach, self.is_in_possible_solution) 78 | 79 | def is_equal(self, sol): 80 | """ 81 | Check if two solutions are equal. 82 | 83 | :param sol: another solution 84 | :return: True or False 85 | """ 86 | sol_x = sol.get_x() 87 | sol_value = sol.get_value() 88 | if sol_value is not nan and self.__value is not nan: 89 | if abs(self.__value - sol_value) >= gl.precision: 90 | return False 91 | if len(self.__x) != len(sol_x): 92 | return False 93 | for i in range(len(self.__x)): 94 | if not type(self.__x[i]) == type(sol_x[i]): 95 | return False 96 | else: 97 | if gl.float_precisions: # for Dimension2 class 98 | if gl.float_precisions[i] is not None: # CONTINUOUS 99 | if abs(self.__x[i] - sol_x[i]) >= pow(10, -1 * gl.float_precisions[i]): 100 | return False 101 | else: # DISCRETE or GRID 102 | if not self.__x[i] == sol_x[i]: 103 | return False 104 | else: # for Dimension class 105 | if abs(self.__x[i] - sol_x[i]) >= gl.precision: 106 | return False 107 | return True 108 | 109 | def exist_equal(self, sol_set): 110 | """ 111 | Check if there exists another solution in sol_set that equals this one. 112 | 113 | :param sol_set: a solution set 114 | :return: True or False 115 | """ 116 | for sol in sol_set: 117 | if self.is_equal(sol): 118 | return True 119 | return False 120 | 121 | def set_x_index(self, index, x): 122 | self.__x[index] = x 123 | return 124 | 125 | def set_x(self, x): 126 | self.__x = x 127 | return 128 | 129 | def set_value(self, value): 130 | self.__value = value 131 | return 132 | 133 | def set_attach(self, attach): 134 | self.__attach = attach 135 | return 136 | 137 | def set_post_attach(self, attach): 138 | self.__post_attach = attach 139 | return 140 | 141 | def set_resample_value(self, resample_value): 142 | self.__resample_value = resample_value 143 | 144 | def set_no(self, no): 145 | self.__no = no 146 | 147 | def get_resample_value(self): 148 | return self.__resample_value 149 | 150 | def get_post_attach(self): 151 | return self.__post_attach 152 | 153 | def get_x_index(self, index): 154 | return self.__x[index] 155 | 156 | def get_x(self): 157 | return self.__x 158 | 159 | def get_value(self): 160 | return self.__value 161 | 162 | def get_attach(self): 163 | return self.__attach 164 | 165 | def get_no(self): 166 | return self.__no 167 | 168 | def print_solution(self): 169 | ToolFunction.log('x: ' + repr(self.__x)) 170 | ToolFunction.log('value: ' + repr(self.__value)) 171 | 172 | @staticmethod 173 | def deep_copy_set(sol_set): 174 | """ 175 | Deep copy a solution set. 176 | 177 | :param sol_set: a solution set 178 | :return: the copied solution set 179 | """ 180 | result_set = [] 181 | for sol in sol_set: 182 | result_set.append(sol.deep_copy()) 183 | return result_set 184 | 185 | @staticmethod 186 | def print_solution_set(sol_set): 187 | """ 188 | Print the value of each solution in a solution set.
189 | 190 | :param sol_set: solution set 191 | :return: no return value 192 | """ 193 | for sol in sol_set: 194 | ToolFunction.log('value: %f' % (sol.get_value())) 195 | return 196 | 197 | @staticmethod 198 | def find_maximum(sol_set): 199 | """ 200 | Find the solution with the maximum value in the solution set. 201 | 202 | :param sol_set: solution set 203 | :return: solution, index 204 | """ 205 | maxi = neg_inf 206 | max_index = 0 207 | for i in range(len(sol_set)): 208 | if sol_set[i].get_value() > maxi: 209 | maxi = sol_set[i].get_value() 210 | max_index = i 211 | return sol_set[max_index], max_index 212 | 213 | @staticmethod 214 | def find_minimum(sol_set): 215 | """ 216 | Find the solution with the minimum value in the solution set. 217 | 218 | :param sol_set: solution set 219 | :return: solution, index 220 | """ 221 | mini = pos_inf 222 | mini_index = 0 223 | for i in range(len(sol_set)): 224 | if sol_set[i].get_value() < mini: 225 | mini = sol_set[i].get_value() 226 | mini_index = i 227 | return sol_set[mini_index], mini_index 228 | -------------------------------------------------------------------------------- /docs/ZOOpt/A-Brief-Introduction-to-ZOOpt.rst: -------------------------------------------------------------------------------- 1 | -------------------------------- 2 | A Brief Introduction to ZOOpt 3 | -------------------------------- 4 | 5 | .. contents:: Table of Contents 6 | 7 | ZOOpt Components 8 | ---------------------------------------- 9 | 10 | In ZOOpt, an optimization problem is abstracted into several components: 11 | ``Objective``, ``Dimension`` (or ``Dimension2``), ``Parameter``, and ``Solution``, each of which is a 12 | Python class. 13 | 14 | An ``Objective`` object is initialized with a function and a 15 | ``Dimension`` (or ``Dimension2``) object, where the ``Dimension`` (or ``Dimension2``) object 16 | defines the dimension size and boundaries of the search space. A 17 | ``Parameter`` object specifies algorithm parameters. ZOOpt is able to 18 | automatically choose parameters for a range of problems, leaving only 19 | one parameter, the optimization budget (i.e. the number of solution 20 | evaluations), to be manually determined according to the time available to 21 | the user. The ``Opt.min`` (or ``ExpOpt.min``) function makes the optimization happen, and 22 | returns a ``Solution`` object which contains the final solution and the 23 | function value (a solution list for ``ExpOpt``). Moreover, after the optimization, the ``Objective`` 24 | object contains the history of the optimization for observation. 25 | 26 | Use ZOOpt step by step 27 | ------------------------------ 28 | 29 | Using ZOOpt for your optimization tasks involves four steps: 30 | 31 | - Define an objective function ``f`` 32 | - Define a ``Dimension`` (or ``Dimension2``) object ``dim``, then use ``f`` and ``dim`` to 33 | construct an ``Objective`` object 34 | - Define a ``Parameter`` object 35 | - Use ``Opt.min`` or ``ExpOpt.min`` to optimize 36 | 37 | Define an objective function 38 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 39 | 40 | An objective function should be defined to satisfy the interface 41 | ``def func(solution):``, where ``solution`` is a ``Solution`` object 42 | which encapsulates x and f(x). In general, users can customize their 43 | objective function by 44 | 45 | .. code:: python
46 | 47 | def func(solution): 48 | x = solution.get_x() # fixed pattern 49 | value = f(x) # function f takes a vector x as input 50 | return value 51 | 52 | In the Sphere function example, the objective function looks like 53 | 54 | .. code:: python 55 | 56 | def sphere(solution): 57 | x = solution.get_x() 58 | value = sum([(i-0.2)*(i-0.2) for i in x]) # sphere center is (0.2, 0.2) 59 | return value 60 | 61 | The objective function can also be a member function of a class, so that 62 | it can be much more complex than a single function. In this case, the 63 | function should be defined to satisfy the interface ``def func(self, solution):``. 64 | 65 | Define a ``Dimension`` (or ``Dimension2``) object ``dim``, then use ``f`` and ``dim`` to construct an ``Objective`` object 66 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 67 | 68 | A ``Dimension`` (or ``Dimension2``) object ``dim`` and an objective function ``f`` are 69 | necessary components to construct an ``Objective`` object, which is one 70 | of the two requisite parameters to call the ``Opt.min`` function. 71 | 72 | The ``Dimension`` class has an initialization function that looks like 73 | 74 | .. code:: python 75 | 76 | def __init__(self, size=0, regs=[], tys=[], order=[]): 77 | 78 | ``size`` is an integer indicating the dimension size. ``regs`` is a list 79 | containing the search space of each dimension (a search space is a 80 | two-element list showing the range of each dimension, e.g., [-1, 1] for 81 | the range from -1 to 1, including -1 and 1). ``tys`` is a list of boolean values: ``True`` 82 | means the dimension is continuous and ``False`` means discrete. 83 | ``order`` is a list of boolean values: ``True`` means the dimension has 84 | an order relation and ``False`` means it does not. The boolean values in 85 | ``order`` take effect only for discrete dimensions. By default, 86 | ``order=[False] * size``. Order is quite important for discrete optimization, 87 | since ZOOpt can make full use of the order relation when it is set to True. 88 | For example, we can specify ``order=[True] * size`` in the optimization of the Sphere function with discrete search space [-10, 10]. 89 | 90 | In the optimization of the Sphere function with continuous search space, ``dim`` looks like 91 | 92 | .. code:: python 93 | 94 | dim_size = 100 95 | dim = Dimension(dim_size, [[-1, 1]] * dim_size, [True] * dim_size) 96 | 97 | It means that the dimension size is 100, the range of each dimension is 98 | [-1, 1] and is continuous. 99 | 100 | Besides, if you prefer to put the information of each dimension together, 101 | ``Dimension2`` is a good choice. Its initialization function looks like: 102 | 103 | .. code:: python 104 | 105 | def __init__(self, dim_list=[]): 106 | 107 | Here ``dim_list`` is a list of tuples. 108 | Each tuple has three arguments. For continuous dimensions, the arguments are 109 | ``(type, range, float_precision)``. ``type`` indicates the continuity of the dimension, 110 | which should be set to ``ValueType.CONTINUOUS``. ``range`` is a list that indicates the search space. 111 | ``float_precision`` indicates the precision of the dimension, e.g., if ``float_precision`` 112 | is set to ``1e-6``, ``0.001``, or ``10``, the answer will be accurate to six decimal places, 113 | three decimal places, or tens places. For discrete dimensions, the arguments are 114 | ``(type, range, has_partial_order)``. 
``type`` indicates the continuity of the dimension, 115 | which should be set to ``ValueType.DISCRETE``. ``range`` is a list that indicates the search space. 116 | ``has_partial_order`` indicates whether this dimension is ordered. ``True`` is for an ordered 117 | relation and ``False`` means it is not. 118 | 119 | In the optimization of the Sphere function with continuous search space, ``dim`` looks like 120 | 121 | .. code:: python 122 | 123 | dim_size = 100 124 | one_dim = (ValueType.CONTINUOUS, [-1, 1], 1e-6) 125 | dim_list = [one_dim] * dim_size 126 | dim = Dimension2(dim_list) 127 | 128 | It means that the dimension size is 100, each dimension is continuous, ranging from -1 to 1, 129 | with six-decimal-place precision. 130 | 131 | Then use ``dim`` and ``f`` to construct an Objective object. 132 | 133 | .. code:: python 134 | 135 | objective = Objective(sphere, dim) 136 | 137 | Define a ``Parameter`` object 138 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 139 | 140 | The class ``Parameter`` defines all parameters used in the optimization 141 | algorithms. Commonly, ``budget`` is the only parameter that needs to be 142 | manually determined by users, while all parameters remain controllable. 143 | Other parameters are discussed in **Practical Parameter Settings 144 | and Fine-tuning Tricks**. 145 | 146 | .. code:: python 147 | 148 | par = Parameter(budget=10000) 149 | 150 | Use ``Opt.min`` or ``ExpOpt.min`` to optimize 151 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 152 | 153 | ``Opt.min`` and ``ExpOpt.min`` are two functions for optimization. 154 | 155 | ``Opt.min`` takes an ``Objective`` object, e.g. ``objective``, and a 156 | ``Parameter`` object, e.g. ``par``, as input. It will return a 157 | ``Solution`` object, e.g. ``sol``, which represents the optimal result of 158 | the optimization problem. ``sol.get_x()`` and ``sol.get_value()`` will 159 | return ``sol``'s x and f(x). 160 | 161 | .. code:: python 162 | 163 | sol = Opt.min(objective, par) 164 | print(sol.get_x(), sol.get_value()) 165 | 166 | ``ExpOpt.min`` is an API designed for repeated experiments; it returns 167 | a list of ``Solution`` objects containing ``repeat`` solutions. 168 | 169 | .. code:: python 170 | 171 | class ExpOpt: 172 | @staticmethod 173 | def min(objective, parameter, repeat=1, best_n=None, plot=False, plot_file=None): 174 | 175 | ``repeat`` indicates the number of repetitions of the optimization (each 176 | starts from scratch). ``best_n`` is a parameter for result analysis; 177 | it is an integer and equals ``repeat`` by default. 178 | ``ExpOpt.min`` will print the average value and the standard deviation 179 | of the ``best_n`` best results among the returned solution list. 180 | ``plot`` determines whether to plot the regret curve on screen during 181 | the optimization progress. When ``plot=True``, the procedure will block 182 | and show the figure while running if ``plot_file`` is not 183 | given. Otherwise, the procedure will save the figures to disk without 184 | blocking. 185 | 186 | .. code:: python 187 | 188 | solution_list = ExpOpt.min(objective, par, repeat=10, best_n=5, plot=True, plot_file='opt_progress.pdf') 189 | -------------------------------------------------------------------------------- /zoopt/algos/noise_handling/ssracos.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # coding=utf-8 3 | 4 | """ 5 | This module contains the class SSRacos, which combines the noise handling method value suppression with SRacos.
6 | 7 | Author: 8 | Xiong-Hui Chen, Yu-Ren Liu 9 | """ 10 | 11 | import time 12 | from zoopt.algos.opt_algorithms.racos.racos_classification import RacosClassification 13 | from zoopt.algos.opt_algorithms.racos.sracos import SRacos 14 | from zoopt.utils.tool_function import ToolFunction 15 | import numpy as np 16 | 17 | 18 | class SSRacos(SRacos): 19 | """ 20 | This class implements the SSRacos algorithm, which combines the noise handling method value suppression with SRacos. 21 | """ 22 | 23 | def __init__(self): 24 | SRacos.__init__(self) 25 | return 26 | 27 | def opt(self, objective, parameter, strategy='WR', ub=1): 28 | """ 29 | SSRacos optimization. 30 | 31 | :param objective: an Objective object 32 | :param parameter: a Parameter object 33 | :param strategy: replace strategy 34 | :param ub: uncertain bits, which is a parameter of SRacos 35 | :return: the best solution of the optimization 36 | """ 37 | self.clear() 38 | self.set_objective(objective) 39 | self.set_parameters(parameter) 40 | self.init_attribute() 41 | self.i = 0 42 | iteration_num = self._parameter.get_budget() - self._parameter.get_train_size() 43 | time_log1 = time.time() 44 | max_distinct_repeat_times = 100 45 | current_not_distinct_times = 0 46 | last_best = None 47 | non_update_allowed = parameter.get_non_update_allowed() 48 | non_update_times = 0 49 | current_stay_times = 0 50 | non_update_baselines_times = 0 51 | 52 | while self.i < iteration_num: 53 | sampled_data = self._positive_data + self._negative_data 54 | if np.random.random() < self._parameter.get_probability(): 55 | classifier = RacosClassification( 56 | self._objective.get_dim(), self._positive_data, self._negative_data, ub) 57 | classifier.mixed_classification() 58 | solution, distinct_flag = self.distinct_sample_classifier( 59 | classifier, sampled_data, True, self._parameter.get_train_size()) 60 | else: 61 | solution, distinct_flag = self.distinct_sample( 62 | self._objective.get_dim(), sampled_data) 63 | # panic stop 64 | if solution is None: 65 | ToolFunction.log(" [break loop] because solution is None") 66 | return self.get_best_solution() 67 | if distinct_flag is False: 68 | current_not_distinct_times += 1 69 | if current_not_distinct_times >= max_distinct_repeat_times: 70 | ToolFunction.log( 71 | "[break loop] because distinct_flag is False too many times") 72 | return self.get_best_solution() 73 | else: 74 | continue 75 | else: 76 | current_not_distinct_times = 0 77 | # evaluate the solution 78 | objective.eval(solution) 79 | # show best solution 80 | times = self.i + self._parameter.get_train_size() + 1 81 | self.show_best_solution(parameter.get_intermediate_result(), times, 82 | parameter.get_intermediate_freq()) 83 | # suppression 84 | if self._is_worst(solution): 85 | non_update_times += 1 86 | if non_update_times >= non_update_allowed: 87 | self._positive_data_re_sample() 88 | self.update_possible_solution() 89 | self._positive_data = self.sort_solution_list( 90 | self._positive_data) 91 | non_update_times = 0 92 | best_solution = self.get_best_solution(for_test=True) 93 | last_best = best_solution.get_resample_value() 94 | else: 95 | non_update_times = 0 96 | 97 | bad_ele = self.replace(self._positive_data, solution, 'pos') 98 | self.replace(self._negative_data, bad_ele, 'neg', strategy) 99 | self._best_solution = self._positive_data[0] 100 | 101 | if self.i == 4: 102 | time_log2 = time.time() 103 | expected_time = (self._parameter.get_budget( 104 | ) - self._parameter.get_train_size()) * (time_log2 - time_log1) / 5 105 | if self._parameter.get_time_budget() is not None:
106 | expected_time = min( 107 | expected_time, self._parameter.get_time_budget()) 108 | if expected_time > 5: 109 | m, s = divmod(expected_time, 60) 110 | h, m = divmod(m, 60) 111 | ToolFunction.log( 112 | 'expected remaining running time: %02d:%02d:%02d' % (h, m, s)) 113 | # time budget check 114 | if self._parameter.get_time_budget() is not None: 115 | if (time.time() - time_log1) >= self._parameter.get_time_budget(): 116 | ToolFunction.log('time_budget runs out') 117 | return self.get_best_solution() 118 | # terminal_value check 119 | if self._parameter.get_terminal_value() is not None: 120 | solution = self.get_best_solution(for_test=True) 121 | if solution is not None and solution.get_resample_value() <= self._parameter.get_terminal_value(): 122 | ToolFunction.log('terminal function value reached') 123 | return self.get_best_solution() 124 | self.i += 1 125 | return self.get_best_solution() 126 | 127 | def update_possible_solution(self): 128 | """ 129 | Search all solutions in self._positive_data and add each to self._possible_solution_list if it is not already there. 130 | """ 131 | for solution in self._positive_data: 132 | if solution.is_in_possible_solution: 133 | continue 134 | else: 135 | solution.is_in_possible_solution = True 136 | new_solution = solution.deep_copy() 137 | self._possible_solution_list.append(new_solution) 138 | 139 | def get_best_solution(self, for_test=False): 140 | """ 141 | Find the best solution. 142 | 143 | :param for_test: if for_test is False, this method re-samples all solutions in the positive data and adds them to the possible solution list before searching for the best solution. 144 | :return: the best solution; its value is set to the re-sampled value if for_test is False, otherwise the suppression value is kept. 145 | """ 146 | if not for_test: 147 | # update solution in positive data 148 | self._positive_data_re_sample() 149 | self.update_possible_solution() 150 | # sort 151 | sort_solution = self.sort_solution_list( 152 | self._possible_solution_list, key=lambda x: x.get_resample_value()) 153 | if sort_solution == []: 154 | return None 155 | else: 156 | if not for_test: 157 | sort_solution[0].set_value( 158 | sort_solution[0].get_resample_value()) 159 | return sort_solution[0] 160 | else: 161 | return sort_solution[0] 162 | 163 | def sort_solution_list(self, solution_list, key=lambda x: x.get_value()): 164 | """ 165 | Sort a solution list (e.g., self._positive_data, self._possible_solution_list) by key 166 | 167 | :param solution_list: the solution list to be sorted. 168 | :param key: a function that takes a solution and returns its key. 169 | :return: a sorted copy of the list (the original solution list is unchanged). 170 | """ 171 | return sorted(solution_list, key=key) 172 | 173 | def _positive_data_re_sample(self): 174 | """ 175 | Re-sample all solutions in the positive data (solutions that have been re-sampled before are skipped). 176 | """ 177 | for data in self._positive_data: 178 | iter_times = self._objective.resample( 179 | data, self.get_parameters().get_resample_times()) 180 | self.i += iter_times 181 | 182 | def _is_worst(self, solution): 183 | """ 184 | Judge if the solution is the worst solution in the positive data. 185 | 186 | :param solution: the solution to be judged. 187 | :return: True if the solution is the worst, False otherwise.
188 | """ 189 | return self._positive_data[-1].get_value() <= solution.get_value() 190 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # ZOOpt 2 | 3 | [![license](https://img.shields.io/github/license/mashape/apistatus.svg?maxAge=2592000)](https://github.com/eyounx/ZOOpt/blob/master/LICENSE.txt) [![Build Status](https://www.travis-ci.org/eyounx/ZOOpt.svg?branch=master)](https://www.travis-ci.org/eyounx/ZOOpt) [![Documentation Status](https://readthedocs.org/projects/zoopt/badge/?version=latest)](https://zoopt.readthedocs.io/en/latest/?badge=latest) [![codecov](https://codecov.io/gh/AlexLiuyuren/ZOOpt/branch/master/graph/badge.svg)](https://codecov.io/gh/AlexLiuyuren/ZOOpt) 4 | 5 | ZOOpt is a python package for Zeroth-Order Optimization. 6 | 7 | Zeroth-order optimization (a.k.a. derivative-free optimization/black-box optimization) does not rely on the gradient of the objective function, but instead, learns from samples of the search space. It is suitable for optimizing functions that are nondifferentiable, with many local minima, or even unknown but only testable. 8 | 9 | ZOOpt implements some state-of-the-art zeroth-order optimization methods and their parallel versions. Users only need to add several keywords to use parallel optimization on a single machine. For large-scale distributed optimization across multiple machines, please refer to [Distributed ZOOpt](https://github.com/eyounx/ZOOsrv). 10 | 11 | **Documents**: [Tutorial of ZOOpt](http://zoopt.readthedocs.io/en/latest/index.html) 12 | 13 | **Citation**: 14 | 15 | > **Yu-Ren Liu, Yi-Qi Hu, Hong Qian, Chao Qian, Yang Yu. ZOOpt: Toolbox for Derivative-Free Optimization**. SCIENCE CHINA Information Sciences, 2022. [CORR abs/1801.00329](https://arxiv.org/abs/1801.00329) 16 | 17 | (Features in this article are from version 0.2) 18 | 19 | ## Installation 20 | 21 | The easiest way to install ZOOpt is to type `pip install zoopt` in the terminal/command line. 22 | 23 | Alternatively, to install ZOOpt by source code, download this repository and sequentially run following commands in your terminal/command line. 24 | 25 | ``` 26 | $ python setup.py build 27 | $ python setup.py install 28 | ``` 29 | 30 | ## Quick tutorial for `Dimension2` class 31 | Since release 0.4.1, `Dimension2` class in ZOOpt supports constructing THREE types of dimensions, 32 | i.e.: `ValueType.CONTINUOUS`, `ValueType.DISCRETE`, and `ValueType.GRID`. 33 | 34 | For **continuous dimensions**, the arguments should be like `(ValueType.CONTINUOUS, range, float_precision)`.
35 | Where `ValueType.CONTINUOUS` indicates this dimension is continuous. 36 | `range` is a list that indicates the search space, such as `[min, max]` (endpoints are inclusive). 37 | `float_precision` means the precision of this dimension, e.g., if it is set to 1e-6, 0.001, or 10, the answer will be accurate to six decimal places, three decimal places, or tens places. 38 | 39 | For **discrete dimensions**, the arguments should be like `(ValueType.DISCRETE, range, has_partial_order)`.
40 | Where `ValueType.DISCRETE` indicates this dimension is discrete. 41 | `range` is also a list that indicates the search space, such as `[min, max]` (endpoints are inclusive), but **ONLY integers can be sampled**. 42 | `has_partial_order` indicates whether this dimension is ordered. `True` is for an ordered relation and `False` means it is not. 43 | 44 | For **grid dimensions**, the arguments should be like `(ValueType.GRID, grid_list)`.
45 | Where `ValueType.GRID` indicates this dimension is a grid, which is convenient for instance-wise search. 46 | `grid_list` is a list whose values can be *str*, *int*, *float*, etc. All values in this list will be sampled as in a grid search. 47 | 48 | For instance, you can define your own dimensions like: 49 | ```python 50 | dim_list = [ 51 | (ValueType.CONTINUOUS, [-1, 1], 1e-6), 52 | (ValueType.DISCRETE, [-10, 10], False), 53 | (ValueType.DISCRETE, [10, 100], True), 54 | (ValueType.GRID, [64, 128, 256, 512, 1024]), 55 | (ValueType.GRID, ["relu", "leaky_relu", "tanh", "sigmoid"]) 56 | ] 57 | 58 | dim = Dimension2(dim_list) 59 | ``` 60 | 61 | ## A simple example 62 | 63 | We define the Ackley function for minimization (note that this function works for arbitrary dimensions, determined by the solution) 64 | 65 | ```python 66 | import numpy as np 67 | def ackley(solution): 68 | x = solution.get_x() 69 | bias = 0.2 70 | value = -20 * np.exp(-0.2 * np.sqrt(sum([(i - bias) * (i - bias) for i in x]) / len(x))) - \ 71 | np.exp(sum([np.cos(2.0*np.pi*(i-bias)) for i in x]) / len(x)) + 20.0 + np.e 72 | return value 73 | ``` 74 | 75 | The Ackley function is a classical function with many local minima. In 2 dimensions, it looks like this (image from Wikipedia) 76 | 77 |
Ackley function
78 | Then, use ZOOpt to optimize a 100-dimensional Ackley function: 79 | 80 | ```python 81 | from zoopt import Dimension, ValueType, Dimension2, Objective, Parameter, Opt, ExpOpt 82 | 83 | dim_size = 100 # dimension size 84 | dim = Dimension(dim_size, [[-1, 1]]*dim_size, [True]*dim_size) 85 | # dim = Dimension2([(ValueType.CONTINUOUS, [-1, 1], 1e-6)]*dim_size) 86 | obj = Objective(ackley, dim) 87 | # perform optimization 88 | solution = Opt.min(obj, Parameter(budget=100*dim_size)) 89 | # print the solution 90 | print(solution.get_x(), solution.get_value()) 91 | # parallel optimization for time-consuming tasks 92 | solution = Opt.min(obj, Parameter(budget=100*dim_size, parallel=True, server_num=3)) 93 | ``` 94 | 95 | After a few seconds, the optimization is done. Then, we can visualize the optimization progress 96 | 97 | ```python 98 | import matplotlib.pyplot as plt 99 | plt.plot(obj.get_history_bestsofar()) 100 | plt.savefig('figure.png') 101 | ``` 102 | 103 | which looks like 104 | 105 |

Experiment results
106 | We can also use `ExpOpt` to repeat the optimization for performance analysis, which will calculate the mean and standard deviation of multiple optimization results while automatically visualizing the optimization progress. 107 | 108 | ```python 109 | solution_list = ExpOpt.min(obj, Parameter(budget=100*dim_size), repeat=3, 110 | plot=True, plot_file="progress.png") 111 | for solution in solution_list: 112 | print(solution.get_x(), solution.get_value()) 113 | 114 | ``` 115 | 116 | More examples are available in the `example` folder. 117 | 118 | # Releases 119 | 120 | ## [release 0.4.2](https://github.com/polixir/ZOOpt/releases/tag/v0.4.2) 121 | 122 | - Fix known bugs that may cause poor optimization outcomes. 123 | It is strongly recommended to upgrade to the latest version. 124 | 125 | ## [release 0.4.1](https://github.com/polixir/ZOOpt/releases/tag/v0.4.1) 126 | 127 | - Fix known bugs when sampling for Tune and make ZOOpt compatible with the latest Ray 0.8.7. 128 | It is strongly recommended to update to this version if you are leveraging Ray. 129 | - Add ValueType.GRID for instance-wise search in the Dimension2 class. 130 | 131 | ## [release 0.4](https://github.com/eyounx/ZOOpt/releases/tag/v0.4) 132 | 133 | - Add the Dimension2 class, which provides another format to construct dimensions. Unlike the Dimension class, Dimension2 allows users to specify the optimization precision. 134 | - Add the SRacosTune class, which is used to suggest/provide trials and process results for [Tune](https://github.com/ray-project/ray) (a platform based on Ray for distributed model selection and training). 135 | - Deprecate Python 2 support 136 | 137 | ## [release 0.3](https://github.com/eyounx/ZOOpt/releases/tag/v0.3) 138 | 139 | - Add a parallel implementation of SRACOS, which accelerates the optimization by asynchronous parallelization. 140 | - Add a function that enables users to set a customized stopping criterion for the optimization. 141 | - Rewrite the documentation to make it easier to follow. 142 | 143 | ## [release 0.2](https://github.com/eyounx/ZOOpt/releases/tag/v0.2.1) 144 | 145 | - Add the noise handling strategies Re-sampling and Value Suppression (AAAI'18), and the subset selection method with noise handling PONSS (NIPS'17) 146 | - Add the high-dimensionality handling method Sequential Random Embedding (IJCAI'16) 147 | - Rewrite the Pareto optimization method. Bugs fixed. 148 | 149 | ## [release 0.1](https://github.com/eyounx/ZOOpt/releases/tag/v0.1) 150 | 151 | - Include the general optimization method RACOS (AAAI'16) and Sequential RACOS (AAAI'17), and the subset selection method POSS (NIPS'15). 152 | - The algorithm selection is automatic. See examples in the `example` folder. - Default parameters work well on many problems, while all parameters remain fully controllable 153 | - Running speed optimized for Python 154 | 155 | # Distributed ZOOpt 156 | 157 | Distributed ZOOpt consists of a [server project](https://github.com/eyounx/ZOOsrv) and a [client project](https://github.com/eyounx/ZOOclient.jl). Details can be found in the [Tutorial of Distributed ZOOpt](http://zoopt.readthedocs.io/en/latest/Tutorial%20of%20Distributed%20ZOOpt.html) 158 | 159 | --------------------------------------------------------------------------------