--------------------------------------------------------------------------------
/CODE_OF_CONDUCT.md:
--------------------------------------------------------------------------------
1 | # Contributor Covenant Code of Conduct
2 |
3 | ## Our Pledge
4 |
5 | We as members, contributors, and leaders pledge to make participation in our
6 | community a harassment-free experience for everyone, regardless of age, body
7 | size, visible or invisible disability, ethnicity, sex characteristics, gender
8 | identity and expression, level of experience, education, socio-economic status,
9 | nationality, personal appearance, race, religion, or sexual identity
10 | and orientation.
11 |
12 | We pledge to act and interact in ways that contribute to an open, welcoming,
13 | diverse, inclusive, and healthy community.
14 |
15 | ## Our Standards
16 |
17 | Examples of behavior that contributes to a positive environment for our
18 | community include:
19 |
20 | * Demonstrating empathy and kindness toward other people
21 | * Being respectful of differing opinions, viewpoints, and experiences
22 | * Giving and gracefully accepting constructive feedback
23 | * Accepting responsibility and apologizing to those affected by our mistakes,
24 | and learning from the experience
25 | * Focusing on what is best not just for us as individuals, but for the
26 | overall community
27 |
28 | Examples of unacceptable behavior include:
29 |
30 | * The use of sexualized language or imagery, and sexual attention or
31 | advances of any kind
32 | * Trolling, insulting or derogatory comments, and personal or political attacks
33 | * Public or private harassment
34 | * Publishing others' private information, such as a physical or email
35 | address, without their explicit permission
36 | * Other conduct which could reasonably be considered inappropriate in a
37 | professional setting
38 |
39 | ## Enforcement Responsibilities
40 |
41 | Community leaders are responsible for clarifying and enforcing our standards of
42 | acceptable behavior and will take appropriate and fair corrective action in
43 | response to any behavior that they deem inappropriate, threatening, offensive,
44 | or harmful.
45 |
46 | Community leaders have the right and responsibility to remove, edit, or reject
47 | comments, commits, code, wiki edits, issues, and other contributions that are
48 | not aligned to this Code of Conduct, and will communicate reasons for moderation
49 | decisions when appropriate.
50 |
51 | ## Scope
52 |
53 | This Code of Conduct applies within all community spaces, and also applies when
54 | an individual is officially representing the community in public spaces.
55 | Examples of representing our community include using an official e-mail address,
56 | posting via an official social media account, or acting as an appointed
57 | representative at an online or offline event.
58 |
59 | ## Enforcement
60 |
61 | Instances of abusive, harassing, or otherwise unacceptable behavior may be
62 | reported to the community leaders responsible for enforcement at
63 | murad.mustafayev@ufaz.az.
64 | All complaints will be reviewed and investigated promptly and fairly.
65 |
66 | All community leaders are obligated to respect the privacy and security of the
67 | reporter of any incident.
68 |
69 | ## Enforcement Guidelines
70 |
71 | Community leaders will follow these Community Impact Guidelines in determining
72 | the consequences for any action they deem in violation of this Code of Conduct:
73 |
74 | ### 1. Correction
75 |
76 | **Community Impact**: Use of inappropriate language or other behavior deemed
77 | unprofessional or unwelcome in the community.
78 |
79 | **Consequence**: A private, written warning from community leaders, providing
80 | clarity around the nature of the violation and an explanation of why the
81 | behavior was inappropriate. A public apology may be requested.
82 |
83 | ### 2. Warning
84 |
85 | **Community Impact**: A violation through a single incident or series
86 | of actions.
87 |
88 | **Consequence**: A warning with consequences for continued behavior. No
89 | interaction with the people involved, including unsolicited interaction with
90 | those enforcing the Code of Conduct, for a specified period of time. This
91 | includes avoiding interactions in community spaces as well as external channels
92 | like social media. Violating these terms may lead to a temporary or
93 | permanent ban.
94 |
95 | ### 3. Temporary Ban
96 |
97 | **Community Impact**: A serious violation of community standards, including
98 | sustained inappropriate behavior.
99 |
100 | **Consequence**: A temporary ban from any sort of interaction or public
101 | communication with the community for a specified period of time. No public or
102 | private interaction with the people involved, including unsolicited interaction
103 | with those enforcing the Code of Conduct, is allowed during this period.
104 | Violating these terms may lead to a permanent ban.
105 |
106 | ### 4. Permanent Ban
107 |
108 | **Community Impact**: Demonstrating a pattern of violation of community
109 | standards, including sustained inappropriate behavior, harassment of an
110 | individual, or aggression toward or disparagement of classes of individuals.
111 |
112 | **Consequence**: A permanent ban from any sort of public interaction within
113 | the community.
114 |
115 | ## Attribution
116 |
117 | This Code of Conduct is adapted from the [Contributor Covenant][homepage],
118 | version 2.0, available at
119 | https://www.contributor-covenant.org/version/2/0/code_of_conduct.html.
120 |
121 | Community Impact Guidelines were inspired by [Mozilla's code of conduct
122 | enforcement ladder](https://github.com/mozilla/diversity).
123 |
124 | [homepage]: https://www.contributor-covenant.org
125 |
126 | For answers to common questions about this code of conduct, see the FAQ at
127 | https://www.contributor-covenant.org/faq. Translations are available at
128 | https://www.contributor-covenant.org/translations.
129 |
--------------------------------------------------------------------------------
/CONTRIBUTING.md:
--------------------------------------------------------------------------------
1 | You are welcome to contribute, and I'd really appreciate it if you follow these guidelines:
2 | - Follow the existing code format of the project
3 | - Write docstrings in the same style as the rest of the project
4 | - For each newly added algorithm, add a short description with reference links to README.md
5 |
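As a rough illustration of the guidelines above, a new algorithm submission might look like the following. This is a hypothetical sketch, not the project's actual docstring template; check the existing modules in the repository and mirror their style exactly:

```python
def binary_search(arr, target):
    """Return the index of target in the sorted list arr, or -1 if absent.

    Reference: https://en.wikipedia.org/wiki/Binary_search_algorithm

    :param arr: sorted list of comparable items
    :param target: item to locate
    :return: index of target in arr, or -1 when not found
    """
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2  # midpoint of the current search window
        if arr[mid] == target:
            return mid
        if arr[mid] < target:
            lo = mid + 1  # discard the lower half
        else:
            hi = mid - 1  # discard the upper half
    return -1
```

A matching README entry would then briefly describe the algorithm and link to the reference above.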
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | GNU GENERAL PUBLIC LICENSE
2 | Version 3, 29 June 2007
3 |
4 | Copyright (C) 2007 Free Software Foundation, Inc.
5 | Everyone is permitted to copy and distribute verbatim copies
6 | of this license document, but changing it is not allowed.
7 |
8 | Preamble
9 |
10 | The GNU General Public License is a free, copyleft license for
11 | software and other kinds of works.
12 |
13 | The licenses for most software and other practical works are designed
14 | to take away your freedom to share and change the works. By contrast,
15 | the GNU General Public License is intended to guarantee your freedom to
16 | share and change all versions of a program--to make sure it remains free
17 | software for all its users. We, the Free Software Foundation, use the
18 | GNU General Public License for most of our software; it applies also to
19 | any other work released this way by its authors. You can apply it to
20 | your programs, too.
21 |
22 | When we speak of free software, we are referring to freedom, not
23 | price. Our General Public Licenses are designed to make sure that you
24 | have the freedom to distribute copies of free software (and charge for
25 | them if you wish), that you receive source code or can get it if you
26 | want it, that you can change the software or use pieces of it in new
27 | free programs, and that you know you can do these things.
28 |
29 | To protect your rights, we need to prevent others from denying you
30 | these rights or asking you to surrender the rights. Therefore, you have
31 | certain responsibilities if you distribute copies of the software, or if
32 | you modify it: responsibilities to respect the freedom of others.
33 |
34 | For example, if you distribute copies of such a program, whether
35 | gratis or for a fee, you must pass on to the recipients the same
36 | freedoms that you received. You must make sure that they, too, receive
37 | or can get the source code. And you must show them these terms so they
38 | know their rights.
39 |
40 | Developers that use the GNU GPL protect your rights with two steps:
41 | (1) assert copyright on the software, and (2) offer you this License
42 | giving you legal permission to copy, distribute and/or modify it.
43 |
44 | For the developers' and authors' protection, the GPL clearly explains
45 | that there is no warranty for this free software. For both users' and
46 | authors' sake, the GPL requires that modified versions be marked as
47 | changed, so that their problems will not be attributed erroneously to
48 | authors of previous versions.
49 |
50 | Some devices are designed to deny users access to install or run
51 | modified versions of the software inside them, although the manufacturer
52 | can do so. This is fundamentally incompatible with the aim of
53 | protecting users' freedom to change the software. The systematic
54 | pattern of such abuse occurs in the area of products for individuals to
55 | use, which is precisely where it is most unacceptable. Therefore, we
56 | have designed this version of the GPL to prohibit the practice for those
57 | products. If such problems arise substantially in other domains, we
58 | stand ready to extend this provision to those domains in future versions
59 | of the GPL, as needed to protect the freedom of users.
60 |
61 | Finally, every program is threatened constantly by software patents.
62 | States should not allow patents to restrict development and use of
63 | software on general-purpose computers, but in those that do, we wish to
64 | avoid the special danger that patents applied to a free program could
65 | make it effectively proprietary. To prevent this, the GPL assures that
66 | patents cannot be used to render the program non-free.
67 |
68 | The precise terms and conditions for copying, distribution and
69 | modification follow.
70 |
71 | TERMS AND CONDITIONS
72 |
73 | 0. Definitions.
74 |
75 | "This License" refers to version 3 of the GNU General Public License.
76 |
77 | "Copyright" also means copyright-like laws that apply to other kinds of
78 | works, such as semiconductor masks.
79 |
80 | "The Program" refers to any copyrightable work licensed under this
81 | License. Each licensee is addressed as "you". "Licensees" and
82 | "recipients" may be individuals or organizations.
83 |
84 | To "modify" a work means to copy from or adapt all or part of the work
85 | in a fashion requiring copyright permission, other than the making of an
86 | exact copy. The resulting work is called a "modified version" of the
87 | earlier work or a work "based on" the earlier work.
88 |
89 | A "covered work" means either the unmodified Program or a work based
90 | on the Program.
91 |
92 | To "propagate" a work means to do anything with it that, without
93 | permission, would make you directly or secondarily liable for
94 | infringement under applicable copyright law, except executing it on a
95 | computer or modifying a private copy. Propagation includes copying,
96 | distribution (with or without modification), making available to the
97 | public, and in some countries other activities as well.
98 |
99 | To "convey" a work means any kind of propagation that enables other
100 | parties to make or receive copies. Mere interaction with a user through
101 | a computer network, with no transfer of a copy, is not conveying.
102 |
103 | An interactive user interface displays "Appropriate Legal Notices"
104 | to the extent that it includes a convenient and prominently visible
105 | feature that (1) displays an appropriate copyright notice, and (2)
106 | tells the user that there is no warranty for the work (except to the
107 | extent that warranties are provided), that licensees may convey the
108 | work under this License, and how to view a copy of this License. If
109 | the interface presents a list of user commands or options, such as a
110 | menu, a prominent item in the list meets this criterion.
111 |
112 | 1. Source Code.
113 |
114 | The "source code" for a work means the preferred form of the work
115 | for making modifications to it. "Object code" means any non-source
116 | form of a work.
117 |
118 | A "Standard Interface" means an interface that either is an official
119 | standard defined by a recognized standards body, or, in the case of
120 | interfaces specified for a particular programming language, one that
121 | is widely used among developers working in that language.
122 |
123 | The "System Libraries" of an executable work include anything, other
124 | than the work as a whole, that (a) is included in the normal form of
125 | packaging a Major Component, but which is not part of that Major
126 | Component, and (b) serves only to enable use of the work with that
127 | Major Component, or to implement a Standard Interface for which an
128 | implementation is available to the public in source code form. A
129 | "Major Component", in this context, means a major essential component
130 | (kernel, window system, and so on) of the specific operating system
131 | (if any) on which the executable work runs, or a compiler used to
132 | produce the work, or an object code interpreter used to run it.
133 |
134 | The "Corresponding Source" for a work in object code form means all
135 | the source code needed to generate, install, and (for an executable
136 | work) run the object code and to modify the work, including scripts to
137 | control those activities. However, it does not include the work's
138 | System Libraries, or general-purpose tools or generally available free
139 | programs which are used unmodified in performing those activities but
140 | which are not part of the work. For example, Corresponding Source
141 | includes interface definition files associated with source files for
142 | the work, and the source code for shared libraries and dynamically
143 | linked subprograms that the work is specifically designed to require,
144 | such as by intimate data communication or control flow between those
145 | subprograms and other parts of the work.
146 |
147 | The Corresponding Source need not include anything that users
148 | can regenerate automatically from other parts of the Corresponding
149 | Source.
150 |
151 | The Corresponding Source for a work in source code form is that
152 | same work.
153 |
154 | 2. Basic Permissions.
155 |
156 | All rights granted under this License are granted for the term of
157 | copyright on the Program, and are irrevocable provided the stated
158 | conditions are met. This License explicitly affirms your unlimited
159 | permission to run the unmodified Program. The output from running a
160 | covered work is covered by this License only if the output, given its
161 | content, constitutes a covered work. This License acknowledges your
162 | rights of fair use or other equivalent, as provided by copyright law.
163 |
164 | You may make, run and propagate covered works that you do not
165 | convey, without conditions so long as your license otherwise remains
166 | in force. You may convey covered works to others for the sole purpose
167 | of having them make modifications exclusively for you, or provide you
168 | with facilities for running those works, provided that you comply with
169 | the terms of this License in conveying all material for which you do
170 | not control copyright. Those thus making or running the covered works
171 | for you must do so exclusively on your behalf, under your direction
172 | and control, on terms that prohibit them from making any copies of
173 | your copyrighted material outside their relationship with you.
174 |
175 | Conveying under any other circumstances is permitted solely under
176 | the conditions stated below. Sublicensing is not allowed; section 10
177 | makes it unnecessary.
178 |
179 | 3. Protecting Users' Legal Rights From Anti-Circumvention Law.
180 |
181 | No covered work shall be deemed part of an effective technological
182 | measure under any applicable law fulfilling obligations under article
183 | 11 of the WIPO copyright treaty adopted on 20 December 1996, or
184 | similar laws prohibiting or restricting circumvention of such
185 | measures.
186 |
187 | When you convey a covered work, you waive any legal power to forbid
188 | circumvention of technological measures to the extent such circumvention
189 | is effected by exercising rights under this License with respect to
190 | the covered work, and you disclaim any intention to limit operation or
191 | modification of the work as a means of enforcing, against the work's
192 | users, your or third parties' legal rights to forbid circumvention of
193 | technological measures.
194 |
195 | 4. Conveying Verbatim Copies.
196 |
197 | You may convey verbatim copies of the Program's source code as you
198 | receive it, in any medium, provided that you conspicuously and
199 | appropriately publish on each copy an appropriate copyright notice;
200 | keep intact all notices stating that this License and any
201 | non-permissive terms added in accord with section 7 apply to the code;
202 | keep intact all notices of the absence of any warranty; and give all
203 | recipients a copy of this License along with the Program.
204 |
205 | You may charge any price or no price for each copy that you convey,
206 | and you may offer support or warranty protection for a fee.
207 |
208 | 5. Conveying Modified Source Versions.
209 |
210 | You may convey a work based on the Program, or the modifications to
211 | produce it from the Program, in the form of source code under the
212 | terms of section 4, provided that you also meet all of these conditions:
213 |
214 | a) The work must carry prominent notices stating that you modified
215 | it, and giving a relevant date.
216 |
217 | b) The work must carry prominent notices stating that it is
218 | released under this License and any conditions added under section
219 | 7. This requirement modifies the requirement in section 4 to
220 | "keep intact all notices".
221 |
222 | c) You must license the entire work, as a whole, under this
223 | License to anyone who comes into possession of a copy. This
224 | License will therefore apply, along with any applicable section 7
225 | additional terms, to the whole of the work, and all its parts,
226 | regardless of how they are packaged. This License gives no
227 | permission to license the work in any other way, but it does not
228 | invalidate such permission if you have separately received it.
229 |
230 | d) If the work has interactive user interfaces, each must display
231 | Appropriate Legal Notices; however, if the Program has interactive
232 | interfaces that do not display Appropriate Legal Notices, your
233 | work need not make them do so.
234 |
235 | A compilation of a covered work with other separate and independent
236 | works, which are not by their nature extensions of the covered work,
237 | and which are not combined with it such as to form a larger program,
238 | in or on a volume of a storage or distribution medium, is called an
239 | "aggregate" if the compilation and its resulting copyright are not
240 | used to limit the access or legal rights of the compilation's users
241 | beyond what the individual works permit. Inclusion of a covered work
242 | in an aggregate does not cause this License to apply to the other
243 | parts of the aggregate.
244 |
245 | 6. Conveying Non-Source Forms.
246 |
247 | You may convey a covered work in object code form under the terms
248 | of sections 4 and 5, provided that you also convey the
249 | machine-readable Corresponding Source under the terms of this License,
250 | in one of these ways:
251 |
252 | a) Convey the object code in, or embodied in, a physical product
253 | (including a physical distribution medium), accompanied by the
254 | Corresponding Source fixed on a durable physical medium
255 | customarily used for software interchange.
256 |
257 | b) Convey the object code in, or embodied in, a physical product
258 | (including a physical distribution medium), accompanied by a
259 | written offer, valid for at least three years and valid for as
260 | long as you offer spare parts or customer support for that product
261 | model, to give anyone who possesses the object code either (1) a
262 | copy of the Corresponding Source for all the software in the
263 | product that is covered by this License, on a durable physical
264 | medium customarily used for software interchange, for a price no
265 | more than your reasonable cost of physically performing this
266 | conveying of source, or (2) access to copy the
267 | Corresponding Source from a network server at no charge.
268 |
269 | c) Convey individual copies of the object code with a copy of the
270 | written offer to provide the Corresponding Source. This
271 | alternative is allowed only occasionally and noncommercially, and
272 | only if you received the object code with such an offer, in accord
273 | with subsection 6b.
274 |
275 | d) Convey the object code by offering access from a designated
276 | place (gratis or for a charge), and offer equivalent access to the
277 | Corresponding Source in the same way through the same place at no
278 | further charge. You need not require recipients to copy the
279 | Corresponding Source along with the object code. If the place to
280 | copy the object code is a network server, the Corresponding Source
281 | may be on a different server (operated by you or a third party)
282 | that supports equivalent copying facilities, provided you maintain
283 | clear directions next to the object code saying where to find the
284 | Corresponding Source. Regardless of what server hosts the
285 | Corresponding Source, you remain obligated to ensure that it is
286 | available for as long as needed to satisfy these requirements.
287 |
288 | e) Convey the object code using peer-to-peer transmission, provided
289 | you inform other peers where the object code and Corresponding
290 | Source of the work are being offered to the general public at no
291 | charge under subsection 6d.
292 |
293 | A separable portion of the object code, whose source code is excluded
294 | from the Corresponding Source as a System Library, need not be
295 | included in conveying the object code work.
296 |
297 | A "User Product" is either (1) a "consumer product", which means any
298 | tangible personal property which is normally used for personal, family,
299 | or household purposes, or (2) anything designed or sold for incorporation
300 | into a dwelling. In determining whether a product is a consumer product,
301 | doubtful cases shall be resolved in favor of coverage. For a particular
302 | product received by a particular user, "normally used" refers to a
303 | typical or common use of that class of product, regardless of the status
304 | of the particular user or of the way in which the particular user
305 | actually uses, or expects or is expected to use, the product. A product
306 | is a consumer product regardless of whether the product has substantial
307 | commercial, industrial or non-consumer uses, unless such uses represent
308 | the only significant mode of use of the product.
309 |
310 | "Installation Information" for a User Product means any methods,
311 | procedures, authorization keys, or other information required to install
312 | and execute modified versions of a covered work in that User Product from
313 | a modified version of its Corresponding Source. The information must
314 | suffice to ensure that the continued functioning of the modified object
315 | code is in no case prevented or interfered with solely because
316 | modification has been made.
317 |
318 | If you convey an object code work under this section in, or with, or
319 | specifically for use in, a User Product, and the conveying occurs as
320 | part of a transaction in which the right of possession and use of the
321 | User Product is transferred to the recipient in perpetuity or for a
322 | fixed term (regardless of how the transaction is characterized), the
323 | Corresponding Source conveyed under this section must be accompanied
324 | by the Installation Information. But this requirement does not apply
325 | if neither you nor any third party retains the ability to install
326 | modified object code on the User Product (for example, the work has
327 | been installed in ROM).
328 |
329 | The requirement to provide Installation Information does not include a
330 | requirement to continue to provide support service, warranty, or updates
331 | for a work that has been modified or installed by the recipient, or for
332 | the User Product in which it has been modified or installed. Access to a
333 | network may be denied when the modification itself materially and
334 | adversely affects the operation of the network or violates the rules and
335 | protocols for communication across the network.
336 |
337 | Corresponding Source conveyed, and Installation Information provided,
338 | in accord with this section must be in a format that is publicly
339 | documented (and with an implementation available to the public in
340 | source code form), and must require no special password or key for
341 | unpacking, reading or copying.
342 |
343 | 7. Additional Terms.
344 |
345 | "Additional permissions" are terms that supplement the terms of this
346 | License by making exceptions from one or more of its conditions.
347 | Additional permissions that are applicable to the entire Program shall
348 | be treated as though they were included in this License, to the extent
349 | that they are valid under applicable law. If additional permissions
350 | apply only to part of the Program, that part may be used separately
351 | under those permissions, but the entire Program remains governed by
352 | this License without regard to the additional permissions.
353 |
354 | When you convey a copy of a covered work, you may at your option
355 | remove any additional permissions from that copy, or from any part of
356 | it. (Additional permissions may be written to require their own
357 | removal in certain cases when you modify the work.) You may place
358 | additional permissions on material, added by you to a covered work,
359 | for which you have or can give appropriate copyright permission.
360 |
361 | Notwithstanding any other provision of this License, for material you
362 | add to a covered work, you may (if authorized by the copyright holders of
363 | that material) supplement the terms of this License with terms:
364 |
365 | a) Disclaiming warranty or limiting liability differently from the
366 | terms of sections 15 and 16 of this License; or
367 |
368 | b) Requiring preservation of specified reasonable legal notices or
369 | author attributions in that material or in the Appropriate Legal
370 | Notices displayed by works containing it; or
371 |
372 | c) Prohibiting misrepresentation of the origin of that material, or
373 | requiring that modified versions of such material be marked in
374 | reasonable ways as different from the original version; or
375 |
376 | d) Limiting the use for publicity purposes of names of licensors or
377 | authors of the material; or
378 |
379 | e) Declining to grant rights under trademark law for use of some
380 | trade names, trademarks, or service marks; or
381 |
382 | f) Requiring indemnification of licensors and authors of that
383 | material by anyone who conveys the material (or modified versions of
384 | it) with contractual assumptions of liability to the recipient, for
385 | any liability that these contractual assumptions directly impose on
386 | those licensors and authors.
387 |
388 | All other non-permissive additional terms are considered "further
389 | restrictions" within the meaning of section 10. If the Program as you
390 | received it, or any part of it, contains a notice stating that it is
391 | governed by this License along with a term that is a further
392 | restriction, you may remove that term. If a license document contains
393 | a further restriction but permits relicensing or conveying under this
394 | License, you may add to a covered work material governed by the terms
395 | of that license document, provided that the further restriction does
396 | not survive such relicensing or conveying.
397 |
398 | If you add terms to a covered work in accord with this section, you
399 | must place, in the relevant source files, a statement of the
400 | additional terms that apply to those files, or a notice indicating
401 | where to find the applicable terms.
402 |
403 | Additional terms, permissive or non-permissive, may be stated in the
404 | form of a separately written license, or stated as exceptions;
405 | the above requirements apply either way.
406 |
407 | 8. Termination.
408 |
409 | You may not propagate or modify a covered work except as expressly
410 | provided under this License. Any attempt otherwise to propagate or
411 | modify it is void, and will automatically terminate your rights under
412 | this License (including any patent licenses granted under the third
413 | paragraph of section 11).
414 |
415 | However, if you cease all violation of this License, then your
416 | license from a particular copyright holder is reinstated (a)
417 | provisionally, unless and until the copyright holder explicitly and
418 | finally terminates your license, and (b) permanently, if the copyright
419 | holder fails to notify you of the violation by some reasonable means
420 | prior to 60 days after the cessation.
421 |
422 | Moreover, your license from a particular copyright holder is
423 | reinstated permanently if the copyright holder notifies you of the
424 | violation by some reasonable means, this is the first time you have
425 | received notice of violation of this License (for any work) from that
426 | copyright holder, and you cure the violation prior to 30 days after
427 | your receipt of the notice.
428 |
429 | Termination of your rights under this section does not terminate the
430 | licenses of parties who have received copies or rights from you under
431 | this License. If your rights have been terminated and not permanently
432 | reinstated, you do not qualify to receive new licenses for the same
433 | material under section 10.
434 |
435 | 9. Acceptance Not Required for Having Copies.
436 |
437 | You are not required to accept this License in order to receive or
438 | run a copy of the Program. Ancillary propagation of a covered work
439 | occurring solely as a consequence of using peer-to-peer transmission
440 | to receive a copy likewise does not require acceptance. However,
441 | nothing other than this License grants you permission to propagate or
442 | modify any covered work. These actions infringe copyright if you do
443 | not accept this License. Therefore, by modifying or propagating a
444 | covered work, you indicate your acceptance of this License to do so.
445 |
446 | 10. Automatic Licensing of Downstream Recipients.
447 |
448 | Each time you convey a covered work, the recipient automatically
449 | receives a license from the original licensors, to run, modify and
450 | propagate that work, subject to this License. You are not responsible
451 | for enforcing compliance by third parties with this License.
452 |
453 | An "entity transaction" is a transaction transferring control of an
454 | organization, or substantially all assets of one, or subdividing an
455 | organization, or merging organizations. If propagation of a covered
456 | work results from an entity transaction, each party to that
457 | transaction who receives a copy of the work also receives whatever
458 | licenses to the work the party's predecessor in interest had or could
459 | give under the previous paragraph, plus a right to possession of the
460 | Corresponding Source of the work from the predecessor in interest, if
461 | the predecessor has it or can get it with reasonable efforts.
462 |
463 | You may not impose any further restrictions on the exercise of the
464 | rights granted or affirmed under this License. For example, you may
465 | not impose a license fee, royalty, or other charge for exercise of
466 | rights granted under this License, and you may not initiate litigation
467 | (including a cross-claim or counterclaim in a lawsuit) alleging that
468 | any patent claim is infringed by making, using, selling, offering for
469 | sale, or importing the Program or any portion of it.
470 |
471 | 11. Patents.
472 |
473 | A "contributor" is a copyright holder who authorizes use under this
474 | License of the Program or a work on which the Program is based. The
475 | work thus licensed is called the contributor's "contributor version".
476 |
477 | A contributor's "essential patent claims" are all patent claims
478 | owned or controlled by the contributor, whether already acquired or
479 | hereafter acquired, that would be infringed by some manner, permitted
480 | by this License, of making, using, or selling its contributor version,
481 | but do not include claims that would be infringed only as a
482 | consequence of further modification of the contributor version. For
483 | purposes of this definition, "control" includes the right to grant
484 | patent sublicenses in a manner consistent with the requirements of
485 | this License.
486 |
487 | Each contributor grants you a non-exclusive, worldwide, royalty-free
488 | patent license under the contributor's essential patent claims, to
489 | make, use, sell, offer for sale, import and otherwise run, modify and
490 | propagate the contents of its contributor version.
491 |
492 | In the following three paragraphs, a "patent license" is any express
493 | agreement or commitment, however denominated, not to enforce a patent
494 | (such as an express permission to practice a patent or covenant not to
495 | sue for patent infringement). To "grant" such a patent license to a
496 | party means to make such an agreement or commitment not to enforce a
497 | patent against the party.
498 |
499 | If you convey a covered work, knowingly relying on a patent license,
500 | and the Corresponding Source of the work is not available for anyone
501 | to copy, free of charge and under the terms of this License, through a
502 | publicly available network server or other readily accessible means,
503 | then you must either (1) cause the Corresponding Source to be so
504 | available, or (2) arrange to deprive yourself of the benefit of the
505 | patent license for this particular work, or (3) arrange, in a manner
506 | consistent with the requirements of this License, to extend the patent
507 | license to downstream recipients. "Knowingly relying" means you have
508 | actual knowledge that, but for the patent license, your conveying the
509 | covered work in a country, or your recipient's use of the covered work
510 | in a country, would infringe one or more identifiable patents in that
511 | country that you have reason to believe are valid.
512 |
513 | If, pursuant to or in connection with a single transaction or
514 | arrangement, you convey, or propagate by procuring conveyance of, a
515 | covered work, and grant a patent license to some of the parties
516 | receiving the covered work authorizing them to use, propagate, modify
517 | or convey a specific copy of the covered work, then the patent license
518 | you grant is automatically extended to all recipients of the covered
519 | work and works based on it.
520 |
521 | A patent license is "discriminatory" if it does not include within
522 | the scope of its coverage, prohibits the exercise of, or is
523 | conditioned on the non-exercise of one or more of the rights that are
524 | specifically granted under this License. You may not convey a covered
525 | work if you are a party to an arrangement with a third party that is
526 | in the business of distributing software, under which you make payment
527 | to the third party based on the extent of your activity of conveying
528 | the work, and under which the third party grants, to any of the
529 | parties who would receive the covered work from you, a discriminatory
530 | patent license (a) in connection with copies of the covered work
531 | conveyed by you (or copies made from those copies), or (b) primarily
532 | for and in connection with specific products or compilations that
533 | contain the covered work, unless you entered into that arrangement,
534 | or that patent license was granted, prior to 28 March 2007.
535 |
536 | Nothing in this License shall be construed as excluding or limiting
537 | any implied license or other defenses to infringement that may
538 | otherwise be available to you under applicable patent law.
539 |
540 | 12. No Surrender of Others' Freedom.
541 |
542 | If conditions are imposed on you (whether by court order, agreement or
543 | otherwise) that contradict the conditions of this License, they do not
544 | excuse you from the conditions of this License. If you cannot convey a
545 | covered work so as to satisfy simultaneously your obligations under this
546 | License and any other pertinent obligations, then as a consequence you may
547 | not convey it at all. For example, if you agree to terms that obligate you
548 | to collect a royalty for further conveying from those to whom you convey
549 | the Program, the only way you could satisfy both those terms and this
550 | License would be to refrain entirely from conveying the Program.
551 |
552 | 13. Use with the GNU Affero General Public License.
553 |
554 | Notwithstanding any other provision of this License, you have
555 | permission to link or combine any covered work with a work licensed
556 | under version 3 of the GNU Affero General Public License into a single
557 | combined work, and to convey the resulting work. The terms of this
558 | License will continue to apply to the part which is the covered work,
559 | but the special requirements of the GNU Affero General Public License,
560 | section 13, concerning interaction through a network will apply to the
561 | combination as such.
562 |
563 | 14. Revised Versions of this License.
564 |
565 | The Free Software Foundation may publish revised and/or new versions of
566 | the GNU General Public License from time to time. Such new versions will
567 | be similar in spirit to the present version, but may differ in detail to
568 | address new problems or concerns.
569 |
570 | Each version is given a distinguishing version number. If the
571 | Program specifies that a certain numbered version of the GNU General
572 | Public License "or any later version" applies to it, you have the
573 | option of following the terms and conditions either of that numbered
574 | version or of any later version published by the Free Software
575 | Foundation. If the Program does not specify a version number of the
576 | GNU General Public License, you may choose any version ever published
577 | by the Free Software Foundation.
578 |
579 | If the Program specifies that a proxy can decide which future
580 | versions of the GNU General Public License can be used, that proxy's
581 | public statement of acceptance of a version permanently authorizes you
582 | to choose that version for the Program.
583 |
584 | Later license versions may give you additional or different
585 | permissions. However, no additional obligations are imposed on any
586 | author or copyright holder as a result of your choosing to follow a
587 | later version.
588 |
589 | 15. Disclaimer of Warranty.
590 |
591 | THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
592 | APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
593 | HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
594 | OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
595 | THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
596 | PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
597 | IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
598 | ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
599 |
600 | 16. Limitation of Liability.
601 |
602 | IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
603 | WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
604 | THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
605 | GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
606 | USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
607 | DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
608 | PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
609 | EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
610 | SUCH DAMAGES.
611 |
612 | 17. Interpretation of Sections 15 and 16.
613 |
614 | If the disclaimer of warranty and limitation of liability provided
615 | above cannot be given local legal effect according to their terms,
616 | reviewing courts shall apply local law that most closely approximates
617 | an absolute waiver of all civil liability in connection with the
618 | Program, unless a warranty or assumption of liability accompanies a
619 | copy of the Program in return for a fee.
620 |
621 | END OF TERMS AND CONDITIONS
622 |
623 | How to Apply These Terms to Your New Programs
624 |
625 | If you develop a new program, and you want it to be of the greatest
626 | possible use to the public, the best way to achieve this is to make it
627 | free software which everyone can redistribute and change under these terms.
628 |
629 | To do so, attach the following notices to the program. It is safest
630 | to attach them to the start of each source file to most effectively
631 | state the exclusion of warranty; and each file should have at least
632 | the "copyright" line and a pointer to where the full notice is found.
633 |
634 | <one line to give the program's name and a brief idea of what it does.>
635 | Copyright (C) <year>  <name of author>
636 |
637 | This program is free software: you can redistribute it and/or modify
638 | it under the terms of the GNU General Public License as published by
639 | the Free Software Foundation, either version 3 of the License, or
640 | (at your option) any later version.
641 |
642 | This program is distributed in the hope that it will be useful,
643 | but WITHOUT ANY WARRANTY; without even the implied warranty of
644 | MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
645 | GNU General Public License for more details.
646 |
647 | You should have received a copy of the GNU General Public License
648 | along with this program. If not, see <https://www.gnu.org/licenses/>.
649 |
650 | Also add information on how to contact you by electronic and paper mail.
651 |
652 | If the program does terminal interaction, make it output a short
653 | notice like this when it starts in an interactive mode:
654 |
655 | <program>  Copyright (C) <year>  <name of author>
656 | This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
657 | This is free software, and you are welcome to redistribute it
658 | under certain conditions; type `show c' for details.
659 |
660 | The hypothetical commands `show w' and `show c' should show the appropriate
661 | parts of the General Public License. Of course, your program's commands
662 | might be different; for a GUI interface, you would use an "about box".
663 |
664 | You should also get your employer (if you work as a programmer) or school,
665 | if any, to sign a "copyright disclaimer" for the program, if necessary.
666 | For more information on this, and how to apply and follow the GNU GPL, see
667 | <https://www.gnu.org/licenses/>.
668 |
669 | The GNU General Public License does not permit incorporating your program
670 | into proprietary programs. If your program is a subroutine library, you
671 | may consider it more useful to permit linking proprietary applications with
672 | the library. If this is what you want to do, use the GNU Lesser General
673 | Public License instead of this License. But first, please read
674 | <https://www.gnu.org/philosophy/why-not-lgpl.html>.
675 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Optimisation-Algorithms
2 | A collection of the most commonly used Optimisation Algorithms for Data Science & Machine Learning
3 |
4 | ---
5 |
6 | This repository is created by and belongs to: https://github.com/Muradmustafayev-03
7 |
8 | Contributing guide: https://github.com/Muradmustafayev-03/Optimisation-Algorithms/blob/main/CONTRIBUTING.md
9 |
10 | To report any issues: https://github.com/Muradmustafayev-03/Optimisation-Algorithms/issues
11 |
12 | To install the package as a library use:
13 | > *pip install optimisation-algorithms*
14 |
15 | Then to import:
16 | > *import optimisation_algorithms*
17 |
18 | ---
19 |
20 | In this project I try to collect as many useful Optimisation Algorithms as possible, and write them in a simple and reusable way.
21 | The idea is to write all these algorithms in Python, in a fundamental yet easy-to-use way, with *numpy* being the only external library used.
22 | The project is currently in the early stages of development, but one can already use it in their own projects.
23 | And of course, you are always welcome to contribute or to make any suggestions. Any feedback is appreciated.
24 |
25 | ## What is an Optimisation Algorithm?
26 | *For more information: https://en.wikipedia.org/wiki/Mathematical_optimization*
27 |
28 | An **Optimisation Algorithm** is an algorithm used to find the input values at the *global minimum* or, less often, at the *global maximum* of a function.
29 |
30 | In this project, all the algorithms look for the *global minimum* of the given function.
31 | However, if you want to find the *global maximum* of a function, you can pass the negation of your function, *-f(x)*, instead of *f(x)*,
32 | since its minimum will be the maximum of your function.
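
A minimal numerical sanity check of this trick, using only *numpy* (the toy parabola here is made up for illustration):

```python
import numpy as np

f = lambda x: -(x - 2) ** 2   # has a global maximum at x = 2
g = lambda x: -f(x)           # negate it to turn maximisation into minimisation

# Brute-force minimisation of g on a grid stands in for any of the algorithms
xs = np.linspace(-5, 5, 1001)
x_best = xs[np.argmin(g(xs))]  # the minimiser of g is the maximiser of f
```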
33 |
34 | **Optimization algorithms** are widely used in **Machine Learning**, **Mathematics** and a range of other applied sciences.
35 |
36 | There are multiple kinds of **Optimization algorithms**, so here is a short description of the ones used in this project:
37 |
38 | ### Iterative Algorithms
39 | These algorithms start from a random or specified point and, step by step, move towards the closest minimum.
40 | They often require the *partial derivatives* or the *gradient* of the function, which requires the function to be differentiable.
41 | These algorithms are simple and work well for *bowl-shaped* functions,
42 | though if the function has more than one minimum, they can get stuck in a *local minimum* instead of finding the *global* one.
43 |
44 | ##### Examples of the *Iterative Algorithms* used in this project are:
45 | - Gradient Descent
46 | - Batch Gradient Descent
47 | - Approximated Batch Gradient Descent
48 |
49 | ### Metaheuristic Algorithms
50 | These algorithms start with a set of random solutions,
51 | then competitively choose the best solutions from the set and, based on them,
52 | generate a new set of better solutions, thus evolving with each iteration.
53 | These algorithms are less likely to get stuck in *local minima* and aim for the *global* one, so they can be used for functions with many local minima.
54 |
55 | ##### Examples of the *Metaheuristic Algorithms* used in this project are:
56 | - Harmony Search Algorithm
57 | - Genetic Algorithm
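
The select-and-regenerate loop they share can be sketched like this, using only *numpy* (a toy illustration; the package's Genetic Algorithm and Harmony Search differ in detail):

```python
import numpy as np

def evolve(f, bounds, dim=2, pop_size=40, n_gen=200, elite=10, seed=0):
    """Toy metaheuristic loop: keep the `elite` best solutions each
    generation and resample candidates around them with shrinking noise."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    sigma = (hi - lo) / 10          # mutation scale, annealed each generation
    for _ in range(n_gen):
        fitness = np.apply_along_axis(f, 1, pop)
        elites = pop[np.argsort(fitness)[:elite]]         # selection
        parents = elites[rng.integers(elite, size=pop_size)]
        pop = np.clip(parents + rng.normal(0, sigma, parents.shape), lo, hi)
        sigma *= 0.98                                     # reduce exploration
    fitness = np.apply_along_axis(f, 1, pop)
    return pop[np.argmin(fitness)]

# A many-local-minima surface (Rastrigin-like), global minimum 0 at the origin
f = lambda x: float(np.sum(x ** 2 + 1 - np.cos(2 * np.pi * x)))
best = evolve(f, bounds=(-5.0, 5.0))
```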
58 |
59 | ## Benchmark Functions
60 | *Benchmark functions* are used to test *Optimisation Algorithms*, but they can also be used on their own.
61 | Multiple *benchmark functions* are used in this project, divided into several types depending on their shape.
62 |
63 | *For more information and the mathematical definition of the functions see: https://www.sfu.ca/~ssurjano/optimization.html*
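
For instance, two classic benchmark functions of different shapes, written with *numpy* (illustrative definitions that may differ from the package's own):

```python
import numpy as np

def sphere(x):
    """Bowl-shaped benchmark: single global minimum of 0 at the origin."""
    return float(np.sum(np.asarray(x) ** 2))

def rastrigin(x):
    """Many-local-minima benchmark: global minimum of 0 at the origin,
    surrounded by a regular grid of local minima."""
    x = np.asarray(x)
    return float(10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x)))
```

The *sphere* function suits the iterative algorithms above, while *Rastrigin* is a standard stress test for the metaheuristic ones.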
64 |
--------------------------------------------------------------------------------
/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Muradmustafayev-03/Optimisation-Algorithms/61480d1b10cb067c161320274b7061fc83caaa5a/__init__.py
--------------------------------------------------------------------------------
/optimisation_algorithms.egg-info/PKG-INFO:
--------------------------------------------------------------------------------
1 | Metadata-Version: 2.1
2 | Name: optimisation-algorithms
3 | Version: 1.0.2
4 | Summary: A collection of the most commonly used Optimisation Algorithms for Data Science & Machine Learning
5 | Home-page: https://github.com/Muradmustafayev-03/Optimisation-Algorithms
6 | Author: Murad Mustafayev
7 | Author-email: murad.mustafayev@ufaz.az
8 | Project-URL: Bug Reports, https://github.com/Muradmustafayev-03/Optimisation-Algorithms/issues
9 | Project-URL: Funding, https://donate.pypi.org
10 | Project-URL: Source, https://github.com/Muradmustafayev-03/Optimisation-Algorithms/
11 | Keywords: optimisation,algorithms,metaheuristic,ML
12 | Classifier: Development Status :: 3 - Alpha
13 | Classifier: Intended Audience :: Science/Research
14 | Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
15 | Classifier: License :: OSI Approved :: GNU General Public License v3 (GPLv3)
16 | Classifier: Programming Language :: Python :: 3
17 | Classifier: Programming Language :: Python :: 3.7
18 | Classifier: Programming Language :: Python :: 3.8
19 | Classifier: Programming Language :: Python :: 3.9
20 | Classifier: Programming Language :: Python :: 3.10
21 | Classifier: Programming Language :: Python :: 3 :: Only
22 | Requires-Python: >=3.7, <4
23 | Description-Content-Type: text/markdown
24 | License-File: LICENSE
25 |
26 | # Optimisation-Algorithms
27 | A collection of the most commonly used Optimisation Algorithms for Data Science & Machine Learning
28 |
29 | ---
30 |
31 | This repository is created by and belongs to: https://github.com/Muradmustafayev-03
32 |
33 | Contributing guide: https://github.com/Muradmustafayev-03/Optimisation-Algorithms/blob/main/CONTRIBUTING.md
34 |
35 | To report any issues: https://github.com/Muradmustafayev-03/Optimisation-Algorithms/issues
36 |
37 | To install the package as a library use: ***pip install optimisation-algorithms***
38 |
39 | ---
40 |
41 | In this project I try to collect as many useful Optimisation Algorithms as possible, and write them in a simple and reusable way.
42 | The idea is to write all these algorithms in Python, in a fundamental yet easy-to-use way, with *numpy* being the only external library used.
43 | The project is currently in the early stages of development, but one can already use it in their own projects.
44 | And of course, you are always welcome to contribute or to make any suggestions. Any feedback is appreciated.
45 |
46 | ## What is an Optimisation Algorithm?
47 | *For more information: https://en.wikipedia.org/wiki/Mathematical_optimization*
48 |
49 | An **Optimisation Algorithm** is an algorithm used to find the input values at the *global minimum* or, less often, at the *global maximum* of a function.
50 |
51 | In this project, all the algorithms look for the *global minimum* of the given function.
52 | However, if you want to find the *global maximum* of a function, you can pass the negation of your function, *-f(x)*, instead of *f(x)*,
53 | since its minimum will be the maximum of your function.
54 |
55 | **Optimization algorithms** are widely used in **Machine Learning**, **Mathematics** and a range of other applied sciences.
56 |
57 | There are multiple kinds of **Optimization algorithms**, so here is a short description of the ones used in this project:
58 |
59 | ### Iterative Algorithms
60 | These algorithms start from a random or specified point and, step by step, move towards the closest minimum.
61 | They often require the *partial derivatives* or the *gradient* of the function, which requires the function to be differentiable.
62 | These algorithms are simple and work well for *bowl-shaped* functions,
63 | though if the function has more than one minimum, they can get stuck in a *local minimum* instead of finding the *global* one.
64 |
65 | ##### Examples of the *Iterative Algorithms* used in this project are:
66 | - Gradient Descent
67 | - Batch Gradient Descent
68 | - Approximated Batch Gradient Descent
69 |
70 | ### Metaheuristic Algorithms
71 | These algorithms start with a set of random solutions,
72 | then competitively choose the best solutions from the set and, based on them,
73 | generate a new set of better solutions, thus evolving with each iteration.
74 | These algorithms are less likely to get stuck in *local minima* and aim for the *global* one, so they can be used for functions with many local minima.
75 |
76 | ##### Examples of the *Metaheuristic Algorithms* used in this project are:
77 | - Harmony Search Algorithm
78 | - Genetic Algorithm
79 |
80 | ## Benchmark Functions
81 | *Benchmark functions* are used to test *Optimisation Algorithms*, but they can also be used on their own.
82 | Multiple *benchmark functions* are used in this project, divided into several types depending on their shape.
83 |
84 | *For more information and the mathematical definition of the functions see: https://www.sfu.ca/~ssurjano/optimization.html*
85 |
--------------------------------------------------------------------------------
/optimisation_algorithms.egg-info/SOURCES.txt:
--------------------------------------------------------------------------------
1 | LICENSE
2 | README.md
3 | setup.py
4 | optimisation_algorithms.egg-info/PKG-INFO
5 | optimisation_algorithms.egg-info/SOURCES.txt
6 | optimisation_algorithms.egg-info/dependency_links.txt
7 | optimisation_algorithms.egg-info/requires.txt
8 | optimisation_algorithms.egg-info/top_level.txt
--------------------------------------------------------------------------------
/optimisation_algorithms.egg-info/dependency_links.txt:
--------------------------------------------------------------------------------
1 |
2 |
--------------------------------------------------------------------------------
/optimisation_algorithms.egg-info/requires.txt:
--------------------------------------------------------------------------------
1 | click==8.1.3
2 | colorama==0.4.5
3 | numpy==1.23.3
4 |
--------------------------------------------------------------------------------
/optimisation_algorithms.egg-info/top_level.txt:
--------------------------------------------------------------------------------
1 |
2 |
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
1 | click==8.1.3
2 | colorama==0.4.5
3 | numpy==1.23.3
4 | pip==22.2.2
5 | setuptools==65.4.0
6 | wheel==0.37.1
7 |
--------------------------------------------------------------------------------
/setup.py:
--------------------------------------------------------------------------------
1 | """A setuptools based setup module.
2 | See:
3 | https://packaging.python.org/guides/distributing-packages-using-setuptools/
4 | https://github.com/pypa/sampleproject
5 | """
6 |
7 | # Always prefer setuptools over distutils
8 | from setuptools import setup, find_packages
9 | import pathlib
10 |
11 | here = pathlib.Path(__file__).parent.resolve()
12 |
13 | # Get the long description from the README file
14 | long_description = (here / "README.md").read_text(encoding="utf-8")
15 |
16 | # Arguments marked as "Required" below must be included for upload to PyPI.
17 | # Fields marked as "Optional" may be commented out.
18 |
19 | setup(
20 | # This is the name of your project. The first time you publish this
21 | # package, this name will be registered for you. It will determine how
22 | # users can install this project, e.g.:
23 | #
24 | # $ pip install sampleproject
25 | #
26 | # And where it will live on PyPI: https://pypi.org/project/sampleproject/
27 | #
28 | # There are some restrictions on what makes a valid project name
29 | # specification here:
30 | # https://packaging.python.org/specifications/core-metadata/#name
31 | name="optimisation_algorithms", # Required
32 | # Versions should comply with PEP 440:
33 | # https://www.python.org/dev/peps/pep-0440/
34 | #
35 | # For a discussion on single-sourcing the version across setup.py and the
36 | # project code, see
37 | # https://packaging.python.org/guides/single-sourcing-package-version/
38 | version="1.1.2", # Required
39 | # This is a one-line description or tagline of what your project does. This
40 | # corresponds to the "Summary" metadata field:
41 | # https://packaging.python.org/specifications/core-metadata/#summary
42 | description="A collection of the most commonly used Optimisation Algorithms"
43 | " for Data Science & Machine Learning", # Optional
44 | # This is an optional longer description of your project that represents
45 | # the body of text which users will see when they visit PyPI.
46 | #
47 | # Often, this is the same as your README, so you can just read it in from
48 | # that file directly (as we have already done above)
49 | #
50 | # This field corresponds to the "Description" metadata field:
51 | # https://packaging.python.org/specifications/core-metadata/#description-optional
52 | long_description=long_description, # Optional
53 | # Denotes that our long_description is in Markdown; valid values are
54 | # text/plain, text/x-rst, and text/markdown
55 | #
56 | # Optional if long_description is written in reStructuredText (rst) but
57 | # required for plain-text or Markdown; if unspecified, "applications should
58 | # attempt to render [the long_description] as text/x-rst; charset=UTF-8 and
59 | # fall back to text/plain if it is not valid rst" (see link below)
60 | #
61 | # This field corresponds to the "Description-Content-Type" metadata field:
62 | # https://packaging.python.org/specifications/core-metadata/#description-content-type-optional
63 | long_description_content_type="text/markdown", # Optional (see note above)
64 | # This should be a valid link to your project's main homepage.
65 | #
66 | # This field corresponds to the "Home-Page" metadata field:
67 | # https://packaging.python.org/specifications/core-metadata/#home-page-optional
68 | url="https://github.com/Muradmustafayev-03/Optimisation-Algorithms", # Optional
69 | # This should be your name or the name of the organization which owns the
70 | # project.
71 | author="Murad Mustafayev", # Optional
72 | # This should be a valid email address corresponding to the author listed
73 | # above.
74 | author_email="murad.mustafayev@ufaz.az", # Optional
75 | # Classifiers help users find your project by categorizing it.
76 | #
77 | # For a list of valid classifiers, see https://pypi.org/classifiers/
78 | classifiers=[ # Optional
79 | # How mature is this project? Common values are
80 | # 3 - Alpha
81 | # 4 - Beta
82 | # 5 - Production/Stable
83 | "Development Status :: 3 - Alpha",
84 | # Indicate who your project is intended for
85 | "Intended Audience :: Science/Research",
86 | "Topic :: Scientific/Engineering :: Artificial Intelligence",
87 | # Pick your license as you wish
88 | "License :: OSI Approved :: GNU General Public License v3 (GPLv3)",
89 | # Specify the Python versions you support here. In particular, ensure
90 | # that you indicate you support Python 3. These classifiers are *not*
91 | # checked by 'pip install'. See instead 'python_requires' below.
92 | "Programming Language :: Python :: 3",
93 | "Programming Language :: Python :: 3.7",
94 | "Programming Language :: Python :: 3.8",
95 | "Programming Language :: Python :: 3.9",
96 | "Programming Language :: Python :: 3.10",
97 | "Programming Language :: Python :: 3 :: Only",
98 | ],
99 | # This field adds keywords for your project which will appear on the
100 | # project page. What does your project relate to?
101 | #
102 | # Note that this is a list of additional keywords, separated
103 | # by commas, to be used to assist searching for the distribution in a
104 | # larger catalog.
105 | keywords="optimisation, optimization algorithms, algorithms, metaheuristic, ML", # Optional
106 | # When your source code is in a subdirectory under the project root, e.g.
107 | # `src/`, it is necessary to specify the `package_dir` argument.
108 | package_dir={"": "src"}, # Optional
109 | # You can just specify package directories manually here if your project is
110 | # simple. Or you can use find_packages().
111 | #
112 | # Alternatively, if you just want to distribute a single Python file, use
113 | # the `py_modules` argument instead as follows, which will expect a file
114 | # called `my_module.py` to exist:
115 | #
116 | # py_modules=["my_module"],
117 | #
118 | packages=find_packages(where="src"), # Required
119 | # Specify which Python versions you support. In contrast to the
120 | # 'Programming Language' classifiers above, 'pip install' will check this
121 | # and refuse to install the project if the version does not match. See
122 | # https://packaging.python.org/guides/distributing-packages-using-setuptools/#python-requires
123 | python_requires=">=3.7, <4",
124 | # This field lists other packages that your project depends on to run.
125 | # Any package you put here will be installed by pip when your project is
126 | # installed, so they must be valid existing projects.
127 | #
128 | # For an analysis of "install_requires" vs pip's requirements files see:
129 | # https://packaging.python.org/discussions/install-requires-vs-requirements/
130 | install_requires=['click==8.1.3',
131 | 'colorama==0.4.5',
132 | 'numpy==1.23.3'], # Optional
133 | # List additional groups of dependencies here (e.g. development
134 | # dependencies). Users will be able to install these using the "extras"
135 | # syntax, for example:
136 | #
137 | # $ pip install sampleproject[dev]
138 | #
139 | # Similar to `install_requires` above, these must be valid existing
140 | # projects.
141 | # extras_require={ # Optional
142 | # "dev": ["check-manifest"],
143 | # "test": ["coverage"],
144 | # },
145 | # If there are data files included in your packages that need to be
146 | # installed, specify them here.
147 | # package_data={ # Optional
148 | # "sample": ["package_data.dat"],
149 | # },
150 | # Although 'package_data' is the preferred approach, in some case you may
151 | # need to place data files outside of your packages. See:
152 | # http://docs.python.org/distutils/setupscript.html#installing-additional-files
153 | #
154 | # In this case, 'data_file' will be installed into '/my_data'
155 | # data_files=[("my_data", ["data/data_file"])], # Optional
156 | # To provide executable scripts, use entry points in preference to the
157 | # "scripts" keyword. Entry points provide cross-platform support and allow
158 | # `pip` to create the appropriate form of executable for the target
159 | # platform.
160 | #
161 | # For example, the following would provide a command called `sample` which
162 | # executes the function `main` from this package when invoked:
163 | # entry_points={ # Optional
164 | # "console_scripts": [
165 | # "sample=sample:main",
166 | # ],
167 | # },
168 | # List additional URLs that are relevant to your project as a dict.
169 | #
170 | # This field corresponds to the "Project-URL" metadata fields:
171 | # https://packaging.python.org/specifications/core-metadata/#project-url-multiple-use
172 | #
173 | # Examples listed include a pattern for specifying where the package tracks
174 | # issues, where the source is hosted, where to say thanks to the package
175 | # maintainers, and where to support the project financially. The key is
176 | # what's used to render the link text on PyPI.
177 | project_urls={ # Optional
178 | "Bug Reports": "https://github.com/Muradmustafayev-03/Optimisation-Algorithms/issues",
179 | "Funding": "https://donate.pypi.org",
180 | "Source": "https://github.com/Muradmustafayev-03/Optimisation-Algorithms/",
181 | },
182 | )
183 |
--------------------------------------------------------------------------------
/src/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Muradmustafayev-03/Optimisation-Algorithms/61480d1b10cb067c161320274b7061fc83caaa5a/src/__init__.py
--------------------------------------------------------------------------------
/src/optimisation.egg-info/PKG-INFO:
--------------------------------------------------------------------------------
1 | Metadata-Version: 2.1
2 | Name: optimisation
3 | Version: 1.0.1
4 | Summary: A collection of the most commonly used Optimisation Algorithms for Data Science & Machine Learning
5 | Home-page: https://github.com/Muradmustafayev-03/Optimisation-Algorithms
6 | Author: Murad Mustafayev
7 | Author-email: murad.mustafayev@ufaz.az
8 | Project-URL: Bug Reports, https://github.com/Muradmustafayev-03/Optimisation-Algorithms/issues
9 | Project-URL: Funding, https://donate.pypi.org
10 | Project-URL: Source, https://github.com/Muradmustafayev-03/Optimisation-Algorithms/
11 | Keywords: optimisation,algorithms,metaheuristic,ML
12 | Classifier: Development Status :: 3 - Alpha
13 | Classifier: Intended Audience :: Science/Research
14 | Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
15 | Classifier: License :: OSI Approved :: GNU General Public License v3 (GPLv3)
16 | Classifier: Programming Language :: Python :: 3
17 | Classifier: Programming Language :: Python :: 3.7
18 | Classifier: Programming Language :: Python :: 3.8
19 | Classifier: Programming Language :: Python :: 3.9
20 | Classifier: Programming Language :: Python :: 3.10
21 | Classifier: Programming Language :: Python :: 3 :: Only
22 | Requires-Python: >=3.7, <4
23 | Description-Content-Type: text/markdown
24 | License-File: LICENSE
25 |
26 | # Optimisation-Algorithms
27 | A collection of the most commonly used Optimisation Algorithms for Data Science & Machine Learning
28 |
29 | ---
30 |
31 | This repository is created by and belongs to: https://github.com/Muradmustafayev-03
32 |
33 | Contributing guide: https://github.com/Muradmustafayev-03/Optimisation-Algorithms/blob/main/CONTRIBUTING.md
34 |
35 | To report any issues: https://github.com/Muradmustafayev-03/Optimisation-Algorithms/issues
36 |
37 | To install the package as a library use: ***pip install Optimisation-Algorithms==1.0.1***
38 |
39 | ---
40 |
41 | In this project I try to collect as many useful Optimisation Algorithms as possible, and write them in a simple and reusable way.
42 | The idea is to write all these algorithms in Python, in a fundamental yet easy-to-use way, with *numpy* being the only external library used.
43 | The project is currently in the early stages of development, but you can already try it in your own projects.
44 | You are always welcome to contribute or to make suggestions; any feedback is appreciated.
45 |
46 | ## What is an Optimisation Algorithm?
47 | *For more information: https://en.wikipedia.org/wiki/Mathematical_optimization*
48 |
49 | An **Optimisation Algorithm** is an algorithm used to find the input values at the *global minimum*, or less often, the *global maximum*, of a function.
50 |
51 | In this project, all the algorithms look for the *global minimum* of the given function.
52 | However, if you want to find the *global maximum* of a function, you can pass the negation of your function, *-f(x)*, instead of *f(x)*,
53 | since its minimum will be the maximum of your function.
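As a quick sketch of this negation trick (using a hypothetical objective and a plain grid search in place of the package's own minimisers):

```python
import numpy as np

# Hypothetical objective with a known maximum f(3) = 5
def f(x):
    return 5 - (x - 3) ** 2

# Minimising the negation -f(x) locates the maximum of f(x);
# a coarse grid search stands in for any minimiser here
grid = np.linspace(-10, 10, 2001)
x_best = grid[np.argmin(-f(grid))]
print(x_best)  # close to 3
```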
54 |
55 | **Optimization algorithms** are widely used in **Machine Learning**, **Mathematics** and a range of other applied sciences.
56 |
57 | There are multiple kinds of **Optimization algorithms**, so here is a short description of the ones used in this project:
58 |
59 | ### Iterative Algorithms
60 | These algorithms start from a random or specified point and move step by step towards the closest minimum.
61 | They often require the *partial derivatives* or the *gradient* of the function, which requires the function to be differentiable.
62 | These algorithms are simple and work well for *bowl-shaped* functions,
63 | though if the function has more than one minimum, they can get stuck at a *local minimum* instead of finding the *global* one.
64 |
65 | ##### Examples of the *Iterative Algorithms* used in this project are:
66 | - Gradient Descent
67 | - Batch Gradient Descent
68 | - Approximated Batch Gradient Descent
69 |
70 | ### Metaheuristic Algorithms
71 | These algorithms start with a set of random solutions,
72 | then competitively choose the best solutions from the set and, based on them,
73 | generate a new set of better solutions, thus evolving with each iteration.
74 | These algorithms don't get stuck at *local minima*, but search directly for the *global* minimum, so they can be used for functions with many local minima.
75 |
76 | ##### Examples of the *Metaheuristic Algorithms* used in this project are:
77 | - Harmony Search Algorithm
78 | - Genetic Algorithm
79 |
80 | ## Benchmark Functions
81 | *Benchmark functions* are used to test *Optimization Algorithms*, though they can also be used on their own.
82 | There are multiple *benchmark functions* used in this project, and they are divided into several types depending on their shape.
83 |
84 | *For more information and the mathematical definition of the functions see: https://www.sfu.ca/~ssurjano/optimization.html*
85 |
--------------------------------------------------------------------------------
/src/optimisation.egg-info/SOURCES.txt:
--------------------------------------------------------------------------------
1 | LICENSE
2 | README.md
3 | setup.py
4 | src/optimisation.egg-info/PKG-INFO
5 | src/optimisation.egg-info/SOURCES.txt
6 | src/optimisation.egg-info/dependency_links.txt
7 | src/optimisation.egg-info/requires.txt
8 | src/optimisation.egg-info/top_level.txt
--------------------------------------------------------------------------------
/src/optimisation.egg-info/dependency_links.txt:
--------------------------------------------------------------------------------
1 |
2 |
--------------------------------------------------------------------------------
/src/optimisation.egg-info/requires.txt:
--------------------------------------------------------------------------------
1 | click==8.1.3
2 | colorama==0.4.5
3 | numpy==1.23.3
4 |
--------------------------------------------------------------------------------
/src/optimisation.egg-info/top_level.txt:
--------------------------------------------------------------------------------
1 |
2 |
--------------------------------------------------------------------------------
/src/optimisation_algorithms.egg-info/PKG-INFO:
--------------------------------------------------------------------------------
1 | Metadata-Version: 2.1
2 | Name: optimisation-algorithms
3 | Version: 1.1.2
4 | Summary: A collection of the most commonly used Optimisation Algorithms for Data Science & Machine Learning
5 | Home-page: https://github.com/Muradmustafayev-03/Optimisation-Algorithms
6 | Author: Murad Mustafayev
7 | Author-email: murad.mustafayev@ufaz.az
8 | Project-URL: Bug Reports, https://github.com/Muradmustafayev-03/Optimisation-Algorithms/issues
9 | Project-URL: Funding, https://donate.pypi.org
10 | Project-URL: Source, https://github.com/Muradmustafayev-03/Optimisation-Algorithms/
11 | Keywords: optimisation,optimization algorithms,algorithms,metaheuristic,ML
12 | Classifier: Development Status :: 3 - Alpha
13 | Classifier: Intended Audience :: Science/Research
14 | Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
15 | Classifier: License :: OSI Approved :: GNU General Public License v3 (GPLv3)
16 | Classifier: Programming Language :: Python :: 3
17 | Classifier: Programming Language :: Python :: 3.7
18 | Classifier: Programming Language :: Python :: 3.8
19 | Classifier: Programming Language :: Python :: 3.9
20 | Classifier: Programming Language :: Python :: 3.10
21 | Classifier: Programming Language :: Python :: 3 :: Only
22 | Requires-Python: >=3.7, <4
23 | Description-Content-Type: text/markdown
24 | License-File: LICENSE
25 |
26 | # Optimisation-Algorithms
27 | A collection of the most commonly used Optimisation Algorithms for Data Science & Machine Learning
28 |
29 | ---
30 |
31 | This repository is created by and belongs to: https://github.com/Muradmustafayev-03
32 |
33 | Contributing guide: https://github.com/Muradmustafayev-03/Optimisation-Algorithms/blob/main/CONTRIBUTING.md
34 |
35 | To report any issues: https://github.com/Muradmustafayev-03/Optimisation-Algorithms/issues
36 |
37 | To install the package as a library use:
38 | > *pip install optimisation-algorithms*
39 |
40 | Then to import:
41 | > *import optimisation_algorithms*
42 |
43 | ---
44 |
45 | In this project I try to collect as many useful Optimisation Algorithms as possible, and write them in a simple and reusable way.
46 | The idea is to write all these algorithms in Python, in a fundamental yet easy-to-use way, with *numpy* being the only external library used.
47 | The project is currently in the early stages of development, but you can already try it in your own projects.
48 | You are always welcome to contribute or to make suggestions; any feedback is appreciated.
49 |
50 | ## What is an Optimisation Algorithm?
51 | *For more information: https://en.wikipedia.org/wiki/Mathematical_optimization*
52 |
53 | An **Optimisation Algorithm** is an algorithm used to find the input values at the *global minimum*, or less often, the *global maximum*, of a function.
54 |
55 | In this project, all the algorithms look for the *global minimum* of the given function.
56 | However, if you want to find the *global maximum* of a function, you can pass the negation of your function, *-f(x)*, instead of *f(x)*,
57 | since its minimum will be the maximum of your function.
58 |
59 | **Optimization algorithms** are widely used in **Machine Learning**, **Mathematics** and a range of other applied sciences.
60 |
61 | There are multiple kinds of **Optimization algorithms**, so here is a short description of the ones used in this project:
62 |
63 | ### Iterative Algorithms
64 | These algorithms start from a random or specified point and move step by step towards the closest minimum.
65 | They often require the *partial derivatives* or the *gradient* of the function, which requires the function to be differentiable.
66 | These algorithms are simple and work well for *bowl-shaped* functions,
67 | though if the function has more than one minimum, they can get stuck at a *local minimum* instead of finding the *global* one.
68 |
69 | ##### Examples of the *Iterative Algorithms* used in this project are:
70 | - Gradient Descent
71 | - Batch Gradient Descent
72 | - Approximated Batch Gradient Descent
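As a rough illustration of this idea (a minimal self-contained sketch, not the package's actual classes or API):

```python
import numpy as np

def numerical_gradient(f, x, h=1e-6):
    # Central-difference approximation of the gradient of f at x
    eye = np.identity(len(x))
    return np.array([f(x + h * eye[i]) - f(x - h * eye[i]) for i in range(len(x))]) / (2 * h)

def gradient_descent(f, x0, learning_rate=0.1, tol=1e-6, max_iter=10_000):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        grad = numerical_gradient(f, x)
        x -= learning_rate * grad  # step towards the nearest minimum
        if np.linalg.norm(grad) < tol:
            break
    return x, f(x)

# Bowl-shaped function: a single minimum at the origin
x, fx = gradient_descent(lambda v: np.sum(v ** 2), [3.0, -2.0])
```

The gradient descent variants listed above build on this same basic loop.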
73 |
74 | ### Metaheuristic Algorithms
75 | These algorithms start with a set of random solutions,
76 | then competitively choose the best solutions from the set and, based on them,
77 | generate a new set of better solutions, thus evolving with each iteration.
78 | These algorithms don't get stuck at *local minima*, but search directly for the *global* minimum, so they can be used for functions with many local minima.
79 |
80 | ##### Examples of the *Metaheuristic Algorithms* used in this project are:
81 | - Harmony Search Algorithm
82 | - Genetic Algorithm
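A minimal sketch of such a select-and-regenerate loop (a toy illustration, not the package's GeneticAlgorithm or HarmonySearch API):

```python
import numpy as np

rng = np.random.default_rng(42)

def evolve(f, d, pop_size=30, generations=300, bounds=(-5.0, 5.0)):
    # Start from a set of random candidate solutions
    pop = rng.uniform(bounds[0], bounds[1], (pop_size, d))
    for _ in range(generations):
        fitness = np.array([f(ind) for ind in pop])
        # Selection: keep the better half of the population
        survivors = pop[np.argsort(fitness)[: pop_size // 2]]
        # Variation: refill the population with mutated copies of survivors
        children = survivors + rng.normal(0.0, 0.1, survivors.shape)
        pop = np.vstack([survivors, children])
    fitness = np.array([f(ind) for ind in pop])
    best = pop[np.argmin(fitness)]
    return best, f(best)

# Rastrigin function: many local minima, global minimum f(0, 0) = 0
rastrigin = lambda v: 10 * len(v) + np.sum(v ** 2 - 10 * np.cos(2 * np.pi * v))
best, value = evolve(rastrigin, d=2)
```

Because the survivors are always kept, the best solution can only improve from generation to generation.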
83 |
84 | ## Benchmark Functions
85 | *Benchmark functions* are used to test *Optimization Algorithms*, though they can also be used on their own.
86 | There are multiple *benchmark functions* used in this project, and they are divided into several types depending on their shape.
87 |
88 | *For more information and the mathematical definition of the functions see: https://www.sfu.ca/~ssurjano/optimization.html*
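For example, here are two standard benchmark functions of different shapes, sketched from their usual definitions rather than taken from this package's modules:

```python
import numpy as np

def sphere(x):
    # Bowl-shaped benchmark: one global minimum, f(0, ..., 0) = 0
    x = np.asarray(x, dtype=float)
    return float(np.sum(x ** 2))

def rastrigin(x):
    # Many-local-minima benchmark: global minimum f(0, ..., 0) = 0
    x = np.asarray(x, dtype=float)
    return float(10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x)))
```

A bowl-shaped function like `sphere` is an easy target for the iterative algorithms, while `rastrigin`, with its many local minima, is meant for the metaheuristic ones.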
89 |
--------------------------------------------------------------------------------
/src/optimisation_algorithms.egg-info/SOURCES.txt:
--------------------------------------------------------------------------------
1 | LICENSE
2 | README.md
3 | setup.py
4 | src/Optimisation_Algorithms.egg-info/PKG-INFO
5 | src/Optimisation_Algorithms.egg-info/SOURCES.txt
6 | src/Optimisation_Algorithms.egg-info/dependency_links.txt
7 | src/Optimisation_Algorithms.egg-info/requires.txt
8 | src/Optimisation_Algorithms.egg-info/top_level.txt
9 | src/optimisation_algorithms/GeneticAlgorithm.py
10 | src/optimisation_algorithms/GradientDescent.py
11 | src/optimisation_algorithms/HarmonySearch.py
12 | src/optimisation_algorithms/__init__.py
13 | src/optimisation_algorithms.egg-info/PKG-INFO
14 | src/optimisation_algorithms.egg-info/SOURCES.txt
15 | src/optimisation_algorithms.egg-info/dependency_links.txt
16 | src/optimisation_algorithms.egg-info/requires.txt
17 | src/optimisation_algorithms.egg-info/top_level.txt
18 | src/optimisation_algorithms/benchmark_functions/__init__.py
19 | src/optimisation_algorithms/benchmark_functions/bowl_shape.py
20 | src/optimisation_algorithms/benchmark_functions/gradients.py
21 | src/optimisation_algorithms/benchmark_functions/imports.py
22 | src/optimisation_algorithms/benchmark_functions/many_local_minimums.py
23 | src/optimisation_algorithms/benchmark_functions/other.py
24 | src/optimisation_algorithms/benchmark_functions/tests.py
25 | src/optimisation_algorithms/exceptions/FailedToConverge.py
26 | src/optimisation_algorithms/exceptions/__init__.py
--------------------------------------------------------------------------------
/src/optimisation_algorithms.egg-info/dependency_links.txt:
--------------------------------------------------------------------------------
1 |
2 |
--------------------------------------------------------------------------------
/src/optimisation_algorithms.egg-info/requires.txt:
--------------------------------------------------------------------------------
1 | click==8.1.3
2 | colorama==0.4.6
3 | numpy==1.24.2
4 |
--------------------------------------------------------------------------------
/src/optimisation_algorithms.egg-info/top_level.txt:
--------------------------------------------------------------------------------
1 | optimisation_algorithms
2 |
--------------------------------------------------------------------------------
/src/optimisation_algorithms/Algorithmic/GradientDescent.py:
--------------------------------------------------------------------------------
1 | from .GradientDescentAbstract import SimpleGD as Simple, ConjugateGD as Conjugate, ExponentiallyWeightedGD as Exponential
2 | from .GradientDescentAbstract import BatchGD as Batch, StochasticGD as Stochastic, MiniBatchGD as MiniBatch
3 |
4 |
5 | class GradientDescent(Simple, Batch):
6 | def __init__(self, f: callable, d: int, learning_rate: float = 0.1, max_iter: int = 10 ** 5, tol: float = 1e-8,
7 | h: float = 1e-8, rand_min: float = 0, rand_max: float = 1):
8 | Simple.__init__(self, f, d, learning_rate, max_iter, tol, h, rand_min, rand_max)
9 |
10 |
11 | class StochasticGradientDescent(Simple, Stochastic):
12 | def __init__(self, f: callable, d: int, learning_rate: float = 0.1, max_iter: int = 10 ** 5, tol: float = 1e-8,
13 | h: float = 1e-8, rand_min: float = 0, rand_max: float = 1):
14 | Simple.__init__(self, f, d, learning_rate, max_iter, tol, h, rand_min, rand_max)
15 | Stochastic.__init__(self, d, tol)
16 |
17 |
18 | class MiniBatchGradientDescent(Simple, MiniBatch):
19 | def __init__(self, f: callable, d: int, batch_size: int, learning_rate: float = 0.1, max_iter: int = 10 ** 5,
20 | tol: float = 1e-8, h: float = 1e-8, rand_min: float = 0, rand_max: float = 1):
21 | Simple.__init__(self, f, d, learning_rate, max_iter, tol, h, rand_min, rand_max)
22 | MiniBatch.__init__(self, d, batch_size, tol)
23 |
24 |
25 | class ConjugateGradientDescent(Conjugate, Batch):
26 | def __init__(self, f: callable, d: int, max_iter: int = 10 ** 5, tol: float = 1e-8, h: float = 1e-8,
27 | rand_min: float = 0, rand_max: float = 1):
28 | Conjugate.__init__(self, f, d, max_iter, tol, h, rand_min, rand_max)
29 |
30 |
31 | class ConjugateSGD(Conjugate, Stochastic):
32 | def __init__(self, f: callable, d: int, max_iter: int = 10 ** 5, tol: float = 1e-8, h: float = 1e-8,
33 | rand_min: float = 0, rand_max: float = 1):
34 | Conjugate.__init__(self, f, d, max_iter, tol, h, rand_min, rand_max)
35 | Stochastic.__init__(self, d, tol)
36 |
37 |
38 | class ConjugateMiniBatchGD(Conjugate, MiniBatch):
39 | def __init__(self, f: callable, d: int, batch_size: int, max_iter: int = 10 ** 5, tol: float = 1e-8,
40 | h: float = 1e-8, rand_min: float = 0, rand_max: float = 1):
41 | Conjugate.__init__(self, f, d, max_iter, tol, h, rand_min, rand_max)
42 | MiniBatch.__init__(self, d, batch_size, tol)
43 |
44 |
45 | class ExponentiallyWeightedGradientDescent(Exponential, Batch):
46 | def __init__(self, f: callable, d: int, learning_rate: float = 0.1, alpha: float = 0.9, max_iter: int = 10 ** 5,
47 | tol: float = 1e-8, h: float = 1e-8, rand_min: float = 0, rand_max: float = 1):
48 | Exponential.__init__(self, f, d, learning_rate, alpha, max_iter, tol, h, rand_min, rand_max)
49 | Batch.__init__(self, d, tol)
50 |
51 |
52 | class ExponentiallyWeightedSGD(Exponential, Stochastic):
53 | def __init__(self, f: callable, d: int, learning_rate: float = 0.1, alpha: float = 0.9, max_iter: int = 10 ** 5,
54 | tol: float = 1e-8, h: float = 1e-8, rand_min: float = 0, rand_max: float = 1):
55 | Exponential.__init__(self, f, d, learning_rate, alpha, max_iter, tol, h, rand_min, rand_max)
56 | Stochastic.__init__(self, d, tol)
57 |
58 |
59 | class ExponentiallyWeightedMiniBatchGD(Exponential, MiniBatch):
60 | def __init__(self, f: callable, d: int, batch_size: int, learning_rate: float = 0.1, alpha: float = 0.9,
61 | max_iter: int = 10 ** 5, tol: float = 1e-8, h: float = 1e-8, rand_min: float = 0, rand_max: float = 1):
62 | Exponential.__init__(self, f, d, learning_rate, alpha, max_iter, tol, h, rand_min, rand_max)
63 | MiniBatch.__init__(self, d, batch_size, tol)
64 |
--------------------------------------------------------------------------------
/src/optimisation_algorithms/Algorithmic/GradientDescentAbstract.py:
--------------------------------------------------------------------------------
1 | from abc import ABC, abstractmethod
2 | from typing import Tuple
3 | import numpy as np
4 | import warnings
5 |
6 |
7 | class BaseGD(ABC):
8 | """
9 | A base abstract class for gradient descent algorithms.
10 |
11 | Methods:
12 | -------
13 | - generate_random_sample() -> np.ndarray[float]:
14 | Generates a random initial sample of dimension d of values between self.rand_min and self.rand_max
15 | - gradient(x: np.ndarray) -> np.ndarray:
16 | Computes the gradient of a function f at point x.
17 | - fit(maximize: bool = False) -> Tuple[np.ndarray, float]:
18 | Abstract method that finds the minimum or maximum of a function f using batch gradient descent
19 | starting from a random point.
20 | - fit_multiple(self, num_runs: int = 10, maximize: bool = False) -> Tuple[np.ndarray, float]:
21 | Perform multiple runs of the optimization routine and return the best result.
22 | - _selection(**kwargs) -> np.ndarray:
23 | Abstract method that selects a subset of features to use in the optimization process.
24 |
25 | Raises:
26 | ------
27 | - NotImplementedError:
28 | If either of the abstract methods is not implemented in a subclass.
29 | """
30 |
31 | @abstractmethod
32 | def __init__(self, f: callable, d: int, h: float = 1e-8, rand_min: float = 0, rand_max: float = 1):
33 | self.f = f
34 | self.d = d
35 | self.h = h
36 | self.rand_min = rand_min
37 | self.rand_max = rand_max
38 |
39 | def generate_random_sample(self) -> np.ndarray[float]:
40 | """
41 | Generates a random initial sample of dimension d of values between self.rand_min and self.rand_max
42 |
43 | Returns:
44 | -------
45 | - x : np.ndarray
46 | An array of randomized values.
47 | """
48 | return np.random.uniform(self.rand_min, self.rand_max, self.d)
49 |
50 | def gradient(self, x: np.ndarray) -> np.ndarray:
51 | """
52 | Computes the gradient of a function f at point x.
53 |
54 | Parameters:
55 | ----------
56 | - x : numpy.ndarray
57 | An array representing the point at which to compute the gradient.
58 | Returns:
59 | -------
60 | - numpy.ndarray:
61 | An array representing the gradient of the function self.f at point x.
62 | """
63 |
64 | identity = np.identity(len(x))
65 | gradient = np.array(
66 | [self.f(x + self.h * identity[i]) - self.f(x - self.h * identity[i]) for i in range(len(x))]
67 | ) / (2 * self.h)
68 | return gradient
69 |
70 | @abstractmethod
71 | def fit(self, maximize: bool = False) -> Tuple[np.ndarray, float]:
72 | """
73 | Finds the minimum or maximum of a function f using batch gradient descent starting from a random point.
74 |
75 | Parameters:
76 | ----------
77 | - maximize : bool (default: False)
78 | If True, the method will find the maximum of the function. Otherwise, the default is False, and the method
79 | will find the minimum of the function.
80 | Returns:
81 | -------
82 | - x : np.ndarray
83 | The parameter values at the minimum or maximum of the function.
84 | - f(x) : float
85 | The value of the function at the minimum or maximum.
86 | Raises:
87 | ------
88 | - RuntimeWarning:
89 | Gradient failed to converge within the maximum number of iterations.
90 | """
91 |
92 | def fit_multiple(self, num_runs: int = 10, maximize: bool = False) -> Tuple[np.ndarray, float]:
93 | """
94 | Perform multiple runs of the optimization routine and return the best result.
95 |
96 | Parameters:
97 | -----------
98 |         - num_runs : int (default: 10)
99 | The number of optimization runs to perform.
100 | - maximize : bool (default: False)
101 | Whether to maximize or minimize the objective function.
102 | Returns:
103 | --------
104 | - x : np.ndarray
105 | The parameter values at the minimum or maximum of the function.
106 | - f(x) : float
107 | The value of the function at the minimum or maximum.
108 | """
109 |         best_solution, best_val = None, -np.inf if maximize else np.inf
110 |         for _ in range(num_runs):
111 |             x, f_x = self.fit(maximize)
112 |             if (f_x > best_val) if maximize else (f_x < best_val):
113 |                 best_solution, best_val = x, f_x
114 |         return best_solution, best_val
115 |
116 | @abstractmethod
117 | def _selection(self, **kwargs) -> np.ndarray:
118 | """
119 | Selects a subset of features to use in the optimization process.
120 | """
121 |
122 |
123 | class BatchGD(BaseGD, ABC):
124 | """
125 | Abstract class to be inherited for Batch Gradient Descent.
126 | """
127 |
128 | @abstractmethod
129 | def __init__(self, d: int, tol: float = 1e-8):
130 | self.d = d
131 | self.tol = tol
132 |
133 | def _selection(self) -> np.ndarray:
134 | return np.arange(self.d)
135 |
136 |
137 | class MiniBatchGD(BatchGD, ABC):
138 | """
139 | Abstract class to be inherited for Mini-Batch Gradient Descent.
140 | """
141 |
142 | @abstractmethod
143 | def __init__(self, d: int, batch_size: int, tol: float = 1e-8):
144 | self.d = d
145 | self.tol = tol
146 | self.batch_size = batch_size
147 |
148 | def _selection(self) -> np.ndarray:
149 | return np.random.choice(self.d, self.batch_size)
150 |
151 |
152 | class StochasticGD(MiniBatchGD, ABC):
153 | """
154 | Abstract class to be inherited for Stochastic Gradient Descent.
155 | """
156 |
157 | @abstractmethod
158 | def __init__(self, d: int, tol: float = 1e-8):
159 | MiniBatchGD.__init__(self, d=d, batch_size=1, tol=tol)
160 |
161 |
162 | class SimpleGD(BaseGD, ABC):
163 | @abstractmethod
164 | def __init__(self, f: callable, d: int, learning_rate: float = 0.1, max_iter: int = 10 ** 5,
165 | tol: float = 1e-8, h: float = 1e-8, rand_min: float = 0, rand_max: float = 1):
166 | BaseGD.__init__(self, f, d, h, rand_min, rand_max)
167 | self.learning_rate = learning_rate
168 | self.max_iter = max_iter
169 | self.tol = tol
170 |
171 | def fit(self, maximize: bool = False) -> Tuple[np.ndarray, float]:
172 | x = self.generate_random_sample()
173 | sign = 1 if maximize else -1
174 |         for _ in range(self.max_iter):
175 |             indices = self._selection()
176 |             grad = self.gradient(x)  # full gradient; only the selected coordinates are updated
177 |             x[indices] += sign * self.learning_rate * grad[indices]
178 |             if np.linalg.norm(grad) < self.tol:
179 | break
180 | else:
181 | warnings.warn("Gradient failed to converge within the maximum number of iterations.")
182 | return x, self.f(x)
183 |
184 |
185 | class ConjugateGD(BaseGD, ABC):
186 | @abstractmethod
187 | def __init__(self, f: callable, d: int, max_iter: int = 10 ** 5,
188 | tol: float = 1e-8, h: float = 1e-8, rand_min: float = 0, rand_max: float = 1):
189 | BaseGD.__init__(self, f, d, h, rand_min, rand_max)
190 | self.max_iter = max_iter
191 | self.tol = tol
192 |
193 | def fit(self, maximize: bool = False) -> Tuple[np.ndarray, float]:
194 | x = self.generate_random_sample()
195 | sign = 1 if maximize else -1
196 |         r = sign * self.gradient(x)
197 | p = r
198 | for _ in range(self.max_iter):
199 | Ap = self.gradient(p)
200 | alpha = np.dot(r, r) / np.dot(p, Ap)
201 | x += alpha * p
202 | r_new = sign * self.gradient(x)
203 |             if np.linalg.norm(r_new) < self.tol:
204 | break
205 | beta = np.dot(r_new, r_new) / np.dot(r, r)
206 | p = r_new + beta * p
207 | r = r_new
208 | else:
209 | warnings.warn("Gradient failed to converge within the maximum number of iterations.")
210 |
211 | return x, self.f(x)
212 |
213 |
214 | class ExponentiallyWeightedGD(BaseGD, ABC):
215 | @abstractmethod
216 | def __init__(self, f: callable, d: int, learning_rate: float = 0.1, alpha: float = 0.9, max_iter: int = 10 ** 5,
217 | tol: float = 1e-8, h: float = 1e-8, rand_min: float = 0, rand_max: float = 1):
218 | BaseGD.__init__(self, f, d, h, rand_min, rand_max)
219 | self.learning_rate = learning_rate
220 | self.max_iter = max_iter
221 | self.alpha = alpha
222 | self.tol = tol
223 |
224 | def fit(self, maximize: bool = False) -> Tuple[np.ndarray, float]:
225 | x = self.generate_random_sample()
226 | sign = 1 if maximize else -1
227 | v = 0 # Initialize exponentially weighted moving average
228 | for _ in range(self.max_iter):
229 | grad = self.gradient(x)
230 | v = self.alpha * v + (1 - self.alpha) * grad**2
231 | x += sign * self.learning_rate * grad / np.sqrt(v + self.h) # Add the small value to avoid division by zero
232 |             if np.linalg.norm(grad) < self.tol:
233 | break
234 | else:
235 | warnings.warn("Gradient failed to converge within the maximum number of iterations.")
236 | return x, self.f(x)
237 |
--------------------------------------------------------------------------------
/src/optimisation_algorithms/Algorithmic/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Muradmustafayev-03/Optimisation-Algorithms/61480d1b10cb067c161320274b7061fc83caaa5a/src/optimisation_algorithms/Algorithmic/__init__.py
--------------------------------------------------------------------------------
/src/optimisation_algorithms/Heuristic/AntColonyOptimization.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | from PopulationalAbstract import PopulationalOptimization
3 |
4 |
5 | class AntColonyOptimization(PopulationalOptimization):
6 | def __init__(self, f: callable, d: int, min_val: float, max_val: float, step: float = 1, population_size: int = 20,
7 | tol: float = 1e-8, patience: int = 10**3, max_iter: int = 10 ** 5, alpha: float = 1,
8 | beta: float = 3, evaporation_rate: float = 0.5, pheromone_init: float = 0.01):
9 | super().__init__(f, d, population_size, tol, patience, max_iter, min_val, max_val)
10 | self.alpha = alpha
11 | self.beta = beta
12 | self.evaporation_rate = evaporation_rate
13 | self.step = step
14 | self.steps_range = int((max_val - min_val) / step)
15 | self.pheromones = np.full((self.steps_range, self.steps_range), pheromone_init)
16 |
17 | def eval_path(self, path):
18 | return sum(self.f([path[j], path[j + 1]]) for j in range(len(path) - 1))
19 |
20 | def update_population(self, **kwargs):
21 | pheromones = np.copy(self.pheromones)
22 | pheromones *= (1 - self.evaporation_rate)
23 |
24 | for i in range(self.population_size):
25 | path = self.generate_path(pheromones)
26 | path_fitness = self.eval_path(path)
27 | pheromones = self.update_pheromones(pheromones, path, path_fitness)
28 |
29 | self.pheromones = pheromones
30 |
31 | def generate_path(self, pheromones):
32 | allowed_moves = np.arange(self.rand_min, self.rand_max, self.step)
33 | start_node = self.make_move(None, allowed_moves)
34 | allowed_moves = np.setdiff1d(allowed_moves, start_node)
35 | path = [start_node]
36 |
37 | while allowed_moves.size > 1:
38 | current_node = path[-1]
39 | allowed_moves = np.setdiff1d(allowed_moves, current_node)
40 | move_probs = self.calculate_move_probs(current_node, allowed_moves, pheromones)
41 | next_node = self.make_move(move_probs, allowed_moves)
42 | path.append(next_node)
43 |
44 | return np.array(path)
45 |
46 | # def get_allowed_moves(self, current_node):
47 | # allowed_moves = np.arange(self.rand_min, self.rand_max, self.step)
48 | # allowed_moves = np.setdiff1d(allowed_moves, current_node)
49 | # return allowed_moves
50 |
51 |     def _index(self, node: float) -> int:
52 |         # Map a node value back to its integer position in the pheromone
53 |         # matrix; nodes are floats from np.arange and cannot index directly
54 |         return int(round((node - self.rand_min) / self.step))
55 |
56 |     def calculate_move_probs(self, current_node, allowed_moves, pheromones):
57 |         weights = []
58 |
59 |         for move in allowed_moves:
60 |             pheromone = pheromones[self._index(current_node), self._index(move)]
61 |             distance = self.f([current_node, move])
62 |             weights.append((pheromone ** self.alpha) * ((1 / distance) ** self.beta))
63 |
64 |         # Normalise the attractiveness weights into a probability distribution
65 |         weights = np.array(weights, dtype=float).flatten()
66 |
67 |         return weights / weights.sum()
68 |
69 | @staticmethod
70 | def make_move(move_probs, allowed_moves):
71 | return np.random.choice(allowed_moves, p=move_probs)
72 |
73 |     def update_pheromones(self, pheromones, ant_path, ant_fitness):
74 |         # _index maps the float node values to pheromone-matrix positions
75 |         for i in range(len(ant_path) - 1):
76 |             curr_node = self._index(ant_path[i])
77 |             next_node = self._index(ant_path[i + 1])
78 |             pheromones[curr_node, next_node] += ant_fitness
79 |             pheromones[next_node, curr_node] = pheromones[curr_node, next_node]
80 |
81 |         return pheromones
82 |
83 |
84 | def dis(x):
85 | return abs(x[1] - x[0])
86 |
87 |
88 | if __name__ == "__main__":
89 |     aco = AntColonyOptimization(dis, 2, -20, 20)
90 |     ant_path = aco.generate_path(aco.pheromones)
91 |     print("path: ", ant_path)
92 |     print("fitness = ", aco.eval_path(ant_path))
93 |
--------------------------------------------------------------------------------
/src/optimisation_algorithms/Heuristic/GeneticAlgorithm.py:
--------------------------------------------------------------------------------
1 | from PopulationalAbstract import PopulationalOptimization
2 | import numpy as np
3 |
4 |
5 | class GeneticAlgorithm(PopulationalOptimization):
6 | """
7 | Genetic Algorithm optimization algorithm.
8 |
9 | Parameters:
10 | ----------
11 | - f : callable
12 | The objective function to be optimized.
13 | - d : int
14 | The dimensionality of the decision variables.
15 |     - population_size : int (default=50)
16 |         The size of the population.
17 | - mutation_rate : float (default=0.1)
18 | The mutation rate.
19 | - crossover_rate : float (default=0.8)
20 | The crossover rate.
21 | - n_elites : int (default=1)
22 | The number of elite solutions to keep in each generation.
23 | - tol : float (default=1e-8)
24 | The convergence threshold.
25 | - patience : int (default=10**3)
26 | The number of iterations to wait for improvement before stopping the optimization.
27 | - max_iter : int (default=10**5)
28 | The maximum number of iterations to fit.
29 | - rand_min : float (default=0)
30 | The minimum value for random initialization of decision variables.
31 | - rand_max : float (default=1)
32 | The maximum value for random initialization of decision variables.
33 |
34 | Methods:
35 | --------
36 | - eval(pop: np.ndarray) -> np.ndarray:
37 | Evaluates the objective function at each point in the population.
38 | - generate_population()
39 | Generates an initial population.
40 | - select(pop: np.ndarray, fitness: np.ndarray) -> np.ndarray:
41 | Selects individuals for mating based on their fitness.
42 | - crossover(parents: np.ndarray) -> np.ndarray:
43 | Combines the decision variables of two parents to create a new offspring.
44 | - mutate(individual: np.ndarray) -> np.ndarray:
45 | Mutates an individual by randomly adjusting its decision variables.
46 | - elitism(pop: np.ndarray, fitness: np.ndarray, n_elites: int) -> np.ndarray:
47 | Selects the elite solutions from the population.
48 | - fit(maximize: bool = False) -> Tuple[np.ndarray, float]:
49 | Finds the optimal solution for the given objective function.
50 | """
51 | def __init__(self, f: callable, d: int, population_size: int = 50, mutation_rate: float = 0.1,
52 | crossover_rate: float = 0.8, n_elites: int = 1, tol: float = 1e-8, patience: int = 10 ** 3,
53 | max_iter: int = 10 ** 5, rand_min: float = 0, rand_max: float = 1):
54 |
55 | super().__init__(f, d, population_size, tol, patience, max_iter, rand_min, rand_max)
56 | self.mutation_rate = mutation_rate
57 | self.crossover_rate = crossover_rate
58 | self.n_elites = n_elites
59 |         # Derived counts for parent selection, mutation and crossover
60 | self.n_parents = int(np.ceil((self.population_size - self.n_elites) / 2))
61 | self.n_mutations = int(np.ceil(self.mutation_rate * self.population_size))
62 | self.n_crossovers = int(np.ceil(self.crossover_rate * self.population_size))
63 |
64 | def select(self, fitness: np.ndarray) -> np.ndarray:
65 | """
66 | Selects individuals for mating based on their fitness.
67 |
68 | Parameters:
69 | ----------
70 |         - fitness : numpy.ndarray
71 |             A numpy array representing the fitness values of each individual in the population.
72 |
73 |         Returns:
74 |         -------
75 |         - numpy.ndarray:
76 |             An array of indices of the individuals selected for mating.
79 | """
80 |         # Roulette-wheel (fitness-proportionate) selection; assumes non-negative fitness values
81 |         idx = np.random.choice(self.population_size, self.population_size, p=fitness / fitness.sum())
82 |         return idx
82 |
83 | def crossover(self, parents: np.ndarray) -> np.ndarray:
84 | """
85 | Combine the decision variables of two parents to create a new offspring.
86 |
87 | Parameters:
88 | ----------
89 | - parents : np.ndarray of shape (2, self.d)
90 | The decision variables of the two parents.
91 |
92 | Returns:
93 | -------
94 | - child : np.ndarray of shape (self.d,)
95 | The decision variables of the new offspring.
96 | """
97 | crossover_point = np.random.randint(self.d)
98 | offspring = np.concatenate([parents[0][:crossover_point], parents[1][crossover_point:]])
99 | return offspring
100 |
101 | def mutate(self, individual: np.ndarray) -> np.ndarray:
102 | """
103 | Mutates an individual by randomly adjusting its decision variables.
104 |
105 | Parameters:
106 | ----------
107 | - individual : np.ndarray
108 | The individual to mutate.
109 |
110 | Returns:
111 | -------
112 | - np.ndarray
113 | The mutated individual.
114 | """
115 | mask = np.random.rand(*individual.shape) < self.mutation_rate
116 | mutation = np.random.uniform(self.rand_min, self.rand_max, size=individual.shape)
117 | individual[mask] = mutation[mask]
118 | return individual
119 |
120 | @staticmethod
121 | def elitism(pop: np.ndarray, fitness: np.ndarray, n_elites: int) -> np.ndarray:
122 | """
123 | Selects the elite solutions from the population.
124 |
125 | Parameters:
126 | ----------
127 | - pop : np.ndarray
128 | The population of solutions.
129 | - fitness : np.ndarray
130 | The fitness values of each solution in the population.
131 | - n_elites : int
132 | The number of elite solutions to select.
133 |
134 | Returns:
135 | -------
136 | - elite_pop : np.ndarray
137 | The elite solutions.
138 | """
139 | sorted_indices = np.argsort(fitness)
140 | elite_indices = sorted_indices[-n_elites:]
141 | elite_pop = pop[elite_indices, :]
142 | return elite_pop
143 |
144 | def update_population(self, **kwargs):
145 | fitness = kwargs['fitness']
146 | parents_idx = self.select(fitness)[:self.n_parents]
147 |
148 |         # Apply crossover to create offspring: each child combines a random pair of parents
149 |         pair_idx = np.random.choice(parents_idx, size=(self.n_crossovers, 2), replace=True)
150 |         offspring = np.empty((self.population_size, self.d))
151 |         offspring[:self.n_crossovers] = [self.crossover(self.population[pair]) for pair in pair_idx]
152 |         offspring[self.n_crossovers:] = self.population[np.random.choice(parents_idx, size=self.population_size - self.n_crossovers)]
153 |         mutations_idx = np.random.choice(self.population_size, size=self.n_mutations, replace=False)
154 |         offspring[mutations_idx] = self.mutate(offspring[mutations_idx])
155 |
156 | # Select elite solutions to keep
157 | elites = self.elitism(self.population, fitness, self.n_elites)
158 | offspring[:self.n_elites] = elites
159 |
160 | # Replace old population with new offspring
161 | self.population = offspring
162 |
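A minimal, self-contained sketch of the one-point crossover used by `crossover` (the parent values are chosen purely for illustration):

```python
import numpy as np

# One-point crossover: the child takes the head of parent 0 and the tail of parent 1.
rng = np.random.default_rng(0)
d = 6
parents = np.array([[0.0] * d, [1.0] * d])
crossover_point = rng.integers(d)
child = np.concatenate([parents[0][:crossover_point], parents[1][crossover_point:]])
# With these parents the child is a run of 0s followed by a run of 1s.
```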
--------------------------------------------------------------------------------
/src/optimisation_algorithms/Heuristic/HarmonySearch.py:
--------------------------------------------------------------------------------
1 | from PopulationalAbstract import PopulationalOptimization
2 | import numpy as np
3 |
4 |
5 | class HarmonySearch(PopulationalOptimization):
6 | """
7 | Harmony Search optimization algorithm.
8 |
9 | Parameters:
10 | ----------
11 | - f : callable
12 | The objective function to be optimized.
13 | - d : int
14 | The dimensionality of the decision variables.
15 | - hm_size : int (default=30)
16 | The number of harmonies in the harmony memory.
17 | - hmcr : float (default=0.8)
18 | The harmony memory considering rate.
19 | - par : float (default=0.4)
20 | The pitch adjustment rate.
21 | - bandwidth : float (default=1)
22 | The bandwidth for pitch adjustment.
23 | - top_n : int (default=1)
24 | The number of the best harmony solutions to consider for replacement.
25 | - tol : float (default=1e-8)
26 | The convergence threshold.
27 |     - patience : int (default=10**3)
28 | The number of iterations to wait for improvement before stopping the optimization.
29 | - max_iter : int (default=10 ** 5)
30 | The maximum number of iterations to fit.
31 | - rand_min : float (default=0)
32 | The minimum value for random initialization of decision variables.
33 | - rand_max : float (default=1)
34 | The maximum value for random initialization of decision variables.
35 |
36 | Methods:
37 | --------
38 | - eval(hm: np.ndarray) -> np.ndarray:
39 | Evaluates the objective function at each point in the harmony memory.
40 | - generate_population()
41 | Generates an initial population.
42 | - improvise(hm: np.ndarray) -> np.ndarray:
43 | Generates a new harmony by adjusting elements of the input harmony.
44 | - replace_worst_with_best(old_hm: np.ndarray, new_hm: np.ndarray, maximize: bool = False) -> np.ndarray:
45 | Replaces the worst solutions in the harmony memory with the best solutions.
46 | - fit(maximize: bool = False) -> Tuple[np.ndarray, float]:
47 | Finds the optimal solution for the given objective function.
48 | """
49 | def __init__(self, f: callable, d: int, hm_size: int = 30, hmcr: float = 0.8, par: float = 0.4,
50 | bandwidth: float = 1, top_n: int = 1, tol: float = 1e-8, patience: int = 10 ** 3,
51 | max_iter: int = 10 ** 5, rand_min: float = 0, rand_max: float = 1):
52 | super().__init__(f, d, hm_size, tol, patience, max_iter, rand_min, rand_max)
53 | self.hmcr = hmcr
54 | self.par = par
55 | self.bw = bandwidth
56 | self.top_n = top_n
57 |
58 |     def improvise(self, hm: np.ndarray) -> np.ndarray:
59 | """
60 | Generates a new harmony by improvising and adjusting the existing harmony.
61 |
62 | Parameters:
63 | ----------
64 | - hm : numpy.ndarray
65 | A numpy array representing the current harmony to improvise.
66 |
67 | Returns:
68 | -------
69 | - numpy.ndarray:
70 | A numpy array representing the new harmony generated by improvisation.
71 | """
72 |
73 | new_hm = hm.copy()
74 |
75 |         # With probability (1 - hmcr), draw fresh random values instead of memory values
76 |         adjust_mask = np.random.rand(*new_hm.shape) > self.hmcr
77 |         new_hm[adjust_mask] = np.random.uniform(self.rand_min, self.rand_max, adjust_mask.sum())
78 |
79 |         # With probability par, pitch-adjust elements within the bandwidth
80 |         adjust_amounts = np.random.uniform(-self.bw, self.bw, size=new_hm.shape)
81 |         pitch_mask = np.random.rand(*new_hm.shape) < self.par
82 |         new_hm[pitch_mask] += adjust_amounts[pitch_mask]
83 |
84 | return new_hm
85 |
86 |     def replace_worst_with_best(self, old_hm: np.ndarray, new_hm: np.ndarray, maximize: bool = False) -> np.ndarray:
87 | """
88 | Replaces the worst solutions in the old harmony memory with the best solutions in the new harmony memory.
89 |
90 | Parameters:
91 | ----------
92 | - old_hm : numpy.ndarray
93 | The old harmony memory.
94 | - new_hm : numpy.ndarray
95 | The new harmony memory.
96 | - maximize : bool, optional
97 | A boolean indicating whether the optimization problem is maximization or minimization. Defaults to False.
98 |
99 | Returns:
100 | -------
101 | - numpy.ndarray:
102 | The updated old harmony memory after replacing the worst solutions with the best solutions.
103 | """
104 | old_results = self.eval(old_hm)
105 | new_results = self.eval(new_hm)
106 |
107 |         sign = -1 if maximize else 1
108 |
109 |         # Rank candidates: the best new harmonies are paired against the worst old ones
110 |         best_new = np.argsort(new_results)[::sign][:self.top_n]
111 |         worst_old = np.argsort(old_results)[::sign][::-1][:self.top_n]
112 |
113 |         # Replace a worst old solution only when the paired new solution is better
114 |         replace_mask = sign * new_results[best_new] < sign * old_results[worst_old]
115 |         old_hm[worst_old[replace_mask]] = new_hm[best_new[replace_mask]]
116 |
117 | return old_hm
118 |
119 | def update_population(self, **kwargs):
120 | new_population = self.improvise(self.population)
121 | self.population = self.replace_worst_with_best(self.population, new_population, kwargs['maximize'])
122 |
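The improvisation step (memory consideration followed by pitch adjustment) can be sketched on a toy harmony memory; the parameter values below are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(1)
hmcr, par, bw = 0.8, 0.4, 1.0          # hypothetical HS parameters
hm = rng.uniform(0, 1, size=(4, 3))    # toy harmony memory: 4 harmonies, 3 variables
new_hm = hm.copy()

# Memory consideration: with probability 1 - hmcr, draw a fresh random value
adjust_mask = rng.random(new_hm.shape) > hmcr
new_hm[adjust_mask] = rng.uniform(0, 1, adjust_mask.sum())

# Pitch adjustment: with probability par, nudge a value within +/- bw
pitch_mask = rng.random(new_hm.shape) < par
new_hm[pitch_mask] += rng.uniform(-bw, bw, size=new_hm.shape)[pitch_mask]
```

Entries touched by neither mask are carried over from the memory unchanged.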
--------------------------------------------------------------------------------
/src/optimisation_algorithms/Heuristic/PopulationalAbstract.py:
--------------------------------------------------------------------------------
1 | from typing import Tuple
2 | import numpy as np
3 | from abc import ABC, abstractmethod
4 |
5 |
6 | class PopulationalOptimization(ABC):
7 | def __init__(self, f: callable, d: int, population_size: int, tol: float = 1e-8, patience: int = 10**3,
8 | max_iter: int = 10 ** 5, rand_min: float = 0, rand_max: float = 1):
9 | self.f = f
10 | self.d = d
11 | self.tol = tol
12 | self.patience = patience
13 | self.max_iter = max_iter
14 | self.rand_min = rand_min
15 | self.rand_max = rand_max
16 | self.population_size = population_size
17 | self.population = self.generate_population()
18 |
19 |     def eval(self, population: np.ndarray) -> np.ndarray:
20 | """
21 | Evaluates the population to the function.
22 |
23 | Parameters:
24 | ----------
25 | - population : numpy.ndarray
26 | An array representing the population to evaluate.
27 |
28 | Returns:
29 | -------
30 | - numpy.ndarray:
31 |             An array representing the result of evaluating the function at each individual in the population.
32 | """
33 | return np.apply_along_axis(self.f, 1, population)
34 |
35 | def generate_population(self):
36 | """
37 | Generates an initial population
38 | """
39 | return np.random.uniform(self.rand_min, self.rand_max, size=(self.population_size, self.d))
40 |
41 | def _check_improved(self, fitness, improvement_counter, best_fitness, best_solution, maximize):
42 | is_better = fitness > best_fitness if maximize else fitness < best_fitness
43 | if np.any(is_better):
44 | improvement_counter = 0
45 |             best_idx = np.argmax(fitness) if maximize else np.argmin(fitness)
46 |             best_fitness, best_solution = fitness[best_idx], self.population[best_idx].copy()
47 | else:
48 | improvement_counter += 1
49 | return improvement_counter, best_fitness, best_solution
50 |
51 | @abstractmethod
52 | def update_population(self, **kwargs):
53 |         """
54 |         Updates the population in place to produce the next generation.
55 |
56 |         :param kwargs: iteration context passed from fit(), e.g. `fitness` and `maximize`
57 |         """
58 |
59 | def fit(self, maximize: bool = False) -> Tuple[np.ndarray, float]:
60 | """
61 | Finds the optimal solution for the given objective function.
62 |
63 | Parameters:
64 | ----------
65 | - maximize : bool (default: False)
66 | If True, the method will find the maximum of the function. Otherwise, the default is False, and the method
67 | will find the minimum of the function.
68 | Returns:
69 | -------
70 | - best_hm : numpy.ndarray
71 | An array representing the decision variables that optimize the objective function.
72 | - best_val : float
73 | The optimized function value.
74 | """
75 | best_fitness = -np.inf if maximize else np.inf
76 | best_solution = None
77 | improvement_counter = 0
78 |
79 | for _ in range(self.max_iter):
80 | fitness = self.eval(self.population)
81 | improvement_counter, best_fitness, best_solution = self._check_improved(
82 | fitness, improvement_counter, best_fitness, best_solution, maximize)
83 | if improvement_counter >= self.patience:
84 | break
85 |
86 | self.update_population(fitness=fitness, maximize=maximize)
87 |
88 | return best_solution, best_fitness
89 |
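The patience-based early stopping in `fit` can be demonstrated in isolation (the fitness history below is made up):

```python
# Stop once `patience` consecutive evaluations fail to improve on the best value.
patience = 3
best, counter = float("inf"), 0
history = [5.0, 4.0, 4.0, 4.0, 4.0, 1.0]  # hypothetical best-fitness trace
stopped_at = None
for i, value in enumerate(history):
    if value < best:
        best, counter = value, 0
    else:
        counter += 1
    if counter >= patience:
        stopped_at = i
        break
# The run stops at index 4, before ever seeing the improvement at index 5.
```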
--------------------------------------------------------------------------------
/src/optimisation_algorithms/Heuristic/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Muradmustafayev-03/Optimisation-Algorithms/61480d1b10cb067c161320274b7061fc83caaa5a/src/optimisation_algorithms/Heuristic/__init__.py
--------------------------------------------------------------------------------
/src/optimisation_algorithms/MultiObjective/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Muradmustafayev-03/Optimisation-Algorithms/61480d1b10cb067c161320274b7061fc83caaa5a/src/optimisation_algorithms/MultiObjective/__init__.py
--------------------------------------------------------------------------------
/src/optimisation_algorithms/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Muradmustafayev-03/Optimisation-Algorithms/61480d1b10cb067c161320274b7061fc83caaa5a/src/optimisation_algorithms/__init__.py
--------------------------------------------------------------------------------
/src/optimisation_algorithms/benchmark_functions/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Muradmustafayev-03/Optimisation-Algorithms/61480d1b10cb067c161320274b7061fc83caaa5a/src/optimisation_algorithms/benchmark_functions/__init__.py
--------------------------------------------------------------------------------
/src/optimisation_algorithms/benchmark_functions/bowl_shape.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 |
3 |
4 | def bohachevsky_n1(x: np.ndarray) -> float:
5 | """
6 | The Bohachevsky function N. 1.
7 | Typically, evaluated on the input domain [-100, 100] x [-100, 100].
8 |
9 | Dimensions: 2
10 | Global optimum: f(0, 0) = 0
11 |
12 | Arguments:
13 | ---------
14 | - x: a NumPy array of shape (2,) representing the point at which to evaluate the function
15 |
16 | Returns:
17 | -------
18 | - The value of the first Bohachevsky function at point x
19 | """
20 | x1, x2 = x
21 | return x1 ** 2 + 2 * x2 ** 2 - 0.3 * np.cos(3 * np.pi * x1) - 0.4 * np.cos(4 * np.pi * x2) + 0.7
22 |
23 |
24 | def bohachevsky_n2(x: np.ndarray) -> float:
25 | """
26 | The Bohachevsky function N. 2.
27 | Typically, evaluated on the input domain [-100, 100] x [-100, 100].
28 |
29 | Dimensions: 2
30 | Global optimum: f(0, 0) = 0
31 |
32 | Arguments:
33 | ---------
34 | - x: a NumPy array of shape (2,) representing the point at which to evaluate the function
35 |
36 | Returns:
37 | -------
38 | - The value of the second Bohachevsky function at point x
39 | """
40 | x1, x2 = x
41 | return x1 ** 2 + 2 * x2 ** 2 - 0.3 * np.cos(3 * np.pi * x1) * np.cos(4 * np.pi * x2) + 0.3
42 |
43 |
44 | def bohachevsky_n3(x: np.ndarray) -> float:
45 | """
46 | The Bohachevsky function N. 3.
47 | Typically, evaluated on the input domain [-100, 100] x [-100, 100].
48 |
49 | Dimensions: 2
50 | Global optimum: f(0, 0) = 0
51 |
52 | Arguments:
53 | ---------
54 | - x: a NumPy array of shape (2,) representing the point at which to evaluate the function
55 |
56 | Returns:
57 | -------
58 |     - The value of the third Bohachevsky function at point x
59 | """
60 | x1, x2 = x
61 | return x1 ** 2 + 2 * x2 ** 2 - 0.3 * np.cos(3 * np.pi * x1 + 4 * np.pi * x2) + 0.3
62 |
63 |
64 | def perm0(x: np.ndarray, beta: float) -> float:
65 |     """
66 |     The Perm Function 0, d, beta.
67 |     Typically, evaluated on the input domain [-d, d]^d.
68 |
69 |     Dimensions: d
70 |     Global optimum: f(1, 1/2, ..., 1/d) = 0
71 |
72 |     Arguments:
73 |     ---------
74 |     - x: a NumPy array of shape (d,) representing the point at which to evaluate the function
75 |     - beta: a float parameter controlling the "sharpness" of the function
76 |
77 |     Returns:
78 |     -------
79 |     - The value of the Perm Function 0 at point x
80 |     """
81 |     d = len(x)
82 |     j = np.arange(1, d + 1)
83 |     return sum(np.sum((j + beta) * (np.power(x, i) - 1.0 / np.power(j, i))) ** 2 for i in range(1, d + 1))
85 |
86 |
87 | def rotated_hyper_ellipsoid(x: np.ndarray) -> np.ndarray:
88 | """
89 | The Rotated Hyper-Ellipsoid Function.
90 | Typically, evaluated on the input domain [-65.536, 65.536]^d.
91 |
92 | Dimensions: d
93 | Global optimum: f(0,...,0) = 0
94 |
95 | Arguments:
96 | ---------
97 | - x: a NumPy array of shape (d,) representing the point at which to evaluate the function
98 |
99 | Returns:
100 | - The value of the Rotated Hyper-Ellipsoid Function at point x
101 | """
102 | d = len(x)
103 | return np.sum(np.power(np.dot(np.tril(np.ones((d, d))), x), 2))
104 |
105 |
106 | def sphere(x: np.ndarray) -> np.ndarray:
107 | """
108 | The Sphere Function.
109 | Typically, evaluated on the input domain [-5.12, 5.12]^d.
110 |
111 | Dimensions: d
112 | Global optimum: f(0,...,0) = 0
113 |
114 | Arguments:
115 | ---------
116 | - x: a NumPy array of shape (d,) representing the point at which to evaluate the function
117 |
118 | Returns:
119 | -------
120 | - The value of the Sphere Function at point x
121 | """
122 | return np.sum(np.power(x, 2))
123 |
124 |
125 | def sum_of_different_powers(x: np.ndarray) -> np.ndarray:
126 | """
127 | The Sum of Different Powers Function.
128 | Typically, evaluated on the input domain [-1, 1]^d.
129 |
130 | Dimensions: d
131 | Global optimum: f(0,...,0) = 0
132 |
133 | Arguments:
134 | ---------
135 | - x: a NumPy array of shape (d,) representing the point at which to evaluate the function
136 |
137 | Returns:
138 | -------
139 | - The value of the Sum of Different Powers Function at point x
140 | """
141 | d = len(x)
142 |     powers = np.arange(2, d + 2)  # exponents i + 1 for i = 1, ..., d
143 | return np.sum(np.power(np.abs(x), powers))
144 |
145 |
146 | def sum_squares(x: np.ndarray) -> np.ndarray:
147 | """
148 | The Sum Squares Function.
149 | Typically, evaluated on the input domain [-10, 10]^d.
150 |
151 | Dimensions: d
152 | Global optimum: f(0,...,0) = 0
153 |
154 | Arguments:
155 | ---------
156 | - x: a NumPy array of shape (d,) representing the point at which to evaluate the function
157 |
158 | Returns:
159 | -------
160 | - The value of the Sum Squares Function at point x
161 | """
162 | d = len(x)
163 | return np.sum(np.arange(1, d+1) * np.power(x, 2))
164 |
165 |
166 | def trid(x: np.ndarray) -> np.ndarray:
167 | """
168 | The Trid Function.
169 | Typically, evaluated on the input domain [-d^2, d^2]^d.
170 |
171 | Dimensions: d
172 |     Global optimum: f(x*) = -d(d+4)(d-1)/6, at x*_i = i(d+1-i)
173 |
174 | Arguments:
175 | ---------
176 | - x: a NumPy array of shape (d,) representing the point at which to evaluate the function
177 |
178 | Returns:
179 | -------
180 | - The value of the Trid Function at point x
181 | """
182 | return np.sum(np.power(x - 1, 2)) - np.sum(x[1:] * x[:-1])
183 |
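The stated optima of the simpler bowl-shaped functions are easy to verify at the origin; this sketch re-implements `sphere` and `sum_squares` locally so it runs on its own:

```python
import numpy as np

def sphere(x):
    # f(x) = sum(x_i^2); minimum 0 at the origin
    return np.sum(np.power(x, 2))

def sum_squares(x):
    # f(x) = sum(i * x_i^2); minimum 0 at the origin
    d = len(x)
    return np.sum(np.arange(1, d + 1) * np.power(x, 2))

origin = np.zeros(4)
```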
--------------------------------------------------------------------------------
/src/optimisation_algorithms/benchmark_functions/concave.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 |
3 |
4 | def perm_function(x: np.ndarray, beta: float) -> float:
5 |     """
6 |     The Perm Function d, beta.
7 |     Typically, evaluated on the input domain [-d, d]^d.
8 |
9 |     Dimensions: d
10 |     Global optimum: f(1, 2, ..., d) = 0
11 |
12 |     Arguments:
13 |     ---------
14 |     - x: a NumPy array of shape (d,) representing the point at which to evaluate the function
15 |     - beta: a scalar parameter
16 |
17 |     Returns:
18 |     - The value of the Perm Function at point x with parameter beta
19 |     """
20 |     d = len(x)
21 |     j = np.arange(1, d + 1)
22 |     return sum(np.sum((np.power(j, i) + beta) * (np.power(x / j, i) - 1)) ** 2 for i in range(1, d + 1))
23 |
24 |
25 | def power_sum(x: np.ndarray, b: float = 2.0) -> float:
26 | """
27 | The Power Sum Function.
28 | Typically, evaluated on the input domain [-1, 1]^d.
29 |
30 | Dimensions: d
31 | Global optimum: f(0,...,0) = 0
32 |
33 | Arguments:
34 | ---------
35 | - x: a NumPy array of shape (d,) representing the point at which to evaluate the function
36 | - b: a float value that controls the steepness of the valleys (default is 2.0)
37 |
38 | Returns:
39 | - The value of the Power Sum Function at point x
40 | """
41 | return np.sum(np.power(np.abs(x), b)) + np.power(np.sum(np.abs(x)), b)
42 |
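As a quick sanity check, the Power Sum variant above evaluates to 0 at the origin; it is re-implemented locally here so the sketch is self-contained:

```python
import numpy as np

def power_sum(x, b=2.0):
    # sum of |x_i|^b plus (sum of |x_i|)^b; minimum 0 at the origin
    return np.sum(np.power(np.abs(x), b)) + np.power(np.sum(np.abs(x)), b)
```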
--------------------------------------------------------------------------------
/src/optimisation_algorithms/benchmark_functions/many_local_minimums.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 |
3 |
4 | def ackley(x: np.ndarray, a: int = 20, b: float = 0.2, c: float = 2 * np.pi) -> float:
5 | """
6 | The Ackley Function.
7 | Typically, evaluated on the input domain [-32.768, 32.768]^d.
8 |
9 | Dimensions: d
10 | Global optimum: f(0,...,0) = 0
11 |
12 | Arguments:
13 | ---------
14 | - x: a NumPy array of shape (d,) representing the point at which to evaluate the function
15 |
16 | Returns:
17 | -------
18 | - The value of the Ackley Function at point x
19 | """
20 | d = x.shape[0]
21 | term1 = -b * np.sqrt(np.sum(np.power(x, 2)) / d)
22 | term2 = np.sum(np.cos(c * x)) / d
23 | return -a * np.exp(term1) - np.exp(term2) + a + np.exp(1)
24 |
25 |
26 | def bukin(x: np.ndarray) -> float:
27 | """
28 | The Bukin Function N. 6.
29 | Typically, evaluated on the input domain [-15, -5] x [-3, 3].
30 |
31 | Dimensions: 2
32 | Global optimum: f(-10, 1) = 0
33 |
34 | Arguments:
35 | ---------
36 | - x: a NumPy array of shape (2,) representing the point at which to evaluate the function
37 |
38 | Returns:
39 | -------
40 | - The value of the Bukin Function N. 6 at point x
41 | """
42 | x1, x2 = x
43 | term1 = 100 * np.sqrt(np.abs(x2 - 0.01 * np.power(x1, 2)))
44 | term2 = 0.01 * np.abs(x1 + 10)
45 | return term1 + term2
46 |
47 |
48 | def cross_in_tray(x: np.ndarray) -> float:
49 | """
50 | The Cross-in-Tray Function.
51 | Typically, evaluated on the input domain [-10, 10]^2.
52 |
53 | Dimensions: 2
54 | Global optimum: f(1.3491, -1.3491) = -2.06261
55 |
56 | Arguments:
57 | ---------
58 | - x: a NumPy array of shape (2,) representing the point at which to evaluate the function
59 |
60 | Returns:
61 | -------
62 | - The value of the Cross-in-Tray Function at point x
63 | """
64 | x1, x2 = x
65 | term1 = np.abs(100 - np.sqrt(x1**2 + x2**2) / np.pi)
66 | term2 = np.abs(np.sin(x1) * np.sin(x2) * np.exp(term1))
67 | return -0.0001 * np.power(term2 + 1, 0.1)
68 |
69 |
70 | def drop_wave(x: np.ndarray) -> float:
71 | """
72 | The Drop-Wave Function.
73 |     Typically, evaluated on the input domain [-5.12, 5.12]^2.
74 |
75 | Dimensions: 2
76 | Global optimum: f(0,0) = -1
77 |
78 | Arguments:
79 | ---------
80 | - x: a NumPy array of shape (2,) representing the point at which to evaluate the function
81 |
82 | Returns:
83 | -------
84 | - The value of the Drop-Wave Function at point x
85 | """
86 | x1, x2 = x
87 | numerator = 1 + np.cos(12 * np.sqrt(x1**2 + x2**2))
88 | denominator = 0.5 * (x1**2 + x2**2) + 2
89 | return -numerator / denominator
90 |
91 |
92 | def eggholder(x: np.ndarray) -> float:
93 | """
94 | The Eggholder function.
95 | Typically, evaluated on the input domain [-512, 512]^2.
96 |
97 | Dimensions: 2
98 | Global optimum: f(512, 404.2319) = -959.6407
99 |
100 | Arguments:
101 | ---------
102 | - x: a NumPy array of shape (2,) representing the point at which to evaluate the function
103 |
104 | Returns:
105 | -------
106 | - The value of the Eggholder function at point x
107 | """
108 | x1, x2 = x
109 | term1 = -(x2 + 47) * np.sin(np.sqrt(np.abs(x2 + x1 / 2 + 47)))
110 | term2 = -x1 * np.sin(np.sqrt(np.abs(x1 - (x2 + 47))))
111 | return term1 + term2
112 |
113 |
114 | def gramacy_lee(x: float) -> float:
115 | """
116 | The Gramacy & Lee (2012) function.
117 |     Typically, evaluated on the input domain [0.5, 2.5].
118 |
119 |     Dimensions: 1
120 |     Global optimum: f(0.548563) ≈ -0.869011
121 |
122 | Arguments:
123 | ---------
124 | - x: a float representing the point at which to evaluate the function
125 |
126 | Returns:
127 | -------
128 | - The value of the Gramacy & Lee (2012) function at point x
129 | """
130 | term1 = np.sin(10 * np.pi * x) / (2 * x)
131 | term2 = (x - 1) ** 4
132 | return term1 + term2 - 0.5
133 |
134 |
135 | def griewank(x: np.ndarray) -> float:
136 | """
137 | The Griewank function.
138 | Typically, evaluated on the input domain [-600, 600]^d.
139 |
140 | Dimensions: d
141 | Global optimum: f(0, ..., 0) = 0
142 |
143 | Arguments:
144 | ---------
145 | - x: a NumPy array of shape (d,) representing the point at which to evaluate the function
146 |
147 | Returns:
148 | -------
149 | - The value of the Griewank function at point x
150 | """
151 | d = x.shape[0]
152 | term1 = np.sum(np.power(x, 2)) / 4000
153 | term2 = np.prod(np.cos(x / np.sqrt(np.arange(1, d + 1))))
154 | return 1 + term1 - term2
155 |
156 |
157 | def holder_table(x: np.ndarray) -> float:
158 | """
159 | The Holder Table function.
160 | Typically, evaluated on the input domain [-10, 10]^2.
161 |
162 | Dimensions: 2
163 | Global optimum: f(8.05502, 9.66459) = -19.2085
164 |
165 | Arguments:
166 | ---------
167 | - x: a NumPy array of shape (2,) representing the point at which to evaluate the function
168 |
169 | Returns:
170 | -------
171 | - The value of the Holder Table function at point x
172 | """
173 | x1, x2 = x
174 | term1 = -np.abs(np.sin(x1) * np.cos(x2) * np.exp(np.abs(1 - np.sqrt(x1 ** 2 + x2 ** 2) / np.pi)))
175 | return term1
176 |
177 |
178 | def langermann(x: np.ndarray, A: np.ndarray = None, c: np.ndarray = None) -> float:
179 |     """
180 |     The Langermann function.
181 |     Typically, evaluated in the domain [0, 10]^d, where d is the number of input dimensions.
182 |
183 |     Dimensions: d
184 |     Global optimum: Unknown (depends on A and c)
185 |
186 |     Arguments:
187 |     ---------
188 |     - x: a NumPy array of shape (d,) representing the point at which to evaluate the function
189 |     - A: a NumPy array of shape (m, d) containing the m centres
190 |     - c: a NumPy array of shape (m,) containing the m weights
191 |
192 |     Returns:
193 |     -------
194 |     - The value of the Langermann function at point x
195 |     """
196 |     if A is None:
197 |         A = np.random.rand(5, x.shape[0])
198 |     if c is None:
199 |         c = np.random.rand(5)
200 |
201 |     sq_dist = np.sum(np.square(x - A), axis=1)
202 |     return np.sum(c * np.exp(-sq_dist / np.pi) * np.cos(np.pi * sq_dist))
206 |
207 |
208 | def levy(x: np.ndarray) -> float:
209 | """
210 | The Levy function.
211 | Typically, evaluated in the domain [-10, 10]^d, where d is the number of input dimensions.
212 |
213 | Dimensions: d
214 | Global optimum: f(1, 1, ..., 1) = 0
215 |
216 | Arguments:
217 | ---------
218 | - x: a NumPy array of shape (d,) representing the point at which to evaluate the function
219 |
220 | Returns:
221 | -------
222 | - The value of the Levy function at point x
223 | """
224 | w = 1 + (x - 1) / 4
225 | term1 = (np.sin(np.pi * w[0])) ** 2
226 | term2 = np.sum((w[:-1] - 1) ** 2 * (1 + 10 * (np.sin(np.pi * w[:-1] + 1)) ** 2))
227 | term3 = (w[-1] - 1) ** 2 * (1 + (np.sin(2 * np.pi * w[-1])) ** 2)
228 | return term1 + term2 + term3
229 |
230 |
231 | def levy_n13(x: np.ndarray) -> float:
232 |     """
233 |     The Levy Function N. 13.
234 |     Typically, evaluated on the square xi ∈ [-10, 10], for all i = 1, 2.
235 |
236 |     Dimensions: 2
237 |     Global optimum: f(1, 1) = 0
238 |
239 |     Arguments:
240 |     ----------
241 |     - x: a NumPy array of shape (2,) representing the point at which to evaluate the function
242 |
243 |     Returns:
244 |     --------
245 |     - The value of the Levy Function N. 13 at point x
246 |     """
247 |     x1, x2 = x
248 |     term1 = np.sin(3 * np.pi * x1) ** 2
249 |     term2 = (x1 - 1) ** 2 * (1 + np.sin(3 * np.pi * x2) ** 2)
250 |     term3 = (x2 - 1) ** 2 * (1 + np.sin(2 * np.pi * x2) ** 2)
251 |     return term1 + term2 + term3
253 |
254 |
255 | def rastrigin(x: np.ndarray) -> float:
256 | """
257 | The Rastrigin Function.
258 | Typically, evaluated on the hypercube xi ∈ [-5.12, 5.12], for all i = 1, …, d.
259 |
260 | Dimensions: d
261 | Global optimum: f(0,...,0) = 0
262 |
263 | Arguments:
264 | ----------
265 | - x: a NumPy array of shape (n,) representing the point at which to evaluate the function
266 |
267 | Returns:
268 | --------
269 | - The value of the Rastrigin Function at point x
270 | """
271 | d = x.shape[0]
272 | return 10 * d + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))
273 |
274 |
275 | def schaffer_n2(x: np.ndarray) -> float:
276 | """
277 | The Schaffer Function N. 2.
278 |
279 | Domain: [-100, 100]^2
280 | Dimensions: 2
281 | Global minimum: f(0,0) = 0
282 |
283 | Arguments:
284 | ----------
285 | - x: a numpy array of shape (2,) representing the point at which to evaluate the function
286 |
287 | Returns:
288 | --------
289 | - The value of the Schaffer Function N. 2 at point x
290 | """
291 | x1, x2 = x
292 | numerator = np.square(np.sin(np.sqrt(x1 ** 2 + x2 ** 2))) - 0.5
293 | denominator = np.square(1 + 0.001 * (x1 ** 2 + x2 ** 2))
294 | return 0.5 + numerator / denominator
295 |
296 |
297 | def schaffer_n4(x: np.ndarray) -> float:
298 | """
299 | The Schaffer Function N. 4.
300 |
301 | Domain: [-100, 100]^2
302 | Dimensions: 2
303 | Global minimum: f(0, ±1.25313) = 0.292579
304 |
305 | Arguments:
306 | ----------
307 | - x: a numpy array of shape (2,) representing the point at which to evaluate the function
308 |
309 | Returns:
310 | --------
311 | - The value of the Schaffer Function N. 4 at point x
312 | """
313 | x1, x2 = x
314 | term1 = np.cos(np.sin(np.abs(x1 ** 2 - x2 ** 2)))
315 | term2 = 1 + 0.001 * (x1 ** 2 + x2 ** 2)
316 | return 0.5 + (term1 ** 2 - 0.5) / (term2 ** 2)
317 |
318 |
319 | def schwefel(x: np.ndarray) -> float:
320 | """
321 | The Schwefel Function.
322 | Typically evaluated on the hypercube xi ∈ [-500, 500], for all i = 1, …, d.
323 |
324 | Dimensions: d
325 | Global minimum: f(x*) = 0 at x* = (420.9687,..., 420.9687)
326 |
327 | Arguments:
328 | ----------
329 | - x: a numpy array of shape (n,) representing the point at which to evaluate the function
330 |
331 | Returns:
332 | --------
333 | - The value of the Schwefel Function at point x
334 | """
335 | d = x.shape[0]
336 | return 418.9829 * d - np.sum(x * np.sin(np.sqrt(np.abs(x))))
337 |
338 |
339 | def shubert(x: np.ndarray) -> float:
340 | """
341 | The Shubert Function.
342 | The function is usually evaluated on the square xi ∈ [-10, 10], for all i = 1, 2,
343 | although this may be restricted to the square xi ∈ [-5.12, 5.12], for all i = 1, 2.
344 |
345 | Dimensions: 2
346 | Global optimum: f(x*) = -186.7309 (18 global minima)
347 |
348 | Arguments:
349 | ---------
350 | - x: a NumPy array of shape (2,) representing the point at which to evaluate the function
351 |
352 | Returns:
353 | -------
354 | - The value of the Shubert Function at point x
355 | """
356 | j = np.arange(1, 6)
357 | # product over dimensions of sum_{j=1}^{5} j * cos((j + 1) * x_i + j)
358 | terms = j * np.cos(np.outer(x, j + 1) + j)
359 | return float(np.prod(np.sum(terms, axis=1)))
360 |
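Since these benchmarks exist mainly to validate optimisers, a quick standalone sanity check is useful. The sketch below duplicates two of the definitions above (so it runs on its own, without importing the package) and evaluates them at their documented optima:

```python
import numpy as np

# Local copies of two functions from multi_modal.py, so the check is self-contained.
def rastrigin(x: np.ndarray) -> float:
    d = x.shape[0]
    return 10 * d + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))

def schwefel(x: np.ndarray) -> float:
    d = x.shape[0]
    return 418.9829 * d - np.sum(x * np.sin(np.sqrt(np.abs(x))))

print(rastrigin(np.zeros(5)))          # 0.0 at the global optimum
print(schwefel(np.full(5, 420.9687)))  # close to 0 (the constant 418.9829 is a rounded value)
```

Schwefel is only near-zero at its optimum because both 420.9687 and 418.9829 are truncated constants; a tolerance-based comparison is appropriate.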
--------------------------------------------------------------------------------
/src/optimisation_algorithms/benchmark_functions/plate_shape.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 |
3 |
4 | def booth(x: np.ndarray) -> float:
5 | """
6 | The Booth function.
7 | The function is usually evaluated on the square xi ∈ [-10, 10], for all i = 1, 2.
8 |
9 | Dimensions: 2
10 | Global optimum: f(1, 3) = 0
11 |
12 | Arguments:
13 | ---------
14 | - x: a NumPy array of shape (2,) representing the point at which to evaluate the function
15 |
16 | Returns:
17 | -------
18 | - The value of the Booth function at point x
19 | """
20 | x1, x2 = x
21 | return (x1 + 2*x2 - 7)**2 + (2*x1 + x2 - 5)**2
22 |
23 |
24 | def matyas(x: np.ndarray) -> float:
25 | """
26 | The Matyas function.
27 | The function is usually evaluated on the square xi ∈ [-10, 10], for all i = 1, 2.
28 |
29 | Dimensions: 2
30 | Global optimum: f(0, 0) = 0
31 |
32 | Arguments:
33 | ---------
34 | - x: a NumPy array of shape (2,) representing the point at which to evaluate the function
35 |
36 | Returns:
37 | -------
38 | - The value of the Matyas function at point x
39 | """
40 | x1, x2 = x
41 | return 0.26 * (x1**2 + x2**2) - 0.48*x1*x2
42 |
43 |
44 | def mccormick(x: np.ndarray) -> float:
45 | """
46 | The McCormick function.
47 | The function is usually evaluated on the rectangle x1 ∈ [-1.5, 4], x2 ∈ [-3, 4].
48 |
49 | Dimensions: 2
50 | Global optimum: f(-0.54719, -1.54719) = -1.9133
51 |
52 | Arguments:
53 | ---------
54 | - x: a NumPy array of shape (2,) representing the point at which to evaluate the function
55 |
56 | Returns:
57 | -------
58 | - The value of the McCormick function at point x
59 | """
60 | x1, x2 = x
61 | return np.sin(x1 + x2) + (x1 - x2)**2 - 1.5*x1 + 2.5*x2 + 1
62 |
63 |
64 | def zakharov(x: np.ndarray) -> float:
65 | """
66 | Zakharov Function
67 | The function is usually evaluated on the hypercube xi ∈ [-5, 10], for all i = 1, …, d.
68 |
69 | Dimensions: d
70 | Global optimum: f(0,...,0) = 0
71 |
72 | Parameters:
73 | ----------
74 | - x: a NumPy array of shape (d,) representing the point at which to evaluate the function
75 |
76 | Returns:
77 | -------
78 | - The value of the Zakharov function at the given input.
79 |
80 | """
81 | d = len(x)
82 | sum_sq = np.sum(np.square(x))
83 | sum_ix = np.sum(0.5 * np.arange(1, d + 1) * x)
84 | return sum_sq + sum_ix ** 2 + sum_ix ** 4
85 |
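The plate-shaped functions all have closed-form optima, so they are easy to spot-check. A minimal standalone sketch (duplicating two definitions above so it runs without the package):

```python
import numpy as np

# Local copies of booth and matyas from plate_shape.py.
def booth(x: np.ndarray) -> float:
    x1, x2 = x
    return (x1 + 2 * x2 - 7) ** 2 + (2 * x1 + x2 - 5) ** 2

def matyas(x: np.ndarray) -> float:
    x1, x2 = x
    return 0.26 * (x1 ** 2 + x2 ** 2) - 0.48 * x1 * x2

print(booth(np.array([1.0, 3.0])))  # 0.0 at the documented optimum (1, 3)
print(matyas(np.zeros(2)))          # 0.0 at the documented optimum (0, 0)
```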
--------------------------------------------------------------------------------
/src/optimisation_algorithms/benchmark_functions/steep_ridges_n_drops.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 |
3 |
4 | def dejong5(x: np.ndarray) -> float:
5 | """
6 | De Jong Function N. 5 (Shekel's foxholes).
7 | Typically evaluated on the input domain [-65.536, 65.536]^2.
8 |
9 | Dimensions: 2
10 | Global minimum: f(-32, -32) ≈ 0.998
11 |
12 | Arguments:
13 | ---------
14 | - x: a NumPy array of shape (2,) representing the point at which to evaluate the function
15 |
16 | Returns:
17 | - The value of the De Jong Function N. 5 at point x
18 | """
19 | a = np.array([-32.0, -16.0, 0.0, 16.0, 32.0])
20 | A = np.stack([np.tile(a, 5), np.repeat(a, 5)])  # 2 x 25 grid of foxhole centres
21 | i = np.arange(1, 26)
22 | return 1.0 / (0.002 + np.sum(1.0 / (i + (x[0] - A[0]) ** 6 + (x[1] - A[1]) ** 6)))
20 |
21 |
22 | def easom(x: np.ndarray) -> float:
23 | """
24 | The Easom Function.
25 | Typically evaluated on the input domain [-100, 100]^2.
26 |
27 | Dimensions: 2
28 | Global optimum: f(pi, pi) = -1
29 |
30 | Arguments:
31 | ---------
32 | - x: a NumPy array of shape (2,) representing the point at which to evaluate the function
33 |
34 | Returns:
35 | - The value of the Easom Function at point x
36 | """
37 | return -np.cos(x[0]) * np.cos(x[1]) * np.exp(-np.square(x[0] - np.pi) - np.square(x[1] - np.pi))
38 |
39 |
40 | def michalewicz(x: np.ndarray, m: int = 10) -> float:
41 | """
42 | The Michalewicz Function.
43 | Typically evaluated on the input domain [0, pi]^d.
44 |
45 | Dimensions: d
46 | Global optimum: depends on d (e.g. f(x*) ≈ -1.8013 for d = 2)
47 |
48 | Arguments:
49 | ---------
50 | - x: a NumPy array of shape (d,) representing the point at which to evaluate the function
51 | - m: a positive integer parameter
52 |
53 | Returns:
54 | - The value of the Michalewicz Function at point x
55 | """
56 | d = len(x)
57 | i = np.arange(1, d + 1)
58 | return -np.sum(np.sin(x) * np.power(np.sin(i * np.square(x) / np.pi), 2 * m))
59 |
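Easom is a convenient smoke test for this file, since its minimum value and location are exact. A standalone sketch (duplicating the definition above so it runs on its own):

```python
import numpy as np

# Local copy of easom from steep_ridges_n_drops.py. At (pi, pi) the Gaussian
# factor equals 1 and both cosines equal -1, giving the minimum value -1.
def easom(x: np.ndarray) -> float:
    return -np.cos(x[0]) * np.cos(x[1]) * np.exp(-np.square(x[0] - np.pi) - np.square(x[1] - np.pi))

print(easom(np.array([np.pi, np.pi])))  # ≈ -1.0
```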
--------------------------------------------------------------------------------
/src/optimisation_algorithms/benchmark_functions/valley_shape.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 |
3 |
4 | def three_hump_camel(x: np.ndarray) -> float:
5 | """
6 | The Three-Hump Camel Function.
7 | Typically evaluated on the input domain [-5, 5]^2.
8 |
9 | Dimensions: 2
10 | Global optimum: f(0,0) = 0
11 |
12 | Arguments:
13 | ---------
14 | - x: a NumPy array of shape (2,) representing the point at which to evaluate the function
15 |
16 | Returns:
17 | - The value of the Three-Hump Camel Function at point x
18 | """
19 | x1, x2 = x
20 | return 2*x1**2 - 1.05*x1**4 + x1**6/6 + x1*x2 + x2**2
21 |
22 |
23 | def six_hump_camel(x: np.ndarray) -> float:
24 | """
25 | The Six-Hump Camel Function.
26 | Typically evaluated on the rectangle x1 ∈ [-3, 3], x2 ∈ [-2, 2].
27 |
28 | Dimensions: 2
29 | Global optimum: f(0.0898,-0.7126) = f(-0.0898,0.7126) = -1.0316
30 |
31 | Arguments:
32 | ---------
33 | - x: a NumPy array of shape (2,) representing the point at which to evaluate the function
34 |
35 | Returns:
36 | - The value of the Six-Hump Camel Function at point x
37 | """
38 | x1, x2 = x
39 | return (4 - 2.1*x1**2 + x1**4/3)*x1**2 + x1*x2 + (-4 + 4*x2**2)*x2**2
40 |
41 |
42 | def dixon_price(x: np.ndarray) -> float:
43 | """
44 | The Dixon-Price Function.
45 | Typically evaluated on the input domain [-10, 10]^d.
46 |
47 | Dimensions: d
48 | Global optimum: f(x*) = 0 at x*_i = 2^(-(2^i - 2) / 2^i), i = 1,...,d
49 |
50 | Arguments:
51 | ---------
52 | - x: a NumPy array of shape (d,) representing the point at which to evaluate the function
53 |
54 | Returns:
55 | - The value of the Dixon-Price Function at point x
56 | """
57 | d = len(x)
58 | i = np.arange(2, d + 1)
59 | summation = np.sum(i * (2 * np.power(x[1:], 2) - x[:-1]) ** 2)
60 | return np.power(x[0] - 1, 2) + summation
61 |
62 |
63 | def rosenbrock(x: np.ndarray) -> float:
64 | """
65 | The Rosenbrock Function.
66 | Typically evaluated on the input domain [-5, 10]^d.
67 |
68 | Dimensions: d
69 | Global optimum: f(1,1,...,1) = 0
70 |
71 | Arguments:
72 | ---------
73 | - x: a NumPy array of shape (d,) representing the point at which to evaluate the function
74 |
75 | Returns:
76 | - The value of the Rosenbrock Function at point x
77 | """
78 | return np.sum(100*(x[1:]-x[:-1]**2)**2 + (1-x[:-1])**2)
79 |
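Rosenbrock's optimum at the all-ones point is exact, which makes it a good standalone check for the valley-shaped file. A minimal sketch (duplicating the definition above so it runs without the package):

```python
import numpy as np

# Local copy of rosenbrock from valley_shape.py. At x = (1, ..., 1) both the
# 100*(x_{i+1} - x_i^2)^2 and (1 - x_i)^2 terms vanish, so f is exactly 0.
def rosenbrock(x: np.ndarray) -> float:
    return np.sum(100 * (x[1:] - x[:-1] ** 2) ** 2 + (1 - x[:-1]) ** 2)

print(rosenbrock(np.ones(6)))  # 0.0 at the global optimum
```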
--------------------------------------------------------------------------------
/tests/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Muradmustafayev-03/Optimisation-Algorithms/61480d1b10cb067c161320274b7061fc83caaa5a/tests/__init__.py
--------------------------------------------------------------------------------