├── .gitignore
├── LICENSE
├── README.md
├── data
│   ├── Bike.csv
│   ├── Iris.csv
│   └── Titanic.csv
├── environment.yml
├── images
│   ├── decision_tree_algorithm_1.png
│   └── decision_tree_algorithm_2.png
└── notebooks
    ├── Video 01 - Introduction.ipynb
    ├── Video 02 - Helper Functions 1.ipynb
    ├── Video 03 - Helper Functions 2.ipynb
    ├── Video 04 - Helper Functions 3.ipynb
    ├── Video 05 - Main Algorithm 1.ipynb
    ├── Video 06 - Main Algorithm 2.ipynb
    ├── Video 07 - Classification.ipynb
    ├── Video 08 - Categorical Features.ipynb
    ├── Video 09 - Code Update.ipynb
    ├── Video 10 - Regression 1.ipynb
    ├── Video 11 - Regression 2.ipynb
    ├── Video 12 - Post Pruning 1.ipynb
    ├── Video 13 - Post Pruning 2.ipynb
    ├── Video 14 - Post Pruning 3.ipynb
    ├── decision_tree_functions.py
    └── helper_functions.py
/.gitignore:
--------------------------------------------------------------------------------
1 | .ipynb_checkpoints/
2 | __pycache__/
3 |
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | GNU GENERAL PUBLIC LICENSE
2 | Version 3, 29 June 2007
3 |
4 | Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
5 | Everyone is permitted to copy and distribute verbatim copies
6 | of this license document, but changing it is not allowed.
7 |
8 | Preamble
9 |
10 | The GNU General Public License is a free, copyleft license for
11 | software and other kinds of works.
12 |
13 | The licenses for most software and other practical works are designed
14 | to take away your freedom to share and change the works. By contrast,
15 | the GNU General Public License is intended to guarantee your freedom to
16 | share and change all versions of a program--to make sure it remains free
17 | software for all its users. We, the Free Software Foundation, use the
18 | GNU General Public License for most of our software; it applies also to
19 | any other work released this way by its authors. You can apply it to
20 | your programs, too.
21 |
22 | When we speak of free software, we are referring to freedom, not
23 | price. Our General Public Licenses are designed to make sure that you
24 | have the freedom to distribute copies of free software (and charge for
25 | them if you wish), that you receive source code or can get it if you
26 | want it, that you can change the software or use pieces of it in new
27 | free programs, and that you know you can do these things.
28 |
29 | To protect your rights, we need to prevent others from denying you
30 | these rights or asking you to surrender the rights. Therefore, you have
31 | certain responsibilities if you distribute copies of the software, or if
32 | you modify it: responsibilities to respect the freedom of others.
33 |
34 | For example, if you distribute copies of such a program, whether
35 | gratis or for a fee, you must pass on to the recipients the same
36 | freedoms that you received. You must make sure that they, too, receive
37 | or can get the source code. And you must show them these terms so they
38 | know their rights.
39 |
40 | Developers that use the GNU GPL protect your rights with two steps:
41 | (1) assert copyright on the software, and (2) offer you this License
42 | giving you legal permission to copy, distribute and/or modify it.
43 |
44 | For the developers' and authors' protection, the GPL clearly explains
45 | that there is no warranty for this free software. For both users' and
46 | authors' sake, the GPL requires that modified versions be marked as
47 | changed, so that their problems will not be attributed erroneously to
48 | authors of previous versions.
49 |
50 | Some devices are designed to deny users access to install or run
51 | modified versions of the software inside them, although the manufacturer
52 | can do so. This is fundamentally incompatible with the aim of
53 | protecting users' freedom to change the software. The systematic
54 | pattern of such abuse occurs in the area of products for individuals to
55 | use, which is precisely where it is most unacceptable. Therefore, we
56 | have designed this version of the GPL to prohibit the practice for those
57 | products. If such problems arise substantially in other domains, we
58 | stand ready to extend this provision to those domains in future versions
59 | of the GPL, as needed to protect the freedom of users.
60 |
61 | Finally, every program is threatened constantly by software patents.
62 | States should not allow patents to restrict development and use of
63 | software on general-purpose computers, but in those that do, we wish to
64 | avoid the special danger that patents applied to a free program could
65 | make it effectively proprietary. To prevent this, the GPL assures that
66 | patents cannot be used to render the program non-free.
67 |
68 | The precise terms and conditions for copying, distribution and
69 | modification follow.
70 |
71 | TERMS AND CONDITIONS
72 |
73 | 0. Definitions.
74 |
75 | "This License" refers to version 3 of the GNU General Public License.
76 |
77 | "Copyright" also means copyright-like laws that apply to other kinds of
78 | works, such as semiconductor masks.
79 |
80 | "The Program" refers to any copyrightable work licensed under this
81 | License. Each licensee is addressed as "you". "Licensees" and
82 | "recipients" may be individuals or organizations.
83 |
84 | To "modify" a work means to copy from or adapt all or part of the work
85 | in a fashion requiring copyright permission, other than the making of an
86 | exact copy. The resulting work is called a "modified version" of the
87 | earlier work or a work "based on" the earlier work.
88 |
89 | A "covered work" means either the unmodified Program or a work based
90 | on the Program.
91 |
92 | To "propagate" a work means to do anything with it that, without
93 | permission, would make you directly or secondarily liable for
94 | infringement under applicable copyright law, except executing it on a
95 | computer or modifying a private copy. Propagation includes copying,
96 | distribution (with or without modification), making available to the
97 | public, and in some countries other activities as well.
98 |
99 | To "convey" a work means any kind of propagation that enables other
100 | parties to make or receive copies. Mere interaction with a user through
101 | a computer network, with no transfer of a copy, is not conveying.
102 |
103 | An interactive user interface displays "Appropriate Legal Notices"
104 | to the extent that it includes a convenient and prominently visible
105 | feature that (1) displays an appropriate copyright notice, and (2)
106 | tells the user that there is no warranty for the work (except to the
107 | extent that warranties are provided), that licensees may convey the
108 | work under this License, and how to view a copy of this License. If
109 | the interface presents a list of user commands or options, such as a
110 | menu, a prominent item in the list meets this criterion.
111 |
112 | 1. Source Code.
113 |
114 | The "source code" for a work means the preferred form of the work
115 | for making modifications to it. "Object code" means any non-source
116 | form of a work.
117 |
118 | A "Standard Interface" means an interface that either is an official
119 | standard defined by a recognized standards body, or, in the case of
120 | interfaces specified for a particular programming language, one that
121 | is widely used among developers working in that language.
122 |
123 | The "System Libraries" of an executable work include anything, other
124 | than the work as a whole, that (a) is included in the normal form of
125 | packaging a Major Component, but which is not part of that Major
126 | Component, and (b) serves only to enable use of the work with that
127 | Major Component, or to implement a Standard Interface for which an
128 | implementation is available to the public in source code form. A
129 | "Major Component", in this context, means a major essential component
130 | (kernel, window system, and so on) of the specific operating system
131 | (if any) on which the executable work runs, or a compiler used to
132 | produce the work, or an object code interpreter used to run it.
133 |
134 | The "Corresponding Source" for a work in object code form means all
135 | the source code needed to generate, install, and (for an executable
136 | work) run the object code and to modify the work, including scripts to
137 | control those activities. However, it does not include the work's
138 | System Libraries, or general-purpose tools or generally available free
139 | programs which are used unmodified in performing those activities but
140 | which are not part of the work. For example, Corresponding Source
141 | includes interface definition files associated with source files for
142 | the work, and the source code for shared libraries and dynamically
143 | linked subprograms that the work is specifically designed to require,
144 | such as by intimate data communication or control flow between those
145 | subprograms and other parts of the work.
146 |
147 | The Corresponding Source need not include anything that users
148 | can regenerate automatically from other parts of the Corresponding
149 | Source.
150 |
151 | The Corresponding Source for a work in source code form is that
152 | same work.
153 |
154 | 2. Basic Permissions.
155 |
156 | All rights granted under this License are granted for the term of
157 | copyright on the Program, and are irrevocable provided the stated
158 | conditions are met. This License explicitly affirms your unlimited
159 | permission to run the unmodified Program. The output from running a
160 | covered work is covered by this License only if the output, given its
161 | content, constitutes a covered work. This License acknowledges your
162 | rights of fair use or other equivalent, as provided by copyright law.
163 |
164 | You may make, run and propagate covered works that you do not
165 | convey, without conditions so long as your license otherwise remains
166 | in force. You may convey covered works to others for the sole purpose
167 | of having them make modifications exclusively for you, or provide you
168 | with facilities for running those works, provided that you comply with
169 | the terms of this License in conveying all material for which you do
170 | not control copyright. Those thus making or running the covered works
171 | for you must do so exclusively on your behalf, under your direction
172 | and control, on terms that prohibit them from making any copies of
173 | your copyrighted material outside their relationship with you.
174 |
175 | Conveying under any other circumstances is permitted solely under
176 | the conditions stated below. Sublicensing is not allowed; section 10
177 | makes it unnecessary.
178 |
179 | 3. Protecting Users' Legal Rights From Anti-Circumvention Law.
180 |
181 | No covered work shall be deemed part of an effective technological
182 | measure under any applicable law fulfilling obligations under article
183 | 11 of the WIPO copyright treaty adopted on 20 December 1996, or
184 | similar laws prohibiting or restricting circumvention of such
185 | measures.
186 |
187 | When you convey a covered work, you waive any legal power to forbid
188 | circumvention of technological measures to the extent such circumvention
189 | is effected by exercising rights under this License with respect to
190 | the covered work, and you disclaim any intention to limit operation or
191 | modification of the work as a means of enforcing, against the work's
192 | users, your or third parties' legal rights to forbid circumvention of
193 | technological measures.
194 |
195 | 4. Conveying Verbatim Copies.
196 |
197 | You may convey verbatim copies of the Program's source code as you
198 | receive it, in any medium, provided that you conspicuously and
199 | appropriately publish on each copy an appropriate copyright notice;
200 | keep intact all notices stating that this License and any
201 | non-permissive terms added in accord with section 7 apply to the code;
202 | keep intact all notices of the absence of any warranty; and give all
203 | recipients a copy of this License along with the Program.
204 |
205 | You may charge any price or no price for each copy that you convey,
206 | and you may offer support or warranty protection for a fee.
207 |
208 | 5. Conveying Modified Source Versions.
209 |
210 | You may convey a work based on the Program, or the modifications to
211 | produce it from the Program, in the form of source code under the
212 | terms of section 4, provided that you also meet all of these conditions:
213 |
214 | a) The work must carry prominent notices stating that you modified
215 | it, and giving a relevant date.
216 |
217 | b) The work must carry prominent notices stating that it is
218 | released under this License and any conditions added under section
219 | 7. This requirement modifies the requirement in section 4 to
220 | "keep intact all notices".
221 |
222 | c) You must license the entire work, as a whole, under this
223 | License to anyone who comes into possession of a copy. This
224 | License will therefore apply, along with any applicable section 7
225 | additional terms, to the whole of the work, and all its parts,
226 | regardless of how they are packaged. This License gives no
227 | permission to license the work in any other way, but it does not
228 | invalidate such permission if you have separately received it.
229 |
230 | d) If the work has interactive user interfaces, each must display
231 | Appropriate Legal Notices; however, if the Program has interactive
232 | interfaces that do not display Appropriate Legal Notices, your
233 | work need not make them do so.
234 |
235 | A compilation of a covered work with other separate and independent
236 | works, which are not by their nature extensions of the covered work,
237 | and which are not combined with it such as to form a larger program,
238 | in or on a volume of a storage or distribution medium, is called an
239 | "aggregate" if the compilation and its resulting copyright are not
240 | used to limit the access or legal rights of the compilation's users
241 | beyond what the individual works permit. Inclusion of a covered work
242 | in an aggregate does not cause this License to apply to the other
243 | parts of the aggregate.
244 |
245 | 6. Conveying Non-Source Forms.
246 |
247 | You may convey a covered work in object code form under the terms
248 | of sections 4 and 5, provided that you also convey the
249 | machine-readable Corresponding Source under the terms of this License,
250 | in one of these ways:
251 |
252 | a) Convey the object code in, or embodied in, a physical product
253 | (including a physical distribution medium), accompanied by the
254 | Corresponding Source fixed on a durable physical medium
255 | customarily used for software interchange.
256 |
257 | b) Convey the object code in, or embodied in, a physical product
258 | (including a physical distribution medium), accompanied by a
259 | written offer, valid for at least three years and valid for as
260 | long as you offer spare parts or customer support for that product
261 | model, to give anyone who possesses the object code either (1) a
262 | copy of the Corresponding Source for all the software in the
263 | product that is covered by this License, on a durable physical
264 | medium customarily used for software interchange, for a price no
265 | more than your reasonable cost of physically performing this
266 | conveying of source, or (2) access to copy the
267 | Corresponding Source from a network server at no charge.
268 |
269 | c) Convey individual copies of the object code with a copy of the
270 | written offer to provide the Corresponding Source. This
271 | alternative is allowed only occasionally and noncommercially, and
272 | only if you received the object code with such an offer, in accord
273 | with subsection 6b.
274 |
275 | d) Convey the object code by offering access from a designated
276 | place (gratis or for a charge), and offer equivalent access to the
277 | Corresponding Source in the same way through the same place at no
278 | further charge. You need not require recipients to copy the
279 | Corresponding Source along with the object code. If the place to
280 | copy the object code is a network server, the Corresponding Source
281 | may be on a different server (operated by you or a third party)
282 | that supports equivalent copying facilities, provided you maintain
283 | clear directions next to the object code saying where to find the
284 | Corresponding Source. Regardless of what server hosts the
285 | Corresponding Source, you remain obligated to ensure that it is
286 | available for as long as needed to satisfy these requirements.
287 |
288 | e) Convey the object code using peer-to-peer transmission, provided
289 | you inform other peers where the object code and Corresponding
290 | Source of the work are being offered to the general public at no
291 | charge under subsection 6d.
292 |
293 | A separable portion of the object code, whose source code is excluded
294 | from the Corresponding Source as a System Library, need not be
295 | included in conveying the object code work.
296 |
297 | A "User Product" is either (1) a "consumer product", which means any
298 | tangible personal property which is normally used for personal, family,
299 | or household purposes, or (2) anything designed or sold for incorporation
300 | into a dwelling. In determining whether a product is a consumer product,
301 | doubtful cases shall be resolved in favor of coverage. For a particular
302 | product received by a particular user, "normally used" refers to a
303 | typical or common use of that class of product, regardless of the status
304 | of the particular user or of the way in which the particular user
305 | actually uses, or expects or is expected to use, the product. A product
306 | is a consumer product regardless of whether the product has substantial
307 | commercial, industrial or non-consumer uses, unless such uses represent
308 | the only significant mode of use of the product.
309 |
310 | "Installation Information" for a User Product means any methods,
311 | procedures, authorization keys, or other information required to install
312 | and execute modified versions of a covered work in that User Product from
313 | a modified version of its Corresponding Source. The information must
314 | suffice to ensure that the continued functioning of the modified object
315 | code is in no case prevented or interfered with solely because
316 | modification has been made.
317 |
318 | If you convey an object code work under this section in, or with, or
319 | specifically for use in, a User Product, and the conveying occurs as
320 | part of a transaction in which the right of possession and use of the
321 | User Product is transferred to the recipient in perpetuity or for a
322 | fixed term (regardless of how the transaction is characterized), the
323 | Corresponding Source conveyed under this section must be accompanied
324 | by the Installation Information. But this requirement does not apply
325 | if neither you nor any third party retains the ability to install
326 | modified object code on the User Product (for example, the work has
327 | been installed in ROM).
328 |
329 | The requirement to provide Installation Information does not include a
330 | requirement to continue to provide support service, warranty, or updates
331 | for a work that has been modified or installed by the recipient, or for
332 | the User Product in which it has been modified or installed. Access to a
333 | network may be denied when the modification itself materially and
334 | adversely affects the operation of the network or violates the rules and
335 | protocols for communication across the network.
336 |
337 | Corresponding Source conveyed, and Installation Information provided,
338 | in accord with this section must be in a format that is publicly
339 | documented (and with an implementation available to the public in
340 | source code form), and must require no special password or key for
341 | unpacking, reading or copying.
342 |
343 | 7. Additional Terms.
344 |
345 | "Additional permissions" are terms that supplement the terms of this
346 | License by making exceptions from one or more of its conditions.
347 | Additional permissions that are applicable to the entire Program shall
348 | be treated as though they were included in this License, to the extent
349 | that they are valid under applicable law. If additional permissions
350 | apply only to part of the Program, that part may be used separately
351 | under those permissions, but the entire Program remains governed by
352 | this License without regard to the additional permissions.
353 |
354 | When you convey a copy of a covered work, you may at your option
355 | remove any additional permissions from that copy, or from any part of
356 | it. (Additional permissions may be written to require their own
357 | removal in certain cases when you modify the work.) You may place
358 | additional permissions on material, added by you to a covered work,
359 | for which you have or can give appropriate copyright permission.
360 |
361 | Notwithstanding any other provision of this License, for material you
362 | add to a covered work, you may (if authorized by the copyright holders of
363 | that material) supplement the terms of this License with terms:
364 |
365 | a) Disclaiming warranty or limiting liability differently from the
366 | terms of sections 15 and 16 of this License; or
367 |
368 | b) Requiring preservation of specified reasonable legal notices or
369 | author attributions in that material or in the Appropriate Legal
370 | Notices displayed by works containing it; or
371 |
372 | c) Prohibiting misrepresentation of the origin of that material, or
373 | requiring that modified versions of such material be marked in
374 | reasonable ways as different from the original version; or
375 |
376 | d) Limiting the use for publicity purposes of names of licensors or
377 | authors of the material; or
378 |
379 | e) Declining to grant rights under trademark law for use of some
380 | trade names, trademarks, or service marks; or
381 |
382 | f) Requiring indemnification of licensors and authors of that
383 | material by anyone who conveys the material (or modified versions of
384 | it) with contractual assumptions of liability to the recipient, for
385 | any liability that these contractual assumptions directly impose on
386 | those licensors and authors.
387 |
388 | All other non-permissive additional terms are considered "further
389 | restrictions" within the meaning of section 10. If the Program as you
390 | received it, or any part of it, contains a notice stating that it is
391 | governed by this License along with a term that is a further
392 | restriction, you may remove that term. If a license document contains
393 | a further restriction but permits relicensing or conveying under this
394 | License, you may add to a covered work material governed by the terms
395 | of that license document, provided that the further restriction does
396 | not survive such relicensing or conveying.
397 |
398 | If you add terms to a covered work in accord with this section, you
399 | must place, in the relevant source files, a statement of the
400 | additional terms that apply to those files, or a notice indicating
401 | where to find the applicable terms.
402 |
403 | Additional terms, permissive or non-permissive, may be stated in the
404 | form of a separately written license, or stated as exceptions;
405 | the above requirements apply either way.
406 |
407 | 8. Termination.
408 |
409 | You may not propagate or modify a covered work except as expressly
410 | provided under this License. Any attempt otherwise to propagate or
411 | modify it is void, and will automatically terminate your rights under
412 | this License (including any patent licenses granted under the third
413 | paragraph of section 11).
414 |
415 | However, if you cease all violation of this License, then your
416 | license from a particular copyright holder is reinstated (a)
417 | provisionally, unless and until the copyright holder explicitly and
418 | finally terminates your license, and (b) permanently, if the copyright
419 | holder fails to notify you of the violation by some reasonable means
420 | prior to 60 days after the cessation.
421 |
422 | Moreover, your license from a particular copyright holder is
423 | reinstated permanently if the copyright holder notifies you of the
424 | violation by some reasonable means, this is the first time you have
425 | received notice of violation of this License (for any work) from that
426 | copyright holder, and you cure the violation prior to 30 days after
427 | your receipt of the notice.
428 |
429 | Termination of your rights under this section does not terminate the
430 | licenses of parties who have received copies or rights from you under
431 | this License. If your rights have been terminated and not permanently
432 | reinstated, you do not qualify to receive new licenses for the same
433 | material under section 10.
434 |
435 | 9. Acceptance Not Required for Having Copies.
436 |
437 | You are not required to accept this License in order to receive or
438 | run a copy of the Program. Ancillary propagation of a covered work
439 | occurring solely as a consequence of using peer-to-peer transmission
440 | to receive a copy likewise does not require acceptance. However,
441 | nothing other than this License grants you permission to propagate or
442 | modify any covered work. These actions infringe copyright if you do
443 | not accept this License. Therefore, by modifying or propagating a
444 | covered work, you indicate your acceptance of this License to do so.
445 |
446 | 10. Automatic Licensing of Downstream Recipients.
447 |
448 | Each time you convey a covered work, the recipient automatically
449 | receives a license from the original licensors, to run, modify and
450 | propagate that work, subject to this License. You are not responsible
451 | for enforcing compliance by third parties with this License.
452 |
453 | An "entity transaction" is a transaction transferring control of an
454 | organization, or substantially all assets of one, or subdividing an
455 | organization, or merging organizations. If propagation of a covered
456 | work results from an entity transaction, each party to that
457 | transaction who receives a copy of the work also receives whatever
458 | licenses to the work the party's predecessor in interest had or could
459 | give under the previous paragraph, plus a right to possession of the
460 | Corresponding Source of the work from the predecessor in interest, if
461 | the predecessor has it or can get it with reasonable efforts.
462 |
463 | You may not impose any further restrictions on the exercise of the
464 | rights granted or affirmed under this License. For example, you may
465 | not impose a license fee, royalty, or other charge for exercise of
466 | rights granted under this License, and you may not initiate litigation
467 | (including a cross-claim or counterclaim in a lawsuit) alleging that
468 | any patent claim is infringed by making, using, selling, offering for
469 | sale, or importing the Program or any portion of it.
470 |
471 | 11. Patents.
472 |
473 | A "contributor" is a copyright holder who authorizes use under this
474 | License of the Program or a work on which the Program is based. The
475 | work thus licensed is called the contributor's "contributor version".
476 |
477 | A contributor's "essential patent claims" are all patent claims
478 | owned or controlled by the contributor, whether already acquired or
479 | hereafter acquired, that would be infringed by some manner, permitted
480 | by this License, of making, using, or selling its contributor version,
481 | but do not include claims that would be infringed only as a
482 | consequence of further modification of the contributor version. For
483 | purposes of this definition, "control" includes the right to grant
484 | patent sublicenses in a manner consistent with the requirements of
485 | this License.
486 |
487 | Each contributor grants you a non-exclusive, worldwide, royalty-free
488 | patent license under the contributor's essential patent claims, to
489 | make, use, sell, offer for sale, import and otherwise run, modify and
490 | propagate the contents of its contributor version.
491 |
492 | In the following three paragraphs, a "patent license" is any express
493 | agreement or commitment, however denominated, not to enforce a patent
494 | (such as an express permission to practice a patent or covenant not to
495 | sue for patent infringement). To "grant" such a patent license to a
496 | party means to make such an agreement or commitment not to enforce a
497 | patent against the party.
498 |
499 | If you convey a covered work, knowingly relying on a patent license,
500 | and the Corresponding Source of the work is not available for anyone
501 | to copy, free of charge and under the terms of this License, through a
502 | publicly available network server or other readily accessible means,
503 | then you must either (1) cause the Corresponding Source to be so
504 | available, or (2) arrange to deprive yourself of the benefit of the
505 | patent license for this particular work, or (3) arrange, in a manner
506 | consistent with the requirements of this License, to extend the patent
507 | license to downstream recipients. "Knowingly relying" means you have
508 | actual knowledge that, but for the patent license, your conveying the
509 | covered work in a country, or your recipient's use of the covered work
510 | in a country, would infringe one or more identifiable patents in that
511 | country that you have reason to believe are valid.
512 |
513 | If, pursuant to or in connection with a single transaction or
514 | arrangement, you convey, or propagate by procuring conveyance of, a
515 | covered work, and grant a patent license to some of the parties
516 | receiving the covered work authorizing them to use, propagate, modify
517 | or convey a specific copy of the covered work, then the patent license
518 | you grant is automatically extended to all recipients of the covered
519 | work and works based on it.
520 |
521 | A patent license is "discriminatory" if it does not include within
522 | the scope of its coverage, prohibits the exercise of, or is
523 | conditioned on the non-exercise of one or more of the rights that are
524 | specifically granted under this License. You may not convey a covered
525 | work if you are a party to an arrangement with a third party that is
526 | in the business of distributing software, under which you make payment
527 | to the third party based on the extent of your activity of conveying
528 | the work, and under which the third party grants, to any of the
529 | parties who would receive the covered work from you, a discriminatory
530 | patent license (a) in connection with copies of the covered work
531 | conveyed by you (or copies made from those copies), or (b) primarily
532 | for and in connection with specific products or compilations that
533 | contain the covered work, unless you entered into that arrangement,
534 | or that patent license was granted, prior to 28 March 2007.
535 |
536 | Nothing in this License shall be construed as excluding or limiting
537 | any implied license or other defenses to infringement that may
538 | otherwise be available to you under applicable patent law.
539 |
540 | 12. No Surrender of Others' Freedom.
541 |
542 | If conditions are imposed on you (whether by court order, agreement or
543 | otherwise) that contradict the conditions of this License, they do not
544 | excuse you from the conditions of this License. If you cannot convey a
545 | covered work so as to satisfy simultaneously your obligations under this
546 | License and any other pertinent obligations, then as a consequence you may
547 | not convey it at all. For example, if you agree to terms that obligate you
548 | to collect a royalty for further conveying from those to whom you convey
549 | the Program, the only way you could satisfy both those terms and this
550 | License would be to refrain entirely from conveying the Program.
551 |
552 | 13. Use with the GNU Affero General Public License.
553 |
554 | Notwithstanding any other provision of this License, you have
555 | permission to link or combine any covered work with a work licensed
556 | under version 3 of the GNU Affero General Public License into a single
557 | combined work, and to convey the resulting work. The terms of this
558 | License will continue to apply to the part which is the covered work,
559 | but the special requirements of the GNU Affero General Public License,
560 | section 13, concerning interaction through a network will apply to the
561 | combination as such.
562 |
563 | 14. Revised Versions of this License.
564 |
565 | The Free Software Foundation may publish revised and/or new versions of
566 | the GNU General Public License from time to time. Such new versions will
567 | be similar in spirit to the present version, but may differ in detail to
568 | address new problems or concerns.
569 |
570 | Each version is given a distinguishing version number. If the
571 | Program specifies that a certain numbered version of the GNU General
572 | Public License "or any later version" applies to it, you have the
573 | option of following the terms and conditions either of that numbered
574 | version or of any later version published by the Free Software
575 | Foundation. If the Program does not specify a version number of the
576 | GNU General Public License, you may choose any version ever published
577 | by the Free Software Foundation.
578 |
579 | If the Program specifies that a proxy can decide which future
580 | versions of the GNU General Public License can be used, that proxy's
581 | public statement of acceptance of a version permanently authorizes you
582 | to choose that version for the Program.
583 |
584 | Later license versions may give you additional or different
585 | permissions. However, no additional obligations are imposed on any
586 | author or copyright holder as a result of your choosing to follow a
587 | later version.
588 |
589 | 15. Disclaimer of Warranty.
590 |
591 | THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
592 | APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
593 | HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
594 | OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
595 | THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
596 | PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
597 | IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
598 | ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
599 |
600 | 16. Limitation of Liability.
601 |
602 | IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
603 | WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
604 | THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
605 | GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
606 | USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
607 | DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
608 | PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
609 | EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
610 | SUCH DAMAGES.
611 |
612 | 17. Interpretation of Sections 15 and 16.
613 |
614 | If the disclaimer of warranty and limitation of liability provided
615 | above cannot be given local legal effect according to their terms,
616 | reviewing courts shall apply local law that most closely approximates
617 | an absolute waiver of all civil liability in connection with the
618 | Program, unless a warranty or assumption of liability accompanies a
619 | copy of the Program in return for a fee.
620 |
621 | END OF TERMS AND CONDITIONS
622 |
623 | How to Apply These Terms to Your New Programs
624 |
625 | If you develop a new program, and you want it to be of the greatest
626 | possible use to the public, the best way to achieve this is to make it
627 | free software which everyone can redistribute and change under these terms.
628 |
629 | To do so, attach the following notices to the program. It is safest
630 | to attach them to the start of each source file to most effectively
631 | state the exclusion of warranty; and each file should have at least
632 | the "copyright" line and a pointer to where the full notice is found.
633 |
634 |
635 |     Copyright (C) <year>  <name of author>
636 |
637 | This program is free software: you can redistribute it and/or modify
638 | it under the terms of the GNU General Public License as published by
639 | the Free Software Foundation, either version 3 of the License, or
640 | (at your option) any later version.
641 |
642 | This program is distributed in the hope that it will be useful,
643 | but WITHOUT ANY WARRANTY; without even the implied warranty of
644 | MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
645 | GNU General Public License for more details.
646 |
647 | You should have received a copy of the GNU General Public License
648 | along with this program.  If not, see <https://www.gnu.org/licenses/>.
649 |
650 | Also add information on how to contact you by electronic and paper mail.
651 |
652 | If the program does terminal interaction, make it output a short
653 | notice like this when it starts in an interactive mode:
654 |
655 |     <program>  Copyright (C) <year>  <name of author>
656 | This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
657 | This is free software, and you are welcome to redistribute it
658 | under certain conditions; type `show c' for details.
659 |
660 | The hypothetical commands `show w' and `show c' should show the appropriate
661 | parts of the General Public License. Of course, your program's commands
662 | might be different; for a GUI interface, you would use an "about box".
663 |
664 | You should also get your employer (if you work as a programmer) or school,
665 | if any, to sign a "copyright disclaimer" for the program, if necessary.
666 | For more information on this, and how to apply and follow the GNU GPL, see
667 | <https://www.gnu.org/licenses/>.
668 |
669 | The GNU General Public License does not permit incorporating your program
670 | into proprietary programs. If your program is a subroutine library, you
671 | may consider it more useful to permit linking proprietary applications with
672 | the library. If this is what you want to do, use the GNU Lesser General
673 | Public License instead of this License. But first, please read
674 | <https://www.gnu.org/licenses/why-not-lgpl.html>.
675 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Decision-Tree-from-Scratch
2 | [![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/SebastianMantey/Decision-Tree-from-Scratch/master)
3 |
4 | This repo serves as a tutorial for coding a Decision Tree from scratch in Python using just NumPy and Pandas. The accompanying [blog posts](https://www.sebastian-mantey.com/code-blog/coding-a-decision-tree-from-scratch-python-p1-introduction) and [YouTube videos](https://www.youtube.com/watch?v=y6DmpG_PtN0&list=PLPOTBrypY74xS3WD0G_uzqPjCQfU6IRK-) walk through the code step by step; a usage sketch is shown in the Quickstart below.
5 |
6 | # Credits
7 | - [Iris flower data set](https://www.kaggle.com/uciml/iris)
8 | - [Titanic data set](https://www.kaggle.com/c/titanic)
9 | - [Bike Sharing data set](https://www.kaggle.com/marklvl/bike-sharing-dataset)
10 |
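11 | # Quickstart
12 | A minimal usage sketch of the API that the notebooks build up step by step. The function names `train_test_split`, `decision_tree_algorithm` and `calculate_accuracy` come from the notebooks; importing them from `notebooks/decision_tree_functions.py` and `notebooks/helper_functions.py` is an assumption, as the exact exports of those modules may differ.
13 | 
14 | ```Python
15 | import random
16 | import pandas as pd
17 | 
18 | # Assumed import locations (run from within the notebooks/ folder); the exact
19 | # module contents are an assumption. Alternatively, run the notebook cells
20 | # that define these functions.
21 | from decision_tree_functions import decision_tree_algorithm
22 | from helper_functions import train_test_split, calculate_accuracy
23 | 
24 | # The data frame must have no missing values and its last column must be
25 | # the label, named "label" (see the notebooks).
26 | df = pd.read_csv("../data/Iris.csv")
27 | df = df.drop("Id", axis=1)
28 | df = df.rename(columns={"species": "label"})
29 | 
30 | random.seed(0)
31 | train_df, test_df = train_test_split(df, test_size=0.2)
32 | tree = decision_tree_algorithm(train_df)
33 | accuracy = calculate_accuracy(test_df, tree)
34 | print(accuracy)
35 | ```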
--------------------------------------------------------------------------------
/data/Iris.csv:
--------------------------------------------------------------------------------
1 | Id,sepal_length,sepal_width,petal_length,petal_width,species
2 | 1,5.1,3.5,1.4,0.2,Iris-setosa
3 | 2,4.9,3.0,1.4,0.2,Iris-setosa
4 | 3,4.7,3.2,1.3,0.2,Iris-setosa
5 | 4,4.6,3.1,1.5,0.2,Iris-setosa
6 | 5,5.0,3.6,1.4,0.2,Iris-setosa
7 | 6,5.4,3.9,1.7,0.4,Iris-setosa
8 | 7,4.6,3.4,1.4,0.3,Iris-setosa
9 | 8,5.0,3.4,1.5,0.2,Iris-setosa
10 | 9,4.4,2.9,1.4,0.2,Iris-setosa
11 | 10,4.9,3.1,1.5,0.1,Iris-setosa
12 | 11,5.4,3.7,1.5,0.2,Iris-setosa
13 | 12,4.8,3.4,1.6,0.2,Iris-setosa
14 | 13,4.8,3.0,1.4,0.1,Iris-setosa
15 | 14,4.3,3.0,1.1,0.1,Iris-setosa
16 | 15,5.8,4.0,1.2,0.2,Iris-setosa
17 | 16,5.7,4.4,1.5,0.4,Iris-setosa
18 | 17,5.4,3.9,1.3,0.4,Iris-setosa
19 | 18,5.1,3.5,1.4,0.3,Iris-setosa
20 | 19,5.7,3.8,1.7,0.3,Iris-setosa
21 | 20,5.1,3.8,1.5,0.3,Iris-setosa
22 | 21,5.4,3.4,1.7,0.2,Iris-setosa
23 | 22,5.1,3.7,1.5,0.4,Iris-setosa
24 | 23,4.6,3.6,1.0,0.2,Iris-setosa
25 | 24,5.1,3.3,1.7,0.5,Iris-setosa
26 | 25,4.8,3.4,1.9,0.2,Iris-setosa
27 | 26,5.0,3.0,1.6,0.2,Iris-setosa
28 | 27,5.0,3.4,1.6,0.4,Iris-setosa
29 | 28,5.2,3.5,1.5,0.2,Iris-setosa
30 | 29,5.2,3.4,1.4,0.2,Iris-setosa
31 | 30,4.7,3.2,1.6,0.2,Iris-setosa
32 | 31,4.8,3.1,1.6,0.2,Iris-setosa
33 | 32,5.4,3.4,1.5,0.4,Iris-setosa
34 | 33,5.2,4.1,1.5,0.1,Iris-setosa
35 | 34,5.5,4.2,1.4,0.2,Iris-setosa
36 | 35,4.9,3.1,1.5,0.1,Iris-setosa
37 | 36,5.0,3.2,1.2,0.2,Iris-setosa
38 | 37,5.5,3.5,1.3,0.2,Iris-setosa
39 | 38,4.9,3.1,1.5,0.1,Iris-setosa
40 | 39,4.4,3.0,1.3,0.2,Iris-setosa
41 | 40,5.1,3.4,1.5,0.2,Iris-setosa
42 | 41,5.0,3.5,1.3,0.3,Iris-setosa
43 | 42,4.5,2.3,1.3,0.3,Iris-setosa
44 | 43,4.4,3.2,1.3,0.2,Iris-setosa
45 | 44,5.0,3.5,1.6,0.6,Iris-setosa
46 | 45,5.1,3.8,1.9,0.4,Iris-setosa
47 | 46,4.8,3.0,1.4,0.3,Iris-setosa
48 | 47,5.1,3.8,1.6,0.2,Iris-setosa
49 | 48,4.6,3.2,1.4,0.2,Iris-setosa
50 | 49,5.3,3.7,1.5,0.2,Iris-setosa
51 | 50,5.0,3.3,1.4,0.2,Iris-setosa
52 | 51,7.0,3.2,4.7,1.4,Iris-versicolor
53 | 52,6.4,3.2,4.5,1.5,Iris-versicolor
54 | 53,6.9,3.1,4.9,1.5,Iris-versicolor
55 | 54,5.5,2.3,4.0,1.3,Iris-versicolor
56 | 55,6.5,2.8,4.6,1.5,Iris-versicolor
57 | 56,5.7,2.8,4.5,1.3,Iris-versicolor
58 | 57,6.3,3.3,4.7,1.6,Iris-versicolor
59 | 58,4.9,2.4,3.3,1.0,Iris-versicolor
60 | 59,6.6,2.9,4.6,1.3,Iris-versicolor
61 | 60,5.2,2.7,3.9,1.4,Iris-versicolor
62 | 61,5.0,2.0,3.5,1.0,Iris-versicolor
63 | 62,5.9,3.0,4.2,1.5,Iris-versicolor
64 | 63,6.0,2.2,4.0,1.0,Iris-versicolor
65 | 64,6.1,2.9,4.7,1.4,Iris-versicolor
66 | 65,5.6,2.9,3.6,1.3,Iris-versicolor
67 | 66,6.7,3.1,4.4,1.4,Iris-versicolor
68 | 67,5.6,3.0,4.5,1.5,Iris-versicolor
69 | 68,5.8,2.7,4.1,1.0,Iris-versicolor
70 | 69,6.2,2.2,4.5,1.5,Iris-versicolor
71 | 70,5.6,2.5,3.9,1.1,Iris-versicolor
72 | 71,5.9,3.2,4.8,1.8,Iris-versicolor
73 | 72,6.1,2.8,4.0,1.3,Iris-versicolor
74 | 73,6.3,2.5,4.9,1.5,Iris-versicolor
75 | 74,6.1,2.8,4.7,1.2,Iris-versicolor
76 | 75,6.4,2.9,4.3,1.3,Iris-versicolor
77 | 76,6.6,3.0,4.4,1.4,Iris-versicolor
78 | 77,6.8,2.8,4.8,1.4,Iris-versicolor
79 | 78,6.7,3.0,5.0,1.7,Iris-versicolor
80 | 79,6.0,2.9,4.5,1.5,Iris-versicolor
81 | 80,5.7,2.6,3.5,1.0,Iris-versicolor
82 | 81,5.5,2.4,3.8,1.1,Iris-versicolor
83 | 82,5.5,2.4,3.7,1.0,Iris-versicolor
84 | 83,5.8,2.7,3.9,1.2,Iris-versicolor
85 | 84,6.0,2.7,5.1,1.6,Iris-versicolor
86 | 85,5.4,3.0,4.5,1.5,Iris-versicolor
87 | 86,6.0,3.4,4.5,1.6,Iris-versicolor
88 | 87,6.7,3.1,4.7,1.5,Iris-versicolor
89 | 88,6.3,2.3,4.4,1.3,Iris-versicolor
90 | 89,5.6,3.0,4.1,1.3,Iris-versicolor
91 | 90,5.5,2.5,4.0,1.3,Iris-versicolor
92 | 91,5.5,2.6,4.4,1.2,Iris-versicolor
93 | 92,6.1,3.0,4.6,1.4,Iris-versicolor
94 | 93,5.8,2.6,4.0,1.2,Iris-versicolor
95 | 94,5.0,2.3,3.3,1.0,Iris-versicolor
96 | 95,5.6,2.7,4.2,1.3,Iris-versicolor
97 | 96,5.7,3.0,4.2,1.2,Iris-versicolor
98 | 97,5.7,2.9,4.2,1.3,Iris-versicolor
99 | 98,6.2,2.9,4.3,1.3,Iris-versicolor
100 | 99,5.1,2.5,3.0,1.1,Iris-versicolor
101 | 100,5.7,2.8,4.1,1.3,Iris-versicolor
102 | 101,6.3,3.3,6.0,2.5,Iris-virginica
103 | 102,5.8,2.7,5.1,1.9,Iris-virginica
104 | 103,7.1,3.0,5.9,2.1,Iris-virginica
105 | 104,6.3,2.9,5.6,1.8,Iris-virginica
106 | 105,6.5,3.0,5.8,2.2,Iris-virginica
107 | 106,7.6,3.0,6.6,2.1,Iris-virginica
108 | 107,4.9,2.5,4.5,1.7,Iris-virginica
109 | 108,7.3,2.9,6.3,1.8,Iris-virginica
110 | 109,6.7,2.5,5.8,1.8,Iris-virginica
111 | 110,7.2,3.6,6.1,2.5,Iris-virginica
112 | 111,6.5,3.2,5.1,2.0,Iris-virginica
113 | 112,6.4,2.7,5.3,1.9,Iris-virginica
114 | 113,6.8,3.0,5.5,2.1,Iris-virginica
115 | 114,5.7,2.5,5.0,2.0,Iris-virginica
116 | 115,5.8,2.8,5.1,2.4,Iris-virginica
117 | 116,6.4,3.2,5.3,2.3,Iris-virginica
118 | 117,6.5,3.0,5.5,1.8,Iris-virginica
119 | 118,7.7,3.8,6.7,2.2,Iris-virginica
120 | 119,7.7,2.6,6.9,2.3,Iris-virginica
121 | 120,6.0,2.2,5.0,1.5,Iris-virginica
122 | 121,6.9,3.2,5.7,2.3,Iris-virginica
123 | 122,5.6,2.8,4.9,2.0,Iris-virginica
124 | 123,7.7,2.8,6.7,2.0,Iris-virginica
125 | 124,6.3,2.7,4.9,1.8,Iris-virginica
126 | 125,6.7,3.3,5.7,2.1,Iris-virginica
127 | 126,7.2,3.2,6.0,1.8,Iris-virginica
128 | 127,6.2,2.8,4.8,1.8,Iris-virginica
129 | 128,6.1,3.0,4.9,1.8,Iris-virginica
130 | 129,6.4,2.8,5.6,2.1,Iris-virginica
131 | 130,7.2,3.0,5.8,1.6,Iris-virginica
132 | 131,7.4,2.8,6.1,1.9,Iris-virginica
133 | 132,7.9,3.8,6.4,2.0,Iris-virginica
134 | 133,6.4,2.8,5.6,2.2,Iris-virginica
135 | 134,6.3,2.8,5.1,1.5,Iris-virginica
136 | 135,6.1,2.6,5.6,1.4,Iris-virginica
137 | 136,7.7,3.0,6.1,2.3,Iris-virginica
138 | 137,6.3,3.4,5.6,2.4,Iris-virginica
139 | 138,6.4,3.1,5.5,1.8,Iris-virginica
140 | 139,6.0,3.0,4.8,1.8,Iris-virginica
141 | 140,6.9,3.1,5.4,2.1,Iris-virginica
142 | 141,6.7,3.1,5.6,2.4,Iris-virginica
143 | 142,6.9,3.1,5.1,2.3,Iris-virginica
144 | 143,5.8,2.7,5.1,1.9,Iris-virginica
145 | 144,6.8,3.2,5.9,2.3,Iris-virginica
146 | 145,6.7,3.3,5.7,2.5,Iris-virginica
147 | 146,6.7,3.0,5.2,2.3,Iris-virginica
148 | 147,6.3,2.5,5.0,1.9,Iris-virginica
149 | 148,6.5,3.0,5.2,2.0,Iris-virginica
150 | 149,6.2,3.4,5.4,2.3,Iris-virginica
151 | 150,5.9,3.0,5.1,1.8,Iris-virginica
152 |
--------------------------------------------------------------------------------
/environment.yml:
--------------------------------------------------------------------------------
1 | name: decision_tree
2 | channels:
3 | - defaults
4 | dependencies:
5 | - jupyterlab=1.2.6
6 | - matplotlib=3.1.3
7 | - numpy=1.18.1
8 | - pandas=1.0.1
9 | - python=3.7.6
10 | - seaborn=0.10.0
11 |
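12 | # Usage sketch, assuming a standard conda workflow; the environment name
13 | # "decision_tree" is taken from the "name" field above:
14 | #   conda env create -f environment.yml
15 | #   conda activate decision_tree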
--------------------------------------------------------------------------------
/images/decision_tree_algorithm_1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/SebastianMantey/Decision-Tree-from-Scratch/abcfa4ac38a797b567453b2aebd4a98f28192acd/images/decision_tree_algorithm_1.png
--------------------------------------------------------------------------------
/images/decision_tree_algorithm_2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/SebastianMantey/Decision-Tree-from-Scratch/abcfa4ac38a797b567453b2aebd4a98f28192acd/images/decision_tree_algorithm_2.png
--------------------------------------------------------------------------------
/notebooks/Video 01 - Introduction.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "metadata": {},
6 | "source": [
7 | "The goal of this notebook is to code a decision tree classifier that can be used with the following API:\n",
8 | "\n",
9 | "```Python\n",
10 | "df = pd.read_csv(\"data.csv\")\n",
11 | "\n",
12 | "train_df, test_df = train_test_split(df, test_size=0.2)\n",
13 | "tree = decision_tree_algorithm(train_df)\n",
14 | "accuracy = calculate_accuracy(test_df, tree)\n",
15 | "```\n",
16 | "\n",
17 | "The algorithm that is going to be implemented looks like this:\n",
18 | "\n",
19 |     "<img src=\"../images/decision_tree_algorithm_1.png\">"
20 | ]
21 | },
22 | {
23 | "cell_type": "markdown",
24 | "metadata": {},
25 | "source": [
26 | "# Import Statements"
27 | ]
28 | },
29 | {
30 | "cell_type": "code",
31 | "execution_count": 1,
32 | "metadata": {},
33 | "outputs": [],
34 | "source": [
35 | "import numpy as np\n",
36 | "import pandas as pd\n",
37 | "\n",
38 | "import matplotlib.pyplot as plt\n",
39 | "import seaborn as sns\n",
40 | "\n",
41 | "import random\n",
42 | "from pprint import pprint"
43 | ]
44 | },
45 | {
46 | "cell_type": "code",
47 | "execution_count": 2,
48 | "metadata": {},
49 | "outputs": [],
50 | "source": [
51 | "%matplotlib inline\n",
52 | "sns.set_style(\"darkgrid\")"
53 | ]
54 | },
55 | {
56 | "cell_type": "markdown",
57 | "metadata": {},
58 | "source": [
59 | "# Load and Prepare Data"
60 | ]
61 | },
62 | {
63 | "cell_type": "markdown",
64 | "metadata": {},
65 | "source": [
66 | "#### Format of the data\n",
67 | "- the last column of the data frame must contain the label and it must also be called \"label\"\n",
68 | "- there should be no missing values in the data frame"
69 | ]
70 | },
71 | {
72 | "cell_type": "code",
73 | "execution_count": 3,
74 | "metadata": {},
75 | "outputs": [],
76 | "source": [
77 | "df = pd.read_csv(\"../data/Iris.csv\")\n",
78 | "df = df.drop(\"Id\", axis=1)\n",
79 | "df = df.rename(columns={\"species\": \"label\"})"
80 | ]
81 | },
82 | {
83 | "cell_type": "code",
84 | "execution_count": 4,
85 | "metadata": {},
86 | "outputs": [
87 | {
88 | "data": {
160 | "text/plain": [
161 | " sepal_length sepal_width petal_length petal_width label\n",
162 | "0 5.1 3.5 1.4 0.2 Iris-setosa\n",
163 | "1 4.9 3.0 1.4 0.2 Iris-setosa\n",
164 | "2 4.7 3.2 1.3 0.2 Iris-setosa\n",
165 | "3 4.6 3.1 1.5 0.2 Iris-setosa\n",
166 | "4 5.0 3.6 1.4 0.2 Iris-setosa"
167 | ]
168 | },
169 | "execution_count": 4,
170 | "metadata": {},
171 | "output_type": "execute_result"
172 | }
173 | ],
174 | "source": [
175 | "df.head()"
176 | ]
177 | },
178 | {
179 | "cell_type": "markdown",
180 | "metadata": {},
181 | "source": [
182 | "# Train-Test-Split"
183 | ]
184 | },
185 | {
186 | "cell_type": "code",
187 | "execution_count": 5,
188 | "metadata": {},
189 | "outputs": [],
190 | "source": [
191 | "def train_test_split(df, test_size):\n",
192 | " \n",
193 | " if isinstance(test_size, float):\n",
194 | " test_size = round(test_size * len(df))\n",
195 | "\n",
196 | " indices = df.index.tolist()\n",
197 | " test_indices = random.sample(population=indices, k=test_size)\n",
198 | "\n",
199 | " test_df = df.loc[test_indices]\n",
200 | " train_df = df.drop(test_indices)\n",
201 | " \n",
202 | " return train_df, test_df"
203 | ]
204 | },
205 | {
206 | "cell_type": "code",
207 | "execution_count": 6,
208 | "metadata": {},
209 | "outputs": [],
210 | "source": [
211 | "random.seed(0)\n",
212 | "train_df, test_df = train_test_split(df, test_size=20)"
213 | ]
214 | },
215 | {
216 | "cell_type": "code",
217 | "execution_count": null,
218 | "metadata": {},
219 | "outputs": [],
220 | "source": []
221 | }
222 | ],
223 | "metadata": {
224 | "kernelspec": {
225 | "display_name": "Python 3",
226 | "language": "python",
227 | "name": "python3"
228 | },
229 | "language_info": {
230 | "codemirror_mode": {
231 | "name": "ipython",
232 | "version": 3
233 | },
234 | "file_extension": ".py",
235 | "mimetype": "text/x-python",
236 | "name": "python",
237 | "nbconvert_exporter": "python",
238 | "pygments_lexer": "ipython3",
239 | "version": "3.7.3"
240 | }
241 | },
242 | "nbformat": 4,
243 | "nbformat_minor": 4
244 | }
245 |
--------------------------------------------------------------------------------
/notebooks/Video 02 - Helper Functions 1.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "metadata": {},
6 | "source": [
7 | "The goal of this notebook is to code a decision tree classifier that can be used with the following API:\n",
8 | "\n",
9 | "```Python\n",
10 | "df = pd.read_csv(\"data.csv\")\n",
11 | "\n",
12 | "train_df, test_df = train_test_split(df, test_size=0.2)\n",
13 | "tree = decision_tree_algorithm(train_df)\n",
14 | "accuracy = calculate_accuracy(test_df, tree)\n",
15 | "```\n",
16 | "\n",
17 | "The algorithm that is going to be implemented looks like this:\n",
18 | "\n",
19 |     "<img src=\"../images/decision_tree_algorithm_1.png\">"
20 | ]
21 | },
22 | {
23 | "cell_type": "markdown",
24 | "metadata": {},
25 | "source": [
26 | "# Import Statements"
27 | ]
28 | },
29 | {
30 | "cell_type": "code",
31 | "execution_count": 1,
32 | "metadata": {},
33 | "outputs": [],
34 | "source": [
35 | "import numpy as np\n",
36 | "import pandas as pd\n",
37 | "\n",
38 | "import matplotlib.pyplot as plt\n",
39 | "import seaborn as sns\n",
40 | "\n",
41 | "import random\n",
42 | "from pprint import pprint"
43 | ]
44 | },
45 | {
46 | "cell_type": "code",
47 | "execution_count": 2,
48 | "metadata": {},
49 | "outputs": [],
50 | "source": [
51 | "%matplotlib inline\n",
52 | "sns.set_style(\"darkgrid\")"
53 | ]
54 | },
55 | {
56 | "cell_type": "markdown",
57 | "metadata": {},
58 | "source": [
59 | "# Load and Prepare Data"
60 | ]
61 | },
62 | {
63 | "cell_type": "markdown",
64 | "metadata": {},
65 | "source": [
66 | "#### Format of the data\n",
67 | "- the last column of the data frame must contain the label and it must also be called \"label\"\n",
68 | "- there should be no missing values in the data frame"
69 | ]
70 | },
71 | {
72 | "cell_type": "code",
73 | "execution_count": 3,
74 | "metadata": {},
75 | "outputs": [],
76 | "source": [
77 | "df = pd.read_csv(\"../data/Iris.csv\")\n",
78 | "df = df.drop(\"Id\", axis=1)\n",
79 | "df = df.rename(columns={\"species\": \"label\"})"
80 | ]
81 | },
82 | {
83 | "cell_type": "code",
84 | "execution_count": 4,
85 | "metadata": {},
86 | "outputs": [
87 | {
88 | "data": {
160 | "text/plain": [
161 | " sepal_length sepal_width petal_length petal_width label\n",
162 | "0 5.1 3.5 1.4 0.2 Iris-setosa\n",
163 | "1 4.9 3.0 1.4 0.2 Iris-setosa\n",
164 | "2 4.7 3.2 1.3 0.2 Iris-setosa\n",
165 | "3 4.6 3.1 1.5 0.2 Iris-setosa\n",
166 | "4 5.0 3.6 1.4 0.2 Iris-setosa"
167 | ]
168 | },
169 | "execution_count": 4,
170 | "metadata": {},
171 | "output_type": "execute_result"
172 | }
173 | ],
174 | "source": [
175 | "df.head()"
176 | ]
177 | },
178 | {
179 | "cell_type": "markdown",
180 | "metadata": {},
181 | "source": [
182 | "# Train-Test-Split"
183 | ]
184 | },
185 | {
186 | "cell_type": "code",
187 | "execution_count": 5,
188 | "metadata": {},
189 | "outputs": [],
190 | "source": [
191 | "def train_test_split(df, test_size):\n",
192 | " \n",
193 | " if isinstance(test_size, float):\n",
194 | " test_size = round(test_size * len(df))\n",
195 | "\n",
196 | " indices = df.index.tolist()\n",
197 | " test_indices = random.sample(population=indices, k=test_size)\n",
198 | "\n",
199 | " test_df = df.loc[test_indices]\n",
200 | " train_df = df.drop(test_indices)\n",
201 | " \n",
202 | " return train_df, test_df"
203 | ]
204 | },
205 | {
206 | "cell_type": "code",
207 | "execution_count": 6,
208 | "metadata": {},
209 | "outputs": [],
210 | "source": [
211 | "random.seed(0)\n",
212 | "train_df, test_df = train_test_split(df, test_size=20)"
213 | ]
214 | },
215 | {
216 | "cell_type": "markdown",
217 | "metadata": {},
218 | "source": [
219 | "# Helper Functions\n",
220 | "The helper functions operate on a NumPy 2d-array. Therefore, let’s create a variable called “data” to see what we will be working with."
221 | ]
222 | },
223 | {
224 | "cell_type": "code",
225 | "execution_count": 7,
226 | "metadata": {},
227 | "outputs": [
228 | {
229 | "data": {
230 | "text/plain": [
231 | "array([[5.1, 3.5, 1.4, 0.2, 'Iris-setosa'],\n",
232 | " [4.9, 3.0, 1.4, 0.2, 'Iris-setosa'],\n",
233 | " [4.7, 3.2, 1.3, 0.2, 'Iris-setosa'],\n",
234 | " [4.6, 3.1, 1.5, 0.2, 'Iris-setosa'],\n",
235 | " [5.0, 3.6, 1.4, 0.2, 'Iris-setosa']], dtype=object)"
236 | ]
237 | },
238 | "execution_count": 7,
239 | "metadata": {},
240 | "output_type": "execute_result"
241 | }
242 | ],
243 | "source": [
244 | "data = train_df.values\n",
245 | "data[:5]"
246 | ]
247 | },
248 | {
249 | "cell_type": "markdown",
250 | "metadata": {},
251 | "source": [
252 | "### Data pure?"
253 | ]
254 | },
255 | {
256 | "cell_type": "code",
257 | "execution_count": 8,
258 | "metadata": {},
259 | "outputs": [],
260 | "source": [
261 | "def check_purity(data):\n",
262 | " \n",
263 | " label_column = data[:, -1]\n",
264 | " unique_classes = np.unique(label_column)\n",
265 | "\n",
266 | " if len(unique_classes) == 1:\n",
267 | " return True\n",
268 | " else:\n",
269 | " return False"
270 | ]
271 | },
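  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A quick sanity check of `check_purity` (illustration only): the full training data holds all three species, while the rows with `petal_width <= 0.8` contain only Iris-setosa, so the two calls below should return `False` and `True`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Illustration: the whole training set is not pure, but the petal_width <= 0.8 subset is.\n",
    "check_purity(data), check_purity(train_df[train_df.petal_width <= 0.8].values)"
   ]
  },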
272 | {
273 | "cell_type": "markdown",
274 | "metadata": {},
275 | "source": [
276 | "### Classify"
277 | ]
278 | },
279 | {
280 | "cell_type": "code",
281 | "execution_count": 9,
282 | "metadata": {},
283 | "outputs": [],
284 | "source": [
285 | "def classify_data(data):\n",
286 | " \n",
287 | " label_column = data[:, -1]\n",
288 | " unique_classes, counts_unique_classes = np.unique(label_column, return_counts=True)\n",
289 | "\n",
290 | " index = counts_unique_classes.argmax()\n",
291 | " classification = unique_classes[index]\n",
292 | " \n",
293 | " return classification"
294 | ]
295 | },
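  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For example, `classify_data` simply returns the most frequent label of whatever subset it is given (illustration only):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Majority label of the training data (illustration only).\n",
    "classify_data(data)"
   ]
  },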
296 | {
297 | "cell_type": "code",
298 | "execution_count": null,
299 | "metadata": {},
300 | "outputs": [],
301 | "source": []
302 | }
303 | ],
304 | "metadata": {
305 | "kernelspec": {
306 | "display_name": "Python 3",
307 | "language": "python",
308 | "name": "python3"
309 | },
310 | "language_info": {
311 | "codemirror_mode": {
312 | "name": "ipython",
313 | "version": 3
314 | },
315 | "file_extension": ".py",
316 | "mimetype": "text/x-python",
317 | "name": "python",
318 | "nbconvert_exporter": "python",
319 | "pygments_lexer": "ipython3",
320 | "version": "3.7.3"
321 | }
322 | },
323 | "nbformat": 4,
324 | "nbformat_minor": 4
325 | }
326 |
--------------------------------------------------------------------------------
/notebooks/Video 03 - Helper Functions 2.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "metadata": {},
6 | "source": [
7 | "The goal of this notebook is to code a decision tree classifier that can be used with the following API:\n",
8 | "\n",
9 | "```Python\n",
10 | "df = pd.read_csv(\"data.csv\")\n",
11 | "\n",
12 | "train_df, test_df = train_test_split(df, test_size=0.2)\n",
13 | "tree = decision_tree_algorithm(train_df)\n",
14 | "accuracy = calculate_accuracy(test_df, tree)\n",
15 | "```\n",
16 | "\n",
17 | "The algorithm that is going to be implemented looks like this:\n",
18 | "\n",
19 | "(figure: decision tree algorithm)"
20 | ]
21 | },
22 | {
23 | "cell_type": "markdown",
24 | "metadata": {},
25 | "source": [
26 | "# Import Statements"
27 | ]
28 | },
29 | {
30 | "cell_type": "code",
31 | "execution_count": 1,
32 | "metadata": {},
33 | "outputs": [],
34 | "source": [
35 | "import numpy as np\n",
36 | "import pandas as pd\n",
37 | "\n",
38 | "import matplotlib.pyplot as plt\n",
39 | "import seaborn as sns\n",
40 | "\n",
41 | "import random\n",
42 | "from pprint import pprint"
43 | ]
44 | },
45 | {
46 | "cell_type": "code",
47 | "execution_count": 2,
48 | "metadata": {},
49 | "outputs": [],
50 | "source": [
51 | "%matplotlib inline\n",
52 | "sns.set_style(\"darkgrid\")"
53 | ]
54 | },
55 | {
56 | "cell_type": "markdown",
57 | "metadata": {},
58 | "source": [
59 | "# Load and Prepare Data"
60 | ]
61 | },
62 | {
63 | "cell_type": "markdown",
64 | "metadata": {},
65 | "source": [
66 | "#### Format of the data\n",
67 | "- the last column of the data frame must contain the label and it must also be called \"label\"\n",
68 | "- there should be no missing values in the data frame"
69 | ]
70 | },
71 | {
72 | "cell_type": "code",
73 | "execution_count": 3,
74 | "metadata": {},
75 | "outputs": [],
76 | "source": [
77 | "df = pd.read_csv(\"../data/Iris.csv\")\n",
78 | "df = df.drop(\"Id\", axis=1)\n",
79 | "df = df.rename(columns={\"species\": \"label\"})"
80 | ]
81 | },
82 | {
83 | "cell_type": "code",
84 | "execution_count": 4,
85 | "metadata": {},
86 | "outputs": [
87 | {
88 | "data": {
160 | "text/plain": [
161 | " sepal_length sepal_width petal_length petal_width label\n",
162 | "0 5.1 3.5 1.4 0.2 Iris-setosa\n",
163 | "1 4.9 3.0 1.4 0.2 Iris-setosa\n",
164 | "2 4.7 3.2 1.3 0.2 Iris-setosa\n",
165 | "3 4.6 3.1 1.5 0.2 Iris-setosa\n",
166 | "4 5.0 3.6 1.4 0.2 Iris-setosa"
167 | ]
168 | },
169 | "execution_count": 4,
170 | "metadata": {},
171 | "output_type": "execute_result"
172 | }
173 | ],
174 | "source": [
175 | "df.head()"
176 | ]
177 | },
178 | {
179 | "cell_type": "markdown",
180 | "metadata": {},
181 | "source": [
182 | "# Train-Test-Split"
183 | ]
184 | },
185 | {
186 | "cell_type": "code",
187 | "execution_count": 5,
188 | "metadata": {},
189 | "outputs": [],
190 | "source": [
191 | "def train_test_split(df, test_size):\n",
192 | " \n",
193 | " if isinstance(test_size, float):\n",
194 | " test_size = round(test_size * len(df))\n",
195 | "\n",
196 | " indices = df.index.tolist()\n",
197 | " test_indices = random.sample(population=indices, k=test_size)\n",
198 | "\n",
199 | " test_df = df.loc[test_indices]\n",
200 | " train_df = df.drop(test_indices)\n",
201 | " \n",
202 | " return train_df, test_df"
203 | ]
204 | },
205 | {
206 | "cell_type": "code",
207 | "execution_count": 6,
208 | "metadata": {},
209 | "outputs": [],
210 | "source": [
211 | "random.seed(0)\n",
212 | "train_df, test_df = train_test_split(df, test_size=20)"
213 | ]
214 | },
215 | {
216 | "cell_type": "markdown",
217 | "metadata": {},
218 | "source": [
219 | "# Helper Functions\n",
220 | "\n",
221 | "The helper functions operate on a NumPy 2d-array. Therefore, let’s create a variable called “data” to see what we will be working with."
222 | ]
223 | },
224 | {
225 | "cell_type": "code",
226 | "execution_count": 7,
227 | "metadata": {},
228 | "outputs": [
229 | {
230 | "data": {
231 | "text/plain": [
232 | "array([[5.1, 3.5, 1.4, 0.2, 'Iris-setosa'],\n",
233 | " [4.9, 3.0, 1.4, 0.2, 'Iris-setosa'],\n",
234 | " [4.7, 3.2, 1.3, 0.2, 'Iris-setosa'],\n",
235 | " [4.6, 3.1, 1.5, 0.2, 'Iris-setosa'],\n",
236 | " [5.0, 3.6, 1.4, 0.2, 'Iris-setosa']], dtype=object)"
237 | ]
238 | },
239 | "execution_count": 7,
240 | "metadata": {},
241 | "output_type": "execute_result"
242 | }
243 | ],
244 | "source": [
245 | "data = train_df.values\n",
246 | "data[:5]"
247 | ]
248 | },
249 | {
250 | "cell_type": "markdown",
251 | "metadata": {},
252 | "source": [
253 | "### Data pure?"
254 | ]
255 | },
256 | {
257 | "cell_type": "code",
258 | "execution_count": 8,
259 | "metadata": {},
260 | "outputs": [],
261 | "source": [
262 | "def check_purity(data):\n",
263 | " \n",
264 | " label_column = data[:, -1]\n",
265 | " unique_classes = np.unique(label_column)\n",
266 | "\n",
267 | " if len(unique_classes) == 1:\n",
268 | " return True\n",
269 | " else:\n",
270 | " return False"
271 | ]
272 | },
273 | {
274 | "cell_type": "markdown",
275 | "metadata": {},
276 | "source": [
277 | "### Classify"
278 | ]
279 | },
280 | {
281 | "cell_type": "code",
282 | "execution_count": 9,
283 | "metadata": {},
284 | "outputs": [],
285 | "source": [
286 | "def classify_data(data):\n",
287 | " \n",
288 | " label_column = data[:, -1]\n",
289 | " unique_classes, counts_unique_classes = np.unique(label_column, return_counts=True)\n",
290 | "\n",
291 | " index = counts_unique_classes.argmax()\n",
292 | " classification = unique_classes[index]\n",
293 | " \n",
294 | " return classification"
295 | ]
296 | },
297 | {
298 | "cell_type": "markdown",
299 | "metadata": {},
300 | "source": [
301 | "### Potential splits?"
302 | ]
303 | },
304 | {
305 | "cell_type": "code",
306 | "execution_count": 10,
307 | "metadata": {},
308 | "outputs": [],
309 | "source": [
310 | "def get_potential_splits(data):\n",
311 | " \n",
312 | " potential_splits = {}\n",
313 | " _, n_columns = data.shape\n",
314 | " for column_index in range(n_columns - 1): # excluding the last column which is the label\n",
315 | " potential_splits[column_index] = []\n",
316 | " values = data[:, column_index]\n",
317 | " unique_values = np.unique(values)\n",
318 | "\n",
319 | " for index in range(len(unique_values)):\n",
320 | " if index != 0:\n",
321 | " current_value = unique_values[index]\n",
322 | " previous_value = unique_values[index - 1]\n",
323 | " potential_split = (current_value + previous_value) / 2\n",
324 | " \n",
325 | " potential_splits[column_index].append(potential_split)\n",
326 | " \n",
327 | " return potential_splits"
328 | ]
329 | },
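  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To see what this returns (illustration only): the candidate thresholds are the midpoints between consecutive unique values of each feature, keyed by column index."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Number of candidate thresholds per feature column (illustration only).\n",
    "potential_splits = get_potential_splits(data)\n",
    "{column_index: len(splits) for column_index, splits in potential_splits.items()}"
   ]
  },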
330 | {
331 | "cell_type": "code",
332 | "execution_count": null,
333 | "metadata": {},
334 | "outputs": [],
335 | "source": []
336 | }
337 | ],
338 | "metadata": {
339 | "kernelspec": {
340 | "display_name": "Python 3",
341 | "language": "python",
342 | "name": "python3"
343 | },
344 | "language_info": {
345 | "codemirror_mode": {
346 | "name": "ipython",
347 | "version": 3
348 | },
349 | "file_extension": ".py",
350 | "mimetype": "text/x-python",
351 | "name": "python",
352 | "nbconvert_exporter": "python",
353 | "pygments_lexer": "ipython3",
354 | "version": "3.7.3"
355 | }
356 | },
357 | "nbformat": 4,
358 | "nbformat_minor": 4
359 | }
360 |
--------------------------------------------------------------------------------
/notebooks/Video 04 - Helper Functions 3.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "metadata": {},
6 | "source": [
7 | "The goal of this notebook is to code a decision tree classifier that can be used with the following API:\n",
8 | "\n",
9 | "```Python\n",
10 | "df = pd.read_csv(\"data.csv\")\n",
11 | "\n",
12 | "train_df, test_df = train_test_split(df, test_size=0.2)\n",
13 | "tree = decision_tree_algorithm(train_df)\n",
14 | "accuracy = calculate_accuracy(test_df, tree)\n",
15 | "```\n",
16 | "\n",
17 | "The algorithm that is going to be implemented looks like this:\n",
18 | "\n",
19 | "(figure: decision tree algorithm)"
20 | ]
21 | },
22 | {
23 | "cell_type": "markdown",
24 | "metadata": {},
25 | "source": [
26 | "# Import Statements"
27 | ]
28 | },
29 | {
30 | "cell_type": "code",
31 | "execution_count": 1,
32 | "metadata": {},
33 | "outputs": [],
34 | "source": [
35 | "import numpy as np\n",
36 | "import pandas as pd\n",
37 | "\n",
38 | "import matplotlib.pyplot as plt\n",
39 | "import seaborn as sns\n",
40 | "\n",
41 | "import random\n",
42 | "from pprint import pprint"
43 | ]
44 | },
45 | {
46 | "cell_type": "code",
47 | "execution_count": 2,
48 | "metadata": {},
49 | "outputs": [],
50 | "source": [
51 | "%matplotlib inline\n",
52 | "sns.set_style(\"darkgrid\")"
53 | ]
54 | },
55 | {
56 | "cell_type": "markdown",
57 | "metadata": {},
58 | "source": [
59 | "# Load and Prepare Data"
60 | ]
61 | },
62 | {
63 | "cell_type": "markdown",
64 | "metadata": {},
65 | "source": [
66 | "#### Format of the data\n",
67 | "- the last column of the data frame must contain the label and it must also be called \"label\"\n",
68 | "- there should be no missing values in the data frame"
69 | ]
70 | },
71 | {
72 | "cell_type": "code",
73 | "execution_count": 3,
74 | "metadata": {},
75 | "outputs": [],
76 | "source": [
77 | "df = pd.read_csv(\"../data/Iris.csv\")\n",
78 | "df = df.drop(\"Id\", axis=1)\n",
79 | "df = df.rename(columns={\"species\": \"label\"})"
80 | ]
81 | },
82 | {
83 | "cell_type": "code",
84 | "execution_count": 4,
85 | "metadata": {},
86 | "outputs": [
87 | {
88 | "data": {
160 | "text/plain": [
161 | " sepal_length sepal_width petal_length petal_width label\n",
162 | "0 5.1 3.5 1.4 0.2 Iris-setosa\n",
163 | "1 4.9 3.0 1.4 0.2 Iris-setosa\n",
164 | "2 4.7 3.2 1.3 0.2 Iris-setosa\n",
165 | "3 4.6 3.1 1.5 0.2 Iris-setosa\n",
166 | "4 5.0 3.6 1.4 0.2 Iris-setosa"
167 | ]
168 | },
169 | "execution_count": 4,
170 | "metadata": {},
171 | "output_type": "execute_result"
172 | }
173 | ],
174 | "source": [
175 | "df.head()"
176 | ]
177 | },
178 | {
179 | "cell_type": "markdown",
180 | "metadata": {},
181 | "source": [
182 | "# Train-Test-Split"
183 | ]
184 | },
185 | {
186 | "cell_type": "code",
187 | "execution_count": 5,
188 | "metadata": {},
189 | "outputs": [],
190 | "source": [
191 | "def train_test_split(df, test_size):\n",
192 | " \n",
193 | " if isinstance(test_size, float):\n",
194 | " test_size = round(test_size * len(df))\n",
195 | "\n",
196 | " indices = df.index.tolist()\n",
197 | " test_indices = random.sample(population=indices, k=test_size)\n",
198 | "\n",
199 | " test_df = df.loc[test_indices]\n",
200 | " train_df = df.drop(test_indices)\n",
201 | " \n",
202 | " return train_df, test_df"
203 | ]
204 | },
205 | {
206 | "cell_type": "code",
207 | "execution_count": 6,
208 | "metadata": {},
209 | "outputs": [],
210 | "source": [
211 | "random.seed(0)\n",
212 | "train_df, test_df = train_test_split(df, test_size=20)"
213 | ]
214 | },
215 | {
216 | "cell_type": "markdown",
217 | "metadata": {},
218 | "source": [
219 | "# Helper Functions\n",
220 | "The helper functions operate on a NumPy 2d-array. Therefore, let’s create a variable called “data” to see what we will be working with."
221 | ]
222 | },
223 | {
224 | "cell_type": "code",
225 | "execution_count": 7,
226 | "metadata": {},
227 | "outputs": [
228 | {
229 | "data": {
230 | "text/plain": [
231 | "array([[5.1, 3.5, 1.4, 0.2, 'Iris-setosa'],\n",
232 | " [4.9, 3.0, 1.4, 0.2, 'Iris-setosa'],\n",
233 | " [4.7, 3.2, 1.3, 0.2, 'Iris-setosa'],\n",
234 | " [4.6, 3.1, 1.5, 0.2, 'Iris-setosa'],\n",
235 | " [5.0, 3.6, 1.4, 0.2, 'Iris-setosa']], dtype=object)"
236 | ]
237 | },
238 | "execution_count": 7,
239 | "metadata": {},
240 | "output_type": "execute_result"
241 | }
242 | ],
243 | "source": [
244 | "data = train_df.values\n",
245 | "data[:5]"
246 | ]
247 | },
248 | {
249 | "cell_type": "markdown",
250 | "metadata": {},
251 | "source": [
252 | "### Data pure?"
253 | ]
254 | },
255 | {
256 | "cell_type": "code",
257 | "execution_count": 8,
258 | "metadata": {},
259 | "outputs": [],
260 | "source": [
261 | "def check_purity(data):\n",
262 | " \n",
263 | " label_column = data[:, -1]\n",
264 | " unique_classes = np.unique(label_column)\n",
265 | "\n",
266 | " if len(unique_classes) == 1:\n",
267 | " return True\n",
268 | " else:\n",
269 | " return False"
270 | ]
271 | },
272 | {
273 | "cell_type": "markdown",
274 | "metadata": {},
275 | "source": [
276 | "### Classify"
277 | ]
278 | },
279 | {
280 | "cell_type": "code",
281 | "execution_count": 9,
282 | "metadata": {},
283 | "outputs": [],
284 | "source": [
285 | "def classify_data(data):\n",
286 | " \n",
287 | " label_column = data[:, -1]\n",
288 | " unique_classes, counts_unique_classes = np.unique(label_column, return_counts=True)\n",
289 | "\n",
290 | " index = counts_unique_classes.argmax()\n",
291 | " classification = unique_classes[index]\n",
292 | " \n",
293 | " return classification"
294 | ]
295 | },
296 | {
297 | "cell_type": "markdown",
298 | "metadata": {},
299 | "source": [
300 | "### Potential splits?"
301 | ]
302 | },
303 | {
304 | "cell_type": "code",
305 | "execution_count": 10,
306 | "metadata": {},
307 | "outputs": [],
308 | "source": [
309 | "def get_potential_splits(data):\n",
310 | " \n",
311 | " potential_splits = {}\n",
312 | " _, n_columns = data.shape\n",
313 | " for column_index in range(n_columns - 1): # excluding the last column which is the label\n",
314 | " potential_splits[column_index] = []\n",
315 | " values = data[:, column_index]\n",
316 | " unique_values = np.unique(values)\n",
317 | "\n",
318 | " for index in range(len(unique_values)):\n",
319 | " if index != 0:\n",
320 | " current_value = unique_values[index]\n",
321 | " previous_value = unique_values[index - 1]\n",
322 | " potential_split = (current_value + previous_value) / 2\n",
323 | " \n",
324 | " potential_splits[column_index].append(potential_split)\n",
325 | " \n",
326 | " return potential_splits"
327 | ]
328 | },
329 | {
330 | "cell_type": "markdown",
331 | "metadata": {},
332 | "source": [
333 | "### Split Data"
334 | ]
335 | },
336 | {
337 | "cell_type": "code",
338 | "execution_count": 11,
339 | "metadata": {},
340 | "outputs": [],
341 | "source": [
342 | "def split_data(data, split_column, split_value):\n",
343 | " \n",
344 | " split_column_values = data[:, split_column]\n",
345 | "\n",
346 | " data_below = data[split_column_values <= split_value]\n",
347 | " data_above = data[split_column_values > split_value]\n",
348 | " \n",
349 | " return data_below, data_above"
350 | ]
351 | },
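  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For instance, splitting on the petal_width column (index 3) at 0.8 partitions the rows into the two groups that the questions of the tree will later refer to (illustration only):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# The two parts always add up to the full data set (illustration only).\n",
    "data_below, data_above = split_data(data, split_column=3, split_value=0.8)\n",
    "len(data_below), len(data_above), len(data)"
   ]
  },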
352 | {
353 | "cell_type": "markdown",
354 | "metadata": {},
355 | "source": [
356 | "### Lowest Overall Entropy?"
357 | ]
358 | },
359 | {
360 | "cell_type": "code",
361 | "execution_count": 12,
362 | "metadata": {},
363 | "outputs": [],
364 | "source": [
365 | "def calculate_entropy(data):\n",
366 | " \n",
367 | " label_column = data[:, -1]\n",
368 | " _, counts = np.unique(label_column, return_counts=True)\n",
369 | "\n",
370 | " probabilities = counts / counts.sum()\n",
371 | " entropy = sum(probabilities * -np.log2(probabilities))\n",
372 | " \n",
373 | " return entropy"
374 | ]
375 | },
376 | {
377 | "cell_type": "code",
378 | "execution_count": 13,
379 | "metadata": {},
380 | "outputs": [],
381 | "source": [
382 | "def calculate_overall_entropy(data_below, data_above):\n",
383 | " \n",
384 | " n = len(data_below) + len(data_above)\n",
385 | " p_data_below = len(data_below) / n\n",
386 | " p_data_above = len(data_above) / n\n",
387 | "\n",
388 | " overall_entropy = (p_data_below * calculate_entropy(data_below) \n",
389 | " + p_data_above * calculate_entropy(data_above))\n",
390 | " \n",
391 | " return overall_entropy"
392 | ]
393 | },
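  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a small worked example (illustration only): a pure subset has entropy 0, and `calculate_overall_entropy` is just the size-weighted average of the two children's entropies."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# For the split at petal_width <= 0.8 the 'below' part is pure (entropy 0),\n",
    "# so the overall entropy comes entirely from the mixed 'above' part (illustration only).\n",
    "data_below, data_above = split_data(data, split_column=3, split_value=0.8)\n",
    "calculate_entropy(data_below), calculate_entropy(data_above), calculate_overall_entropy(data_below, data_above)"
   ]
  },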
394 | {
395 | "cell_type": "code",
396 | "execution_count": 14,
397 | "metadata": {},
398 | "outputs": [],
399 | "source": [
400 | "def determine_best_split(data, potential_splits):\n",
401 | " \n",
402 | " overall_entropy = 9999\n",
403 | " for column_index in potential_splits:\n",
404 | " for value in potential_splits[column_index]:\n",
405 | " data_below, data_above = split_data(data, split_column=column_index, split_value=value)\n",
406 | " current_overall_entropy = calculate_overall_entropy(data_below, data_above)\n",
407 | "\n",
408 | " if current_overall_entropy <= overall_entropy:\n",
409 | " overall_entropy = current_overall_entropy\n",
410 | " best_split_column = column_index\n",
411 | " best_split_value = value\n",
412 | " \n",
413 | " return best_split_column, best_split_value"
414 | ]
415 | },
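  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Putting the helper functions together (illustration only): the best split is the column/value pair with the lowest overall entropy among all candidates, which for the Iris data should be one of the petal features."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Returns (best_split_column, best_split_value); illustration only.\n",
    "potential_splits = get_potential_splits(data)\n",
    "determine_best_split(data, potential_splits)"
   ]
  },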
416 | {
417 | "cell_type": "code",
418 | "execution_count": null,
419 | "metadata": {},
420 | "outputs": [],
421 | "source": []
422 | }
423 | ],
424 | "metadata": {
425 | "kernelspec": {
426 | "display_name": "Python 3",
427 | "language": "python",
428 | "name": "python3"
429 | },
430 | "language_info": {
431 | "codemirror_mode": {
432 | "name": "ipython",
433 | "version": 3
434 | },
435 | "file_extension": ".py",
436 | "mimetype": "text/x-python",
437 | "name": "python",
438 | "nbconvert_exporter": "python",
439 | "pygments_lexer": "ipython3",
440 | "version": "3.7.3"
441 | }
442 | },
443 | "nbformat": 4,
444 | "nbformat_minor": 4
445 | }
446 |
--------------------------------------------------------------------------------
/notebooks/Video 05 - Main Algorithm 1.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "metadata": {},
6 | "source": [
7 | "The goal of this notebook is to code a decision tree classifier that can be used with the following API:\n",
8 | "\n",
9 | "```Python\n",
10 | "df = pd.read_csv(\"data.csv\")\n",
11 | "\n",
12 | "train_df, test_df = train_test_split(df, test_size=0.2)\n",
13 | "tree = decision_tree_algorithm(train_df)\n",
14 | "accuracy = calculate_accuracy(test_df, tree)\n",
15 | "```\n",
16 | "\n",
17 | "The algorithm that is going to be implemented looks like this:\n",
18 | "\n",
19 | "(figure: decision tree algorithm)"
20 | ]
21 | },
22 | {
23 | "cell_type": "markdown",
24 | "metadata": {},
25 | "source": [
26 | "# Import Statements"
27 | ]
28 | },
29 | {
30 | "cell_type": "code",
31 | "execution_count": 1,
32 | "metadata": {},
33 | "outputs": [],
34 | "source": [
35 | "import numpy as np\n",
36 | "import pandas as pd\n",
37 | "\n",
38 | "import matplotlib.pyplot as plt\n",
39 | "import seaborn as sns\n",
40 | "\n",
41 | "import random\n",
42 | "from pprint import pprint"
43 | ]
44 | },
45 | {
46 | "cell_type": "code",
47 | "execution_count": 2,
48 | "metadata": {},
49 | "outputs": [],
50 | "source": [
51 | "%matplotlib inline\n",
52 | "sns.set_style(\"darkgrid\")"
53 | ]
54 | },
55 | {
56 | "cell_type": "markdown",
57 | "metadata": {},
58 | "source": [
59 | "# Load and Prepare Data"
60 | ]
61 | },
62 | {
63 | "cell_type": "markdown",
64 | "metadata": {},
65 | "source": [
66 | "#### Format of the data\n",
67 | "- the last column of the data frame must contain the label and it must also be called \"label\"\n",
68 | "- there should be no missing values in the data frame"
69 | ]
70 | },
71 | {
72 | "cell_type": "code",
73 | "execution_count": 3,
74 | "metadata": {},
75 | "outputs": [],
76 | "source": [
77 | "df = pd.read_csv(\"../data/Iris.csv\")\n",
78 | "df = df.drop(\"Id\", axis=1)\n",
79 | "df = df.rename(columns={\"species\": \"label\"})"
80 | ]
81 | },
82 | {
83 | "cell_type": "code",
84 | "execution_count": 4,
85 | "metadata": {},
86 | "outputs": [
87 | {
88 | "data": {
160 | "text/plain": [
161 | " sepal_length sepal_width petal_length petal_width label\n",
162 | "0 5.1 3.5 1.4 0.2 Iris-setosa\n",
163 | "1 4.9 3.0 1.4 0.2 Iris-setosa\n",
164 | "2 4.7 3.2 1.3 0.2 Iris-setosa\n",
165 | "3 4.6 3.1 1.5 0.2 Iris-setosa\n",
166 | "4 5.0 3.6 1.4 0.2 Iris-setosa"
167 | ]
168 | },
169 | "execution_count": 4,
170 | "metadata": {},
171 | "output_type": "execute_result"
172 | }
173 | ],
174 | "source": [
175 | "df.head()"
176 | ]
177 | },
178 | {
179 | "cell_type": "markdown",
180 | "metadata": {},
181 | "source": [
182 | "# Train-Test-Split"
183 | ]
184 | },
185 | {
186 | "cell_type": "code",
187 | "execution_count": 5,
188 | "metadata": {},
189 | "outputs": [],
190 | "source": [
191 | "def train_test_split(df, test_size):\n",
192 | " \n",
193 | " if isinstance(test_size, float):\n",
194 | " test_size = round(test_size * len(df))\n",
195 | "\n",
196 | " indices = df.index.tolist()\n",
197 | " test_indices = random.sample(population=indices, k=test_size)\n",
198 | "\n",
199 | " test_df = df.loc[test_indices]\n",
200 | " train_df = df.drop(test_indices)\n",
201 | " \n",
202 | " return train_df, test_df"
203 | ]
204 | },
205 | {
206 | "cell_type": "code",
207 | "execution_count": 6,
208 | "metadata": {},
209 | "outputs": [],
210 | "source": [
211 | "random.seed(0)\n",
212 | "train_df, test_df = train_test_split(df, test_size=20)"
213 | ]
214 | },
215 | {
216 | "cell_type": "markdown",
217 | "metadata": {},
218 | "source": [
219 | "# Helper Functions\n",
220 | "\n",
221 | "The helper functions operate on a NumPy 2d-array. Therefore, let’s create a variable called “data” to see what we will be working with."
222 | ]
223 | },
224 | {
225 | "cell_type": "code",
226 | "execution_count": 7,
227 | "metadata": {},
228 | "outputs": [
229 | {
230 | "data": {
231 | "text/plain": [
232 | "array([[5.1, 3.5, 1.4, 0.2, 'Iris-setosa'],\n",
233 | " [4.9, 3.0, 1.4, 0.2, 'Iris-setosa'],\n",
234 | " [4.7, 3.2, 1.3, 0.2, 'Iris-setosa'],\n",
235 | " [4.6, 3.1, 1.5, 0.2, 'Iris-setosa'],\n",
236 | " [5.0, 3.6, 1.4, 0.2, 'Iris-setosa']], dtype=object)"
237 | ]
238 | },
239 | "execution_count": 7,
240 | "metadata": {},
241 | "output_type": "execute_result"
242 | }
243 | ],
244 | "source": [
245 | "data = train_df.values\n",
246 | "data[:5]"
247 | ]
248 | },
249 | {
250 | "cell_type": "markdown",
251 | "metadata": {},
252 | "source": [
253 | "### Data pure?"
254 | ]
255 | },
256 | {
257 | "cell_type": "code",
258 | "execution_count": 8,
259 | "metadata": {},
260 | "outputs": [],
261 | "source": [
262 | "def check_purity(data):\n",
263 | " \n",
264 | " label_column = data[:, -1]\n",
265 | " unique_classes = np.unique(label_column)\n",
266 | "\n",
267 | " if len(unique_classes) == 1:\n",
268 | " return True\n",
269 | " else:\n",
270 | " return False"
271 | ]
272 | },
273 | {
274 | "cell_type": "markdown",
275 | "metadata": {},
276 | "source": [
277 | "### Classify"
278 | ]
279 | },
280 | {
281 | "cell_type": "code",
282 | "execution_count": 9,
283 | "metadata": {},
284 | "outputs": [],
285 | "source": [
286 | "def classify_data(data):\n",
287 | " \n",
288 | " label_column = data[:, -1]\n",
289 | " unique_classes, counts_unique_classes = np.unique(label_column, return_counts=True)\n",
290 | "\n",
291 | " index = counts_unique_classes.argmax()\n",
292 | " classification = unique_classes[index]\n",
293 | " \n",
294 | " return classification"
295 | ]
296 | },
297 | {
298 | "cell_type": "markdown",
299 | "metadata": {},
300 | "source": [
301 | "### Potential splits?"
302 | ]
303 | },
304 | {
305 | "cell_type": "code",
306 | "execution_count": 10,
307 | "metadata": {},
308 | "outputs": [],
309 | "source": [
310 | "def get_potential_splits(data):\n",
311 | " \n",
312 | " potential_splits = {}\n",
313 | " _, n_columns = data.shape\n",
314 | " for column_index in range(n_columns - 1): # excluding the last column which is the label\n",
315 | " potential_splits[column_index] = []\n",
316 | " values = data[:, column_index]\n",
317 | " unique_values = np.unique(values)\n",
318 | "\n",
319 | " for index in range(len(unique_values)):\n",
320 | " if index != 0:\n",
321 | " current_value = unique_values[index]\n",
322 | " previous_value = unique_values[index - 1]\n",
323 | " potential_split = (current_value + previous_value) / 2\n",
324 | " \n",
325 | " potential_splits[column_index].append(potential_split)\n",
326 | " \n",
327 | " return potential_splits"
328 | ]
329 | },
330 | {
331 | "cell_type": "markdown",
332 | "metadata": {},
333 | "source": [
334 | "### Split Data"
335 | ]
336 | },
337 | {
338 | "cell_type": "code",
339 | "execution_count": 11,
340 | "metadata": {},
341 | "outputs": [],
342 | "source": [
343 | "def split_data(data, split_column, split_value):\n",
344 | " \n",
345 | " split_column_values = data[:, split_column]\n",
346 | "\n",
347 | " data_below = data[split_column_values <= split_value]\n",
348 | " data_above = data[split_column_values > split_value]\n",
349 | " \n",
350 | " return data_below, data_above"
351 | ]
352 | },
353 | {
354 | "cell_type": "markdown",
355 | "metadata": {},
356 | "source": [
357 | "### Lowest Overall Entropy?"
358 | ]
359 | },
360 | {
361 | "cell_type": "code",
362 | "execution_count": 12,
363 | "metadata": {},
364 | "outputs": [],
365 | "source": [
366 | "def calculate_entropy(data):\n",
367 | " \n",
368 | " label_column = data[:, -1]\n",
369 | " _, counts = np.unique(label_column, return_counts=True)\n",
370 | "\n",
371 | " probabilities = counts / counts.sum()\n",
372 | " entropy = sum(probabilities * -np.log2(probabilities))\n",
373 | " \n",
374 | " return entropy"
375 | ]
376 | },
377 | {
378 | "cell_type": "code",
379 | "execution_count": 13,
380 | "metadata": {},
381 | "outputs": [],
382 | "source": [
383 | "def calculate_overall_entropy(data_below, data_above):\n",
384 | " \n",
385 | " n = len(data_below) + len(data_above)\n",
386 | " p_data_below = len(data_below) / n\n",
387 | " p_data_above = len(data_above) / n\n",
388 | "\n",
389 | " overall_entropy = (p_data_below * calculate_entropy(data_below) \n",
390 | " + p_data_above * calculate_entropy(data_above))\n",
391 | " \n",
392 | " return overall_entropy"
393 | ]
394 | },
395 | {
396 | "cell_type": "code",
397 | "execution_count": 14,
398 | "metadata": {},
399 | "outputs": [],
400 | "source": [
401 | "def determine_best_split(data, potential_splits):\n",
402 | " \n",
403 | " overall_entropy = 9999\n",
404 | " for column_index in potential_splits:\n",
405 | " for value in potential_splits[column_index]:\n",
406 | " data_below, data_above = split_data(data, split_column=column_index, split_value=value)\n",
407 | " current_overall_entropy = calculate_overall_entropy(data_below, data_above)\n",
408 | "\n",
409 | " if current_overall_entropy <= overall_entropy:\n",
410 | " overall_entropy = current_overall_entropy\n",
411 | " best_split_column = column_index\n",
412 | " best_split_value = value\n",
413 | " \n",
414 | " return best_split_column, best_split_value"
415 | ]
416 | },
417 | {
418 | "cell_type": "markdown",
419 | "metadata": {},
420 | "source": [
421 | "# Decision Tree Algorithm"
422 | ]
423 | },
424 | {
425 | "cell_type": "markdown",
426 | "metadata": {},
427 | "source": [
428 | "### Representation of the Decision Tree"
429 | ]
430 | },
431 | {
432 | "cell_type": "code",
433 | "execution_count": 15,
434 | "metadata": {},
435 | "outputs": [],
436 | "source": [
437 | "sub_tree = {\"question\": [\"yes_answer\", \n",
438 | " \"no_answer\"]}"
439 | ]
440 | },
441 | {
442 | "cell_type": "code",
443 | "execution_count": 16,
444 | "metadata": {},
445 | "outputs": [],
446 | "source": [
447 | "example_tree = {\"petal_width <= 0.8\": [\"Iris-setosa\", \n",
448 | " {\"petal_width <= 1.65\": [{\"petal_length <= 4.9\": [\"Iris-versicolor\", \n",
449 | " \"Iris-virginica\"]}, \n",
450 | " \"Iris-virginica\"]}]}"
451 | ]
452 | },
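  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In this nested-dict representation the key is the question and the value is a two-element list: index 0 holds the \"yes\" branch and index 1 the \"no\" branch; a string is a leaf and another dict is a sub-tree. A quick illustration:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Reading the example tree: its root question and the two answers (illustration only).\n",
    "question = list(example_tree.keys())[0]\n",
    "yes_answer, no_answer = example_tree[question]\n",
    "question, yes_answer"
   ]
  },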
453 | {
454 | "cell_type": "markdown",
455 | "metadata": {},
456 | "source": [
457 | "### Algorithm"
458 | ]
459 | },
460 | {
461 | "cell_type": "code",
462 | "execution_count": 17,
463 | "metadata": {},
464 | "outputs": [],
465 | "source": [
466 | "def decision_tree_algorithm(df, counter=0):\n",
467 | " \n",
468 | " # data preparations\n",
469 | " if counter == 0:\n",
470 | " data = df.values\n",
471 | " else:\n",
472 | " data = df \n",
473 | " \n",
474 | " \n",
475 | " # base cases\n",
476 | " if check_purity(data):\n",
477 | " classification = classify_data(data)\n",
478 | " return classification\n",
479 | "\n",
480 | " \n",
481 | " # recursive part\n",
482 | " else: \n",
483 | " counter += 1\n",
484 | "\n",
485 | " # helper functions \n",
486 | " potential_splits = get_potential_splits(data)\n",
487 | " split_column, split_value = determine_best_split(data, potential_splits)\n",
488 | " data_below, data_above = split_data(data, split_column, split_value)\n",
489 | " \n",
490 | " # instantiate sub-tree\n",
491 | " question = \"{} <= {}\".format(split_column, split_value)\n",
492 | " sub_tree = {question: []}\n",
493 | " \n",
494 | " # find answers (recursion)\n",
495 | " yes_answer = decision_tree_algorithm(data_below, counter)\n",
496 | " no_answer = decision_tree_algorithm(data_above, counter)\n",
497 | " \n",
498 | " sub_tree[question].append(yes_answer)\n",
499 | " sub_tree[question].append(no_answer)\n",
500 | " \n",
501 | " return sub_tree"
502 | ]
503 | },
504 | {
505 | "cell_type": "code",
506 | "execution_count": 18,
507 | "metadata": {},
508 | "outputs": [
509 | {
510 | "name": "stdout",
511 | "output_type": "stream",
512 | "text": [
513 | "{'3 <= 0.8': ['Iris-setosa',\n",
514 | " {'3 <= 1.65': [{'2 <= 4.95': ['Iris-versicolor',\n",
515 | " {'3 <= 1.55': ['Iris-virginica',\n",
516 | " 'Iris-versicolor']}]},\n",
517 | " {'2 <= 4.85': [{'1 <= 3.1': ['Iris-virginica',\n",
518 | " 'Iris-versicolor']},\n",
519 | " 'Iris-virginica']}]}]}\n"
520 | ]
521 | }
522 | ],
523 | "source": [
524 | "tree = decision_tree_algorithm(train_df)\n",
525 | "pprint(tree)"
526 | ]
527 | },
528 | {
529 | "cell_type": "code",
530 | "execution_count": null,
531 | "metadata": {},
532 | "outputs": [],
533 | "source": []
534 | }
535 | ],
536 | "metadata": {
537 | "kernelspec": {
538 | "display_name": "Python 3",
539 | "language": "python",
540 | "name": "python3"
541 | },
542 | "language_info": {
543 | "codemirror_mode": {
544 | "name": "ipython",
545 | "version": 3
546 | },
547 | "file_extension": ".py",
548 | "mimetype": "text/x-python",
549 | "name": "python",
550 | "nbconvert_exporter": "python",
551 | "pygments_lexer": "ipython3",
552 | "version": "3.7.3"
553 | }
554 | },
555 | "nbformat": 4,
556 | "nbformat_minor": 4
557 | }
558 |
--------------------------------------------------------------------------------
/notebooks/Video 06 - Main Algorithm 2.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "metadata": {},
6 | "source": [
7 | "The goal of this notebook is to code a decision tree classifier that can be used with the following API:\n",
8 | "\n",
9 | "```Python\n",
10 | "df = pd.read_csv(\"data.csv\")\n",
11 | "\n",
12 | "train_df, test_df = train_test_split(df, test_size=0.2)\n",
13 | "tree = decision_tree_algorithm(train_df)\n",
14 | "accuracy = calculate_accuracy(test_df, tree)\n",
15 | "```\n",
16 | "\n",
17 | "The algorithm that is going to be implemented looks like this:\n",
18 | "\n",
19 | "(figure: decision tree algorithm)"
20 | ]
21 | },
22 | {
23 | "cell_type": "markdown",
24 | "metadata": {},
25 | "source": [
26 | "# Import Statements"
27 | ]
28 | },
29 | {
30 | "cell_type": "code",
31 | "execution_count": 1,
32 | "metadata": {},
33 | "outputs": [],
34 | "source": [
35 | "import numpy as np\n",
36 | "import pandas as pd\n",
37 | "\n",
38 | "import matplotlib.pyplot as plt\n",
39 | "import seaborn as sns\n",
40 | "\n",
41 | "import random\n",
42 | "from pprint import pprint"
43 | ]
44 | },
45 | {
46 | "cell_type": "code",
47 | "execution_count": 2,
48 | "metadata": {},
49 | "outputs": [],
50 | "source": [
51 | "%matplotlib inline\n",
52 | "sns.set_style(\"darkgrid\")"
53 | ]
54 | },
55 | {
56 | "cell_type": "markdown",
57 | "metadata": {},
58 | "source": [
59 | "# Load and Prepare Data"
60 | ]
61 | },
62 | {
63 | "cell_type": "markdown",
64 | "metadata": {},
65 | "source": [
66 | "#### Format of the data\n",
67 | "- the last column of the data frame must contain the label and it must also be called \"label\"\n",
68 | "- there should be no missing values in the data frame"
69 | ]
70 | },
71 | {
72 | "cell_type": "code",
73 | "execution_count": 3,
74 | "metadata": {},
75 | "outputs": [],
76 | "source": [
77 | "df = pd.read_csv(\"../data/Iris.csv\")\n",
78 | "df = df.drop(\"Id\", axis=1)\n",
79 | "df = df.rename(columns={\"species\": \"label\"})"
80 | ]
81 | },
82 | {
83 | "cell_type": "code",
84 | "execution_count": 4,
85 | "metadata": {},
86 | "outputs": [
87 | {
88 | "data": {
160 | "text/plain": [
161 | " sepal_length sepal_width petal_length petal_width label\n",
162 | "0 5.1 3.5 1.4 0.2 Iris-setosa\n",
163 | "1 4.9 3.0 1.4 0.2 Iris-setosa\n",
164 | "2 4.7 3.2 1.3 0.2 Iris-setosa\n",
165 | "3 4.6 3.1 1.5 0.2 Iris-setosa\n",
166 | "4 5.0 3.6 1.4 0.2 Iris-setosa"
167 | ]
168 | },
169 | "execution_count": 4,
170 | "metadata": {},
171 | "output_type": "execute_result"
172 | }
173 | ],
174 | "source": [
175 | "df.head()"
176 | ]
177 | },
178 | {
179 | "cell_type": "markdown",
180 | "metadata": {},
181 | "source": [
182 | "# Train-Test-Split"
183 | ]
184 | },
185 | {
186 | "cell_type": "code",
187 | "execution_count": 5,
188 | "metadata": {},
189 | "outputs": [],
190 | "source": [
191 | "def train_test_split(df, test_size):\n",
192 | " \n",
193 | " if isinstance(test_size, float):\n",
194 | " test_size = round(test_size * len(df))\n",
195 | "\n",
196 | " indices = df.index.tolist()\n",
197 | " test_indices = random.sample(population=indices, k=test_size)\n",
198 | "\n",
199 | " test_df = df.loc[test_indices]\n",
200 | " train_df = df.drop(test_indices)\n",
201 | " \n",
202 | " return train_df, test_df"
203 | ]
204 | },
205 | {
206 | "cell_type": "code",
207 | "execution_count": 6,
208 | "metadata": {},
209 | "outputs": [],
210 | "source": [
211 | "random.seed(0)\n",
212 | "train_df, test_df = train_test_split(df, test_size=20)"
213 | ]
214 | },
215 | {
216 | "cell_type": "markdown",
217 | "metadata": {},
218 | "source": [
219 | "# Helper Functions\n",
220 | "\n",
221 | "The helper functions operate on a NumPy 2d-array. Therefore, let’s create a variable called “data” to see what we will be working with."
222 | ]
223 | },
224 | {
225 | "cell_type": "code",
226 | "execution_count": 7,
227 | "metadata": {},
228 | "outputs": [
229 | {
230 | "data": {
231 | "text/plain": [
232 | "array([[5.1, 3.5, 1.4, 0.2, 'Iris-setosa'],\n",
233 | " [4.9, 3.0, 1.4, 0.2, 'Iris-setosa'],\n",
234 | " [4.7, 3.2, 1.3, 0.2, 'Iris-setosa'],\n",
235 | " [4.6, 3.1, 1.5, 0.2, 'Iris-setosa'],\n",
236 | " [5.0, 3.6, 1.4, 0.2, 'Iris-setosa']], dtype=object)"
237 | ]
238 | },
239 | "execution_count": 7,
240 | "metadata": {},
241 | "output_type": "execute_result"
242 | }
243 | ],
244 | "source": [
245 | "data = train_df.values\n",
246 | "data[:5]"
247 | ]
248 | },
249 | {
250 | "cell_type": "markdown",
251 | "metadata": {},
252 | "source": [
253 | "### Data pure?"
254 | ]
255 | },
256 | {
257 | "cell_type": "code",
258 | "execution_count": 8,
259 | "metadata": {},
260 | "outputs": [],
261 | "source": [
262 | "def check_purity(data):\n",
263 | " \n",
264 | " label_column = data[:, -1]\n",
265 | " unique_classes = np.unique(label_column)\n",
266 | "\n",
267 | " if len(unique_classes) == 1:\n",
268 | " return True\n",
269 | " else:\n",
270 | " return False"
271 | ]
272 | },
273 | {
274 | "cell_type": "markdown",
275 | "metadata": {},
276 | "source": [
277 | "### Classify"
278 | ]
279 | },
280 | {
281 | "cell_type": "code",
282 | "execution_count": 9,
283 | "metadata": {},
284 | "outputs": [],
285 | "source": [
286 | "def classify_data(data):\n",
287 | " \n",
288 | " label_column = data[:, -1]\n",
289 | " unique_classes, counts_unique_classes = np.unique(label_column, return_counts=True)\n",
290 | "\n",
291 | " index = counts_unique_classes.argmax()\n",
292 | " classification = unique_classes[index]\n",
293 | " \n",
294 | " return classification"
295 | ]
296 | },
297 | {
298 | "cell_type": "markdown",
299 | "metadata": {},
300 | "source": [
301 | "### Potential splits?"
302 | ]
303 | },
304 | {
305 | "cell_type": "code",
306 | "execution_count": 10,
307 | "metadata": {},
308 | "outputs": [],
309 | "source": [
310 | "def get_potential_splits(data):\n",
311 | " \n",
312 | " potential_splits = {}\n",
313 | " _, n_columns = data.shape\n",
314 | " for column_index in range(n_columns - 1): # excluding the last column which is the label\n",
315 | " potential_splits[column_index] = []\n",
316 | " values = data[:, column_index]\n",
317 | " unique_values = np.unique(values)\n",
318 | "\n",
319 | " for index in range(len(unique_values)):\n",
320 | " if index != 0:\n",
321 | " current_value = unique_values[index]\n",
322 | " previous_value = unique_values[index - 1]\n",
323 | " potential_split = (current_value + previous_value) / 2\n",
324 | " \n",
325 | " potential_splits[column_index].append(potential_split)\n",
326 | " \n",
327 | " return potential_splits"
328 | ]
329 | },
330 | {
331 | "cell_type": "markdown",
332 | "metadata": {},
333 | "source": [
334 | "### Split Data"
335 | ]
336 | },
337 | {
338 | "cell_type": "code",
339 | "execution_count": 11,
340 | "metadata": {},
341 | "outputs": [],
342 | "source": [
343 | "def split_data(data, split_column, split_value):\n",
344 | " \n",
345 | " split_column_values = data[:, split_column]\n",
346 | "\n",
347 | " data_below = data[split_column_values <= split_value]\n",
348 | " data_above = data[split_column_values > split_value]\n",
349 | " \n",
350 | " return data_below, data_above"
351 | ]
352 | },
353 | {
354 | "cell_type": "markdown",
355 | "metadata": {},
356 | "source": [
357 | "### Lowest Overall Entropy?"
358 | ]
359 | },
360 | {
361 | "cell_type": "code",
362 | "execution_count": 12,
363 | "metadata": {},
364 | "outputs": [],
365 | "source": [
366 | "def calculate_entropy(data):\n",
367 | " \n",
368 | " label_column = data[:, -1]\n",
369 | " _, counts = np.unique(label_column, return_counts=True)\n",
370 | "\n",
371 | " probabilities = counts / counts.sum()\n",
372 | " entropy = sum(probabilities * -np.log2(probabilities))\n",
373 | " \n",
374 | " return entropy"
375 | ]
376 | },
377 | {
378 | "cell_type": "code",
379 | "execution_count": 13,
380 | "metadata": {},
381 | "outputs": [],
382 | "source": [
383 | "def calculate_overall_entropy(data_below, data_above):\n",
384 | " \n",
385 | " n = len(data_below) + len(data_above)\n",
386 | " p_data_below = len(data_below) / n\n",
387 | " p_data_above = len(data_above) / n\n",
388 | "\n",
389 | " overall_entropy = (p_data_below * calculate_entropy(data_below) \n",
390 | " + p_data_above * calculate_entropy(data_above))\n",
391 | " \n",
392 | " return overall_entropy"
393 | ]
394 | },
395 | {
396 | "cell_type": "code",
397 | "execution_count": 14,
398 | "metadata": {},
399 | "outputs": [],
400 | "source": [
401 | "def determine_best_split(data, potential_splits):\n",
402 | " \n",
403 | " overall_entropy = 9999\n",
404 | " for column_index in potential_splits:\n",
405 | " for value in potential_splits[column_index]:\n",
406 | " data_below, data_above = split_data(data, split_column=column_index, split_value=value)\n",
407 | " current_overall_entropy = calculate_overall_entropy(data_below, data_above)\n",
408 | "\n",
409 | " if current_overall_entropy <= overall_entropy:\n",
410 | " overall_entropy = current_overall_entropy\n",
411 | " best_split_column = column_index\n",
412 | " best_split_value = value\n",
413 | " \n",
414 | " return best_split_column, best_split_value"
415 | ]
416 | },
417 | {
418 | "cell_type": "markdown",
419 | "metadata": {},
420 | "source": [
421 | "# Decision Tree Algorithm"
422 | ]
423 | },
424 | {
425 | "cell_type": "markdown",
426 | "metadata": {},
427 | "source": [
428 | "### Representation of the Decision Tree"
429 | ]
430 | },
431 | {
432 | "cell_type": "code",
433 | "execution_count": 15,
434 | "metadata": {},
435 | "outputs": [],
436 | "source": [
437 | "sub_tree = {\"question\": [\"yes_answer\", \n",
438 | " \"no_answer\"]}"
439 | ]
440 | },
441 | {
442 | "cell_type": "code",
443 | "execution_count": 16,
444 | "metadata": {},
445 | "outputs": [],
446 | "source": [
447 | "example_tree = {\"petal_width <= 0.8\": [\"Iris-setosa\", \n",
448 | " {\"petal_width <= 1.65\": [{\"petal_length <= 4.9\": [\"Iris-versicolor\", \n",
449 | " \"Iris-virginica\"]}, \n",
450 | " \"Iris-virginica\"]}]}"
451 | ]
452 | },
453 | {
454 | "cell_type": "markdown",
455 | "metadata": {},
456 | "source": [
457 | "### Algorithm"
458 | ]
459 | },
460 | {
461 | "cell_type": "code",
462 | "execution_count": 17,
463 | "metadata": {},
464 | "outputs": [],
465 | "source": [
466 | "def decision_tree_algorithm(df, counter=0, min_samples=2, max_depth=5):\n",
467 | " \n",
468 | " # data preparations\n",
469 | " if counter == 0:\n",
470 | " global COLUMN_HEADERS\n",
471 | " COLUMN_HEADERS = df.columns\n",
472 | " data = df.values\n",
473 | " else:\n",
474 | " data = df \n",
475 | " \n",
476 | " \n",
477 | " # base cases\n",
478 | " if (check_purity(data)) or (len(data) < min_samples) or (counter == max_depth):\n",
479 | " classification = classify_data(data)\n",
480 | " \n",
481 | " return classification\n",
482 | "\n",
483 | " \n",
484 | " # recursive part\n",
485 | " else: \n",
486 | " counter += 1\n",
487 | "\n",
488 | " # helper functions \n",
489 | " potential_splits = get_potential_splits(data)\n",
490 | " split_column, split_value = determine_best_split(data, potential_splits)\n",
491 | " data_below, data_above = split_data(data, split_column, split_value)\n",
492 | " \n",
493 | " # instantiate sub-tree\n",
494 | " feature_name = COLUMN_HEADERS[split_column]\n",
495 | " question = \"{} <= {}\".format(feature_name, split_value)\n",
496 | " sub_tree = {question: []}\n",
497 | " \n",
498 | " # find answers (recursion)\n",
499 | " yes_answer = decision_tree_algorithm(data_below, counter, min_samples, max_depth)\n",
500 | " no_answer = decision_tree_algorithm(data_above, counter, min_samples, max_depth)\n",
501 | " \n",
502 |     "        # If the answers are the same, then there is no point in asking the question.\n",
503 | " # This could happen when the data is classified even though it is not pure\n",
504 | " # yet (min_samples or max_depth base cases).\n",
505 | " if yes_answer == no_answer:\n",
506 | " sub_tree = yes_answer\n",
507 | " else:\n",
508 | " sub_tree[question].append(yes_answer)\n",
509 | " sub_tree[question].append(no_answer)\n",
510 | " \n",
511 | " return sub_tree"
512 | ]
513 | },
514 | {
515 | "cell_type": "code",
516 | "execution_count": 18,
517 | "metadata": {},
518 | "outputs": [
519 | {
520 | "name": "stdout",
521 | "output_type": "stream",
522 | "text": [
523 | "{'petal_width <= 0.8': ['Iris-setosa',\n",
524 | " {'petal_width <= 1.65': [{'petal_length <= 4.95': ['Iris-versicolor',\n",
525 | " 'Iris-virginica']},\n",
526 | " 'Iris-virginica']}]}\n"
527 | ]
528 | }
529 | ],
530 | "source": [
531 | "tree = decision_tree_algorithm(train_df, max_depth=3)\n",
532 | "pprint(tree)"
533 | ]
534 | },
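  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Since the questions now contain the feature names, a single example can be classified by walking the tree and re-evaluating each question. The helper below is only a rough sketch of that idea (the name `walk_tree` is made up here); the actual classification function is developed in the next notebook."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Rough sketch (hypothetical helper, not the notebook's final implementation):\n",
    "# follow the \"yes\"/\"no\" branches until a leaf (a plain string) is reached.\n",
    "def walk_tree(example, tree):\n",
    "    question = list(tree.keys())[0]\n",
    "    feature_name, _, value = question.split(\" \")\n",
    "    if example[feature_name] <= float(value):\n",
    "        answer = tree[question][0]\n",
    "    else:\n",
    "        answer = tree[question][1]\n",
    "\n",
    "    if not isinstance(answer, dict):  # leaf\n",
    "        return answer\n",
    "    return walk_tree(example, answer)\n",
    "\n",
    "walk_tree(test_df.iloc[0], tree)"
   ]
  },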
535 | {
536 | "cell_type": "code",
537 | "execution_count": null,
538 | "metadata": {},
539 | "outputs": [],
540 | "source": []
541 | }
542 | ],
543 | "metadata": {
544 | "kernelspec": {
545 | "display_name": "Python 3",
546 | "language": "python",
547 | "name": "python3"
548 | },
549 | "language_info": {
550 | "codemirror_mode": {
551 | "name": "ipython",
552 | "version": 3
553 | },
554 | "file_extension": ".py",
555 | "mimetype": "text/x-python",
556 | "name": "python",
557 | "nbconvert_exporter": "python",
558 | "pygments_lexer": "ipython3",
559 | "version": "3.7.3"
560 | }
561 | },
562 | "nbformat": 4,
563 | "nbformat_minor": 4
564 | }
565 |
--------------------------------------------------------------------------------
/notebooks/Video 07 - Classification.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "metadata": {},
6 | "source": [
7 | "The goal of this notebook is to code a decision tree classifier that can be used with the following API:\n",
8 | "\n",
9 | "```Python\n",
10 | "df = pd.read_csv(\"data.csv\")\n",
11 | "\n",
12 | "train_df, test_df = train_test_split(df, test_size=0.2)\n",
13 | "tree = decision_tree_algorithm(train_df)\n",
14 | "accuracy = calculate_accuracy(test_df, tree)\n",
15 | "```\n",
16 | "\n",
17 | "The algorithm that is going to be implemented looks like this:\n",
18 | "\n",
19 | "(figure: decision tree algorithm)"
20 | ]
21 | },
22 | {
23 | "cell_type": "markdown",
24 | "metadata": {},
25 | "source": [
26 | "# Import Statements"
27 | ]
28 | },
29 | {
30 | "cell_type": "code",
31 | "execution_count": 1,
32 | "metadata": {},
33 | "outputs": [],
34 | "source": [
35 | "import numpy as np\n",
36 | "import pandas as pd\n",
37 | "\n",
38 | "import matplotlib.pyplot as plt\n",
39 | "import seaborn as sns\n",
40 | "\n",
41 | "import random\n",
42 | "from pprint import pprint"
43 | ]
44 | },
45 | {
46 | "cell_type": "code",
47 | "execution_count": 2,
48 | "metadata": {},
49 | "outputs": [],
50 | "source": [
51 | "%matplotlib inline\n",
52 | "sns.set_style(\"darkgrid\")"
53 | ]
54 | },
55 | {
56 | "cell_type": "markdown",
57 | "metadata": {},
58 | "source": [
59 | "# Load and Prepare Data"
60 | ]
61 | },
62 | {
63 | "cell_type": "markdown",
64 | "metadata": {},
65 | "source": [
66 | "#### Format of the data\n",
67 | "- the last column of the data frame must contain the label and it must also be called \"label\"\n",
68 | "- there should be no missing values in the data frame"
69 | ]
70 | },
71 | {
72 | "cell_type": "code",
73 | "execution_count": 3,
74 | "metadata": {},
75 | "outputs": [],
76 | "source": [
77 | "df = pd.read_csv(\"../data/Iris.csv\")\n",
78 | "df = df.drop(\"Id\", axis=1)\n",
79 | "df = df.rename(columns={\"species\": \"label\"})"
80 | ]
81 | },
82 | {
83 | "cell_type": "code",
84 | "execution_count": 4,
85 | "metadata": {},
86 | "outputs": [
87 | {
88 | "data": {
160 | "text/plain": [
161 | " sepal_length sepal_width petal_length petal_width label\n",
162 | "0 5.1 3.5 1.4 0.2 Iris-setosa\n",
163 | "1 4.9 3.0 1.4 0.2 Iris-setosa\n",
164 | "2 4.7 3.2 1.3 0.2 Iris-setosa\n",
165 | "3 4.6 3.1 1.5 0.2 Iris-setosa\n",
166 | "4 5.0 3.6 1.4 0.2 Iris-setosa"
167 | ]
168 | },
169 | "execution_count": 4,
170 | "metadata": {},
171 | "output_type": "execute_result"
172 | }
173 | ],
174 | "source": [
175 | "df.head()"
176 | ]
177 | },
178 | {
179 | "cell_type": "markdown",
180 | "metadata": {},
181 | "source": [
182 | "# Train-Test-Split"
183 | ]
184 | },
185 | {
186 | "cell_type": "code",
187 | "execution_count": 5,
188 | "metadata": {},
189 | "outputs": [],
190 | "source": [
191 | "def train_test_split(df, test_size):\n",
192 | " \n",
193 | " if isinstance(test_size, float):\n",
194 | " test_size = round(test_size * len(df))\n",
195 | "\n",
196 | " indices = df.index.tolist()\n",
197 | " test_indices = random.sample(population=indices, k=test_size)\n",
198 | "\n",
199 | " test_df = df.loc[test_indices]\n",
200 | " train_df = df.drop(test_indices)\n",
201 | " \n",
202 | " return train_df, test_df"
203 | ]
204 | },
205 | {
206 | "cell_type": "code",
207 | "execution_count": 6,
208 | "metadata": {},
209 | "outputs": [],
210 | "source": [
211 | "random.seed(0)\n",
212 | "train_df, test_df = train_test_split(df, test_size=20)"
213 | ]
214 | },
215 | {
216 | "cell_type": "markdown",
217 | "metadata": {},
218 | "source": [
219 | "# Helper Functions\n",
220 | "\n",
221 | "The helper functions operate on a NumPy 2d-array. Therefore, let’s create a variable called “data” to see what we will be working with."
222 | ]
223 | },
224 | {
225 | "cell_type": "code",
226 | "execution_count": 7,
227 | "metadata": {},
228 | "outputs": [
229 | {
230 | "data": {
231 | "text/plain": [
232 | "array([[5.1, 3.5, 1.4, 0.2, 'Iris-setosa'],\n",
233 | " [4.9, 3.0, 1.4, 0.2, 'Iris-setosa'],\n",
234 | " [4.7, 3.2, 1.3, 0.2, 'Iris-setosa'],\n",
235 | " [4.6, 3.1, 1.5, 0.2, 'Iris-setosa'],\n",
236 | " [5.0, 3.6, 1.4, 0.2, 'Iris-setosa']], dtype=object)"
237 | ]
238 | },
239 | "execution_count": 7,
240 | "metadata": {},
241 | "output_type": "execute_result"
242 | }
243 | ],
244 | "source": [
245 | "data = train_df.values\n",
246 | "data[:5]"
247 | ]
248 | },
249 | {
250 | "cell_type": "markdown",
251 | "metadata": {},
252 | "source": [
253 | "### Data pure?"
254 | ]
255 | },
256 | {
257 | "cell_type": "code",
258 | "execution_count": 8,
259 | "metadata": {},
260 | "outputs": [],
261 | "source": [
262 | "def check_purity(data):\n",
263 | " \n",
264 | " label_column = data[:, -1]\n",
265 | " unique_classes = np.unique(label_column)\n",
266 | "\n",
267 | " if len(unique_classes) == 1:\n",
268 | " return True\n",
269 | " else:\n",
270 | " return False"
271 | ]
272 | },
273 | {
274 | "cell_type": "markdown",
275 | "metadata": {},
276 | "source": [
277 | "### Classify"
278 | ]
279 | },
280 | {
281 | "cell_type": "code",
282 | "execution_count": 9,
283 | "metadata": {},
284 | "outputs": [],
285 | "source": [
286 | "def classify_data(data):\n",
287 | " \n",
288 | " label_column = data[:, -1]\n",
289 | " unique_classes, counts_unique_classes = np.unique(label_column, return_counts=True)\n",
290 | "\n",
291 | " index = counts_unique_classes.argmax()\n",
292 | " classification = unique_classes[index]\n",
293 | " \n",
294 | " return classification"
295 | ]
296 | },
297 | {
298 | "cell_type": "markdown",
299 | "metadata": {},
300 | "source": [
301 | "### Potential splits?"
302 | ]
303 | },
304 | {
305 | "cell_type": "code",
306 | "execution_count": 10,
307 | "metadata": {},
308 | "outputs": [],
309 | "source": [
310 | "def get_potential_splits(data):\n",
311 | " \n",
312 | " potential_splits = {}\n",
313 | " _, n_columns = data.shape\n",
314 | " for column_index in range(n_columns - 1): # excluding the last column which is the label\n",
315 | " potential_splits[column_index] = []\n",
316 | " values = data[:, column_index]\n",
317 | " unique_values = np.unique(values)\n",
318 | "\n",
319 | " for index in range(len(unique_values)):\n",
320 | " if index != 0:\n",
321 | " current_value = unique_values[index]\n",
322 | " previous_value = unique_values[index - 1]\n",
323 | " potential_split = (current_value + previous_value) / 2\n",
324 | " \n",
325 | " potential_splits[column_index].append(potential_split)\n",
326 | " \n",
327 | " return potential_splits"
328 | ]
329 | },
330 | {
331 | "cell_type": "markdown",
332 | "metadata": {},
333 | "source": [
334 | "### Split Data"
335 | ]
336 | },
337 | {
338 | "cell_type": "code",
339 | "execution_count": 11,
340 | "metadata": {},
341 | "outputs": [],
342 | "source": [
343 | "def split_data(data, split_column, split_value):\n",
344 | " \n",
345 | " split_column_values = data[:, split_column]\n",
346 | "\n",
347 | " data_below = data[split_column_values <= split_value]\n",
348 | " data_above = data[split_column_values > split_value]\n",
349 | " \n",
350 | " return data_below, data_above"
351 | ]
352 | },
353 | {
354 | "cell_type": "markdown",
355 | "metadata": {},
356 | "source": [
357 | "### Lowest Overall Entropy?"
358 | ]
359 | },
360 | {
361 | "cell_type": "code",
362 | "execution_count": 12,
363 | "metadata": {},
364 | "outputs": [],
365 | "source": [
366 | "def calculate_entropy(data):\n",
367 | " \n",
368 | " label_column = data[:, -1]\n",
369 | " _, counts = np.unique(label_column, return_counts=True)\n",
370 | "\n",
371 | " probabilities = counts / counts.sum()\n",
372 | " entropy = sum(probabilities * -np.log2(probabilities))\n",
373 | " \n",
374 | " return entropy"
375 | ]
376 | },
377 | {
378 | "cell_type": "code",
379 | "execution_count": 13,
380 | "metadata": {},
381 | "outputs": [],
382 | "source": [
383 | "def calculate_overall_entropy(data_below, data_above):\n",
384 | " \n",
385 | " n = len(data_below) + len(data_above)\n",
386 | " p_data_below = len(data_below) / n\n",
387 | " p_data_above = len(data_above) / n\n",
388 | "\n",
389 | " overall_entropy = (p_data_below * calculate_entropy(data_below) \n",
390 | " + p_data_above * calculate_entropy(data_above))\n",
391 | " \n",
392 | " return overall_entropy"
393 | ]
394 | },
395 | {
396 | "cell_type": "code",
397 | "execution_count": 14,
398 | "metadata": {},
399 | "outputs": [],
400 | "source": [
401 | "def determine_best_split(data, potential_splits):\n",
402 | " \n",
403 | " overall_entropy = 9999\n",
404 | " for column_index in potential_splits:\n",
405 | " for value in potential_splits[column_index]:\n",
406 | " data_below, data_above = split_data(data, split_column=column_index, split_value=value)\n",
407 | " current_overall_entropy = calculate_overall_entropy(data_below, data_above)\n",
408 | "\n",
409 | " if current_overall_entropy <= overall_entropy:\n",
410 | " overall_entropy = current_overall_entropy\n",
411 | " best_split_column = column_index\n",
412 | " best_split_value = value\n",
413 | " \n",
414 | " return best_split_column, best_split_value"
415 | ]
416 | },
417 | {
418 | "cell_type": "markdown",
419 | "metadata": {},
420 | "source": [
421 | "# Decision Tree Algorithm"
422 | ]
423 | },
424 | {
425 | "cell_type": "markdown",
426 | "metadata": {},
427 | "source": [
428 | "### Representation of the Decision Tree"
429 | ]
430 | },
431 | {
432 | "cell_type": "code",
433 | "execution_count": 15,
434 | "metadata": {},
435 | "outputs": [],
436 | "source": [
437 | "sub_tree = {\"question\": [\"yes_answer\", \n",
438 | " \"no_answer\"]}"
439 | ]
440 | },
441 | {
442 | "cell_type": "code",
443 | "execution_count": 16,
444 | "metadata": {},
445 | "outputs": [],
446 | "source": [
447 | "example_tree = {\"petal_width <= 0.8\": [\"Iris-setosa\", \n",
448 | " {\"petal_width <= 1.65\": [{\"petal_length <= 4.9\": [\"Iris-versicolor\", \n",
449 | " \"Iris-virginica\"]}, \n",
450 | " \"Iris-virginica\"]}]}"
451 | ]
452 | },
453 | {
454 | "cell_type": "markdown",
455 | "metadata": {},
456 | "source": [
457 | "### Algorithm"
458 | ]
459 | },
460 | {
461 | "cell_type": "code",
462 | "execution_count": 17,
463 | "metadata": {},
464 | "outputs": [],
465 | "source": [
466 | "def decision_tree_algorithm(df, counter=0, min_samples=2, max_depth=5):\n",
467 | " \n",
468 | " # data preparations\n",
469 | " if counter == 0:\n",
470 | " global COLUMN_HEADERS\n",
471 | " COLUMN_HEADERS = df.columns\n",
472 | " data = df.values\n",
473 | " else:\n",
474 | " data = df \n",
475 | " \n",
476 | " \n",
477 | " # base cases\n",
478 | " if (check_purity(data)) or (len(data) < min_samples) or (counter == max_depth):\n",
479 | " classification = classify_data(data)\n",
480 | " \n",
481 | " return classification\n",
482 | "\n",
483 | " \n",
484 | " # recursive part\n",
485 | " else: \n",
486 | " counter += 1\n",
487 | "\n",
488 | " # helper functions \n",
489 | " potential_splits = get_potential_splits(data)\n",
490 | " split_column, split_value = determine_best_split(data, potential_splits)\n",
491 | " data_below, data_above = split_data(data, split_column, split_value)\n",
492 | " \n",
493 | " # instantiate sub-tree\n",
494 | " feature_name = COLUMN_HEADERS[split_column]\n",
495 | " question = \"{} <= {}\".format(feature_name, split_value)\n",
496 | " sub_tree = {question: []}\n",
497 | " \n",
498 | " # find answers (recursion)\n",
499 | " yes_answer = decision_tree_algorithm(data_below, counter, min_samples, max_depth)\n",
500 | " no_answer = decision_tree_algorithm(data_above, counter, min_samples, max_depth)\n",
501 | " \n",
502 | " # If the answers are the same, then there is no point in asking the qestion.\n",
503 | " # This could happen when the data is classified even though it is not pure\n",
504 | " # yet (min_samples or max_depth base cases).\n",
505 | " if yes_answer == no_answer:\n",
506 | " sub_tree = yes_answer\n",
507 | " else:\n",
508 | " sub_tree[question].append(yes_answer)\n",
509 | " sub_tree[question].append(no_answer)\n",
510 | " \n",
511 | " return sub_tree"
512 | ]
513 | },
514 | {
515 | "cell_type": "code",
516 | "execution_count": 18,
517 | "metadata": {},
518 | "outputs": [
519 | {
520 | "name": "stdout",
521 | "output_type": "stream",
522 | "text": [
523 | "{'petal_width <= 0.8': ['Iris-setosa',\n",
524 | " {'petal_width <= 1.65': [{'petal_length <= 4.95': ['Iris-versicolor',\n",
525 | " 'Iris-virginica']},\n",
526 | " 'Iris-virginica']}]}\n"
527 | ]
528 | }
529 | ],
530 | "source": [
531 | "tree = decision_tree_algorithm(train_df, max_depth=3)\n",
532 | "pprint(tree)"
533 | ]
534 | },
535 | {
536 | "cell_type": "markdown",
537 | "metadata": {},
538 | "source": [
539 | "# Classification"
540 | ]
541 | },
542 | {
543 | "cell_type": "code",
544 | "execution_count": 19,
545 | "metadata": {},
546 | "outputs": [
547 | {
548 | "data": {
549 | "text/plain": [
550 | "{'question': ['yes_answer', 'no_answer']}"
551 | ]
552 | },
553 | "execution_count": 19,
554 | "metadata": {},
555 | "output_type": "execute_result"
556 | }
557 | ],
558 | "source": [
559 | "sub_tree"
560 | ]
561 | },
562 | {
563 | "cell_type": "code",
564 | "execution_count": 20,
565 | "metadata": {},
566 | "outputs": [
567 | {
568 | "data": {
569 | "text/plain": [
570 | "sepal_length 5.1\n",
571 | "sepal_width 2.5\n",
572 | "petal_length 3\n",
573 | "petal_width 1.1\n",
574 | "label Iris-versicolor\n",
575 | "Name: 98, dtype: object"
576 | ]
577 | },
578 | "execution_count": 20,
579 | "metadata": {},
580 | "output_type": "execute_result"
581 | }
582 | ],
583 | "source": [
584 | "example = test_df.iloc[0]\n",
585 | "example"
586 | ]
587 | },
588 | {
589 | "cell_type": "code",
590 | "execution_count": 21,
591 | "metadata": {},
592 | "outputs": [],
593 | "source": [
594 | "def classify_example(example, tree):\n",
595 | " question = list(tree.keys())[0]\n",
596 | " feature_name, comparison_operator, value = question.split()\n",
597 | "\n",
598 | " # ask question\n",
599 | " if example[feature_name] <= float(value):\n",
600 | " answer = tree[question][0]\n",
601 | " else:\n",
602 | " answer = tree[question][1]\n",
603 | "\n",
604 | " # base case\n",
605 | " if not isinstance(answer, dict):\n",
606 | " return answer\n",
607 | " \n",
608 | " # recursive part\n",
609 | " else:\n",
610 | " residual_tree = answer\n",
611 | " return classify_example(example, residual_tree)"
612 | ]
613 | },
614 | {
615 | "cell_type": "code",
616 | "execution_count": 22,
617 | "metadata": {},
618 | "outputs": [
619 | {
620 | "data": {
621 | "text/plain": [
622 | "'Iris-versicolor'"
623 | ]
624 | },
625 | "execution_count": 22,
626 | "metadata": {},
627 | "output_type": "execute_result"
628 | }
629 | ],
630 | "source": [
631 | "classify_example(example, tree)"
632 | ]
633 | },
634 | {
635 | "cell_type": "markdown",
636 | "metadata": {},
637 | "source": [
638 | "# Calculate Accuracy"
639 | ]
640 | },
641 | {
642 | "cell_type": "code",
643 | "execution_count": 23,
644 | "metadata": {},
645 | "outputs": [],
646 | "source": [
647 | "def calculate_accuracy(df, tree):\n",
648 | "\n",
649 | " df[\"classification\"] = df.apply(classify_example, axis=1, args=(tree,))\n",
650 | " df[\"classification_correct\"] = df[\"classification\"] == df[\"label\"]\n",
651 | " \n",
652 | " accuracy = df[\"classification_correct\"].mean()\n",
653 | " \n",
654 | " return accuracy"
655 | ]
656 | },
657 | {
658 | "cell_type": "code",
659 | "execution_count": 24,
660 | "metadata": {},
661 | "outputs": [
662 | {
663 | "data": {
664 | "text/plain": [
665 | "0.95"
666 | ]
667 | },
668 | "execution_count": 24,
669 | "metadata": {},
670 | "output_type": "execute_result"
671 | }
672 | ],
673 | "source": [
674 | "accuracy = calculate_accuracy(test_df, tree)\n",
675 | "accuracy"
676 | ]
677 | },
678 | {
679 | "cell_type": "code",
680 | "execution_count": null,
681 | "metadata": {
682 | "collapsed": true,
683 | "jupyter": {
684 | "outputs_hidden": true
685 | }
686 | },
687 | "outputs": [],
688 | "source": []
689 | }
690 | ],
691 | "metadata": {
692 | "kernelspec": {
693 | "display_name": "Python 3",
694 | "language": "python",
695 | "name": "python3"
696 | },
697 | "language_info": {
698 | "codemirror_mode": {
699 | "name": "ipython",
700 | "version": 3
701 | },
702 | "file_extension": ".py",
703 | "mimetype": "text/x-python",
704 | "name": "python",
705 | "nbconvert_exporter": "python",
706 | "pygments_lexer": "ipython3",
707 | "version": "3.7.3"
708 | }
709 | },
710 | "nbformat": 4,
711 | "nbformat_minor": 4
712 | }
713 |
--------------------------------------------------------------------------------
/notebooks/Video 08 - Categorical Features.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "metadata": {},
6 | "source": [
7 | "The goal of this notebook is to code a decision tree classifier that can be used with the following API:\n",
8 | "\n",
9 | "```Python\n",
10 | "df = pd.read_csv(\"data.csv\")\n",
11 | "\n",
12 | "train_df, test_df = train_test_split(df, test_size=0.2)\n",
13 | "tree = decision_tree_algorithm(train_df)\n",
14 | "accuracy = calculate_accuracy(test_df, tree)\n",
15 | "```\n",
16 | "\n",
17 | "The algorithm that is going to be implemented looks like this:\n",
18 | "\n",
19 | "
"
20 | ]
21 | },
22 | {
23 | "cell_type": "markdown",
24 | "metadata": {},
25 | "source": [
26 | "# Import Statements"
27 | ]
28 | },
29 | {
30 | "cell_type": "code",
31 | "execution_count": 1,
32 | "metadata": {},
33 | "outputs": [],
34 | "source": [
35 | "import numpy as np\n",
36 | "import pandas as pd\n",
37 | "\n",
38 | "import matplotlib.pyplot as plt\n",
39 | "import seaborn as sns\n",
40 | "\n",
41 | "import random\n",
42 | "from pprint import pprint"
43 | ]
44 | },
45 | {
46 | "cell_type": "code",
47 | "execution_count": 2,
48 | "metadata": {},
49 | "outputs": [],
50 | "source": [
51 | "%matplotlib inline\n",
52 | "sns.set_style(\"darkgrid\")"
53 | ]
54 | },
55 | {
56 | "cell_type": "markdown",
57 | "metadata": {},
58 | "source": [
59 | "# Load and Prepare Data"
60 | ]
61 | },
62 | {
63 | "cell_type": "markdown",
64 | "metadata": {},
65 | "source": [
66 | "#### Format of the data\n",
67 | "- the last column of the data frame must contain the label and it must also be called \"label\"\n",
68 | "- there should be no missing values in the data frame"
69 | ]
70 | },
71 | {
72 | "cell_type": "code",
73 | "execution_count": 3,
74 | "metadata": {},
75 | "outputs": [],
76 | "source": [
77 | "df = pd.read_csv(\"../data/Iris.csv\")\n",
78 | "df = df.drop(\"Id\", axis=1)\n",
79 | "df = df.rename(columns={\"species\": \"label\"})"
80 | ]
81 | },
82 | {
83 | "cell_type": "code",
84 | "execution_count": 4,
85 | "metadata": {},
86 | "outputs": [
87 | {
88 | "data": {
89 | "text/html": [
90 | "\n",
91 | "\n",
104 | "
\n",
105 | " \n",
106 | " \n",
107 | " | \n",
108 | " sepal_length | \n",
109 | " sepal_width | \n",
110 | " petal_length | \n",
111 | " petal_width | \n",
112 | " label | \n",
113 | "
\n",
114 | " \n",
115 | " \n",
116 | " \n",
117 | " 0 | \n",
118 | " 5.1 | \n",
119 | " 3.5 | \n",
120 | " 1.4 | \n",
121 | " 0.2 | \n",
122 | " Iris-setosa | \n",
123 | "
\n",
124 | " \n",
125 | " 1 | \n",
126 | " 4.9 | \n",
127 | " 3.0 | \n",
128 | " 1.4 | \n",
129 | " 0.2 | \n",
130 | " Iris-setosa | \n",
131 | "
\n",
132 | " \n",
133 | " 2 | \n",
134 | " 4.7 | \n",
135 | " 3.2 | \n",
136 | " 1.3 | \n",
137 | " 0.2 | \n",
138 | " Iris-setosa | \n",
139 | "
\n",
140 | " \n",
141 | " 3 | \n",
142 | " 4.6 | \n",
143 | " 3.1 | \n",
144 | " 1.5 | \n",
145 | " 0.2 | \n",
146 | " Iris-setosa | \n",
147 | "
\n",
148 | " \n",
149 | " 4 | \n",
150 | " 5.0 | \n",
151 | " 3.6 | \n",
152 | " 1.4 | \n",
153 | " 0.2 | \n",
154 | " Iris-setosa | \n",
155 | "
\n",
156 | " \n",
157 | "
\n",
158 | "
"
159 | ],
160 | "text/plain": [
161 | " sepal_length sepal_width petal_length petal_width label\n",
162 | "0 5.1 3.5 1.4 0.2 Iris-setosa\n",
163 | "1 4.9 3.0 1.4 0.2 Iris-setosa\n",
164 | "2 4.7 3.2 1.3 0.2 Iris-setosa\n",
165 | "3 4.6 3.1 1.5 0.2 Iris-setosa\n",
166 | "4 5.0 3.6 1.4 0.2 Iris-setosa"
167 | ]
168 | },
169 | "execution_count": 4,
170 | "metadata": {},
171 | "output_type": "execute_result"
172 | }
173 | ],
174 | "source": [
175 | "df.head()"
176 | ]
177 | },
178 | {
179 | "cell_type": "markdown",
180 | "metadata": {},
181 | "source": [
182 | "# Train-Test-Split"
183 | ]
184 | },
185 | {
186 | "cell_type": "code",
187 | "execution_count": 5,
188 | "metadata": {},
189 | "outputs": [],
190 | "source": [
191 | "def train_test_split(df, test_size):\n",
192 | " \n",
193 | " if isinstance(test_size, float):\n",
194 | " test_size = round(test_size * len(df))\n",
195 | "\n",
196 | " indices = df.index.tolist()\n",
197 | " test_indices = random.sample(population=indices, k=test_size)\n",
198 | "\n",
199 | " test_df = df.loc[test_indices]\n",
200 | " train_df = df.drop(test_indices)\n",
201 | " \n",
202 | " return train_df, test_df"
203 | ]
204 | },
205 | {
206 | "cell_type": "code",
207 | "execution_count": 6,
208 | "metadata": {},
209 | "outputs": [],
210 | "source": [
211 | "random.seed(0)\n",
212 | "train_df, test_df = train_test_split(df, test_size=20)"
213 | ]
214 | },
215 | {
216 | "cell_type": "markdown",
217 | "metadata": {},
218 | "source": [
219 | "# Helper Functions\n",
220 | "\n",
221 | "The helper functions operate on a NumPy 2d-array. Therefore, let’s create a variable called “data” to see what we will be working with."
222 | ]
223 | },
224 | {
225 | "cell_type": "code",
226 | "execution_count": 7,
227 | "metadata": {},
228 | "outputs": [
229 | {
230 | "data": {
231 | "text/plain": [
232 | "array([[5.1, 3.5, 1.4, 0.2, 'Iris-setosa'],\n",
233 | " [4.9, 3.0, 1.4, 0.2, 'Iris-setosa'],\n",
234 | " [4.7, 3.2, 1.3, 0.2, 'Iris-setosa'],\n",
235 | " [4.6, 3.1, 1.5, 0.2, 'Iris-setosa'],\n",
236 | " [5.0, 3.6, 1.4, 0.2, 'Iris-setosa']], dtype=object)"
237 | ]
238 | },
239 | "execution_count": 7,
240 | "metadata": {},
241 | "output_type": "execute_result"
242 | }
243 | ],
244 | "source": [
245 | "data = train_df.values\n",
246 | "data[:5]"
247 | ]
248 | },
249 | {
250 | "cell_type": "markdown",
251 | "metadata": {},
252 | "source": [
253 | "### Data pure?"
254 | ]
255 | },
256 | {
257 | "cell_type": "code",
258 | "execution_count": 8,
259 | "metadata": {},
260 | "outputs": [],
261 | "source": [
262 | "def check_purity(data):\n",
263 | " \n",
264 | " label_column = data[:, -1]\n",
265 | " unique_classes = np.unique(label_column)\n",
266 | "\n",
267 | " if len(unique_classes) == 1:\n",
268 | " return True\n",
269 | " else:\n",
270 | " return False"
271 | ]
272 | },
273 | {
274 | "cell_type": "markdown",
275 | "metadata": {},
276 | "source": [
277 | "### Classify"
278 | ]
279 | },
280 | {
281 | "cell_type": "code",
282 | "execution_count": 9,
283 | "metadata": {},
284 | "outputs": [],
285 | "source": [
286 | "def classify_data(data):\n",
287 | " \n",
288 | " label_column = data[:, -1]\n",
289 | " unique_classes, counts_unique_classes = np.unique(label_column, return_counts=True)\n",
290 | "\n",
291 | " index = counts_unique_classes.argmax()\n",
292 | " classification = unique_classes[index]\n",
293 | " \n",
294 | " return classification"
295 | ]
296 | },
297 | {
298 | "cell_type": "markdown",
299 | "metadata": {},
300 | "source": [
301 | "### Potential splits?"
302 | ]
303 | },
304 | {
305 | "cell_type": "code",
306 | "execution_count": 10,
307 | "metadata": {},
308 | "outputs": [],
309 | "source": [
310 | "def get_potential_splits(data):\n",
311 | " \n",
312 | " potential_splits = {}\n",
313 | " _, n_columns = data.shape\n",
314 | " for column_index in range(n_columns - 1): # excluding the last column which is the label\n",
315 | " values = data[:, column_index]\n",
316 | " unique_values = np.unique(values)\n",
317 | " \n",
318 | " type_of_feature = FEATURE_TYPES[column_index]\n",
319 | " if type_of_feature == \"continuous\":\n",
320 | " potential_splits[column_index] = []\n",
321 | " for index in range(len(unique_values)):\n",
322 | " if index != 0:\n",
323 | " current_value = unique_values[index]\n",
324 | " previous_value = unique_values[index - 1]\n",
325 | " potential_split = (current_value + previous_value) / 2\n",
326 | "\n",
327 | " potential_splits[column_index].append(potential_split)\n",
328 | " \n",
329 | " # feature is categorical\n",
330 | " # (there need to be at least 2 unique values, otherwise in the\n",
331 | " # split_data function data_below would contain all data points\n",
332 | " # and data_above would be empty)\n",
333 | " elif len(unique_values) > 1:\n",
334 | " potential_splits[column_index] = unique_values\n",
335 | " \n",
336 | " return potential_splits"
337 | ]
338 | },
339 | {
340 | "cell_type": "markdown",
341 | "metadata": {},
342 | "source": [
343 | "### Split Data"
344 | ]
345 | },
346 | {
347 | "cell_type": "code",
348 | "execution_count": 11,
349 | "metadata": {},
350 | "outputs": [],
351 | "source": [
352 | "def split_data(data, split_column, split_value):\n",
353 | " \n",
354 | " split_column_values = data[:, split_column]\n",
355 | "\n",
356 | " type_of_feature = FEATURE_TYPES[split_column]\n",
357 | " if type_of_feature == \"continuous\":\n",
358 | " data_below = data[split_column_values <= split_value]\n",
359 | " data_above = data[split_column_values > split_value]\n",
360 | " \n",
361 | " # feature is categorical \n",
362 | " else:\n",
363 | " data_below = data[split_column_values == split_value]\n",
364 | " data_above = data[split_column_values != split_value]\n",
365 | " \n",
366 | " return data_below, data_above"
367 | ]
368 | },
369 | {
370 | "cell_type": "markdown",
371 | "metadata": {},
372 | "source": [
373 | "### Lowest Overall Entropy?"
374 | ]
375 | },
376 | {
377 | "cell_type": "code",
378 | "execution_count": 12,
379 | "metadata": {},
380 | "outputs": [],
381 | "source": [
382 | "def calculate_entropy(data):\n",
383 | " \n",
384 | " label_column = data[:, -1]\n",
385 | " _, counts = np.unique(label_column, return_counts=True)\n",
386 | "\n",
387 | " probabilities = counts / counts.sum()\n",
388 | " entropy = sum(probabilities * -np.log2(probabilities))\n",
389 | " \n",
390 | " return entropy"
391 | ]
392 | },
393 | {
394 | "cell_type": "code",
395 | "execution_count": 13,
396 | "metadata": {},
397 | "outputs": [],
398 | "source": [
399 | "def calculate_overall_entropy(data_below, data_above):\n",
400 | " \n",
401 | " n = len(data_below) + len(data_above)\n",
402 | " p_data_below = len(data_below) / n\n",
403 | " p_data_above = len(data_above) / n\n",
404 | "\n",
405 | " overall_entropy = (p_data_below * calculate_entropy(data_below) \n",
406 | " + p_data_above * calculate_entropy(data_above))\n",
407 | " \n",
408 | " return overall_entropy"
409 | ]
410 | },
411 | {
412 | "cell_type": "code",
413 | "execution_count": 14,
414 | "metadata": {},
415 | "outputs": [],
416 | "source": [
417 | "def determine_best_split(data, potential_splits):\n",
418 | " \n",
419 | " overall_entropy = 9999\n",
420 | " for column_index in potential_splits:\n",
421 | " for value in potential_splits[column_index]:\n",
422 | " data_below, data_above = split_data(data, split_column=column_index, split_value=value)\n",
423 | " current_overall_entropy = calculate_overall_entropy(data_below, data_above)\n",
424 | "\n",
425 | " if current_overall_entropy <= overall_entropy:\n",
426 | " overall_entropy = current_overall_entropy\n",
427 | " best_split_column = column_index\n",
428 | " best_split_value = value\n",
429 | " \n",
430 | " return best_split_column, best_split_value"
431 | ]
432 | },
433 | {
434 | "cell_type": "markdown",
435 | "metadata": {},
436 | "source": [
437 | "# Decision Tree Algorithm"
438 | ]
439 | },
440 | {
441 | "cell_type": "markdown",
442 | "metadata": {},
443 | "source": [
444 | "### Representation of the Decision Tree"
445 | ]
446 | },
447 | {
448 | "cell_type": "code",
449 | "execution_count": 15,
450 | "metadata": {},
451 | "outputs": [],
452 | "source": [
453 | "sub_tree = {\"question\": [\"yes_answer\", \n",
454 | " \"no_answer\"]}"
455 | ]
456 | },
457 | {
458 | "cell_type": "code",
459 | "execution_count": 16,
460 | "metadata": {},
461 | "outputs": [],
462 | "source": [
463 | "example_tree = {\"petal_width <= 0.8\": [\"Iris-setosa\", \n",
464 | " {\"petal_width <= 1.65\": [{\"petal_length <= 4.9\": [\"Iris-versicolor\", \n",
465 | " \"Iris-virginica\"]}, \n",
466 | " \"Iris-virginica\"]}]}"
467 | ]
468 | },
469 | {
470 | "cell_type": "markdown",
471 | "metadata": {},
472 | "source": [
473 | "### Determine Type of Feature"
474 | ]
475 | },
476 | {
477 | "cell_type": "code",
478 | "execution_count": 17,
479 | "metadata": {},
480 | "outputs": [],
481 | "source": [
482 | "def determine_type_of_feature(df):\n",
483 | " \n",
484 | " feature_types = []\n",
485 | " n_unique_values_treshold = 15\n",
486 | " for feature in df.columns:\n",
487 | " if feature != \"label\":\n",
488 | " unique_values = df[feature].unique()\n",
489 | " example_value = unique_values[0]\n",
490 | "\n",
491 | " if (isinstance(example_value, str)) or (len(unique_values) <= n_unique_values_treshold):\n",
492 | " feature_types.append(\"categorical\")\n",
493 | " else:\n",
494 | " feature_types.append(\"continuous\")\n",
495 | " \n",
496 | " return feature_types"
497 | ]
498 | },
499 | {
500 | "cell_type": "markdown",
501 | "metadata": {},
502 | "source": [
503 | "### Algorithm"
504 | ]
505 | },
506 | {
507 | "cell_type": "code",
508 | "execution_count": 18,
509 | "metadata": {},
510 | "outputs": [],
511 | "source": [
512 | "def decision_tree_algorithm(df, counter=0, min_samples=2, max_depth=5):\n",
513 | " \n",
514 | " # data preparations\n",
515 | " if counter == 0:\n",
516 | " global COLUMN_HEADERS, FEATURE_TYPES\n",
517 | " COLUMN_HEADERS = df.columns\n",
518 | " FEATURE_TYPES = determine_type_of_feature(df)\n",
519 | " data = df.values\n",
520 | " else:\n",
521 | " data = df \n",
522 | " \n",
523 | " \n",
524 | " # base cases\n",
525 | " if (check_purity(data)) or (len(data) < min_samples) or (counter == max_depth):\n",
526 | " classification = classify_data(data)\n",
527 | " \n",
528 | " return classification\n",
529 | "\n",
530 | " \n",
531 | " # recursive part\n",
532 | " else: \n",
533 | " counter += 1\n",
534 | "\n",
535 | " # helper functions \n",
536 | " potential_splits = get_potential_splits(data)\n",
537 | " split_column, split_value = determine_best_split(data, potential_splits)\n",
538 | " data_below, data_above = split_data(data, split_column, split_value)\n",
539 | " \n",
540 | " # determine question\n",
541 | " feature_name = COLUMN_HEADERS[split_column]\n",
542 | " type_of_feature = FEATURE_TYPES[split_column]\n",
543 | " if type_of_feature == \"continuous\":\n",
544 | " question = \"{} <= {}\".format(feature_name, split_value)\n",
545 | " \n",
546 | " # feature is categorical\n",
547 | " else:\n",
548 | " question = \"{} = {}\".format(feature_name, split_value)\n",
549 | " \n",
550 | " # instantiate sub-tree\n",
551 | " sub_tree = {question: []}\n",
552 | " \n",
553 | " # find answers (recursion)\n",
554 | " yes_answer = decision_tree_algorithm(data_below, counter, min_samples, max_depth)\n",
555 | " no_answer = decision_tree_algorithm(data_above, counter, min_samples, max_depth)\n",
556 | " \n",
557 | " # If the answers are the same, then there is no point in asking the qestion.\n",
558 | " # This could happen when the data is classified even though it is not pure\n",
559 | " # yet (min_samples or max_depth base case).\n",
560 | " if yes_answer == no_answer:\n",
561 | " sub_tree = yes_answer\n",
562 | " else:\n",
563 | " sub_tree[question].append(yes_answer)\n",
564 | " sub_tree[question].append(no_answer)\n",
565 | " \n",
566 | " return sub_tree"
567 | ]
568 | },
569 | {
570 | "cell_type": "code",
571 | "execution_count": 19,
572 | "metadata": {},
573 | "outputs": [
574 | {
575 | "name": "stdout",
576 | "output_type": "stream",
577 | "text": [
578 | "{'petal_width <= 0.8': ['Iris-setosa',\n",
579 | " {'petal_width <= 1.65': [{'petal_length <= 4.95': ['Iris-versicolor',\n",
580 | " 'Iris-virginica']},\n",
581 | " 'Iris-virginica']}]}\n"
582 | ]
583 | }
584 | ],
585 | "source": [
586 | "tree = decision_tree_algorithm(train_df, max_depth=3)\n",
587 | "pprint(tree)"
588 | ]
589 | },
590 | {
591 | "cell_type": "markdown",
592 | "metadata": {},
593 | "source": [
594 | "# Classification"
595 | ]
596 | },
597 | {
598 | "cell_type": "code",
599 | "execution_count": 20,
600 | "metadata": {},
601 | "outputs": [
602 | {
603 | "data": {
604 | "text/plain": [
605 | "{'question': ['yes_answer', 'no_answer']}"
606 | ]
607 | },
608 | "execution_count": 20,
609 | "metadata": {},
610 | "output_type": "execute_result"
611 | }
612 | ],
613 | "source": [
614 | "sub_tree"
615 | ]
616 | },
617 | {
618 | "cell_type": "code",
619 | "execution_count": 21,
620 | "metadata": {},
621 | "outputs": [
622 | {
623 | "data": {
624 | "text/plain": [
625 | "sepal_length 5.1\n",
626 | "sepal_width 2.5\n",
627 | "petal_length 3\n",
628 | "petal_width 1.1\n",
629 | "label Iris-versicolor\n",
630 | "Name: 98, dtype: object"
631 | ]
632 | },
633 | "execution_count": 21,
634 | "metadata": {},
635 | "output_type": "execute_result"
636 | }
637 | ],
638 | "source": [
639 | "example = test_df.iloc[0]\n",
640 | "example"
641 | ]
642 | },
643 | {
644 | "cell_type": "code",
645 | "execution_count": 22,
646 | "metadata": {},
647 | "outputs": [],
648 | "source": [
649 | "def classify_example(example, tree):\n",
650 | " question = list(tree.keys())[0]\n",
651 | " feature_name, comparison_operator, value = question.split(\" \")\n",
652 | "\n",
653 | " # ask question\n",
654 | " if comparison_operator == \"<=\": # feature is continuous\n",
655 | " if example[feature_name] <= float(value):\n",
656 | " answer = tree[question][0]\n",
657 | " else:\n",
658 | " answer = tree[question][1]\n",
659 | " \n",
660 | " # feature is categorical\n",
661 | " else:\n",
662 | " if str(example[feature_name]) == value:\n",
663 | " answer = tree[question][0]\n",
664 | " else:\n",
665 | " answer = tree[question][1]\n",
666 | "\n",
667 | " # base case\n",
668 | " if not isinstance(answer, dict):\n",
669 | " return answer\n",
670 | " \n",
671 | " # recursive part\n",
672 | " else:\n",
673 | " residual_tree = answer\n",
674 | " return classify_example(example, residual_tree)"
675 | ]
676 | },
677 | {
678 | "cell_type": "code",
679 | "execution_count": 23,
680 | "metadata": {},
681 | "outputs": [
682 | {
683 | "data": {
684 | "text/plain": [
685 | "'Iris-versicolor'"
686 | ]
687 | },
688 | "execution_count": 23,
689 | "metadata": {},
690 | "output_type": "execute_result"
691 | }
692 | ],
693 | "source": [
694 | "classify_example(example, tree)"
695 | ]
696 | },
697 | {
698 | "cell_type": "markdown",
699 | "metadata": {},
700 | "source": [
701 | "# Calculate Accuracy"
702 | ]
703 | },
704 | {
705 | "cell_type": "code",
706 | "execution_count": 24,
707 | "metadata": {},
708 | "outputs": [],
709 | "source": [
710 | "def calculate_accuracy(df, tree):\n",
711 | "\n",
712 | " df[\"classification\"] = df.apply(classify_example, axis=1, args=(tree,))\n",
713 | " df[\"classification_correct\"] = df[\"classification\"] == df[\"label\"]\n",
714 | " \n",
715 | " accuracy = df[\"classification_correct\"].mean()\n",
716 | " \n",
717 | " return accuracy"
718 | ]
719 | },
720 | {
721 | "cell_type": "code",
722 | "execution_count": 25,
723 | "metadata": {},
724 | "outputs": [
725 | {
726 | "data": {
727 | "text/plain": [
728 | "0.95"
729 | ]
730 | },
731 | "execution_count": 25,
732 | "metadata": {},
733 | "output_type": "execute_result"
734 | }
735 | ],
736 | "source": [
737 | "accuracy = calculate_accuracy(test_df, tree)\n",
738 | "accuracy"
739 | ]
740 | },
741 | {
742 | "cell_type": "markdown",
743 | "metadata": {},
744 | "source": [
745 | "# Titanic Data Set"
746 | ]
747 | },
748 | {
749 | "cell_type": "markdown",
750 | "metadata": {},
751 | "source": [
752 | "### Load and Prepare Data"
753 | ]
754 | },
755 | {
756 | "cell_type": "code",
757 | "execution_count": 26,
758 | "metadata": {},
759 | "outputs": [],
760 | "source": [
761 | "df = pd.read_csv(\"../data/Titanic.csv\")\n",
762 | "df[\"label\"] = df.Survived\n",
763 | "df = df.drop([\"PassengerId\", \"Survived\", \"Name\", \"Ticket\", \"Cabin\"], axis=1)\n",
764 | "\n",
765 | "# handling missing values\n",
766 | "median_age = df.Age.median()\n",
767 | "mode_embarked = df.Embarked.mode()[0]\n",
768 | "\n",
769 | "df = df.fillna({\"Age\": median_age, \"Embarked\": mode_embarked})"
770 | ]
771 | },
772 | {
773 | "cell_type": "markdown",
774 | "metadata": {},
775 | "source": [
776 | "### Decision Tree Algorithm"
777 | ]
778 | },
779 | {
780 | "cell_type": "code",
781 | "execution_count": 27,
782 | "metadata": {},
783 | "outputs": [
784 | {
785 | "name": "stdout",
786 | "output_type": "stream",
787 | "text": [
788 | "{'Sex = male': [{'Fare <= 9.49165': [0,\n",
789 | " {'Age <= 6.5': [1,\n",
790 | " 0]}]},\n",
791 | " {'Pclass = 3': [{'Fare <= 24.808349999999997': [1,\n",
792 | " 0]},\n",
793 | " 1]}]}\n"
794 | ]
795 | },
796 | {
797 | "data": {
798 | "text/plain": [
799 | "0.7752808988764045"
800 | ]
801 | },
802 | "execution_count": 27,
803 | "metadata": {},
804 | "output_type": "execute_result"
805 | }
806 | ],
807 | "source": [
808 | "random.seed(0)\n",
809 | "\n",
810 | "train_df, test_df = train_test_split(df, 0.2)\n",
811 | "tree = decision_tree_algorithm(train_df, max_depth=3)\n",
812 | "accuracy = calculate_accuracy(test_df, tree)\n",
813 | "\n",
814 | "pprint(tree, width=50)\n",
815 | "accuracy"
816 | ]
817 | },
818 | {
819 | "cell_type": "code",
820 | "execution_count": null,
821 | "metadata": {
822 | "collapsed": true,
823 | "jupyter": {
824 | "outputs_hidden": true
825 | }
826 | },
827 | "outputs": [],
828 | "source": []
829 | }
830 | ],
831 | "metadata": {
832 | "kernelspec": {
833 | "display_name": "Python 3",
834 | "language": "python",
835 | "name": "python3"
836 | },
837 | "language_info": {
838 | "codemirror_mode": {
839 | "name": "ipython",
840 | "version": 3
841 | },
842 | "file_extension": ".py",
843 | "mimetype": "text/x-python",
844 | "name": "python",
845 | "nbconvert_exporter": "python",
846 | "pygments_lexer": "ipython3",
847 | "version": "3.7.3"
848 | }
849 | },
850 | "nbformat": 4,
851 | "nbformat_minor": 4
852 | }
853 |
--------------------------------------------------------------------------------
/notebooks/Video 09 - Code Update.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "metadata": {},
6 | "source": [
7 | "The goal of this notebook is to code a decision tree classifier that can be used with the following API:\n",
8 | "\n",
9 | "```Python\n",
10 | "df = pd.read_csv(\"data.csv\")\n",
11 | "\n",
12 | "train_df, test_df = train_test_split(df, test_size=0.2)\n",
13 | "tree = decision_tree_algorithm(train_df)\n",
14 | "accuracy = calculate_accuracy(test_df, tree)\n",
15 | "```\n",
16 | "\n",
17 | "The algorithm that is going to be implemented looks like this:\n",
18 | "\n",
19 | "
"
20 | ]
21 | },
22 | {
23 | "cell_type": "markdown",
24 | "metadata": {},
25 | "source": [
26 | "# Import Statements"
27 | ]
28 | },
29 | {
30 | "cell_type": "code",
31 | "execution_count": 1,
32 | "metadata": {},
33 | "outputs": [],
34 | "source": [
35 | "import numpy as np\n",
36 | "import pandas as pd\n",
37 | "\n",
38 | "import matplotlib.pyplot as plt\n",
39 | "import seaborn as sns\n",
40 | "\n",
41 | "import random\n",
42 | "from pprint import pprint"
43 | ]
44 | },
45 | {
46 | "cell_type": "code",
47 | "execution_count": 2,
48 | "metadata": {},
49 | "outputs": [],
50 | "source": [
51 | "%matplotlib inline\n",
52 | "sns.set_style(\"darkgrid\")"
53 | ]
54 | },
55 | {
56 | "cell_type": "markdown",
57 | "metadata": {},
58 | "source": [
59 | "# Load and Prepare Data"
60 | ]
61 | },
62 | {
63 | "cell_type": "markdown",
64 | "metadata": {},
65 | "source": [
66 | "#### Format of the data\n",
67 | "- the last column of the data frame must contain the label and it must also be called \"label\"\n",
68 | "- there should be no missing values in the data frame"
69 | ]
70 | },
71 | {
72 | "cell_type": "code",
73 | "execution_count": 3,
74 | "metadata": {},
75 | "outputs": [],
76 | "source": [
77 | "df = pd.read_csv(\"../data/Iris.csv\")\n",
78 | "df = df.drop(\"Id\", axis=1)\n",
79 | "df = df.rename(columns={\"species\": \"label\"})"
80 | ]
81 | },
82 | {
83 | "cell_type": "code",
84 | "execution_count": 4,
85 | "metadata": {},
86 | "outputs": [
87 | {
88 | "data": {
89 | "text/html": [
90 | "\n",
91 | "\n",
104 | "
\n",
105 | " \n",
106 | " \n",
107 | " | \n",
108 | " sepal_length | \n",
109 | " sepal_width | \n",
110 | " petal_length | \n",
111 | " petal_width | \n",
112 | " label | \n",
113 | "
\n",
114 | " \n",
115 | " \n",
116 | " \n",
117 | " 0 | \n",
118 | " 5.1 | \n",
119 | " 3.5 | \n",
120 | " 1.4 | \n",
121 | " 0.2 | \n",
122 | " Iris-setosa | \n",
123 | "
\n",
124 | " \n",
125 | " 1 | \n",
126 | " 4.9 | \n",
127 | " 3.0 | \n",
128 | " 1.4 | \n",
129 | " 0.2 | \n",
130 | " Iris-setosa | \n",
131 | "
\n",
132 | " \n",
133 | " 2 | \n",
134 | " 4.7 | \n",
135 | " 3.2 | \n",
136 | " 1.3 | \n",
137 | " 0.2 | \n",
138 | " Iris-setosa | \n",
139 | "
\n",
140 | " \n",
141 | " 3 | \n",
142 | " 4.6 | \n",
143 | " 3.1 | \n",
144 | " 1.5 | \n",
145 | " 0.2 | \n",
146 | " Iris-setosa | \n",
147 | "
\n",
148 | " \n",
149 | " 4 | \n",
150 | " 5.0 | \n",
151 | " 3.6 | \n",
152 | " 1.4 | \n",
153 | " 0.2 | \n",
154 | " Iris-setosa | \n",
155 | "
\n",
156 | " \n",
157 | "
\n",
158 | "
"
159 | ],
160 | "text/plain": [
161 | " sepal_length sepal_width petal_length petal_width label\n",
162 | "0 5.1 3.5 1.4 0.2 Iris-setosa\n",
163 | "1 4.9 3.0 1.4 0.2 Iris-setosa\n",
164 | "2 4.7 3.2 1.3 0.2 Iris-setosa\n",
165 | "3 4.6 3.1 1.5 0.2 Iris-setosa\n",
166 | "4 5.0 3.6 1.4 0.2 Iris-setosa"
167 | ]
168 | },
169 | "execution_count": 4,
170 | "metadata": {},
171 | "output_type": "execute_result"
172 | }
173 | ],
174 | "source": [
175 | "df.head()"
176 | ]
177 | },
178 | {
179 | "cell_type": "markdown",
180 | "metadata": {},
181 | "source": [
182 | "# Train-Test-Split"
183 | ]
184 | },
185 | {
186 | "cell_type": "code",
187 | "execution_count": 5,
188 | "metadata": {},
189 | "outputs": [],
190 | "source": [
191 | "def train_test_split(df, test_size):\n",
192 | " \n",
193 | " if isinstance(test_size, float):\n",
194 | " test_size = round(test_size * len(df))\n",
195 | "\n",
196 | " indices = df.index.tolist()\n",
197 | " test_indices = random.sample(population=indices, k=test_size)\n",
198 | "\n",
199 | " test_df = df.loc[test_indices]\n",
200 | " train_df = df.drop(test_indices)\n",
201 | " \n",
202 | " return train_df, test_df"
203 | ]
204 | },
205 | {
206 | "cell_type": "code",
207 | "execution_count": 6,
208 | "metadata": {},
209 | "outputs": [],
210 | "source": [
211 | "random.seed(0)\n",
212 | "train_df, test_df = train_test_split(df, test_size=20)"
213 | ]
214 | },
215 | {
216 | "cell_type": "markdown",
217 | "metadata": {},
218 | "source": [
219 | "# Helper Functions\n",
220 | "\n",
221 | "The helper functions operate on a NumPy 2d-array. Therefore, let’s create a variable called “data” to see what we will be working with."
222 | ]
223 | },
224 | {
225 | "cell_type": "code",
226 | "execution_count": 7,
227 | "metadata": {},
228 | "outputs": [
229 | {
230 | "data": {
231 | "text/plain": [
232 | "array([[5.1, 3.5, 1.4, 0.2, 'Iris-setosa'],\n",
233 | " [4.9, 3.0, 1.4, 0.2, 'Iris-setosa'],\n",
234 | " [4.7, 3.2, 1.3, 0.2, 'Iris-setosa'],\n",
235 | " [4.6, 3.1, 1.5, 0.2, 'Iris-setosa'],\n",
236 | " [5.0, 3.6, 1.4, 0.2, 'Iris-setosa']], dtype=object)"
237 | ]
238 | },
239 | "execution_count": 7,
240 | "metadata": {},
241 | "output_type": "execute_result"
242 | }
243 | ],
244 | "source": [
245 | "data = train_df.values\n",
246 | "data[:5]"
247 | ]
248 | },
249 | {
250 | "cell_type": "markdown",
251 | "metadata": {},
252 | "source": [
253 | "### Data pure?"
254 | ]
255 | },
256 | {
257 | "cell_type": "code",
258 | "execution_count": 8,
259 | "metadata": {},
260 | "outputs": [],
261 | "source": [
262 | "def check_purity(data):\n",
263 | " \n",
264 | " label_column = data[:, -1]\n",
265 | " unique_classes = np.unique(label_column)\n",
266 | "\n",
267 | " if len(unique_classes) == 1:\n",
268 | " return True\n",
269 | " else:\n",
270 | " return False"
271 | ]
272 | },
273 | {
274 | "cell_type": "markdown",
275 | "metadata": {},
276 | "source": [
277 | "### Classify"
278 | ]
279 | },
280 | {
281 | "cell_type": "code",
282 | "execution_count": 9,
283 | "metadata": {},
284 | "outputs": [],
285 | "source": [
286 | "def classify_data(data):\n",
287 | " \n",
288 | " label_column = data[:, -1]\n",
289 | " unique_classes, counts_unique_classes = np.unique(label_column, return_counts=True)\n",
290 | "\n",
291 | " index = counts_unique_classes.argmax()\n",
292 | " classification = unique_classes[index]\n",
293 | " \n",
294 | " return classification"
295 | ]
296 | },
297 | {
298 | "cell_type": "markdown",
299 | "metadata": {},
300 | "source": [
301 | "### Potential splits?"
302 | ]
303 | },
304 | {
305 | "cell_type": "code",
306 | "execution_count": 10,
307 | "metadata": {},
308 | "outputs": [],
309 | "source": [
310 | "def get_potential_splits(data):\n",
311 | " \n",
312 | " potential_splits = {}\n",
313 | " _, n_columns = data.shape\n",
314 | " for column_index in range(n_columns - 1): # excluding the last column which is the label\n",
315 | " values = data[:, column_index]\n",
316 | " unique_values = np.unique(values)\n",
317 | " \n",
318 | " potential_splits[column_index] = unique_values\n",
319 | " \n",
320 | " return potential_splits"
321 | ]
322 | },
323 | {
324 | "cell_type": "markdown",
325 | "metadata": {},
326 | "source": [
327 | "### Split Data"
328 | ]
329 | },
330 | {
331 | "cell_type": "code",
332 | "execution_count": 11,
333 | "metadata": {},
334 | "outputs": [],
335 | "source": [
336 | "def split_data(data, split_column, split_value):\n",
337 | " \n",
338 | " split_column_values = data[:, split_column]\n",
339 | "\n",
340 | " type_of_feature = FEATURE_TYPES[split_column]\n",
341 | " if type_of_feature == \"continuous\":\n",
342 | " data_below = data[split_column_values <= split_value]\n",
343 | " data_above = data[split_column_values > split_value]\n",
344 | " \n",
345 | " # feature is categorical \n",
346 | " else:\n",
347 | " data_below = data[split_column_values == split_value]\n",
348 | " data_above = data[split_column_values != split_value]\n",
349 | " \n",
350 | " return data_below, data_above"
351 | ]
352 | },
353 | {
354 | "cell_type": "markdown",
355 | "metadata": {},
356 | "source": [
357 | "### Lowest Overall Entropy?"
358 | ]
359 | },
360 | {
361 | "cell_type": "code",
362 | "execution_count": 12,
363 | "metadata": {},
364 | "outputs": [],
365 | "source": [
366 | "def calculate_entropy(data):\n",
367 | " \n",
368 | " label_column = data[:, -1]\n",
369 | " _, counts = np.unique(label_column, return_counts=True)\n",
370 | "\n",
371 | " probabilities = counts / counts.sum()\n",
372 | " entropy = sum(probabilities * -np.log2(probabilities))\n",
373 | " \n",
374 | " return entropy"
375 | ]
376 | },
377 | {
378 | "cell_type": "code",
379 | "execution_count": 13,
380 | "metadata": {},
381 | "outputs": [],
382 | "source": [
383 | "def calculate_overall_entropy(data_below, data_above):\n",
384 | " \n",
385 | " n = len(data_below) + len(data_above)\n",
386 | " p_data_below = len(data_below) / n\n",
387 | " p_data_above = len(data_above) / n\n",
388 | "\n",
389 | " overall_entropy = (p_data_below * calculate_entropy(data_below) \n",
390 | " + p_data_above * calculate_entropy(data_above))\n",
391 | " \n",
392 | " return overall_entropy"
393 | ]
394 | },
395 | {
396 | "cell_type": "code",
397 | "execution_count": 14,
398 | "metadata": {},
399 | "outputs": [],
400 | "source": [
401 | "def determine_best_split(data, potential_splits):\n",
402 | " \n",
403 | " overall_entropy = 9999\n",
404 | " for column_index in potential_splits:\n",
405 | " for value in potential_splits[column_index]:\n",
406 | " data_below, data_above = split_data(data, split_column=column_index, split_value=value)\n",
407 | " current_overall_entropy = calculate_overall_entropy(data_below, data_above)\n",
408 | "\n",
409 | " if current_overall_entropy <= overall_entropy:\n",
410 | " overall_entropy = current_overall_entropy\n",
411 | " best_split_column = column_index\n",
412 | " best_split_value = value\n",
413 | " \n",
414 | " return best_split_column, best_split_value"
415 | ]
416 | },
417 | {
418 | "cell_type": "markdown",
419 | "metadata": {},
420 | "source": [
421 | "# Decision Tree Algorithm"
422 | ]
423 | },
424 | {
425 | "cell_type": "markdown",
426 | "metadata": {},
427 | "source": [
428 | "### Representation of the Decision Tree"
429 | ]
430 | },
431 | {
432 | "cell_type": "code",
433 | "execution_count": 15,
434 | "metadata": {},
435 | "outputs": [],
436 | "source": [
437 | "sub_tree = {\"question\": [\"yes_answer\", \n",
438 | " \"no_answer\"]}"
439 | ]
440 | },
441 | {
442 | "cell_type": "code",
443 | "execution_count": 16,
444 | "metadata": {},
445 | "outputs": [],
446 | "source": [
447 | "example_tree = {\"petal_width <= 0.8\": [\"Iris-setosa\", \n",
448 | " {\"petal_width <= 1.65\": [{\"petal_length <= 4.9\": [\"Iris-versicolor\", \n",
449 | " \"Iris-virginica\"]}, \n",
450 | " \"Iris-virginica\"]}]}"
451 | ]
452 | },
453 | {
454 | "cell_type": "markdown",
455 | "metadata": {},
456 | "source": [
457 | "### Determine Type of Feature"
458 | ]
459 | },
460 | {
461 | "cell_type": "code",
462 | "execution_count": 17,
463 | "metadata": {},
464 | "outputs": [],
465 | "source": [
466 | "def determine_type_of_feature(df):\n",
467 | " \n",
468 | " feature_types = []\n",
469 | " n_unique_values_treshold = 15\n",
470 | " for feature in df.columns:\n",
471 | " if feature != \"label\":\n",
472 | " unique_values = df[feature].unique()\n",
473 | " example_value = unique_values[0]\n",
474 | "\n",
475 | " if (isinstance(example_value, str)) or (len(unique_values) <= n_unique_values_treshold):\n",
476 | " feature_types.append(\"categorical\")\n",
477 | " else:\n",
478 | " feature_types.append(\"continuous\")\n",
479 | " \n",
480 | " return feature_types"
481 | ]
482 | },
483 | {
484 | "cell_type": "markdown",
485 | "metadata": {},
486 | "source": [
487 | "### Algorithm"
488 | ]
489 | },
490 | {
491 | "cell_type": "code",
492 | "execution_count": 18,
493 | "metadata": {},
494 | "outputs": [],
495 | "source": [
496 | "def decision_tree_algorithm(df, counter=0, min_samples=2, max_depth=5):\n",
497 | " \n",
498 | " # data preparations\n",
499 | " if counter == 0:\n",
500 | " global COLUMN_HEADERS, FEATURE_TYPES\n",
501 | " COLUMN_HEADERS = df.columns\n",
502 | " FEATURE_TYPES = determine_type_of_feature(df)\n",
503 | " data = df.values\n",
504 | " else:\n",
505 | " data = df \n",
506 | " \n",
507 | " \n",
508 | " # base cases\n",
509 | " if (check_purity(data)) or (len(data) < min_samples) or (counter == max_depth):\n",
510 | " classification = classify_data(data)\n",
511 | " \n",
512 | " return classification\n",
513 | "\n",
514 | " \n",
515 | " # recursive part\n",
516 | " else: \n",
517 | " counter += 1\n",
518 | "\n",
519 | " # helper functions \n",
520 | " potential_splits = get_potential_splits(data)\n",
521 | " split_column, split_value = determine_best_split(data, potential_splits)\n",
522 | " data_below, data_above = split_data(data, split_column, split_value)\n",
523 | " \n",
524 | " # check for empty data\n",
525 | " if len(data_below) == 0 or len(data_above) == 0:\n",
526 | " classification = classify_data(data)\n",
527 | " return classification\n",
528 | " \n",
529 | " # determine question\n",
530 | " feature_name = COLUMN_HEADERS[split_column]\n",
531 | " type_of_feature = FEATURE_TYPES[split_column]\n",
532 | " if type_of_feature == \"continuous\":\n",
533 | " question = \"{} <= {}\".format(feature_name, split_value)\n",
534 | " \n",
535 | " # feature is categorical\n",
536 | " else:\n",
537 | " question = \"{} = {}\".format(feature_name, split_value)\n",
538 | " \n",
539 | " # instantiate sub-tree\n",
540 | " sub_tree = {question: []}\n",
541 | " \n",
542 | " # find answers (recursion)\n",
543 | " yes_answer = decision_tree_algorithm(data_below, counter, min_samples, max_depth)\n",
544 | " no_answer = decision_tree_algorithm(data_above, counter, min_samples, max_depth)\n",
545 | " \n",
546 | " # If the answers are the same, then there is no point in asking the qestion.\n",
547 | " # This could happen when the data is classified even though it is not pure\n",
548 | " # yet (min_samples or max_depth base case).\n",
549 | " if yes_answer == no_answer:\n",
550 | " sub_tree = yes_answer\n",
551 | " else:\n",
552 | " sub_tree[question].append(yes_answer)\n",
553 | " sub_tree[question].append(no_answer)\n",
554 | " \n",
555 | " return sub_tree"
556 | ]
557 | },
558 | {
559 | "cell_type": "code",
560 | "execution_count": 19,
561 | "metadata": {},
562 | "outputs": [
563 | {
564 | "name": "stdout",
565 | "output_type": "stream",
566 | "text": [
567 | "{'petal_width <= 0.6': ['Iris-setosa',\n",
568 | " {'petal_width <= 1.6': [{'petal_length <= 4.9': ['Iris-versicolor',\n",
569 | " 'Iris-virginica']},\n",
570 | " 'Iris-virginica']}]}\n"
571 | ]
572 | }
573 | ],
574 | "source": [
575 | "tree = decision_tree_algorithm(train_df, max_depth=3)\n",
576 | "pprint(tree)"
577 | ]
578 | },
579 | {
580 | "cell_type": "markdown",
581 | "metadata": {},
582 | "source": [
583 | "# Classification"
584 | ]
585 | },
586 | {
587 | "cell_type": "code",
588 | "execution_count": 20,
589 | "metadata": {},
590 | "outputs": [
591 | {
592 | "data": {
593 | "text/plain": [
594 | "{'question': ['yes_answer', 'no_answer']}"
595 | ]
596 | },
597 | "execution_count": 20,
598 | "metadata": {},
599 | "output_type": "execute_result"
600 | }
601 | ],
602 | "source": [
603 | "sub_tree"
604 | ]
605 | },
606 | {
607 | "cell_type": "code",
608 | "execution_count": 21,
609 | "metadata": {},
610 | "outputs": [
611 | {
612 | "data": {
613 | "text/plain": [
614 | "sepal_length 5.1\n",
615 | "sepal_width 2.5\n",
616 | "petal_length 3\n",
617 | "petal_width 1.1\n",
618 | "label Iris-versicolor\n",
619 | "Name: 98, dtype: object"
620 | ]
621 | },
622 | "execution_count": 21,
623 | "metadata": {},
624 | "output_type": "execute_result"
625 | }
626 | ],
627 | "source": [
628 | "example = test_df.iloc[0]\n",
629 | "example"
630 | ]
631 | },
632 | {
633 | "cell_type": "code",
634 | "execution_count": 22,
635 | "metadata": {},
636 | "outputs": [],
637 | "source": [
638 | "def classify_example(example, tree):\n",
639 | " question = list(tree.keys())[0]\n",
640 | " feature_name, comparison_operator, value = question.split(\" \")\n",
641 | "\n",
642 | " # ask question\n",
643 | " if comparison_operator == \"<=\": # feature is continuous\n",
644 | " if example[feature_name] <= float(value):\n",
645 | " answer = tree[question][0]\n",
646 | " else:\n",
647 | " answer = tree[question][1]\n",
648 | " \n",
649 | " # feature is categorical\n",
650 | " else:\n",
651 | " if str(example[feature_name]) == value:\n",
652 | " answer = tree[question][0]\n",
653 | " else:\n",
654 | " answer = tree[question][1]\n",
655 | "\n",
656 | " # base case\n",
657 | " if not isinstance(answer, dict):\n",
658 | " return answer\n",
659 | " \n",
660 | " # recursive part\n",
661 | " else:\n",
662 | " residual_tree = answer\n",
663 | " return classify_example(example, residual_tree)"
664 | ]
665 | },
666 | {
667 | "cell_type": "code",
668 | "execution_count": 23,
669 | "metadata": {},
670 | "outputs": [
671 | {
672 | "data": {
673 | "text/plain": [
674 | "'Iris-versicolor'"
675 | ]
676 | },
677 | "execution_count": 23,
678 | "metadata": {},
679 | "output_type": "execute_result"
680 | }
681 | ],
682 | "source": [
683 | "classify_example(example, tree)"
684 | ]
685 | },
686 | {
687 | "cell_type": "markdown",
688 | "metadata": {},
689 | "source": [
690 | "# Calculate Accuracy"
691 | ]
692 | },
693 | {
694 | "cell_type": "code",
695 | "execution_count": 24,
696 | "metadata": {},
697 | "outputs": [],
698 | "source": [
699 | "def calculate_accuracy(df, tree):\n",
700 | "\n",
701 | " df[\"classification\"] = df.apply(classify_example, axis=1, args=(tree,))\n",
702 | " df[\"classification_correct\"] = df[\"classification\"] == df[\"label\"]\n",
703 | " \n",
704 | " accuracy = df[\"classification_correct\"].mean()\n",
705 | " \n",
706 | " return accuracy"
707 | ]
708 | },
709 | {
710 | "cell_type": "code",
711 | "execution_count": 25,
712 | "metadata": {},
713 | "outputs": [
714 | {
715 | "data": {
716 | "text/plain": [
717 | "0.95"
718 | ]
719 | },
720 | "execution_count": 25,
721 | "metadata": {},
722 | "output_type": "execute_result"
723 | }
724 | ],
725 | "source": [
726 | "accuracy = calculate_accuracy(test_df, tree)\n",
727 | "accuracy"
728 | ]
729 | },
730 | {
731 | "cell_type": "markdown",
732 | "metadata": {},
733 | "source": [
734 | "# Titanic Data Set"
735 | ]
736 | },
737 | {
738 | "cell_type": "markdown",
739 | "metadata": {},
740 | "source": [
741 | "### Load and Prepare Data"
742 | ]
743 | },
744 | {
745 | "cell_type": "code",
746 | "execution_count": 26,
747 | "metadata": {},
748 | "outputs": [],
749 | "source": [
750 | "df = pd.read_csv(\"../data/Titanic.csv\")\n",
751 | "df[\"label\"] = df.Survived\n",
752 | "df = df.drop([\"PassengerId\", \"Survived\", \"Name\", \"Ticket\", \"Cabin\"], axis=1)\n",
753 | "\n",
754 | "# handling missing values\n",
755 | "median_age = df.Age.median()\n",
756 | "mode_embarked = df.Embarked.mode()[0]\n",
757 | "\n",
758 | "df = df.fillna({\"Age\": median_age, \"Embarked\": mode_embarked})"
759 | ]
760 | },
761 | {
762 | "cell_type": "markdown",
763 | "metadata": {},
764 | "source": [
765 | "### Decision Tree Algorithm"
766 | ]
767 | },
768 | {
769 | "cell_type": "code",
770 | "execution_count": 27,
771 | "metadata": {},
772 | "outputs": [
773 | {
774 | "name": "stdout",
775 | "output_type": "stream",
776 | "text": [
777 | "{'Sex = male': [{'Fare <= 9.4833': [{'Age <= 32.0': [{'Age <= 30.5': [{'Fare <= 7.7958': [{'Fare <= 7.7417': [{'Fare <= 7.2292': [{'Age <= 27.0': [{'Age <= 25.0': [0,\n",
778 | " 1]},\n",
779 | " 0]},\n",
780 | " 0]},\n",
781 | " {'Age <= 19.0': [0,\n",
782 | " {'Age <= 21.0': [1,\n",
783 | " 0]}]}]},\n",
784 | " {'Age <= 20.0': [{'Fare <= 8.05': [{'Fare <= 7.8958': [0,\n",
785 | " {'Fare <= 7.925': [1,\n",
786 | " 0]}]},\n",
787 | " 0]},\n",
788 | " {'Fare <= 8.4583': [0,\n",
789 | " {'Fare <= 8.6625': [{'Age <= 26.0': [0,\n",
790 | " {'Age <= 27.0': [1,\n",
791 | " 0]}]},\n",
792 | " 0]}]}]}]},\n",
793 | " {'Fare <= 7.775': [0,\n",
794 | " {'Fare <= 7.8542': [1,\n",
795 | " {'Age <= 31.0': [1,\n",
796 | " 0]}]}]}]},\n",
797 | " 0]},\n",
798 | " {'Age <= 6.0': [{'Pclass = 3': [{'Fare <= 20.575': [1,\n",
799 | " {'Fare <= 31.275': [0,\n",
800 | " {'Fare <= 31.3875': [1,\n",
801 | " 0]}]}]},\n",
802 | " 1]},\n",
803 | " {'Pclass = 1': [{'Age <= 52.0': [{'Fare <= 30.5': [{'Fare <= 26.0': [0,\n",
804 | " {'Fare <= 29.7': [{'Fare <= 26.55': [1,\n",
805 | " 0]},\n",
806 | " 1]}]},\n",
807 | " {'Fare <= 227.525': [{'SibSp = 0': [{'Age <= 17.0': [1,\n",
808 | " 0]},\n",
809 | " {'Fare <= 110.8833': [{'Fare <= 57.0': [1,\n",
810 | " 0]},\n",
811 | " 1]}]},\n",
812 | " 1]}]},\n",
813 | " {'Age <= 71.0': [{'Embarked = S': [0,\n",
814 | " {'Age <= 56.0': [1,\n",
815 | " 0]}]},\n",
816 | " 1]}]},\n",
817 | " {'Age <= 34.0': [{'Fare <= 56.4958': [{'Fare <= 46.9': [{'Embarked = C': [{'Pclass = 3': [{'Parch = 1': [1,\n",
818 | " 0]},\n",
819 | " 0]},\n",
820 | " {'Age <= 9.0': [{'SibSp = 4': [0,\n",
821 | " 1]},\n",
822 | " 0]}]},\n",
823 | " {'Age <= 28.0': [{'Age <= 26.0': [1,\n",
824 | " 0]},\n",
825 | " 1]}]},\n",
826 | " 0]},\n",
827 | " {'Fare <= 10.5': [1,\n",
828 | " 0]}]}]}]}]},\n",
829 | " {'Pclass = 3': [{'Fare <= 24.15': [{'Age <= 36.0': [{'Embarked = S': [{'Age <= 31.0': [{'Fare <= 16.7': [{'Fare <= 10.5167': [{'Fare <= 9.8417': [{'Age <= 19.0': [1,\n",
830 | " 0]},\n",
831 | " 0]},\n",
832 | " {'Fare <= 12.475': [1,\n",
833 | " {'Fare <= 14.4': [0,\n",
834 | " 1]}]}]},\n",
835 | " {'Fare <= 18.0': [0,\n",
836 | " {'Fare <= 20.525': [1,\n",
837 | " 0]}]}]},\n",
838 | " 1]},\n",
839 | " {'Age <= 16.0': [1,\n",
840 | " {'Age <= 18.0': [0,\n",
841 | " {'Age <= 29.0': [{'Fare <= 7.8792': [1,\n",
842 | " {'Fare <= 15.2458': [0,\n",
843 | " 1]}]},\n",
844 | " 0]}]}]}]},\n",
845 | " 0]},\n",
846 | " {'Fare <= 31.275': [0,\n",
847 | " {'Fare <= 31.3875': [1,\n",
848 | " 0]}]}]},\n",
849 | " {'Fare <= 28.7125': [{'Fare <= 27.75': [{'Age <= 23.0': [1,\n",
850 | " {'Age <= 55.0': [{'Age <= 26.0': [{'Age <= 25.0': [{'Fare <= 13.0': [0,\n",
851 | " 1]},\n",
852 | " 0]},\n",
853 | " {'Age <= 36.0': [1,\n",
854 | " {'Age <= 38.0': [0,\n",
855 | " 1]}]}]},\n",
856 | " {'Fare <= 10.5': [0,\n",
857 | " 1]}]}]},\n",
858 | " 0]},\n",
859 | " 1]}]}]}\n"
860 | ]
861 | },
862 | {
863 | "data": {
864 | "text/plain": [
865 | "0.7808988764044944"
866 | ]
867 | },
868 | "execution_count": 27,
869 | "metadata": {},
870 | "output_type": "execute_result"
871 | }
872 | ],
873 | "source": [
874 | "random.seed(0)\n",
875 | "\n",
876 | "train_df, test_df = train_test_split(df, 0.2)\n",
877 | "tree = decision_tree_algorithm(train_df, max_depth=10)\n",
878 | "accuracy = calculate_accuracy(test_df, tree)\n",
879 | "\n",
880 | "pprint(tree, width=50)\n",
881 | "accuracy"
882 | ]
883 | },
884 | {
885 | "cell_type": "code",
886 | "execution_count": null,
887 | "metadata": {},
888 | "outputs": [],
889 | "source": []
890 | }
891 | ],
892 | "metadata": {
893 | "kernelspec": {
894 | "display_name": "Python 3",
895 | "language": "python",
896 | "name": "python3"
897 | },
898 | "language_info": {
899 | "codemirror_mode": {
900 | "name": "ipython",
901 | "version": 3
902 | },
903 | "file_extension": ".py",
904 | "mimetype": "text/x-python",
905 | "name": "python",
906 | "nbconvert_exporter": "python",
907 | "pygments_lexer": "ipython3",
908 | "version": "3.7.3"
909 | }
910 | },
911 | "nbformat": 4,
912 | "nbformat_minor": 4
913 | }
914 |
--------------------------------------------------------------------------------
/notebooks/decision_tree_functions.py:
--------------------------------------------------------------------------------
1 | # coding: utf-8
2 |
3 | import numpy as np
4 | import pandas as pd
5 |
6 |
7 | # 1. Decision Tree helper functions
8 | # 1.1 Data pure?
9 | def check_purity(data):
10 |
11 | label_column = data[:, -1]
12 | unique_classes = np.unique(label_column)
13 |
14 | if len(unique_classes) == 1:
15 | return True
16 | else:
17 | return False
18 |
19 |
20 | # 1.2 Create Leaf
21 | def create_leaf(data, ml_task):
22 |
23 | label_column = data[:, -1]
24 | if ml_task == "regression":
25 | leaf = np.mean(label_column)
26 |
27 |     # classification
28 | else:
29 | unique_classes, counts_unique_classes = np.unique(label_column, return_counts=True)
30 | index = counts_unique_classes.argmax()
31 | leaf = unique_classes[index]
32 |
33 | return leaf
34 |
35 |
36 | # 1.3 Determine potential splits
37 | def get_potential_splits(data):
38 |
39 | potential_splits = {}
40 | _, n_columns = data.shape
41 | for column_index in range(n_columns - 1): # excluding the last column which is the label
42 | values = data[:, column_index]
43 | unique_values = np.unique(values)
44 |
45 | potential_splits[column_index] = unique_values
46 |
47 | return potential_splits
48 |
49 |
50 | # 1.4 Determine Best Split
51 | def calculate_entropy(data):
52 |
53 | label_column = data[:, -1]
54 | _, counts = np.unique(label_column, return_counts=True)
55 |
56 | probabilities = counts / counts.sum()
57 | entropy = sum(probabilities * -np.log2(probabilities))
58 |
59 | return entropy
60 |
61 |
62 | def calculate_mse(data):
63 | actual_values = data[:, -1]
64 | if len(actual_values) == 0: # empty data
65 | mse = 0
66 |
67 | else:
68 | prediction = np.mean(actual_values)
69 | mse = np.mean((actual_values - prediction) **2)
70 |
71 | return mse
72 |
73 |
74 | def calculate_overall_metric(data_below, data_above, metric_function):
75 |
76 | n = len(data_below) + len(data_above)
77 | p_data_below = len(data_below) / n
78 | p_data_above = len(data_above) / n
79 |
80 | overall_metric = (p_data_below * metric_function(data_below)
81 | + p_data_above * metric_function(data_above))
82 |
83 | return overall_metric
84 |
85 |
86 | def determine_best_split(data, potential_splits, ml_task):
87 |
88 | first_iteration = True
89 | for column_index in potential_splits:
90 | for value in potential_splits[column_index]:
91 | data_below, data_above = split_data(data, split_column=column_index, split_value=value)
92 |
93 | if ml_task == "regression":
94 | current_overall_metric = calculate_overall_metric(data_below, data_above, metric_function=calculate_mse)
95 |
96 | # classification
97 | else:
98 | current_overall_metric = calculate_overall_metric(data_below, data_above, metric_function=calculate_entropy)
99 |
100 | if first_iteration or current_overall_metric <= best_overall_metric:
101 | first_iteration = False
102 |
103 | best_overall_metric = current_overall_metric
104 | best_split_column = column_index
105 | best_split_value = value
106 |
107 | return best_split_column, best_split_value
108 |
109 |
110 | # 1.5 Split data
111 | def split_data(data, split_column, split_value):
112 |
113 | split_column_values = data[:, split_column]
114 |
115 | type_of_feature = FEATURE_TYPES[split_column]
116 | if type_of_feature == "continuous":
117 | data_below = data[split_column_values <= split_value]
118 | data_above = data[split_column_values > split_value]
119 |
120 | # feature is categorical
121 | else:
122 | data_below = data[split_column_values == split_value]
123 | data_above = data[split_column_values != split_value]
124 |
125 | return data_below, data_above
126 |
127 |
128 | # 2. Decision Tree Algorithm
129 | # 2.1 Helper Function
130 | def determine_type_of_feature(df):
131 |
132 | feature_types = []
133 |     n_unique_values_threshold = 15
134 | for feature in df.columns:
135 | if feature != "label":
136 | unique_values = df[feature].unique()
137 | example_value = unique_values[0]
138 |
139 |             if (isinstance(example_value, str)) or (len(unique_values) <= n_unique_values_threshold):
140 | feature_types.append("categorical")
141 | else:
142 | feature_types.append("continuous")
143 |
144 | return feature_types
145 |
146 |
147 | # 2.2 Algorithm
148 | def decision_tree_algorithm(df, ml_task, counter=0, min_samples=2, max_depth=5):
149 |
150 | # data preparations
151 | if counter == 0:
152 | global COLUMN_HEADERS, FEATURE_TYPES
153 | COLUMN_HEADERS = df.columns
154 | FEATURE_TYPES = determine_type_of_feature(df)
155 | data = df.values
156 | else:
157 | data = df
158 |
159 |
160 | # base cases
161 | if (check_purity(data)) or (len(data) < min_samples) or (counter == max_depth):
162 | leaf = create_leaf(data, ml_task)
163 | return leaf
164 |
165 |
166 | # recursive part
167 | else:
168 | counter += 1
169 |
170 | # helper functions
171 | potential_splits = get_potential_splits(data)
172 | split_column, split_value = determine_best_split(data, potential_splits, ml_task)
173 | data_below, data_above = split_data(data, split_column, split_value)
174 |
175 | # check for empty data
176 | if len(data_below) == 0 or len(data_above) == 0:
177 | leaf = create_leaf(data, ml_task)
178 | return leaf
179 |
180 | # determine question
181 | feature_name = COLUMN_HEADERS[split_column]
182 | type_of_feature = FEATURE_TYPES[split_column]
183 | if type_of_feature == "continuous":
184 | question = "{} <= {}".format(feature_name, split_value)
185 |
186 | # feature is categorical
187 | else:
188 | question = "{} = {}".format(feature_name, split_value)
189 |
190 | # instantiate sub-tree
191 | sub_tree = {question: []}
192 |
193 | # find answers (recursion)
194 | yes_answer = decision_tree_algorithm(data_below, ml_task, counter, min_samples, max_depth)
195 | no_answer = decision_tree_algorithm(data_above, ml_task, counter, min_samples, max_depth)
196 |
197 |         # If the answers are the same, then there is no point in asking the question.
198 | # This could happen when the data is classified even though it is not pure
199 | # yet (min_samples or max_depth base case).
200 | if yes_answer == no_answer:
201 | sub_tree = yes_answer
202 | else:
203 | sub_tree[question].append(yes_answer)
204 | sub_tree[question].append(no_answer)
205 |
206 | return sub_tree
207 |
208 |
209 | # 3. Make predictions
210 | # 3.1 One example
211 | def predict_example(example, tree):
212 |
213 | # tree is just a root node
214 | if not isinstance(tree, dict):
215 | return tree
216 |
217 | question = list(tree.keys())[0]
218 | feature_name, comparison_operator, value = question.split(" ")
219 |
220 | # ask question
221 | if comparison_operator == "<=":
222 | if example[feature_name] <= float(value):
223 | answer = tree[question][0]
224 | else:
225 | answer = tree[question][1]
226 |
227 | # feature is categorical
228 | else:
229 | if str(example[feature_name]) == value:
230 | answer = tree[question][0]
231 | else:
232 | answer = tree[question][1]
233 |
234 | # base case
235 | if not isinstance(answer, dict):
236 | return answer
237 |
238 | # recursive part
239 | else:
240 | residual_tree = answer
241 | return predict_example(example, residual_tree)
242 |
243 |
244 | # 3.2 All examples of a dataframe
245 | def make_predictions(df, tree):
246 |
247 | if len(df) != 0:
248 | predictions = df.apply(predict_example, args=(tree,), axis=1)
249 | else:
250 | # "df.apply()"" with empty dataframe returns an empty dataframe,
251 | # but "predictions" should be a series instead
252 | predictions = pd.Series()
253 |
254 | return predictions
255 |
256 |
257 | # 3.3 Accuracy
258 | def calculate_accuracy(df, tree):
259 | predictions = make_predictions(df, tree)
260 | predictions_correct = predictions == df.label
261 | accuracy = predictions_correct.mean()
262 |
263 | return accuracy
264 |
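265 | 
266 | 
267 | # 4. Example usage
268 | # Illustrative sketch only: it assumes a dataframe whose last column is the
269 | # label and is named "label" (as calculate_accuracy expects), e.g. the Titanic
270 | # data after a preparation step like the one below; the dropped columns and
271 | # fill values here are assumptions, not taken verbatim from the notebooks.
272 | if __name__ == "__main__":
273 |     import random
274 | 
275 |     from helper_functions import train_test_split
276 | 
277 |     # hypothetical preparation of the Titanic data shipped in ../data
278 |     df = pd.read_csv("../data/Titanic.csv")
279 |     df["label"] = df.Survived
280 |     df = df.drop(["PassengerId", "Survived", "Name", "Ticket", "Cabin"], axis=1)
281 |     df = df.fillna({"Age": df.Age.median(), "Embarked": "S"})
282 | 
283 |     random.seed(0)
284 |     train_df, test_df = train_test_split(df, test_size=0.2)
285 | 
286 |     tree = decision_tree_algorithm(train_df, ml_task="classification", max_depth=10)
287 |     accuracy = calculate_accuracy(test_df, tree)
288 |     print(accuracy)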
--------------------------------------------------------------------------------
/notebooks/helper_functions.py:
--------------------------------------------------------------------------------
1 | # coding: utf-8
2 |
3 | import matplotlib.pyplot as plt
4 | import numpy as np
5 | import pandas as pd
6 | import random
7 | import seaborn as sns
8 |
9 | sns.set_style("darkgrid")
10 |
11 |
12 | def train_test_split(df, test_size):
13 |
14 | if isinstance(test_size, float):
15 | test_size = round(test_size * len(df))
16 |
17 | indices = df.index.tolist()
18 | test_indices = random.sample(population=indices, k=test_size)
19 |
20 | test_df = df.loc[test_indices]
21 | train_df = df.drop(test_indices)
22 |
23 | return train_df, test_df
24 |
25 |
26 | def generate_data(n, specific_outliers=[], n_random_outliers=None):
27 |
28 | # create data
29 | data = np.random.random(size=(n, 2)) * 10
30 | data = data.round(decimals=1)
31 | df = pd.DataFrame(data, columns=["x", "y"])
32 | df["label"] = df.x <= 5
33 |
34 | # add specific outlier data points
35 | for outlier_coordinates in specific_outliers:
36 | df = df.append({"x": outlier_coordinates[0],
37 | "y": outlier_coordinates[1],
38 | "label": True},
39 | ignore_index=True)
40 |
41 |     # add random outlier data points
42 | if n_random_outliers:
43 | outlier_x_values = (6 - 5) * np.random.random(size=n_random_outliers) + 5 # value between 5 and 6
44 | outlier_y_values = np.random.random(size=n_random_outliers) * 10
45 |
46 | df_outliers = pd.DataFrame({"x": outlier_x_values.round(decimals=2),
47 | "y": outlier_y_values.round(decimals=2),
48 | "label": [True] * n_random_outliers})
49 |
50 | df = df.append(df_outliers, ignore_index=True)
51 |
52 | return df
53 |
54 |
55 | def plot_decision_boundaries(tree, x_min, x_max, y_min, y_max):
56 | color_keys = {True: "orange", False: "blue"}
57 |
58 | # recursive part
59 | if isinstance(tree, dict):
60 | question = list(tree.keys())[0]
61 | yes_answer, no_answer = tree[question]
62 | feature, _, value = question.split()
63 |
64 | if feature == "x":
65 | plot_decision_boundaries(yes_answer, x_min, float(value), y_min, y_max)
66 | plot_decision_boundaries(no_answer, float(value), x_max, y_min, y_max)
67 | else:
68 | plot_decision_boundaries(yes_answer, x_min, x_max, y_min, float(value))
69 | plot_decision_boundaries(no_answer, x_min, x_max, float(value), y_max)
70 |
71 | # "tree" is a leaf
72 | else:
73 | plt.fill_between(x=[x_min, x_max], y1=y_min, y2=y_max, alpha=0.2, color=color_keys[tree])
74 |
75 | return
76 |
77 |
78 | def create_plot(df, tree=None, title=None):
79 |
80 | sns.lmplot(data=df, x="x", y="y", hue="label",
81 | fit_reg=False, height=4, aspect=1.5, legend=False)
82 | plt.title(title)
83 |
84 |     if tree or tree == False: # root of the tree might just be a leaf with "False"
85 | x_min, x_max = round(df.x.min()), round(df.x.max())
86 | y_min, y_max = round(df.y.min()), round(df.y.max())
87 |
88 | plot_decision_boundaries(tree, x_min, x_max, y_min, y_max)
89 |
90 | return
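91 | 
92 | 
93 | # Example usage
94 | # Illustrative sketch only: generate_data() builds a toy dataframe with the
95 | # feature columns "x" and "y" and a boolean "label", which is the layout that
96 | # plot_decision_boundaries and create_plot expect.
97 | if __name__ == "__main__":
98 |     from decision_tree_functions import decision_tree_algorithm
99 | 
100 |     np.random.seed(0)
101 |     df = generate_data(n=300)
102 | 
103 |     tree = decision_tree_algorithm(df, ml_task="classification", max_depth=3)
104 |     create_plot(df, tree, title="Decision boundaries (max_depth=3)")
105 |     plt.show()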
--------------------------------------------------------------------------------