├── LICENSE
├── MIT Machine Learning Lectures
│   ├── lecture-1-introduction.pdf
│   ├── lecture-10-boosting.pdf
│   ├── lecture-11-complexity.pdf
│   ├── lecture-12-risk minimization.pdf
│   ├── lecture-13-mixtures.pdf
│   ├── lecture-14-em algorithm.pdf
│   ├── lecture-15-mixture classifiers.pdf
│   ├── lecture-16-clustering.pdf
│   ├── lecture-17-semi-supervised.pdf
│   ├── lecture-18-hmm.pdf
│   ├── lecture-19-hmm2.pdf
│   ├── lecture-2-linear regression.pdf
│   ├── lecture-20-bayesian networks.pdf
│   ├── lecture-21-markov random field.pdf
│   ├── lecture-22-messgae passing.pdf
│   ├── lecture-23-learning bayesian nets.pdf
│   ├── lecture-24-learning bayesian nets2.pdf
│   ├── lecture-3-additive regression.pdf
│   ├── lecture-4-regression model.pdf
│   ├── lecture-5-logistic.pdf
│   ├── lecture-6-logistic 2.pdf
│   ├── lecture-7-kernel methods.pdf
│   ├── lecture-8-kernel methods 2.pdf
│   └── lecture-9-feature selection.pdf
├── README.md
├── StatsLearning
│   ├── 1_linear_regression.R
│   └── 2_logistic_regression.R
└── contributing.md
/LICENSE:
--------------------------------------------------------------------------------
1 | CC0 1.0 Universal
2 |
3 | Statement of Purpose
4 |
5 | The laws of most jurisdictions throughout the world automatically confer
6 | exclusive Copyright and Related Rights (defined below) upon the creator and
7 | subsequent owner(s) (each and all, an "owner") of an original work of
8 | authorship and/or a database (each, a "Work").
9 |
10 | Certain owners wish to permanently relinquish those rights to a Work for the
11 | purpose of contributing to a commons of creative, cultural and scientific
12 | works ("Commons") that the public can reliably and without fear of later
13 | claims of infringement build upon, modify, incorporate in other works, reuse
14 | and redistribute as freely as possible in any form whatsoever and for any
15 | purposes, including without limitation commercial purposes. These owners may
16 | contribute to the Commons to promote the ideal of a free culture and the
17 | further production of creative, cultural and scientific works, or to gain
18 | reputation or greater distribution for their Work in part through the use and
19 | efforts of others.
20 |
21 | For these and/or other purposes and motivations, and without any expectation
22 | of additional consideration or compensation, the person associating CC0 with a
23 | Work (the "Affirmer"), to the extent that he or she is an owner of Copyright
24 | and Related Rights in the Work, voluntarily elects to apply CC0 to the Work
25 | and publicly distribute the Work under its terms, with knowledge of his or her
26 | Copyright and Related Rights in the Work and the meaning and intended legal
27 | effect of CC0 on those rights.
28 |
29 | 1. Copyright and Related Rights. A Work made available under CC0 may be
30 | protected by copyright and related or neighboring rights ("Copyright and
31 | Related Rights"). Copyright and Related Rights include, but are not limited
32 | to, the following:
33 |
34 | i. the right to reproduce, adapt, distribute, perform, display, communicate,
35 | and translate a Work;
36 |
37 | ii. moral rights retained by the original author(s) and/or performer(s);
38 |
39 | iii. publicity and privacy rights pertaining to a person's image or likeness
40 | depicted in a Work;
41 |
42 | iv. rights protecting against unfair competition in regards to a Work,
43 | subject to the limitations in paragraph 4(a), below;
44 |
45 | v. rights protecting the extraction, dissemination, use and reuse of data in
46 | a Work;
47 |
48 | vi. database rights (such as those arising under Directive 96/9/EC of the
49 | European Parliament and of the Council of 11 March 1996 on the legal
50 | protection of databases, and under any national implementation thereof,
51 | including any amended or successor version of such directive); and
52 |
53 | vii. other similar, equivalent or corresponding rights throughout the world
54 | based on applicable law or treaty, and any national implementations thereof.
55 |
56 | 2. Waiver. To the greatest extent permitted by, but not in contravention of,
57 | applicable law, Affirmer hereby overtly, fully, permanently, irrevocably and
58 | unconditionally waives, abandons, and surrenders all of Affirmer's Copyright
59 | and Related Rights and associated claims and causes of action, whether now
60 | known or unknown (including existing as well as future claims and causes of
61 | action), in the Work (i) in all territories worldwide, (ii) for the maximum
62 | duration provided by applicable law or treaty (including future time
63 | extensions), (iii) in any current or future medium and for any number of
64 | copies, and (iv) for any purpose whatsoever, including without limitation
65 | commercial, advertising or promotional purposes (the "Waiver"). Affirmer makes
66 | the Waiver for the benefit of each member of the public at large and to the
67 | detriment of Affirmer's heirs and successors, fully intending that such Waiver
68 | shall not be subject to revocation, rescission, cancellation, termination, or
69 | any other legal or equitable action to disrupt the quiet enjoyment of the Work
70 | by the public as contemplated by Affirmer's express Statement of Purpose.
71 |
72 | 3. Public License Fallback. Should any part of the Waiver for any reason be
73 | judged legally invalid or ineffective under applicable law, then the Waiver
74 | shall be preserved to the maximum extent permitted taking into account
75 | Affirmer's express Statement of Purpose. In addition, to the extent the Waiver
76 | is so judged Affirmer hereby grants to each affected person a royalty-free,
77 | non transferable, non sublicensable, non exclusive, irrevocable and
78 | unconditional license to exercise Affirmer's Copyright and Related Rights in
79 | the Work (i) in all territories worldwide, (ii) for the maximum duration
80 | provided by applicable law or treaty (including future time extensions), (iii)
81 | in any current or future medium and for any number of copies, and (iv) for any
82 | purpose whatsoever, including without limitation commercial, advertising or
83 | promotional purposes (the "License"). The License shall be deemed effective as
84 | of the date CC0 was applied by Affirmer to the Work. Should any part of the
85 | License for any reason be judged legally invalid or ineffective under
86 | applicable law, such partial invalidity or ineffectiveness shall not
87 | invalidate the remainder of the License, and in such case Affirmer hereby
88 | affirms that he or she will not (i) exercise any of his or her remaining
89 | Copyright and Related Rights in the Work or (ii) assert any associated claims
90 | and causes of action with respect to the Work, in either case contrary to
91 | Affirmer's express Statement of Purpose.
92 |
93 | 4. Limitations and Disclaimers.
94 |
95 | a. No trademark or patent rights held by Affirmer are waived, abandoned,
96 | surrendered, licensed or otherwise affected by this document.
97 |
98 | b. Affirmer offers the Work as-is and makes no representations or warranties
99 | of any kind concerning the Work, express, implied, statutory or otherwise,
100 | including without limitation warranties of title, merchantability, fitness
101 | for a particular purpose, non infringement, or the absence of latent or
102 | other defects, accuracy, or the present or absence of errors, whether or not
103 | discoverable, all to the greatest extent permissible under applicable law.
104 |
105 | c. Affirmer disclaims responsibility for clearing rights of other persons
106 | that may apply to the Work or any use thereof, including without limitation
107 | any person's Copyright and Related Rights in the Work. Further, Affirmer
108 | disclaims responsibility for obtaining any necessary consents, permissions
109 | or other rights required for any use of the Work.
110 |
111 | d. Affirmer understands and acknowledges that Creative Commons is not a
112 | party to this document and has no duty or obligation with respect to this
113 | CC0 or use of the Work.
114 |
115 | For more information, please see
116 | <https://creativecommons.org/publicdomain/zero/1.0/>
117 |
--------------------------------------------------------------------------------
/MIT Machine Learning Lectures/lecture-1-introduction.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aymericdamien/Machine-Learning-Tutorials/46714077b3776c711a96d145323cda76e9fe3910/MIT Machine Learning Lectures/lecture-1-introduction.pdf
--------------------------------------------------------------------------------
/MIT Machine Learning Lectures/lecture-10-boosting.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aymericdamien/Machine-Learning-Tutorials/46714077b3776c711a96d145323cda76e9fe3910/MIT Machine Learning Lectures/lecture-10-boosting.pdf
--------------------------------------------------------------------------------
/MIT Machine Learning Lectures/lecture-11-complexity.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aymericdamien/Machine-Learning-Tutorials/46714077b3776c711a96d145323cda76e9fe3910/MIT Machine Learning Lectures/lecture-11-complexity.pdf
--------------------------------------------------------------------------------
/MIT Machine Learning Lectures/lecture-12-risk minimization.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aymericdamien/Machine-Learning-Tutorials/46714077b3776c711a96d145323cda76e9fe3910/MIT Machine Learning Lectures/lecture-12-risk minimization.pdf
--------------------------------------------------------------------------------
/MIT Machine Learning Lectures/lecture-13-mixtures.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aymericdamien/Machine-Learning-Tutorials/46714077b3776c711a96d145323cda76e9fe3910/MIT Machine Learning Lectures/lecture-13-mixtures.pdf
--------------------------------------------------------------------------------
/MIT Machine Learning Lectures/lecture-14-em algorithm.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aymericdamien/Machine-Learning-Tutorials/46714077b3776c711a96d145323cda76e9fe3910/MIT Machine Learning Lectures/lecture-14-em algorithm.pdf
--------------------------------------------------------------------------------
/MIT Machine Learning Lectures/lecture-15-mixture classifiers.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aymericdamien/Machine-Learning-Tutorials/46714077b3776c711a96d145323cda76e9fe3910/MIT Machine Learning Lectures/lecture-15-mixture classifiers.pdf
--------------------------------------------------------------------------------
/MIT Machine Learning Lectures/lecture-16-clustering.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aymericdamien/Machine-Learning-Tutorials/46714077b3776c711a96d145323cda76e9fe3910/MIT Machine Learning Lectures/lecture-16-clustering.pdf
--------------------------------------------------------------------------------
/MIT Machine Learning Lectures/lecture-17-semi-supervised.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aymericdamien/Machine-Learning-Tutorials/46714077b3776c711a96d145323cda76e9fe3910/MIT Machine Learning Lectures/lecture-17-semi-supervised.pdf
--------------------------------------------------------------------------------
/MIT Machine Learning Lectures/lecture-18-hmm.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aymericdamien/Machine-Learning-Tutorials/46714077b3776c711a96d145323cda76e9fe3910/MIT Machine Learning Lectures/lecture-18-hmm.pdf
--------------------------------------------------------------------------------
/MIT Machine Learning Lectures/lecture-19-hmm2.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aymericdamien/Machine-Learning-Tutorials/46714077b3776c711a96d145323cda76e9fe3910/MIT Machine Learning Lectures/lecture-19-hmm2.pdf
--------------------------------------------------------------------------------
/MIT Machine Learning Lectures/lecture-2-linear regression.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aymericdamien/Machine-Learning-Tutorials/46714077b3776c711a96d145323cda76e9fe3910/MIT Machine Learning Lectures/lecture-2-linear regression.pdf
--------------------------------------------------------------------------------
/MIT Machine Learning Lectures/lecture-20-bayesian networks.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aymericdamien/Machine-Learning-Tutorials/46714077b3776c711a96d145323cda76e9fe3910/MIT Machine Learning Lectures/lecture-20-bayesian networks.pdf
--------------------------------------------------------------------------------
/MIT Machine Learning Lectures/lecture-21-markov random field.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aymericdamien/Machine-Learning-Tutorials/46714077b3776c711a96d145323cda76e9fe3910/MIT Machine Learning Lectures/lecture-21-markov random field.pdf
--------------------------------------------------------------------------------
/MIT Machine Learning Lectures/lecture-22-messgae passing.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aymericdamien/Machine-Learning-Tutorials/46714077b3776c711a96d145323cda76e9fe3910/MIT Machine Learning Lectures/lecture-22-messgae passing.pdf
--------------------------------------------------------------------------------
/MIT Machine Learning Lectures/lecture-23-learning bayesian nets.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aymericdamien/Machine-Learning-Tutorials/46714077b3776c711a96d145323cda76e9fe3910/MIT Machine Learning Lectures/lecture-23-learning bayesian nets.pdf
--------------------------------------------------------------------------------
/MIT Machine Learning Lectures/lecture-24-learning bayesian nets2.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aymericdamien/Machine-Learning-Tutorials/46714077b3776c711a96d145323cda76e9fe3910/MIT Machine Learning Lectures/lecture-24-learning bayesian nets2.pdf
--------------------------------------------------------------------------------
/MIT Machine Learning Lectures/lecture-3-additive regression.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aymericdamien/Machine-Learning-Tutorials/46714077b3776c711a96d145323cda76e9fe3910/MIT Machine Learning Lectures/lecture-3-additive regression.pdf
--------------------------------------------------------------------------------
/MIT Machine Learning Lectures/lecture-4-regression model.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aymericdamien/Machine-Learning-Tutorials/46714077b3776c711a96d145323cda76e9fe3910/MIT Machine Learning Lectures/lecture-4-regression model.pdf
--------------------------------------------------------------------------------
/MIT Machine Learning Lectures/lecture-5-logistic.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aymericdamien/Machine-Learning-Tutorials/46714077b3776c711a96d145323cda76e9fe3910/MIT Machine Learning Lectures/lecture-5-logistic.pdf
--------------------------------------------------------------------------------
/MIT Machine Learning Lectures/lecture-6-logistic 2.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aymericdamien/Machine-Learning-Tutorials/46714077b3776c711a96d145323cda76e9fe3910/MIT Machine Learning Lectures/lecture-6-logistic 2.pdf
--------------------------------------------------------------------------------
/MIT Machine Learning Lectures/lecture-7-kernel methods.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aymericdamien/Machine-Learning-Tutorials/46714077b3776c711a96d145323cda76e9fe3910/MIT Machine Learning Lectures/lecture-7-kernel methods.pdf
--------------------------------------------------------------------------------
/MIT Machine Learning Lectures/lecture-8-kernel methods 2.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aymericdamien/Machine-Learning-Tutorials/46714077b3776c711a96d145323cda76e9fe3910/MIT Machine Learning Lectures/lecture-8-kernel methods 2.pdf
--------------------------------------------------------------------------------
/MIT Machine Learning Lectures/lecture-9-feature selection.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aymericdamien/Machine-Learning-Tutorials/46714077b3776c711a96d145323cda76e9fe3910/MIT Machine Learning Lectures/lecture-9-feature selection.pdf
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Machine Learning Tutorials [](https://github.com/sindresorhus/awesome)
2 |
3 | This repository contains a topic-wise curated list of Machine Learning and Deep Learning tutorials, articles and other resources. Other awesome lists can be found in this [list](https://github.com/sindresorhus/awesome).
4 |
5 | If you want to contribute to this list, please read [Contributing Guidelines](https://github.com/ujjwalkarn/Machine-Learning-Tutorials/blob/master/contributing.md).
6 |
7 | ## Table of Contents
8 | - [Miscellaneous](#general)
9 | - [Interview Resources](#interview)
10 | - [Artificial Intelligence](#ai)
11 | - [Genetic Algorithms](#ga)
12 | - [Statistics](#stat)
13 | - [Useful Blogs](#blogs)
14 | - [Resources on Quora](#quora)
15 | - [Resources on Kaggle](#kaggle)
16 | - [Cheat Sheets](#cs)
17 | - [Classification](#classification)
18 | - [Linear Regression](#linear)
19 | - [Logistic Regression](#logistic)
20 | - [Model Validation using Resampling](#validation)
21 | - [Cross Validation](#cross)
22 | - [Bootstrapping](#boot)
23 | - [Deep Learning](#deep)
24 | - [Frameworks](#frame)
25 | - [Feed Forward Networks](#feed)
26 | - [Recurrent Neural Nets, LSTM, GRU](#rnn)
27 | - [Restricted Boltzmann Machine, DBNs](#rbm)
28 | - [Autoencoders](#auto)
29 | - [Convolutional Neural Nets](#cnn)
30 | - [Natural Language Processing](#nlp)
31 | - [Topic Modeling, LDA](#topic)
32 | - [Word2Vec](#word2vec)
33 | - [Computer Vision](#vision)
34 | - [Support Vector Machine](#svm)
35 | - [Reinforcement Learning](#rl)
36 | - [Decision Trees](#dt)
37 | - [Random Forest / Bagging](#rf)
38 | - [Boosting](#gbm)
39 | - [Ensembles](#ensem)
40 | - [Stacking Models](#stack)
41 | - [VC Dimension](#vc)
42 | - [Bayesian Machine Learning](#bayes)
43 | - [Semi Supervised Learning](#semi)
44 | - [Optimizations](#opt)
45 | - [Other Useful Tutorials](#other)
46 |
47 |
48 | ## Miscellaneous
49 | - [A curated list of awesome Machine Learning frameworks, libraries and software](https://github.com/josephmisiti/awesome-machine-learning)
50 | - [A curated list of awesome data visualization libraries and resources.](https://github.com/fasouto/awesome-dataviz)
51 | - [An awesome Data Science repository to learn and apply for real world problems](https://github.com/okulbilisim/awesome-datascience)
52 | - [The Open Source Data Science Masters](http://datasciencemasters.org/)
53 | - [Machine Learning FAQs on Cross Validated](http://stats.stackexchange.com/questions/tagged/machine-learning)
54 | - [List of Machine Learning University Courses](https://github.com/prakhar1989/awesome-courses#machine-learning)
55 | - [Machine Learning algorithms that you should always have a strong understanding of](https://www.quora.com/What-are-some-Machine-Learning-algorithms-that-you-should-always-have-a-strong-understanding-of-and-why)
56 | - [Difference between Linearly Independent, Orthogonal, and Uncorrelated Variables](https://www.psych.umn.edu/faculty/waller/classes/FA2010/Readings/rodgers.pdf)
57 | - [List of Machine Learning Concepts](https://en.wikipedia.org/wiki/List_of_machine_learning_concepts)
58 | - [Slides on Several Machine Learning Topics](http://www.slideshare.net/pierluca.lanzi/presentations)
59 | - [MIT Machine Learning Lecture Slides](http://www.ai.mit.edu/courses/6.867-f04/lectures.html)
60 | - [Comparison of Supervised Learning Algorithms](http://www.dataschool.io/comparing-supervised-learning-algorithms/)
61 | - [Learning Data Science Fundamentals](http://www.dataschool.io/learning-data-science-fundamentals/)
62 | - [Machine Learning mistakes to avoid](https://medium.com/@nomadic_mind/new-to-machine-learning-avoid-these-three-mistakes-73258b3848a4#.lih061l3l)
63 | - [Statistical Machine Learning Course](http://www.stat.cmu.edu/~larry/=sml/)
64 | - [TheAnalyticsEdge edX Notes and Codes](https://github.com/pedrosan/TheAnalyticsEdge)
65 |
66 |
67 | ## Interview Resources
68 | - [How can a computer science graduate student prepare himself for data scientist interviews?](https://www.quora.com/How-can-a-computer-science-graduate-student-prepare-himself-for-data-scientist-machine-learning-intern-interviews)
69 | - [How do I learn Machine Learning?](https://www.quora.com/How-do-I-learn-machine-learning-1)
70 | - [FAQs about Data Science Interviews](https://www.quora.com/topic/Data-Science-Interviews/faq)
71 | - [What are the key skills of a data scientist?](https://www.quora.com/What-are-the-key-skills-of-a-data-scientist)
72 |
73 |
74 | ## Artificial Intelligence
75 | - [Awesome Artificial Intelligence (GitHub Repo)](https://github.com/owainlewis/awesome-artificial-intelligence)
76 | - [edX course | Klein & Abbeel](https://courses.edx.org/courses/BerkeleyX/CS188x_1/1T2013/info)
77 | - [Udacity Course | Norvig & Thrun](https://www.udacity.com/course/intro-to-artificial-intelligence--cs271)
78 | - [TED talks on AI](http://www.ted.com/playlists/310/talks_on_artificial_intelligen)
79 |
80 |
81 | ## Genetic Algorithms
82 | - [Genetic Algorithms Wikipedia Page](https://en.wikipedia.org/wiki/Genetic_algorithm)
83 | - [Simple Implementation of Genetic Algorithms in Python (Part 1)](http://outlace.com/Simple-Genetic-Algorithm-in-15-lines-of-Python/), [Part 2](http://outlace.com/Simple-Genetic-Algorithm-Python-Addendum/) (a toy sketch also appears at the end of this section)
84 | - [Genetic Algorithms vs Artificial Neural Networks](http://stackoverflow.com/questions/1402370/when-to-use-genetic-algorithms-vs-when-to-use-neural-networks)
85 | - [Genetic Algorithms Explained in Plain English](http://www.ai-junkie.com/ga/intro/gat1.html)
86 | - [Genetic Programming](https://en.wikipedia.org/wiki/Genetic_programming)
87 | - [Genetic Programming in Python (GitHub)](https://github.com/trevorstephens/gplearn)
88 | - [Genetic Algorithms vs Genetic Programming (Quora)](https://www.quora.com/Whats-the-difference-between-Genetic-Algorithms-and-Genetic-Programming), [StackOverflow](http://stackoverflow.com/questions/3819977/what-are-the-differences-between-genetic-algorithms-and-genetic-programming)
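
For readers who want to see the moving parts before opening the links above, here is a minimal, dependency-free toy sketch of a genetic algorithm that evolves a bit-string toward all ones. The population size, mutation rate, selection scheme and fitness function are arbitrary illustration choices, not taken from any of the linked tutorials.

```python
import random

TARGET_LEN = 20        # length of the bit-string being evolved
POP_SIZE = 30          # arbitrary population size for the demo
MUTATION_RATE = 0.05   # per-bit flip probability
GENERATIONS = 200

def fitness(bits):
    # Toy objective: count of 1-bits, maximised by the all-ones string.
    return sum(bits)

def select(population):
    # Tournament selection of size 2: keep the fitter of two random individuals.
    a, b = random.sample(population, 2)
    return a if fitness(a) >= fitness(b) else b

def crossover(a, b):
    # Single-point crossover.
    point = random.randint(1, TARGET_LEN - 1)
    return a[:point] + b[point:]

def mutate(bits):
    # Flip each bit independently with probability MUTATION_RATE.
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in bits]

population = [[random.randint(0, 1) for _ in range(TARGET_LEN)]
              for _ in range(POP_SIZE)]
for generation in range(GENERATIONS):
    population = [mutate(crossover(select(population), select(population)))
                  for _ in range(POP_SIZE)]
    best = max(population, key=fitness)
    if fitness(best) == TARGET_LEN:
        break

print(f"best fitness {fitness(best)}/{TARGET_LEN} after {generation + 1} generations")
```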
89 |
90 |
91 | ## Statistics
92 | - [Stat Trek Website](http://stattrek.com/) - A dedicated website to teach yourself Statistics
93 | - [Learn Statistics Using Python](https://github.com/rouseguy/intro2stats) - Learn Statistics using an application-centric programming approach
94 | - [Statistics for Hackers | Slides | @jakevdp](https://speakerdeck.com/jakevdp/statistics-for-hackers) - Slides by Jake VanderPlas
95 | - [Online Statistics Book](http://onlinestatbook.com/2/index.html) - An Interactive Multimedia Course for Studying Statistics
96 | - [What is a Sampling Distribution?](http://stattrek.com/sampling/sampling-distribution.aspx) (see the simulation sketch at the end of this section)
97 | - Tutorials
98 | - [AP Statistics Tutorial](http://stattrek.com/tutorials/ap-statistics-tutorial.aspx)
99 | - [Statistics and Probability Tutorial](http://stattrek.com/tutorials/statistics-tutorial.aspx)
100 | - [Matrix Algebra Tutorial](http://stattrek.com/tutorials/matrix-algebra-tutorial.aspx)
101 | - [What is an Unbiased Estimator?](https://www.physicsforums.com/threads/what-is-an-unbiased-estimator.547728/)
102 | - [Goodness of Fit Explained](https://en.wikipedia.org/wiki/Goodness_of_fit)
103 | - [What are QQ Plots?](http://onlinestatbook.com/2/advanced_graphs/q-q_plots.html)
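
The sampling-distribution link above becomes more concrete with a few lines of simulation. The numpy-only sketch below uses an arbitrary, clearly non-normal population (an Exp(1) distribution, chosen purely for illustration), draws many samples of each size, records every sample's mean, and checks that the spread of those means shrinks roughly like σ/√n.

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary non-normal "population": Exp(1), which has mean 1 and sd 1.
population_mean, population_sd = 1.0, 1.0

def sample_means(sample_size, n_samples=10_000):
    # Draw many samples of the same size and record each sample's mean.
    samples = rng.exponential(scale=1.0, size=(n_samples, sample_size))
    return samples.mean(axis=1)

for n in (5, 30, 200):
    means = sample_means(n)
    print(f"n={n:>3}  mean of sample means={means.mean():.3f} (theory {population_mean})  "
          f"sd of sample means={means.std(ddof=1):.3f} (theory {population_sd / np.sqrt(n):.3f})")
```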
104 |
105 |
106 | ## Useful Blogs
107 | - [Edwin Chen's Blog](http://blog.echen.me/) - A blog about Math, stats, ML, crowdsourcing, data science
108 | - [The Data School Blog](http://www.dataschool.io/) - Data science for beginners!
109 | - [ML Wave](http://mlwave.com/) - A blog for Learning Machine Learning
110 | - [Andrej Karpathy](http://karpathy.github.io/) - A blog about Deep Learning and Data Science in general
111 | - [Colah's Blog](http://colah.github.io/) - Awesome Neural Networks Blog
112 | - [Alex Minnaar's Blog](http://alexminnaar.com/) - A blog about Machine Learning and Software Engineering
113 | - [Statistically Significant](http://andland.github.io/) - Andrew Landgraf's Data Science Blog
114 | - [Simply Statistics](http://simplystatistics.org/) - A blog by three biostatistics professors
115 | - [Yanir Seroussi's Blog](http://yanirseroussi.com/) - A blog about Data Science and beyond
116 | - [fastML](http://fastml.com/) - Machine learning made easy
117 | - [Trevor Stephens Blog](http://trevorstephens.com/) - Trevor Stephens Personal Page
118 | - [no free hunch | kaggle](http://blog.kaggle.com/) - The Kaggle Blog about all things Data Science
119 | - [A Quantitative Journey | outlace](http://outlace.com/) - learning quantitative applications
120 | - [r4stats](http://r4stats.com/) - Analyzes the world of data science and helps people learn to use R
121 | - [Variance Explained](http://varianceexplained.org/) - David Robinson's Blog
122 | - [AI Junkie](http://www.ai-junkie.com/) - a blog about Artificial Intelligence
123 |
124 |
125 | ## Resources on Quora
126 | - [Most Viewed Machine Learning writers](https://www.quora.com/topic/Machine-Learning/writers)
127 | - [Data Science Topic on Quora](https://www.quora.com/Data-Science)
128 | - [William Chen's Answers](https://www.quora.com/William-Chen-6/answers)
129 | - [Michael Hochster's Answers](https://www.quora.com/Michael-Hochster/answers)
130 | - [Ricardo Vladimiro's Answers](https://www.quora.com/Ricardo-Vladimiro-1/answers)
131 | - [Storytelling with Statistics](https://datastories.quora.com/)
132 | - [Data Science FAQs on Quora](https://www.quora.com/topic/Data-Science/faq)
133 | - [Machine Learning FAQs on Quora](https://www.quora.com/topic/Machine-Learning/faq)
134 |
135 |
136 | ## Kaggle Competition Write-Ups
137 | - [How to almost win Kaggle Competitions](http://yanirseroussi.com/2014/08/24/how-to-almost-win-kaggle-competitions/)
138 | - [Convolutional Neural Networks for EEG detection](http://blog.kaggle.com/2015/10/05/grasp-and-lift-eeg-detection-winners-interview-3rd-place-team-hedj/)
139 | - [Facebook Recruiting III Explained](http://alexminnaar.com/tag/kaggle-competitions.html)
140 | - [Predicting CTR with Online ML](http://mlwave.com/predicting-click-through-rates-with-online-machine-learning/)
141 |
142 |
143 | ## Cheat Sheets
144 | - [Probability Cheat Sheet](http://static1.squarespace.com/static/54bf3241e4b0f0d81bf7ff36/t/55e9494fe4b011aed10e48e5/1441352015658/probability_cheatsheet.pdf), [Source](http://www.wzchen.com/probability-cheatsheet/)
145 | - [Machine Learning Cheat Sheet](https://github.com/soulmachine/machine-learning-cheat-sheet)
146 |
147 |
148 | ## Classification
149 | - [Does Balancing Classes Improve Classifier Performance?](http://www.win-vector.com/blog/2015/02/does-balancing-classes-improve-classifier-performance/)
150 | - [What is Deviance?](http://stats.stackexchange.com/questions/6581/what-is-deviance-specifically-in-cart-rpart)
151 | - [When to choose which machine learning classifier?](http://stackoverflow.com/questions/2595176/when-to-choose-which-machine-learning-classifier)
152 | - [What are the advantages of different classification algorithms?](https://www.quora.com/What-are-the-advantages-of-different-classification-algorithms)
153 | - [ROC and AUC Explained](http://www.dataschool.io/roc-curves-and-auc-explained/)
154 | - [An introduction to ROC analysis](https://ccrma.stanford.edu/workshops/mir2009/references/ROCintro.pdf)
155 | - [Simple guide to confusion matrix terminology](http://www.dataschool.io/simple-guide-to-confusion-matrix-terminology/) (see the short sketch after this list)
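
As a quick companion to the ROC/AUC and confusion-matrix links above, here is a minimal scikit-learn sketch on synthetic data; the dataset size, the logistic-regression classifier and the 0.5 threshold are all arbitrary choices for illustration, not a recommendation.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic binary classification problem, purely for illustration.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
proba = clf.predict_proba(X_test)[:, 1]      # predicted P(class = 1)

threshold = 0.5                               # the usual default cut-off
y_pred = (proba >= threshold).astype(int)

print("confusion matrix (rows = true class, columns = predicted class):")
print(confusion_matrix(y_test, y_pred))
print("ROC AUC (threshold-free):", roc_auc_score(y_test, proba))
```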
156 |
157 |
158 |
159 | ## Linear Regression
160 | - [General](#general-)
161 | - [Assumptions of Linear Regression](http://pareonline.net/getvn.asp?n=2&v=8), [Stack Exchange](http://stats.stackexchange.com/questions/16381/what-is-a-complete-list-of-the-usual-assumptions-for-linear-regression)
162 | - [Linear Regression Comprehensive Resource](http://people.duke.edu/~rnau/regintro.htm)
163 | - [Applying and Interpreting Linear Regression](http://www.dataschool.io/applying-and-interpreting-linear-regression/)
164 | - [What does having constant variance in a linear regression model mean?](http://stats.stackexchange.com/questions/52089/what-does-having-constant-variance-in-a-linear-regression-model-mean/52107?stw=2#52107)
165 | - [Difference between linear regression on y with x and x with y](http://stats.stackexchange.com/questions/22718/what-is-the-difference-between-linear-regression-on-y-with-x-and-x-with-y?lq=1)
166 | - [Is linear regression valid when the dependent variable is not normally distributed?](http://www.researchgate.net/post/Is_linear_regression_valid_when_the_outcome_dependant_variable_not_normally_distributed)
167 | - Multicollinearity and VIF (see the VIF sketch at the end of this section)
168 | - [Dummy Variable Trap | Multicollinearity](https://en.wikipedia.org/wiki/Multicollinearity)
169 | - [Dealing with multicollinearity using VIFs](http://jonlefcheck.net/2012/12/28/dealing-with-multicollinearity-using-variance-inflation-factors/)
170 |
171 | - [Residual Analysis](#residuals-)
172 | - [Interpreting plot.lm() in R](http://stats.stackexchange.com/questions/58141/interpreting-plot-lm)
173 | - [How to interpret a QQ plot?](http://stats.stackexchange.com/questions/101274/how-to-interpret-a-qq-plot?lq=1)
174 | - [Interpreting Residuals vs Fitted Plot](http://stats.stackexchange.com/questions/76226/interpreting-the-residuals-vs-fitted-values-plot-for-verifying-the-assumptions)
175 |
176 | - [Outliers](#outliers-)
177 | - [How should outliers be dealt with?](http://stats.stackexchange.com/questions/175/how-should-outliers-be-dealt-with-in-linear-regression-analysis)
178 |
179 | - [Elastic Net](https://en.wikipedia.org/wiki/Elastic_net_regularization)
180 | - [Regularization and Variable Selection via the
181 | Elastic Net](https://web.stanford.edu/~hastie/Papers/elasticnet.pdf)
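
Since the multicollinearity entries above revolve around variance inflation factors, here is a small numpy-only sketch of the usual definition, VIF_j = 1 / (1 − R_j²), where R_j² comes from regressing feature j on the remaining features. The synthetic, deliberately collinear design matrix is an arbitrary illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic design matrix with deliberate collinearity: x2 is nearly x0 + x1.
n = 500
x0 = rng.normal(size=n)
x1 = rng.normal(size=n)
x2 = x0 + x1 + 0.1 * rng.normal(size=n)
X = np.column_stack([x0, x1, x2])

def vif(X, j):
    """VIF of column j: regress X[:, j] on the other columns (plus an intercept)."""
    y = X[:, j]
    others = np.delete(X, j, axis=1)
    A = np.column_stack([np.ones(len(y)), others])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    r2 = 1.0 - resid.var() / y.var()
    return 1.0 / (1.0 - r2)

for j in range(X.shape[1]):
    print(f"VIF of column {j}: {vif(X, j):.1f}")
```

A common rule of thumb treats VIF values above roughly 5 to 10 as a warning sign of problematic collinearity.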
182 |
183 |
184 | ## Logistic Regression
185 | - [Logistic Regression Wiki](https://en.wikipedia.org/wiki/Logistic_regression)
186 | - [Geometric Intuition of Logistic Regression](http://florianhartl.com/logistic-regression-geometric-intuition.html)
187 | - [Obtaining predicted categories (choosing threshold)](http://stats.stackexchange.com/questions/25389/obtaining-predicted-values-y-1-or-0-from-a-logistic-regression-model-fit) (see the sketch at the end of this section)
188 | - [Residuals in logistic regression](http://stats.stackexchange.com/questions/1432/what-do-the-residuals-in-a-logistic-regression-mean)
189 | - [Difference between logit and probit models](http://stats.stackexchange.com/questions/20523/difference-between-logit-and-probit-models#30909), [Logistic Regression Wiki](https://en.wikipedia.org/wiki/Logistic_regression), [Probit Model Wiki](https://en.wikipedia.org/wiki/Probit_model)
190 | - [Pseudo R2 for Logistic Regression](http://stats.stackexchange.com/questions/3559/which-pseudo-r2-measure-is-the-one-to-report-for-logistic-regression-cox-s), [How to calculate](http://stats.stackexchange.com/questions/8511/how-to-calculate-pseudo-r2-from-rs-logistic-regression), [Other Details](http://www.ats.ucla.edu/stat/mult_pkg/faq/general/Psuedo_RSquareds.htm)
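
The thresholding links above are easier to follow with a tiny end-to-end example. The numpy-only sketch below fits a one-feature logistic model from scratch by gradient descent on synthetic data and then shows how the chosen cut-off changes the predicted classes; the true coefficients, learning rate and the 0.5/0.3 thresholds are arbitrary illustration choices (the repository also ships an R script on this topic, `StatsLearning/2_logistic_regression.R`).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic one-feature data: class 1 becomes more likely as x grows.
n = 400
x = rng.normal(size=n)
true_w, true_b = 2.0, -0.5
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-(true_w * x + true_b)))).astype(float)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# Fit w and b by gradient descent on the mean negative log-likelihood.
w, b, lr = 0.0, 0.0, 0.1
for _ in range(2000):
    p = sigmoid(w * x + b)
    w -= lr * np.mean((p - y) * x)
    b -= lr * np.mean(p - y)

print(f"estimated w={w:.2f}, b={b:.2f} (true values were 2.0 and -0.5)")

# Turning probabilities into classes requires a threshold; 0.5 is only a default.
for threshold in (0.5, 0.3):
    pred = (sigmoid(w * x + b) >= threshold).astype(float)
    print(f"threshold={threshold}: predicted positive rate={pred.mean():.2f}")
```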
191 |
192 |
193 | ## Model Validation using Resampling
194 |
195 | - [Resampling Explained](https://en.wikipedia.org/wiki/Resampling_(statistics))
196 | - [Partitioning data set in R](http://stackoverflow.com/questions/13536537/partitioning-data-set-in-r-based-on-multiple-classes-of-observations)
197 | - [Implementing hold-out Validation in R](http://stackoverflow.com/questions/22972854/how-to-implement-a-hold-out-validation-in-r), [2](http://www.gettinggeneticsdone.com/2011/02/split-data-frame-into-testing-and.html)
198 |
199 |
200 | - [Cross Validation](https://en.wikipedia.org/wiki/Cross-validation_(statistics))
201 | - [Training with Full dataset after CV?](http://stats.stackexchange.com/questions/11602/training-with-the-full-dataset-after-cross-validation)
202 | - [Which CV method is best?](http://stats.stackexchange.com/questions/103459/how-do-i-know-which-method-of-cross-validation-is-best)
203 | - [Variance Estimates in k-fold CV](http://stats.stackexchange.com/questions/31190/variance-estimates-in-k-fold-cross-validation)
204 | - [Is CV a substitute for Validation Set?](http://stats.stackexchange.com/questions/18856/is-cross-validation-a-proper-substitute-for-validation-set)
205 | - [Choice of k in k-fold CV](http://stats.stackexchange.com/questions/27730/choice-of-k-in-k-fold-cross-validation)
206 | - [CV for ensemble learning](http://stats.stackexchange.com/questions/102631/k-fold-cross-validation-of-ensemble-learning)
207 | - [k-fold CV in R](http://stackoverflow.com/questions/22909197/creating-folds-for-k-fold-cv-in-r-using-caret)
208 | - [Good Resources](http://www.chioka.in/tag/cross-validation/)
209 | - Overfitting and Cross Validation
210 | - [Preventing Overfitting the Cross Validation Data | Andrew Ng](http://ai.stanford.edu/~ang/papers/cv-final.pdf)
211 | - [Over-fitting in Model Selection and Subsequent Selection Bias in
212 | Performance Evaluation](http://www.jmlr.org/papers/volume11/cawley10a/cawley10a.pdf)
213 | - [CV for detecting and preventing Overfitting](http://www.autonlab.org/tutorials/overfit10.pdf)
214 | - [How does CV overcome the Overfitting Problem](http://stats.stackexchange.com/questions/9053/how-does-cross-validation-overcome-the-overfitting-problem)
215 |
216 |
217 |
218 |
219 | - [Bootstrapping](https://en.wikipedia.org/wiki/Bootstrapping_(statistics))
220 | - [Why Bootstrapping Works](http://stats.stackexchange.com/questions/26088/explaining-to-laypeople-why-bootstrapping-works) (see the bootstrap sketch at the end of this section)
221 | - [Good Animation](https://www.stat.auckland.ac.nz/~wild/BootAnim/)
222 | - [Example of Bootstrapping](http://statistics.about.com/od/Applications/a/Example-Of-Bootstrapping.htm)
223 | - [Understanding Bootstrapping for Validation and Model Selection](http://stats.stackexchange.com/questions/14516/understanding-bootstrapping-for-validation-and-model-selection?rq=1)
224 | - [Cross Validation vs Bootstrap to estimate prediction error](http://stats.stackexchange.com/questions/18348/differences-between-cross-validation-and-bootstrapping-to-estimate-the-predictio), [Cross-validation vs .632 bootstrapping to evaluate classification performance](http://stats.stackexchange.com/questions/71184/cross-validation-or-bootstrapping-to-evaluate-classification-performance)
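
To make the bootstrap links above concrete, here is a numpy-only sketch that estimates the standard error of a sample median, a statistic without a simple closed-form SE, by resampling the observed data with replacement. The log-normal sample and the 5,000 resamples are arbitrary illustration choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# A single observed sample from some skewed process (log-normal, chosen arbitrarily).
sample = rng.lognormal(mean=0.0, sigma=1.0, size=200)

def bootstrap_se(data, statistic, n_resamples=5000):
    """Standard error of `statistic`, estimated by resampling `data` with replacement."""
    n = len(data)
    stats = np.empty(n_resamples)
    for i in range(n_resamples):
        resample = rng.choice(data, size=n, replace=True)
        stats[i] = statistic(resample)
    return stats.std(ddof=1)

print("observed median:", np.median(sample))
print("bootstrap SE of the median:", bootstrap_se(sample, np.median))
```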
225 |
226 |
227 |
228 | ## Deep Learning
229 | - [A curated list of awesome Deep Learning tutorials, projects and communities](https://github.com/ChristosChristofidis/awesome-deep-learning)
230 | - [Lots of Deep Learning Resources](http://deeplearning4j.org/documentation.html)
231 | - [Interesting Deep Learning and NLP Projects (Stanford)](http://cs224d.stanford.edu/reports.html), [Website](http://cs224d.stanford.edu/)
232 | - [Core Concepts of Deep Learning](http://devblogs.nvidia.com/parallelforall/deep-learning-nutshell-core-concepts/)
233 | - [Understanding Natural Language with Deep Neural Networks Using Torch](http://devblogs.nvidia.com/parallelforall/understanding-natural-language-deep-neural-networks-using-torch/)
234 | - [Stanford Deep Learning Tutorial](http://ufldl.stanford.edu/tutorial/)
235 | - [Deep Learning FAQs on Quora](https://www.quora.com/topic/Deep-Learning/faq)
236 | - [Google+ Deep Learning Page](https://plus.google.com/communities/112866381580457264725)
237 | - [Recent Reddit AMAs related to Deep Learning](http://deeplearning.net/2014/11/22/recent-reddit-amas-about-deep-learning/), [Another AMA](https://www.reddit.com/r/IAmA/comments/3mdk9v/we_are_google_researchers_working_on_deep/)
238 | - [Where to Learn Deep Learning?](http://www.kdnuggets.com/2014/05/learn-deep-learning-courses-tutorials-overviews.html)
239 | - [Deep Learning nvidia concepts](http://devblogs.nvidia.com/parallelforall/deep-learning-nutshell-core-concepts/)
240 | - [Introduction to Deep Learning Using Python (GitHub)](https://github.com/rouseguy/intro2deeplearning), [Good Introduction Slides](https://speakerdeck.com/bargava/introduction-to-deep-learning)
241 | - [Video Lectures Oxford 2015](https://www.youtube.com/playlist?list=PLE6Wd9FR--EfW8dtjAuPoTuPcqmOV53Fu), [Video Lectures Summer School Montreal](http://videolectures.net/deeplearning2015_montreal/)
242 | - [Deep Learning Software List](http://deeplearning.net/software_links/)
243 | - [Hacker's guide to Neural Nets](http://karpathy.github.io/neuralnets/)
244 | - [Top arxiv Deep Learning Papers explained](http://www.kdnuggets.com/2015/10/top-arxiv-deep-learning-papers-explained.html)
245 | - [Geoff Hinton YouTube Videos on Deep Learning](https://www.youtube.com/watch?v=IcOMKXAw5VA)
246 | - [Awesome Deep Learning Reading List](http://deeplearning.net/reading-list/)
247 | - [Deep Learning Comprehensive Website](http://deeplearning.net/), [Software](http://deeplearning.net/software_links/)
248 | - [deeplearning4j Tutorials](http://deeplearning4j.org/)
249 | - [AWESOME! Deep Learning Tutorial](http://www.toptal.com/machine-learning/an-introduction-to-deep-learning-from-perceptrons-to-deep-networks)
250 | - [Deep Learning Basics](http://alexminnaar.com/deep-learning-basics-neural-networks-backpropagation-and-stochastic-gradient-descent.html)
251 | - [Stanford Tutorials](http://ufldl.stanford.edu/tutorial/supervised/MultiLayerNeuralNetworks/)
252 | - [Train, Validation & Test in Artificial Neural Networks](http://stackoverflow.com/questions/2976452/whats-is-the-difference-between-train-validation-and-test-set-in-neural-networ)
253 | - [Artificial Neural Networks Tutorials](http://stackoverflow.com/questions/478947/what-are-some-good-resources-for-learning-about-artificial-neural-networks)
254 | - [Neural Networks FAQs on Stack Overflow](http://stackoverflow.com/questions/tagged/neural-network?sort=votes&pageSize=50)
255 | - [Deep Learning Tutorials on deeplearning.net](http://deeplearning.net/tutorial/index.html)
256 |
257 | - Neural Machine Translation
258 | - [Introduction to Neural Machine Translation with GPUs (part 1)](http://devblogs.nvidia.com/parallelforall/introduction-neural-machine-translation-with-gpus/), [Part 2](http://devblogs.nvidia.com/parallelforall/introduction-neural-machine-translation-gpus-part-2/), [Part 3](http://devblogs.nvidia.com/parallelforall/introduction-neural-machine-translation-gpus-part-3/)
259 | - [Deep Speech: Accurate Speech Recognition with GPU-Accelerated Deep Learning](http://devblogs.nvidia.com/parallelforall/deep-speech-accurate-speech-recognition-gpu-accelerated-deep-learning/)
260 |
261 |
262 | - Deep Learning Frameworks
263 | - [Torch vs. Theano](http://fastml.com/torch-vs-theano/)
264 | - [dl4j vs. torch7 vs. theano](http://deeplearning4j.org/compare-dl4j-torch7-pylearn.html)
265 | - [Deep Learning Libraries by Language](http://www.teglor.com/b/deep-learning-libraries-language-cm569/)
266 |
267 | - [Theano](https://en.wikipedia.org/wiki/Theano_(software))
268 | - [Website](http://deeplearning.net/software/theano/)
269 | - [Theano Introduction](http://www.wildml.com/2015/09/speeding-up-your-neural-network-with-theano-and-the-gpu/)
270 | - [Theano Tutorial](http://outlace.com/Beginner-Tutorial-Theano/)
271 | - [Good Theano Tutorial](http://deeplearning.net/software/theano/tutorial/)
272 | - [Logistic Regression using Theano for classifying digits](http://deeplearning.net/tutorial/logreg.html#logreg)
273 | - [MLP using Theano](http://deeplearning.net/tutorial/mlp.html#mlp)
274 | - [CNN using Theano](http://deeplearning.net/tutorial/lenet.html#lenet)
275 | - [RNNs using Theano](http://deeplearning.net/tutorial/rnnslu.html#rnnslu)
276 | - [LSTM for Sentiment Analysis in Theano](http://deeplearning.net/tutorial/lstm.html#lstm)
277 | - [RBM using Theano](http://deeplearning.net/tutorial/rbm.html#rbm)
278 | - [DBNs using Theano](http://deeplearning.net/tutorial/DBN.html#dbn)
279 | - [All Codes](https://github.com/lisa-lab/DeepLearningTutorials)
280 |
281 | - [Torch](http://torch.ch/)
282 | - [Torch ML Tutorial](http://code.madbits.com/wiki/doku.php), [Code](https://github.com/torch/tutorials)
283 | - [Intro to Torch](http://ml.informatik.uni-freiburg.de/_media/teaching/ws1415/presentation_dl_lect3.pdf)
284 | - [Learning Torch GitHub Repo](https://github.com/chetannaik/learning_torch)
285 | - [Awesome-Torch (Repository on GitHub)](https://github.com/carpedm20/awesome-torch)
286 | - [Machine Learning using Torch Oxford Univ](https://www.cs.ox.ac.uk/people/nando.defreitas/machinelearning/), [Code](https://github.com/oxford-cs-ml-2015)
287 | - [Torch Internals Overview](https://apaszke.github.io/torch-internals.html)
288 | - [Torch Cheatsheet](https://github.com/torch/torch7/wiki/Cheatsheet)
289 | - [Understanding Natural Language with Deep Neural Networks Using Torch](http://devblogs.nvidia.com/parallelforall/understanding-natural-language-deep-neural-networks-using-torch/)
290 |
291 | - Caffe
292 | - [Deep Learning for Computer Vision with Caffe and cuDNN](http://devblogs.nvidia.com/parallelforall/deep-learning-computer-vision-caffe-cudnn/)
293 |
294 | - TensorFlow
295 | - [Website](http://tensorflow.org/)
296 | - [TensorFlow Examples for Beginners](https://github.com/aymericdamien/TensorFlow-Examples)
297 | - [Learning TensorFlow GitHub Repo](https://github.com/chetannaik/learning_tensorflow)
298 | - [Benchmark TensorFlow GitHub](https://github.com/soumith/convnet-benchmarks/issues/66)
299 |
300 |
301 |
302 | - Feed Forward Networks
303 | - [Implementing a Neural Network from scratch](http://www.wildml.com/2015/09/implementing-a-neural-network-from-scratch/), [Code](https://github.com/dennybritz/nn-from-scratch)
304 | - [Speeding up your Neural Network with Theano and the GPU](http://www.wildml.com/2015/09/speeding-up-your-neural-network-with-theano-and-the-gpu/), [Code](https://github.com/dennybritz/nn-theano)
305 | - [Basic ANN Theory](https://takinginitiative.wordpress.com/2008/04/03/basic-neural-network-tutorial-theory/)
306 | - [Role of Bias in Neural Networks](http://stackoverflow.com/questions/2480650/role-of-bias-in-neural-networks)
307 | - [Choosing number of hidden layers and nodes](http://stackoverflow.com/questions/3345079/estimating-the-number-of-neurons-and-number-of-layers-of-an-artificial-neural-ne), [2](http://stackoverflow.com/questions/10565868/multi-layer-perceptron-mlp-architecture-criteria-for-choosing-number-of-hidde?lq=1), [3](http://stackoverflow.com/questions/9436209/how-to-choose-number-of-hidden-layers-and-nodes-in-neural-network/2#)
308 | - [Backpropagation Explained](http://home.agh.edu.pl/~vlsi/AI/backp_t_en/backprop.html)
309 | - [ANN implemented in C++ | AI Junkie](http://www.ai-junkie.com/ann/evolved/nnt6.html)
310 | - [Simple Implementation](http://stackoverflow.com/questions/15395835/simple-multi-layer-neural-network-implementation)
311 | - [NN for Beginners](http://www.codeproject.com/Articles/16419/AI-Neural-Network-for-beginners-Part-of)
312 | - [Regression and Classification with NNs (Slides)](http://www.autonlab.org/tutorials/neural13.pdf)
313 | - [Another Intro](http://www.doc.ic.ac.uk/~nd/surprise_96/journal/vol4/cs11/report.html)
314 |
315 |
316 | - Recurrent and LSTM Networks
317 | - [awesome-rnn: list of resources (GitHub Repo)](https://github.com/kjw0612/awesome-rnn)
318 | - [Recurrent Neural Net Tutorial Part 1](http://www.wildml.com/2015/09/recurrent-neural-networks-tutorial-part-1-introduction-to-rnns/), [Part 2](http://www.wildml.com/2015/09/recurrent-neural-networks-tutorial-part-2-implementing-a-language-model-rnn-with-python-numpy-and-theano/), [Part 3](http://www.wildml.com/2015/10/recurrent-neural-networks-tutorial-part-3-backpropagation-through-time-and-vanishing-gradients/), [Code](https://github.com/dennybritz/rnn-tutorial-rnnlm/)
319 | - [NLP RNN Representations](http://colah.github.io/posts/2014-07-NLP-RNNs-Representations/)
320 | - [The Unreasonable effectiveness of RNNs](http://karpathy.github.io/2015/05/21/rnn-effectiveness/), [Torch Code](https://github.com/karpathy/char-rnn), [Python Code](https://gist.github.com/karpathy/d4dee566867f8291f086)
321 | - [Intro to RNN](http://deeplearning4j.org/recurrentnetwork.html), [LSTM](http://deeplearning4j.org/lstm.html)
322 | - [An application of RNN](http://hackaday.com/2015/10/15/73-computer-scientists-created-a-neural-net-and-you-wont-believe-what-happened-next/)
323 | - [Optimizing RNN Performance](http://svail.github.io/)
324 | - [Simple RNN](http://outlace.com/Simple-Recurrent-Neural-Network/)
325 | - [Auto-Generating Clickbait with RNN](http://larseidnes.com/2015/10/13/auto-generating-clickbait-with-recurrent-neural-networks/)
326 | - [Sequence Learning using RNN (Slides)](http://www.slideshare.net/indicods/general-sequence-learning-with-recurrent-neural-networks-for-next-ml)
327 | - [Machine Translation using RNN (Paper)](http://emnlp2014.org/papers/pdf/EMNLP2014179.pdf)
328 | - [Music generation using RNNs (Keras)](https://github.com/MattVitelli/GRUV)
329 | - [Using RNN to create on-the-fly dialogue (Keras)](http://neuralniche.com/post/tutorial/)
330 | - Long Short Term Memory (LSTM)
331 | - [Understanding LSTM Networks](http://colah.github.io/posts/2015-08-Understanding-LSTMs/)
332 | - [LSTM explained](https://apaszke.github.io/lstm-explained.html)
333 | - [LSTM](http://deeplearning4j.org/lstm.html)
334 | - [Implementing LSTM from scratch](http://www.wildml.com/2015/10/recurrent-neural-network-tutorial-part-4-implementing-a-grulstm-rnn-with-python-and-theano/), [Python/Theano code](https://github.com/dennybritz/rnn-tutorial-gru-lstm)
335 | - [Torch Code](https://github.com/karpathy/char-rnn), [Torch](https://github.com/apaszke/kaggle-grasp-and-lift)
336 | - [LSTM for Sentiment Analysis in Theano](http://deeplearning.net/tutorial/lstm.html#lstm)
337 | - [Deep Learning for Visual Q&A | LSTM | CNN](http://avisingh599.github.io/deeplearning/visual-qa/), [Code](https://github.com/avisingh599/visual-qa)
338 | - [Computer Responds to email | Google](http://googleresearch.blogspot.in/2015/11/computer-respond-to-this-email.html)
339 | - [LSTM dramatically improves Google Voice Search](http://googleresearch.blogspot.ch/2015/09/google-voice-search-faster-and-more.html), [2](http://deeplearning.net/2015/09/30/long-short-term-memory-dramatically-improves-google-voice-etc-now-available-to-a-billion-users/)
340 | - [Understanding Natural Language with Deep Neural Networks Using Torch](http://devblogs.nvidia.com/parallelforall/understanding-natural-language-deep-neural-networks-using-torch/)
341 | - Gated Recurrent Units (GRU)
342 | - [LSTM vs GRU](http://www.wildml.com/2015/10/recurrent-neural-network-tutorial-part-4-implementing-a-grulstm-rnn-with-python-and-theano/)
343 |
344 |
345 | - [Recursive Neural Network (not Recurrent)](https://en.wikipedia.org/wiki/Recursive_neural_network)
346 | - [Recursive Neural Tensor Network (RNTN)](http://deeplearning4j.org/recursiveneuraltensornetwork.html)
347 | - [word2vec, DBN, RNTN for Sentiment Analysis ](http://deeplearning4j.org/zh-sentiment_analysis_word2vec.html)
348 |
349 |
350 | - Restricted Boltzmann Machine
351 | - [Beginner's Guide about RBMs](http://deeplearning4j.org/restrictedboltzmannmachine.html)
352 | - [Another Good Tutorial](http://deeplearning.net/tutorial/rbm.html)
353 | - [Introduction to RBMs](http://blog.echen.me/2011/07/18/introduction-to-restricted-boltzmann-machines/)
354 | - [Hinton's Guide to Training RBMs](https://www.cs.toronto.edu/~hinton/absps/guideTR.pdf)
355 | - [RBMs in R](https://github.com/zachmayer/rbm)
356 | - [Deep Belief Networks Tutorial](http://deeplearning4j.org/deepbeliefnetwork.html)
357 | - [word2vec, DBN, RNTN for Sentiment Analysis ](http://deeplearning4j.org/zh-sentiment_analysis_word2vec.html)
358 |
359 |
360 | - Autoencoders: Unsupervised (applies BackProp after setting target = input; a minimal numpy sketch appears at the end of this section)
361 | - [Andrew Ng Sparse Autoencoders pdf](https://web.stanford.edu/class/cs294a/sparseAutoencoder.pdf)
362 | - [Deep Autoencoders Tutorial](http://deeplearning4j.org/deepautoencoder.html)
363 | - [Denoising Autoencoders](http://deeplearning.net/tutorial/dA.html), [Theano Code](http://deeplearning.net/tutorial/code/dA.py)
364 | - [Stacked Denoising Autoencoders](http://deeplearning.net/tutorial/SdA.html#sda)
365 |
366 |
367 |
368 | - Convolutional Networks
369 | - [Awesome Deep Vision: List of Resources (GitHub)](https://github.com/kjw0612/awesome-deep-vision)
370 | - [Intro to CNNs](http://deeplearning4j.org/convolutionalnets.html)
371 | - [Understanding CNN for NLP](http://www.wildml.com/2015/11/understanding-convolutional-neural-networks-for-nlp/)
372 | - [Stanford Notes](http://vision.stanford.edu/teaching/cs231n/), [Codes](http://cs231n.github.io/), [GitHub](https://github.com/cs231n/cs231n.github.io)
373 | - [JavaScript Library (Browser Based) for CNNs](http://cs.stanford.edu/people/karpathy/convnetjs/)
374 | - [Using CNNs to detect facial keypoints](http://danielnouri.org/notes/2014/12/17/using-convolutional-neural-nets-to-detect-facial-keypoints-tutorial/)
375 | - [Deep learning to classify business photos at Yelp](http://engineeringblog.yelp.com/2015/10/how-we-use-deep-learning-to-classify-business-photos-at-yelp.html)
376 | - [Interview with Yann LeCun | Kaggle](http://blog.kaggle.com/2014/12/22/convolutional-nets-and-cifar-10-an-interview-with-yan-lecun/)
377 | - [Visualising and Understanding CNNs](https://www.cs.nyu.edu/~fergus/papers/zeilerECCV2014.pdf)
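
As flagged in the Autoencoders entry above, the whole trick is ordinary backpropagation with the target set equal to the input. Below is a minimal numpy-only sketch: toy data lying near a 2-D linear subspace, one tanh hidden layer, and hand-derived gradients. The layer sizes, learning rate and epoch count are arbitrary, and the point is only to show the mechanism, not to compete with the Theano/Torch implementations linked above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 points in 10 dimensions that really live near a 2-D linear subspace.
latent = rng.normal(size=(200, 2))
mixing = rng.normal(size=(2, 10))
X = latent @ mixing + 0.01 * rng.normal(size=(200, 10))

n_in, n_hidden, lr = X.shape[1], 2, 0.1
W1 = rng.normal(scale=0.1, size=(n_in, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.1, size=(n_hidden, n_in)); b2 = np.zeros(n_in)

for epoch in range(3001):
    # Forward pass: encode, then decode; the target is the input itself.
    H = np.tanh(X @ W1 + b1)          # hidden code
    X_hat = H @ W2 + b2               # linear reconstruction
    err = X_hat - X                   # "target = input" is the whole trick
    loss = np.mean(err ** 2)

    # Backward pass: plain backprop on the mean squared reconstruction error.
    grad_out = 2 * err / X.size
    grad_W2 = H.T @ grad_out
    grad_b2 = grad_out.sum(axis=0)
    grad_H = grad_out @ W2.T
    grad_pre = grad_H * (1 - H ** 2)  # derivative of tanh
    grad_W1 = X.T @ grad_pre
    grad_b1 = grad_pre.sum(axis=0)

    for param, grad in ((W1, grad_W1), (b1, grad_b1), (W2, grad_W2), (b2, grad_b2)):
        param -= lr * grad            # in-place gradient step

    if epoch % 500 == 0:
        print(f"epoch {epoch:4d}  reconstruction MSE = {loss:.5f}")
```

The denoising variant linked above keeps the clean input as the target but corrupts the copy fed to the encoder; stacking several such layers gives the stacked denoising autoencoder.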
378 |
379 |
380 |
381 | ## Natural Language Processing
382 | - [A curated list of speech and natural language processing resources](https://github.com/edobashira/speech-language-processing)
383 | - [Understanding Natural Language with Deep Neural Networks Using Torch](http://devblogs.nvidia.com/parallelforall/understanding-natural-language-deep-neural-networks-using-torch/)
384 | - [tf-idf explained](http://michaelerasm.us/tf-idf-in-10-minutes/)
385 | - [Interesting Deep Learning NLP Projects Stanford](http://cs224d.stanford.edu/reports.html), [Website](http://cs224d.stanford.edu/)
386 | - [NLP from Scratch | Google Paper](https://static.googleusercontent.com/media/research.google.com/en/us/pubs/archive/35671.pdf)
387 | - [Graph Based Semi Supervised Learning for NLP](http://graph-ssl.wdfiles.com/local--files/blog%3A_start/graph_ssl_acl12_tutorial_slides_final.pdf)
388 | - [Bag of Words](https://en.wikipedia.org/wiki/Bag-of-words_model)
389 | - [Classifying text with Bag of Words](http://fastml.com/classifying-text-with-bag-of-words-a-tutorial/)
390 |
391 | - [Topic Modeling](https://en.wikipedia.org/wiki/Topic_model)
392 | - [LDA](https://en.wikipedia.org/wiki/Latent_Dirichlet_allocation), [LSA](https://en.wikipedia.org/wiki/Latent_semantic_analysis), [Probabilistic LSA](https://en.wikipedia.org/wiki/Probabilistic_latent_semantic_analysis)
393 | - [Awesome LDA Explanation!](http://blog.echen.me/2011/08/22/introduction-to-latent-dirichlet-allocation/), [Another good explanation](http://confusedlanguagetech.blogspot.in/2012/07/jordan-boyd-graber-and-philip-resnik.html)
394 | - [The LDA Buffet- Intuitive Explanation](http://www.matthewjockers.net/2011/09/29/the-lda-buffet-is-now-open-or-latent-dirichlet-allocation-for-english-majors/)
395 | - [Difference between LSI and LDA](https://www.quora.com/Whats-the-difference-between-Latent-Semantic-Indexing-LSI-and-Latent-Dirichlet-Allocation-LDA)
396 | - [Original LDA Paper](https://www.cs.princeton.edu/~blei/papers/BleiNgJordan2003.pdf)
397 | - [alpha and beta in LDA](http://datascience.stackexchange.com/questions/199/what-does-the-alpha-and-beta-hyperparameters-contribute-to-in-latent-dirichlet-a)
398 | - [Intuitive explanation of the Dirichlet distribution](https://www.quora.com/What-is-an-intuitive-explanation-of-the-Dirichlet-distribution)
399 | - [Topic modeling made just simple enough](http://tedunderwood.com/2012/04/07/topic-modeling-made-just-simple-enough/)
400 | - [Online LDA](http://alexminnaar.com/online-latent-dirichlet-allocation-the-best-option-for-topic-modeling-with-large-data-sets.html), [Online LDA with Spark](http://alexminnaar.com/distributed-online-latent-dirichlet-allocation-with-apache-spark.html)
401 | - [LDA in Scala](http://alexminnaar.com/latent-dirichlet-allocation-in-scala-part-i-the-theory.html), [Part 2](http://alexminnaar.com/latent-dirichlet-allocation-in-scala-part-ii-the-code.html)
402 | - [Segmentation of Twitter Timelines via Topic Modeling](http://alexperrier.github.io/jekyll/update/2015/09/16/segmentation_twitter_timelines_lda_vs_lsa.html)
403 | - [Topic Modeling of Twitter Followers](http://alexperrier.github.io/jekyll/update/2015/09/04/topic-modeling-of-twitter-followers.html)
404 |
405 |
406 | - word2vec
407 | - [Google word2vec](https://code.google.com/p/word2vec/)
408 | - [Bag of Words Model Wiki](https://en.wikipedia.org/wiki/Bag-of-words_model)
409 | - [A closer look at Skip Gram Modeling](http://homepages.inf.ed.ac.uk/ballison/pdf/lrec_skipgrams.pdf)
410 | - [Skip Gram Model Tutorial](http://alexminnaar.com/word2vec-tutorial-part-i-the-skip-gram-model.html), [CBoW Model](http://alexminnaar.com/word2vec-tutorial-part-ii-the-continuous-bag-of-words-model.html)
411 | - [Word Vectors Kaggle Tutorial Python](https://www.kaggle.com/c/word2vec-nlp-tutorial/details/part-2-word-vectors), [Part 2](https://www.kaggle.com/c/word2vec-nlp-tutorial/details/part-3-more-fun-with-word-vectors)
412 | - [Making sense of word2vec](http://rare-technologies.com/making-sense-of-word2vec/)
413 | - [word2vec explained on deeplearning4j](http://deeplearning4j.org/word2vec.html)
414 | - [Quora word2vec](https://www.quora.com/How-does-word2vec-work)
415 | - [Other Quora Resources](https://www.quora.com/What-are-the-continuous-bag-of-words-and-skip-gram-architectures-in-laymans-terms), [2](https://www.quora.com/What-is-the-difference-between-the-Bag-of-Words-model-and-the-Continuous-Bag-of-Words-model), [3](https://www.quora.com/Is-skip-gram-negative-sampling-better-than-CBOW-NS-for-word2vec-If-so-why)
416 | - [word2vec, DBN, RNTN for Sentiment Analysis ](http://deeplearning4j.org/zh-sentiment_analysis_word2vec.html)
417 |
418 | - Text Clustering
419 | - [How string clustering works](http://stackoverflow.com/questions/8196371/how-clustering-works-especially-string-clustering)
420 | - [Levenshtein distance for measuring the difference between two sequences](https://en.wikipedia.org/wiki/Levenshtein_distance) (see the sketch at the end of this section)
421 | - [Text clustering with Levenshtein distances](http://stackoverflow.com/questions/21511801/text-clustering-with-levenshtein-distances)
422 |
423 | - Text Classification
424 | - [Classifying Text with Bag of Words](http://fastml.com/classifying-text-with-bag-of-words-a-tutorial/)
425 |
426 | - [Language learning with NLP and reinforcement learning](http://blog.dennybritz.com/2015/09/11/reimagining-language-learning-with-nlp-and-reinforcement-learning/)
427 | - [Kaggle Tutorial Bag of Words and Word vectors](https://www.kaggle.com/c/word2vec-nlp-tutorial/details/part-1-for-beginners-bag-of-words), [Part 2](https://www.kaggle.com/c/word2vec-nlp-tutorial/details/part-2-word-vectors), [Part 3](https://www.kaggle.com/c/word2vec-nlp-tutorial/details/part-3-more-fun-with-word-vectors)
428 | - [What would Shakespeare say (NLP Tutorial)](https://gigadom.wordpress.com/2015/10/02/natural-language-processing-what-would-shakespeare-say/)
429 | - [A closer look at Skip Gram Modeling](http://homepages.inf.ed.ac.uk/ballison/pdf/lrec_skipgrams.pdf)
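
Since the text-clustering entries above lean on Levenshtein (edit) distance, here is a compact, dependency-free implementation of the classic dynamic-programming recurrence; the example strings are arbitrary.

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character insertions, deletions and substitutions
    needed to turn string `a` into string `b` (classic dynamic programming)."""
    if len(a) < len(b):                # keep the shorter string in the inner loop
        a, b = b, a
    previous = list(range(len(b) + 1)) # distances from the empty prefix of `a`
    for i, ca in enumerate(a, start=1):
        current = [i]
        for j, cb in enumerate(b, start=1):
            current.append(min(
                previous[j] + 1,              # deletion
                current[j - 1] + 1,           # insertion
                previous[j - 1] + (ca != cb), # substitution (free if characters match)
            ))
        previous = current
    return previous[-1]

# Tiny sanity checks on arbitrary example strings.
assert levenshtein("kitten", "sitting") == 3
assert levenshtein("machine", "machine") == 0
print(levenshtein("clustering", "clustered"))
```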
430 |
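For quick orientation, here is a small illustrative sketch (not from the links above) of the string-clustering idea discussed under Text Clustering: base R's `adist()` computes pairwise Levenshtein distances, and hierarchical clustering groups similar strings. The toy word list is invented.

```r
# Illustrative only: cluster a handful of strings by edit distance
words <- c("kitten", "sitting", "mitten", "cluster", "clusters", "clustering")
d <- adist(words)                        # pairwise Levenshtein (edit) distances
rownames(d) <- words
hc <- hclust(as.dist(d), method = "average")
plot(hc)                                 # dendrogram of string similarity
cutree(hc, k = 2)                        # splits into a "kitten-like" and a "cluster-like" group
```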
431 |
432 | ##Computer Vision
433 | - [Awesome computer vision (github)](https://github.com/jbhuang0604/awesome-computer-vision)
434 | - [Awesome deep vision (github)](https://github.com/kjw0612/awesome-deep-vision)
435 |
436 |
437 |
438 | ##Support Vector Machine
439 | - [Highest Voted Questions about SVMs on Cross Validated](http://stats.stackexchange.com/questions/tagged/svm)
440 | - [Help me Understand SVMs!](http://stats.stackexchange.com/questions/3947/help-me-understand-support-vector-machines)
441 | - [SVM in Layman's terms](https://www.quora.com/What-does-support-vector-machine-SVM-mean-in-laymans-terms)
442 | - [How does SVM Work | Comparisons](http://stats.stackexchange.com/questions/23391/how-does-a-support-vector-machine-svm-work)
443 | - [A tutorial on SVMs](http://alex.smola.org/papers/2003/SmoSch03b.pdf)
444 | - [Practical Guide to SVC](http://www.csie.ntu.edu.tw/~cjlin/papers/guide/guide.pdf), [Slides](http://www.csie.ntu.edu.tw/~cjlin/talks/freiburg.pdf)
445 | - [Introductory Overview of SVMs](http://www.statsoft.com/Textbook/Support-Vector-Machines)
446 | - Comparisons
447 | - [SVMs > ANNs](http://stackoverflow.com/questions/6699222/support-vector-machines-better-than-artificial-neural-networks-in-which-learn?rq=1), [ANNs > SVMs](http://stackoverflow.com/questions/11632516/what-are-advantages-of-artificial-neural-networks-over-support-vector-machines), [Another Comparison](http://www.svms.org/anns.html)
448 | - [Trees > SVMs](http://stats.stackexchange.com/questions/57438/why-is-svm-not-so-good-as-decision-tree-on-the-same-data)
449 | - [Kernel Logistic Regression vs SVM](http://stats.stackexchange.com/questions/43996/kernel-logistic-regression-vs-svm)
450 | - [Logistic Regression vs SVM](http://stats.stackexchange.com/questions/58684/regularized-logistic-regression-and-support-vector-machine), [2](http://stats.stackexchange.com/questions/95340/svm-v-s-logistic-regression), [3](https://www.quora.com/Support-Vector-Machines/What-is-the-difference-between-Linear-SVMs-and-Logistic-Regression)
451 | - [Optimization Algorithms in Support Vector Machines](http://pages.cs.wisc.edu/~swright/talks/sjw-complearning.pdf)
452 | - [Variable Importance from SVM](http://stats.stackexchange.com/questions/2179/variable-importance-from-svm)
453 | - Software
454 | - [LIBSVM](https://www.csie.ntu.edu.tw/~cjlin/libsvm/)
455 | - [Intro to SVM in R](http://cbio.ensmp.fr/~jvert/svn/tutorials/practical/svmbasic/svmbasic_notes.pdf)
456 | - Kernels
457 | - [What are Kernels in ML and SVM?](https://www.quora.com/What-are-Kernels-in-Machine-Learning-and-SVM)
458 | - [Intuition Behind Gaussian Kernel in SVMs?](https://www.quora.com/Support-Vector-Machines/What-is-the-intuition-behind-Gaussian-kernel-in-SVM)
459 | - Probabilities post SVM
460 | - [Platt's Probabilistic Outputs for SVM](http://www.csie.ntu.edu.tw/~htlin/paper/doc/plattprob.pdf)
461 | - [Platt Calibration Wiki](https://en.wikipedia.org/wiki/Platt_scaling)
462 | - [Why use Platt's Scaling?](http://stats.stackexchange.com/questions/5196/why-use-platts-scaling)
463 | - [Classifier Calibration with Platt's Scaling](http://fastml.com/classifier-calibration-with-platts-scaling-and-isotonic-regression/)
464 |
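For readers who want to try the ideas above directly, here is a minimal illustrative sketch of an RBF-kernel SVM in R using the `e1071` interface to LIBSVM; the package choice and the `cost`/`gamma` values are arbitrary assumptions, not recommendations. `probability = TRUE` requests Platt-style probability estimates.

```r
library(e1071)   # R interface to LIBSVM

set.seed(1)
idx   <- sample(nrow(iris), 100)
train <- iris[idx, ];  test <- iris[-idx, ]

# illustrative hyperparameters; in practice tune cost/gamma by cross-validation
fit  <- svm(Species ~ ., data = train, kernel = "radial",
            cost = 1, gamma = 0.25, probability = TRUE)
pred <- predict(fit, test, probability = TRUE)

table(pred, test$Species)          # confusion matrix on the held-out rows
head(attr(pred, "probabilities"))  # Platt-scaled class probabilities
```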
465 |
466 |
467 | ##Reinforcement Learning
468 | - [Awesome Reinforcement Learning (GitHub)](https://github.com/aikorea/awesome-rl)
469 | - [RL Tutorial Part 1](http://outlace.com/Reinforcement-Learning-Part-1/), [Part 2](http://outlace.com/Reinforcement-Learning-Part-2/)
470 |
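As a tiny, self-contained taste of the reinforcement-learning setting covered in the tutorials above, the sketch below (all values invented for illustration) runs an epsilon-greedy agent on a 3-armed Bernoulli bandit and learns action-value estimates from reward feedback.

```r
set.seed(123)
true_means <- c(0.2, 0.5, 0.8)   # unknown to the agent (illustrative values)
Q <- rep(0, 3)                   # estimated value of each arm
N <- rep(0, 3)                   # number of pulls per arm
eps <- 0.1                       # exploration rate

for (t in 1:2000) {
  arm    <- if (runif(1) < eps) sample(3, 1) else which.max(Q)  # explore vs exploit
  reward <- rbinom(1, 1, true_means[arm])
  N[arm] <- N[arm] + 1
  Q[arm] <- Q[arm] + (reward - Q[arm]) / N[arm]                 # incremental mean update
}
round(Q, 2)   # estimates should approach true_means, with most pulls on arm 3
```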
471 |
472 | ##Decision Trees
473 | - [Wikipedia Page - Lots of Good Info](https://en.wikipedia.org/wiki/Decision_tree_learning)
474 | - [FAQs about Decision Trees](http://stats.stackexchange.com/questions/tagged/cart)
475 | - [Brief Tour of Trees and Forests](http://statistical-research.com/a-brief-tour-of-the-trees-and-forests/)
476 | - [Tree Based Models in R](http://www.statmethods.net/advstats/cart.html)
477 | - [How do Decision Trees work?](http://www.aihorizon.com/essays/generalai/decision_trees.htm)
478 | - [Weak side of Decision Trees](http://stats.stackexchange.com/questions/1292/what-is-the-weak-side-of-decision-trees)
479 | - [Thorough Explanation and different algorithms](http://www.ise.bgu.ac.il/faculty/liorr/hbchap9.pdf)
480 | - [What is entropy and information gain in the context of building decision trees?](http://stackoverflow.com/questions/1859554/what-is-entropy-and-information-gain)
481 | - [Slides Related to Decision Trees](http://www.slideshare.net/pierluca.lanzi/machine-learning-and-data-mining-11-decision-trees)
482 | - [How do decision tree learning algorithms deal with missing values?](http://stats.stackexchange.com/questions/96025/how-do-decision-tree-learning-algorithms-deal-with-missing-values-under-the-hoo)
483 | - [Using Surrogates to Improve Datasets with Missing Values](http://www.salford-systems.com/videos/tutorials/tips-and-tricks/using-surrogates-to-improve-datasets-with-missing-values)
484 | - [Good Article on Decision Tree Analysis](https://www.mindtools.com/dectree.html)
485 | - [Are decision trees almost always binary trees?](http://stats.stackexchange.com/questions/12187/are-decision-trees-almost-always-binary-trees)
486 | - [Pruning Decision Trees](https://en.wikipedia.org/wiki/Pruning_(decision_trees)), [Grafting of Decision Trees](https://en.wikipedia.org/wiki/Grafting_(decision_trees))
487 | - [What is Deviance in context of Decision Trees?](http://stats.stackexchange.com/questions/6581/what-is-deviance-specifically-in-cart-rpart)
488 | - Comparison of Different Algorithms
489 | - [CART vs CTREE](http://stats.stackexchange.com/questions/12140/conditional-inference-trees-vs-traditional-decision-trees)
490 | - [Comparison of complexity or performance](https://stackoverflow.com/questions/9979461/different-decision-tree-algorithms-with-comparison-of-complexity-or-performance)
491 | - [CHAID vs CART](http://stats.stackexchange.com/questions/61230/chaid-vs-crt-or-cart), [CART vs CHAID](http://www.bzst.com/2006/10/classification-trees-cart-vs-chaid.html)
492 | - [Good Article on comparison](http://www.ftpress.com/articles/article.aspx?p=2248639&seqNum=11)
493 | - CART
494 | - [Recursive Partitioning Wikipedia](https://en.wikipedia.org/wiki/Recursive_partitioning)
495 | - [CART Explained](http://documents.software.dell.com/Statistics/Textbook/Classification-and-Regression-Trees)
496 | - [How to measure/rank “variable importance” when using CART?](http://stats.stackexchange.com/questions/6478/how-to-measure-rank-variable-importance-when-using-cart-specifically-using)
497 | - [Pruning a Tree in R](http://stackoverflow.com/questions/15318409/how-to-prune-a-tree-in-r)
498 | - [Does rpart use multivariate splits by default?](http://stats.stackexchange.com/questions/4356/does-rpart-use-multivariate-splits-by-default)
499 | - [FAQs about Recursive Partitioning](http://stats.stackexchange.com/questions/tagged/rpart)
500 | - CTREE
501 | - [party package in R](https://cran.r-project.org/web/packages/party/party.pdf)
502 | - [Show volume in each node using ctree in R](http://stackoverflow.com/questions/13772715/show-volume-in-each-node-using-ctree-plot-in-r)
503 | - [How to extract tree structure from ctree function?](http://stackoverflow.com/questions/8675664/how-to-extract-tree-structure-from-ctree-function)
504 | - CHAID
505 | - [Wikipedia Article on CHAID](https://en.wikipedia.org/wiki/CHAID)
506 | - [Basic Introduction to CHAID](https://smartdrill.com/Introduction-to-CHAID.html)
507 | - [Good Tutorial on CHAID](http://www.statsoft.com/Textbook/CHAID-Analysis)
508 | - MARS
509 | - [Wikipedia Article on MARS](https://en.wikipedia.org/wiki/Multivariate_adaptive_regression_splines)
510 | - Probabilistic Decision Trees
511 | - [Bayesian Learning in Probabilistic Decision Trees](http://www.stats.org.uk/bayesian/Jordan.pdf)
512 | - [Probabilistic Trees Research Paper](http://people.stern.nyu.edu/adamodar/pdfiles/papers/probabilistic.pdf)
513 |
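To make the CART material above concrete, here is a minimal illustrative sketch using R's `rpart` package (referenced in several of the links): it grows a classification tree, inspects the cross-validated complexity-parameter table, and prunes at an arbitrarily chosen `cp`.

```r
library(rpart)

fit <- rpart(Species ~ ., data = iris, method = "class")
printcp(fit)                     # cross-validated error for each complexity parameter (cp)
plot(fit); text(fit)             # draw the tree with split labels
pruned <- prune(fit, cp = 0.1)   # prune back at an illustrative cp value
pruned
```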
514 |
515 | ##Random Forest / Bagging
516 | - [Awesome Random Forest (GitHub)](https://github.com/kjw0612/awesome-random-forest)
517 | - [How to tune RF parameters in practice?](https://www.kaggle.com/forums/f/15/kaggle-forum/t/4092/how-to-tune-rf-parameters-in-practice)
518 | - [Measures of variable importance in random forests](http://stats.stackexchange.com/questions/12605/measures-of-variable-importance-in-random-forests)
519 | - [Compare R-squared from two different Random Forest models](http://stats.stackexchange.com/questions/13869/compare-r-squared-from-two-different-random-forest-models)
520 | - [OOB Estimate Explained | RF vs LDA](https://stat.ethz.ch/education/semesters/ss2012/ams/slides/v10.2.pdf)
521 | - [Evaluating Random Forests for Survival Analysis Using Prediction Error Curve](http://www.jstatsoft.org/article/view/v050i11)
522 | - [Why doesn't Random Forest handle missing values in predictors?](http://stats.stackexchange.com/questions/98953/why-doesnt-random-forest-handle-missing-values-in-predictors)
523 | - [How to build random forests in R with missing (NA) values?](http://stackoverflow.com/questions/8370455/how-to-build-random-forests-in-r-with-missing-na-values)
524 | - [FAQs about Random Forest](http://stats.stackexchange.com/questions/tagged/random-forest), [More FAQs](http://stackoverflow.com/questions/tagged/random-forest)
525 | - [Obtaining knowledge from a random forest](http://stats.stackexchange.com/questions/21152/obtaining-knowledge-from-a-random-forest)
526 | - [Some Questions for R implementation](http://stackoverflow.com/questions/20537186/getting-predictions-after-rfimpute), [2](http://stats.stackexchange.com/questions/81609/whether-preprocessing-is-needed-before-prediction-using-finalmodel-of-randomfore), [3](http://stackoverflow.com/questions/17059432/random-forest-package-in-r-shows-error-during-prediction-if-there-are-new-fact)
527 |
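A minimal illustrative sketch of the out-of-bag error and variable-importance ideas discussed above, assuming the `randomForest` package and the Boston housing data; the `ntree`/`mtry` values are arbitrary, not tuned.

```r
library(randomForest)
library(MASS)                    # Boston housing data

set.seed(1)
rf <- randomForest(medv ~ ., data = Boston,
                   ntree = 500, mtry = 4,      # illustrative settings
                   importance = TRUE)
rf                               # printed output includes the OOB mean of squared residuals
importance(rf)                   # %IncMSE and IncNodePurity per predictor
varImpPlot(rf)
```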
528 |
529 | ##Boosting
530 | - [Boosting for Better Predictions](http://www.datasciencecentral.com/profiles/blogs/boosting-algorithms-for-better-predictions)
531 | - [Boosting Wikipedia Page](https://en.wikipedia.org/wiki/Boosting_(machine_learning))
532 | - [Introduction to Boosted Trees | Tianqi Chen](https://homes.cs.washington.edu/~tqchen/pdf/BoostedTree.pdf)
533 | - Gradient Boosting Machine
534 | - [Gradient Boosting Wiki](https://en.wikipedia.org/wiki/Gradient_boosting)
535 | - [Guidelines for GBM parameters in R](http://stats.stackexchange.com/questions/25748/what-are-some-useful-guidelines-for-gbm-parameters), [Strategy to set parameters](http://stats.stackexchange.com/questions/35984/strategy-to-set-the-gbm-parameters)
536 | - [Meaning of Interaction Depth](http://stats.stackexchange.com/questions/16501/what-does-interaction-depth-mean-in-gbm)
537 | - [Role of n.minobsinnode parameter of GBM in R](http://stats.stackexchange.com/questions/30645/role-of-n-minobsinnode-parameter-of-gbm-in-r)
538 | - [GBM in R](http://www.slideshare.net/mark_landry/gbm-package-in-r)
539 | - [FAQs about GBM](http://stats.stackexchange.com/tags/gbm/hot)
540 | - [GBM vs xgboost](https://www.kaggle.com/c/higgs-boson/forums/t/9497/r-s-gbm-vs-python-s-xgboost)
541 |
542 | - xgboost
543 | - [xgboost tuning kaggle](https://www.kaggle.com/khozzy/rossmann-store-sales/xgboost-parameter-tuning-template/log)
544 | - [xgboost vs gbm](https://www.kaggle.com/c/otto-group-product-classification-challenge/forums/t/13012/question-to-experienced-kagglers-and-anyone-who-wants-to-take-a-shot/68296#post68296)
545 | - [xgboost survey](https://www.kaggle.com/c/higgs-boson/forums/t/10335/xgboost-post-competition-survey)
546 | - AdaBoost
547 | - [AdaBoost Wiki](https://en.wikipedia.org/wiki/AdaBoost), [Python Code](https://gist.github.com/tristanwietsma/5486024)
548 | - [AdaBoost Sparse Input Support](http://hamzehal.blogspot.com/2014/06/adaboost-sparse-input-support.html)
549 | - [adaBag R package](https://cran.r-project.org/web/packages/adabag/adabag.pdf)
550 | - [Tutorial](http://math.mit.edu/~rothvoss/18.304.3PM/Presentations/1-Eric-Boosting304FinalRpdf.pdf)
551 |
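A minimal illustrative sketch of a gradient boosting machine with R's `gbm` package, mainly to show where the tuning parameters discussed above (`n.trees`, `interaction.depth`, `shrinkage`, `n.minobsinnode`) go; the values are placeholders, not guidance.

```r
library(gbm)
library(MASS)                    # Boston housing data

set.seed(1)
fit <- gbm(medv ~ ., data = Boston, distribution = "gaussian",
           n.trees = 2000, interaction.depth = 4,
           shrinkage = 0.01, n.minobsinnode = 10,   # placeholder values
           cv.folds = 5)
best <- gbm.perf(fit, method = "cv")  # cross-validation pick for the number of trees
summary(fit, n.trees = best)          # relative influence of each predictor
```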
552 |
553 | ##Ensembles
554 | - [Wikipedia Article on Ensemble Learning](https://en.wikipedia.org/wiki/Ensemble_learning)
555 | - [Kaggle Ensembling Guide](http://mlwave.com/kaggle-ensembling-guide/)
556 | - [The Power of Simple Ensembles](http://www.overkillanalytics.net/more-is-always-better-the-power-of-simple-ensembles/)
557 | - [Ensemble Learning Intro](http://machine-learning.martinsewell.com/ensembles/)
558 | - [Ensemble Learning Paper](http://cs.nju.edu.cn/zhouzh/zhouzh.files/publication/springerEBR09.pdf)
559 | - [Ensembling models with R](http://amunategui.github.io/blending-models/), [Ensembling Regression Models in R](http://stats.stackexchange.com/questions/26790/ensembling-regression-models), [Intro to Ensembles in R](http://www.vikparuchuri.com/blog/intro-to-ensemble-learning-in-r/)
560 | - [Ensembling Models with caret](http://stats.stackexchange.com/questions/27361/stacking-ensembling-models-with-caret)
561 | - [Bagging vs Boosting vs Stacking](http://stats.stackexchange.com/questions/18891/bagging-boosting-and-stacking-in-machine-learning)
562 | - [Good Resources | Kaggle Africa Soil Property Prediction](https://www.kaggle.com/c/afsis-soil-properties/forums/t/10391/best-ensemble-references)
563 | - [Boosting vs Bagging](http://www.chioka.in/which-is-better-boosting-or-bagging/)
564 | - [Resources for learning how to implement ensemble methods](http://stats.stackexchange.com/questions/32703/resources-for-learning-how-to-implement-ensemble-methods)
565 | - [How are classifications merged in an ensemble classifier?](http://stats.stackexchange.com/questions/21502/how-are-classifications-merged-in-an-ensemble-classifier)
566 |
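As a concrete example of the "simple ensembles" idea above, this illustrative sketch averages the test-set predictions of two quite different regression models; the data split and model choices are arbitrary.

```r
library(MASS)    # Boston housing data
library(rpart)   # regression tree

set.seed(42)
idx   <- sample(nrow(Boston), 400)
train <- Boston[idx, ];  test <- Boston[-idx, ]

p_lm   <- predict(lm(medv ~ ., data = train), test)
p_tree <- predict(rpart(medv ~ ., data = train), test)
p_avg  <- (p_lm + p_tree) / 2    # equal-weight blend of the two models

sapply(list(lm = p_lm, tree = p_tree, blend = p_avg),
       function(p) sqrt(mean((p - test$medv)^2)))   # test RMSE of each
```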
567 |
568 | ##Stacking Models
569 | - [Stacking, Blending and Stacked Generalization](http://www.chioka.in/stacking-blending-and-stacked-generalization/)
570 | - [Stacked Generalization (Stacking)](http://machine-learning.martinsewell.com/ensembles/stacking/)
571 | - [Stacked Generalization: when does it work?](http://www.ijcai.org/Past%20Proceedings/IJCAI-97-VOL2/PDF/011.pdf)
572 | - [Stacked Generalization Paper](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.56.1533&rep=rep1&type=pdf)
573 |
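A minimal illustrative sketch of stacked generalization in base R, under arbitrary choices (two base learners, five folds, a linear meta-model): out-of-fold predictions from the base learners become the features of a level-1 model.

```r
library(rpart)

set.seed(1)
folds <- sample(rep(1:5, length.out = nrow(mtcars)))
meta  <- data.frame(lm_pred = NA_real_, tree_pred = NA_real_, mpg = mtcars$mpg)

for (k in 1:5) {
  train <- mtcars[folds != k, ]
  test  <- mtcars[folds == k, ]
  meta$lm_pred[folds == k]   <- predict(lm(mpg ~ wt + hp, data = train), test)
  meta$tree_pred[folds == k] <- predict(rpart(mpg ~ wt + hp, data = train), test)
}

stack_fit <- lm(mpg ~ lm_pred + tree_pred, data = meta)  # level-1 (meta) model
summary(stack_fit)   # coefficients show how the base learners are weighted
```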
574 |
575 | ##Vapnik–Chervonenkis Dimension
576 | - [Wikipedia article on VC Dimension](https://en.wikipedia.org/wiki/VC_dimension)
577 | - [Intuitive Explanation of VC Dimension](https://www.quora.com/Explain-VC-dimension-and-shattering-in-lucid-Way)
578 | - [Video explaining VC Dimension](https://www.youtube.com/watch?v=puDzy2XmR5c)
579 | - [Introduction to VC Dimension](http://www.svms.org/vc-dimension/)
580 | - [FAQs about VC Dimension](http://stats.stackexchange.com/questions/tagged/vc-dimension)
581 | - [Do ensemble techniques increase VC-dimension?](http://stats.stackexchange.com/questions/78076/do-ensemble-techniques-increase-vc-dimension)
582 |
583 |
584 |
585 | ##Bayesian Machine Learning
586 | - [Bayesian Methods for Hackers (using pyMC)](https://github.com/CamDavidsonPilon/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers)
587 | - [Should all Machine Learning be Bayesian?](http://videolectures.net/bark08_ghahramani_samlbb/)
588 | - [Tutorial on Bayesian Optimisation for Machine Learning](http://www.iro.umontreal.ca/~bengioy/cifar/NCAP2014-summerschool/slides/Ryan_adams_140814_bayesopt_ncap.pdf)
589 | - [Bayesian Reasoning and Deep Learning](http://blog.shakirm.com/2015/10/bayesian-reasoning-and-deep-learning/), [Slides](http://blog.shakirm.com/wp-content/uploads/2015/10/Bayes_Deep.pdf)
590 | - [Bayesian Statistics Made Simple](http://greenteapress.com/thinkbayes/)
591 | - [Kalman & Bayesian Filters in Python](https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python)
592 | - [Markov Chain Wikipedia Page](https://en.wikipedia.org/wiki/Markov_chain)
593 |
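For a first hands-on feel for Bayesian updating (complementing the reading above), here is a tiny illustrative sketch of Beta-Binomial conjugacy in base R; the coin-flip counts are invented.

```r
# Uniform Beta(1, 1) prior on a coin's heads probability, updated after
# observing 7 heads and 3 tails; the posterior is Beta(1 + 7, 1 + 3).
heads <- 7; tails <- 3                                      # invented data
curve(dbeta(x, 1, 1), from = 0, to = 1, ylim = c(0, 3.5),
      xlab = "P(heads)", ylab = "density", lty = 2)         # prior
curve(dbeta(x, 1 + heads, 1 + tails), add = TRUE)           # posterior
qbeta(c(0.025, 0.975), 1 + heads, 1 + tails)                # 95% credible interval
```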
594 |
595 |
596 | ##Semi Supervised Learning
597 | - [Wikipedia article on Semi Supervised Learning](https://en.wikipedia.org/wiki/Semi-supervised_learning)
598 | - [Tutorial on Semi Supervised Learning](http://pages.cs.wisc.edu/~jerryzhu/pub/sslicml07.pdf)
599 | - [Graph Based Semi Supervised Learning for NLP](http://graph-ssl.wdfiles.com/local--files/blog%3A_start/graph_ssl_acl12_tutorial_slides_final.pdf)
600 | - [Taxonomy](http://is.tuebingen.mpg.de/fileadmin/user_upload/files/publications/taxo_[0].pdf)
601 | - [Video Tutorial Weka](https://www.youtube.com/watch?v=sWxcIjZFGNM)
602 | - [Unsupervised, Supervised and Semi Supervised learning](http://stats.stackexchange.com/questions/517/unsupervised-supervised-and-semi-supervised-learning)
603 | - [Research Papers 1](http://mlg.eng.cam.ac.uk/zoubin/papers/zglactive.pdf), [2](http://mlg.eng.cam.ac.uk/zoubin/papers/zgl.pdf), [3](http://icml.cc/2012/papers/616.pdf)
604 |
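A minimal illustrative self-training sketch in base R (one of several semi-supervised strategies covered in the tutorials above): a classifier fit on a small labeled subset pseudo-labels the most confident unlabeled points and is then refit; the dataset, labeled-set size, and 0.95/0.05 confidence thresholds are arbitrary.

```r
set.seed(7)
d <- subset(iris, Species != "setosa")            # reduce to a two-class problem
d$y <- as.integer(d$Species == "virginica")
labeled   <- sample(nrow(d), 20)                  # pretend only 20 labels are known
unlabeled <- setdiff(seq_len(nrow(d)), labeled)

fit  <- glm(y ~ Petal.Length + Petal.Width, data = d[labeled, ], family = binomial)
p    <- predict(fit, d[unlabeled, ], type = "response")
sure <- p > 0.95 | p < 0.05                       # high-confidence predictions only

d$y[unlabeled[sure]] <- as.integer(p[sure] > 0.5) # attach pseudo-labels
fit2 <- glm(y ~ Petal.Length + Petal.Width,
            data = d[c(labeled, unlabeled[sure]), ], family = binomial)
summary(fit2)                                     # refit on labeled + pseudo-labeled rows
```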
605 |
606 |
607 |
608 | ##Optimization
609 | - [Mean Variance Portfolio Optimization with R and Quadratic Programming](http://www.wdiam.com/2012/06/10/mean-variance-portfolio-optimization-with-r-and-quadratic-programming/)
610 | - [Algorithms for Sparse Optimization and Machine Learning](http://www.ima.umn.edu/2011-2012/W3.26-30.12/activities/Wright-Steve/sjw-ima12)
612 | - [Optimization Algorithms in Machine Learning](http://pages.cs.wisc.edu/~swright/nips2010/sjw-nips10.pdf), [Video Lecture](http://videolectures.net/nips2010_wright_oaml/)
613 | - [Optimization Algorithms for Data Analysis](http://www.birs.ca/workshops/2011/11w2035/files/Wright.pdf)
614 | - [Video Lectures on Optimization](http://videolectures.net/stephen_j_wright/)
615 | - [Optimization Algorithms in Support Vector Machines](http://pages.cs.wisc.edu/~swright/talks/sjw-complearning.pdf)
616 | - [The Interplay of Optimization and Machine Learning Research](http://jmlr.org/papers/volume7/MLOPT-intro06a/MLOPT-intro06a.pdf)
617 |
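To connect the optimization links above to runnable code, here is a small illustrative quadratic-programming example with R's `quadprog` package, in the spirit of the mean-variance portfolio post: it finds the minimum-variance long-only portfolio for an invented covariance matrix.

```r
library(quadprog)

Sigma <- matrix(c(0.04, 0.01, 0.00,
                  0.01, 0.09, 0.02,
                  0.00, 0.02, 0.16), nrow = 3)    # invented asset covariance matrix
n <- nrow(Sigma)

# solve.QP minimises (1/2) w' D w - d' w  subject to  A' w >= b,
# where the first meq constraints are treated as equalities
sol <- solve.QP(Dmat = 2 * Sigma, dvec = rep(0, n),
                Amat = cbind(rep(1, n), diag(n)),  # sum(w) = 1; each w_i >= 0
                bvec = c(1, rep(0, n)), meq = 1)
round(sol$solution, 3)                             # minimum-variance weights
```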
618 |
619 | ##Other Tutorials
620 | - For a collection of Data Science Tutorials using R, please refer to [this list](https://github.com/ujjwalkarn/DataScienceR).
621 |
--------------------------------------------------------------------------------
/StatsLearning/1_linear_regression.R:
--------------------------------------------------------------------------------
1 | library(MASS) #loads dataset from the book MASS
2 | library(ISLR) #dataset by Statistical Learning professors
3 |
4 | ##Simple Linear Regression
5 | data(Boston)
6 | names(Boston)
7 | ?Boston
8 |
9 | plot(medv~lstat, Boston) #negative relationship: as lstat (% lower-status population) increases, medv (median home value) decreases
10 |
11 | #response~predictor (the response is modeled as a function of the predictor)
12 | fit1<-lm(medv~lstat, Boston)
13 | fit1
14 | summary(fit1)
15 |
16 | #add the fitted regression line to the scatterplot
17 | abline(fit1,col="red")
18 |
19 | #see the components of fit
20 | #access any one of these like "fit1$coefficients" etc.
21 | names(fit1)
22 | # [1] "coefficients" "residuals" "effects" "rank" "fitted.values" "assign"
23 | # [7] "qr" "df.residual" "xlevels" "call" "terms" "model"
24 |
25 | fit1$coefficients
26 | # (Intercept) lstat
27 | # 34.5538409 -0.9500494
28 |
29 | #95% confidence interval
30 | confint(fit1)
31 | # 2.5 % 97.5 %
32 | # (Intercept) 33.448457 35.6592247
33 | # lstat -1.026148 -0.8739505
34 |
35 | #predict medv (response) for these 3 values of lstat (predictor).
36 | #also show confidence intervals
37 | predict(fit1,data.frame(lstat=c(5,10,15)),interval="confidence")
38 | # fit lwr upr
39 | # 1 29.80359 29.00741 30.59978
40 | # 2 25.05335 24.47413 25.63256
41 | # 3 20.30310 19.73159 20.87461
42 |
43 |
44 | ##Multiple Linear Regression
45 | fit2<-lm(medv~lstat+age,data=Boston)
46 | summary(fit2)
47 | plot(fit2$residuals)
48 | plot(fitted(fit2),fit2$residuals)
49 | hist(fit2$residuals)
50 |
51 | fit3<-lm(medv~.,data=Boston)
52 | summary(fit3)
53 |
54 | #plot residuals
55 | par(mfrow=c(2,2))
56 | plot(fit3)
57 | hist(fit3$residuals)
58 |
59 | par(mfrow=c(1,1))
60 | plot(fitted(fit3),fit3$residuals)
61 |
62 | #update function used below to remove 'age' and 'indus' from the model
63 | fit4<- update(fit3,~.-age-indus)
64 | summary(fit4)
65 |
66 | ##Non Linearities and Interactions
67 | #"*" in the formula means we'll have both main-effects &interaction
68 | fit5<-lm(medv~lstat*age, Boston)
69 | summary(fit5)
70 |
71 | #"^" term has to be put inside identity function so that ^ is not computed while
72 | #executing and so that lstat^2 is treated as a separate term
73 | fit6<-lm(medv~lstat+ I(lstat^2), Boston)
74 | summary(fit6)
75 | plot(fit6)
76 |
77 | attach(Boston) #make the named variables in Boston available on the R search path
78 | par(mfrow=c(1,1))
79 | plot(medv~lstat)
80 | #can't use abline() here since that only works for a straight-line fit
81 | #fitted(fit6) gives fitted value from the model for each value of lstat in training data
82 | points(lstat,fitted(fit6),col="red",pch=20)
83 |
84 | #fit polynomial of degree 4
85 | fit7<-lm(medv~poly(lstat,4))
86 | points(lstat,fitted(fit7),col="blue",pch=20) #little more wiggly than desired (overfit)
87 |
88 | #see all plotting characters
89 | plot(1:20,1:20,pch=1:20,cex=2)
90 |
91 | ##Qualitative Predictors
92 | fix(Carseats) #fix opens an external window with the dataframe
93 | names(Carseats)
94 | summary(Carseats)
95 |
96 | #"*" adds main effects plus the interaction; ":" adds only the interaction term
97 | fit8<-lm(Sales~.+Income*Advertising+Age:Price,data=Carseats)
98 | summary(fit8)
99 |
100 | #ShelveLoc is a qualitative predictor
101 | #contrasts shows how factors are treated in the model
102 | #only 2 dummy variables "Good" AND "Medium" are generated
103 | #number of dummy variables is 1 less than number of levels in the
104 | #factor variable (ShelveLoc) to avoid perfect multicollinearity
105 | contrasts(Carseats$ShelveLoc)
106 | # Good Medium
107 | # Bad 0 0
108 | # Good 1 0
109 | # Medium 0 1
110 |
--------------------------------------------------------------------------------
/StatsLearning/2_logistic_regression.R:
--------------------------------------------------------------------------------
1 | #install.packages("ISLR")
2 | require(ISLR)
3 | names(Smarket)
4 | summary(Smarket)
5 |
6 | mydata<-Smarket
7 | ?Smarket
8 |
9 | #plot the data
10 | pairs(Smarket,col=Smarket$Direction)
11 |
12 | #Logistic Regression
13 | #family=binomial specifies logistic regression (a binomial GLM with logit link)
14 | glm.fit<- glm(Direction~ Lag1+Lag2+Lag3+Lag4+Lag5+Volume,data=Smarket,family=binomial)
15 | #?glm
16 | #?family
17 | summary(glm.fit)
18 | glm.probs=predict(glm.fit,type="response")
19 |
20 | #install.packages("car")
21 | library(car)
22 | vif(glm.fit) # variance inflation factors
23 | sqrt(vif(glm.fit)) > 2 #rule of thumb: sqrt(VIF) > 2 flags potentially problematic collinearity
24 |
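25 | 
26 | #added illustrative sketch (not part of the original script): a simple
27 | #train/test evaluation, assuming Year < 2005 as the training period
28 | train<- Smarket$Year<2005
29 | glm.fit2<- glm(Direction~Lag1+Lag2,data=Smarket,family=binomial,subset=train)
30 | glm.probs2<- predict(glm.fit2,newdata=Smarket[!train,],type="response")
31 | glm.pred<- ifelse(glm.probs2>0.5,"Up","Down")
32 | table(glm.pred,Smarket$Direction[!train]) #confusion matrix on the 2005 test data
33 | mean(glm.pred==Smarket$Direction[!train]) #test set accuracy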
--------------------------------------------------------------------------------
/contributing.md:
--------------------------------------------------------------------------------
1 | If you want to contribute to this list (please do), send me a pull request. Since we want this list to be useful in the long run, **please submit high quality links only**.
2 |
3 | ## Adding to this list
4 |
5 | Please ensure your pull request adheres to the following guidelines:
6 |
7 | - Please search previous suggestions before making a new one, as yours may be a duplicate.
8 | - Make sure your link has a useful and relevant title.
9 | - Please make an individual pull request for each suggestion.
10 | - Please use [title-casing](http://titlecapitalization.com) (AP style).
11 | - Please use the following format: `[Useful Title](link)`
12 | - Link additions should be added to the bottom of the relevant category.
13 | - New categories or improvements to the existing categorization are welcome.
14 | - Please check your spelling and grammar.
15 | - The pull request and commit should have a useful title.
16 |
17 | Thank you for your suggestions!
18 |
--------------------------------------------------------------------------------