├── cookies.json
├── Lectures
│   ├── 07 Supervised Learning - Classification
│   │   ├── 10 Project 1.ipynb
│   │   └── images
│   │       ├── cm.png
│   │       ├── knn.png
│   │       ├── roc.png
│   │       ├── svm.jpg
│   │       ├── banner.png
│   │       ├── bayes.png
│   │       ├── kernel.png
│   │       ├── kfold.png
│   │       ├── logit.png
│   │       ├── LOO-kfold.png
│   │       ├── bayes-2.png
│   │       ├── bernouli.webp
│   │       ├── case-1-cm.png
│   │       ├── case-2-cm.png
│   │       ├── det-gen.avif
│   │       ├── dog-clf.webp
│   │       ├── entropy.png
│   │       ├── glm-dist.png
│   │       ├── loan-cm.png
│   │       ├── nb-dist.avif
│   │       ├── nb-types.jpg
│   │       ├── softmax.png
│   │       ├── svm-2d.avif
│   │       ├── svm-line.png
│   │       ├── glm-dist-2.png
│   │       ├── glm-origin.jpg
│   │       ├── lazy-eager.jpg
│   │       ├── ml-models.webp
│   │       ├── one-vs-rest.jpg
│   │       ├── one-vs-rest.png
│   │       ├── cm-multiclass.png
│   │       ├── decision-tree.png
│   │       ├── disease-metric.png
│   │       ├── instance-based.jpg
│   │       ├── roc-comparison.png
│   │       ├── svm-hyperplane.png
│   │       ├── svm-kernel-1.avif
│   │       ├── svm-kernel-2.avif
│   │       ├── svm-kernel-3.avif
│   │       ├── clf-metrics-viz.webp
│   │       ├── distance-metrics.png
│   │       ├── instance-based.jpeg
│   │       ├── linear-nonlinear.ppm
│   │       ├── stratified-kfold.png
│   │       ├── game-decision-tree.png
│   │       ├── logistic-learning.gif
│   │       ├── logistic-sigmoid.webp
│   │       ├── logistic-vs-linear.png
│   │       ├── metrics-comparison.jpg
│   │       ├── soft-hard-margin.jpeg
│   │       ├── soft-vs-hard-margin.png
│   │       ├── supervised-learning.png
│   │       ├── svm-bad-good-margin.png
│   │       ├── svm-support-vector.jpg
│   │       ├── logistic-vs-linear-2.png
│   │       ├── classification-metrics.png
│   │       ├── clf-performance-metrics.png
│   │       ├── model-selection-training.png
│   │       ├── naive-bayes-classifier.webp
│   │       ├── parametric-nonparametric.png
│   │       ├── regression-to-classification.png
│   │       └── regression-vs-classification.avif
│   ├── 01 Introduction to Machine Learning
│   │   ├── 03 Python Crash Course.ipynb
│   │   ├── 04 Statistics Crash Course.ipynb
│   │   ├── 05 Data Processing Crash Course.ipynb
│   │   ├── 02 Setting up the Python environment.ipynb
│   │   ├── 06 Data Visualization Crash Course.ipynb
│   │   ├── images
│   │   │   ├── banner.png
│   │   │   └── ml-101.png
│   │   └── 01 Machine Learning 101.ipynb
│   ├── 11 Practical Machine Learning Projects
│   │   ├── 02 Fraud Detection.ipynb
│   │   ├── 05 Titanic Dataset.ipynb
│   │   ├── 01 Sentiment Analysis.ipynb
│   │   ├── 04 Spam Email Detection.ipynb
│   │   └── 03 Movie Recommendation System.ipynb
│   ├── 03 Optimization Techniques
│   │   ├── 04 Advanced Varients of Gradient Descent.ipynb
│   │   └── images
│   │       ├── banner.png
│   │       ├── curvature.png
│   │       ├── tmp
│   │       │   ├── gradient.jpg
│   │       │   ├── batch-size.jpg
│   │       │   ├── vanishing.png
│   │       │   ├── batch-update.png
│   │       │   ├── ml-gradient.png
│   │       │   ├── height-optimization.png
│   │       │   └── optimization_learning.png
│   │       ├── gradient_field.png
│   │       └── optimization_shape.png
│   ├── .DS_Store
│   ├── 02 Probabilistic Modeling
│   │   ├── .DS_Store
│   │   ├── images
│   │   │   ├── .DS_Store
│   │   │   ├── banner.png
│   │   │   ├── probabilities.png
│   │   │   ├── tmp
│   │   │   │   ├── bayesian.png
│   │   │   │   ├── beta-dist.jpg
│   │   │   │   ├── binomial.jpg
│   │   │   │   ├── bernoulli.webp
│   │   │   │   ├── gamma-dist.jpg
│   │   │   │   ├── gamma-dist.png
│   │   │   │   ├── binomial-dist.jpg
│   │   │   │   ├── frequentist.webp
│   │   │   │   ├── geometric-dist.png
│   │   │   │   ├── normal-dist.webp
│   │   │   │   ├── poisson-dist.webp
│   │   │   │   ├── trinomial-dist.png
│   │   │   │   ├── uniform-dist.jpg
│   │   │   │   ├── uniform-dist.png
│   │   │   │   ├── exponential-dist.jpg
│   │   │   │   ├── exponential-dist.png
│   │   │   │   ├── De_moivre-laplace.gif
│   │   │   │   ├── bayesian-ai-system.png
│   │   │   │   ├── standard-normal-dist.webp
│   │   │   │   ├── bayesian-estimate-path.jpg
│   │   │   │   ├── frequentist-vs-bayesian.png
│   │   │   │   └── frequentist-estimate-path.jpg
│   │   │   ├── prob-distributions.png
│   │   │   └── probabilistic-modeling.png
│   │   ├── 01 Introduction to Probabilistic Modeling.ipynb
│   │   └── 07 Bayesian Inference.ipynb
│   ├── 09 Ensemble Methods
│   │   └── images
│   │       ├── banner.png
│   │       ├── stump.jpg
│   │       ├── voting.png
│   │       ├── wisdom.jpg
│   │       ├── adaboost.png
│   │       ├── bagging.png
│   │       ├── boosting.png
│   │       ├── stacking.png
│   │       ├── xgboost-2.png
│   │       ├── xgboost.png
│   │       ├── adaboost-2.png
│   │       ├── boosting-2.png
│   │       ├── boosting-3.webp
│   │       ├── extra-trees.png
│   │       ├── stacking-2.jpg
│   │       ├── stacking-3.png
│   │       ├── bias-variance.png
│   │       ├── oob-samples.avif
│   │       ├── random-forest.jpg
│   │       ├── random-forest.webp
│   │       ├── bagging-variance.jpg
│   │       ├── ensemble-methods.jpg
│   │       ├── prob-calibration.png
│   │       ├── soft-hard-voting.ppm
│   │       ├── Bagging-classifier.png
│   │       ├── boosting-algorithms.png
│   │       ├── diverse-predictors.png
│   │       ├── ensemble-learning.webp
│   │       ├── gradient-boosting.jpg
│   │       ├── gradient-boosting.png
│   │       ├── random-forest-features.png
│   │       ├── bagging-boosting-stacking.png
│   │       └── decision-tree-vs-random-forest.png
│   ├── 08 Unsupervised Learning
│   │   └── images
│   │       ├── 123d.png
│   │       ├── 1d.png
│   │       ├── 2d.png
│   │       ├── 3d.png
│   │       ├── kde.gif
│   │       ├── pca.png
│   │       ├── OPTICS.png
│   │       ├── banner.png
│   │       ├── basket.png
│   │       ├── kmeans.gif
│   │       ├── apriori.jpg
│   │       ├── fp-growth.png
│   │       ├── itemset.png
│   │       ├── k-means.webp
│   │       ├── optics.webp
│   │       ├── partition.png
│   │       ├── algorithms.jpg
│   │       ├── clustering.webp
│   │       ├── dbscan-algo.gif
│   │       ├── dbscan-viz.webp
│   │       ├── optics-viz.webp
│   │       ├── types-of-ml.png
│   │       ├── auto-encoder.png
│   │       ├── avg-distance.png
│   │       ├── dbscan-points.webp
│   │       ├── dim-reduction.png
│   │       ├── dim-reduction.webp
│   │       ├── transactions.webp
│   │       ├── clustering-types.jpg
│   │       ├── feat-extraction.png
│   │       ├── apriori-application.png
│   │       ├── clustering-vs-clf.png
│   │       ├── feature-selection.jpg
│   │       ├── feature-selection.ppm
│   │       ├── optics-reachability.gif
│   │       ├── supervised-learning.png
│   │       ├── clustering-algorithms.ppm
│   │       ├── clustering-evaluation.png
│   │       ├── grid-based-clustering.png
│   │       ├── unsupervised-learning.png
│   │       ├── density-based-clustering.gif
│   │       ├── density-based-clustering.jpg
│   │       ├── hierarchical-clustering.png
│   │       ├── higher-dim-performance.png
│   │       ├── model-based-clustering.jpg
│   │       ├── support-confidence-lift.png
│   │       ├── optics-reachability-plot.webp
│   │       ├── clustering-distance-measure.png
│   │       ├── inter-intra-cluster-distance.png
│   │       ├── optics-reachability-plot-viz.webp
│   │       ├── cluster-from-reachability-plot.webp
│   │       └── frequent-itemset-mining-algorithms.webp
│   ├── 04 Parameter Estimation
│   │   └── images
│   │       ├── banner.png
│   │       ├── map-mle.png
│   │       ├── beta-1-1.png
│   │       ├── beta-7-5.png
│   │       └── prob-likelihood.webp
│   ├── 06 Fundamentals of Scikit-learn
│   │   └── images
│   │       ├── cv.png
│   │       ├── fit.png
│   │       ├── xy.webp
│   │       ├── banner.png
│   │       ├── csr-coo.png
│   │       ├── mixin.png
│   │       ├── pipeline.png
│   │       ├── regressors.png
│   │       ├── sklearn-map.jpg
│   │       ├── sparse_dense.gif
│   │       ├── base-estimator.png
│   │       ├── feature-union.png
│   │       ├── grid-vs-random.png
│   │       ├── hyperparameter.png
│   │       ├── sklearn-train.jpeg
│   │       ├── sparse-matrix.png
│   │       ├── invalid-vs-valid.png
│   │       ├── scikit-learn-logo.png
│   │       ├── sklearn-objects.jpg
│   │       ├── column-transformer.png
│   │       └── column-transformer.webp
│   ├── 10 Neural Networks and Deep Learning
│   │   ├── images
│   │   │   ├── dl.jpg
│   │   │   ├── dl.png
│   │   │   ├── nn.gif
│   │   │   ├── cnn.webp
│   │   │   ├── loss.png
│   │   │   ├── rnn.png
│   │   │   ├── rnn.webp
│   │   │   ├── xor.png
│   │   │   ├── banner.png
│   │   │   ├── dl-data.png
│   │   │   ├── dropout.png
│   │   │   ├── l1-l2.png
│   │   │   ├── linear.avif
│   │   │   ├── ml-dl.webp
│   │   │   ├── mnist.png
│   │   │   ├── nn-loss.png
│   │   │   ├── relu.avif
│   │   │   ├── tanh.avif
│   │   │   ├── deep-nn.avif
│   │   │   ├── dropout-2.png
│   │   │   ├── neuron-ml.png
│   │   │   ├── nn-layer.jpg
│   │   │   ├── nn-layer.webp
│   │   │   ├── nn-layers.jpg
│   │   │   ├── sigmoid.avif
│   │   │   ├── softmax.avif
│   │   │   ├── agentic-ai.png
│   │   │   ├── classic-dl.jpeg
│   │   │   ├── cnn_banner.webp
│   │   │   ├── mnist-cost-2.png
│   │   │   ├── mnist-cost.png
│   │   │   ├── nn-history.jpg
│   │   │   ├── sgd-momentum.png
│   │   │   ├── transformers.png
│   │   │   ├── xor-problem.png
│   │   │   ├── batch-min-sg-2.png
│   │   │   ├── batch-mini-sg.webp
│   │   │   ├── brain-process.png
│   │   │   ├── early-stopping.png
│   │   │   ├── hidden-layers.jpg
│   │   │   ├── neuron-brain.webp
│   │   │   ├── next-word-pred.png
│   │   │   ├── nn-history-2.avif
│   │   │   ├── nn-history-3.jpeg
│   │   │   ├── nn-history-4.jpeg
│   │   │   ├── self-attention.png
│   │   │   ├── training-loop.png
│   │   │   ├── nn-input-output.png
│   │   │   ├── rnn-hidden-state.png
│   │   │   ├── weight-derivation.png
│   │   │   ├── data-agumentation.webp
│   │   │   ├── observability-tools.jpg
│   │   │   ├── single-multi-layer.png
│   │   │   ├── vanishing-gradient.jpg
│   │   │   ├── vanishing-gradient.png
│   │   │   ├── activation-functions-2.png
│   │   │   ├── activation-functions-3.png
│   │   │   ├── activation-functions.png
│   │   │   ├── backward-propagation.jpg
│   │   │   ├── backward-propagation.webp
│   │   │   ├── data-agumentation-text.jpg
│   │   │   ├── data-agumentation-text.png
│   │   │   ├── forward-propagation.webp
│   │   │   ├── fully-connected-layer.png
│   │   │   ├── graph-neural-network.jpeg
│   │   │   ├── hyperparameter-tuning.jpg
│   │   │   ├── iteration-epoch-batch.ppm
│   │   │   ├── hyperparameter-tuning-2.png
│   │   │   ├── optimizers-performance.webp
│   │   │   └── single-layer-perceptron.png
│   │   ├── 01 Foundations of Artificial Neural Networks.ipynb
│   │   └── 06 Introduction to Deep Learning.ipynb
│   └── 05 Supervised Learning - Regression
│       └── images
│           ├── l1-l2.ppm
│           ├── mse.jpeg
│           ├── tip.jpeg
│           ├── tip.webp
│           ├── banner.png
│           ├── lasso.webp
│           ├── normal.png
│           ├── ridge.webp
│           ├── polynomial.png
│           ├── r-squared.png
│           ├── bias-variance.png
│           ├── elastic-net.webp
│           ├── overfitting.png
│           ├── poly-overfit.jpg
│           ├── polynomial-2.png
│           ├── regularization.webp
│           ├── adjusted-r-squared.jpg
│           ├── bias-variance-2.jpeg
│           ├── stochastic-batch.png
│           ├── ridge-lasso-elastic.webp
│           ├── bias-variance-tradeoff.png
│           ├── stochastic-batch-minibatch.jpg
│           └── stochastic-batch-minibatch.png
├── images
│   ├── banner.png
│   └── pytopia-course.png
├── .gitignore
└── README.md

/cookies.json:
--------------------------------------------------------------------------------
{}

[Remaining file contents, condensed: every .ipynb notebook listed in the tree above is currently an empty placeholder, and every binary asset (the image files and .DS_Store entries) resolves to a raw GitHub URL of the form https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/<path>, for example https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/images/banner.png.]
-------------------------------------------------------------------------------- /Lectures/07 Supervised Learning - Classification/images/logit.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/07 Supervised Learning - Classification/images/logit.png -------------------------------------------------------------------------------- /Lectures/08 Unsupervised Learning/images/clustering-algorithms.ppm: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/08 Unsupervised Learning/images/clustering-algorithms.ppm -------------------------------------------------------------------------------- /Lectures/08 Unsupervised Learning/images/clustering-evaluation.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/08 Unsupervised Learning/images/clustering-evaluation.png -------------------------------------------------------------------------------- /Lectures/08 Unsupervised Learning/images/grid-based-clustering.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/08 Unsupervised Learning/images/grid-based-clustering.png -------------------------------------------------------------------------------- /Lectures/08 Unsupervised Learning/images/unsupervised-learning.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/08 Unsupervised Learning/images/unsupervised-learning.png -------------------------------------------------------------------------------- /Lectures/09 Ensemble Methods/images/bagging-boosting-stacking.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/09 Ensemble Methods/images/bagging-boosting-stacking.png -------------------------------------------------------------------------------- /Lectures/10 Neural Networks and Deep Learning/images/deep-nn.avif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/10 Neural Networks and Deep Learning/images/deep-nn.avif -------------------------------------------------------------------------------- /Lectures/10 Neural Networks and Deep Learning/images/dropout-2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/10 Neural Networks and Deep Learning/images/dropout-2.png -------------------------------------------------------------------------------- /Lectures/10 Neural Networks and Deep Learning/images/neuron-ml.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/10 Neural Networks and Deep Learning/images/neuron-ml.png -------------------------------------------------------------------------------- /Lectures/10 Neural Networks and Deep Learning/images/nn-layer.jpg: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/10 Neural Networks and Deep Learning/images/nn-layer.jpg -------------------------------------------------------------------------------- /Lectures/10 Neural Networks and Deep Learning/images/nn-layer.webp: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/10 Neural Networks and Deep Learning/images/nn-layer.webp -------------------------------------------------------------------------------- /Lectures/10 Neural Networks and Deep Learning/images/nn-layers.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/10 Neural Networks and Deep Learning/images/nn-layers.jpg -------------------------------------------------------------------------------- /Lectures/10 Neural Networks and Deep Learning/images/sigmoid.avif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/10 Neural Networks and Deep Learning/images/sigmoid.avif -------------------------------------------------------------------------------- /Lectures/10 Neural Networks and Deep Learning/images/softmax.avif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/10 Neural Networks and Deep Learning/images/softmax.avif -------------------------------------------------------------------------------- /Lectures/02 Probabilistic Modeling/images/probabilistic-modeling.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/02 Probabilistic Modeling/images/probabilistic-modeling.png -------------------------------------------------------------------------------- /Lectures/02 Probabilistic Modeling/images/tmp/De_moivre-laplace.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/02 Probabilistic Modeling/images/tmp/De_moivre-laplace.gif -------------------------------------------------------------------------------- /Lectures/02 Probabilistic Modeling/images/tmp/bayesian-ai-system.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/02 Probabilistic Modeling/images/tmp/bayesian-ai-system.png -------------------------------------------------------------------------------- /Lectures/05 Supervised Learning - Regression/images/bias-variance.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/05 Supervised Learning - Regression/images/bias-variance.png -------------------------------------------------------------------------------- /Lectures/05 Supervised Learning - Regression/images/elastic-net.webp: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/05 Supervised Learning - Regression/images/elastic-net.webp -------------------------------------------------------------------------------- /Lectures/05 Supervised Learning - Regression/images/overfitting.png: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/05 Supervised Learning - Regression/images/overfitting.png -------------------------------------------------------------------------------- /Lectures/05 Supervised Learning - Regression/images/poly-overfit.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/05 Supervised Learning - Regression/images/poly-overfit.jpg -------------------------------------------------------------------------------- /Lectures/05 Supervised Learning - Regression/images/polynomial-2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/05 Supervised Learning - Regression/images/polynomial-2.png -------------------------------------------------------------------------------- /Lectures/06 Fundamentals of Scikit-learn/images/invalid-vs-valid.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/06 Fundamentals of Scikit-learn/images/invalid-vs-valid.png -------------------------------------------------------------------------------- /Lectures/06 Fundamentals of Scikit-learn/images/scikit-learn-logo.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/06 Fundamentals of Scikit-learn/images/scikit-learn-logo.png -------------------------------------------------------------------------------- /Lectures/06 Fundamentals of Scikit-learn/images/sklearn-objects.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/06 Fundamentals of Scikit-learn/images/sklearn-objects.jpg -------------------------------------------------------------------------------- /Lectures/07 Supervised Learning - Classification/images/LOO-kfold.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/07 Supervised Learning - Classification/images/LOO-kfold.png -------------------------------------------------------------------------------- /Lectures/07 Supervised Learning - Classification/images/bayes-2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/07 Supervised Learning - Classification/images/bayes-2.png -------------------------------------------------------------------------------- /Lectures/07 Supervised Learning - Classification/images/bernouli.webp: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/07 Supervised Learning - Classification/images/bernouli.webp -------------------------------------------------------------------------------- /Lectures/07 Supervised Learning - Classification/images/case-1-cm.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/07 Supervised Learning - Classification/images/case-1-cm.png 
-------------------------------------------------------------------------------- /Lectures/07 Supervised Learning - Classification/images/case-2-cm.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/07 Supervised Learning - Classification/images/case-2-cm.png -------------------------------------------------------------------------------- /Lectures/07 Supervised Learning - Classification/images/det-gen.avif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/07 Supervised Learning - Classification/images/det-gen.avif -------------------------------------------------------------------------------- /Lectures/07 Supervised Learning - Classification/images/dog-clf.webp: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/07 Supervised Learning - Classification/images/dog-clf.webp -------------------------------------------------------------------------------- /Lectures/07 Supervised Learning - Classification/images/entropy.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/07 Supervised Learning - Classification/images/entropy.png -------------------------------------------------------------------------------- /Lectures/07 Supervised Learning - Classification/images/glm-dist.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/07 Supervised Learning - Classification/images/glm-dist.png -------------------------------------------------------------------------------- /Lectures/07 Supervised Learning - Classification/images/loan-cm.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/07 Supervised Learning - Classification/images/loan-cm.png -------------------------------------------------------------------------------- /Lectures/07 Supervised Learning - Classification/images/nb-dist.avif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/07 Supervised Learning - Classification/images/nb-dist.avif -------------------------------------------------------------------------------- /Lectures/07 Supervised Learning - Classification/images/nb-types.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/07 Supervised Learning - Classification/images/nb-types.jpg -------------------------------------------------------------------------------- /Lectures/07 Supervised Learning - Classification/images/softmax.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/07 Supervised Learning - Classification/images/softmax.png -------------------------------------------------------------------------------- /Lectures/07 Supervised Learning - Classification/images/svm-2d.avif: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/07 Supervised Learning - Classification/images/svm-2d.avif -------------------------------------------------------------------------------- /Lectures/07 Supervised Learning - Classification/images/svm-line.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/07 Supervised Learning - Classification/images/svm-line.png -------------------------------------------------------------------------------- /Lectures/08 Unsupervised Learning/images/density-based-clustering.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/08 Unsupervised Learning/images/density-based-clustering.gif -------------------------------------------------------------------------------- /Lectures/08 Unsupervised Learning/images/density-based-clustering.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/08 Unsupervised Learning/images/density-based-clustering.jpg -------------------------------------------------------------------------------- /Lectures/08 Unsupervised Learning/images/hierarchical-clustering.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/08 Unsupervised Learning/images/hierarchical-clustering.png -------------------------------------------------------------------------------- /Lectures/08 Unsupervised Learning/images/higher-dim-performance.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/08 Unsupervised Learning/images/higher-dim-performance.png -------------------------------------------------------------------------------- /Lectures/08 Unsupervised Learning/images/model-based-clustering.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/08 Unsupervised Learning/images/model-based-clustering.jpg -------------------------------------------------------------------------------- /Lectures/08 Unsupervised Learning/images/support-confidence-lift.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/08 Unsupervised Learning/images/support-confidence-lift.png -------------------------------------------------------------------------------- /Lectures/10 Neural Networks and Deep Learning/images/agentic-ai.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/10 Neural Networks and Deep Learning/images/agentic-ai.png -------------------------------------------------------------------------------- /Lectures/10 Neural Networks and Deep Learning/images/classic-dl.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/10 Neural Networks and Deep Learning/images/classic-dl.jpeg -------------------------------------------------------------------------------- /Lectures/10 Neural Networks and Deep 
Learning/images/cnn_banner.webp: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/10 Neural Networks and Deep Learning/images/cnn_banner.webp -------------------------------------------------------------------------------- /Lectures/10 Neural Networks and Deep Learning/images/mnist-cost-2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/10 Neural Networks and Deep Learning/images/mnist-cost-2.png -------------------------------------------------------------------------------- /Lectures/10 Neural Networks and Deep Learning/images/mnist-cost.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/10 Neural Networks and Deep Learning/images/mnist-cost.png -------------------------------------------------------------------------------- /Lectures/10 Neural Networks and Deep Learning/images/nn-history.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/10 Neural Networks and Deep Learning/images/nn-history.jpg -------------------------------------------------------------------------------- /Lectures/10 Neural Networks and Deep Learning/images/sgd-momentum.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/10 Neural Networks and Deep Learning/images/sgd-momentum.png -------------------------------------------------------------------------------- /Lectures/10 Neural Networks and Deep Learning/images/transformers.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/10 Neural Networks and Deep Learning/images/transformers.png -------------------------------------------------------------------------------- /Lectures/10 Neural Networks and Deep Learning/images/xor-problem.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/10 Neural Networks and Deep Learning/images/xor-problem.png -------------------------------------------------------------------------------- /Lectures/02 Probabilistic Modeling/images/tmp/standard-normal-dist.webp: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/02 Probabilistic Modeling/images/tmp/standard-normal-dist.webp -------------------------------------------------------------------------------- /Lectures/03 Optimization Techniques/images/tmp/height-optimization.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/03 Optimization Techniques/images/tmp/height-optimization.png -------------------------------------------------------------------------------- /Lectures/05 Supervised Learning - Regression/images/regularization.webp: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/05 Supervised Learning - 
Regression/images/regularization.webp -------------------------------------------------------------------------------- /Lectures/06 Fundamentals of Scikit-learn/images/column-transformer.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/06 Fundamentals of Scikit-learn/images/column-transformer.png -------------------------------------------------------------------------------- /Lectures/06 Fundamentals of Scikit-learn/images/column-transformer.webp: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/06 Fundamentals of Scikit-learn/images/column-transformer.webp -------------------------------------------------------------------------------- /Lectures/07 Supervised Learning - Classification/images/glm-dist-2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/07 Supervised Learning - Classification/images/glm-dist-2.png -------------------------------------------------------------------------------- /Lectures/07 Supervised Learning - Classification/images/glm-origin.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/07 Supervised Learning - Classification/images/glm-origin.jpg -------------------------------------------------------------------------------- /Lectures/07 Supervised Learning - Classification/images/lazy-eager.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/07 Supervised Learning - Classification/images/lazy-eager.jpg -------------------------------------------------------------------------------- /Lectures/07 Supervised Learning - Classification/images/ml-models.webp: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/07 Supervised Learning - Classification/images/ml-models.webp -------------------------------------------------------------------------------- /Lectures/07 Supervised Learning - Classification/images/one-vs-rest.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/07 Supervised Learning - Classification/images/one-vs-rest.jpg -------------------------------------------------------------------------------- /Lectures/07 Supervised Learning - Classification/images/one-vs-rest.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/07 Supervised Learning - Classification/images/one-vs-rest.png -------------------------------------------------------------------------------- /Lectures/08 Unsupervised Learning/images/optics-reachability-plot.webp: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/08 Unsupervised Learning/images/optics-reachability-plot.webp -------------------------------------------------------------------------------- /Lectures/09 Ensemble Methods/images/decision-tree-vs-random-forest.png: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/09 Ensemble Methods/images/decision-tree-vs-random-forest.png -------------------------------------------------------------------------------- /Lectures/10 Neural Networks and Deep Learning/images/batch-min-sg-2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/10 Neural Networks and Deep Learning/images/batch-min-sg-2.png -------------------------------------------------------------------------------- /Lectures/10 Neural Networks and Deep Learning/images/batch-mini-sg.webp: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/10 Neural Networks and Deep Learning/images/batch-mini-sg.webp -------------------------------------------------------------------------------- /Lectures/10 Neural Networks and Deep Learning/images/brain-process.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/10 Neural Networks and Deep Learning/images/brain-process.png -------------------------------------------------------------------------------- /Lectures/10 Neural Networks and Deep Learning/images/early-stopping.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/10 Neural Networks and Deep Learning/images/early-stopping.png -------------------------------------------------------------------------------- /Lectures/10 Neural Networks and Deep Learning/images/hidden-layers.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/10 Neural Networks and Deep Learning/images/hidden-layers.jpg -------------------------------------------------------------------------------- /Lectures/10 Neural Networks and Deep Learning/images/neuron-brain.webp: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/10 Neural Networks and Deep Learning/images/neuron-brain.webp -------------------------------------------------------------------------------- /Lectures/10 Neural Networks and Deep Learning/images/next-word-pred.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/10 Neural Networks and Deep Learning/images/next-word-pred.png -------------------------------------------------------------------------------- /Lectures/10 Neural Networks and Deep Learning/images/nn-history-2.avif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/10 Neural Networks and Deep Learning/images/nn-history-2.avif -------------------------------------------------------------------------------- /Lectures/10 Neural Networks and Deep Learning/images/nn-history-3.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/10 Neural Networks and Deep Learning/images/nn-history-3.jpeg 
-------------------------------------------------------------------------------- /Lectures/10 Neural Networks and Deep Learning/images/nn-history-4.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/10 Neural Networks and Deep Learning/images/nn-history-4.jpeg -------------------------------------------------------------------------------- /Lectures/10 Neural Networks and Deep Learning/images/self-attention.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/10 Neural Networks and Deep Learning/images/self-attention.png -------------------------------------------------------------------------------- /Lectures/10 Neural Networks and Deep Learning/images/training-loop.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/10 Neural Networks and Deep Learning/images/training-loop.png -------------------------------------------------------------------------------- /Lectures/02 Probabilistic Modeling/images/tmp/bayesian-estimate-path.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/02 Probabilistic Modeling/images/tmp/bayesian-estimate-path.jpg -------------------------------------------------------------------------------- /Lectures/02 Probabilistic Modeling/images/tmp/frequentist-vs-bayesian.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/02 Probabilistic Modeling/images/tmp/frequentist-vs-bayesian.png -------------------------------------------------------------------------------- /Lectures/03 Optimization Techniques/images/tmp/optimization_learning.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/03 Optimization Techniques/images/tmp/optimization_learning.png -------------------------------------------------------------------------------- /Lectures/05 Supervised Learning - Regression/images/adjusted-r-squared.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/05 Supervised Learning - Regression/images/adjusted-r-squared.jpg -------------------------------------------------------------------------------- /Lectures/05 Supervised Learning - Regression/images/bias-variance-2.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/05 Supervised Learning - Regression/images/bias-variance-2.jpeg -------------------------------------------------------------------------------- /Lectures/05 Supervised Learning - Regression/images/stochastic-batch.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/05 Supervised Learning - Regression/images/stochastic-batch.png -------------------------------------------------------------------------------- /Lectures/07 Supervised Learning - Classification/images/cm-multiclass.png: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/07 Supervised Learning - Classification/images/cm-multiclass.png -------------------------------------------------------------------------------- /Lectures/07 Supervised Learning - Classification/images/decision-tree.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/07 Supervised Learning - Classification/images/decision-tree.png -------------------------------------------------------------------------------- /Lectures/07 Supervised Learning - Classification/images/disease-metric.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/07 Supervised Learning - Classification/images/disease-metric.png -------------------------------------------------------------------------------- /Lectures/07 Supervised Learning - Classification/images/instance-based.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/07 Supervised Learning - Classification/images/instance-based.jpg -------------------------------------------------------------------------------- /Lectures/07 Supervised Learning - Classification/images/roc-comparison.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/07 Supervised Learning - Classification/images/roc-comparison.png -------------------------------------------------------------------------------- /Lectures/07 Supervised Learning - Classification/images/svm-hyperplane.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/07 Supervised Learning - Classification/images/svm-hyperplane.png -------------------------------------------------------------------------------- /Lectures/07 Supervised Learning - Classification/images/svm-kernel-1.avif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/07 Supervised Learning - Classification/images/svm-kernel-1.avif -------------------------------------------------------------------------------- /Lectures/07 Supervised Learning - Classification/images/svm-kernel-2.avif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/07 Supervised Learning - Classification/images/svm-kernel-2.avif -------------------------------------------------------------------------------- /Lectures/07 Supervised Learning - Classification/images/svm-kernel-3.avif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/07 Supervised Learning - Classification/images/svm-kernel-3.avif -------------------------------------------------------------------------------- /Lectures/08 Unsupervised Learning/images/clustering-distance-measure.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/08 
Unsupervised Learning/images/clustering-distance-measure.png -------------------------------------------------------------------------------- /Lectures/08 Unsupervised Learning/images/inter-intra-cluster-distance.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/08 Unsupervised Learning/images/inter-intra-cluster-distance.png -------------------------------------------------------------------------------- /Lectures/08 Unsupervised Learning/images/optics-reachability-plot-viz.webp: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/08 Unsupervised Learning/images/optics-reachability-plot-viz.webp -------------------------------------------------------------------------------- /Lectures/10 Neural Networks and Deep Learning/images/nn-input-output.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/10 Neural Networks and Deep Learning/images/nn-input-output.png -------------------------------------------------------------------------------- /Lectures/10 Neural Networks and Deep Learning/images/rnn-hidden-state.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/10 Neural Networks and Deep Learning/images/rnn-hidden-state.png -------------------------------------------------------------------------------- /Lectures/10 Neural Networks and Deep Learning/images/weight-derivation.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/10 Neural Networks and Deep Learning/images/weight-derivation.png -------------------------------------------------------------------------------- /Lectures/02 Probabilistic Modeling/images/tmp/frequentist-estimate-path.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/02 Probabilistic Modeling/images/tmp/frequentist-estimate-path.jpg -------------------------------------------------------------------------------- /Lectures/05 Supervised Learning - Regression/images/ridge-lasso-elastic.webp: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/05 Supervised Learning - Regression/images/ridge-lasso-elastic.webp -------------------------------------------------------------------------------- /Lectures/07 Supervised Learning - Classification/images/clf-metrics-viz.webp: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/07 Supervised Learning - Classification/images/clf-metrics-viz.webp -------------------------------------------------------------------------------- /Lectures/07 Supervised Learning - Classification/images/distance-metrics.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/07 Supervised Learning - Classification/images/distance-metrics.png -------------------------------------------------------------------------------- 
/Lectures/07 Supervised Learning - Classification/images/instance-based.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/07 Supervised Learning - Classification/images/instance-based.jpeg -------------------------------------------------------------------------------- /Lectures/07 Supervised Learning - Classification/images/linear-nonlinear.ppm: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/07 Supervised Learning - Classification/images/linear-nonlinear.ppm -------------------------------------------------------------------------------- /Lectures/07 Supervised Learning - Classification/images/stratified-kfold.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/07 Supervised Learning - Classification/images/stratified-kfold.png -------------------------------------------------------------------------------- /Lectures/08 Unsupervised Learning/images/cluster-from-reachability-plot.webp: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/08 Unsupervised Learning/images/cluster-from-reachability-plot.webp -------------------------------------------------------------------------------- /Lectures/10 Neural Networks and Deep Learning/images/data-agumentation.webp: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/10 Neural Networks and Deep Learning/images/data-agumentation.webp -------------------------------------------------------------------------------- /Lectures/10 Neural Networks and Deep Learning/images/observability-tools.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/10 Neural Networks and Deep Learning/images/observability-tools.jpg -------------------------------------------------------------------------------- /Lectures/10 Neural Networks and Deep Learning/images/single-multi-layer.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/10 Neural Networks and Deep Learning/images/single-multi-layer.png -------------------------------------------------------------------------------- /Lectures/10 Neural Networks and Deep Learning/images/vanishing-gradient.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/10 Neural Networks and Deep Learning/images/vanishing-gradient.jpg -------------------------------------------------------------------------------- /Lectures/10 Neural Networks and Deep Learning/images/vanishing-gradient.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/10 Neural Networks and Deep Learning/images/vanishing-gradient.png -------------------------------------------------------------------------------- /Lectures/05 Supervised Learning - Regression/images/bias-variance-tradeoff.png: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/05 Supervised Learning - Regression/images/bias-variance-tradeoff.png -------------------------------------------------------------------------------- /Lectures/07 Supervised Learning - Classification/images/game-decision-tree.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/07 Supervised Learning - Classification/images/game-decision-tree.png -------------------------------------------------------------------------------- /Lectures/07 Supervised Learning - Classification/images/logistic-learning.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/07 Supervised Learning - Classification/images/logistic-learning.gif -------------------------------------------------------------------------------- /Lectures/07 Supervised Learning - Classification/images/logistic-sigmoid.webp: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/07 Supervised Learning - Classification/images/logistic-sigmoid.webp -------------------------------------------------------------------------------- /Lectures/07 Supervised Learning - Classification/images/logistic-vs-linear.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/07 Supervised Learning - Classification/images/logistic-vs-linear.png -------------------------------------------------------------------------------- /Lectures/07 Supervised Learning - Classification/images/metrics-comparison.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/07 Supervised Learning - Classification/images/metrics-comparison.jpg -------------------------------------------------------------------------------- /Lectures/07 Supervised Learning - Classification/images/soft-hard-margin.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/07 Supervised Learning - Classification/images/soft-hard-margin.jpeg -------------------------------------------------------------------------------- /Lectures/07 Supervised Learning - Classification/images/soft-vs-hard-margin.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/07 Supervised Learning - Classification/images/soft-vs-hard-margin.png -------------------------------------------------------------------------------- /Lectures/07 Supervised Learning - Classification/images/supervised-learning.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/07 Supervised Learning - Classification/images/supervised-learning.png -------------------------------------------------------------------------------- /Lectures/07 Supervised Learning - Classification/images/svm-bad-good-margin.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/07 Supervised Learning - Classification/images/svm-bad-good-margin.png -------------------------------------------------------------------------------- /Lectures/07 Supervised Learning - Classification/images/svm-support-vector.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/07 Supervised Learning - Classification/images/svm-support-vector.jpg -------------------------------------------------------------------------------- /Lectures/10 Neural Networks and Deep Learning/images/activation-functions-2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/10 Neural Networks and Deep Learning/images/activation-functions-2.png -------------------------------------------------------------------------------- /Lectures/10 Neural Networks and Deep Learning/images/activation-functions-3.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/10 Neural Networks and Deep Learning/images/activation-functions-3.png -------------------------------------------------------------------------------- /Lectures/10 Neural Networks and Deep Learning/images/activation-functions.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/10 Neural Networks and Deep Learning/images/activation-functions.png -------------------------------------------------------------------------------- /Lectures/10 Neural Networks and Deep Learning/images/backward-propagation.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/10 Neural Networks and Deep Learning/images/backward-propagation.jpg -------------------------------------------------------------------------------- /Lectures/10 Neural Networks and Deep Learning/images/backward-propagation.webp: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/10 Neural Networks and Deep Learning/images/backward-propagation.webp -------------------------------------------------------------------------------- /Lectures/10 Neural Networks and Deep Learning/images/data-agumentation-text.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/10 Neural Networks and Deep Learning/images/data-agumentation-text.jpg -------------------------------------------------------------------------------- /Lectures/10 Neural Networks and Deep Learning/images/data-agumentation-text.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/10 Neural Networks and Deep Learning/images/data-agumentation-text.png -------------------------------------------------------------------------------- /Lectures/10 Neural Networks and Deep Learning/images/forward-propagation.webp: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/10 
Neural Networks and Deep Learning/images/forward-propagation.webp -------------------------------------------------------------------------------- /Lectures/10 Neural Networks and Deep Learning/images/fully-connected-layer.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/10 Neural Networks and Deep Learning/images/fully-connected-layer.png -------------------------------------------------------------------------------- /Lectures/10 Neural Networks and Deep Learning/images/graph-neural-network.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/10 Neural Networks and Deep Learning/images/graph-neural-network.jpeg -------------------------------------------------------------------------------- /Lectures/10 Neural Networks and Deep Learning/images/hyperparameter-tuning.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/10 Neural Networks and Deep Learning/images/hyperparameter-tuning.jpg -------------------------------------------------------------------------------- /Lectures/10 Neural Networks and Deep Learning/images/iteration-epoch-batch.ppm: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/10 Neural Networks and Deep Learning/images/iteration-epoch-batch.ppm -------------------------------------------------------------------------------- /Lectures/07 Supervised Learning - Classification/images/logistic-vs-linear-2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/07 Supervised Learning - Classification/images/logistic-vs-linear-2.png -------------------------------------------------------------------------------- /Lectures/08 Unsupervised Learning/images/frequent-itemset-mining-algorithms.webp: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/08 Unsupervised Learning/images/frequent-itemset-mining-algorithms.webp -------------------------------------------------------------------------------- /Lectures/10 Neural Networks and Deep Learning/images/hyperparameter-tuning-2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/10 Neural Networks and Deep Learning/images/hyperparameter-tuning-2.png -------------------------------------------------------------------------------- /Lectures/10 Neural Networks and Deep Learning/images/optimizers-performance.webp: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/10 Neural Networks and Deep Learning/images/optimizers-performance.webp -------------------------------------------------------------------------------- /Lectures/10 Neural Networks and Deep Learning/images/single-layer-perceptron.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/10 Neural Networks and Deep 
Learning/images/single-layer-perceptron.png -------------------------------------------------------------------------------- /Lectures/05 Supervised Learning - Regression/images/stochastic-batch-minibatch.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/05 Supervised Learning - Regression/images/stochastic-batch-minibatch.jpg -------------------------------------------------------------------------------- /Lectures/05 Supervised Learning - Regression/images/stochastic-batch-minibatch.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/05 Supervised Learning - Regression/images/stochastic-batch-minibatch.png -------------------------------------------------------------------------------- /Lectures/07 Supervised Learning - Classification/images/classification-metrics.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/07 Supervised Learning - Classification/images/classification-metrics.png -------------------------------------------------------------------------------- /Lectures/07 Supervised Learning - Classification/images/clf-performance-metrics.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/07 Supervised Learning - Classification/images/clf-performance-metrics.png -------------------------------------------------------------------------------- /Lectures/07 Supervised Learning - Classification/images/model-selection-training.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/07 Supervised Learning - Classification/images/model-selection-training.png -------------------------------------------------------------------------------- /Lectures/07 Supervised Learning - Classification/images/naive-bayes-classifier.webp: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/07 Supervised Learning - Classification/images/naive-bayes-classifier.webp -------------------------------------------------------------------------------- /Lectures/07 Supervised Learning - Classification/images/parametric-nonparametric.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/07 Supervised Learning - Classification/images/parametric-nonparametric.png -------------------------------------------------------------------------------- /Lectures/07 Supervised Learning - Classification/images/regression-to-classification.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/07 Supervised Learning - Classification/images/regression-to-classification.png -------------------------------------------------------------------------------- /Lectures/07 Supervised Learning - Classification/images/regression-vs-classification.avif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytopia/Machine-Learning/HEAD/Lectures/07 
Supervised Learning - Classification/images/regression-vs-classification.avif -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | # Byte-compiled / optimized / DLL files 2 | __pycache__/ 3 | *.py[cod] 4 | *$py.class 5 | 6 | # C extensions 7 | *.so 8 | 9 | # Distribution / packaging 10 | .Python 11 | build/ 12 | develop-eggs/ 13 | dist/ 14 | downloads/ 15 | eggs/ 16 | .eggs/ 17 | lib/ 18 | lib64/ 19 | parts/ 20 | sdist/ 21 | var/ 22 | wheels/ 23 | share/python-wheels/ 24 | *.egg-info/ 25 | .installed.cfg 26 | *.egg 27 | MANIFEST 28 | 29 | # PyInstaller 30 | # Usually these files are written by a python script from a template 31 | # before PyInstaller builds the exe, so as to inject date/other infos into it. 32 | *.manifest 33 | *.spec 34 | 35 | # Installer logs 36 | pip-log.txt 37 | pip-delete-this-directory.txt 38 | 39 | # Unit test / coverage reports 40 | htmlcov/ 41 | .tox/ 42 | .nox/ 43 | .coverage 44 | .coverage.* 45 | .cache 46 | nosetests.xml 47 | coverage.xml 48 | *.cover 49 | *.py,cover 50 | .hypothesis/ 51 | .pytest_cache/ 52 | cover/ 53 | 54 | # Translations 55 | *.mo 56 | *.pot 57 | 58 | # Django stuff: 59 | *.log 60 | local_settings.py 61 | db.sqlite3 62 | db.sqlite3-journal 63 | 64 | # Flask stuff: 65 | instance/ 66 | .webassets-cache 67 | 68 | # Scrapy stuff: 69 | .scrapy 70 | 71 | # Sphinx documentation 72 | docs/_build/ 73 | 74 | # PyBuilder 75 | .pybuilder/ 76 | target/ 77 | 78 | # Jupyter Notebook 79 | .ipynb_checkpoints 80 | 81 | # IPython 82 | profile_default/ 83 | ipython_config.py 84 | 85 | # pyenv 86 | # For a library or package, you might want to ignore these files since the code is 87 | # intended to run in multiple environments; otherwise, check them in: 88 | # .python-version 89 | 90 | # pipenv 91 | # According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control. 92 | # However, in case of collaboration, if having platform-specific dependencies or dependencies 93 | # having no cross-platform support, pipenv may install dependencies that don't work, or not 94 | # install all needed dependencies. 95 | #Pipfile.lock 96 | 97 | # poetry 98 | # Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control. 99 | # This is especially recommended for binary packages to ensure reproducibility, and is more 100 | # commonly ignored for libraries. 101 | # https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control 102 | #poetry.lock 103 | 104 | # pdm 105 | # Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control. 106 | #pdm.lock 107 | # pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it 108 | # in version control. 109 | # https://pdm.fming.dev/#use-with-ide 110 | .pdm.toml 111 | 112 | # PEP 582; used by e.g. 
github.com/David-OConnor/pyflow and github.com/pdm-project/pdm 113 | __pypackages__/ 114 | 115 | # Celery stuff 116 | celerybeat-schedule 117 | celerybeat.pid 118 | 119 | # SageMath parsed files 120 | *.sage.py 121 | 122 | # Environments 123 | .env 124 | .venv 125 | env/ 126 | venv/ 127 | ENV/ 128 | env.bak/ 129 | venv.bak/ 130 | 131 | # Spyder project settings 132 | .spyderproject 133 | .spyproject 134 | 135 | # Rope project settings 136 | .ropeproject 137 | 138 | # mkdocs documentation 139 | /site 140 | 141 | # mypy 142 | .mypy_cache/ 143 | .dmypy.json 144 | dmypy.json 145 | 146 | # Pyre type checker 147 | .pyre/ 148 | 149 | # pytype static type analyzer 150 | .pytype/ 151 | 152 | # Cython debug symbols 153 | cython_debug/ 154 | 155 | # PyCharm 156 | # JetBrains specific template is maintained in a separate JetBrains.gitignore that can 157 | # be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore 158 | # and can be added to the global gitignore or merged into this file. For a more nuclear 159 | # option (not recommended) you can uncomment the following to ignore the entire idea folder. 160 | #.idea/ 161 | 162 | *.DS_Store 163 | test.ipynb 164 | *.joblib 165 | *.pkl 166 | mlruns/ 167 | -------------------------------------------------------------------------------- /Lectures/01 Introduction to Machine Learning/01 Machine Learning 101.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "" 8 | ] 9 | }, 10 | { 11 | "cell_type": "markdown", 12 | "metadata": {}, 13 | "source": [ 14 | "# Machine Learning 101" 15 | ] 16 | }, 17 | { 18 | "cell_type": "markdown", 19 | "metadata": {}, 20 | "source": [ 21 | "Welcome to the exciting world of machine learning! In this course, we will dive into the fundamental concepts and techniques that form the foundation of machine learning. By the end of this course, you will have a solid understanding of various machine learning algorithms, their applications, and how to implement them using Python.\n" 22 | ] 23 | }, 24 | { 25 | "cell_type": "markdown", 26 | "metadata": {}, 27 | "source": [ 28 | "Before we begin, I want to ensure that everyone has the necessary background knowledge. If you are new to machine learning or need a refresher, I highly recommend watching the first two chapters of my previous course, \"Machine Learning 101\":\n", 29 | "\n", 30 | "1. Introduction to ML\n", 31 | "2. ML Fundamentals\n", 32 | "\n", 33 | "You can watch it on YouTube using the following links: [Machine Learning 101 Playlist](https://www.youtube.com/watch?v=at-QcCMPW1w&list=PLawa3DOhc_42Dx-SEPJJNO_6bphQDfC2l)." 34 | ] 35 | }, 36 | { 37 | "cell_type": "markdown", 38 | "metadata": {}, 39 | "source": [ 40 | "" 41 | ] 42 | }, 43 | { 44 | "cell_type": "markdown", 45 | "metadata": {}, 46 | "source": [ 47 | "These chapters will provide you with a gentle introduction to machine learning, its different types, and the basic terminology used in the field. 
They will help you understand the core concepts and lay the groundwork for the topics we will cover in this course.\n" 48 | ] 49 | }, 50 | { 51 | "cell_type": "markdown", 52 | "metadata": {}, 53 | "source": [ 54 | "**Table of contents** \n", 55 | "- [Introduction to ML](#toc1_) \n", 56 | "- [ML Fundamentals](#toc2_) \n", 57 | "\n", 58 | "\n", 65 | "" 66 | ] 67 | }, 68 | { 69 | "cell_type": "markdown", 70 | "metadata": {}, 71 | "source": [ 72 | "## [Introduction to ML](#toc0_)\n" 73 | ] 74 | }, 75 | { 76 | "cell_type": "markdown", 77 | "metadata": {}, 78 | "source": [ 79 | "In the \"Introduction to ML\" chapter, you will learn:\n", 80 | "\n", 81 | "- AI History and Timeline\n", 82 | "- What machine learning is and why it is important\n", 83 | "- The three main types of machine learning: supervised, unsupervised, and reinforcement learning\n", 84 | "- Real-world applications of machine learning across various domains\n", 85 | "- Machine learning workflow and the steps involved in building a machine learning model" 86 | ] 87 | }, 88 | { 89 | "cell_type": "markdown", 90 | "metadata": {}, 91 | "source": [ 92 | "## [ML Fundamentals](#toc0_)\n" 93 | ] 94 | }, 95 | { 96 | "cell_type": "markdown", 97 | "metadata": {}, 98 | "source": [ 99 | "In the \"ML Fundamentals\" chapter, you will explore:\n", 100 | "\n", 101 | "- Data Processing and Cleaning\n", 102 | "- Feature Selection and Extraction\n", 103 | "- The difference between training, validation, and testing sets\n", 104 | "- Evaluation metrics used to assess the performance of machine learning models" 105 | ] 106 | }, 107 | { 108 | "cell_type": "markdown", 109 | "metadata": {}, 110 | "source": [ 111 | "I strongly encourage you to watch these two chapters before proceeding with this course. They will provide you with the necessary foundation and help you get the most out of the upcoming lectures.\n" 112 | ] 113 | }, 114 | { 115 | "cell_type": "markdown", 116 | "metadata": {}, 117 | "source": [ 118 | "If you have already completed the \"Machine Learning 101\" course or have a solid understanding of the basic concepts, you can dive right into the current course. However, if you feel the need to revisit any concepts, feel free to refer back to the relevant chapters in the previous course.\n" 119 | ] 120 | }, 121 | { 122 | "cell_type": "markdown", 123 | "metadata": {}, 124 | "source": [ 125 | "Throughout this course, we will build upon the knowledge gained from \"Machine Learning 101\" and explore more advanced topics and techniques. We will work on hands-on projects, implement various algorithms from scratch, and use popular Python libraries for machine learning.\n" 126 | ] 127 | }, 128 | { 129 | "cell_type": "markdown", 130 | "metadata": {}, 131 | "source": [ 132 | "Get ready to embark on an exciting journey into the world of machine learning! Let's start by setting up our Python environment and ensuring we have all the necessary dependencies installed. 
In the next lecture, we will discuss the course outline and the topics we will cover in detail.\n" 133 | ] 134 | } 135 | ], 136 | "metadata": { 137 | "language_info": { 138 | "name": "python" 139 | } 140 | }, 141 | "nbformat": 4, 142 | "nbformat_minor": 2 143 | } 144 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | 2 | 3 | ![GitHub last commit](https://img.shields.io/github/last-commit/pytopia/machine-learning) 4 | ![GitHub repo size](https://img.shields.io/github/repo-size/pytopia/machine-learning) 5 | ![GitHub code size in bytes](https://img.shields.io/github/languages/code-size/pytopia/machine-learning) 6 | ![GitHub Repo stars](https://img.shields.io/github/stars/pytopia/machine-learning) 7 | ![GitHub top language](https://img.shields.io/github/languages/top/pytopia/machine-learning) 8 | [![Website](https://img.shields.io/badge/Visit-Website-blue)](https://www.pytopia.ai) 9 | [![Telegram](https://img.shields.io/badge/Join-Telegram-blue)](https://t.me/pytopia_ai) 10 | [![Instagram](https://img.shields.io/badge/Follow-Instagram-red)](https://instagram.com/pytopia.ai) 11 | [![YouTube](https://img.shields.io/badge/Subscribe-YouTube-red)](https://www.youtube.com/@pytopia) 12 | [![LinkedIn](https://img.shields.io/badge/Follow-LinkedIn-blue)](https://linkedin.com/company/pytopia) 13 | [![Twitter](https://img.shields.io/badge/Follow-Twitter-blue)](https://twitter.com/pytopia_ai) 14 | 15 | Welcome to the Machine Learning Fundamentals course repository! This comprehensive course is designed to provide you with a solid foundation in the essential concepts, techniques, and algorithms that form the backbone of Machine Learning. Whether you're a beginner taking your first steps into the world of AI or an aspiring data scientist looking to enhance your skills, this course will equip you with the knowledge and practical experience needed to excel in the field. 16 | 17 | # 🎯 Course Objectives 18 | 19 | By the end of this course, you will: 20 | 21 | - Understand the fundamental principles and concepts of Machine Learning 22 | - Master probabilistic modeling and optimization techniques 23 | - Learn various supervised and unsupervised learning algorithms 24 | - Gain hands-on experience in implementing Machine Learning models 25 | - Acquire skills in feature engineering, selection, and model evaluation 26 | - Explore the basics of Neural Networks and Deep Learning 27 | - Apply your knowledge to real-world Machine Learning projects 28 | - Prepare yourself for advanced topics in the upcoming Advanced Machine Learning and Deep Learning courses 29 | 30 | # 📚 Course Contents 31 | 32 | The course is divided into the following chapters: 33 | 34 | 1. Introduction to Machine Learning 35 | 2. Probabilistic Modeling 36 | 3. Optimization Techniques 37 | 4. Parameter Estimation 38 | 5. Supervised Learning - Regression 39 | 6. Supervised Learning - Classification 40 | 7. Unsupervised Learning 41 | 8. Model Selection and Evaluation 42 | 9. Feature Engineering and Selection 43 | 10. Introduction to Neural Networks 44 | 11. Deep Learning Fundamentals 45 | 12. Practical Machine Learning Projects 46 | 47 | Each chapter includes a combination of theoretical explanations, practical examples, and hands-on exercises to reinforce your understanding of the concepts and their applications in real-world scenarios. 
The course culminates with a series of practical Machine Learning projects that allow you to apply your newly acquired skills to solve real-world problems, giving you valuable experience and a portfolio of projects to showcase. 48 | 49 | # ✅ Prerequisites 50 | 51 | To get the most out of this course, you should have: 52 | 53 | - Basic knowledge of programming (preferably in Python) 54 | - Familiarity with basic math, matrices, and statistics 55 | - Familiarity with data manipulation and visualization libraries (e.g., NumPy, Pandas, Matplotlib) 56 | - Enthusiasm to learn and explore the exciting field of Machine Learning! 57 | 58 | # 📚 Learn with Us! 59 | We also offer a [course on these contents](https://www.pytopia.ai/courses/machine-learning) where learners can interact with peers and instructors, ask questions, and participate in online coding sessions. By registering for the course, you also gain access to our dedicated Telegram group. Enroll now and start learning! Here are some useful links: 60 | 61 | - [Machine Learning Course](https://www.pytopia.ai/courses/machine-learning) 62 | - [Pytopia Public Telegram Group](https://t.me/pytopia_ai) 63 | - [Pytopia Website](https://www.pytopia.ai/) 64 | 65 | [](https://www.pytopia.ai/courses/machine-learning) 66 | 67 | # 🚀 Getting Started 68 | 69 | To get started with the course, follow these steps: 70 | 71 | 1. Clone this repository to your local machine using the following command: 72 | ``` 73 | git clone https://github.com/pytopia/machine-learning.git 74 | ``` 75 | 76 | 2. Navigate to the cloned repository: 77 | ``` 78 | cd machine-learning 79 | ``` 80 | 81 | 3. Set up the required dependencies and environment by following the instructions in the `setup.md` file. 82 | 83 | 4. Start exploring the course materials, beginning with the first chapter. 84 | 85 | Throughout the course, you will gain a deep understanding of the fundamental concepts and techniques that underpin Machine Learning. By completing the practical projects, you will develop the skills and confidence to tackle real-world Machine Learning challenges. This course sets the stage for your journey into more advanced topics covered in the upcoming Advanced Machine Learning and Deep Learning courses. 86 | 87 | # 📞 Contact Information 88 | 89 | Feel free to reach out to us! 90 | 91 | - 🌐 Website: [pytopia.ai](https://www.pytopia.ai) 92 | - 💬 Telegram: [pytopia_ai](https://t.me/pytopia_ai) 93 | - 🎥 YouTube: [pytopia](https://www.youtube.com/@pytopia) 94 | - 📸 Instagram: [pytopia.ai](https://www.instagram.com/pytopia.ai) 95 | - 🎓 LinkedIn: [pytopia](https://www.linkedin.com/in/pytopia) 96 | - 🐦 Twitter: [pytopia_ai](https://twitter.com/pytopia_ai) 97 | - 📧 Email: [pytopia.ai@gmail.com](mailto:pytopia.ai@gmail.com) 98 | -------------------------------------------------------------------------------- /Lectures/02 Probabilistic Modeling/01 Introduction to Probabilistic Modeling.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "" 8 | ] 9 | }, 10 | { 11 | "cell_type": "markdown", 12 | "metadata": {}, 13 | "source": [ 14 | "# Introduction to Probabilistic Modeling" 15 | ] 16 | }, 17 | { 18 | "cell_type": "markdown", 19 | "metadata": {}, 20 | "source": [ 21 | "In this lecture, we will introduce the concept of probabilistic modeling and explore its significance in the field of machine learning. 
We will discuss the key concepts and terminology associated with probabilistic modeling and provide examples to illustrate its applications.\n" 22 | ] 23 | }, 24 | { 25 | "cell_type": "markdown", 26 | "metadata": {}, 27 | "source": [ 28 | "Probabilistic modeling is a fundamental approach in machine learning that allows us to make informed decisions by incorporating uncertainty and prior knowledge into our models. By the end of this lecture, you will have a solid understanding of what probabilistic modeling is and why it is crucial in solving various machine learning problems.\n" 29 | ] 30 | }, 31 | { 32 | "cell_type": "markdown", 33 | "metadata": {}, 34 | "source": [ 35 | "Probabilistic modeling is a mathematical framework that uses probability theory to represent and reason about uncertainty in data and models. It involves building models that capture the probabilistic relationships between variables and using these models to make predictions or decisions.\n" 36 | ] 37 | }, 38 | { 39 | "cell_type": "markdown", 40 | "metadata": {}, 41 | "source": [ 42 | "**Probabilistic modeling can be defined as the process of constructing mathematical models that describe the probabilistic relationships between random variables**. These models aim to represent the uncertainty and variability present in real-world data and enable us to make probabilistic predictions or decisions.\n" 43 | ] 44 | }, 45 | { 46 | "cell_type": "markdown", 47 | "metadata": {}, 48 | "source": [ 49 | "In probabilistic modeling, we treat the data and the model parameters as random variables with associated probability distributions. By incorporating probability distributions, we can quantify the uncertainty in our data and model predictions.\n" 50 | ] 51 | }, 52 | { 53 | "cell_type": "markdown", 54 | "metadata": {}, 55 | "source": [ 56 | "To understand probabilistic modeling, let's familiarize ourselves with some key concepts and terminology:\n", 57 | "\n", 58 | "1. **Random Variables**: A random variable is a variable whose value is subject to chance or uncertainty. It can take on different values with associated probabilities. Random variables can be discrete (taking on a finite or countable number of values) or continuous (taking on an uncountable number of values).\n", 59 | "\n", 60 | "2. **Probability Distributions**: A probability distribution is a mathematical function that describes the likelihood of a random variable taking on different values. It assigns probabilities to the possible outcomes of a random variable. Examples of probability distributions include the Bernoulli distribution, Gaussian distribution, and Poisson distribution.\n", 61 | "\n", 62 | "3. **Joint Probability**: Joint probability refers to the probability of two or more events occurring simultaneously. It captures the relationship between multiple random variables and their combined probabilities.\n", 63 | "\n", 64 | "4. **Conditional Probability**: Conditional probability is the probability of an event occurring given that another event has already occurred. It allows us to update our beliefs about a random variable based on new information or evidence.\n", 65 | "\n", 66 | "5. **Independence**: Independence is a property of random variables where the occurrence of one event does not affect the probability of another event. If two random variables are independent, their joint probability is the product of their individual probabilities.\n", 67 | "\n", 68 | "6. 
**Bayes' Theorem**: Bayes' theorem is a fundamental rule in probability theory that describes how to update probabilities based on new evidence. It relates the conditional probabilities of events and allows us to make probabilistic inferences.\n" 69 | ] 70 | }, 71 | { 72 | "cell_type": "markdown", 73 | "metadata": {}, 74 | "source": [ 75 | "These concepts form the foundation of probabilistic modeling and will be explored in more detail throughout this chapter.\n" 76 | ] 77 | }, 78 | { 79 | "cell_type": "markdown", 80 | "metadata": {}, 81 | "source": [ 82 | "In the next section, we will discuss the importance of probabilistic modeling in machine learning and how it enables us to make informed decisions in the presence of uncertainty." 83 | ] 84 | }, 85 | { 86 | "cell_type": "markdown", 87 | "metadata": {}, 88 | "source": [ 89 | "**Table of contents** \n", 90 | "- [Importance of Probabilistic Modeling in Machine Learning](#toc1_) \n", 91 | " - [Dealing with Uncertainty in Data](#toc1_1_) \n", 92 | " - [Making Informed Decisions Based on Probabilities](#toc1_2_) \n", 93 | " - [Incorporating Prior Knowledge and Beliefs](#toc1_3_) \n", 94 | "- [Probability Theory Basics](#toc2_) \n", 95 | " - [Random Variables](#toc2_1_) \n", 96 | " - [Probability Distributions](#toc2_2_) \n", 97 | " - [Joint, Marginal, and Conditional Probability](#toc2_3_) \n", 98 | " - [Independence](#toc2_4_) \n", 99 | "- [Advantages and Limitations of Probabilistic Modeling](#toc3_) \n", 100 | " - [Advantages](#toc3_1_) \n", 101 | " - [Limitations](#toc3_2_) \n", 102 | "\n", 103 | "\n", 110 | "" 111 | ] 112 | }, 113 | { 114 | "cell_type": "markdown", 115 | "metadata": {}, 116 | "source": [ 117 | "## [Importance of Probabilistic Modeling in Machine Learning](#toc0_)" 118 | ] 119 | }, 120 | { 121 | "cell_type": "markdown", 122 | "metadata": {}, 123 | "source": [ 124 | "Probabilistic modeling plays a crucial role in machine learning by providing a principled approach to deal with uncertainty, make informed decisions, and incorporate prior knowledge. Let's explore each of these aspects in more detail.\n" 125 | ] 126 | }, 127 | { 128 | "cell_type": "markdown", 129 | "metadata": {}, 130 | "source": [ 131 | "" 132 | ] 133 | }, 134 | { 135 | "cell_type": "markdown", 136 | "metadata": {}, 137 | "source": [ 138 | "### [Dealing with Uncertainty in Data](#toc0_)\n" 139 | ] 140 | }, 141 | { 142 | "cell_type": "markdown", 143 | "metadata": {}, 144 | "source": [ 145 | "Real-world data often contains uncertainty, noise, and incompleteness. Probabilistic modeling allows us to explicitly represent and reason about this uncertainty in a principled manner. By treating data as random variables with associated probability distributions, we can quantify the uncertainty and make predictions that take it into account.\n" 146 | ] 147 | }, 148 | { 149 | "cell_type": "markdown", 150 | "metadata": {}, 151 | "source": [ 152 | "For example, consider a medical diagnosis problem where we want to predict whether a patient has a certain disease based on their symptoms. In reality, the relationship between symptoms and the disease may not be deterministic, and there could be noise or missing information in the patient's data. 
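To make this concrete, here is a minimal sketch of that reasoning with Bayes' theorem. All of the numbers (disease prevalence, symptom rates) are hypothetical, chosen only for illustration:

```python
# Hypothetical numbers for the diagnosis example, for illustration only.
p_disease = 0.01            # prior: 1% of patients have the disease
p_sym_given_disease = 0.90  # P(symptom | disease)
p_sym_given_healthy = 0.10  # P(symptom | no disease)

# Total probability of observing the symptom (law of total probability)
p_symptom = (p_sym_given_disease * p_disease
             + p_sym_given_healthy * (1 - p_disease))

# Bayes' theorem: update the prior into a posterior
p_disease_given_sym = p_sym_given_disease * p_disease / p_symptom
print(f"P(disease | symptom) = {p_disease_given_sym:.3f}")  # about 0.083
```

Even though the symptom is nine times more likely in sick patients, the posterior probability stays low because the disease is rare; the model reports this residual uncertainty instead of hiding it. 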
Probabilistic modeling enables us to represent the uncertainty in the relationship between symptoms and the disease, leading to more robust and reliable predictions.\n" 153 | ] 154 | }, 155 | { 156 | "cell_type": "markdown", 157 | "metadata": {}, 158 | "source": [ 159 | "### [Making Informed Decisions Based on Probabilities](#toc0_)\n" 160 | ] 161 | }, 162 | { 163 | "cell_type": "markdown", 164 | "metadata": {}, 165 | "source": [ 166 | "Probabilistic modeling provides a framework for making informed decisions based on probabilities. By computing the probabilities of different outcomes or events, we can make decisions that minimize the expected loss or maximize the expected utility.\n" 167 | ] 168 | }, 169 | { 170 | "cell_type": "markdown", 171 | "metadata": {}, 172 | "source": [ 173 | "For instance, in a spam email classification problem, we can use probabilistic modeling to compute the probability of an email being spam given its features. Based on this probability, we can make a decision to classify the email as spam or not spam. By setting a threshold probability, we can control the trade-off between false positives (classifying a legitimate email as spam) and false negatives (classifying a spam email as legitimate) based on our specific requirements.\n" 174 | ] 175 | }, 176 | { 177 | "cell_type": "markdown", 178 | "metadata": {}, 179 | "source": [ 180 | "### [Incorporating Prior Knowledge and Beliefs](#toc0_)\n" 181 | ] 182 | }, 183 | { 184 | "cell_type": "markdown", 185 | "metadata": {}, 186 | "source": [ 187 | "Probabilistic modeling allows us to incorporate prior knowledge and beliefs into our models. Prior knowledge refers to the information or assumptions we have about the problem domain before observing the data. By specifying prior probability distributions over the model parameters, we can encode our prior beliefs and update them based on the observed data using Bayes' theorem.\n" 188 | ] 189 | }, 190 | { 191 | "cell_type": "markdown", 192 | "metadata": {}, 193 | "source": [ 194 | "Incorporating prior knowledge can be particularly useful when dealing with limited or noisy data. For example, in an image classification task, we may have prior knowledge about the likely shapes, colors, or textures of objects. By incorporating this prior knowledge into our probabilistic model, we can improve the accuracy and robustness of our predictions, especially when the training data is scarce.\n" 195 | ] 196 | }, 197 | { 198 | "cell_type": "markdown", 199 | "metadata": {}, 200 | "source": [ 201 | "Moreover, probabilistic modeling provides a way to combine prior knowledge with observed data to make more informed decisions. 
As we observe more data, we can update our prior beliefs and obtain posterior probability distributions that reflect both the prior knowledge and the evidence from the data.\n" 202 | ] 203 | }, 204 | { 205 | "cell_type": "markdown", 206 | "metadata": {}, 207 | "source": [ 208 | "In summary, probabilistic modeling is important in machine learning because it allows us to:\n", 209 | "- Handle uncertainty and noise in data\n", 210 | "- Make informed decisions based on probabilities\n", 211 | "- Incorporate prior knowledge and beliefs into our models\n", 212 | "- Update our beliefs based on observed data using Bayes' theorem\n" 213 | ] 214 | }, 215 | { 216 | "cell_type": "markdown", 217 | "metadata": {}, 218 | "source": [ 219 | "By leveraging the principles of probability theory, probabilistic modeling provides a powerful framework for solving various machine learning problems and making reliable predictions in the presence of uncertainty.\n" 220 | ] 221 | }, 222 | { 223 | "cell_type": "markdown", 224 | "metadata": {}, 225 | "source": [ 226 | "## [Probability Theory Basics](#toc0_)" 227 | ] 228 | }, 229 | { 230 | "cell_type": "markdown", 231 | "metadata": {}, 232 | "source": [ 233 | "To understand probabilistic modeling, it is essential to have a solid foundation in probability theory. In this section, we will cover the basic concepts of probability theory, including random variables, probability distributions, joint and conditional probabilities, and independence.\n" 234 | ] 235 | }, 236 | { 237 | "cell_type": "markdown", 238 | "metadata": {}, 239 | "source": [ 240 | "### [Random Variables](#toc0_)\n" 241 | ] 242 | }, 243 | { 244 | "cell_type": "markdown", 245 | "metadata": {}, 246 | "source": [ 247 | "A random variable is a variable whose value is subject to chance or uncertainty. It can take on different values, each with an associated probability. Random variables are typically denoted by capital letters, such as X or Y.\n" 248 | ] 249 | }, 250 | { 251 | "cell_type": "markdown", 252 | "metadata": {}, 253 | "source": [ 254 | "There are two types of random variables:\n", 255 | "1. **Discrete Random Variables**: A discrete random variable takes on a finite or countable number of distinct values. Examples include the outcome of a coin flip (heads or tails) or the number of defective items in a batch.\n", 256 | "\n", 257 | "2. **Continuous Random Variables**: A continuous random variable takes on an uncountable number of values within a specific range. Examples include the height of a person or the time it takes to complete a task.\n" 258 | ] 259 | }, 260 | { 261 | "cell_type": "markdown", 262 | "metadata": {}, 263 | "source": [ 264 | "### [Probability Distributions](#toc0_)\n" 265 | ] 266 | }, 267 | { 268 | "cell_type": "markdown", 269 | "metadata": {}, 270 | "source": [ 271 | "A probability distribution is a mathematical function that describes the likelihood of a random variable taking on different values. It assigns probabilities to the possible outcomes of a random variable.\n" 272 | ] 273 | }, 274 | { 275 | "cell_type": "markdown", 276 | "metadata": {}, 277 | "source": [ 278 | "For discrete random variables, we use the probability mass function (PMF) to specify the probability distribution. 
The PMF, denoted as P(X = x), gives the probability that the random variable X takes on a specific value x.\n" 279 | ] 280 | }, 281 | { 282 | "cell_type": "markdown", 283 | "metadata": {}, 284 | "source": [ 285 | "For continuous random variables, we use the probability density function (PDF) to describe the probability distribution. The PDF, denoted as f(x), specifies the relative likelihood of the random variable taking on a particular value.\n" 286 | ] 287 | }, 288 | { 289 | "cell_type": "markdown", 290 | "metadata": {}, 291 | "source": [ 292 | "" 293 | ] 294 | }, 295 | { 296 | "cell_type": "markdown", 297 | "metadata": {}, 298 | "source": [ 299 | "Some common probability distributions include:\n", 300 | "- Bernoulli distribution (discrete)\n", 301 | "- Binomial distribution (discrete)\n", 302 | "- Poisson distribution (discrete)\n", 303 | "- Uniform distribution (continuous)\n", 304 | "- Gaussian (normal) distribution (continuous)\n", 305 | "- Exponential distribution (continuous)\n", 306 | "\n", 307 | "You can see a list of probability distributions and their properties [here](https://www.math.wm.edu/~leemis/chart/UDR/UDR.html)." 308 | ] 309 | }, 310 | { 311 | "cell_type": "markdown", 312 | "metadata": {}, 313 | "source": [ 314 | "" 315 | ] 316 | }, 317 | { 318 | "cell_type": "markdown", 319 | "metadata": {}, 320 | "source": [ 321 | "### [Joint, Marginal, and Conditional Probability](#toc0_)\n" 322 | ] 323 | }, 324 | { 325 | "cell_type": "markdown", 326 | "metadata": {}, 327 | "source": [ 328 | "When dealing with multiple random variables, we need to consider their joint, marginal, and conditional probabilities.\n", 329 | "\n", 330 | "1. **Joint Probability**: The joint probability is the probability of two or more events occurring simultaneously. For discrete random variables X and Y, the joint probability mass function is denoted as P(X = x, Y = y). For continuous random variables, the joint probability density function is denoted as f(x, y).\n", 331 | "\n", 332 | "2. **Marginal Probability**: The marginal probability is the probability of a single event occurring, regardless of the outcomes of other events. It is obtained by summing (for discrete variables) or integrating (for continuous variables) the joint probability over the other variables.\n", 333 | "\n", 334 | "3. **Conditional Probability**: The conditional probability is the probability of an event occurring given that another event has already occurred. It is denoted as P(X = x | Y = y) for discrete variables and f(x | y) for continuous variables. Conditional probability is calculated using the formula:\n", 335 | " \n", 336 | " P(X = x | Y = y) = P(X = x, Y = y) / P(Y = y)\n" 337 | ] 338 | }, 339 | { 340 | "cell_type": "markdown", 341 | "metadata": {}, 342 | "source": [ 343 | "" 344 | ] 345 | }, 346 | { 347 | "cell_type": "markdown", 348 | "metadata": {}, 349 | "source": [ 350 | "### [Independence](#toc0_)\n" 351 | ] 352 | }, 353 | { 354 | "cell_type": "markdown", 355 | "metadata": {}, 356 | "source": [ 357 | "Independence is a fundamental concept in probability theory. Two events A and B are said to be independent if the occurrence of one event does not affect the probability of the other event occurring. 
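To see what this means in practice, the short simulation below draws two fair coin flips, which are independent by construction, and checks that the joint probability factorizes into the product of the marginals, exactly as the formal definition that follows makes precise (simulated data, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
a = rng.integers(0, 2, n)  # event A: first coin shows heads
b = rng.integers(0, 2, n)  # event B: second coin shows heads

p_a = (a == 1).mean()                # marginal P(A)
p_b = (b == 1).mean()                # marginal P(B)
p_ab = ((a == 1) & (b == 1)).mean()  # joint P(A and B)

print(f"P(A) * P(B) = {p_a * p_b:.4f}")  # close to 0.25
print(f"P(A and B)  = {p_ab:.4f}")       # matches, up to sampling noise
```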
Mathematically, independence is defined as:\n" 358 | ] 359 | }, 360 | { 361 | "cell_type": "markdown", 362 | "metadata": {}, 363 | "source": [ 364 | "P(A and B) = P(A) * P(B)\n" 365 | ] 366 | }, 367 | { 368 | "cell_type": "markdown", 369 | "metadata": {}, 370 | "source": [ 371 | "Similarly, two random variables X and Y are independent if their joint probability is the product of their individual marginal probabilities:\n", 372 | "\n", 373 | "P(X = x, Y = y) = P(X = x) * P(Y = y)\n" 374 | ] 375 | }, 376 | { 377 | "cell_type": "markdown", 378 | "metadata": {}, 379 | "source": [ 380 | "Independence is an important assumption in many probabilistic models, as it simplifies the calculations and allows for more tractable inference.\n" 381 | ] 382 | }, 383 | { 384 | "cell_type": "markdown", 385 | "metadata": {}, 386 | "source": [ 387 | "Understanding these probability theory basics is crucial for working with probabilistic models in machine learning. In the next section, we will explore the advantages and limitations of probabilistic modeling." 388 | ] 389 | }, 390 | { 391 | "cell_type": "markdown", 392 | "metadata": {}, 393 | "source": [ 394 | "## [Advantages and Limitations of Probabilistic Modeling](#toc0_)" 395 | ] 396 | }, 397 | { 398 | "cell_type": "markdown", 399 | "metadata": {}, 400 | "source": [ 401 | "Probabilistic modeling offers several advantages in machine learning, but it also has some limitations. Let's discuss both aspects in a concise manner.\n" 402 | ] 403 | }, 404 | { 405 | "cell_type": "markdown", 406 | "metadata": {}, 407 | "source": [ 408 | "### [Advantages](#toc0_)\n" 409 | ] 410 | }, 411 | { 412 | "cell_type": "markdown", 413 | "metadata": {}, 414 | "source": [ 415 | "- **Handling Uncertainty and Noise in Data**: Probabilistic models can effectively handle uncertainty and noise in data by explicitly representing and quantifying uncertainty through probability distributions. This allows for more robust and reliable predictions, as well as principled reasoning about the confidence or reliability of the model's outputs.\n", 416 | "\n", 417 | "- **Incorporating Prior Knowledge**: Probabilistic modeling enables the incorporation of prior knowledge or beliefs about the problem domain into the model. By specifying prior probability distributions over the model parameters, experts can encode their knowledge and update it based on observed data using Bayes' theorem. This is particularly beneficial when dealing with limited or noisy data.\n", 418 | "\n", 419 | "- **Interpretability of Results**: Probabilistic models often provide interpretable results due to their reliance on well-defined probability distributions and clear relationships between variables. The parameters of these models have intuitive meanings, and the computed probabilities offer an understandable measure of the model's confidence in its predictions.\n" 420 | ] 421 | }, 422 | { 423 | "cell_type": "markdown", 424 | "metadata": {}, 425 | "source": [ 426 | "### [Limitations](#toc0_)\n" 427 | ] 428 | }, 429 | { 430 | "cell_type": "markdown", 431 | "metadata": {}, 432 | "source": [ 433 | "- **Computational Complexity**: Exact inference and learning in complex probabilistic models can be computationally expensive, especially for high-dimensional data or large-scale problems. 
Approximate inference techniques, such as variational inference or sampling-based methods, are often used to mitigate this issue but may introduce additional approximation errors.\n", 434 | "\n", 435 | "- **Assumptions about Data Distribution**: Probabilistic models often make assumptions about the underlying data distribution, such as feature independence or specific distributional forms. When these assumptions are violated, the model's performance may be suboptimal or biased. Careful assessment of the assumptions' validity is crucial.\n", 436 | "\n", 437 | "- **Difficulty in Handling High-Dimensional Data**: High-dimensional data poses challenges for probabilistic models due to the curse of dimensionality. As the number of variables or features increases, the model's complexity grows exponentially, making inference and learning more difficult. Dimensionality reduction, feature selection, or regularization techniques may be necessary to address this issue.\n" 438 | ] 439 | }, 440 | { 441 | "cell_type": "markdown", 442 | "metadata": {}, 443 | "source": [ 444 | "Despite these limitations, probabilistic modeling remains a powerful and widely used approach in machine learning. Understanding the advantages and limitations helps practitioners make informed decisions about when and how to apply probabilistic models to their specific problems." 445 | ] 446 | } 447 | ], 448 | "metadata": { 449 | "language_info": { 450 | "name": "python" 451 | } 452 | }, 453 | "nbformat": 4, 454 | "nbformat_minor": 2 455 | } 456 | -------------------------------------------------------------------------------- /Lectures/02 Probabilistic Modeling/07 Bayesian Inference.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "" 8 | ] 9 | }, 10 | { 11 | "cell_type": "markdown", 12 | "metadata": {}, 13 | "source": [ 14 | "# Bayesian Inference: Understanding the Probabilistic Approach" 15 | ] 16 | }, 17 | { 18 | "cell_type": "markdown", 19 | "metadata": {}, 20 | "source": [ 21 | "Imagine you're a detective trying to solve a mystery. As you gather clues, your suspicions about what happened change and evolve. This process of updating your beliefs as new evidence comes to light is the essence of **Bayesian inference**.\n" 22 | ] 23 | }, 24 | { 25 | "cell_type": "markdown", 26 | "metadata": {}, 27 | "source": [ 28 | "Bayesian inference is a powerful approach to statistical reasoning that allows us to:\n", 29 | "\n", 30 | "- **Incorporate prior knowledge** into our analyses\n", 31 | "- **Update our beliefs** as we collect new data\n", 32 | "- **Quantify uncertainty** in a natural and intuitive way\n" 33 | ] 34 | }, 35 | { 36 | "cell_type": "markdown", 37 | "metadata": {}, 38 | "source": [ 39 | "Named after Thomas Bayes, an 18th-century statistician, Bayesian inference has become increasingly popular in recent years, thanks to advances in computational power and its ability to handle complex, real-world problems.\n" 40 | ] 41 | }, 42 | { 43 | "cell_type": "markdown", 44 | "metadata": {}, 45 | "source": [ 46 | "In today's data-driven world, Bayesian inference is more relevant than ever:\n", 47 | "\n", 48 | "1. **Handling Uncertainty**: It provides a framework for making decisions under uncertainty, crucial in fields like finance, healthcare, and AI.\n", 49 | "\n", 50 | "2. 
**Flexibility**: Bayesian methods can handle small datasets and complex models where traditional methods might fail.\n", 51 | "\n", 52 | "3. **Interpretability**: Results are often more intuitive, expressing probabilities of hypotheses rather than abstract p-values.\n", 53 | "\n", 54 | "4. **Continuous Learning**: The Bayesian approach naturally incorporates new information, making it ideal for adaptive systems and online learning.\n" 55 | ] 56 | }, 57 | { 58 | "cell_type": "markdown", 59 | "metadata": {}, 60 | "source": [ 61 | "In this lecture, we'll dive deep into the world of Bayesian inference:\n", 62 | "\n", 63 | "- The fundamental concepts and thinking behind the Bayesian approach\n", 64 | "- How Bayes' Theorem works and why it's so powerful\n", 65 | "- The step-by-step process of Bayesian inference\n", 66 | "- Practical examples to illustrate these concepts\n", 67 | "- Applications in machine learning and data science\n" 68 | ] 69 | }, 70 | { 71 | "cell_type": "markdown", 72 | "metadata": {}, 73 | "source": [ 74 | "By the end of this lecture, you'll have a solid understanding of how Bayesian inference works and why it's become an essential tool in the modern data scientist's toolkit.\n" 75 | ] 76 | }, 77 | { 78 | "cell_type": "markdown", 79 | "metadata": {}, 80 | "source": [ 81 | "So, put on your detective hat, and let's embark on this journey of probabilistic reasoning and updated beliefs!" 82 | ] 83 | }, 84 | { 85 | "cell_type": "markdown", 86 | "metadata": {}, 87 | "source": [ 88 | "**Table of contents** \n", 89 | "- [Fundamentals of Bayesian Thinking](#toc1_) \n", 90 | " - [The Bayesian Framework](#toc1_1_) \n", 91 | "- [Bayes' Theorem: The Heart of Bayesian Inference](#toc2_) \n", 92 | " - [Components: Prior, Likelihood, and Posterior](#toc2_1_) \n", 93 | "- [The Bayesian Inference Process](#toc3_) \n", 94 | " - [Step 1: Defining the Prior](#toc3_1_) \n", 95 | " - [Step 2: Specifying the Likelihood](#toc3_2_) \n", 96 | " - [Step 3: Calculating the Posterior](#toc3_3_) \n", 97 | " - [Step 4: Making Inferences](#toc3_4_) \n", 98 | " - [Putting It All Together](#toc3_5_) \n", 99 | "- [Practical Examples](#toc4_) \n", 100 | " - [Simplified Coin Flip Example](#toc4_1_) \n", 101 | " - [Medical Diagnosis Example](#toc4_2_) \n", 102 | " - [Key Takeaways from These Examples](#toc4_3_) \n", 103 | "- [Conclusion](#toc5_) \n", 104 | "\n", 105 | "\n", 112 | "" 113 | ] 114 | }, 115 | { 116 | "cell_type": "markdown", 117 | "metadata": {}, 118 | "source": [ 119 | "## [Fundamentals of Bayesian Thinking](#toc0_)" 120 | ] 121 | }, 122 | { 123 | "cell_type": "markdown", 124 | "metadata": {}, 125 | "source": [ 126 | "In the Bayesian world, probability isn't just about coin flips and dice rolls. It's a way to quantify our uncertainty about the world. 
Here's how to think about it:\n", 127 | "\n", 128 | "- **Subjective Probability**: Unlike the frequentist view, which sees probability as long-term frequency, Bayesians view probability as a *degree of belief*.\n", 129 | "\n", 130 | "- **Uncertainty Quantification**: Probability becomes a tool to express how sure (or unsure) we are about something.\n", 131 | "\n", 132 | "- **Dynamic Beliefs**: These probabilities can change as we gather new information.\n", 133 | "\n", 134 | "**Example**: You might say, \"I'm 70% sure it will rain tomorrow.\" This isn't based on it raining on 70% of all tomorrows, but on your current belief given the information you have.\n" 135 | ] 136 | }, 137 | { 138 | "cell_type": "markdown", 139 | "metadata": {}, 140 | "source": [ 141 | "### [The Bayesian Framework](#toc0_)\n" 142 | ] 143 | }, 144 | { 145 | "cell_type": "markdown", 146 | "metadata": {}, 147 | "source": [ 148 | "The Bayesian framework is built on a few key ideas:\n", 149 | "\n", 150 | "1. **Prior Beliefs**: We start with what we already know (or think we know). This is called the *prior*.\n", 151 | "\n", 152 | "2. **New Evidence**: We collect data or make observations. This is our *likelihood*.\n", 153 | "\n", 154 | "3. **Updated Beliefs**: We combine our prior beliefs with the new evidence to form our *posterior* beliefs.\n", 155 | "\n", 156 | "4. **Continuous Learning**: This process can be repeated, with today's posterior becoming tomorrow's prior.\n" 157 | ] 158 | }, 159 | { 160 | "cell_type": "markdown", 161 | "metadata": {}, 162 | "source": [ 163 | "This framework is often represented mathematically as:\n", 164 | "\n", 165 | "$$ \\text{Posterior} \\propto \\text{Prior} \\times \\text{Likelihood} $$\n" 166 | ] 167 | }, 168 | { 169 | "cell_type": "markdown", 170 | "metadata": {}, 171 | "source": [ 172 | "Where $\\propto$ means \"proportional to\".\n" 173 | ] 174 | }, 175 | { 176 | "cell_type": "markdown", 177 | "metadata": {}, 178 | "source": [ 179 | "**Key Principles**:\n" 180 | ] 181 | }, 182 | { 183 | "cell_type": "markdown", 184 | "metadata": {}, 185 | "source": [ 186 | "- **All parameters are random variables**: In the Bayesian view, we're not trying to find fixed, \"true\" values, but rather distributions of possible values.\n", 187 | "\n", 188 | "- **Conditioning on known information**: We always work with probabilities that are conditional on what we know.\n", 189 | "\n", 190 | "- **Coherence**: Our beliefs should be logically consistent with each other.\n" 191 | ] 192 | }, 193 | { 194 | "cell_type": "markdown", 195 | "metadata": {}, 196 | "source": [ 197 | "**Example in Action**:\n", 198 | "Imagine you're guessing the skill of a basketball player:\n", 199 | "\n", 200 | "1. *Prior*: Based on the league average, you think they might make about 50% of their shots.\n", 201 | "2. *New Data*: You watch them make 8 out of 10 shots in practice.\n", 202 | "3. *Posterior*: You update your belief, now thinking they're probably better than average.\n" 203 | ] 204 | }, 205 | { 206 | "cell_type": "markdown", 207 | "metadata": {}, 208 | "source": [ 209 | "This simple example encapsulates the essence of Bayesian thinking: starting with a belief, observing evidence, and updating our belief accordingly.\n" 210 | ] 211 | }, 212 | { 213 | "cell_type": "markdown", 214 | "metadata": {}, 215 | "source": [ 216 | "Understanding these fundamentals sets the stage for diving deeper into how Bayesian inference works in practice. Next, we'll explore the mathematical heart of this approach: Bayes' Theorem." 
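Before we do, here is the basketball example written out as a few lines of code. The Beta(10, 10) prior is an assumption made for illustration (any distribution centered near the league average of 50% would tell the same story):

```python
import numpy as np
from scipy import stats

# Candidate values for the player's true shooting percentage
theta = np.linspace(0.01, 0.99, 99)

# Prior: centered near 50%, our belief before watching practice
prior = stats.beta.pdf(theta, 10, 10)

# Likelihood: probability of making 8 of 10 shots at each candidate skill
likelihood = stats.binom.pmf(8, 10, theta)

# Posterior is proportional to prior times likelihood; normalize over the grid
posterior = prior * likelihood
posterior /= posterior.sum()

print(f"Posterior mean skill: {np.sum(theta * posterior):.2f}")  # about 0.60
```

The posterior mean lands above 0.5, matching the intuition that the player is probably better than average, while the full posterior still keeps track of how unsure we are. 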
217 | ] 218 | }, 219 | { 220 | "cell_type": "markdown", 221 | "metadata": {}, 222 | "source": [ 223 | "## [Bayes' Theorem: The Heart of Bayesian Inference](#toc0_)" 224 | ] 225 | }, 226 | { 227 | "cell_type": "markdown", 228 | "metadata": {}, 229 | "source": [ 230 | "Bayes' Theorem is a fundamental principle in probability theory and statistics that describes how to update the probability of a hypothesis based on new evidence. It's named after Reverend Thomas Bayes, who first formulated the theorem in the 18th century. Bayes' Theorem is the mathematical foundation of Bayesian inference. It's a way to calculate the probability of an event based on prior knowledge of conditions that might be related to the event. Here's the theorem in its simplest form:\n", 231 | "\n", 232 | "$$ P(A|B) = \\frac{P(B|A) \\times P(A)}{P(B)} $$\n" 233 | ] 234 | }, 235 | { 236 | "cell_type": "markdown", 237 | "metadata": {}, 238 | "source": [ 239 | "" 240 | ] 241 | }, 242 | { 243 | "cell_type": "markdown", 244 | "metadata": {}, 245 | "source": [ 246 | "Where:\n", 247 | "- $P(A|B)$ is the probability of A given B (posterior)\n", 248 | "- $P(B|A)$ is the probability of B given A (likelihood)\n", 249 | "- $P(A)$ is the probability of A (prior)\n", 250 | "- $P(B)$ is the probability of B (evidence)\n" 251 | ] 252 | }, 253 | { 254 | "cell_type": "markdown", 255 | "metadata": {}, 256 | "source": [ 257 | "**Intuitive Explanation**:\n", 258 | "Imagine you're a doctor diagnosing a rare disease. Bayes' Theorem helps you update your belief about whether a patient has the disease based on test results.\n" 259 | ] 260 | }, 261 | { 262 | "cell_type": "markdown", 263 | "metadata": {}, 264 | "source": [ 265 | "" 266 | ] 267 | }, 268 | { 269 | "cell_type": "markdown", 270 | "metadata": {}, 271 | "source": [ 272 | "### [Components: Prior, Likelihood, and Posterior](#toc0_)\n" 273 | ] 274 | }, 275 | { 276 | "cell_type": "markdown", 277 | "metadata": {}, 278 | "source": [ 279 | "Let's break down the key components of Bayes' Theorem:\n", 280 | "\n", 281 | "1. **Prior - $P(A)$**\n", 282 | " - This is our initial belief before seeing new evidence.\n", 283 | " - It represents what we know (or assume) beforehand.\n", 284 | " - Example: The general prevalence of the disease in the population.\n", 285 | "\n", 286 | "2. **Likelihood - $P(B|A)$**\n", 287 | " - This is the probability of seeing the evidence if our hypothesis is true.\n", 288 | " - It relates our hypothesis to observable data.\n", 289 | " - Example: The probability of a positive test result given that the patient has the disease.\n", 290 | "\n", 291 | "3. **Posterior - $P(A|B)$**\n", 292 | " - This is our updated belief after considering the new evidence.\n", 293 | " - It's what we're ultimately interested in calculating.\n", 294 | " - Example: The probability that the patient has the disease, given a positive test result.\n", 295 | "\n", 296 | "4. 
**Evidence - $P(B)$**\n", 297 | " - This is the probability of seeing the evidence, regardless of whether our hypothesis is true.\n", 298 | " - It acts as a normalizing constant.\n", 299 | " - Example: The overall probability of getting a positive test result.\n" 300 | ] 301 | }, 302 | { 303 | "cell_type": "markdown", 304 | "metadata": {}, 305 | "source": [ 306 | "**Putting It All Together**:\n", 307 | "\n", 308 | "Let's use our medical diagnosis example:\n", 309 | "- Prior: 1% of the population has the disease.\n", 310 | "- Likelihood: The test is 95% accurate (for both positive and negative results).\n", 311 | "- Evidence: A patient tests positive.\n" 312 | ] 313 | }, 314 | { 315 | "cell_type": "markdown", 316 | "metadata": {}, 317 | "source": [ 318 | "Applying Bayes' Theorem:\n", 319 | "\n", 320 | "$$ P(\\text{Disease|Positive}) = \\frac{P(\\text{Positive|Disease}) \\times P(\\text{Disease})}{P(\\text{Positive})} $$\n", 321 | "\n", 322 | "$$ = \\frac{0.95 \\times 0.01}{(0.95 \\times 0.01) + (0.05 \\times 0.99)} \\approx 0.16 $$\n" 323 | ] 324 | }, 325 | { 326 | "cell_type": "markdown", 327 | "metadata": {}, 328 | "source": [ 329 | "This means that even with a positive test result, there's only about a 16% chance the patient has the disease. This counterintuitive result demonstrates the power of Bayes' Theorem in handling real-world probabilities.\n" 330 | ] 331 | }, 332 | { 333 | "cell_type": "markdown", 334 | "metadata": {}, 335 | "source": [ 336 | "Understanding these components and how they interact in Bayes' Theorem is crucial for applying Bayesian inference to practical problems. In the next section, we'll walk through the step-by-step process of applying this knowledge in Bayesian inference." 337 | ] 338 | }, 339 | { 340 | "cell_type": "markdown", 341 | "metadata": {}, 342 | "source": [ 343 | "## [The Bayesian Inference Process](#toc0_)" 344 | ] 345 | }, 346 | { 347 | "cell_type": "markdown", 348 | "metadata": {}, 349 | "source": [ 350 | "The Bayesian inference process is like updating a mental model as new information comes in. 
Let's break it down step-by-step:\n" 351 | ] 352 | }, 353 | { 354 | "cell_type": "markdown", 355 | "metadata": {}, 356 | "source": [ 357 | "### [Step 1: Defining the Prior](#toc0_)\n" 358 | ] 359 | }, 360 | { 361 | "cell_type": "markdown", 362 | "metadata": {}, 363 | "source": [ 364 | "This is where we formalize our initial beliefs:\n", 365 | "\n", 366 | "- **What**: The prior is our belief about the parameter(s) of interest before seeing the data.\n", 367 | "- **How**: We express this as a probability distribution.\n", 368 | "- **Example**: If we're estimating the fairness of a coin, we might start with a prior that it's probably fair, but we're not certain.\n" 369 | ] 370 | }, 371 | { 372 | "cell_type": "markdown", 373 | "metadata": {}, 374 | "source": [ 375 | "**Key Point**: The prior can be:\n", 376 | "- *Informative*: Based on previous studies or expert knowledge.\n", 377 | "- *Uninformative* or *Flat*: When we have little prior knowledge.\n" 378 | ] 379 | }, 380 | { 381 | "cell_type": "markdown", 382 | "metadata": {}, 383 | "source": [ 384 | "### [Step 2: Specifying the Likelihood](#toc0_)\n" 385 | ] 386 | }, 387 | { 388 | "cell_type": "markdown", 389 | "metadata": {}, 390 | "source": [ 391 | "This step involves modeling how our data relates to the parameter(s):\n", 392 | "\n", 393 | "- **What**: The likelihood is the probability of observing our data, given different possible values of the parameter(s).\n", 394 | "- **How**: We choose a probability distribution that best represents our data-generating process.\n", 395 | "- **Example**: For coin flips, we might use a Binomial distribution.\n" 396 | ] 397 | }, 398 | { 399 | "cell_type": "markdown", 400 | "metadata": {}, 401 | "source": [ 402 | "**Key Point**: The likelihood function connects our theoretical model to the observed data.\n" 403 | ] 404 | }, 405 | { 406 | "cell_type": "markdown", 407 | "metadata": {}, 408 | "source": [ 409 | "### [Step 3: Calculating the Posterior](#toc0_)\n" 410 | ] 411 | }, 412 | { 413 | "cell_type": "markdown", 414 | "metadata": {}, 415 | "source": [ 416 | "Now we combine our prior beliefs with the observed data:\n", 417 | "\n", 418 | "- **What**: The posterior is our updated belief about the parameter(s) after seeing the data.\n", 419 | "- **How**: We use Bayes' Theorem to compute this:\n", 420 | "\n", 421 | " $$ P(\\theta|D) \\propto P(D|\\theta) \\times P(\\theta) $$\n", 422 | "\n", 423 | " Where $\\theta$ is our parameter and $D$ is our data.\n", 424 | "\n", 425 | "- **Example**: After seeing 60 heads in 100 flips, our belief about the coin's fairness would shift towards it being slightly biased.\n" 426 | ] 427 | }, 428 | { 429 | "cell_type": "markdown", 430 | "metadata": {}, 431 | "source": [ 432 | "**Key Point**: The posterior combines prior knowledge with observed data.\n" 433 | ] 434 | }, 435 | { 436 | "cell_type": "markdown", 437 | "metadata": {}, 438 | "source": [ 439 | "### [Step 4: Making Inferences](#toc0_)\n" 440 | ] 441 | }, 442 | { 443 | "cell_type": "markdown", 444 | "metadata": {}, 445 | "source": [ 446 | "Finally, we use our posterior distribution to draw conclusions:\n", 447 | "\n", 448 | "- **Point Estimates**: We might use the mean or mode of the posterior as our best guess for the parameter value.\n", 449 | "- **Credible Intervals**: We can find ranges where we're X% sure the true parameter lies.\n", 450 | "- **Predictions**: We can generate predictions for future data.\n", 451 | "- **Decision Making**: We can use the posterior to inform decisions under uncertainty.\n" 452 | ] 
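All four steps fit in a few lines of code. The sketch below runs the coin example from Step 3, assuming a Beta(10, 10) prior because it is conjugate to the Binomial likelihood, so the posterior is available in closed form:

```python
from scipy import stats

# Step 1: prior belief about P(heads), mildly concentrated around 0.5
prior_a, prior_b = 10, 10

# Step 2: the likelihood is Binomial; our data are 60 heads in 100 flips
heads, flips = 60, 100

# Step 3: by Beta-Binomial conjugacy the posterior is
# Beta(prior_a + heads, prior_b + tails)
posterior = stats.beta(prior_a + heads, prior_b + (flips - heads))

# Step 4: make inferences from the posterior distribution
print(f"Posterior mean: {posterior.mean():.3f}")       # about 0.583
lo, hi = posterior.interval(0.95)
print(f"95% credible interval: ({lo:.2f}, {hi:.2f})")  # roughly (0.50, 0.67)
```

The exact interval endpoints depend on the prior you start from, which is why the numbers quoted in the example below are close to, but not identical to, the ones this sketch prints. 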
453 | }, 454 | { 455 | "cell_type": "markdown", 456 | "metadata": {}, 457 | "source": [ 458 | "**Example**: We might conclude that there's a 95% chance the coin's probability of heads lies between roughly 0.50 and 0.67.\n" 459 | ] 460 | }, 461 | { 462 | "cell_type": "markdown", 463 | "metadata": {}, 464 | "source": [ 465 | "**Key Point**: Bayesian inference provides a full distribution of possible parameter values, allowing for rich and nuanced conclusions.\n" 466 | ] 467 | }, 468 | { 469 | "cell_type": "markdown", 470 | "metadata": {}, 471 | "source": [ 472 | "### [Putting It All Together](#toc0_)\n" 473 | ] 474 | }, 475 | { 476 | "cell_type": "markdown", 477 | "metadata": {}, 478 | "source": [ 479 | "Let's revisit our coin flip example:\n", 480 | "\n", 481 | "1. **Prior**: We start believing the coin is probably fair (Beta(10,10) distribution).\n", 482 | "2. **Likelihood**: We model 100 flips as a Binomial distribution.\n", 483 | "3. **Data**: We observe 60 heads out of 100 flips.\n", 484 | "4. **Posterior**: We update our belief, now leaning towards the coin being slightly biased (Beta(70,50) distribution).\n", 485 | "5. **Inference**: We might conclude the coin is probably biased towards heads (the posterior puts roughly 97% probability on heads being more likely than tails), with a 95% credible interval for the probability of heads of approximately (0.50, 0.67).\n" 486 | ] 487 | }, 488 | { 489 | "cell_type": "markdown", 490 | "metadata": {}, 491 | "source": [ 492 | "This process allows us to start with our prior knowledge, incorporate new evidence, and end up with a nuanced understanding of the situation, complete with a measure of our uncertainty. It's this ability to handle uncertainty and update beliefs that makes Bayesian inference so powerful in real-world applications." 493 | ] 494 | }, 495 | { 496 | "cell_type": "markdown", 497 | "metadata": {}, 498 | "source": [ 499 | "## [Practical Examples](#toc0_)" 500 | ] 501 | }, 502 | { 503 | "cell_type": "markdown", 504 | "metadata": {}, 505 | "source": [ 506 | "### [Simplified Coin Flip Example](#toc0_)\n" 507 | ] 508 | }, 509 | { 510 | "cell_type": "markdown", 511 | "metadata": {}, 512 | "source": [ 513 | "Let's determine if a coin is fair using a simpler Bayesian approach.\n" 514 | ] 515 | }, 516 | { 517 | "cell_type": "markdown", 518 | "metadata": {}, 519 | "source": [ 520 | "1. **Define the Prior**:\n", 521 | "   - We start believing the coin is probably fair.\n", 522 | "   - Let's say we're 80% sure it's fair (50% chance of heads).\n", 523 | "   - Prior: P(Fair) = 0.8, P(Biased) = 0.2\n", 524 | "\n", 525 | "2. **Specify the Likelihood**:\n", 526 | "   - If the coin is fair, P(Heads|Fair) = 0.5\n", 527 | "   - If it's biased, let's assume P(Heads|Biased) = 0.7\n", 528 | "\n", 529 | "3. **Observe Data**:\n", 530 | "   - We flip the coin 10 times and get 7 heads.\n", 531 | "\n", 532 | "4. **Calculate the Posterior**:\n", 533 | "   Using Bayes' Theorem:\n", 534 | "\n", 535 | "   P(Fair|7 Heads) = P(7 Heads|Fair) × P(Fair) / P(7 Heads)\n", 536 | "\n", 537 | "   P(Biased|7 Heads) = P(7 Heads|Biased) × P(Biased) / P(7 Heads)\n", 538 | "\n", 539 | "   We can calculate these with the Binomial formula, $\\binom{10}{7} p^7 (1-p)^3$:\n", 540 | "\n", 541 | "   P(7 Heads|Fair) ≈ 0.117\n", 542 | "   P(7 Heads|Biased) ≈ 0.267\n", 543 | "\n", 544 | "   P(Fair|7 Heads) ≈ 0.637\n", 545 | "   P(Biased|7 Heads) ≈ 0.363\n", 546 | "\n", 547 | "5. 
**Make Inferences**:\n", 548 | " - Our belief in the coin being fair has decreased from 80% to about 64%.\n", 549 | " - There's now a 36% chance the coin is biased, up from our initial 20%.\n" 550 | ] 551 | }, 552 | { 553 | "cell_type": "markdown", 554 | "metadata": {}, 555 | "source": [ 556 | "This simplified example shows how our belief updates based on the observed data, without using complex distributions.\n" 557 | ] 558 | }, 559 | { 560 | "cell_type": "markdown", 561 | "metadata": {}, 562 | "source": [ 563 | "### [Medical Diagnosis Example](#toc0_)\n" 564 | ] 565 | }, 566 | { 567 | "cell_type": "markdown", 568 | "metadata": {}, 569 | "source": [ 570 | "Now, let's consider a more complex scenario: diagnosing a rare disease.\n", 571 | "\n", 572 | "1. **Define the Prior**:\n", 573 | " - The disease affects 1% of the population.\n", 574 | " - Prior probability of having the disease: P(D) = 0.01\n", 575 | "\n", 576 | "2. **Specify the Likelihood**:\n", 577 | " - We have a test that's 95% accurate for both positive and negative results.\n", 578 | " - P(Positive|Disease) = 0.95 (true positive rate)\n", 579 | " - P(Negative|No Disease) = 0.95 (true negative rate)\n", 580 | "\n", 581 | "3. **Observe Data**:\n", 582 | " - A patient tests positive.\n", 583 | "\n", 584 | "4. **Calculate the Posterior**:\n", 585 | " Using Bayes' Theorem:\n", 586 | " \n", 587 | " $P(D|+) = \\frac{P(+|D) \\times P(D)}{P(+)}$\n", 588 | " \n", 589 | " Where $P(+) = P(+|D)P(D) + P(+|\\text{Not D})P(\\text{Not D})$\n", 590 | " $= 0.95 \\times 0.01 + 0.05 \\times 0.99 = 0.0585$\n", 591 | " \n", 592 | " So, $P(D|+) = \\frac{0.95 \\times 0.01}{0.0585} \\approx 0.162$\n", 593 | "\n", 594 | "5. **Make Inferences**:\n", 595 | " - Despite the positive test, there's only about a 16.2% chance the patient has the disease.\n", 596 | " - This counterintuitive result demonstrates the importance of considering base rates (our prior) in medical diagnosis.\n" 597 | ] 598 | }, 599 | { 600 | "cell_type": "markdown", 601 | "metadata": {}, 602 | "source": [ 603 | "### [Key Takeaways from These Examples](#toc0_)\n" 604 | ] 605 | }, 606 | { 607 | "cell_type": "markdown", 608 | "metadata": {}, 609 | "source": [ 610 | "1. **Beliefs Update with Data**: In the coin example, our confidence in the coin's fairness decreased after seeing more heads than expected.\n", 611 | "\n", 612 | "2. **Prior Matters**: The medical example shows how the rarity of a disease affects the interpretation of a positive test.\n", 613 | "\n", 614 | "3. **Intuition vs. Calculation**: Both examples demonstrate how Bayesian calculations can sometimes contradict our initial intuitions.\n", 615 | "\n", 616 | "4. **Practical Application**: These examples show how Bayesian thinking applies to everyday scenarios, from games to important medical decisions.\n" 617 | ] 618 | }, 619 | { 620 | "cell_type": "markdown", 621 | "metadata": {}, 622 | "source": [ 623 | "By simplifying the coin flip example, we can more clearly see the process of updating beliefs based on new evidence, which is the core of Bayesian inference." 624 | ] 625 | }, 626 | { 627 | "cell_type": "markdown", 628 | "metadata": {}, 629 | "source": [ 630 | "## [Conclusion](#toc0_)" 631 | ] 632 | }, 633 | { 634 | "cell_type": "markdown", 635 | "metadata": {}, 636 | "source": [ 637 | "As we wrap up our journey through Bayesian inference, let's revisit the core ideas we've explored:\n", 638 | "\n", 639 | "1. 
**Probability as Belief**: \n", 640 | "   - In Bayesian thinking, probability represents our degree of certainty about something.\n", 641 | "   - This allows us to quantify and update our beliefs as we gather new information.\n", 642 | "\n", 643 | "2. **Bayes' Theorem**: \n", 644 | "   - The mathematical heart of Bayesian inference.\n", 645 | "   - It shows us how to update probabilities given new evidence.\n", 646 | "\n", 647 | "3. **Prior, Likelihood, and Posterior**:\n", 648 | "   - *Prior*: Our initial beliefs before seeing data.\n", 649 | "   - *Likelihood*: How probable the data is given our hypothesis.\n", 650 | "   - *Posterior*: Our updated beliefs after considering the data.\n", 651 | "\n", 652 | "4. **The Inference Process**:\n", 653 | "   - Start with a prior belief.\n", 654 | "   - Collect data and specify how it relates to our hypothesis.\n", 655 | "   - Use Bayes' Theorem to update our beliefs.\n", 656 | "   - Make decisions or predictions based on the posterior.\n", 657 | "\n", 658 | "5. **Practical Applications**:\n", 659 | "   - From simple examples like coin flips to complex scenarios like medical diagnoses.\n", 660 | "   - Bayesian methods shine in handling uncertainty and incorporating prior knowledge.\n" 661 | ] 662 | }, 663 | { 664 | "cell_type": "markdown", 665 | "metadata": {}, 666 | "source": [ 667 | "By mastering Bayesian inference, you're not just learning a statistical technique – you're adopting a way of thinking that will serve you well in navigating the uncertain, data-rich world of modern data science. Keep exploring, keep questioning, and keep updating your beliefs as you encounter new evidence. That's the Bayesian way!" 668 | ] 669 | } 692 | ], 693 | "metadata": { 694 | "language_info": { 695 | "name": "python" 696 | } 697 | }, 698 | "nbformat": 4, 699 | "nbformat_minor": 2 700 | } 701 | -------------------------------------------------------------------------------- /Lectures/10 Neural Networks and Deep Learning/01 Foundations of Artificial Neural Networks.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "" 8 | ] 9 | }, 10 | { 11 | "cell_type": "markdown", 12 | "metadata": {}, 13 | "source": [ 14 | "## [Foundations of Artificial Neural Networks](#toc0_)" 15 | ] 16 | }, 17 | { 18 | "cell_type": "markdown", 19 | "metadata": {}, 20 | "source": [ 21 | "Artificial Neural Networks (ANNs) are at the heart of modern machine learning and artificial intelligence systems. They are inspired by the way the human brain processes information, and they form the basis for many advanced applications in computer vision, natural language processing, and beyond. 
In this section, we'll explore the core ideas behind neural networks, why they originated, and how they serve as powerful tools for tasks that were once considered too complex for traditional algorithms.\n" 22 | ] 23 | }, 24 | { 25 | "cell_type": "markdown", 26 | "metadata": {}, 27 | "source": [ 28 | "" 29 | ] 30 | }, 31 | { 32 | "cell_type": "markdown", 33 | "metadata": {}, 34 | "source": [ 35 | "The concept of **artificial neural networks** originates from the study of biological neurons in the human brain. A biological neuron typically receives signals through its dendrites, processes them in the cell body, and then fires an output signal through its axon if certain conditions are met.\n", 36 | "\n", 37 | "- **Neuron Communication:** Neurons communicate via electrical impulses, with varying signal strengths depending on inputs and synaptic connections.\n", 38 | "- **Simplified Model in AI:** In an ANN, each artificial neuron is a simplified representation of a biological neuron. It takes weighted inputs, sums them, applies an activation function, and produces an output.\n" 39 | ] 40 | }, 41 | { 42 | "cell_type": "markdown", 43 | "metadata": {}, 44 | "source": [ 45 | "" 46 | ] 47 | }, 48 | { 49 | "cell_type": "markdown", 50 | "metadata": {}, 51 | "source": [ 52 | "" 53 | ] 54 | }, 55 | { 56 | "cell_type": "markdown", 57 | "metadata": {}, 58 | "source": [ 59 | "**Example Analogy:** \n", 60 | "Imagine each neuron as a voting participant. Each input (dendrite) casts a \"vote\" (weighted signal). Once the total votes exceed a threshold, the neuron \"fires\" an output. This high-level abstraction helps us model complex patterns.\n" 61 | ] 62 | }, 63 | { 64 | "cell_type": "markdown", 65 | "metadata": {}, 66 | "source": [ 67 | "❗️ **Important Note:** Artificial neurons are *not* perfect replicas of real neurons, but they capture enough important features (input processing, threshold-based firing) to be useful for many computational tasks.\n" 68 | ] 69 | }, 70 | { 71 | "cell_type": "markdown", 72 | "metadata": {}, 73 | "source": [ 74 | "To understand neural networks, let’s define a few key terms and ideas:\n", 75 | "\n", 76 | "1. **Inputs and Weights:** \n", 77 | " Each neuron in an ANN typically receives multiple inputs, each multiplied by a parameter called a *weight*. If you have $n$ inputs $\\{x_1, x_2, \\dots, x_n\\}$ with corresponding weights $\\{w_1, w_2, \\dots, w_n\\}$, the neuron computes a weighted sum such as: \n", 78 | " $$\n", 79 | " z = \\sum_{i=1}^{n} w_i \\, x_i + b\n", 80 | " $$ \n", 81 | " where $b$ is a bias term.\n", 82 | "\n", 83 | "2. **Activation Function:** \n", 84 | " After computing the weighted sum $z$, an *activation function* (e.g., **sigmoid**, **ReLU**, **tanh**) determines the output. For instance, the sigmoid activation function is given by: \n", 85 | " $$\n", 86 | " \\sigma(z) = \\frac{1}{1 + e^{-z}}\n", 87 | " $$\n", 88 | "\n", 89 | "3. **Layers:** \n", 90 | " - **Input Layer:** Receives raw data (e.g., pixel intensity for an image). \n", 91 | " - **Hidden Layers:** Transform the inputs through a series of weighted connections. \n", 92 | " - **Output Layer:** Delivers the final prediction or classification result.\n", 93 | "\n", 94 | "4. 
**Forward Pass:** \n", 95 | "   The process of passing input data through the layers of the network to generate an output is known as the *forward pass*.\n" 96 | ] 97 | }, 98 | { 99 | "cell_type": "markdown", 100 | "metadata": {}, 101 | "source": [ 102 | "" 103 | ] 104 | }, 105 | { 106 | "cell_type": "markdown", 107 | "metadata": {}, 108 | "source": [ 109 | "**Simple Code Example:**\n", 110 | "\n", 111 | "```python\n", 112 | "import math\n", 113 | "\n", 114 | "# A single neuron with illustrative inputs, weights, and bias\n", 115 | "inputs, weights, bias = [0.5, -1.2, 2.0], [0.4, 0.3, -0.6], 0.1\n", 116 | "\n", 117 | "# Weighted sum: z = w1*x1 + w2*x2 + w3*x3 + b\n", 118 | "z = sum(w * x for w, x in zip(weights, inputs)) + bias\n", 119 | "\n", 120 | "# Apply activation (sigmoid)\n", 121 | "output = 1 / (1 + math.exp(-z))  # ~0.22 for these values\n", 122 | "```\n" 123 | ] 124 | }, 125 | { 126 | "cell_type": "markdown", 127 | "metadata": {}, 128 | "source": [ 129 | "Neural networks have gained popularity because they can **learn complex patterns** directly from data. Unlike traditional algorithms that often require manual feature engineering, ANNs can automatically discover features and representations internally.\n", 130 | "\n", 131 | "1. **Complex Problem-Solving:** \n", 132 | "   Neural networks excel at tasks like image recognition, speech processing, and language translation—areas where it was historically hard to design hand-crafted features.\n", 133 | "\n", 134 | "2. **Adaptability and Learning:** \n", 135 | "   By adjusting the *weights* and *biases* (often through a process called *backpropagation*), the network can adapt to new data. This makes ANNs extremely powerful for real-world problems where data and patterns may be very complex.\n", 136 | "\n", 137 | "3. **Scalability:** \n", 138 | "   With modern computing capabilities, neural networks can scale to handle huge datasets. This is especially vital for tasks in fields like computer vision, where you may have millions of images.\n" 139 | ] 140 | }, 141 | { 142 | "cell_type": "markdown", 143 | "metadata": {}, 144 | "source": [ 145 | "💡 **Tip:** Neural networks perform best with large amounts of data and may require extensive computational resources. When data is scarce, simpler models or other machine learning techniques might be more appropriate.\n" 146 | ] 147 | }, 148 | { 149 | "cell_type": "markdown", 150 | "metadata": {}, 151 | "source": [ 152 | "By understanding the biological roots, basic structure, and core benefits of artificial neural networks, we set the stage for more detailed discussions on how they are trained and applied in real-world scenarios. This foundation will help you appreciate how a network's ability to learn from examples makes it an indispensable tool in modern AI."
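] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As a concrete follow-up, here is a small NumPy sketch of a forward pass through one hidden layer and an output layer (toy sizes and randomly chosen weights, purely illustrative):\n", "\n", "```python\n", "import numpy as np\n", "\n", "def sigmoid(z):\n", "    return 1 / (1 + np.exp(-z))\n", "\n", "# Toy network: 3 inputs -> 4 hidden neurons -> 1 output\n", "rng = np.random.default_rng(0)\n", "W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)  # hidden-layer weights and biases\n", "W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)  # output-layer weights and biases\n", "\n", "x = np.array([0.5, -1.2, 2.0])  # one input example\n", "\n", "# Forward pass: weighted sum + activation, layer by layer\n", "h = sigmoid(W1 @ x + b1)  # hidden activations\n", "y = sigmoid(W2 @ h + b2)  # final output in (0, 1)\n", "print(y)\n", "```\n"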
153 | ] 154 | }, 155 | { 156 | "cell_type": "markdown", 157 | "metadata": {}, 158 | "source": [ 159 | "**Table of contents**\n", 160 | "- [Foundations of Artificial Neural Networks](#toc1_)\n", 161 | "- [Historical Context and Evolution](#toc2_)\n", 162 | " - [Early Developments in Neural Networks](#toc2_1_)\n", 163 | " - [The AI Winters and Their Impact](#toc2_2_)\n", 164 | " - [The Renaissance of Neural Networks](#toc2_3_)\n", 165 | "- [Core Architecture of Neural Networks](#toc3_)\n", 166 | " - [Layers: Input, Hidden, and Output](#toc3_1_)\n", 167 | " - [Activation Functions](#toc3_2_)\n", 168 | " - [Training and Learning Overview](#toc3_3_)\n", 169 | "- [Summary](#toc4_)\n", 170 | "\n", 177 | "" 178 | ] 179 | }, 180 | { 181 | "cell_type": "markdown", 182 | "metadata": {}, 183 | "source": [ 184 | "## [Historical Context and Evolution](#toc0_)" 185 | ] 186 | }, 187 | { 188 | "cell_type": "markdown", 189 | "metadata": {}, 190 | "source": [ 191 | "Neural networks did not become popular overnight. Their development is rooted in a series of breakthroughs, setbacks, and revivals that collectively shaped the field of artificial intelligence (AI). Understanding this history helps us appreciate both the technical and societal factors that influenced neural network research.\n" 192 | ] 193 | }, 194 | { 195 | "cell_type": "markdown", 196 | "metadata": {}, 197 | "source": [ 198 | "" 199 | ] 200 | }, 201 | { 202 | "cell_type": "markdown", 203 | "metadata": {}, 204 | "source": [ 205 | "### [Early Developments in Neural Networks](#toc0_)\n" 206 | ] 207 | }, 208 | { 209 | "cell_type": "markdown", 210 | "metadata": {}, 211 | "source": [ 212 | "In the early days of computing, researchers were captivated by the idea of creating a machine that could emulate human learning. The concept of the **artificial neuron** emerged as early as the 1940s:\n" 213 | ] 214 | }, 215 | { 216 | "cell_type": "markdown", 217 | "metadata": {}, 218 | "source": [ 219 | "The first significant milestone came in 1943 when **Warren McCulloch** and **Walter Pitts** proposed a simplified mathematical model of a neuron. They demonstrated how simple neuron-like structures could perform logical computations.\n" 220 | ] 221 | }, 222 | { 223 | "cell_type": "markdown", 224 | "metadata": {}, 225 | "source": [ 226 | "A few years later, in 1958, **Frank Rosenblatt** introduced the **Perceptron**, which is often regarded as the foundation of modern neural networks. 
The perceptron algorithm was designed to classify an input into one of two categories by adjusting weights in response to errors—a process resembling the notion of learning from mistakes.\n" 227 | ] 228 | }, 229 | { 230 | "cell_type": "markdown", 231 | "metadata": {}, 232 | "source": [ 233 | "Despite these beginnings, early neural network models faced limitations:\n", 234 | "- They could solve only linearly separable problems.\n", 235 | "- They lacked an efficient method to train multi-layer structures.\n" 236 | ] 237 | }, 238 | { 239 | "cell_type": "markdown", 240 | "metadata": {}, 241 | "source": [ 242 | "❗️ **Important Note:** The inability to go beyond single-layer nets initially stunted neural networks’ progress, sowing the seeds for a period known as the **AI Winter**.\n" 243 | ] 244 | }, 245 | { 246 | "cell_type": "markdown", 247 | "metadata": {}, 248 | "source": [ 249 | "### [The AI Winters and Their Impact](#toc0_)\n" 250 | ] 251 | }, 252 | { 253 | "cell_type": "markdown", 254 | "metadata": {}, 255 | "source": [ 256 | "Starting in the 1970s, neural network research encountered setbacks and waning interest. Funding agencies grew skeptical because of:\n", 257 | "- Overpromises: Bold claims that neural networks would solve complex problems without sufficient technical grounding.\n", 258 | "- Limited computing resources: Hardware constraints made large-scale experiments virtually impossible.\n", 259 | "- Theoretical critiques: Influential analyses (like Minsky and Papert’s book, *Perceptrons*) highlighted fundamental challenges with single-layer perceptrons.\n" 260 | ] 261 | }, 262 | { 263 | "cell_type": "markdown", 264 | "metadata": {}, 265 | "source": [ 266 | "" 267 | ] 268 | }, 269 | { 270 | "cell_type": "markdown", 271 | "metadata": {}, 272 | "source": [ 273 | "" 274 | ] 275 | }, 276 | { 277 | "cell_type": "markdown", 278 | "metadata": {}, 279 | "source": [ 280 | "Two major **AI Winters**—spanning roughly from the mid-1970s to the mid-1980s, and again in the late 1980s to the 1990s—led to severe reductions in AI research funding. Neural network research was particularly hard-hit during these periods.\n" 281 | ] 282 | }, 283 | { 284 | "cell_type": "markdown", 285 | "metadata": {}, 286 | "source": [ 287 | "Although this era saw fewer publicized breakthroughs, it prompted some researchers to explore foundational ideas in greater depth. This groundwork would later prove crucial in sparking a renewed interest in neural networks.\n" 288 | ] 289 | }, 290 | { 291 | "cell_type": "markdown", 292 | "metadata": {}, 293 | "source": [ 294 | "### [The Renaissance of Neural Networks](#toc0_)\n" 295 | ] 296 | }, 297 | { 298 | "cell_type": "markdown", 299 | "metadata": {}, 300 | "source": [ 301 | "The tide began to turn in the mid-1980s. A key driver was the **backpropagation algorithm**, popularized by **Rumelhart, Hinton, and Williams** in 1986. This method enabled learning in multi-layer networks by systematically **adjusting** the weights based on errors in the network’s output. 
Suddenly, it was possible to tackle more elaborate, **non-linear** classification problems.\n" 302 | ] 303 | }, 304 | { 305 | "cell_type": "markdown", 306 | "metadata": {}, 307 | "source": [ 308 | "" 309 | ] 310 | }, 311 | { 312 | "cell_type": "markdown", 313 | "metadata": {}, 314 | "source": [ 315 | "From there, a combination of factors fueled the renaissance:\n", 316 | "- **Increased Computing Power:** Advances in CPU and GPU technology made training deeper networks feasible.\n", 317 | "- **Big Data Availability:** The internet era offered massive datasets that effectively “fed” neural networks.\n", 318 | "- **Algorithmic Refinements:** Techniques like the convolutional neural network (CNN) and long short-term memory (LSTM) architecture showed impressive results in vision and language tasks, respectively.\n" 319 | ] 320 | }, 321 | { 322 | "cell_type": "markdown", 323 | "metadata": {}, 324 | "source": [ 325 | "Today, neural networks underpin everything from **image recognition** to **speech-to-text systems** and **machine translation**. This revival, often referred to as the **Deep Learning Revolution**, continues to redefine the boundaries of what AI can achieve.\n" 326 | ] 327 | }, 328 | { 329 | "cell_type": "markdown", 330 | "metadata": {}, 331 | "source": [ 332 | "By tracing these historical milestones, we see that neural networks weren’t born fully formed—they evolved through passionate research, skepticism, breakthroughs, and setbacks. This understanding underscores the importance of persistence in research and the interplay between theory, computing resources, and real-world data." 333 | ] 334 | }, 335 | { 336 | "cell_type": "markdown", 337 | "metadata": {}, 338 | "source": [ 339 | "## [Core Architecture of Neural Networks](#toc0_)" 340 | ] 341 | }, 342 | { 343 | "cell_type": "markdown", 344 | "metadata": {}, 345 | "source": [ 346 | "Understanding the core architecture of neural networks is essential for grasping how they learn representations of data. In this section, we’ll break down the foundational elements of a typical neural network, examine the role of activation functions, and explore how training is managed at a high level. By the end, you’ll see how these pieces fit together to create a powerful, flexible system that can handle complex prediction tasks.\n" 347 | ] 348 | }, 349 | { 350 | "cell_type": "markdown", 351 | "metadata": {}, 352 | "source": [ 353 | "### [Layers: Input, Hidden, and Output](#toc0_)\n" 354 | ] 355 | }, 356 | { 357 | "cell_type": "markdown", 358 | "metadata": {}, 359 | "source": [ 360 | "A neural network is composed of **layers** of interconnected nodes (often referred to as *neurons*). Each layer serves a distinct purpose:\n", 361 | "\n", 362 | "1. **Input Layer:** \n", 363 | " This is where your raw data enters the network. If you’re processing images, for example, the input layer might receive pixel values. The input layer doesn’t typically perform any computations; it just forwards information to the next layer.\n", 364 | "\n", 365 | "2. **Hidden Layers:** \n", 366 | " The “hidden” part comes from the fact that these layers are not directly visible as inputs or outputs. They perform transformations on the data by applying weights, biases, and activation functions. A network can have **one or multiple** hidden layers, depending on the complexity of the problem. \n", 367 | " - **Deep Networks** contain many hidden layers, allowing them to learn complex features from the data. 
\n", 368 | " - **Shallow Networks** have fewer hidden layers, which can be easier to train but may be less expressive for intricate tasks.\n", 369 | "\n", 370 | "3. **Output Layer:** \n", 371 | " This is the final layer that produces the predictions or classifications. For instance, a binary classification network might have a single output neuron with a sigmoid activation to produce a probability value between 0 and 1. For multiclass tasks, you might see a *softmax* function that provides probabilities across multiple categories.\n" 372 | ] 373 | }, 374 | { 375 | "cell_type": "markdown", 376 | "metadata": {}, 377 | "source": [ 378 | "" 379 | ] 380 | }, 381 | { 382 | "cell_type": "markdown", 383 | "metadata": {}, 384 | "source": [ 385 | "In essence, the information flows forward **layer by layer** (the *forward pass*) until an output is generated. The way signals progress between these layers determines how well the network learns to map inputs to desired outputs.\n" 386 | ] 387 | }, 388 | { 389 | "cell_type": "markdown", 390 | "metadata": {}, 391 | "source": [ 392 | "💡 **Tip:** The number and size of hidden layers significantly influence a network’s performance. More layers (or neurons per layer) can capture greater complexity, but they also require more data and computational power to train effectively.\n" 393 | ] 394 | }, 395 | { 396 | "cell_type": "markdown", 397 | "metadata": {}, 398 | "source": [ 399 | "### [Activation Functions](#toc0_)\n" 400 | ] 401 | }, 402 | { 403 | "cell_type": "markdown", 404 | "metadata": {}, 405 | "source": [ 406 | "A crucial element that gives neural networks their learning power is the **activation function**. After computing a weighted sum (plus a bias), neurons pass this value through an activation function that introduces **nonlinearity**. 
This nonlinearity is what enables neural networks to learn complex mappings from inputs to outputs.\n" 407 | ] 408 | }, 409 | { 410 | "cell_type": "markdown", 411 | "metadata": {}, 412 | "source": [ 413 | "" 414 | ] 415 | }, 416 | { 417 | "cell_type": "markdown", 418 | "metadata": {}, 419 | "source": [ 420 | "Some common activation functions include:\n", 421 | "\n", 422 | "- **Sigmoid**:\n", 423 | "  $$\n", 424 | "  \\sigma(z) = \\frac{1}{1 + e^{-z}}\n", 425 | "  $$\n", 426 | "  - Outputs values in the range $(0, 1)$.\n", 427 | "  - Often used in the output layer for binary classification.\n", 428 | "\n", 429 | "- **Tanh**:\n", 430 | "  $$\n", 431 | "  \\tanh(z) = \\frac{e^z - e^{-z}}{e^z + e^{-z}}\n", 432 | "  $$\n", 433 | "  - Outputs values in the range $(-1, 1)$.\n", 434 | "  - Can be more effective than sigmoid in hidden layers because its outputs are centered around 0.\n", 435 | "\n", 436 | "- **ReLU (Rectified Linear Unit)**:\n", 437 | "  $$\n", 438 | "  \\text{ReLU}(z) = \\max(0, z)\n", 439 | "  $$\n", 440 | "  - Outputs 0 when $z < 0$ and $z$ otherwise.\n", 441 | "  - Popular for deep networks because it helps alleviate the vanishing gradient problem.\n", 442 | "\n", 443 | "- **Leaky ReLU** and **ELU**:\n", 444 | "  - Variations of ReLU designed to fix some of its shortcomings, such as “dying ReLUs” where gradients become zero for negative inputs.\n" 445 | ] 446 | }, 447 | { 448 | "cell_type": "markdown", 449 | "metadata": {}, 450 | "source": [ 451 | "**Example Usage in Code:**\n", 452 | "```python\n", 453 | "import math\n", 454 | "def relu(z):\n", 455 | "    return max(0, z)\n", 456 | "\n", 457 | "def sigmoid(z):\n", 458 | "    return 1 / (1 + math.exp(-z))\n", 459 | "```\n" 460 | ] 461 | }, 462 | { 463 | "cell_type": "markdown", 464 | "metadata": {}, 465 | "source": [ 466 | "❗️ **Important Note:** Choosing an activation function is context-dependent. Different tasks may benefit from different functions, and experimentation or domain knowledge often guides this decision.\n" 467 | ] 468 | }, 469 | { 470 | "cell_type": "markdown", 471 | "metadata": {}, 472 | "source": [ 473 | "### [Training and Learning Overview](#toc0_)\n" 474 | ] 475 | }, 476 | { 477 | "cell_type": "markdown", 478 | "metadata": {}, 479 | "source": [ 480 | "Once we define a network’s layers and activation functions, the next step is **training** the network to perform a specific task, such as classification or regression. While a detailed look at training algorithms (like backpropagation) will come later, here’s a high-level overview:\n", 481 | "\n", 482 | "1. **Forward Pass:** \n", 483 | "   - Input data is fed into the network. \n", 484 | "   - Each layer computes its neurons’ outputs, culminating in final predictions.\n", 485 | "\n", 486 | "2. **Loss Calculation:** \n", 487 | "   - Compare predictions with *ground truth* labels using a loss function (e.g., Mean Squared Error for regression, Cross-Entropy for classification). \n", 488 | "   - The chosen loss function quantifies the difference between the network’s predictions and the correct answers.\n", 489 | "\n", 490 | "3. **Backward Pass (Backpropagation):** \n", 491 | "   - The network adjusts its weights and biases in the opposite direction of the loss gradient. \n", 492 | "   - This step ensures that neurons *contribute less* to errors in future iterations.\n", 493 | "\n", 494 | "4. **Optimization and Updates:** \n", 495 | "   - Optimizers like **SGD (Stochastic Gradient Descent)** or **Adam** update the parameters in small steps. 
\n", 496 | " - The network iterates this cycle many times (epochs), gradually reducing error on the training data.\n" 497 | ] 498 | }, 499 | { 500 | "cell_type": "markdown", 501 | "metadata": {}, 502 | "source": [ 503 | "Over time, the **trained model** ideally generalizes to new, unseen data. The architecture (layers and activations) is what enables complex transformations of inputs through multiple stages, ultimately refining how the network represents data internally.\n" 504 | ] 505 | }, 506 | { 507 | "cell_type": "markdown", 508 | "metadata": {}, 509 | "source": [ 510 | "By connecting these pieces—layers, activation functions, and the training loop—you gain insight into how a neural network “learns” from its mistakes. This foundational understanding paves the way for delving deeper into specific architectures and specialized networks in more advanced settings.\n", 511 | "```" 512 | ] 513 | }, 514 | { 515 | "cell_type": "markdown", 516 | "metadata": { 517 | "vscode": { 518 | "languageId": "plaintext" 519 | } 520 | }, 521 | "source": [ 522 | "## [Summary](#toc0_)" 523 | ] 524 | }, 525 | { 526 | "cell_type": "markdown", 527 | "metadata": {}, 528 | "source": [ 529 | "Having explored the foundations, historical milestones, core architecture, and practical applications of artificial neural networks, it's clear that these models are transformative in the way machines learn from data. From simple beginnings in perceptrons to sophisticated deep architectures capable of understanding images, text, and beyond—neural networks have truly revolutionized modern computing.\n" 530 | ] 531 | }, 532 | { 533 | "cell_type": "markdown", 534 | "metadata": {}, 535 | "source": [ 536 | "Artificial Neural Networks (ANNs) derive their power from their layered structure and the ability to learn complex, nonlinear relationships. Here’s a quick recap of the essentials:\n", 537 | "\n", 538 | "- **Biological Inspiration:** ANNs are inspired by how neurons in our brains process information, though they are simplified abstractions. \n", 539 | "- **Historical Evolution:** Despite early excitement, neural networks faced setbacks during the AI Winters. However, the resurgence of **backpropagation** and increased computing resources ignited a new era of deep learning. \n", 540 | "- **Core Components:** Layers, weights, biases, activation functions, and a training mechanism (often via backpropagation) combine to produce learned models. \n", 541 | "- **Widespread Applications:** From computer vision to language applications and beyond, neural networks excel at tasks where traditional machine learning might struggle.\n" 542 | ] 543 | }, 544 | { 545 | "cell_type": "markdown", 546 | "metadata": {}, 547 | "source": [ 548 | "While neural networks are powerful, they are not a universal solution to every problem. A few critical points to keep in mind:\n", 549 | "\n", 550 | "1. **Data Requirements:** Networks typically need large, well-labeled datasets, which can be costly or difficult to obtain. \n", 551 | "2. **Computational Demand:** Training deep models often requires specialized hardware like GPUs or TPUs. \n", 552 | "3. **Interpretability:** Explaining how a neural network arrives at specific decisions can be challenging, raising concerns in regulated industries. \n", 553 | "4. 
**Overfitting:** With high capacity, ANNs can memorize training data rather than truly learning, necessitating robust regularization and careful evaluation.\n" 554 | ] 555 | }, 556 | { 557 | "cell_type": "markdown", 558 | "metadata": {}, 559 | "source": [ 560 | "❗️ **Important Note:** Not every data science challenge needs a neural network. Sometimes simpler models are both adequate and more interpretable.\n" 561 | ] 562 | }, 563 | { 564 | "cell_type": "markdown", 565 | "metadata": {}, 566 | "source": [ 567 | "Neural networks continue to evolve, with **deep learning** methods pushing the boundaries of AI in fields like medical diagnosis, autonomous vehicles, and robotics. Although this lecture provides a foundational understanding, the field is vast. Here are some avenues to explore as you progress:\n", 568 | "\n", 569 | "- **Advanced Architectures:** Investigate **Convolutional Neural Networks (CNNs)** and **Recurrent Neural Networks (RNNs)** for more complex tasks. \n", 570 | "- **Optimization Techniques:** Learn about momentum-based and adaptive optimizers, such as **Adam** or **RMSProp**, to enhance training performance. \n", 571 | "- **Explainable AI:** Delve into methods that make network decisions more understandable and transparent. \n", 572 | "- **Ethical and Societal Implications:** As networks become more pervasive, considerations like bias, fairness, and accountability become increasingly important.\n" 573 | ] 574 | }, 575 | { 576 | "cell_type": "markdown", 577 | "metadata": {}, 578 | "source": [ 579 | "💡 **Tip:** Keep experimenting! Building personal projects, contributing to open-source libraries, or competing in AI challenges can deepen your intuition and practical understanding.\n" 580 | ] 581 | }, 582 | { 583 | "cell_type": "markdown", 584 | "metadata": {}, 585 | "source": [ 586 | "By combining this knowledge with hands-on experience and continuous learning, you pave the way to becoming proficient in designing, training, and deploying neural network models. This concludes our introduction to Neural Networks, setting the stage for deeper explorations and specialized applications." 587 | ] 588 | } 589 | ], 590 | "metadata": { 591 | "kernelspec": { 592 | "display_name": "py310", 593 | "language": "python", 594 | "name": "python3" 595 | }, 596 | "language_info": { 597 | "name": "python", 598 | "version": "3.10.12" 599 | } 600 | }, 601 | "nbformat": 4, 602 | "nbformat_minor": 2 603 | } 604 | -------------------------------------------------------------------------------- /Lectures/10 Neural Networks and Deep Learning/06 Introduction to Deep Learning.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "" 8 | ] 9 | }, 10 | { 11 | "cell_type": "markdown", 12 | "metadata": {}, 13 | "source": [ 14 | "# Introduction to Deep Learning" 15 | ] 16 | }, 17 | { 18 | "cell_type": "markdown", 19 | "metadata": {}, 20 | "source": [ 21 | "Deep learning has revolutionized the field of artificial intelligence by enabling machines to learn complex patterns from large datasets. It builds upon traditional neural networks by stacking multiple layers, often leading to significantly improved performance in various computer vision, natural language processing, and speech recognition tasks. 
In this lecture, we’ll explore the fundamental ideas behind deep learning, understand how it differs from shallow neural networks, and see why it’s so influential in modern machine learning research.\n" 22 | ] 23 | }, 24 | { 25 | "cell_type": "markdown", 26 | "metadata": {}, 27 | "source": [ 28 | "" 29 | ] 30 | }, 31 | { 32 | "cell_type": "markdown", 33 | "metadata": {}, 34 | "source": [ 35 | "Deep learning refers to a family of neural network architectures characterized by multiple layers of processing. Each layer refines and transforms the data, allowing the network to automatically learn hierarchical feature representations.\n" 36 | ] 37 | }, 38 | { 39 | "cell_type": "markdown", 40 | "metadata": {}, 41 | "source": [ 42 | "Early ideas of neural networks trace back to the 1940s and 1950s, with the *Hebbian learning rule* and the *McCulloch-Pitts neuron model*. However, limited computing power and data availability initially curbed progress. The resurgence of neural networks in the late 2000s—fueled by *faster GPUs*, *big data*, and innovative new architectures—marked the start of the deep learning revolution.\n" 43 | ] 44 | }, 45 | { 46 | "cell_type": "markdown", 47 | "metadata": {}, 48 | "source": [ 49 | "❗️ **Important Note:** Despite its current popularity, deep learning is still evolving. Breakthroughs in architecture designs and training methodologies continue to reshape what’s possible in the industry.\n" 50 | ] 51 | }, 52 | { 53 | "cell_type": "markdown", 54 | "metadata": {}, 55 | "source": [ 56 | "Deep learning became a game-changer due to its fundamentally different approach to feature extraction. Traditional machine learning typically required laborious *feature engineering* by domain experts. Deep networks, however, learn these features directly from raw data, making them highly adaptable to diverse tasks and domains.\n", 57 | "\n", 58 | "- **Automatic Feature Learning** \n", 59 | " Instead of manually crafting features (e.g., edges, corners, shapes for images), deep networks learn features across multiple levels of abstraction. \n", 60 | "- **Scalability** \n", 61 | " As data grows, deep learning architectures can learn increasingly nuanced representations. \n", 62 | "- **Performance Gains** \n", 63 | " Empirically, deeper models often outperform shallower counterparts, especially when dealing with unstructured data like images and text.\n" 64 | ] 65 | }, 66 | { 67 | "cell_type": "markdown", 68 | "metadata": {}, 69 | "source": [ 70 | "💡 **Tip:** When deciding between simple models and deep learning, consider the complexity of your problem, the size of your dataset, and the computational resources at your disposal.\n" 71 | ] 72 | }, 73 | { 74 | "cell_type": "markdown", 75 | "metadata": {}, 76 | "source": [ 77 | "A typical neural network with just one hidden layer is often referred to as a *shallow* architecture. While these can approximate many functions theoretically, they often struggle to capture complicated relationships in practical applications.\n" 78 | ] 79 | }, 80 | { 81 | "cell_type": "markdown", 82 | "metadata": {}, 83 | "source": [ 84 | "Deep architectures build on this idea by adding more layers:\n", 85 | "\n", 86 | "1. **Input Layer** \n", 87 | " Receives the raw data (e.g., pixels in an image, tokens in a sentence).\n", 88 | "2. **Multiple Hidden Layers** \n", 89 | " Each layer transforms and refines the representation. For example, a lower layer might capture simple features like edges in images, while higher layers capture objects and concepts.\n", 90 | "3. 
**Output Layer** \n", 91 | "   Produces the final prediction or classification, often via a softmax or another activation function.\n" 92 | ] 93 | }, 94 | { 95 | "cell_type": "markdown", 96 | "metadata": {}, 97 | "source": [ 98 | "" 99 | ] 100 | }, 101 | { 102 | "cell_type": "markdown", 103 | "metadata": {}, 104 | "source": [ 105 | "This stacking of layers can be captured mathematically by repeatedly applying transformations to the input data, for example:\n", 106 | "\n", 107 | "$$\n", 108 | "\\mathbf{h}^{(1)} = f(\\mathbf{W}^{(1)} \\mathbf{x} + \\mathbf{b}^{(1)}) \n", 109 | "$$\n", 110 | "$$\n", 111 | "\\mathbf{h}^{(2)} = f(\\mathbf{W}^{(2)} \\mathbf{h}^{(1)} + \\mathbf{b}^{(2)})\n", 112 | "$$\n", 113 | "$$\n", 114 | "\\ldots\n", 115 | "$$\n", 116 | "$$\n", 117 | "\\mathbf{y} = f(\\mathbf{W}^{(n)} \\mathbf{h}^{(n-1)} + \\mathbf{b}^{(n)})\n", 118 | "$$\n" 119 | ] 120 | }, 121 | { 122 | "cell_type": "markdown", 123 | "metadata": {}, 124 | "source": [ 125 | "Where $ f $ is a non-linear activation function (e.g., ReLU, Sigmoid). The depth (number of layers) and breadth (number of neurons per layer) can significantly affect performance.\n" 126 | ] 127 | }, 128 | { 129 | "cell_type": "markdown", 130 | "metadata": {}, 131 | "source": [ 132 | "By the end of this lecture, you’ll understand the fundamental reasons behind the emergence of deep learning and how these deeper network structures unlock new possibilities in AI. \n" 133 | ] 134 | }, 135 | { 136 | "cell_type": "markdown", 137 | "metadata": {}, 138 | "source": [ 139 | "Next, we’ll explore the *key characteristics* of deep learning and how these architectures are applied in practice. Let’s dive in!" 140 | ] 141 | }, 142 | { 143 | "cell_type": "markdown", 144 | "metadata": {}, 145 | "source": [ 146 | "**Table of contents** \n", 147 | "- [Key Characteristics of Deep Learning](#toc1_) \n", 148 | "  - [Multiple Layers and Representational Learning](#toc1_1_) \n", 149 | "  - [Automatic Feature Extraction](#toc1_2_) \n", 150 | "  - [Scalability with Large Datasets](#toc1_3_) \n", 151 | "- [Overview of Common Architectures](#toc2_) \n", 152 | "  - [Convolutional Neural Networks (CNNs)](#toc2_1_) \n", 153 | "  - [Recurrent Neural Networks (RNNs)](#toc2_2_) \n", 154 | "  - [Transformers](#toc2_3_) \n", 161 | "  - [Graph Neural Networks](#toc2_4_) \n", 162 | "- [Summary and Next Steps](#toc3_) \n", 163 | "\n", 164 | "\n", 171 | "" 172 | ] 173 | }, 174 | { 175 | "cell_type": "markdown", 176 | "metadata": {}, 177 | "source": [ 178 | "## [Key Characteristics of Deep Learning](#toc0_)" 179 | ] 180 | }, 181 | { 182 | "cell_type": "markdown", 183 | "metadata": {}, 184 | "source": [ 185 | "Deep learning stands out from traditional machine learning approaches thanks to a few remarkable traits. These include its ability to handle multiple layers of transformations, automatically discover meaningful features, and scale effectively with large amounts of data. 
By leveraging these characteristics, deep learning models often achieve state-of-the-art results in tasks ranging from image recognition to natural language processing.\n" 186 | ] 187 | }, 188 | { 189 | "cell_type": "markdown", 190 | "metadata": {}, 191 | "source": [ 192 | "### [Multiple Layers and Representational Learning](#toc0_)\n" 193 | ] 194 | }, 195 | { 196 | "cell_type": "markdown", 197 | "metadata": {}, 198 | "source": [ 199 | "When we talk about *deep* neural networks, we’re referring to models with many layers of connected neurons. Each layer learns a progressively more complex representation of the input data:\n", 200 | "\n", 201 | "- **Layered Abstractions**: Lower layers might learn to detect simple edges or corners in an image, while higher layers combine these edges into more advanced concepts (like shapes or textures). Eventually, the network can recognize objects or entire scenes. \n", 202 | "- **Hierarchical Feature Discovery**: By stacking these transformations, deep networks can naturally learn a hierarchy of features that represent data at multiple levels of abstraction.\n" 203 | ] 204 | }, 205 | { 206 | "cell_type": "markdown", 207 | "metadata": {}, 208 | "source": [ 209 | "" 210 | ] 211 | }, 212 | { 213 | "cell_type": "markdown", 214 | "metadata": {}, 215 | "source": [ 216 | "❗️ **Important Note:** Deep models can be used for various data forms—images, text, audio—and can *adapt* their internal representations accordingly, making them highly versatile.\n" 217 | ] 218 | }, 219 | { 220 | "cell_type": "markdown", 221 | "metadata": {}, 222 | "source": [ 223 | "### [Automatic Feature Extraction](#toc0_)\n" 224 | ] 225 | }, 226 | { 227 | "cell_type": "markdown", 228 | "metadata": {}, 229 | "source": [ 230 | "One of the major strengths of deep learning is that it lowers the reliance on handcrafted features:\n", 231 | "\n", 232 | "- **Less Manual Engineering**: In traditional machine learning, designing features (e.g., edge detectors, color histograms) often required domain expertise. Deep neural networks learn these features directly from raw data, speeding up development. \n", 233 | "- **Generalization Across Tasks**: Features learned from one domain (like image classification) can sometimes transfer to related problems (like object detection), reducing the need for large datasets or specialized feature engineering for new tasks. \n" 234 | ] 235 | }, 236 | { 237 | "cell_type": "markdown", 238 | "metadata": {}, 239 | "source": [ 240 | "" 241 | ] 242 | }, 243 | { 244 | "cell_type": "markdown", 245 | "metadata": {}, 246 | "source": [ 247 | "💡 **Tip:** Although deep networks automate feature extraction, it’s still crucial to prepare your data carefully. Proper preprocessing and normalization often make a big difference in performance.\n" 248 | ] 249 | }, 250 | { 251 | "cell_type": "markdown", 252 | "metadata": {}, 253 | "source": [ 254 | "### [Scalability with Large Datasets](#toc0_)" 255 | ] 256 | }, 257 | { 258 | "cell_type": "markdown", 259 | "metadata": {}, 260 | "source": [ 261 | "Deep learning algorithms generally perform better when more data is available:\n", 262 | "\n", 263 | "- **Data-Hungry Models**: By increasing the number of training examples, deep networks can continue refining their internal representations, often achieving impressive improvements in accuracy. 
\n", 264 | "- **Leverage of Modern Hardware**: Advances in GPU computing and distributed systems enable researchers and engineers to train larger, deeper networks on massive datasets, pushing the boundaries of what AI can accomplish.\n" 265 | ] 266 | }, 267 | { 268 | "cell_type": "markdown", 269 | "metadata": {}, 270 | "source": [ 271 | "" 272 | ] 273 | }, 274 | { 275 | "cell_type": "markdown", 276 | "metadata": {}, 277 | "source": [ 278 | "❗️ **Important Note:** While deep learning scales well with large datasets, this also means it can be prone to *overfitting* if your dataset is too small or not representative. Regularization techniques and careful data collection become essential in these scenarios.\n" 279 | ] 280 | }, 281 | { 282 | "cell_type": "markdown", 283 | "metadata": {}, 284 | "source": [ 285 | "By understanding these *key characteristics*, you’ll see why deep learning has become so widely adopted. Its capacity for layered, automated feature learning and its scalability with large datasets allow practitioners to tackle problems once considered out of reach. Next, we’ll delve into the **common architectures** that bring these principles to life in real-world applications." 286 | ] 287 | }, 288 | { 289 | "cell_type": "markdown", 290 | "metadata": {}, 291 | "source": [ 292 | "## [Overview of Common Architectures](#toc0_)" 293 | ] 294 | }, 295 | { 296 | "cell_type": "markdown", 297 | "metadata": {}, 298 | "source": [ 299 | "Deep learning isn’t limited to just one type of network structure. Instead, there are several architectures designed to tackle different problem domains. From image recognition to language modeling, each architecture boasts unique strengths and applications. Below, we’ll explore some of the most commonly used deep learning architectures.\n" 300 | ] 301 | }, 302 | { 303 | "cell_type": "markdown", 304 | "metadata": {}, 305 | "source": [ 306 | "### [Convolutional Neural Networks (CNNs)](#toc0_)\n" 307 | ] 308 | }, 309 | { 310 | "cell_type": "markdown", 311 | "metadata": {}, 312 | "source": [ 313 | "Convolutional Neural Networks (CNNs) are the go-to choice for problems involving visual data, such as image classification, object detection, or even video processing. Instead of the fully connected layers found in basic neural networks, CNNs use *convolutional filters* (kernels) that slide across an input (e.g., an image) to detect specific patterns.\n", 314 | "\n", 315 | "- **Local Receptive Fields**: A small region of the image is connected to a neuron in the first layer, making the model more efficient and less prone to overfitting.\n", 316 | "- **Parameter Sharing**: The same filter is reused across the image, drastically reducing the number of parameters.\n", 317 | "- **Pooling Layers**: Commonly used (e.g., *max pooling*) to reduce spatial dimensions, helping the network focus on critical features and improving computational efficiency.\n" 318 | ] 319 | }, 320 | { 321 | "cell_type": "markdown", 322 | "metadata": {}, 323 | "source": [ 324 | "" 325 | ] 326 | }, 327 | { 328 | "cell_type": "markdown", 329 | "metadata": {}, 330 | "source": [ 331 | "💡 **Tip:** CNNs aren’t limited to image tasks. 
They can also handle 1D data (like time-series) and 3D data (like volumetric scans in medical imaging).\n" 332 | ] 333 | }, 334 | { 335 | "cell_type": "markdown", 336 | "metadata": {}, 337 | "source": [ 338 | "Below is a simple CNN in Keras:\n" 339 | ] 340 | }, 341 | { 342 | "cell_type": "markdown", 343 | "metadata": {}, 344 | "source": [ 345 | "```python\n", 346 | "import tensorflow as tf\n", 347 | "from tensorflow import keras\n", 348 | "from tensorflow.keras import layers\n", 349 | "\n", 350 | "model = keras.Sequential([\n", 351 | " layers.Conv2D(32, (3,3), activation='relu', input_shape=(28, 28, 1)),\n", 352 | " layers.MaxPooling2D(pool_size=(2,2)),\n", 353 | " layers.Conv2D(64, (3,3), activation='relu'),\n", 354 | " layers.MaxPooling2D(pool_size=(2,2)),\n", 355 | " layers.Flatten(),\n", 356 | " layers.Dense(128, activation='relu'),\n", 357 | " layers.Dense(10, activation='softmax')\n", 358 | "])\n", 359 | "```\n" 360 | ] 361 | }, 362 | { 363 | "cell_type": "markdown", 364 | "metadata": {}, 365 | "source": [ 366 | "### [Recurrent Neural Networks (RNNs)](#toc0_)\n" 367 | ] 368 | }, 369 | { 370 | "cell_type": "markdown", 371 | "metadata": {}, 372 | "source": [ 373 | "RNNs excel at tasks involving sequential data, like text, audio, or time-series signals. They maintain a hidden state that *evolves over time*, capturing dependencies between elements in a sequence.\n" 374 | ] 375 | }, 376 | { 377 | "cell_type": "markdown", 378 | "metadata": {}, 379 | "source": [ 380 | "" 381 | ] 382 | }, 383 | { 384 | "cell_type": "markdown", 385 | "metadata": { 386 | "vscode": { 387 | "languageId": "plaintext" 388 | } 389 | }, 390 | "source": [ 391 | "Imagine you're reading a book: as you read each word, you don't start from scratch - you maintain context from previous words. Your understanding of each word is influenced by the words that came before it, and you carry this \"memory\" forward as you continue reading. This is exactly how RNNs work." 392 | ] 393 | }, 394 | { 395 | "cell_type": "markdown", 396 | "metadata": {}, 397 | "source": [ 398 | "Unlike regular neural networks, RNNs have a \"memory\" component. They maintain a hidden state that gets updated with each input, much like taking notes while reading - these notes help you understand what comes next. RNNs process data one element at a time (like words in a sentence). At each step, they look at the new input, consider their \"notes\" (hidden state) from previous steps, update their understanding, and pass this updated understanding forward.\n" 399 | ] 400 | }, 401 | { 402 | "cell_type": "markdown", 403 | "metadata": {}, 404 | "source": [ 405 | "Consider predicting the next word in: \"The cat sat on the __\". The RNN first sees \"The\" and creates an initial understanding. Then it sees \"cat\", updates its understanding, sees \"sat\", further updates, and so on until it uses all this accumulated context to predict \"mat\" or \"roof\". This makes RNNs perfect for tasks like text prediction (smartphone keyboards), translation, speech recognition, and time series prediction.\n" 406 | ] 407 | }, 408 | { 409 | "cell_type": "markdown", 410 | "metadata": {}, 411 | "source": [ 412 | "" 413 | ] 414 | }, 415 | { 416 | "cell_type": "markdown", 417 | "metadata": {}, 418 | "source": [ 419 | "However, RNNs have limitations. Like human memory, they can \"forget\" things that happened too long ago - this is called the \"vanishing gradient problem\". Solutions like LSTM and GRU were developed to handle longer sequences better." 
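] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To make this concrete, here is a toy NumPy version of the recurrence (the exact update formula appears in the next cell; weights and inputs are random and purely illustrative):\n", "\n", "```python\n", "import numpy as np\n", "\n", "rng = np.random.default_rng(0)\n", "hidden_size, input_size = 4, 3\n", "W_hh = rng.normal(size=(hidden_size, hidden_size))  # hidden-to-hidden weights\n", "W_xh = rng.normal(size=(hidden_size, input_size))   # input-to-hidden weights\n", "\n", "h = np.zeros(hidden_size)                    # initial hidden state (no 'notes' yet)\n", "sequence = rng.normal(size=(5, input_size))  # 5 time steps of toy inputs\n", "\n", "for x_t in sequence:\n", "    # update the 'memory' each step: h_t = tanh(W_hh @ h_{t-1} + W_xh @ x_t)\n", "    h = np.tanh(W_hh @ h + W_xh @ x_t)\n", "\n", "print(h)  # the final hidden state summarizes the whole sequence\n", "```\n"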
420 | ] 421 | }, 422 | { 423 | "cell_type": "markdown", 424 | "metadata": {}, 425 | "source": [ 426 | "A minimal RNN can be expressed as:\n", 427 | "$$\n", 428 | "h_t = f(W_{hh} \\cdot h_{t-1} + W_{xh} \\cdot x_t)\n", 429 | "$$\n", 430 | "where $ h_t $ is the new hidden state, and $ x_t $ is the input at time step $ t $.\n" 431 | ] 432 | }, 433 | { 434 | "cell_type": "markdown", 435 | "metadata": {}, 436 | "source": [ 437 | "" 438 | ] 439 | }, 440 | { 441 | "cell_type": "markdown", 442 | "metadata": {}, 443 | "source": [ 444 | "- **Memory of Past Inputs**: The hidden state effectively serves as a memory mechanism, allowing the network to accumulate information from prior time steps.\n", 445 | "- **Variants**: More sophisticated cells like *Long Short-Term Memory (LSTM)* and *Gated Recurrent Unit (GRU)* help manage long-range dependencies and mitigate the vanishing gradient problem.\n" 446 | ] 447 | }, 448 | { 449 | "cell_type": "markdown", 450 | "metadata": {}, 451 | "source": [ 452 | "❗️ **Important Note:** While RNNs are powerful, they can be slow to train on long sequences. Many modern architectures (like Transformers) now often outperform RNNs in natural language tasks.\n" 453 | ] 454 | }, 455 | { 456 | "cell_type": "markdown", 457 | "metadata": {}, 458 | "source": [ 459 | "### [Transformers](#toc0_)\n" 460 | ] 461 | }, 462 | { 463 | "cell_type": "markdown", 464 | "metadata": {}, 465 | "source": [ 466 | "Transformers are a newer class of deep learning models originally designed for natural language processing. They rely on *attention mechanisms* rather than sequential operations, allowing them to process entire sequences in parallel.\n" 467 | ] 468 | }, 469 | { 470 | "cell_type": "markdown", 471 | "metadata": {}, 472 | "source": [ 473 | "- **Self-Attention**: Every token in the input “pays attention” to every other token, capturing both **global** and **local** context more effectively than RNNs.\n", 474 | "- **Parallelization**: Transformers can be trained faster on large datasets because they don’t require sequential processing.\n", 475 | "- **Versatility**: While first popularized in NLP (e.g., BERT, GPT-series), Transformers are now applied to images (Vision Transformers), audio, and more.\n" 476 | ] 477 | }, 478 | { 479 | "cell_type": "markdown", 480 | "metadata": {}, 481 | "source": [ 482 | "" 483 | ] 484 | }, 485 | { 486 | "cell_type": "markdown", 487 | "metadata": {}, 488 | "source": [ 489 | "Transformers emerged as a revolutionary architecture that addresses many of RNN's limitations. Think of a translator at a conference - unlike RNNs that process words one by one, Transformers look at the entire sentence at once. Their key innovation is the attention mechanism. Consider translating \"The bank is by the river\" - you look at ALL words simultaneously to understand \"bank\" means riverbank, not financial bank." 490 | ] 491 | }, 492 | { 493 | "cell_type": "markdown", 494 | "metadata": {}, 495 | "source": [ 496 | "The architecture has several key components. First is self-attention, where each word \"looks at\" every other word to understand its context. In \"I love ice cream because it is sweet\", to understand what \"it\" refers to, the model looks back at \"ice cream\". 
Multi-head attention is like having several people analyze the same sentence - one focuses on grammar, another on subject-verb relationships, another on context, then they combine their insights.\n" 497 | ] 498 | }, 499 | { 500 | "cell_type": "markdown", 501 | "metadata": {}, 502 | "source": [ 503 | "" 504 | ] 505 | }, 506 | { 507 | "cell_type": "markdown", 508 | "metadata": {}, 509 | "source": [ 510 | "Since Transformers look at all words at once, they need position encoding to keep track of word order. The architecture consists of an encoder (which takes input text and creates deep understanding) and a decoder (which generates output while considering previous outputs).\n" 511 | ] 512 | }, 513 | { 514 | "cell_type": "markdown", 515 | "metadata": {}, 516 | "source": [ 517 | "Transformers offer several advantages over RNNs:\n", 518 | "\n", 519 | "- **Parallel processing (faster)**\n", 520 | "- **Better at handling long-range dependencies**\n", 521 | "- **No vanishing gradient problem**\n", 522 | "- **Better performance on many tasks**\n", 523 | "\n", 524 | "This is why Transformers have become the foundation for most modern language AI models like GPT-3, BERT, and others. Think of them as a super-efficient team of analysts who look at entire documents at once, with each team member focusing on different aspects, combining their insights to both understand and generate text.\n" 525 | ] 526 | }, 527 | { 528 | "cell_type": "markdown", 529 | "metadata": {}, 530 | "source": [ 531 | "Through this evolution from RNNs to Transformers, we've seen a shift from sequential processing with memory to parallel processing with attention, leading to more powerful and efficient language models." 532 | ] 533 | }, 534 | { 535 | "cell_type": "markdown", 536 | "metadata": {}, 537 | "source": [ 538 | "### [Graph Neural Networks](#toc0_)\n" 539 | ] 540 | }, 541 | { 542 | "cell_type": "markdown", 543 | "metadata": {}, 544 | "source": [ 545 | "Graph Neural Networks (GNNs) extend deep learning to graph-structured data, where nodes represent individual entities (e.g., users in a social network) and edges represent relationships between them.\n", 546 | "\n", 547 | "- **Node and Edge Features**: Graph data can involve various attributes on nodes or edges, making it more complex than grid-like structures (images) or sequences (text).\n", 548 | "- **Message Passing**: GNNs use a *message passing* framework where nodes update their state by aggregating information from their neighbors.\n", 549 | "- **Applications**: Commonly used in social network analysis, recommendation systems, and drug discovery (molecular structures).\n" 550 | ] 551 | }, 552 | { 553 | "cell_type": "markdown", 554 | "metadata": {}, 555 | "source": [ 556 | "" 557 | ] 558 | }, 559 | { 560 | "cell_type": "markdown", 561 | "metadata": {}, 562 | "source": [ 563 | "Although GNNs are still an evolving field, they hold promise for tasks that require understanding relationships beyond simple sequences or grids.\n" 564 | ] 565 | }, 566 | { 567 | "cell_type": "markdown", 568 | "metadata": {}, 569 | "source": [ 570 | "Different architectures fit different tasks. **CNNs** shine for visual data, **RNNs** handle sequential contexts, **Transformers** leverage attention for parallel sequence processing, and **GNNs** provide a framework for complex relational data. Next, we’ll discuss some key challenges of these deep architectures and how to address them." 
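] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Before the summary, here is a minimal NumPy sketch of scaled dot-product self-attention, the core computation inside Transformers (one head, toy dimensions, random values; an illustration rather than a full implementation):\n", "\n", "```python\n", "import numpy as np\n", "\n", "def softmax(z):\n", "    e = np.exp(z - z.max(axis=-1, keepdims=True))\n", "    return e / e.sum(axis=-1, keepdims=True)\n", "\n", "# Toy input: 4 tokens, each embedded in 8 dimensions\n", "rng = np.random.default_rng(0)\n", "X = rng.normal(size=(4, 8))\n", "W_q, W_k, W_v = (rng.normal(size=(8, 8)) for _ in range(3))\n", "\n", "Q, K, V = X @ W_q, X @ W_k, X @ W_v      # queries, keys, values\n", "scores = Q @ K.T / np.sqrt(K.shape[-1])  # every token scores every other token\n", "weights = softmax(scores)                # attention weights; each row sums to 1\n", "output = weights @ V                     # context-aware token representations\n", "print(weights.round(2))\n", "```\n"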
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Transformers offer several advantages over RNNs:\n",
"\n",
"- **Parallel processing (faster training)**\n",
"- **Better handling of long-range dependencies**\n",
"- **Far less trouble with vanishing gradients, since attention connects distant tokens directly**\n",
"- **Better performance on many tasks**\n",
"\n",
"This is why Transformers have become the foundation for most modern language AI models like GPT-3, BERT, and others. Think of them as a super-efficient team of analysts who look at entire documents at once, with each team member focusing on different aspects, combining their insights to both understand and generate text.\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Through this evolution from RNNs to Transformers, we've seen a shift from sequential processing with memory to parallel processing with attention, leading to more powerful and efficient language models.\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### [Graph Neural Networks](#toc0_)\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Graph Neural Networks (GNNs) extend deep learning to graph-structured data, where nodes represent individual entities (e.g., users in a social network) and edges represent relationships between them.\n",
"\n",
"- **Node and Edge Features**: Graph data can involve various attributes on nodes or edges, making it more complex than grid-like structures (images) or sequences (text).\n",
"- **Message Passing**: GNNs use a *message passing* framework where nodes update their state by aggregating information from their neighbors (see the sketch after this list).\n",
"- **Applications**: Commonly used in social network analysis, recommendation systems, and drug discovery (molecular structures).\n"
]
},
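{
"cell_type": "markdown",
"metadata": {},
"source": [
"Here is one round of message passing in plain NumPy: each node averages its neighbors' features and combines the result with its own state. This is a simplified, untrained caricature of what real GNN layers (e.g., graph convolutions) do; the tiny graph, the features, and the weight matrix are invented for illustration.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"# A tiny undirected graph with 4 nodes, given as an adjacency matrix.\n",
"A = np.array([[0, 1, 1, 0],\n",
"              [1, 0, 1, 0],\n",
"              [1, 1, 0, 1],\n",
"              [0, 0, 1, 0]], dtype=float)\n",
"\n",
"# Two features per node (e.g., simple user attributes).\n",
"X = np.array([[1.0, 0.0],\n",
"              [0.0, 1.0],\n",
"              [1.0, 1.0],\n",
"              [0.5, 0.5]])\n",
"\n",
"rng = np.random.default_rng(0)\n",
"W = rng.normal(scale=0.5, size=(2, 2))  # toy weight matrix (learned in practice)\n",
"\n",
"# One message-passing round: average neighbor features, then transform.\n",
"deg = A.sum(axis=1, keepdims=True)   # node degrees\n",
"neighbor_avg = (A @ X) / deg         # aggregate messages from neighbors\n",
"H = np.tanh((X + neighbor_avg) @ W)  # combine with own state, apply nonlinearity\n",
"\n",
"print(H)  # updated node states now reflect one hop of neighborhood context\n"
]
},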
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Although GNNs are still an evolving field, they hold promise for tasks that require understanding relationships beyond simple sequences or grids.\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Different architectures fit different tasks. **CNNs** shine for visual data, **RNNs** handle sequential contexts, **Transformers** leverage attention for parallel sequence processing, and **GNNs** provide a framework for complex relational data. Next, we’ll wrap up with a summary and some pointers for going deeper."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## [Summary and Next Steps](#toc0_)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Deep learning embodies a powerful shift in the way machines learn from data, emphasizing depth (multiple layers) and end-to-end representation learning. By stacking layers of neurons and leveraging large datasets, these models unlock complex feature hierarchies, delivering remarkable performance on tasks like image recognition, language translation, and more. However, deep learning also introduces new challenges such as computational intensity, data requirements, and the growing need for interpretability.\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Over the course of this lecture, we covered:\n",
"- **Key Characteristics of Deep Learning**: Multiple layers, representational learning, and automatic feature extraction.\n",
"- **Common Architectures**: CNNs for images, RNNs (and their variants) for sequential data, Transformers for parallel sequence processing, and briefly, GNNs for graph-structured data.\n",
"- **Challenges and Considerations**: The computational demands, the importance of large and high-quality datasets, overfitting risks, and the push for better interpretability.\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"💡 **Tip:** While deep learning can handle exceptionally complex tasks, always consider the problem domain, available data, and computational resources before choosing a deep solution.\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Deep learning has found its way into a diverse range of industries:\n",
"- **Healthcare**: Automatic diagnosis through medical image analysis.\n",
"- **Finance**: Fraud detection and automated trading strategies.\n",
"- **Transportation**: Self-driving cars and predictive maintenance.\n",
"- **Customer Service**: Chatbots and personalized recommendations.\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As the technology continues to evolve, it will likely expand into new domains, offering innovative solutions to pressing real-world problems.\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Deep learning is a vast field, and this lecture only scratches the surface. For those interested in diving deeper, consider:\n",
"- **Online Courses**: Coursera’s *Deep Learning Specialization*, Fast.ai’s *Practical Deep Learning*, and Stanford’s *CS230* or *CS231n*.\n",
"- **Books**: *Deep Learning* by Ian Goodfellow, Yoshua Bengio, and Aaron Courville; *Neural Networks and Deep Learning* by Michael Nielsen (free online).\n",
"- **Academic Papers**: Start with foundational papers, such as the *AlexNet* paper (Krizhevsky et al., 2012) and the *Attention Is All You Need* paper (Vaswani et al., 2017) for Transformers.\n",
"- **Community**: Engaging with communities like Kaggle or building personal projects is a great way to refine your skills.\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You’ve now seen how deep networks, with their layered structures and ability to automatically learn features, have pushed machine learning to new heights. As you progress, you can explore specialized architectures, experiment with large-scale datasets, and possibly tackle real-world applications. Up next, you’ll discover how to apply these insights in practical scenarios or transition to a dedicated course that dives deeper into the art and science of deep learning."
]
}
],
"metadata": {
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
--------------------------------------------------------------------------------