├── .DS_Store
├── .gitignore
├── 0_Prep
├── .ipynb_checkpoints
│ ├── 1_Python Prep-checkpoint.ipynb
│ └── 2_Python Functions-checkpoint.ipynb
├── 1_Python Prep.ipynb
└── 2_Python Functions.ipynb
├── 1_Foundations of AI Engineering
├── .DS_Store
├── 010_Machine Learning
│ ├── .DS_Store
│ ├── 1_Regression Analysis
│ │ ├── .DS_Store
│ │ ├── .ipynb_checkpoints
│ │ │ └── 8_Linear Regression Part 1-checkpoint.ipynb
│ │ ├── 1_Linear Regression with one Variable
│ │ │ ├── .ipynb_checkpoints
│ │ │ │ └── 10_ML Linear Regression-checkpoint.ipynb
│ │ │ ├── 10_ML Linear Regression.ipynb
│ │ │ ├── areas.csv
│ │ │ └── homeprices.csv
│ │ └── 2_Multiple Linear Regression
│ │ │ ├── .ipynb_checkpoints
│ │ │ └── 10_Multiple Linear Regression-checkpoint.ipynb
│ │ │ ├── 10_Multiple Linear Regression.ipynb
│ │ │ ├── IPL IMB381IPL2013.csv
│ │ │ ├── e.png
│ │ │ ├── ee.png
│ │ │ ├── q.png
│ │ │ ├── rr.png
│ │ │ └── w.png
│ └── 2_Logistic Regression
│ │ ├── .ipynb_checkpoints
│ │ └── 10_Logistic Regression-checkpoint.ipynb
│ │ ├── 10_Logistic Regression.ipynb
│ │ ├── c.png
│ │ ├── insurance_data.csv
│ │ └── titanic.csv
├── 011_Deep Learning
│ ├── .DS_Store
│ ├── .ipynb_checkpoints
│ │ └── 11_Deep Learning and Neural Networks-checkpoint.ipynb
│ ├── 11_Activation Functions Part 1 and Part 2
│ │ ├── .ipynb_checkpoints
│ │ │ └── 11_Activation Functions Part 1 and Part 2-checkpoint.ipynb
│ │ ├── 11_Activation Functions Part 1 and Part 2.ipynb
│ │ ├── a1.png
│ │ ├── a2.png
│ │ ├── a3.png
│ │ ├── a5.png
│ │ ├── a6.png
│ │ ├── a7.jpg
│ │ └── s4.png
│ ├── 11_Deep Learning and Neural Networks.ipynb
│ ├── c1.png
│ ├── c14.png
│ ├── c16.png
│ ├── c2.png
│ ├── c5.png
│ ├── c6.png
│ ├── l1.png
│ ├── n1.png
│ ├── n11.jpeg
│ ├── n2.png
│ ├── n3.png
│ ├── n4.png
│ ├── n5.png
│ ├── nn1.gif
│ ├── nn2.png
│ ├── nn3.png
│ ├── r2.png
│ └── t1.png
├── 012_MLops end-to-end project
│ ├── .DS_Store
│ ├── insurance -claim-prediction-mlops
│ │ ├── .github
│ │ │ └── workflows
│ │ │ │ └── cicd.yaml
│ │ ├── .gitignore
│ │ ├── Dockerfile
│ │ ├── README.md
│ │ ├── app.py
│ │ ├── claim
│ │ │ ├── __init__.py
│ │ │ ├── __pycache__
│ │ │ │ └── __init__.cpython-310.pyc
│ │ │ ├── components
│ │ │ │ ├── __init__.py
│ │ │ │ ├── __pycache__
│ │ │ │ │ ├── __init__.cpython-310.pyc
│ │ │ │ │ ├── data_ingestion.cpython-310.pyc
│ │ │ │ │ ├── data_transformation.cpython-310.pyc
│ │ │ │ │ ├── data_validation.cpython-310.pyc
│ │ │ │ │ └── model_trainer.cpython-310.pyc
│ │ │ │ ├── data_ingestion.py
│ │ │ │ ├── data_transformation.py
│ │ │ │ ├── data_validation.py
│ │ │ │ └── model_trainer.py
│ │ │ ├── constants
│ │ │ │ ├── __init__.py
│ │ │ │ └── __pycache__
│ │ │ │ │ └── __init__.cpython-310.pyc
│ │ │ ├── entity
│ │ │ │ ├── __init__.py
│ │ │ │ ├── __pycache__
│ │ │ │ │ ├── __init__.cpython-310.pyc
│ │ │ │ │ ├── data_ingestion_artifact.cpython-310.pyc
│ │ │ │ │ ├── data_ingestion_config.cpython-310.pyc
│ │ │ │ │ ├── data_transformation_artifact.cpython-310.pyc
│ │ │ │ │ ├── data_transformation_config.cpython-310.pyc
│ │ │ │ │ ├── data_validation_artifact.cpython-310.pyc
│ │ │ │ │ ├── data_validation_config.cpython-310.pyc
│ │ │ │ │ ├── model_trainer_artifact.cpython-310.pyc
│ │ │ │ │ ├── model_trainer_config.cpython-310.pyc
│ │ │ │ │ └── training_config.cpython-310.pyc
│ │ │ │ ├── data_ingestion_artifact.py
│ │ │ │ ├── data_ingestion_config.py
│ │ │ │ ├── data_transformation_artifact.py
│ │ │ │ ├── data_transformation_config.py
│ │ │ │ ├── data_validation_artifact.py
│ │ │ │ ├── data_validation_config.py
│ │ │ │ ├── model_trainer_artifact.py
│ │ │ │ ├── model_trainer_config.py
│ │ │ │ └── training_config.py
│ │ │ ├── exception
│ │ │ │ ├── __init__.py
│ │ │ │ ├── __pycache__
│ │ │ │ │ ├── __init__.cpython-310.pyc
│ │ │ │ │ └── exception.cpython-310.pyc
│ │ │ │ └── exception.py
│ │ │ ├── logging
│ │ │ │ ├── __init__.py
│ │ │ │ ├── __pycache__
│ │ │ │ │ ├── __init__.cpython-310.pyc
│ │ │ │ │ └── logger.cpython-310.pyc
│ │ │ │ └── logger.py
│ │ │ ├── params
│ │ │ │ └── params.yaml
│ │ │ ├── pipeline
│ │ │ │ ├── __init__.py
│ │ │ │ └── run_pipeline.py
│ │ │ └── utils
│ │ │ │ ├── __init__.py
│ │ │ │ └── __pycache__
│ │ │ │ └── __init__.cpython-310.pyc
│ │ ├── data_config
│ │ │ └── schema.yaml
│ │ ├── experiments
│ │ │ └── exp.ipynb
│ │ ├── final_models
│ │ │ ├── model.pkl
│ │ │ └── preprocessor.pkl
│ │ ├── insurance_claim_prediction.egg-info
│ │ │ ├── PKG-INFO
│ │ │ ├── SOURCES.txt
│ │ │ ├── dependency_links.txt
│ │ │ ├── requires.txt
│ │ │ └── top_level.txt
│ │ ├── main.py
│ │ ├── mlruns
│ │ │ └── 0
│ │ │ │ ├── 3a5faa601f084ed98c92b80f2cf2c8f2
│ │ │ │ ├── meta.yaml
│ │ │ │ ├── metrics
│ │ │ │ │ ├── f1_score
│ │ │ │ │ ├── precision
│ │ │ │ │ └── recall_score
│ │ │ │ └── tags
│ │ │ │ │ ├── mlflow.log-model.history
│ │ │ │ │ ├── mlflow.runName
│ │ │ │ │ ├── mlflow.source.git.commit
│ │ │ │ │ ├── mlflow.source.name
│ │ │ │ │ ├── mlflow.source.type
│ │ │ │ │ └── mlflow.user
│ │ │ │ ├── f4fd50b718204024b76c852c30f223e9
│ │ │ │ ├── meta.yaml
│ │ │ │ ├── metrics
│ │ │ │ │ ├── f1_score
│ │ │ │ │ ├── precision
│ │ │ │ │ └── recall_score
│ │ │ │ └── tags
│ │ │ │ │ ├── mlflow.log-model.history
│ │ │ │ │ ├── mlflow.runName
│ │ │ │ │ ├── mlflow.source.git.commit
│ │ │ │ │ ├── mlflow.source.name
│ │ │ │ │ ├── mlflow.source.type
│ │ │ │ │ └── mlflow.user
│ │ │ │ └── meta.yaml
│ │ ├── requirements.txt
│ │ ├── setup.py
│ │ ├── templates
│ │ │ └── base.html
│ │ └── test_df.csv
│ └── live-session.rar
├── 013_Project Lab
│ ├── .DS_Store
│ └── resume_analyzer
│ │ ├── .gitignore
│ │ ├── README.md
│ │ ├── __pycache__
│ │ ├── resume_parser.cpython-39.pyc
│ │ └── utils.cpython-39.pyc
│ │ ├── app.py
│ │ ├── requirements.txt
│ │ ├── resume_parser.py
│ │ ├── static
│ │ ├── css
│ │ │ └── styles.css
│ │ └── js
│ │ │ └── scripts.js
│ │ ├── templates
│ │ ├── error.html
│ │ ├── index.html
│ │ └── results.html
│ │ └── utils.py
├── 4_Python Hands-On
│ ├── .ipynb_checkpoints
│ │ └── 4_Module Packages and OOPs-checkpoint.ipynb
│ ├── 4_Module Packages and OOPs.ipynb
│ ├── __pycache__
│ │ └── indore.cpython-312.pyc
│ ├── indore.py
│ ├── main.py
│ └── statisticsmodels
│ │ ├── __init__.py
│ │ ├── __pycache__
│ │ ├── __init__.cpython-312.pyc
│ │ └── descriptive.cpython-312.pyc
│ │ └── descriptive.py
├── 5_Python Hands-On
│ ├── .ipynb_checkpoints
│ │ └── 5_OOPs-checkpoint.ipynb
│ └── 5_OOPs.ipynb
├── 6_NumPy Pandas Hands-On
│ ├── .DS_Store
│ ├── NumPy
│ │ ├── .DS_Store
│ │ ├── .ipynb_checkpoints
│ │ │ └── 6_NumPy and Pandas (NumPy Part)-checkpoint.ipynb
│ │ ├── 6_NumPy and Pandas (NumPy Part).ipynb
│ │ └── NumPy Resources
│ │ │ ├── .DS_Store
│ │ │ ├── NumPy Books
│ │ │ └── NumPy Cookbook, Second Edition.pdf
│ │ │ ├── NumPy Cheat Sheet
│ │ │ ├── Numpy_Python_Cheat_Sheet.pdf
│ │ │ ├── numpy-cheat-sheet.pdf
│ │ │ └── numpy.pdf
│ │ │ ├── NumPy Practice Questions
│ │ │ └── 1-100 NumPy Exercises for Data Analysis.pdf
│ │ │ └── NumPy Reference or Documentation
│ │ │ ├── .ipynb_checkpoints
│ │ │ └── numpy-reference 1.18-checkpoint.pdf
│ │ │ ├── numpy-reference 1.18.pdf
│ │ │ └── numpy-user 1.18.pdf
│ └── Pandas
│ │ ├── .DS_Store
│ │ ├── .ipynb_checkpoints
│ │ └── 6_NumPy and Pandas (Pandas part)-checkpoint.ipynb
│ │ ├── 6_NumPy and Pandas (Pandas part).ipynb
│ │ ├── Pandas Books
│ │ ├── Learning pandas.pdf
│ │ └── Mastering Pandas.pdf
│ │ ├── goa.xlsx
│ │ ├── indore.csv
│ │ ├── nyc_weather.csv
│ │ ├── stock_data.csv
│ │ ├── stocks_weather.xlsx
│ │ ├── titanic.csv
│ │ ├── weather_by_cities.csv
│ │ ├── weather_data.csv
│ │ ├── weather_data.xlsx
│ │ ├── weather_data2.csv
│ │ └── weather_datamissing.csv
├── 7_AI Ecosystem
│ ├── .DS_Store
│ ├── AI Ecosystem.pdf
│ └── AI Ecosystem.pptx
├── 8_Mathematics for AI
│ ├── .DS_Store
│ ├── .ipynb_checkpoints
│ │ └── 8_Statistics-checkpoint.ipynb
│ ├── 8_Statistics.ipynb
│ └── datasets
│ │ ├── .DS_Store
│ │ ├── .ipynb_checkpoints
│ │ ├── AmesHousing-checkpoint.csv
│ │ └── forbes-checkpoint.csv
│ │ ├── AmesHousing.csv
│ │ ├── Birthweight_reduced_kg_R.csv
│ │ ├── CarPrice_Assignment.csv
│ │ ├── Crime_R.csv
│ │ ├── Health_insurance.csv
│ │ ├── SP_500_1987.csv
│ │ ├── data_loan.csv
│ │ └── forbes.csv
└── 9_Mathematics for AI
│ ├── .DS_Store
│ ├── .ipynb_checkpoints
│ └── 9_Linear Algebra-checkpoint.ipynb
│ ├── 9_Linear Algebra.ipynb
│ ├── Tensors.png
│ ├── v1.png
│ ├── v11.png
│ ├── v2.png
│ ├── v3.png
│ └── v4.gif
└── README.md

/.DS_Store:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/.DS_Store
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
1 | /.ipynb_checkpoints
2 | /__pycache__
3 |
--------------------------------------------------------------------------------
/0_Prep/.ipynb_checkpoints/2_Python Functions-checkpoint.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [],
3 | "metadata": {},
4 | "nbformat": 4,
5 | "nbformat_minor": 5
6 | }
7 |
--------------------------------------------------------------------------------
/1_Foundations of AI Engineering/.DS_Store:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/.DS_Store
--------------------------------------------------------------------------------
/1_Foundations of AI Engineering/010_Machine Learning/.DS_Store:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/010_Machine Learning/.DS_Store
--------------------------------------------------------------------------------
/1_Foundations of AI Engineering/010_Machine Learning/1_Regression
Analysis/.DS_Store:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/010_Machine Learning/1_Regression Analysis/.DS_Store
--------------------------------------------------------------------------------
/1_Foundations of AI Engineering/010_Machine Learning/1_Regression Analysis/1_Linear Regression with one Variable/areas.csv:
--------------------------------------------------------------------------------
1 | area
2 | 1000
3 | 1500
4 | 2300
5 | 3540
6 | 4120
7 | 4560
8 | 5490
9 | 3460
10 | 4750
11 | 2300
12 | 9000
13 | 8600
14 | 7100
15 |
--------------------------------------------------------------------------------
/1_Foundations of AI Engineering/010_Machine Learning/1_Regression Analysis/1_Linear Regression with one Variable/homeprices.csv:
--------------------------------------------------------------------------------
1 | area,price
2 | 2600,550000
3 | 3000,565000
4 | 3200,610000
5 | 3600,680000
6 | 4000,725000
7 |
--------------------------------------------------------------------------------
/1_Foundations of AI Engineering/010_Machine Learning/1_Regression Analysis/2_Multiple Linear Regression/IPL IMB381IPL2013.csv:
--------------------------------------------------------------------------------
1 | Sl.NO.,PLAYER NAME,AGE,COUNTRY,TEAM,PLAYING ROLE,T-RUNS,T-WKTS,ODI-RUNS-S,ODI-SR-B,ODI-WKTS,ODI-SR-BL,CAPTAINCY EXP,RUNS-S,HS,AVE,SR-B,SIXERS,RUNS-C,WKTS,AVE-BL,ECON,SR-BL,AUCTION YEAR,BASE PRICE,SOLD PRICE
2 | 1,"Abdulla, YA",2,SA,KXIP,Allrounder,0,0,0,0,0,0,0,0,0,0,0,0,307,15,20.47,8.9,13.93,2009,50000,50000
3 | 2,Abdur Razzak,2,BAN,RCB,Bowler,214,18,657,71.41,185,37.6,0,0,0,0,0,0,29,0,0,14.5,0,2008,50000,50000
4 | 3,"Agarkar, AB",2,IND,KKR,Bowler,571,58,1269,80.62,288,32.9,0,167,39,18.56,121.01,5,1059,29,36.52,8.81,24.9,2008,200000,350000
5 | 4,"Ashwin,
R",1,IND,CSK,Bowler,284,31,241,84.56,51,36.8,0,58,11,5.8,76.32,0,1125,49,22.96,6.23,22.14,2011,100000,850000 6 | 5,"Badrinath, S",2,IND,CSK,Batsman,63,0,79,45.93,0,0,0,1317,71,32.93,120.71,28,0,0,0,0,0,2011,100000,800000 7 | 6,"Bailey, GJ",2,AUS,CSK,Batsman,0,0,172,72.26,0,0,1,63,48,21,95.45,0,0,0,0,0,0,2009,50000,50000 8 | 7,"Balaji, L",2,IND,CSK+,Bowler,51,27,120,78.94,34,42.5,0,26,15,4.33,72.22,1,1342,52,25.81,7.98,19.4,2011,100000,500000 9 | 8,"Bollinger, DE",2,AUS,CSK,Bowler,54,50,50,92.59,62,31.3,0,21,16,21,165.88,1,693,37,18.73,7.22,15.57,2011,200000,700000 10 | 9,"Botha, J",2,SA,RR,Allrounder,83,17,609,85.77,72,53,1,335,67,30.45,114.73,3,610,19,32.11,6.85,28.11,2011,200000,950000 11 | 10,"Boucher, MV",2,SA,RCB+,W. Keeper,5515,1,4686,84.76,0,0,1,394,50,28.14,127.51,13,0,0,0,0,0,2008,200000,450000 12 | 11,"Bravo, DJ",2,WI,MI+,Allrounder,2200,86,2004,81.39,142,34.1,0,839,70,27.97,127.12,38,1338,47,28.47,8.12,21.11,2011,200000,200000 13 | 12,"Chanderpaul, S",3,WI,RCB,Batsman,9918,9,8778,70.74,14,52.8,1,25,16,8.33,80.64,0,0,0,0,0,0,2008,200000,200000 14 | 13,"Chawla, PP",1,IND,KXIP,Allrounder,5,3,38,65.51,32,41,0,337,24,13.48,113.09,9,1819,73,126.3,38.11,100.2,2008,125000,400000 15 | 14,"de Villiers, AB",2,SA,DD+,W. Keeper,5457,2,4998,93.19,0,0,1,1302,105,34.26,128.53,42,0,0,0,0,0,2008,200000,300000 16 | 15,"Dhawan, S",2,IND,MI+,Batsman,0,0,69,56.09,0,0,0,1540,95,31.43,122.32,36,66,4,16.5,8.25,12,2011,100000,300000 17 | 16,"Dhoni, MS",2,IND,CSK,W. 
Keeper,3509,0,6773,88.19,1,12,1,1782,70,37.13,136.45,64,0,0,0,0,0,2008,400000,1500000 18 | 17,"Dilshan, TM",2,SL,DD+,Allrounder,4722,32,6455,86.8,67,58.3,1,1077,76,28.34,117.83,24,356,5,71.2,8.07,53,2008,150000,250000 19 | 18,"Dinda, AB",2,IND,KKR+,Bowler,0,0,18,60,5,61.4,0,6,2,1,33.33,0,926,36,25.72,7.29,21.19,2011,100000,375000 20 | 19,"Dravid, RS",3,IND,RCB+,Batsman,13288,1,10889,71.24,4,46.5,1,1703,75,27.92,116.88,23,0,0,0,0,0,2011,400000,500000 21 | 20,"Duminy, J-P",2,SA,MI+,Batsman,654,11,2536,84,25,47.6,0,978,74,36.22,119.27,35,377,10,37.7,7.11,31.8,2009,300000,300000 22 | 21,"Edwards, FH",2,WI,DC,Bowler,380,157,73,45.62,60,35.6,0,4,3,4,80,0,154,5,30.8,6.6,28,2009,150000,150000 23 | 22,"Fernando, CRD",2,SL,MI,Bowler,249,97,239,60.96,187,34.7,0,4,2,0,133.33,0,298,17,17.53,7.64,13.76,2008,150000,150000 24 | 23,"Fleming, SP",3,NZ,CSK,Batsman,7172,0,8037,71.49,1,29,1,196,45,21.77,118.78,3,0,0,0,0,0,2008,350000,350000 25 | 24,"Flintoff, A",2,ENG,CSK,Allrounder,3845,226,3394,88.82,169,33.2,1,62,24,31,116.98,2,105,2,52.5,9.55,33,2009,950000,1550000 26 | 25,"Gambhir, G",2,IND,DD+,Batsman,3712,0,4819,86.17,0,0,1,2065,93,33.31,128.9,32,0,0,0,0,0,2008,220000,725000 27 | 26,"Ganguly, SC",3,IND,KKR+,Batsman,7212,32,11363,73.7,100,45.6,1,1349,91,25.45,106.81,42,363,10,36.3,7.89,27.6,2011,200000,400000 28 | 27,"Gayle, CH",2,WI,KKR+,Allrounder,6373,72,8087,83.95,156,44.4,1,1804,128,50.11,161.79,129,606,13,46.62,8.05,34.85,2008,250000,800000 29 | 28,"Gibbs, HH",3,SA,DC,Batsman,6167,0,8094,83.26,0,0,1,886,69,27.69,109.79,31,0,0,0,0,0,2008,250000,575000 30 | 29,"Gilchrist, AC",3,AUS,DC+,W. 
Keeper,5570,0,9619,96.94,0,0,1,1775,109,27.73,140.21,86,0,0,0,0,0,2008,300000,700000 31 | 30,"Gony, MS",2,IND,CSK+,Bowler,0,0,0,0,2,39,0,54,15,9,117.39,5,999,30,33.3,8.47,23.6,2011,50000,290000 32 | 31,Harbhajan Singh,2,IND,MI,Bowler,2164,406,1190,80.51,259,46.5,1,430,49,16.54,151.41,22,1469,54,27.2,6.85,23.83,2008,250000,850000 33 | 32,"Harris, RJ",2,AUS,DC+,Bowler,199,46,48,100,44,23.4,0,115,17,10.45,107.48,3,975,44,22.16,7.71,17.27,2011,200000,325000 34 | 33,"Hayden, ML",3,AUS,CSK,Batsman,8625,0,6133,78.96,0,0,0,1107,93,36.9,137.52,44,0,0,0,0,0,2008,225000,375000 35 | 34,"Henderson, T",3,SA,RR,Allrounder,0,0,0,0,0,0,0,11,11,5.5,68.75,1,40,1,40,6.66,36,2009,100000,650000 36 | 35,"Henriques, MC",1,AUS,KKR+,Allrounder,0,0,18,60,1,90,0,49,30,16.33,108.89,1,142,3,47.33,8.82,32.33,2011,50000,50000 37 | 36,"Hodge, BJ",3,AUS,KKR+,Batsman,503,0,575,87.51,1,66,0,1006,73,31.44,121.5,28,300,17,17.65,7.89,13.41,2011,200000,425000 38 | 37,"Hopes, JR",2,AUS,KXIP,Allrounder,0,0,1326,93.71,67,47.1,0,417,71,26.06,136.27,11,548,14,39.14,9.13,25.71,2011,200000,350000 39 | 38,"Hussey, DJ",2,AUS,KKR+,Allrounder,0,0,1488,91.4,18,36.2,0,971,71,26.24,125.78,48,345,6,57.5,8.85,39,2008,100000,625000 40 | 39,"Hussey, MEK",3,AUS,CSK,Batsman,5708,7,5262,86.97,2,117,1,958,116,39.92,120.65,25,0,0,0,0,0,2008,250000,250000 41 | 40,"Jadeja, RA",1,IND,RR+,Allrounder,0,0,860,78.61,57,46.2,0,904,48,23.18,120.86,35,750,26,28.85,7.33,23.65,2011,100000,950000 42 | 41,"Jaffer, W",2,IND,RCB,Allrounder,1944,2,10,43.47,0,0,0,130,50,16.25,107.44,3,0,0,0,0,0,2008,150000,150000 43 | 42,"Jayasuriya, ST",3,SL,MI,Allrounder,6973,98,13430,91.21,323,46,1,768,114,27.43,144.36,39,390,13,30,7.96,22.62,2008,250000,975000 44 | 43,"Jayawardena, DPMD",2,SL,KXIP+,Batsman,10440,6,10596,78.08,7,83.1,1,1471,110,30.65,128.02,33,0,0,0,0,0,2008,250000,475000 45 | 44,"Kaif, M",2,IND,RR+,Batsman,624,0,2753,72.03,0,0,0,259,34,14.39,103.6,6,0,0,0,0,0,2008,125000,675000 46 | 45,"Kallis, 
JH",3,SA,RCB,Allrounder,12379,276,11498,72.97,270,39.3,1,1965,89,30.7,110.95,37,1713,45,38.07,7.96,28.71,2008,225000,900000 47 | 46,Kamran Akmal,2,PAK,RR,W. Keeper,2648,0,2924,84.31,0,0,0,128,53,25.6,164.1,8,0,0,0,0,0,2008,150000,150000 48 | 47,Kamran Khan,1,IND,RR+,Bowler,0,0,0,0,0,0,0,3,3,3,60,0,224,9,24.89,8.48,17.78,2009,20000,24000 49 | 48,"Karthik, KD",2,IND,DD+,W. Keeper,1000,0,1008,74.5,0,0,0,1231,69,24.14,123.84,28,0,0,0,0,0,2008,200000,525000 50 | 49,"Kartik, M",2,IND,KKR+,Bowler,88,24,126,70.78,37,51.5,0,111,21,18.5,105.71,1,1013,21,48.24,7.02,41.33,2008,200000,425000 51 | 50,"Katich, SM",3,AUS,KXIP,Batsman,4188,21,1324,68.74,0,0,0,241,75,24.1,129.57,8,0,0,0,0,0,2008,200000,200000 52 | 51,"Kohli, V",1,IND,RCB,Batsman,491,0,3590,86.31,2,137,1,1639,73,28.26,119.29,49,345,4,86.25,8.84,58.5,2011,150000,1800000 53 | 52,"Kumar, P",1,IND,RCB+,Bowler,149,27,292,88.21,77,42.1,0,243,34,10.57,114.08,14,1919,53,36.21,7.73,28.11,2011,200000,800000 54 | 53,"Kumble, A",3,IND,RCB,Bowler,2506,619,938,61.06,337,43,1,35,8,11.67,74.47,0,105,2,52.5,9.55,33,2008,250000,500000 55 | 54,"Langeveldt, CK",3,SA,KKR+,Bowler,16,16,73,58.87,100,34.8,0,8,8,4,88.89,1,187,13,14.38,7.19,12,2011,100000,140000 56 | 55,"Laxman, VVS",3,IND,DC+,Batsman,8781,2,2338,71.23,0,0,1,282,52,15.67,105.62,5,0,0,0,0,0,2008,150000,375000 57 | 56,"Lee, B",2,AUS,KXI+,Bowler,1451,310,1100,82.45,377,29.2,0,103,25,11.44,121.18,6,1009,21,48.05,7.56,38.24,2008,300000,900000 58 | 57,"Maharoof, MF",2,SL,DD,Allrounder,556,25,1042,84.44,133,33.3,0,177,39,17.7,143.9,9,520,27,19.26,7.43,15.56,2008,150000,225000 59 | 58,"Malinga, SL",2,SL,MI,Bowler,275,101,327,73.81,185,31.1,0,64,17,5.82,100,4,1381,83,16.64,6.36,15.69,2008,200000,350000 60 | 59,"Mascarenhas, AD",2,ENG,RR+,Allrounder,0,0,245,95.33,13,63.2,0,74,27,8.22,101.37,1,331,19,17.42,7.01,14.95,2011,100000,100000 61 | 60,"Mathews, AD",1,SL,KKR+,Allrounder,1219,7,1447,82.59,42,43,0,376,65,25.07,123.28,12,537,15,35.8,8.2,26.33,2011,300000,950000 62 | 61,"McCullum, 
BB",2,NZ,KKR+,W. Keeper,3763,0,4511,89.62,0,0,1,1233,158,28.02,123.42,48,0,0,0,0,0,2008,175000,700000 63 | 62,"McDonald, AB",2,AUS,DD+,Allrounder,107,9,0,0,0,0,0,123,33,30.75,125.51,4,244,10,24.4,8.41,17.4,2011,50000,80000 64 | 63,"McGrath, GD",3,AUS,DD,Bowler,641,563,115,48.72,381,34,0,4,4,4,80,0,357,12,29.75,6.61,27,2008,350000,350000 65 | 64,Misbah-ul-Haq,3,PAK,RCB,Batsman,2173,0,2763,75.1,0,0,1,117,47,16.71,144.44,6,0,0,0,0,0,2010,100000,100000 66 | 65,"Mishra, A",2,IND,DD+,Bowler,392,43,5,27.77,19,40.1,0,186,31,10.94,102.2,3,1530,74,20.68,7.11,17.46,2011,100000,300000 67 | 66,"Mithun, A",1,IND,RCB,Bowler,120,9,51,92.72,3,60,0,32,11,8,133.33,1,435,6,72.5,9.89,44,2011,100000,260000 68 | 67,Mohammad Asif,2,PAK,DD,Bowler,141,106,34,34,46,42.1,0,3,3,1.5,50,0,296,8,37,9.25,24,2008,225000,650000 69 | 68,"Morkel, JA",2,SA,CSK,Allrounder,58,1,782,100.25,50,41.4,0,781,71,24.41,146.25,45,1899,69,27.52,8.25,20.01,2008,225000,675000 70 | 69,"Morkel, M",2,SA,RR+,Bowler,555,139,117,75.97,94,28.5,0,60,16,10,111.11,2,884,38,23.26,7.37,18.95,2011,100000,475000 71 | 70,"Muralitharan, M",3,SL,CSK+,Bowler,1261,800,674,77.56,534,35.2,0,20,6,3.33,66.67,0,1395,57,24.47,6.49,22.63,2008,250000,600000 72 | 71,"Nannes, DP",3,AUS,DD+,Bowler,0,0,1,50,1,42,0,4,3,4,30.77,0,627,24,26.13,7.17,21.92,2011,200000,650000 73 | 72,"Nayar, AM",2,IND,MI+,Allrounder,0,0,0,0,0,0,0,563,35,19.41,123.19,19,263,7,37.57,8.74,25.86,2011,50000,800000 74 | 73,"Nehra, A",2,IND,DD+,Bowler,77,44,141,57.31,157,36.6,0,38,22,19,82.61,1,1192,48,24.83,7.57,19.73,2011,200000,850000 75 | 74,"Noffke, AA",2,AUS,RCB,Allrounder,0,0,0,0,1,54,0,9,9,9,90,0,40,1,40,10,24,2010,20000,20000 76 | 75,"Ntini, M",2,SA,CSK,Bowler,699,390,199,66.77,266,32.6,0,11,11,11,61.11,0,242,7,34.57,6.91,30,2008,200000,200000 77 | 76,"Ojha, NV",2,IND,RR+,W. 
Keeper,0,0,1,14.28,0,0,0,960,94,22.33,117.94,50,0,0,0,0,0,2011,100000,270000 78 | 77,"Ojha, PP",1,IND,DC+,Bowler,70,62,41,43.61,20,41.7,0,10,3,1,30.3,0,1548,69,22.43,7.16,18.8,2011,200000,500000 79 | 78,"Oram, JDP",2,NZ,CSK+,Allrounder,1780,60,2377,87.16,168,39.7,0,106,41,13.25,98.15,5,327,9,36.33,9.26,23.67,2008,200000,675000 80 | 79,Pankaj Singh,2,IND,RCB+,Bowler,0,0,3,100,0,0,0,7,4,3.5,58.33,0,468,11,42.55,9.36,27.27,2011,50000,95000 81 | 80,"Patel, MM",2,IND,RR+,Bowler,60,35,74,66.07,86,36.6,0,39,23,7.8,235.49,0,1504,70,21.49,7.39,17.47,2008,100000,275000 82 | 81,"Patel, PA",2,IND,CSK+,W. Keeper,683,0,736,76.5,0,0,0,912,57,20.27,107.29,13,0,0,0,0,0,2008,150000,325000 83 | 82,"Pathan, IK",2,IND,KXIP+,Allrounder,1105,100,1468,78.96,165,34,0,929,60,23.82,128.31,34,1975,66,29.92,7.74,23.23,2008,200000,925000 84 | 83,"Pathan, YK",2,IND,RR+,Allrounder,0,0,810,113.6,33,45.1,0,1488,100,25.66,149.25,81,1139,36,31.64,7.2,26.36,2008,100000,475000 85 | 84,"Pietersen, KP",2,ENG,RCB+,Batsman,6654,5,4184,86.76,7,57.1,1,634,103,42.27,141.2,30,215,7,30.71,7.41,24.86,2009,1350000,1550000 86 | 85,"Pollock, SM",3,SA,MI,Allrounder,3781,421,3519,86.69,393,39.9,1,147,33,18.37,132.43,8,301,11,27.36,6.54,25,2008,200000,550000 87 | 86,"Pomersbach, LA",2,AUS,KXIP+,Batsman,0,0,0,0,0,0,0,244,79,27.11,130.48,12,0,0,0,0,0,2011,20000,50000 88 | 87,"Ponting, RT",3,AUS,KKR,Batsman,13218,5,13704,80.39,3,50,1,39,20,9.75,73.58,1,0,0,0,0,0,2008,335000,400000 89 | 88,"Powar, RR",2,IND,KXIP+,Bowler,13,6,163,62.69,34,45.1,0,67,28,22.33,104.69,1,527,13,40.54,7.42,32.77,2008,150000,170000 90 | 89,"Raina, SK",1,IND,CSK,Batsman,710,13,3525,92.71,16,61.9,0,2254,98,33.64,139.39,97,678,20,33.9,7.05,28.9,2008,125000,650000 91 | 90,"Ryder, JD",2,NZ,RCB+,Allrounder,1269,5,1100,89.72,11,34.8,0,604,86,21.57,131.88,19,303,8,37.88,7.73,29.5,2009,100000,160000 92 | 91,"Saha, WP",2,IND,KKR+,W. Keeper,74,0,4,80,0,0,0,372,59,28.62,128.28,16,0,0,0,0,0,2011,100000,100000 93 | 92,"Sangakkara, KC",2,SL,KXIP+,W. 
Keeper,9382,0,10472,75.75,0,0,1,1567,94,27.98,124.76,27,0,0,0,0,0,2008,250000,700000 94 | 93,"Sarwan, RR",2,WI,KXIP,Batsman,5842,23,5644,75.76,16,36.3,0,73,31,18.25,97.33,1,0,0,0,0,0,2008,225000,225000 95 | 94,"Sehwag, V",2,IND,DD,Batsman,8178,40,8090,104.68,95,45.4,1,1879,119,30.31,167.32,79,226,6,37.67,10.56,21.67,2011,400000,1800000 96 | 95,Shahid Afridi,2,PAK,DC,Allrounder,1716,48,7040,113.87,344,43.4,1,81,33,10.12,176.08,6,225,9,25,7.5,20,2008,225000,675000 97 | 96,"Sharma, I",1,IND,KKR+,Bowler,432,133,47,34.05,64,33.6,0,37,9,9.25,80.43,1,1176,36,32.67,7.63,23.61,2008,150000,950000 98 | 97,"Sharma, J",2,IND,CSK,Allrounder,0,0,35,116.66,1,150,0,36,16,9,120,2,419,12,34.92,9.88,21.33,2008,100000,225000 99 | 98,"Sharma, RG",1,IND,DC+,Batsman,0,0,1961,78.85,8,59.1,0,1975,109,31.35,129.17,82,408,14,29.14,8,21.86,2008,150000,750000 100 | 99,Shoaib Akhtar,3,PAK,KKR,Bowler,544,178,394,73.23,247,31.4,0,2,2,2,28.57,0,54,5,10.8,7.71,8.4,2008,250000,425000 101 | 100,Shoaib Malik,2,PAK,DD,Allrounder,1606,21,5253,78.37,139,47.6,1,52,24,13,110.63,0,85,2,42.5,10,25.5,2008,300000,500000 102 | 101,"Silva, LPC",2,SL,DC,Batsman,537,1,1587,70.4,1,42,0,40,23,20,153.84,1,21,0,0,21,0,2008,100000,100000 103 | 102,"Singh, RP",2,IND,DC+,Bowler,116,40,104,42.97,69,37.1,0,52,10,3.47,68.42,1,1892,74,25.57,7.75,19.78,2008,200000,875000 104 | 103,"Smith, DR",2,WI,DC+,Allrounder,320,7,925,97.26,56,44.8,0,439,87,25.82,148.81,24,338,9,37.56,8.14,27.89,2009,100000,100000 105 | 104,"Smith, GC",2,SA,RR+,Batsman,8042,8,6598,81.58,18,57,1,739,91,28.42,110.63,9,0,0,0,0,0,2008,250000,250000 106 | 105,Sohail Tanvir,2,PAK,RR,Bowler,17,5,268,94.03,55,37.4,0,36,13,12,124.13,1,266,22,12.09,6.46,11.2,2008,100000,100000 107 | 106,"Sreesanth, S",2,IND,KXIP+,Bowler,281,87,44,36.36,75,33,0,33,15,11,64.71,0,1031,35,29.46,8.25,21.43,2008,200000,625000 108 | 107,"Steyn, DW",2,SA,RCB+,Bowler,770,272,142,73.57,91,33.7,0,70,13,4.67,86.42,1,1304,59,22.1,6.58,20.15,2008,150000,325000 109 | 108,"Styris, 
SB",3,NZ,DC+,Allrounder,1586,20,4483,79.41,137,44.6,0,131,36,18.71,98.5,3,276,8,34.5,7.67,27,2008,175000,175000 110 | 109,"Symonds, A",3,AUS,DC+,Allrounder,1462,24,5088,92.44,133,44.6,0,974,117,36.07,129.87,41,674,20,33.7,7.7,26.35,2008,250000,1350000 111 | 110,"Taibu, T",2,ZIM,KKR,W. Keeper,1546,1,3393,67.58,2,42,1,31,15,10.33,119.23,0,0,0,0,0,0,2008,125000,125000 112 | 111,"Taylor, LRPL",2,NZ,RCB+,Batsman,2742,2,3185,81.77,0,0,1,895,81,27.97,130.28,45,24,0,0,12,0,2008,400000,1000000 113 | 112,"Tendulkar, SR",3,IND,MI,Batsman,15470,45,18426,86.23,154,52.2,1,2047,100,37.91,119.22,24,58,0,0,9.67,0,2011,400000,1800000 114 | 113,"Tiwary, MK",2,IND,DD+,Batsman,0,0,165,75.68,1,60,0,969,75,31.26,113.33,22,45,1,45,11.25,24,2008,100000,675000 115 | 114,"Tiwary, SS",1,IND,MI+,Batsman,0,0,49,87.5,0,0,0,836,42,25.33,119.6,32,0,0,0,0,0,2011,100000,1600000 116 | 115,"Tyagi, S",1,IND,CSK,Bowler,0,0,1,50,3,55,0,3,3,3,0.75,0,295,6,49.17,8.55,34.83,2011,50000,240000 117 | 116,Umar Gul,2,PAK,KKR,Bowler,541,157,368,69.04,154,32.2,0,39,24,13,205.26,5,184,12,15.33,8.17,11.2,2008,150000,150000 118 | 117,"Uthappa, RV",2,IND,RCB+,Batsman,0,0,786,91.92,0,0,0,1538,69,26.98,126.17,59,0,0,0,0,0,2008,200000,800000 119 | 118,"Vaas, WPUJC",3,SL,DC,Bowler,3089,355,2025,72.52,400,39.4,0,81,20,10.13,110.96,3,355,18,19.72,7.55,15.67,2008,200000,200000 120 | 119,Van der Merwe,2,SA,RCB+,Allrounder,0,0,39,95.12,17,41.4,0,137,35,15.22,118.1,8,427,18,23.72,6.83,20.94,2011,50000,50000 121 | 120,"Venugopal Rao, Y",2,IND,DC+,Batsman,0,0,218,60.05,0,0,0,914,71,22.29,118.24,37,321,6,53.5,9.44,34,2011,100000,700000 122 | 121,"Vettori, DL",2,NZ,DD+,Allrounder,4486,359,2105,81.93,282,45.7,1,121,29,15.13,107.08,2,878,28,31.36,6.81,27.75,2008,250000,625000 123 | 122,"Vinay Kumar, R",2,IND,RCB+,Bowler,11,1,43,43.87,28,35.3,0,217,25,9.43,104.83,5,1664,61,27.28,8.24,19.87,2011,100000,475000 124 | 123,"Warne, 
SK",3,AUS,RR,Bowler,3154,708,1018,72.04,293,36.3,1,198,34,9.9,92.52,6,1447,57,25.39,7.27,20.95,2008,450000,450000 125 | 124,"Warner, DA",1,AUS,DD,Batsman,483,2,876,85.79,0,0,0,1025,109,27.7,135.76,44,0,0,0,0,0,2011,200000,750000 126 | 125,"White, CL",2,AUS,RCB+,Batsman,146,5,2037,80.48,12,27.5,1,745,78,31.04,132.09,29,70,0,0,14,0,2008,100000,500000 127 | 126,"Yadav, AS",2,IND,DC,Batsman,0,0,0,0,0,0,0,49,16,9.8,125.64,2,0,0,0,0,0,2010,50000,750000 128 | 127,Younis Khan,2,PAK,RR,Batsman,6398,7,6814,75.78,3,86.6,1,3,3,3,42.85,0,0,0,0,0,0,2008,225000,225000 129 | 128,Yuvraj Singh,2,IND,KXIP+,Batsman,1775,9,8051,87.58,109,44.3,1,1237,66,26.32,131.88,67,569,23,24.74,7.02,21.13,2011,400000,1800000 130 | 129,Zaheer Khan,2,IND,MI+,Bowler,1114,288,790,73.55,278,35.4,0,99,23,9.9,91.67,1,1783,65,27.43,7.75,21.26,2008,200000,450000 131 | 130,"Zoysa, DNT",2,SL,DC,Bowler,288,64,343,95.81,108,39.4,0,11,10,11,122.22,0,99,2,49.5,9,33,2008,100000,110000 -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/010_Machine Learning/1_Regression Analysis/2_Multiple Linear Regression/e.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/010_Machine Learning/1_Regression Analysis/2_Multiple Linear Regression/e.png -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/010_Machine Learning/1_Regression Analysis/2_Multiple Linear Regression/ee.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/010_Machine Learning/1_Regression Analysis/2_Multiple Linear Regression/ee.png 
--------------------------------------------------------------------------------
/1_Foundations of AI Engineering/010_Machine Learning/1_Regression Analysis/2_Multiple Linear Regression/q.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/010_Machine Learning/1_Regression Analysis/2_Multiple Linear Regression/q.png
--------------------------------------------------------------------------------
/1_Foundations of AI Engineering/010_Machine Learning/1_Regression Analysis/2_Multiple Linear Regression/rr.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/010_Machine Learning/1_Regression Analysis/2_Multiple Linear Regression/rr.png
--------------------------------------------------------------------------------
/1_Foundations of AI Engineering/010_Machine Learning/1_Regression Analysis/2_Multiple Linear Regression/w.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/010_Machine Learning/1_Regression Analysis/2_Multiple Linear Regression/w.png
--------------------------------------------------------------------------------
/1_Foundations of AI Engineering/010_Machine Learning/2_Logistic Regression/c.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/010_Machine Learning/2_Logistic Regression/c.png
--------------------------------------------------------------------------------
/1_Foundations of AI Engineering/010_Machine Learning/2_Logistic Regression/insurance_data.csv:
--------------------------------------------------------------------------------
1 | age,bought_insurance
2 | 22,0
3 | 25,0
4 | 47,1
5 | 52,0
6 | 46,1
7 | 56,1
8 | 55,0
9 | 60,1
10 | 62,1
11 | 61,1
12 | 18,0
13 | 28,0
14 | 27,0
15 | 29,0
16 | 49,1
17 | 55,1
18 | 25,1
19 | 58,1
20 | 19,0
21 | 18,0
22 | 21,0
23 | 26,0
24 | 40,1
25 | 45,1
26 | 50,1
27 | 54,1
28 | 23,0
--------------------------------------------------------------------------------
/1_Foundations of AI Engineering/011_Deep Learning/.DS_Store:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/011_Deep Learning/.DS_Store
--------------------------------------------------------------------------------
/1_Foundations of AI Engineering/011_Deep Learning/11_Activation Functions Part 1 and Part 2/a1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/011_Deep Learning/11_Activation Functions Part 1 and Part 2/a1.png
--------------------------------------------------------------------------------
/1_Foundations of AI Engineering/011_Deep Learning/11_Activation Functions Part 1 and Part 2/a2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/011_Deep Learning/11_Activation Functions Part 1 and Part 2/a2.png
--------------------------------------------------------------------------------
/1_Foundations of AI
Engineering/011_Deep Learning/11_Activation Functions Part 1 and Part 2/a3.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/011_Deep Learning/11_Activation Functions Part 1 and Part 2/a3.png -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/011_Deep Learning/11_Activation Functions Part 1 and Part 2/a5.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/011_Deep Learning/11_Activation Functions Part 1 and Part 2/a5.png -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/011_Deep Learning/11_Activation Functions Part 1 and Part 2/a6.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/011_Deep Learning/11_Activation Functions Part 1 and Part 2/a6.png -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/011_Deep Learning/11_Activation Functions Part 1 and Part 2/a7.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/011_Deep Learning/11_Activation Functions Part 1 and Part 2/a7.jpg -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/011_Deep Learning/11_Activation Functions Part 1 and Part 2/s4.png: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/011_Deep Learning/11_Activation Functions Part 1 and Part 2/s4.png -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/011_Deep Learning/11_Deep Learning and Neural Networks.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "id": "030f7b4f", 6 | "metadata": {}, 7 | "source": [ 8 | "# Deep Learning and NN\n", 9 | "\n", 10 | "1. Neural Networks\n", 11 | " - Neuron\n", 12 | " - Perceptron\n", 13 | " - Multi-Layer Perceptron\n", 14 | "2. Various Neural Network Architectures\n", 15 | " - Convolutional Neural Networks (CNNs)\n", 16 | " - Recurrent Neural Networks (RNNs)\n", 17 | " - Long Short-Term Memory (LSTM)\n" 18 | ] 19 | }, 20 | { 21 | "cell_type": "markdown", 22 | "id": "5b6566ed", 23 | "metadata": {}, 24 | "source": [ 25 | "# Machine Learning vs Deep Learning" 26 | ] 27 | }, 28 | { 29 | "cell_type": "code", 30 | "execution_count": null, 31 | "id": "f288007a", 32 | "metadata": {}, 33 | "outputs": [], 34 | "source": [ 35 | "ML - used for structured data (relational databases)\n", 36 | "- we do feature engineering\n", 37 | "\n", 38 | "DL - used for unstructured data (text, images, videos, audios)\n", 39 | "- we do not need feature engineering" 40 | ] 41 | }, 42 | { 43 | "cell_type": "markdown", 44 | "id": "a3e605ac", 45 | "metadata": {}, 46 | "source": [ 47 | "# Neuron\n", 48 | "\n", 49 | "" 50 | ] 51 | }, 52 | { 53 | "cell_type": "code", 54 | "execution_count": null, 55 | "id": "c97ceb2d", 56 | "metadata": {}, 57 | "outputs": [], 58 | "source": [] 59 | }, 60 | { 61 | "cell_type": "markdown", 62 | "id": "305ddaee", 63 | "metadata": {}, 64 | "source": [ 65 | "## Single Neuron (unit)\n", 
66 | "\n", 67 | "with single input\n", 68 | "\n", 69 | "" 70 | ] 71 | }, 72 | { 73 | "cell_type": "code", 74 | "execution_count": null, 75 | "id": "e0135d1b", 76 | "metadata": {}, 77 | "outputs": [], 78 | "source": [ 79 | "y = wx + b\n", 80 | "\n", 81 | "# w - weights\n", 82 | "\n", 83 | "# b - bias\n", 84 | "\n", 85 | "y = mx + c" 86 | ] 87 | }, 88 | { 89 | "cell_type": "markdown", 90 | "id": "e16ed5f2", 91 | "metadata": {}, 92 | "source": [ 93 | "## multiple input\n", 94 | "\n", 95 | "" 96 | ] 97 | }, 98 | { 99 | "cell_type": "code", 100 | "execution_count": null, 101 | "id": "fd8a8649", 102 | "metadata": {}, 103 | "outputs": [], 104 | "source": [ 105 | "y = x0w0 + x1w1 + x2w2 + b\n", 106 | "\n" 107 | ] 108 | }, 109 | { 110 | "cell_type": "code", 111 | "execution_count": null, 112 | "id": "08d9e5fd", 113 | "metadata": {}, 114 | "outputs": [], 115 | "source": [] 116 | }, 117 | { 118 | "cell_type": "markdown", 119 | "id": "545a055e", 120 | "metadata": {}, 121 | "source": [ 122 | "# Perceptron\n", 123 | "\n", 124 | "" 125 | ] 126 | }, 127 | { 128 | "cell_type": "code", 129 | "execution_count": null, 130 | "id": "a797687f", 131 | "metadata": {}, 132 | "outputs": [], 133 | "source": [ 134 | "perceptron is a single layer neural network" 135 | ] 136 | }, 137 | { 138 | "cell_type": "code", 139 | "execution_count": null, 140 | "id": "cadb75ad", 141 | "metadata": {}, 142 | "outputs": [], 143 | "source": [ 144 | "step function -> 0 or 1" 145 | ] 146 | }, 147 | { 148 | "cell_type": "markdown", 149 | "id": "a266d5df", 150 | "metadata": {}, 151 | "source": [ 152 | "## Multi Layer Perceptron (Neural Networks)\n", 153 | "\n", 154 | "https://github.com/hemansnation/AI-ML-MLOps-GenAI-Live-Summer-Cohort-2024/blob/main/Module%2006%20-%20Machine%20Learning%20Algorithms/6_Neural%20Networks/26_Neural%20Networks.ipynb\n", 155 | "\n", 156 | "" 157 | ] 158 | }, 159 | { 160 | "cell_type": "code", 161 | "execution_count": null, 162 | "id": "758527c3", 163 | "metadata": {}, 164 | "outputs": 
[], 165 | "source": [] 166 | }, 167 | { 168 | "cell_type": "markdown", 169 | "id": "48e876d3", 170 | "metadata": {}, 171 | "source": [ 172 | "## Neural Network Equation\n", 173 | "\n", 174 | "" 175 | ] 176 | }, 177 | { 178 | "cell_type": "code", 179 | "execution_count": null, 180 | "id": "f0cf7934", 181 | "metadata": {}, 182 | "outputs": [], 183 | "source": [ 184 | "155513.78 -> activation function/transfer function" 185 | ] 186 | }, 187 | { 188 | "attachments": {}, 189 | "cell_type": "markdown", 190 | "id": "2da7563c", 191 | "metadata": {}, 192 | "source": [ 193 | "## Activation Function\n", 194 | "\n", 195 | "Sigmoid Activation\n", 196 | "\n", 197 | "" 198 | ] 199 | }, 200 | { 201 | "cell_type": "code", 202 | "execution_count": null, 203 | "id": "74004193-2688-4e3d-a1ca-9b3a7c47c0f6", 204 | "metadata": {}, 205 | "outputs": [], 206 | "source": [ 207 | "it is used to get the output of a node (neuron)" 208 | ] 209 | }, 210 | { 211 | "cell_type": "code", 212 | "execution_count": null, 213 | "id": "fd8c6bf1-deaa-4b08-89b4-ac7f2a05bf63", 214 | "metadata": {}, 215 | "outputs": [], 216 | "source": [ 217 | "why activation function\n", 218 | "\n", 219 | "- it is used to keep the output of a neural network in a certain range or category\n", 220 | "\n", 221 | "0 to 1, -1 to 1\n", 222 | "yes or no\n", 223 | "\n", 224 | "\n", 225 | "2 types of activation functions\n", 226 | "- linear (0 or 1)\n", 227 | "- non-linear\n", 228 | " - sigmoid\n", 229 | " - tanh\n", 230 | " - relu\n", 231 | " - leaky relu\n", 232 | " - softmax" 233 | ] 234 | }, 235 | { 236 | "cell_type": "code", 237 | "execution_count": null, 238 | "id": "345e6d64-dfde-4447-a9c7-e0949e06d58a", 239 | "metadata": {}, 240 | "outputs": [], 241 | "source": [ 242 | "# sigmoid activation\n", 243 | "\n", 244 | "range -> (0,1)\n", 245 | "\n", 246 | "- predicts the probability as an output\n", 247 | "- binary classification" 248 | ] 249 | }, 250 | { 251 | "cell_type": "code", 252 | "execution_count": null, 253 | "id":
"5dc4779c-9474-4c13-8978-aeecf0b166b6", 254 | "metadata": {}, 255 | "outputs": [], 256 | "source": [ 257 | "e = 2.72" 258 | ] 259 | }, 260 | { 261 | "cell_type": "code", 262 | "execution_count": 1, 263 | "id": "aa1dbbf8-0a36-4c5d-aa4e-d7f1050be670", 264 | "metadata": {}, 265 | "outputs": [], 266 | "source": [ 267 | "import math\n", 268 | "\n", 269 | "def sigmoid(x):\n", 270 | " return 1/(1 + math.exp(-x))" 271 | ] 272 | }, 273 | { 274 | "cell_type": "code", 275 | "execution_count": 2, 276 | "id": "b49c98fd-8fc3-4109-b93d-b00913f94ab7", 277 | "metadata": {}, 278 | "outputs": [ 279 | { 280 | "data": { 281 | "text/plain": [ 282 | "1.0" 283 | ] 284 | }, 285 | "execution_count": 2, 286 | "metadata": {}, 287 | "output_type": "execute_result" 288 | } 289 | ], 290 | "source": [ 291 | "sigmoid(100)" 292 | ] 293 | }, 294 | { 295 | "cell_type": "code", 296 | "execution_count": 3, 297 | "id": "099350d1-2b8f-4fd0-b227-359713a18511", 298 | "metadata": {}, 299 | "outputs": [ 300 | { 301 | "data": { 302 | "text/plain": [ 303 | "1.0" 304 | ] 305 | }, 306 | "execution_count": 3, 307 | "metadata": {}, 308 | "output_type": "execute_result" 309 | } 310 | ], 311 | "source": [ 312 | "sigmoid(155513)" 313 | ] 314 | }, 315 | { 316 | "cell_type": "code", 317 | "execution_count": 4, 318 | "id": "041ca129-0a41-43d4-8b6b-df192610e817", 319 | "metadata": {}, 320 | "outputs": [ 321 | { 322 | "data": { 323 | "text/plain": [ 324 | "0.7310585786300049" 325 | ] 326 | }, 327 | "execution_count": 4, 328 | "metadata": {}, 329 | "output_type": "execute_result" 330 | } 331 | ], 332 | "source": [ 333 | "sigmoid(1)" 334 | ] 335 | }, 336 | { 337 | "cell_type": "code", 338 | "execution_count": 5, 339 | "id": "4e4e78c7-4ac5-4f0b-a842-b7af6660f815", 340 | "metadata": {}, 341 | "outputs": [ 342 | { 343 | "data": { 344 | "text/plain": [ 345 | "0.598687660112452" 346 | ] 347 | }, 348 | "execution_count": 5, 349 | "metadata": {}, 350 | "output_type": "execute_result" 351 | } 352 | ], 353 | "source": [ 354 | 
"sigmoid(0.4)" 355 | ] 356 | }, 357 | { 358 | "cell_type": "code", 359 | "execution_count": null, 360 | "id": "8d724a61", 361 | "metadata": {}, 362 | "outputs": [], 363 | "source": [ 364 | "bias - an additional parameter in the neuron that helps shift the \n", 365 | " activation function left or right" 366 | ] 367 | }, 368 | { 369 | "cell_type": "code", 370 | "execution_count": null, 371 | "id": "7f2418a5", 372 | "metadata": {}, 373 | "outputs": [], 374 | "source": [ 375 | "0 - 0.5 0.5 - 1.0\n", 376 | "PCM commerce" 377 | ] 378 | }, 379 | { 380 | "cell_type": "code", 381 | "execution_count": null, 382 | "id": "43465a34-95f7-4cbc-928a-7b38accd9d8a", 383 | "metadata": {}, 384 | "outputs": [], 385 | "source": [ 386 | "0 - 0.1 0.1 - 0.2 0.2 - 0.3 \n", 387 | "skin 1 skin 2 skin 3" 388 | ] 389 | }, 390 | { 391 | "cell_type": "code", 392 | "execution_count": null, 393 | "id": "8cc3824c-5b09-4b48-a09c-a23ffc40c257", 394 | "metadata": {}, 395 | "outputs": [], 396 | "source": [] 397 | }, 398 | { 399 | "cell_type": "markdown", 400 | "id": "66dafcce", 401 | "metadata": {}, 402 | "source": [ 403 | "# 2.
Various Neural Network Architectures" 404 | ] 405 | }, 406 | { 407 | "cell_type": "code", 408 | "execution_count": null, 409 | "id": "056887a6", 410 | "metadata": {}, 411 | "outputs": [], 412 | "source": [ 413 | "CNN\n", 414 | "RNN\n", 415 | "LSTM\n", 416 | "GRU\n", 417 | "Autoencoders\n", 418 | "GAN\n", 419 | "Transformers" 420 | ] 421 | }, 422 | { 423 | "cell_type": "code", 424 | "execution_count": null, 425 | "id": "8724870f", 426 | "metadata": {}, 427 | "outputs": [], 428 | "source": [] 429 | }, 430 | { 431 | "cell_type": "code", 432 | "execution_count": null, 433 | "id": "35263d9d", 434 | "metadata": {}, 435 | "outputs": [], 436 | "source": [ 437 | "What is the biggest real-life example of Linear Algebra?\n", 438 | "\n", 439 | "vectors and matrix" 440 | ] 441 | }, 442 | { 443 | "cell_type": "markdown", 444 | "id": "32e6787d", 445 | "metadata": {}, 446 | "source": [ 447 | "## Convolutional Neural Networks (CNNs)\n", 448 | "\n", 449 | "https://github.com/hemansnation/AI-ML-MLOps-GenAI-Live-Summer-Cohort-2024/blob/main/Module%2007%20-%20NLP%20x%20Deep%20Learning/31_32_Convolutional%20Neural%20Networks.ipynb\n", 450 | "\n", 451 | "Convolution\n", 452 | "\n", 453 | "" 454 | ] 455 | }, 456 | { 457 | "cell_type": "code", 458 | "execution_count": null, 459 | "id": "6c3506cb", 460 | "metadata": {}, 461 | "outputs": [], 462 | "source": [ 463 | "ANN\n", 464 | "\n", 465 | "1000x1000x3 = huge number\n", 466 | "\n", 467 | "\n", 468 | "RGB\n", 469 | "0-255\n", 470 | "\n", 471 | "\n", 472 | "- convolution is a mathematical operation\n", 473 | "- the fundamental part of CNN is a filter(matrix)\n" 474 | ] 475 | }, 476 | { 477 | "cell_type": "markdown", 478 | "id": "6f6caf82", 479 | "metadata": {}, 480 | "source": [ 481 | "" 482 | ] 483 | }, 484 | { 485 | "cell_type": "code", 486 | "execution_count": null, 487 | "id": "b4d148e4", 488 | "metadata": {}, 489 | "outputs": [], 490 | "source": [] 491 | }, 492 | { 493 | "cell_type": "markdown", 494 | "id": "47bc1f79", 495 | "metadata": 
{}, 496 | "source": [ 497 | "" 498 | ] 499 | }, 500 | { 501 | "cell_type": "code", 502 | "execution_count": null, 503 | "id": "0bb7eb90", 504 | "metadata": {}, 505 | "outputs": [], 506 | "source": [] 507 | }, 508 | { 509 | "cell_type": "markdown", 510 | "id": "5edba01f", 511 | "metadata": {}, 512 | "source": [ 513 | "" 514 | ] 515 | }, 516 | { 517 | "cell_type": "code", 518 | "execution_count": null, 519 | "id": "586f9386", 520 | "metadata": {}, 521 | "outputs": [], 522 | "source": [] 523 | }, 524 | { 525 | "cell_type": "markdown", 526 | "id": "e1883815", 527 | "metadata": {}, 528 | "source": [ 529 | "" 530 | ] 531 | }, 532 | { 533 | "cell_type": "code", 534 | "execution_count": null, 535 | "id": "dc03af12", 536 | "metadata": {}, 537 | "outputs": [], 538 | "source": [] 539 | }, 540 | { 541 | "cell_type": "markdown", 542 | "id": "adbb8edb", 543 | "metadata": {}, 544 | "source": [ 545 | "" 546 | ] 547 | }, 548 | { 549 | "cell_type": "code", 550 | "execution_count": null, 551 | "id": "6e622da7", 552 | "metadata": {}, 553 | "outputs": [], 554 | "source": [ 555 | "## real world applications\n", 556 | "\n", 557 | "- facial recognition -> DCNN (Deep CNN)\n", 558 | "- image classification\n", 559 | "- object detection\n", 560 | "- medical image analysis - Xrays, MRIs\n", 561 | "- video analytics - action recognition\n", 562 | "- NLP - text classification, language translation, sentiment analysis(SpaCy, NLTK)" 563 | ] 564 | }, 565 | { 566 | "cell_type": "markdown", 567 | "id": "06645050", 568 | "metadata": {}, 569 | "source": [ 570 | "## Recurrent(feedback) Neural Networks (RNNs)\n", 571 | "\n", 572 | "https://github.com/hemansnation/AI-ML-MLOps-GenAI-Live-Summer-Cohort-2024/blob/main/Module%2007%20-%20NLP%20x%20Deep%20Learning/33_Recurrent%20Neural%20Network.ipynb\n", 573 | "\n", 574 | "\n", 575 | "" 576 | ] 577 | }, 578 | { 579 | "cell_type": "code", 580 | "execution_count": null, 581 | "id": "d79ea2c3", 582 | "metadata": {}, 583 | "outputs": [], 584 | "source": [ 585 | 
"translation\n", 586 | "- google translate\n", 587 | "- apple siri\n", 588 | "\n", 589 | "used for sequential data" 590 | ] 591 | }, 592 | { 593 | "cell_type": "code", 594 | "execution_count": null, 595 | "id": "d1fa9463", 596 | "metadata": {}, 597 | "outputs": [], 598 | "source": [ 599 | "memorize parts of the input and use them to make accurate predictions" 600 | ] 601 | }, 602 | { 603 | "cell_type": "code", 604 | "execution_count": null, 605 | "id": "6aa97f8b", 606 | "metadata": {}, 607 | "outputs": [], 608 | "source": [ 609 | "this is a red ..... {apple}\n", 610 | "\n", 611 | "0 - a\n", 612 | "1 - are\n", 613 | "2 - apple\n", 614 | ".\n", 615 | ".\n", 616 | ".\n", 617 | ".\n", 618 | "4100 - this\n", 619 | ".\n", 620 | ".\n", 621 | "10000 - " 622 | ] 623 | }, 624 | { 625 | "cell_type": "code", 626 | "execution_count": null, 627 | "id": "501d6fc2", 628 | "metadata": {}, 629 | "outputs": [], 630 | "source": [ 631 | "## issue with RNN\n", 632 | "\n", 633 | "loss of memory\n", 634 | "\n", 635 | "\n", 636 | "Himanshu is taking the session. 
He is sitting on the chair.\n", 637 | "\n", 638 | "\n", 639 | "- large weight updates\n", 640 | "- vanishing gradient (the gradient goes to zero)\n", 641 | "\n", 642 | "cause\n", 643 | "- chained multiplication\n", 644 | "- long sequences" 645 | ] 646 | }, 647 | { 648 | "cell_type": "code", 649 | "execution_count": null, 650 | "id": "a945658c", 651 | "metadata": {}, 652 | "outputs": [], 653 | "source": [ 654 | "RNN Architectures\n", 655 | "\n", 656 | "- Bidirectional RNN\n", 657 | "- Gated Recurrent Unit (GRU)\n", 658 | "- Long Short Term Memory (LSTM)" 659 | ] 660 | }, 661 | { 662 | "cell_type": "markdown", 663 | "id": "a89233c7", 664 | "metadata": {}, 665 | "source": [ 666 | "## Long Short-Term Memory (LSTM)\n", 667 | "\n", 668 | "https://github.com/hemansnation/AI-ML-MLOps-GenAI-Live-Summer-Cohort-2024/blob/main/Module%2007%20-%20NLP%20x%20Deep%20Learning/34_LSTM%20and%20GRU.ipynb\n", 669 | "\n", 670 | "" 671 | ] 672 | }, 673 | { 674 | "cell_type": "code", 675 | "execution_count": null, 676 | "id": "508d5cd3", 677 | "metadata": {}, 678 | "outputs": [], 679 | "source": [ 680 | "LSTM mechanism -> gates\n", 681 | "\n", 682 | "- it stores relevant information to make predictions for a long period of time\n", 683 | "\n", 684 | "- cell state - memory of the network\n", 685 | "- gates\n", 686 | " - forget gate - it decides what info should be thrown away or kept\n", 687 | " - input gate - updates the cell state\n", 688 | " - cell state \n", 689 | " - output gate - determines what the next hidden state should be" 690 | ] 691 | }, 692 | { 693 | "cell_type": "code", 694 | "execution_count": null, 695 | "id": "cc7edcc3", 696 | "metadata": {}, 697 | "outputs": [], 698 | "source": [ 699 | "GRUs are faster than LSTMs\n", 700 | "\n", 701 | "- fewer tensor operations" 702 | ] 703 | }, 704 | { 705 | "cell_type": "code", 706 | "execution_count": null, 707 | "id": "5cfb8a1e", 708 | "metadata": {}, 709 | "outputs": [], 710 | "source": [] 711 | }, 712 | { 713 | "cell_type": "code", 714 |
"execution_count": null, 715 | "id": "bc4c864a", 716 | "metadata": {}, 717 | "outputs": [], 718 | "source": [ 719 | "ML - serialization\n" 720 | ] 721 | }, 722 | { 723 | "cell_type": "code", 724 | "execution_count": null, 725 | "id": "11556b23", 726 | "metadata": {}, 727 | "outputs": [], 728 | "source": [ 729 | "DL - Transfer Learning\n", 730 | "\n", 731 | "a pretrained model is reused as the starting point for a different but related problem\n", 732 | "\n", 733 | "fine-tuning - \n", 734 | "- you continue the training where it left off.\n", 735 | "- this will allow the model to adjust from generic feature extraction \n", 736 | " to features more specific to your task\n" 737 | ] 738 | }, 739 | { 740 | "cell_type": "code", 741 | "execution_count": null, 742 | "id": "cfca6ea8", 743 | "metadata": {}, 744 | "outputs": [], 745 | "source": [] 746 | }, 747 | { 748 | "cell_type": "code", 749 | "execution_count": null, 750 | "id": "43bb9830", 751 | "metadata": {}, 752 | "outputs": [], 753 | "source": [] 754 | } 755 | ], 756 | "metadata": { 757 | "kernelspec": { 758 | "display_name": "Python 3 (ipykernel)", 759 | "language": "python", 760 | "name": "python3" 761 | }, 762 | "language_info": { 763 | "codemirror_mode": { 764 | "name": "ipython", 765 | "version": 3 766 | }, 767 | "file_extension": ".py", 768 | "mimetype": "text/x-python", 769 | "name": "python", 770 | "nbconvert_exporter": "python", 771 | "pygments_lexer": "ipython3", 772 | "version": "3.12.9" 773 | } 774 | }, 775 | "nbformat": 4, 776 | "nbformat_minor": 5 777 | } 778 | -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/011_Deep Learning/c1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/011_Deep Learning/c1.png
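The activation-function cells in the notebook above implement only sigmoid but list tanh, relu, leaky relu, and softmax as the other non-linear choices. A minimal pure-Python sketch of those remaining activations; the function names and the leaky-relu slope `alpha` are illustrative choices, not code from the course:

```python
import math

def tanh(x):
    # squashes to (-1, 1); zero-centered, unlike sigmoid's (0, 1)
    return math.tanh(x)

def relu(x):
    # max(0, x): cheap, and keeps a gradient alive for x > 0
    return max(0.0, x)

def leaky_relu(x, alpha=0.01):
    # small slope alpha for x < 0 so negative inputs still pass a gradient
    return x if x > 0 else alpha * x

def softmax(scores):
    # turns a vector of scores into probabilities that sum to 1
    m = max(scores)  # subtract the max so math.exp never overflows
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

print(relu(-3.0))        # 0.0
print(leaky_relu(-3.0))  # -0.03
print(softmax([1.0, 2.0, 3.0]))
```

The max-subtraction in softmax mirrors the `sigmoid(155513)` experiment in the notebook: huge scores would otherwise overflow `math.exp`, while the shifted version stays finite.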
-------------------------------------------------------------------------------- /1_Foundations of AI Engineering/011_Deep Learning/c14.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/011_Deep Learning/c14.png -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/011_Deep Learning/c16.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/011_Deep Learning/c16.png -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/011_Deep Learning/c2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/011_Deep Learning/c2.png -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/011_Deep Learning/c5.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/011_Deep Learning/c5.png -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/011_Deep Learning/c6.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/011_Deep Learning/c6.png 
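The notebook above describes the perceptron as a single-layer neural network with a step function that outputs 0 or 1, and gives the single-neuron equation y = w·x + b. A minimal sketch of the classic perceptron learning rule tying those two ideas together; the function names, learning rate, and AND-gate training data are illustrative, not from the course material:

```python
def step(z):
    # step activation: fire (1) if the weighted sum clears the threshold
    return 1 if z >= 0 else 0

def predict(weights, bias, x):
    # y = step(w . x + b), the single-neuron equation from the notebook
    z = sum(w * xi for w, xi in zip(weights, x)) + bias
    return step(z)

def train_perceptron(samples, lr=0.1, epochs=20):
    # perceptron learning rule: nudge each weight by lr * error * input
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            error = target - predict(weights, bias, x)
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# AND gate: linearly separable, so a single perceptron can learn it
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_perceptron(data)
print([predict(w, b, x) for x, _ in data])  # [0, 0, 0, 1]
```

XOR, by contrast, is not linearly separable, which is exactly why the notes move on to the multi-layer perceptron.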
-------------------------------------------------------------------------------- /1_Foundations of AI Engineering/011_Deep Learning/l1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/011_Deep Learning/l1.png -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/011_Deep Learning/n1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/011_Deep Learning/n1.png -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/011_Deep Learning/n11.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/011_Deep Learning/n11.jpeg -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/011_Deep Learning/n2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/011_Deep Learning/n2.png -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/011_Deep Learning/n3.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/011_Deep Learning/n3.png 
-------------------------------------------------------------------------------- /1_Foundations of AI Engineering/011_Deep Learning/n4.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/011_Deep Learning/n4.png -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/011_Deep Learning/n5.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/011_Deep Learning/n5.png -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/011_Deep Learning/nn1.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/011_Deep Learning/nn1.gif -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/011_Deep Learning/nn2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/011_Deep Learning/nn2.png -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/011_Deep Learning/nn3.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/011_Deep Learning/nn3.png 
-------------------------------------------------------------------------------- /1_Foundations of AI Engineering/011_Deep Learning/r2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/011_Deep Learning/r2.png -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/011_Deep Learning/t1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/011_Deep Learning/t1.png -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/012_MLops end-to-end project/.DS_Store: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/012_MLops end-to-end project/.DS_Store -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/.github/workflows/cicd.yaml: -------------------------------------------------------------------------------- 1 | name: workflow 2 | 3 | on: 4 | push: 5 | branches: 6 | - main 7 | paths-ignore: 8 | - 'README.md' 9 | 10 | permissions: 11 | id-token: write 12 | contents: read 13 | 14 | 15 | jobs: 16 | integration: 17 | name: Continuous Integration 18 | runs-on: ubuntu-latest 19 | steps: 20 | - name: Checkout Code 21 | uses: actions/checkout@v3 22 | 23 | - name: Lint code 24 | run: echo "Linting code" 25 | 26 | - name: Run unit tests 27 | run: echo "Running unit tests" 28 | 29 | build-and-push-ecr-image: 30 | name: 
Continuous Delivery 31 | needs: integration 32 | runs-on: ubuntu-latest 33 | steps: 34 | - name: Checkout Code 35 | uses: actions/checkout@v3 36 | 37 | - name: Install Utilities 38 | run: | 39 | sudo apt-get update 40 | sudo apt-get install -y jq unzip 41 | - name: Configure AWS credentials 42 | uses: aws-actions/configure-aws-credentials@v1 43 | with: 44 | aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }} 45 | aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }} 46 | aws-region: ${{ secrets.AWS_REGION }} 47 | 48 | - name: Login to Amazon ECR 49 | id: login-ecr 50 | uses: aws-actions/amazon-ecr-login@v1 51 | 52 | - name: Build, tag, and push image to Amazon ECR 53 | id: build-image 54 | env: 55 | ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }} 56 | ECR_REPOSITORY: ${{ secrets.ECR_REPOSITORY_NAME }} 57 | IMAGE_TAG: latest 58 | run: | 59 | # Build a docker container and 60 | # push it to ECR so that it can 61 | # be deployed to ECS. 62 | docker build -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG . 
63 | docker push $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG 64 | echo "image=$ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG" >> $GITHUB_OUTPUT 65 | 66 | 67 | Continuous-Deployment: 68 | needs: build-and-push-ecr-image 69 | runs-on: self-hosted 70 | steps: 71 | - name: Checkout 72 | uses: actions/checkout@v3 73 | 74 | - name: Configure AWS credentials 75 | uses: aws-actions/configure-aws-credentials@v1 76 | with: 77 | aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }} 78 | aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }} 79 | aws-region: ${{ secrets.AWS_REGION }} 80 | 81 | - name: Login to Amazon ECR 82 | id: login-ecr 83 | uses: aws-actions/amazon-ecr-login@v1 84 | 85 | 86 | - name: Pull latest images 87 | run: | 88 | docker pull ${{secrets.AWS_ECR_LOGIN_URI}}:latest 89 | 90 | # - name: Stop and remove container if running 91 | # run: | 92 | # docker ps -q --filter "name=claim" | grep -q . && docker stop claim && docker rm -fv claim 93 | 94 | - name: Run Docker Image to serve users 95 | run: | 96 | docker run -d -p 8080:8080 --ipc="host" --name=claim -e 'AWS_ACCESS_KEY_ID=${{ secrets.AWS_ACCESS_KEY_ID }}' -e 'AWS_SECRET_ACCESS_KEY=${{ secrets.AWS_SECRET_ACCESS_KEY }}' -e 'AWS_REGION=${{ secrets.AWS_REGION }}' ${{secrets.AWS_ECR_LOGIN_URI}}:latest 97 | - name: Clean previous images and containers 98 | run: | 99 | docker system prune -f -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/.gitignore: -------------------------------------------------------------------------------- 1 | .development 2 | Artifacts 3 | logs -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/Dockerfile:
-------------------------------------------------------------------------------- 1 | FROM python:3.10-slim-buster 2 | WORKDIR /app 3 | COPY . /app 4 | 5 | RUN apt update -y && apt install awscli -y 6 | 7 | # RUN apt-get update && 8 | RUN pip install -r requirements.txt 9 | CMD ["python3", "app.py"] -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/README.md: -------------------------------------------------------------------------------- 1 | Steps to be Performed :) 2 | 3 | 1. create the virtual environment 4 | 2. create requirements.txt, .gitignore and readme files 5 | 3. Activate the environment 6 | 4. Dataset folder created (for storing the data later) 7 | 5. Experiment folder created (for notebook testing) 8 | 6. claim folder is created (the coding folder structure will live inside it) 9 | 7. inside claim folder we need to create a file __init__.py to treat that folder as a package (we convert it later) 10 | 8. inside claim folder another folder components created 11 | 9. inside claim folder another folder constant created 12 | 10. inside claim folder another folder logging and exception created 13 | 11. inside claim folder another folder pipeline and utils and cloud created 14 | 11. inside claim folder another folder entity created 15 | 12. Dockerfile and setup.py files created 16 | 13. .env file created 17 | 14. fill the requirements.txt file with the names of the libraries 18 | 15. run pip install -r requirements.txt 19 | 16. inside claim folder and inside all folders create __init__.py 20 | 21 | 22 | 23 | 24 | 25 | <-------time to first commit----------> 26 | 27 | 28 | 29 | (create a new repo and follow all steps mentioned by github) 30 | git init 31 | git add . 32 | git commit -m "project structure setup is completed" 33 | git branch -M main 34 | 35 | 36 | 17. write code inside setup.py (make sure to update -e .) 37 | 18.
Once the whole setup.py file is ready, run python setup.py (this creates an .egg-info folder where all the package details can be found) 38 | 19. Create a logger.py file inside the claim/logging folder and write the logging-related code. 39 | 20. Create an exception.py file inside the claim/exception folder and write the exception-related code. 40 | 41 | 42 | 43 | 21. Create a data_ingestion.py file inside the claim/components folder 44 | 22. Configuration-related code lives in the entity folder. Inside it, create training_config.py and data_ingestion_config.py; for now, write only the config required for the data ingestion stage. Component coding has not started yet. 45 | 23. Inside the constant folder, write the data_ingestion constants in __init__.py 46 | 24. Create a new params folder inside the claim folder, create params.yaml inside it, and for now pass only the train_test_split ratio for the data ingestion stage 47 | 48 | 25. Start coding entity/training_config.py with the training-stage configuration information 49 | 26. Write the data-ingestion config code inside entity/data_ingestion_config.py. 50 | 27. Write a read-YAML helper function inside the utils folder and use it in the data ingestion config code to get the split ratio 51 | 28. Now write the code in components/data_ingestion.py. (At the end of this file we need to return a data ingestion artifact; to do that, create a data_ingestion_artifact.py file inside the entity folder and write the artifact-related code) 52 | 29. Test on main.py 53 | 54 | Starting data validation 55 | 56 | 57 | 1. Go to the constants folder and add the constants for data validation 58 | 2. Create a new data_validation_config.py file inside the entity folder with the data-validation configs 59 | 3. To validate the schema we need a schema.yaml file inside the root folder data_config 60 | 4. 
Make sure to add SCHEMA_FILE_PATH inside the constant folder 61 | 5. Put your data validation artifact inside the entity folder, in the file data_validation_artifact.py 62 | 6. Inside the components folder, create a new data_validation.py file and start writing the code (adding supportive utils functions along the way) 63 | 7. Test on main.py 64 | 65 | Starting data transformation 66 | 1. Go to the constant folder and write all required constants for data transformation 67 | 2. Create a new data_transformation_config.py inside the entity folder with the data-transformation configs 68 | 3. Write the components code in data_transformation.py (make sure to write the data_transformation artifacts along the way) 69 | 4. Once done, run main.py 70 | 71 | Starting model training 72 | 73 | 1. Go to the constant folder and write all constants 74 | 2. Once done, create a model_trainer_config.py file inside the claim/entity folder 75 | 3. Then write a first outline of the methods in components/model_trainer.py (without MLflow) 76 | 4. Make sure to also add the model trainer artifact by creating model_trainer_artifact.py 77 | 5. Write the MLflow-related function inside components/model_trainer.py 78 | 79 | Test on main.py 80 | 81 | 82 | Start building the complete training pipeline 83 | 1. Inside the pipeline folder, create training_pipeline.py and write the code (without S3) 84 | 85 | After that, prepare app.py, then create the Dockerfile and the .github workflow YAML 86 | 87 | Execute the ECR push first, but don't enable the Continuous-Deployment job in the YAML file yet 88 | 89 | 90 | Once done, create an EC2 instance and run the following commands first 91 | 1. sudo apt-get update -y 92 | 2. sudo apt-get upgrade 93 | 3. curl -fsSL https://get.docker.com -o get-docker.sh 94 | 4. sudo sh get-docker.sh 95 | 5. sudo usermod -aG docker ubuntu 96 | 6. 
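The training_pipeline.py mentioned above simply chains the four stages, each consuming the previous stage's artifact. A simplified, self-contained sketch with stand-in stage classes (the real ones are the DataIngestion / DataValidation / DataTransformation / ModelTrainer components under claim.components; everything else here is illustrative):

```python
class Stage:
    """Stand-in for a pipeline component from claim.components."""
    def __init__(self, name: str):
        self.name = name

    def run(self, upstream_artifact=None) -> str:
        # Real components return *Artifact objects (file paths, metrics, ...);
        # here a string stands in for the artifact handed to the next stage.
        return f"{self.name}_artifact(from={upstream_artifact})"

def run_training_pipeline() -> str:
    # Each stage consumes the previous stage's artifact, mirroring how
    # main.py wires the components together.
    ingestion = Stage("data_ingestion").run()
    validation = Stage("data_validation").run(ingestion)
    transformation = Stage("data_transformation").run(validation)
    return Stage("model_trainer").run(transformation)
```

The real pipeline follows the same shape: construct each component with its config entity, call its initiate_* method, and pass the returned artifact downstream.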
newgrp docker 97 | 98 | 99 | Go to your GitHub repo settings and choose Runners under Actions 100 | Create a self-hosted runner, choose Linux, and run all the listed commands on the EC2 instance -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/app.py: -------------------------------------------------------------------------------- 1 | from flask import Flask, render_template, request 2 | import pandas as pd 3 | import os 4 | import numpy as np 5 | from claim.utils import load_object 6 | import warnings 7 | warnings.filterwarnings("ignore") 8 | 9 | app = Flask(__name__) 10 | 11 | # Load test data and preprocess 12 | df = pd.read_csv("test_df.csv").drop(columns=["is_claim"], axis=1) 13 | columns = df.columns.tolist() 14 | 15 | # Detect data types for each column 16 | def get_input_type(col): 17 | if df[col].nunique() < 10: 18 | return "select", df[col].dropna().unique().tolist() 19 | elif pd.api.types.is_numeric_dtype(df[col]): 20 | return "float", None 21 | elif "date" in col.lower(): 22 | return "date", None 23 | else: 24 | return "text", None 25 | 26 | @app.route("/", methods=["GET"]) 27 | def index(): 28 | fields = [] 29 | for col in columns: 30 | input_type, options = get_input_type(col) 31 | fields.append({ 32 | "name": col, 33 | "type": input_type, 34 | "options": options 35 | }) 36 | 37 | return render_template("base.html", fields=fields) 38 | 39 | @app.route("/submit", methods=["POST"]) 40 | def submit(): 41 | submitted = pd.DataFrame({field: [request.form.get(field)] for field in df.columns}) 42 | 43 | # Manual mappings 44 | binary_map = {'Yes': 1, 'No': 0} 45 | submitted["is_esc"] = submitted["is_esc"].map(binary_map) 46 | submitted["is_adjustable_steering"] = submitted["is_adjustable_steering"].map(binary_map) 47 | submitted["is_parking_sensors"] = submitted["is_parking_sensors"].map(binary_map) 48 | submitted["is_parking_camera"] = 
submitted["is_parking_camera"].map(binary_map) 49 | submitted["transmission_type"] = submitted["transmission_type"].map({'Manual': 1, 'Automatic': 0}) 50 | submitted["is_brake_assist"] = submitted["is_brake_assist"].map(binary_map) 51 | submitted["is_central_locking"] = submitted["is_central_locking"].map(binary_map) 52 | submitted["is_power_steering"] = submitted["is_power_steering"].map(binary_map) 53 | 54 | submitted["segment"] = submitted["segment"].map({'A': 1, 'B2': 2, 'C2': 3, 'Others': 4}) 55 | submitted["model"] = submitted["model"].map({'M1': 1, 'M4': 2, 'M6': 3, 'Others': 4}) 56 | submitted["fuel_type"] = submitted["fuel_type"].map({'CNG': 1, 'Petrol': 2, 'Diesel': 3}) 57 | submitted = submitted.drop(["segment", "model", "fuel_type"], axis=1) 58 | 59 | # Load preprocessor and model 60 | preprocessor = load_object("final_models/preprocessor.pkl") 61 | model = load_object("final_models/model.pkl") 62 | 63 | data = preprocessor.transform(submitted) 64 | 65 | prediction = model.predict(data)[0] 66 | submitted["is_claim"] = "Confirmed" if prediction == 1 else "Not Confirmed" 67 | 68 | return f"

Prediction Result: {submitted['is_claim'][0]}
" 69 | 70 | if __name__ == "__main__": 71 | app.run(host="0.0.0.0", port=8080) 72 | -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/claim/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/claim/__init__.py -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/claim/__pycache__/__init__.cpython-310.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/claim/__pycache__/__init__.cpython-310.pyc -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/claim/components/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/claim/components/__init__.py -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/claim/components/__pycache__/__init__.cpython-310.pyc: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/claim/components/__pycache__/__init__.cpython-310.pyc -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/claim/components/__pycache__/data_ingestion.cpython-310.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/claim/components/__pycache__/data_ingestion.cpython-310.pyc -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/claim/components/__pycache__/data_transformation.cpython-310.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/claim/components/__pycache__/data_transformation.cpython-310.pyc -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/claim/components/__pycache__/data_validation.cpython-310.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/012_MLops end-to-end project/insurance 
-claim-prediction-mlops/claim/components/__pycache__/data_validation.cpython-310.pyc -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/claim/components/__pycache__/model_trainer.cpython-310.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/claim/components/__pycache__/model_trainer.cpython-310.pyc -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/claim/components/data_ingestion.py: -------------------------------------------------------------------------------- 1 | from claim.exception.exception import InsuranceClaimException 2 | from claim.logging.logger import logging 3 | from claim.entity.data_ingestion_config import DataIngestionConfig 4 | import os 5 | import sys 6 | from sklearn.model_selection import train_test_split 7 | import mysql.connector 8 | from sqlalchemy import create_engine 9 | import pandas as pd 10 | from claim.entity.data_ingestion_artifact import DataIngestionArtifact 11 | 12 | 13 | 14 | class DataIngestion: 15 | def __init__(self,data_ingestion_config:DataIngestionConfig): 16 | try: 17 | self.data_ingestion_config=data_ingestion_config 18 | except Exception as e: 19 | raise InsuranceClaimException(e,sys) 20 | 21 | def __fetch_data_as_dataframe(self): 22 | 23 | """ 24 | Connects to the local MySQL database and fetches data from the 'insurance_data' table 25 | into a Pandas DataFrame. 26 | 27 | Returns: 28 | pd.DataFrame: DataFrame containing the fetched data. 
29 | 30 | Raises: 31 | SQLAlchemyError: If there's an error connecting to the database or executing the query. 32 | """ 33 | try: 34 | logging.info("Connecting to MySQL database and fetching data into DataFrame") 35 | # Define database connection parameters 36 | user = self.data_ingestion_config.db_user # Replace with your MySQL username 37 | password = self.data_ingestion_config.db_password # Replace with your MySQL password 38 | host = self.data_ingestion_config.db_host # Replace with your MySQL host 39 | database = self.data_ingestion_config.db_name # Replace with your database name 40 | 41 | # Construct the database URL 42 | db_url = f"mysql+mysqlconnector://{user}:{password}@{host}/{database}" 43 | 44 | # Create a SQLAlchemy engine 45 | engine = create_engine(db_url) 46 | 47 | # Define your SQL query 48 | query = f"SELECT * FROM {self.data_ingestion_config.db_table_name}" # Replace with your table name 49 | 50 | # Execute the query and fetch data into a DataFrame 51 | df = pd.read_sql(query, con=engine) 52 | logging.info("Data fetched successfully") 53 | return df 54 | 55 | except Exception as e: 56 | logging.error(f"An error occurred while fetching data: {e}") 57 | raise InsuranceClaimException(e,sys) 58 | 59 | 60 | def __split_data_as_train_test(self,data:pd.DataFrame): 61 | try: 62 | logging.info("Splitting data into training and testing sets") 63 | train_data,test_data=train_test_split(data,test_size=self.data_ingestion_config.ingestion_params['split_ratio']) 64 | 65 | dir_path=os.path.dirname(self.data_ingestion_config.training_file_path) 66 | 67 | os.makedirs(dir_path,exist_ok=True) 68 | 69 | train_data.to_csv(self.data_ingestion_config.training_file_path,index=False,header=True) 70 | test_data.to_csv(self.data_ingestion_config.testing_file_path,index=False,header=True) 71 | logging.info("Data split successfully and saved to CSV files") 72 | except Exception as e: 73 | logging.error(f"An error occurred while splitting data: {e}") 74 | raise 
InsuranceClaimException(e,sys) 75 | 76 | def initiate_data_ingestion(self): 77 | try: 78 | logging.info("Starting data ingestion process") 79 | dir_path=os.path.dirname(self.data_ingestion_config.feature_store_file_path) 80 | os.makedirs(dir_path,exist_ok=True) 81 | data=self.__fetch_data_as_dataframe() 82 | data.to_csv(self.data_ingestion_config.feature_store_file_path) 83 | self.__split_data_as_train_test(data) 84 | logging.info("Data ingestion process completed successfully") 85 | return DataIngestionArtifact( 86 | train_file_path=self.data_ingestion_config.training_file_path,test_file_path=self.data_ingestion_config.testing_file_path 87 | ) 88 | except Exception as e: 89 | logging.error(f"An error occurred during data ingestion: {e}") 90 | raise InsuranceClaimException(e,sys) 91 | 92 | -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/claim/components/data_transformation.py: -------------------------------------------------------------------------------- 1 | import sys 2 | import os 3 | import numpy as np 4 | import pandas as pd 5 | from sklearn.pipeline import Pipeline 6 | from claim.entity.data_transformation_artifact import DataTransformationArtifact 7 | from claim.entity.data_validation_artifact import DataValidationArtifact 8 | from claim.entity.data_transformation_config import DataTransformationConfig 9 | from claim.exception.exception import InsuranceClaimException 10 | from claim.logging.logger import logging 11 | from sklearn.preprocessing import StandardScaler 12 | from claim.utils import save_object,read_data 13 | from typing import List 14 | from claim.constants import TARGET_COLUMN,PREPROCESSOR_MODEL_PATH 15 | from imblearn.combine import SMOTETomek 16 | import warnings 17 | warnings.filterwarnings("ignore", category=FutureWarning) 18 | 19 | class DataTransformation: 20 | def 
__init__(self,data_validation_artifact:DataValidationArtifact, 21 | data_transformation_config:DataTransformationConfig): 22 | try: 23 | self.data_validation_artifact:DataValidationArtifact=data_validation_artifact 24 | self.data_transformation_config:DataTransformationConfig=data_transformation_config 25 | except Exception as e: 26 | raise InsuranceClaimException(e,sys) 27 | 28 | def __handling_outliers(self,dataframe:pd.DataFrame,column:str)->pd.DataFrame: 29 | """ 30 | It handles outliers by dropping rows whose values fall outside the IQR bounds of the column. 31 | 32 | Args: 33 | dataframe: pd.DataFrame 34 | column: str 35 | 36 | Returns: 37 | pd.DataFrame 38 | """ 39 | try: 40 | logging.info(f"Handling outliers in the column: {column}") 41 | #handling the outliers in the data 42 | Q1=dataframe[column].quantile(0.25) 43 | Q3=dataframe[column].quantile(0.75) 44 | IQR=Q3-Q1 45 | lower_bound=Q1-1.5*(IQR) 46 | upper_bound=Q3+1.5*(IQR) 47 | dataframe=dataframe[(dataframe[column]>=lower_bound) & (dataframe[column]<=upper_bound)] 48 | return dataframe 49 | except Exception as e: 50 | raise InsuranceClaimException(e,sys) 51 | 52 | 53 | def get_data_transformer_object(self): 54 | """ 55 | It initialises a StandardScaler object with the parameters specified. 56 | 57 | Args: 58 | cls: DataTransformation 59 | 60 | Returns: 61 | A Scaler object 62 | """ 63 | logging.info( 64 | "Entered get_data_transformer_object method of Transformation class" 65 | ) 66 | try: 67 | scaler=StandardScaler() 68 | return scaler 69 | except Exception as e: 70 | raise InsuranceClaimException(e,sys) 71 | def __process_disbalanced_cat_col(self,dataframe:pd.DataFrame,column:str)->pd.DataFrame: 72 | """ 73 | It processes the categorical column by keeping its three most frequent values and replacing the rest with "Others".
74 | 75 | Args: 76 | dataframe: pd.DataFrame 77 | columns: str 78 | 79 | Returns: 80 | pd.DataFrame 81 | """ 82 | try: 83 | logging.info(f"Processing categorical column: {column}") 84 | #handiling segment categorical column 85 | segments=list(dataframe[column].value_counts(ascending=False)[:3].index) 86 | for rows in range(dataframe.shape[0]): 87 | if dataframe.loc[rows,column] not in segments: 88 | dataframe.loc[rows,column]="Others" 89 | 90 | return dataframe 91 | except Exception as e: 92 | raise InsuranceClaimException(e,sys) 93 | 94 | 95 | def initiate_data_transformation(self)->DataTransformationArtifact: 96 | logging.info("Entered initiate_data_transformation method of DataTransformation class") 97 | try: 98 | dir_path=os.path.dirname(self.data_transformation_config.transformed_train_file_path) 99 | os.makedirs(dir_path,exist_ok=True) 100 | 101 | 102 | logging.info("Starting data transformation") 103 | train_df=read_data(self.data_validation_artifact.valid_train_file_path) 104 | test_df=read_data(self.data_validation_artifact.valid_test_file_path) 105 | 106 | 107 | train_df=self.__handling_outliers(train_df,column="age_of_car") 108 | train_df=train_df.reset_index(drop=True) 109 | 110 | #extracting torque nm from max torque 111 | train_df["torque_nm"]=0 112 | for idx in list(train_df["max_torque"].str.split("@").index): 113 | train_df.loc[idx,"torque_nm"]=float(train_df.loc[idx,"max_torque"].split("@")[0].split("Nm")[0]) 114 | 115 | #extracting torque rpm for max torque 116 | train_df["torque_rpm"]=0 117 | for idx in list(train_df["max_torque"].str.split("@").index): 118 | train_df.loc[idx,"torque_rpm"]=float(train_df.loc[idx,"max_torque"].split("@")[1].split("rpm")[0]) 119 | 120 | #extracting power bhp for max power 121 | train_df["power_bhp"]=0 122 | for idx in list(train_df["max_power"].str.split("@").index): 123 | train_df.loc[idx,"power_bhp"]=float(train_df.loc[idx,"max_power"].split("@")[0].split("bhp")[0]) 124 | 125 | #extract power rpm for max power 
126 | train_df["power_rpm"]=0 127 | for idx in list(train_df["max_power"].str.split("@").index): 128 | train_df.loc[idx,"power_rpm"]=float(train_df.loc[idx,"max_power"].split("@")[1].split("rpm")[0]) 129 | 130 | 131 | train_df=train_df.drop(["policy_id","max_power","max_torque","area_cluster",'engine_type','is_tpms','rear_brakes_type','steering_type','is_front_fog_lights','is_rear_window_wiper','is_rear_window_washer','is_rear_window_defogger','is_power_door_locks','is_driver_seat_height_adjustable','is_day_night_rear_view_mirror','is_ecw','is_speed_alert','population_density'],axis=1) 132 | 133 | 134 | train_df=self.__process_disbalanced_cat_col(train_df,column="segment") 135 | train_df=self.__process_disbalanced_cat_col(train_df,column="model") 136 | 137 | #handling the categorical columns 138 | #converting all categoricals to numericals 139 | train_df["is_esc"]=train_df["is_esc"].map({'Yes':1,'No':0}) 140 | train_df["is_adjustable_steering"]=train_df["is_adjustable_steering"].map({'Yes':1,'No':0}) 141 | train_df["is_parking_sensors"]=train_df["is_parking_sensors"].map({'Yes':1,'No':0}) 142 | train_df["is_parking_camera"]=train_df["is_parking_camera"].map({'Yes':1,'No':0}) 143 | train_df["transmission_type"]=train_df["transmission_type"].map({'Manual':1,'Automatic':0}) 144 | train_df["is_brake_assist"]=train_df["is_brake_assist"].map({'Yes':1,'No':0}) 145 | train_df["is_central_locking"]=train_df["is_central_locking"].map({'Yes':1,'No':0}) 146 | train_df["is_power_steering"]=train_df["is_power_steering"].map({'Yes':1,'No':0}) 147 | train_df["segment"]=train_df["segment"].map({'A':1,'B2':2,'C2':3,'Others':4}) 148 | train_df["model"]=train_df["model"].map({'M1':1,'M4':2,'M6':3,'Others':4}) 149 | train_df["fuel_type"]=train_df["fuel_type"].map({'CNG':1,'Petrol':2,'Diesel':3}) 150 | 151 | #dropping those normal non OHE cols 152 | train_df=train_df.drop(["segment","model","fuel_type"],axis=1) 153 | 154 | 155 | 156 | 
test_df=self.__handling_outliers(test_df,column="age_of_car") 157 | test_df=test_df.reset_index(drop=True) 158 | 159 | #extracting torque nm from max torque 160 | test_df["torque_nm"]=0 161 | for idx in list(test_df["max_torque"].str.split("@").index): 162 | test_df.loc[idx,"torque_nm"]=float(test_df.loc[idx,"max_torque"].split("@")[0].split("Nm")[0]) 163 | 164 | #extracting torque rpm for max torque 165 | test_df["torque_rpm"]=0 166 | for idx in list(test_df["max_torque"].str.split("@").index): 167 | test_df.loc[idx,"torque_rpm"]=float(test_df.loc[idx,"max_torque"].split("@")[1].split("rpm")[0]) 168 | 169 | #extracting power bhp for max power 170 | test_df["power_bhp"]=0 171 | for idx in list(test_df["max_power"].str.split("@").index): 172 | test_df.loc[idx,"power_bhp"]=float(test_df.loc[idx,"max_power"].split("@")[0].split("bhp")[0]) 173 | 174 | #extract power rpm for max power 175 | test_df["power_rpm"]=0 176 | for idx in list(test_df["max_power"].str.split("@").index): 177 | test_df.loc[idx,"power_rpm"]=float(test_df.loc[idx,"max_power"].split("@")[1].split("rpm")[0]) 178 | 179 | 180 | test_df=test_df.drop(["policy_id","max_power","max_torque","area_cluster",'engine_type','is_tpms','rear_brakes_type','steering_type','is_front_fog_lights','is_rear_window_wiper','is_rear_window_washer','is_rear_window_defogger','is_power_door_locks','is_driver_seat_height_adjustable','is_day_night_rear_view_mirror','is_ecw','is_speed_alert','population_density'],axis=1) 181 | 182 | 183 | test_df=self.__process_disbalanced_cat_col(test_df,column="segment") 184 | test_df=self.__process_disbalanced_cat_col(test_df,column="model") 185 | # test_df.to_csv("test_df.csv",index=False, header=True) 186 | #handling the categorical columns 187 | #converting all categoricals to numericals 188 | test_df["is_esc"]=test_df["is_esc"].map({'Yes':1,'No':0}) 189 | test_df["is_adjustable_steering"]=test_df["is_adjustable_steering"].map({'Yes':1,'No':0}) 190 | 
test_df["is_parking_sensors"]=test_df["is_parking_sensors"].map({'Yes':1,'No':0}) 191 | test_df["is_parking_camera"]=test_df["is_parking_camera"].map({'Yes':1,'No':0}) 192 | test_df["transmission_type"]=test_df["transmission_type"].map({'Manual':1,'Automatic':0}) 193 | test_df["is_brake_assist"]=test_df["is_brake_assist"].map({'Yes':1,'No':0}) 194 | test_df["is_central_locking"]=test_df["is_central_locking"].map({'Yes':1,'No':0}) 195 | test_df["is_power_steering"]=test_df["is_power_steering"].map({'Yes':1,'No':0}) 196 | test_df["segment"]=test_df["segment"].map({'A':1,'B2':2,'C2':3,'Others':4}) 197 | test_df["model"]=test_df["model"].map({'M1':1,'M4':2,'M6':3,'Others':4}) 198 | test_df["fuel_type"]=test_df["fuel_type"].map({'CNG':1,'Petrol':2,'Diesel':3}) 199 | 200 | 201 | #dropping those normal non OHE cols 202 | test_df=test_df.drop(["segment","model","fuel_type"],axis=1) 203 | 204 | X_train=train_df.drop([TARGET_COLUMN],axis=1) 205 | y_train=train_df[TARGET_COLUMN] 206 | 207 | smk = SMOTETomek(random_state=42) 208 | X_train_res,y_train_res=smk.fit_resample(X_train,y_train) 209 | 210 | X_test=test_df.drop([TARGET_COLUMN],axis=1) 211 | y_test=test_df[TARGET_COLUMN] 212 | 213 | preprocessor=self.get_data_transformer_object() 214 | 215 | preprocessor_object=preprocessor.fit(X_train_res) 216 | print(X_train_res.columns) 217 | print("=============================") 218 | transformed_input_train_feature=pd.concat([pd.DataFrame(preprocessor_object.transform(X_train_res),columns=X_train_res.columns),y_train_res],axis=1) 219 | 220 | transformed_input_test_feature =pd.concat([pd.DataFrame(preprocessor_object.transform(X_test),columns=X_test.columns),y_test],axis=1) 221 | 222 | #save numpy array data 223 | transformed_input_train_feature.to_csv( self.data_transformation_config.transformed_train_file_path,index=False, header=True) 224 | transformed_input_test_feature.to_csv( self.data_transformation_config.transformed_test_file_path,index=False, header=True) 225 | 226 | 
save_object( self.data_transformation_config.transformed_object_file_path, preprocessor_object) 227 | 228 | save_object( PREPROCESSOR_MODEL_PATH, preprocessor_object) 229 | 230 | 231 | #preparing artifacts 232 | 233 | data_transformation_artifact=DataTransformationArtifact( 234 | transformed_object_file_path=self.data_transformation_config.transformed_object_file_path, 235 | transformed_train_file_path=self.data_transformation_config.transformed_train_file_path, 236 | transformed_test_file_path=self.data_transformation_config.transformed_test_file_path 237 | ) 238 | return data_transformation_artifact 239 | 240 | 241 | 242 | except Exception as e: 243 | raise InsuranceClaimException(e,sys) 244 | -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/claim/components/data_validation.py: -------------------------------------------------------------------------------- 1 | from claim.entity.data_ingestion_artifact import DataIngestionArtifact 2 | from claim.entity.data_validation_artifact import DataValidationArtifact 3 | from claim.entity.data_validation_config import DataValidationConfig 4 | from claim.exception.exception import InsuranceClaimException 5 | from claim.logging.logger import logging 6 | from claim.constants import SCHEMA_FILE_PATH 7 | from scipy.stats import ks_2samp 8 | import pandas as pd 9 | import os,sys 10 | from claim.utils import read_yaml_file,write_yaml_file,read_data 11 | 12 | class DataValidation: 13 | def __init__(self,data_ingestion_artifact:DataIngestionArtifact, 14 | data_validation_config:DataValidationConfig): 15 | 16 | try: 17 | self.data_ingestion_artifact=data_ingestion_artifact 18 | self.data_validation_config=data_validation_config 19 | self._schema_config = read_yaml_file(SCHEMA_FILE_PATH) 20 | except Exception as e: 21 | raise InsuranceClaimException(e,sys) 22 | 23 | 24 | def 
validate_number_of_columns(self,dataframe:pd.DataFrame)->bool: 25 | try: 26 | number_of_columns=len(self._schema_config["columns"]) 27 | logging.info(f"Required number of columns:{number_of_columns}") 28 | logging.info(f"Data frame has columns:{len(dataframe.columns)}") 29 | if len(dataframe.columns)==number_of_columns: 30 | return True 31 | return False 32 | except Exception as e: 33 | raise InsuranceClaimException(e,sys) 34 | 35 | def detect_dataset_drift(self,existing_df,current_df,threshold=0.05)->bool: 36 | try: 37 | report={} 38 | for column in existing_df.columns: 39 | d1=existing_df[column] 40 | d2=current_df[column] 41 | check_dist=ks_2samp(d1,d2) 42 | if threshold<=check_dist.pvalue: 43 | is_found=False 44 | else: 45 | is_found=True 46 | report.update({column:{ 47 | "p_value":float(check_dist.pvalue), 48 | "drift_status":is_found 49 | 50 | }}) 51 | 52 | drift_report_file_path = self.data_validation_config.drift_report_file_path 53 | 54 | #Create directory 55 | dir_path = os.path.dirname(drift_report_file_path) 56 | os.makedirs(dir_path,exist_ok=True) 57 | write_yaml_file(file_path=drift_report_file_path,data=report) 58 | logging.info(f"Drift report file path: {drift_report_file_path}") 59 | 60 | except Exception as e: 61 | raise InsuranceClaimException(e,sys) 62 | 63 | 64 | def initiate_data_validation(self)->DataValidationArtifact: 65 | try: 66 | train_file_path=self.data_ingestion_artifact.train_file_path 67 | test_file_path=self.data_ingestion_artifact.test_file_path 68 | 69 | ## read the data from train and test 70 | train_dataframe=read_data(train_file_path) 71 | test_dataframe=read_data(test_file_path) 72 | 73 | ## validate number of columns 74 | 75 | status=self.validate_number_of_columns(dataframe=train_dataframe) 76 | if not status: 77 | logging.error("Train dataframe does not contain all columns.") 78 | raise InsuranceClaimException(str("Train dataframe does not contain all columns."),sys) 79 | status = 
self.validate_number_of_columns(dataframe=test_dataframe) 80 | print("column check status->{}".format(status)) 81 | if not status: 82 | logging.error("Test dataframe does not contain all columns.") 83 | raise InsuranceClaimException(str("Test dataframe does not contain all columns."),sys) 84 | 85 | ## let's check data drift 86 | self.detect_dataset_drift(existing_df=train_dataframe,current_df=test_dataframe) 87 | dir_path=os.path.dirname(self.data_validation_config.valid_train_file_path) 88 | os.makedirs(dir_path,exist_ok=True) 89 | 90 | train_dataframe.to_csv( 91 | self.data_validation_config.valid_train_file_path, index=False, header=True 92 | 93 | ) 94 | 95 | test_dataframe.to_csv( 96 | self.data_validation_config.valid_test_file_path, index=False, header=True 97 | ) 98 | 99 | data_validation_artifact = DataValidationArtifact( 100 | valid_train_file_path=self.data_validation_config.valid_train_file_path, 101 | valid_test_file_path=self.data_validation_config.valid_test_file_path, 102 | drift_report_file_path=self.data_validation_config.drift_report_file_path, 103 | ) 104 | return data_validation_artifact 105 | except Exception as e: 106 | raise InsuranceClaimException(e,sys) 107 | 108 | 109 | 110 | -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/claim/components/model_trainer.py: -------------------------------------------------------------------------------- 1 | from claim.entity.data_ingestion_artifact import DataIngestionArtifact 2 | from claim.entity.data_validation_artifact import DataValidationArtifact 3 | from claim.entity.data_validation_config import DataValidationConfig 4 | from claim.entity.data_transformation_artifact import DataTransformationArtifact 5 | from claim.entity.model_trainer_artifact import ModelTrainerArtifact 6 | from claim.entity.model_trainer_config import ModelTrainerConfig 7 | from claim.entity.model_trainer_artifact 
import ClassificationMetricArtifact 8 | from claim.exception.exception import InsuranceClaimException 9 | from claim.logging.logger import logging 10 | from sklearn.ensemble import RandomForestClassifier 11 | from sklearn.model_selection import GridSearchCV 12 | from sklearn.metrics import precision_score,recall_score,f1_score 13 | import mlflow 14 | 15 | import pandas as pd 16 | import os,sys 17 | from claim.utils import read_yaml_file,write_yaml_file,read_data,save_object 18 | from claim.constants import TARGET_COLUMN,ML_MODEL_PATH,PARAMS_PATH 19 | 20 | class ModelTrainer: 21 | def __init__(self,model_trainer_config:ModelTrainerConfig,data_transformation_artifact:DataTransformationArtifact): 22 | try: 23 | self.model_trainer_config=model_trainer_config 24 | self.data_transformation_artifact=data_transformation_artifact 25 | except Exception as e: 26 | raise InsuranceClaimException(e,sys) 27 | 28 | def __models_runner(self,x_train:pd.DataFrame,y_train:pd.Series,x_test:pd.DataFrame,y_test:pd.Series): 29 | parameters=read_yaml_file(PARAMS_PATH)["RandomForestClassifier"] 30 | # Grid-search the hyperparameters from params.yaml, then keep the refit best estimator 31 | gs = GridSearchCV(RandomForestClassifier(),parameters,cv=2) 32 | gs.fit(x_train,y_train) 33 | 34 | model = gs.best_estimator_  # already refit on x_train with the best parameters (refit=True by default) 35 | 36 | 37 | 38 | save_object(file_path=self.model_trainer_config.trained_model_file_path,obj=model) 39 | save_object(file_path=ML_MODEL_PATH,obj=model) 40 | y_train_pred = model.predict(x_train) 41 | 42 | y_test_pred = model.predict(x_test) 43 | 44 | # Calculate metrics 45 | train_precision = precision_score(y_train, y_train_pred) 46 | train_recall = recall_score(y_train, y_train_pred) 47 | train_f1 = f1_score(y_train, y_train_pred) 48 | logging.info(f"Train Precision: {train_precision}") 49 | logging.info(f"Train Recall: {train_recall}") 50 | logging.info(f"Train F1 Score: {train_f1}") 51 | 52 | test_precision = 
precision_score(y_test, y_test_pred) 53 | test_recall = recall_score(y_test, y_test_pred) 54 | test_f1 = f1_score(y_test, y_test_pred) 55 | 56 | logging.info(f"Test Precision: {test_precision}") 57 | logging.info(f"Test Recall: {test_recall}") 58 | logging.info(f"Test F1 Score: {test_f1}") 59 | 60 | metrics={ 61 | "train_precision": train_precision, 62 | "train_recall": train_recall, 63 | "train_f1": train_f1, 64 | "test_precision": test_precision, 65 | "test_recall": test_recall, 66 | "test_f1": test_f1 67 | } 68 | 69 | return metrics,model 70 | 71 | def __track_model(self,model,metrics:ClassificationMetricArtifact,run_name:str): 72 | mlflow.set_tracking_uri("http://127.0.0.1:5000") 73 | with mlflow.start_run(run_name=run_name): 74 | # Local names must not shadow the sklearn metric functions imported above 75 | f1=metrics.f1_score 76 | precision=metrics.precision_score 77 | recall=metrics.recall_score 78 | 79 | 80 | mlflow.log_metric("f1_score",f1) 81 | mlflow.log_metric("precision_score",precision) 82 | mlflow.log_metric("recall_score",recall) 83 | mlflow.sklearn.log_model(model,"model") 84 | 85 | 86 | 87 | def train_model(self,x_train:pd.DataFrame,y_train:pd.Series,x_test:pd.DataFrame,y_test:pd.Series)->ModelTrainerArtifact: 88 | try: 89 | 90 | dir_path=os.path.dirname(self.model_trainer_config.trained_model_file_path) 91 | os.makedirs(dir_path,exist_ok=True) 92 | 93 | 94 | metrics,model=self.__models_runner(x_train,y_train,x_test,y_test) 95 | 96 | training_metrics=ClassificationMetricArtifact( 97 | f1_score=metrics["train_f1"], 98 | precision_score=metrics["train_precision"], 99 | recall_score=metrics["train_recall"] 100 | ) 101 | 102 | # Track the training-set metrics 103 | self.__track_model(model,training_metrics,run_name="train_metrics") 104 | 105 | testing_metrics=ClassificationMetricArtifact( 106 | f1_score=metrics["test_f1"], 107 | precision_score=metrics["test_precision"], 108 | recall_score=metrics["test_recall"] 109 | ) 110 | 111 | # Track the test-set metrics 112 | self.__track_model(model,testing_metrics,run_name="test_metrics") 113 | 114 | if 
self.model_trainer_config.expected_accuracy > metrics["test_f1"]: 115 | raise InsuranceClaimException("Trained model failed to reach the expected F1 score",sys) 116 | 117 | # Create the artifact only after the model clears the expected score 118 | model_trainer_artifact=ModelTrainerArtifact(trained_model_file_path=self.model_trainer_config.trained_model_file_path,train_metric_artifact=training_metrics,test_metric_artifact=testing_metrics) 119 | 120 | return model_trainer_artifact 121 | 122 | 123 | except Exception as e: 124 | raise InsuranceClaimException(e,sys) 125 | 126 | def initiate_model_trainer(self)->ModelTrainerArtifact: 127 | try: 128 | train_file_path = self.data_transformation_artifact.transformed_train_file_path 129 | test_file_path = self.data_transformation_artifact.transformed_test_file_path 130 | 131 | # Load the transformed training and testing dataframes 132 | train_data = read_data(train_file_path) 133 | test_data = read_data(test_file_path) 134 | 135 | x_train, y_train, x_test, y_test = ( 136 | train_data.drop(columns=[TARGET_COLUMN]), 137 | train_data[TARGET_COLUMN], 138 | test_data.drop(columns=[TARGET_COLUMN]), 139 | test_data[TARGET_COLUMN], 140 | ) 141 | 142 | model_trainer_artifact=self.train_model(x_train,y_train,x_test,y_test) 143 | return model_trainer_artifact 144 | 145 | 146 | except Exception as e: 147 | raise InsuranceClaimException(e,sys) -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/claim/constants/__init__.py: -------------------------------------------------------------------------------- 1 | 2 | TARGET_COLUMN:str="is_claim" 3 | TRAIN_FILE_NAME:str="train.csv" 4 | TEST_FILE_NAME:str="test.csv" 5 | RAW_FILE_NAME:str="raw.csv" 6 | ARTIFACT_DIR:str="Artifacts" 7 | PIPELINE_NAME:str="Insurance_Claim_Pipeline" 8 | PARAMS_PATH="claim/params/params.yaml" 9 | SCHEMA_FILE_PATH="data_config/schema.yaml" 10 | ML_MODEL_PATH="final_models/model.pkl" 11 | 
PREPROCESSOR_MODEL_PATH="final_models/preprocessor.pkl" 12 | 13 | import os  # used to read database credentials from the environment 14 | 15 | #############DATA INGESTION RELATED CONSTANTS############# 16 | DATA_INGESTION_HOST_NAME:str="localhost" 17 | DATA_INGESTION_USER:str="root" 18 | # Never commit real credentials; export DATA_INGESTION_PASSWORD before running the pipeline 19 | DATA_INGESTION_PASSWORD:str=os.getenv("DATA_INGESTION_PASSWORD","") 20 | DATA_INGESTION_DATABASE_NAME:str="insurance_db" 21 | DATA_INGESTION_TABLE_NAME:str="insurance_data" 22 | DATA_INGESTION_DIR_NAME:str="data_ingestion" 23 | DATA_INGESTION_FEATURE_STORE:str="feature_store" 24 | DATA_INGESTION_INGESTED_DIR:str="ingested" 25 | 26 | 27 | #############DATA VALIDATION RELATED CONSTANTS############# 28 | DATA_VALIDATION_DIR_NAME:str="data_validation" 29 | DATA_VALIDATION_VALID_DIR:str="validated" 30 | DATA_VALIDATION_DRIFT_REPORT_DIR:str="drift_report" 31 | DATA_VALIDATION_DRIFT_REPORT_FILE_NAME:str="report.yaml" 32 | 33 | 34 | 35 | ###############DATA TRANSFORMATION RELATED CONSTANTS############### 36 | DATA_TRANSFORMATION_DIR_NAME: str = "data_transformation" 37 | DATA_TRANSFORMATION_TRANSFORMED_DATA_DIR: str = "transformed" 38 | DATA_TRANSFORMATION_TRANSFORMED_OBJECT_DIR: str = "transformed_object" 39 | PREPROCESSING_OBJECT_FILE_NAME = "preprocessing.pkl" 40 | ############################################################ 41 | 42 | ###############MODEL TRAINING RELATED CONSTANTS############### 43 | MODEL_TRAINER_DIR_NAME: str = "model_trainer" 44 | MODEL_TRAINER_TRAINED_MODEL_DIR: str = "trained_model" 45 | MODEL_TRAINER_TRAINED_MODEL_NAME: str = "model.pkl" 46 | MODEL_TRAINER_EXPECTED_SCORE: float = 0.6 47 | MODEL_TRAINER_OVER_FIITING_UNDER_FITTING_THRESHOLD: float = 0.05 48 | MODEL_FILE_NAME = "model.pkl" -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/claim/constants/__pycache__/__init__.cpython-310.pyc: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/claim/constants/__pycache__/__init__.cpython-310.pyc -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/claim/entity/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/claim/entity/__init__.py -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/claim/entity/__pycache__/__init__.cpython-310.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/claim/entity/__pycache__/__init__.cpython-310.pyc -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/claim/entity/__pycache__/data_ingestion_artifact.cpython-310.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/012_MLops end-to-end project/insurance 
-claim-prediction-mlops/claim/entity/__pycache__/data_ingestion_artifact.cpython-310.pyc -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/claim/entity/__pycache__/data_ingestion_config.cpython-310.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/claim/entity/__pycache__/data_ingestion_config.cpython-310.pyc -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/claim/entity/__pycache__/data_transformation_artifact.cpython-310.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/claim/entity/__pycache__/data_transformation_artifact.cpython-310.pyc -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/claim/entity/__pycache__/data_transformation_config.cpython-310.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/claim/entity/__pycache__/data_transformation_config.cpython-310.pyc -------------------------------------------------------------------------------- /1_Foundations of AI 
Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/claim/entity/__pycache__/data_validation_artifact.cpython-310.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/claim/entity/__pycache__/data_validation_artifact.cpython-310.pyc -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/claim/entity/__pycache__/data_validation_config.cpython-310.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/claim/entity/__pycache__/data_validation_config.cpython-310.pyc -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/claim/entity/__pycache__/model_trainer_artifact.cpython-310.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/claim/entity/__pycache__/model_trainer_artifact.cpython-310.pyc -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/claim/entity/__pycache__/model_trainer_config.cpython-310.pyc: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/claim/entity/__pycache__/model_trainer_config.cpython-310.pyc -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/claim/entity/__pycache__/training_config.cpython-310.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/claim/entity/__pycache__/training_config.cpython-310.pyc -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/claim/entity/data_ingestion_artifact.py: -------------------------------------------------------------------------------- 1 | from dataclasses import dataclass 2 | 3 | @dataclass 4 | class DataIngestionArtifact: 5 | """ 6 | Data Ingestion Artifact class to store the artifacts of data ingestion. 
7 | """ 8 | 9 | train_file_path: str 10 | test_file_path: str 11 | -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/claim/entity/data_ingestion_config.py: -------------------------------------------------------------------------------- 1 | from datetime import datetime 2 | import os 3 | from claim.constants import ( 4 | DATA_INGESTION_DIR_NAME, 5 | DATA_INGESTION_FEATURE_STORE, 6 | RAW_FILE_NAME, 7 | DATA_INGESTION_INGESTED_DIR, 8 | TRAIN_FILE_NAME, 9 | TEST_FILE_NAME, 10 | DATA_INGESTION_USER, 11 | DATA_INGESTION_PASSWORD, 12 | DATA_INGESTION_HOST_NAME, 13 | DATA_INGESTION_DATABASE_NAME, 14 | DATA_INGESTION_TABLE_NAME, 15 | PARAMS_PATH 16 | ) 17 | from claim.entity.training_config import TrainingPipelineConfig 18 | from claim.utils import read_yaml_file 19 | 20 | class DataIngestionConfig: 21 | """ 22 | Configuration class for setting up the data ingestion process. 23 | It initializes paths for storing raw and processed data, as well as 24 | database connection parameters and ingestion-specific parameters. 25 | """ 26 | 27 | def __init__(self, training_pipeline_config: TrainingPipelineConfig): 28 | """ 29 | Initializes the DataIngestionConfig instance with paths and parameters. 30 | 31 | Args: 32 | training_pipeline_config (TrainingPipelineConfig): An instance of the TrainingPipelineConfig class. 
33 | """ 34 | 35 | # Base directory for data ingestion artifacts 36 | self.data_ingestion_dir: str = os.path.join( 37 | training_pipeline_config.artifact_dir, 38 | DATA_INGESTION_DIR_NAME 39 | ) 40 | 41 | # Path to store the raw data (feature store) 42 | self.feature_store_file_path: str = os.path.join( 43 | self.data_ingestion_dir, 44 | DATA_INGESTION_FEATURE_STORE, 45 | RAW_FILE_NAME 46 | ) 47 | 48 | # Path to store the training data after splitting 49 | self.training_file_path: str = os.path.join( 50 | self.data_ingestion_dir, 51 | DATA_INGESTION_INGESTED_DIR, 52 | TRAIN_FILE_NAME 53 | ) 54 | 55 | # Path to store the testing data after splitting 56 | self.testing_file_path: str = os.path.join( 57 | self.data_ingestion_dir, 58 | DATA_INGESTION_INGESTED_DIR, 59 | TEST_FILE_NAME 60 | ) 61 | 62 | # Database connection parameters 63 | self.db_user: str = DATA_INGESTION_USER 64 | self.db_password: str = DATA_INGESTION_PASSWORD 65 | self.db_host: str = DATA_INGESTION_HOST_NAME 66 | self.db_name: str = DATA_INGESTION_DATABASE_NAME 67 | self.db_table_name: str = DATA_INGESTION_TABLE_NAME 68 | 69 | # Ingestion-specific parameters loaded from a YAML configuration file 70 | self.ingestion_params: dict = read_yaml_file(PARAMS_PATH).get("Ingestion", {}) 71 | -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/claim/entity/data_transformation_artifact.py: -------------------------------------------------------------------------------- 1 | from dataclasses import dataclass 2 | 3 | @dataclass 4 | class DataTransformationArtifact: 5 | transformed_object_file_path: str 6 | transformed_train_file_path: str 7 | transformed_test_file_path: str -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/claim/entity/data_transformation_config.py: 
-------------------------------------------------------------------------------- 1 | from datetime import datetime 2 | import os 3 | from claim.constants import ( 4 | DATA_TRANSFORMATION_DIR_NAME, 5 | DATA_TRANSFORMATION_TRANSFORMED_DATA_DIR, 6 | DATA_TRANSFORMATION_TRANSFORMED_OBJECT_DIR, 7 | TRAIN_FILE_NAME, 8 | TEST_FILE_NAME, 9 | PREPROCESSING_OBJECT_FILE_NAME 10 | ) 11 | from claim.entity.training_config import TrainingPipelineConfig 12 | 13 | 14 | class DataTransformationConfig: 15 | def __init__(self,training_pipeline_config:TrainingPipelineConfig): 16 | self.data_transformation_dir: str = os.path.join( training_pipeline_config.artifact_dir,DATA_TRANSFORMATION_DIR_NAME ) 17 | self.transformed_train_file_path: str = os.path.join( self.data_transformation_dir,DATA_TRANSFORMATION_TRANSFORMED_DATA_DIR, 18 | TRAIN_FILE_NAME) 19 | self.transformed_test_file_path: str = os.path.join(self.data_transformation_dir, DATA_TRANSFORMATION_TRANSFORMED_DATA_DIR, 20 | TEST_FILE_NAME) 21 | self.transformed_object_file_path: str = os.path.join( self.data_transformation_dir, DATA_TRANSFORMATION_TRANSFORMED_OBJECT_DIR,PREPROCESSING_OBJECT_FILE_NAME) -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/claim/entity/data_validation_artifact.py: -------------------------------------------------------------------------------- 1 | from dataclasses import dataclass 2 | 3 | @dataclass 4 | class DataValidationArtifact: 5 | valid_train_file_path: str 6 | valid_test_file_path: str 7 | drift_report_file_path: str -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/claim/entity/data_validation_config.py: -------------------------------------------------------------------------------- 1 | from datetime import datetime 2 | import os 3 | from 
claim.constants import ( 4 | DATA_VALIDATION_DIR_NAME, 5 | DATA_VALIDATION_DRIFT_REPORT_DIR, 6 | DATA_VALIDATION_DRIFT_REPORT_FILE_NAME, 7 | DATA_VALIDATION_VALID_DIR, 8 | TRAIN_FILE_NAME, 9 | TEST_FILE_NAME 10 | ) 11 | from claim.entity.training_config import TrainingPipelineConfig 12 | 13 | 14 | 15 | class DataValidationConfig: 16 | def __init__(self,training_pipeline_config:TrainingPipelineConfig): 17 | self.data_validation_dir: str = os.path.join( training_pipeline_config.artifact_dir, DATA_VALIDATION_DIR_NAME) 18 | self.valid_data_dir: str = os.path.join(self.data_validation_dir, DATA_VALIDATION_VALID_DIR) 19 | self.valid_train_file_path: str = os.path.join(self.valid_data_dir, TRAIN_FILE_NAME) 20 | self.valid_test_file_path: str = os.path.join(self.valid_data_dir, TEST_FILE_NAME) 21 | self.drift_report_file_path: str = os.path.join( 22 | self.data_validation_dir, 23 | DATA_VALIDATION_DRIFT_REPORT_DIR, 24 | DATA_VALIDATION_DRIFT_REPORT_FILE_NAME, 25 | ) -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/claim/entity/model_trainer_artifact.py: -------------------------------------------------------------------------------- 1 | from dataclasses import dataclass 2 | 3 | 4 | @dataclass 5 | class ClassificationMetricArtifact: 6 | f1_score: float 7 | precision_score: float 8 | recall_score: float 9 | 10 | 11 | 12 | @dataclass 13 | class ModelTrainerArtifact: 14 | trained_model_file_path: str 15 | train_metric_artifact: ClassificationMetricArtifact 16 | test_metric_artifact: ClassificationMetricArtifact -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/claim/entity/model_trainer_config.py: -------------------------------------------------------------------------------- 1 | from datetime import datetime 2 | import os 3 
| from claim.constants import ( 4 | MODEL_TRAINER_DIR_NAME, 5 | MODEL_TRAINER_EXPECTED_SCORE, 6 | MODEL_TRAINER_OVER_FIITING_UNDER_FITTING_THRESHOLD, 7 | MODEL_TRAINER_TRAINED_MODEL_DIR, 8 | MODEL_TRAINER_TRAINED_MODEL_NAME, 9 | TRAIN_FILE_NAME, 10 | TEST_FILE_NAME, 11 | MODEL_FILE_NAME, 12 | PREPROCESSING_OBJECT_FILE_NAME 13 | ) 14 | from claim.entity.training_config import TrainingPipelineConfig 15 | 16 | 17 | 18 | class ModelTrainerConfig: 19 | def __init__(self,training_pipeline_config:TrainingPipelineConfig): 20 | self.model_trainer_dir: str = os.path.join( 21 | training_pipeline_config.artifact_dir, MODEL_TRAINER_DIR_NAME 22 | ) 23 | self.trained_model_file_path: str = os.path.join( 24 | self.model_trainer_dir, MODEL_TRAINER_TRAINED_MODEL_DIR, 25 | MODEL_FILE_NAME 26 | ) 27 | self.expected_accuracy: float = MODEL_TRAINER_EXPECTED_SCORE 28 | self.overfitting_underfitting_threshold = MODEL_TRAINER_OVER_FIITING_UNDER_FITTING_THRESHOLD -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/claim/entity/training_config.py: -------------------------------------------------------------------------------- 1 | from datetime import datetime 2 | import os 3 | from claim.constants import PIPELINE_NAME, ARTIFACT_DIR 4 | 5 | class TrainingPipelineConfig: 6 | """ 7 | Configuration class for setting up the training pipeline. 8 | It initializes the pipeline name and constructs the artifact directory path 9 | with a timestamp to ensure uniqueness for each pipeline run. 
10 | """ 11 | 12 | def __init__(self): 13 | # Generate a timestamp in the format 'YYYY-MM-DD-HH-MM-SS' 14 | timestamp = datetime.now().strftime("%Y-%m-%d-%H-%M-%S") 15 | 16 | # Set the name of the pipeline from constants 17 | self.pipeline_name:str = PIPELINE_NAME 18 | 19 | # Set the base artifact directory name from constants 20 | self.artifact_name:str = ARTIFACT_DIR 21 | 22 | # Construct the full path to the artifact directory by combining 23 | # the base artifact directory with the generated timestamp 24 | self.artifact_dir:str = os.path.join(self.artifact_name, timestamp) 25 | -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/claim/exception/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/claim/exception/__init__.py -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/claim/exception/__pycache__/__init__.cpython-310.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/claim/exception/__pycache__/__init__.cpython-310.pyc -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/claim/exception/__pycache__/exception.cpython-310.pyc: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/claim/exception/__pycache__/exception.cpython-310.pyc -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/claim/exception/exception.py: -------------------------------------------------------------------------------- 1 | import sys 2 | 3 | 4 | class InsuranceClaimException(Exception): 5 | """ 6 | Custom exception class for the Insurance Claim Prediction project. 7 | Captures detailed information about exceptions, including the error message, 8 | file name, and line number where the exception occurred. 9 | """ 10 | 11 | def __init__(self, error_message: str, error_detail: sys): 12 | """ 13 | Initializes the InsuranceClaimException instance with detailed error information. 14 | 15 | Args: 16 | error_message (str): The original error message. 17 | error_detail (sys): The sys module, used to extract exception details. 18 | """ 19 | super().__init__(error_message) 20 | self.error_message = error_message 21 | 22 | # Extract exception information 23 | _, _, exc_tb = error_detail.exc_info() 24 | # Fallback values so __str__ never fails when no traceback is available 25 | self.filename, self.lineno = "unknown", -1 26 | if exc_tb is not None: 27 | # Traverse to the innermost traceback to get accurate error location 28 | while exc_tb.tb_next: 29 | exc_tb = exc_tb.tb_next 30 | 31 | self.filename = exc_tb.tb_frame.f_code.co_filename # File where exception occurred 32 | self.lineno = exc_tb.tb_lineno # Line number where exception occurred 33 | 34 | def __str__(self): 35 | """ 36 | Returns a formatted string representation of the error, including 37 | the error message, file name, and line number. 38 | 39 | Returns: 40 | str: Formatted error message. 
41 | """ 42 | return f"Error: {self.error_message} | File: {self.filename} | Line: {self.lineno}" 43 | 44 | 45 | if __name__ == "__main__": 46 | try: 47 | # Example operation that will raise a ZeroDivisionError 48 | result = 1 / 0 49 | except Exception as e: 50 | # Raise custom exception with detailed information 51 | raise InsuranceClaimException(str(e), sys) 52 | -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/claim/logging/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/claim/logging/__init__.py -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/claim/logging/__pycache__/__init__.cpython-310.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/claim/logging/__pycache__/__init__.cpython-310.pyc -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/claim/logging/__pycache__/logger.cpython-310.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/012_MLops end-to-end project/insurance 
-claim-prediction-mlops/claim/logging/__pycache__/logger.cpython-310.pyc -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/claim/logging/logger.py: -------------------------------------------------------------------------------- 1 | import logging 2 | import os 3 | from datetime import datetime 4 | 5 | # Generate a log file name using the current date and time 6 | LOG_FILE_NAME = f"{datetime.now().strftime('%Y-%m-%d-%H-%M-%S')}.log" 7 | 8 | # Define the path where log files will be stored 9 | logs_path = os.path.join(os.getcwd(), "logs") 10 | 11 | # Create the logs directory if it doesn't already exist 12 | os.makedirs(logs_path, exist_ok=True) 13 | 14 | # Full path to the log file 15 | log_file_path = os.path.join(logs_path, LOG_FILE_NAME) 16 | 17 | # Configure the logging settings 18 | logging.basicConfig( 19 | filename=log_file_path, # Log file location 20 | format='%(asctime)s - %(message)s - [Line: %(lineno)d] - %(name)s - %(levelname)s', # Log format 21 | level=logging.INFO, # Minimum logging level 22 | datefmt='%Y-%m-%d %H:%M:%S' # Date format in log entries 23 | ) 24 | -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/claim/params/params.yaml: -------------------------------------------------------------------------------- 1 | ### Ingestion 2 | Ingestion: 3 | split_ratio : 0.2 4 | 5 | 6 | RandomForestClassifier: 7 | n_estimators: [8,16] 8 | max_depth: [2,4] 9 | min_samples_split: [2,4] -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/claim/pipeline/__init__.py: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/claim/pipeline/__init__.py -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/claim/pipeline/run_pipeline.py: -------------------------------------------------------------------------------- 1 | import os 2 | import sys 3 | 4 | from claim.exception.exception import InsuranceClaimException 5 | from claim.logging.logger import logging 6 | 7 | from claim.components.data_ingestion import DataIngestion 8 | from claim.components.data_validation import DataValidation 9 | from claim.components.data_transformation import DataTransformation 10 | from claim.components.model_trainer import ModelTrainer 11 | 12 | 13 | from claim.entity.training_config import TrainingPipelineConfig 14 | from claim.entity.data_ingestion_config import DataIngestionConfig 15 | from claim.entity.data_validation_config import DataValidationConfig 16 | from claim.entity.data_transformation_config import DataTransformationConfig 17 | from claim.entity.model_trainer_config import ModelTrainerConfig 18 | 19 | from claim.entity.data_ingestion_artifact import DataIngestionArtifact 20 | from claim.entity.data_validation_artifact import DataValidationArtifact 21 | from claim.entity.data_transformation_artifact import DataTransformationArtifact 22 | from claim.entity.model_trainer_artifact import ModelTrainerArtifact 23 | 24 | 25 | 26 | 27 | class TrainingPipeline: 28 | def __init__(self): 29 | self.training_pipeline_config=TrainingPipelineConfig() 30 | 31 | 32 | 33 | def start_data_ingestion(self): 34 | try: 35 | self.data_ingestion_config=DataIngestionConfig(training_pipeline_config=self.training_pipeline_config) 36 | logging.info("Starting data ingestion") 37 | 
data_ingestion=DataIngestion(data_ingestion_config=self.data_ingestion_config) 38 | data_ingestion_artifact=data_ingestion.initiate_data_ingestion() 39 | logging.info(f"Data Ingestion completed and artifact: {data_ingestion_artifact}") 40 | return data_ingestion_artifact 41 | 42 | except Exception as e: 43 | raise InsuranceClaimException(e,sys) 44 | 45 | def start_data_validation(self,data_ingestion_artifact:DataIngestionArtifact): 46 | try: 47 | data_validation_config=DataValidationConfig(training_pipeline_config=self.training_pipeline_config) 48 | data_validation=DataValidation(data_ingestion_artifact=data_ingestion_artifact,data_validation_config=data_validation_config) 49 | logging.info("Initiate the data Validation") 50 | data_validation_artifact=data_validation.initiate_data_validation() 51 | return data_validation_artifact 52 | except Exception as e: 53 | raise InsuranceClaimException(e,sys) 54 | 55 | def start_data_transformation(self,data_validation_artifact:DataValidationArtifact): 56 | try: 57 | data_transformation_config = DataTransformationConfig(training_pipeline_config=self.training_pipeline_config) 58 | data_transformation = DataTransformation(data_validation_artifact=data_validation_artifact, 59 | data_transformation_config=data_transformation_config) 60 | 61 | data_transformation_artifact = data_transformation.initiate_data_transformation() 62 | return data_transformation_artifact 63 | except Exception as e: 64 | raise InsuranceClaimException(e,sys) 65 | 66 | def start_model_trainer(self,data_transformation_artifact:DataTransformationArtifact)->ModelTrainerArtifact: 67 | try: 68 | self.model_trainer_config: ModelTrainerConfig = ModelTrainerConfig( 69 | training_pipeline_config=self.training_pipeline_config 70 | ) 71 | 72 | model_trainer = ModelTrainer( 73 | data_transformation_artifact=data_transformation_artifact, 74 | model_trainer_config=self.model_trainer_config, 75 | ) 76 | 77 | model_trainer_artifact = model_trainer.initiate_model_trainer() 
78 | 79 | return model_trainer_artifact 80 | 81 | except Exception as e: 82 | raise InsuranceClaimException(e, sys) 83 | 84 | 85 | 86 | 87 | def run_pipeline(self): 88 | try: 89 | data_ingestion_artifact=self.start_data_ingestion() 90 | data_validation_artifact=self.start_data_validation(data_ingestion_artifact=data_ingestion_artifact) 91 | data_transformation_artifact=self.start_data_transformation(data_validation_artifact=data_validation_artifact) 92 | model_trainer_artifact=self.start_model_trainer(data_transformation_artifact=data_transformation_artifact) 93 | 94 | 95 | return model_trainer_artifact 96 | except Exception as e: 97 | raise InsuranceClaimException(e,sys) 98 | 99 | 100 | -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/claim/utils/__init__.py: -------------------------------------------------------------------------------- 1 | import yaml 2 | import os 3 | import pickle 4 | import sys 5 | from claim.exception.exception import InsuranceClaimException 6 | import pandas as pd 7 | from claim.logging.logger import logging 8 | def read_yaml_file(file_path: str) -> dict: 9 | """ 10 | Reads a YAML file and returns its contents as a Python dictionary. 11 | 12 | Args: 13 | file_path (str): The path to the YAML file. 14 | 15 | Returns: 16 | dict: Parsed contents of the YAML file. 17 | 18 | Raises: 19 | FileNotFoundError: If the specified file does not exist. 20 | yaml.YAMLError: If there's an error parsing the YAML file. 21 | Exception: For any other exceptions that may occur. 
22 | """ 23 | try: 24 | logging.info("Entered the read_yaml_file method of MainUtils class") 25 | # Open and read the YAML file 26 | with open(file_path, 'r') as file: 27 | logging.info(f"Reading YAML file from {file_path}") 28 | data = yaml.safe_load(file) 29 | return data 30 | 31 | except Exception as e: 32 | # Handle other exceptions and provide traceback 33 | logging.error(f"Error reading YAML file: {e}") 34 | raise InsuranceClaimException(e, sys) 35 | 36 | 37 | def write_yaml_file(file_path: str, data: dict) -> None: 38 | """ 39 | Writes the provided data to a YAML file at the specified path. 40 | 41 | Args: 42 | file_path (str): The path where the YAML file will be written. 43 | data (dict): The data to write to the YAML file. 44 | 45 | Raises: 46 | Exception: If an error occurs during the file writing process. 47 | """ 48 | try: 49 | # Ensure the directory exists 50 | logging.info("Entered the write_yaml_file method") 51 | os.makedirs(os.path.dirname(file_path), exist_ok=True) 52 | 53 | # Open the file in write mode and dump the data as YAML 54 | with open(file_path, 'w') as file: 55 | yaml.dump(data, file, default_flow_style=False) 56 | logging.info(f"Data written to {file_path}") 57 | print(f"Data successfully written to {file_path}") 58 | 59 | except Exception as e: 60 | raise InsuranceClaimException(e, sys) 61 | 62 | def read_data(file_path: str) -> pd.DataFrame: 63 | """ 64 | Reads a CSV file from the specified path into a Pandas DataFrame. 65 | 66 | Args: 67 | file_path (str): The path to the CSV file. 68 | 69 | Returns: 70 | pd.DataFrame: DataFrame containing the CSV data. 71 | 72 | Raises: 73 | InsuranceClaimException: If any error occurs during file reading. 
74 | """ 75 | try: 76 | 77 | # Attempt to read the CSV file 78 | logging.info("Entered the read_data method of MainUtils class") 79 | df = pd.read_csv(file_path) 80 | return df 81 | 82 | except Exception as e: 83 | logging.error(f"Error reading CSV file: {e}") 84 | # Capture the traceback for debugging 85 | raise InsuranceClaimException( 86 | e,sys 87 | ) 88 | 89 | 90 | 91 | def save_object(file_path: str, obj: object) -> None: 92 | """ 93 | Serializes and saves a Python object to a specified file path using pickle. 94 | 95 | Args: 96 | file_path (str): The full path (including filename) where the object will be saved. 97 | obj (object): The Python object to serialize and save. 98 | 99 | Raises: 100 | InsuranceClaimException: If any error occurs during the save process. 101 | """ 102 | try: 103 | logging.info("Starting save_object function.") 104 | 105 | # Create the directory if it does not exist 106 | dir_name = os.path.dirname(file_path) 107 | os.makedirs(dir_name, exist_ok=True) 108 | logging.debug(f"Ensured directory exists: {dir_name}") 109 | 110 | # Serialize the object and write to file in binary mode 111 | with open(file_path, "wb") as file_obj: 112 | pickle.dump(obj, file_obj) 113 | logging.info(f"Object successfully saved at: {file_path}") 114 | 115 | except Exception as e: 116 | logging.error(f"Failed to save object at {file_path}. 
Reason: {e}") 117 | raise InsuranceClaimException(str(e), sys) from e 118 | 119 | 120 | def load_object(file_path: str) -> object: 121 | """Deserializes and returns a Python object from a pickle file.""" 122 | try: 123 | if not os.path.exists(file_path): 124 | raise Exception(f"The file: {file_path} does not exist") 125 | with open(file_path, "rb") as file_obj: 126 | return pickle.load(file_obj) 127 | except Exception as e: 128 | raise InsuranceClaimException(e, sys) from e 129 | 130 | 131 | -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/claim/utils/__pycache__/__init__.cpython-310.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/claim/utils/__pycache__/__init__.cpython-310.pyc -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/data_config/schema.yaml: -------------------------------------------------------------------------------- 1 | columns: 2 | - policy_id: string 3 | - policy_tenure: float 4 | - age_of_car: float 5 | - age_of_policyholder: float 6 | - area_cluster: string 7 | - population_density: int 8 | - make: int 9 | - segment: string 10 | - model: string 11 | - fuel_type: string 12 | - max_torque: string 13 | - max_power: string 14 | - engine_type: string 15 | - airbags: int 16 | - is_esc: string 17 | - is_adjustable_steering: string 18 | - is_tpms: string 19 | - is_parking_sensors: string 20 | - is_parking_camera: string 21 | - rear_brakes_type: string 22 | - displacement: int 23 | - cylinder: int 24 | - transmission_type: string 25 | - gear_box: int 26 | - steering_type: string 27 | - turning_radius: float 28 | 
- length: int 29 | - width: int 30 | - height: int 31 | - gross_weight: int 32 | - is_front_fog_lights: string 33 | - is_rear_window_wiper: string 34 | - is_rear_window_washer: string 35 | - is_rear_window_defogger: string 36 | - is_brake_assist: string 37 | - is_power_door_locks: string 38 | - is_central_locking: string 39 | - is_power_steering: string 40 | - is_driver_seat_height_adjustable: string 41 | - is_day_night_rear_view_mirror: string 42 | - is_ecw: string 43 | - is_speed_alert: string 44 | - ncap_rating: int 45 | - is_claim: int 46 | 47 | numerical_columns: 48 | - age_of_car 49 | - age_of_policyholder 50 | - airbags 51 | - cylinder 52 | - displacement 53 | - gear_box 54 | - gross_weight 55 | - height 56 | - is_claim 57 | - length 58 | - make 59 | - ncap_rating 60 | - policy_tenure 61 | - population_density 62 | - turning_radius 63 | - width 64 | 65 | -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/experiments/exp.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "code", 5 | "execution_count": 1, 6 | "id": "ff75f16d", 7 | "metadata": {}, 8 | "outputs": [ 9 | { 10 | "name": "stdout", 11 | "output_type": "stream", 12 | "text": [ 13 | "Log file will be created: 2025-05-25-10-20-14.log\n" 14 | ] 15 | } 16 | ], 17 | "source": [ 18 | "import logging\n", 19 | "import os\n", 20 | "from datetime import datetime\n", 21 | "\n", 22 | "# Generate a log file name using the current date and time\n", 23 | "LOG_FILE_NAME = f\"{datetime.now().strftime('%Y-%m-%d-%H-%M-%S')}.log\"\n", 24 | "\n", 25 | "print(f\"Log file will be created: {LOG_FILE_NAME}\")" 26 | ] 27 | }, 28 | { 29 | "cell_type": "code", 30 | "execution_count": null, 31 | "id": "be864904", 32 | "metadata": {}, 33 | "outputs": [], 34 | "source": [] 35 | }, 36 | { 37 | "cell_type": "code", 38 | 
"execution_count": null, 39 | "id": "df6639eb", 40 | "metadata": {}, 41 | "outputs": [], 42 | "source": [] 43 | } 44 | ], 45 | "metadata": { 46 | "kernelspec": { 47 | "display_name": ".development", 48 | "language": "python", 49 | "name": "python3" 50 | }, 51 | "language_info": { 52 | "codemirror_mode": { 53 | "name": "ipython", 54 | "version": 3 55 | }, 56 | "file_extension": ".py", 57 | "mimetype": "text/x-python", 58 | "name": "python", 59 | "nbconvert_exporter": "python", 60 | "pygments_lexer": "ipython3", 61 | "version": "3.10.11" 62 | } 63 | }, 64 | "nbformat": 4, 65 | "nbformat_minor": 5 66 | } 67 | -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/final_models/model.pkl: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/final_models/model.pkl -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/final_models/preprocessor.pkl: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/final_models/preprocessor.pkl -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/insurance_claim_prediction.egg-info/PKG-INFO: -------------------------------------------------------------------------------- 1 | Metadata-Version: 2.1 2 | Name: 
insurance-claim-prediction 3 | Version: 0.0.1 4 | Author: Tanmay Chakraborty 5 | Author-email: chakrabortytanmay326@gmail.com 6 | -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/insurance_claim_prediction.egg-info/SOURCES.txt: -------------------------------------------------------------------------------- 1 | setup.py 2 | claim/__init__.py 3 | claim/components/__init__.py 4 | claim/constants/__init__.py 5 | claim/entity/__init__.py 6 | claim/exception/__init__.py 7 | claim/logging/__init__.py 8 | claim/pipeline/__init__.py 9 | claim/utils/__init__.py 10 | insurance_claim_prediction.egg-info/PKG-INFO 11 | insurance_claim_prediction.egg-info/SOURCES.txt 12 | insurance_claim_prediction.egg-info/dependency_links.txt 13 | insurance_claim_prediction.egg-info/requires.txt 14 | insurance_claim_prediction.egg-info/top_level.txt -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/insurance_claim_prediction.egg-info/dependency_links.txt: -------------------------------------------------------------------------------- 1 | 2 | -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/insurance_claim_prediction.egg-info/requires.txt: -------------------------------------------------------------------------------- 1 | pandas 2 | numpy 3 | ipykernel 4 | mysql-connector-python 5 | sqlalchemy 6 | pyyaml 7 | scikit-learn 8 | imblearn 9 | mlflow 10 | -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/insurance_claim_prediction.egg-info/top_level.txt: 
-------------------------------------------------------------------------------- 1 | claim 2 | -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/main.py: -------------------------------------------------------------------------------- 1 | from claim.entity.data_ingestion_config import DataIngestionConfig 2 | from claim.entity.data_validation_config import DataValidationConfig 3 | from claim.components.data_validation import DataValidation 4 | from claim.entity.data_transformation_config import DataTransformationConfig 5 | from claim.components.data_transformation import DataTransformation 6 | from claim.entity.model_trainer_config import ModelTrainerConfig 7 | from claim.components.model_trainer import ModelTrainer 8 | 9 | from claim.components.data_ingestion import DataIngestion 10 | 11 | from claim.entity.training_config import TrainingPipelineConfig 12 | import sys 13 | 14 | 15 | training_config = TrainingPipelineConfig() 16 | data_ingestion_config = DataIngestionConfig(training_pipeline_config=training_config) 17 | data_ingestion=DataIngestion(data_ingestion_config=data_ingestion_config) 18 | data_ingestion_artifact=data_ingestion.initiate_data_ingestion() 19 | print(data_ingestion_artifact) 20 | print("-===========================") 21 | 22 | 23 | data_validation_config = DataValidationConfig(training_pipeline_config=training_config) 24 | data_validation=DataValidation(data_ingestion_artifact=data_ingestion_artifact, 25 | data_validation_config=data_validation_config) 26 | data_validation_artifact=data_validation.initiate_data_validation() 27 | print(data_validation_artifact) 28 | print("-===========================") 29 | 30 | data_transformation_config = DataTransformationConfig(training_pipeline_config=training_config) 31 | data_transformation=DataTransformation(data_validation_artifact=data_validation_artifact, 32 | 
data_transformation_config=data_transformation_config) 33 | data_transformation_artifact=data_transformation.initiate_data_transformation() 34 | print(data_transformation_artifact) 35 | print("-===========================") 36 | 37 | model_trainer_config = ModelTrainerConfig(training_pipeline_config=training_config) 38 | model_trainer=ModelTrainer(data_transformation_artifact=data_transformation_artifact, 39 | model_trainer_config=model_trainer_config) 40 | model_trainer_artifact=model_trainer.initiate_model_trainer() 41 | print(model_trainer_artifact) -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/mlruns/0/3a5faa601f084ed98c92b80f2cf2c8f2/meta.yaml: -------------------------------------------------------------------------------- 1 | artifact_uri: mlflow-artifacts:/0/3a5faa601f084ed98c92b80f2cf2c8f2/artifacts 2 | end_time: 1748154521465 3 | entry_point_name: '' 4 | experiment_id: '0' 5 | lifecycle_stage: active 6 | run_id: 3a5faa601f084ed98c92b80f2cf2c8f2 7 | run_name: treasured-crab-225 8 | run_uuid: 3a5faa601f084ed98c92b80f2cf2c8f2 9 | source_name: '' 10 | source_type: 4 11 | source_version: '' 12 | start_time: 1748154517246 13 | status: 3 14 | tags: [] 15 | user_id: Tanmay Chakraborty 16 | -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/mlruns/0/3a5faa601f084ed98c92b80f2cf2c8f2/metrics/f1_score: -------------------------------------------------------------------------------- 1 | 1748154517283 0.15818815331010452 0 2 | -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/mlruns/0/3a5faa601f084ed98c92b80f2cf2c8f2/metrics/precision: 
-------------------------------------------------------------------------------- 1 | 1748154517296 0.09070929070929071 0 2 | -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/mlruns/0/3a5faa601f084ed98c92b80f2cf2c8f2/metrics/recall_score: -------------------------------------------------------------------------------- 1 | 1748154517308 0.617687074829932 0 2 | -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/mlruns/0/3a5faa601f084ed98c92b80f2cf2c8f2/tags/mlflow.log-model.history: -------------------------------------------------------------------------------- 1 | [{"run_id": "3a5faa601f084ed98c92b80f2cf2c8f2", "artifact_path": "model", "utc_time_created": "2025-05-25 06:28:37.319266", "model_uuid": "bf4ad9f08c8b47b5a104835e1573262e", "flavors": {"python_function": {"model_path": "model.pkl", "predict_fn": "predict", "loader_module": "mlflow.sklearn", "python_version": "3.10.11", "env": {"conda": "conda.yaml", "virtualenv": "python_env.yaml"}}, "sklearn": {"pickled_model": "model.pkl", "sklearn_version": "1.6.1", "serialization_format": "cloudpickle", "code": null}}}] -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/mlruns/0/3a5faa601f084ed98c92b80f2cf2c8f2/tags/mlflow.runName: -------------------------------------------------------------------------------- 1 | treasured-crab-225 -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/mlruns/0/3a5faa601f084ed98c92b80f2cf2c8f2/tags/mlflow.source.git.commit: 
-------------------------------------------------------------------------------- 1 | 14614ef76149e18c4c54474eea488f8cd3a68bdc -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/mlruns/0/3a5faa601f084ed98c92b80f2cf2c8f2/tags/mlflow.source.name: -------------------------------------------------------------------------------- 1 | main.py -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/mlruns/0/3a5faa601f084ed98c92b80f2cf2c8f2/tags/mlflow.source.type: -------------------------------------------------------------------------------- 1 | LOCAL -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/mlruns/0/3a5faa601f084ed98c92b80f2cf2c8f2/tags/mlflow.user: -------------------------------------------------------------------------------- 1 | Tanmay Chakraborty -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/mlruns/0/f4fd50b718204024b76c852c30f223e9/meta.yaml: -------------------------------------------------------------------------------- 1 | artifact_uri: mlflow-artifacts:/0/f4fd50b718204024b76c852c30f223e9/artifacts 2 | end_time: 1748154517203 3 | entry_point_name: '' 4 | experiment_id: '0' 5 | lifecycle_stage: active 6 | run_id: f4fd50b718204024b76c852c30f223e9 7 | run_name: bedecked-robin-840 8 | run_uuid: f4fd50b718204024b76c852c30f223e9 9 | source_name: '' 10 | source_type: 4 11 | source_version: '' 12 | start_time: 1748154510235 13 | status: 3 14 | tags: [] 15 | user_id: Tanmay Chakraborty 16 | 
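The metric files under `mlruns/0/<run_id>/metrics/` (like the `f1_score`, `precision`, and `recall_score` files in this dump) use MLflow's file-store format: one record per line as `<timestamp_ms> <value> <step>`. A minimal stdlib sketch of reading that format — the parser below is illustrative, not part of the project:

```python
from typing import List, NamedTuple

class MetricRecord(NamedTuple):
    timestamp_ms: int  # epoch milliseconds when the metric was logged
    value: float       # the metric value itself
    step: int          # training step index

def parse_metric_lines(lines: List[str]) -> List[MetricRecord]:
    """Parse MLflow file-store metric lines of the form '<timestamp_ms> <value> <step>'."""
    records = []
    for line in lines:
        parts = line.split()
        if len(parts) != 3:
            continue  # skip blank or malformed lines
        ts, val, step = parts
        records.append(MetricRecord(int(ts), float(val), int(step)))
    return records

# The f1_score entry recorded for run 3a5faa60... above
records = parse_metric_lines(["1748154517283 0.15818815331010452 0"])
print(records[0].value)  # → 0.15818815331010452
```

Reading the files this way is only sensible for quick inspection of a local file store; the supported route is `mlflow.tracking.MlflowClient().get_metric_history(run_id, key)`.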
-------------------------------------------------------------------------------- /1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/mlruns/0/f4fd50b718204024b76c852c30f223e9/metrics/f1_score: -------------------------------------------------------------------------------- 1 | 1748154510289 0.6721430516474539 0 2 | -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/mlruns/0/f4fd50b718204024b76c852c30f223e9/metrics/precision: -------------------------------------------------------------------------------- 1 | 1748154510298 0.6361410592328508 0 2 | -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/mlruns/0/f4fd50b718204024b76c852c30f223e9/metrics/recall_score: -------------------------------------------------------------------------------- 1 | 1748154510309 0.7124645210949151 0 2 | -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/mlruns/0/f4fd50b718204024b76c852c30f223e9/tags/mlflow.log-model.history: -------------------------------------------------------------------------------- 1 | [{"run_id": "f4fd50b718204024b76c852c30f223e9", "artifact_path": "model", "utc_time_created": "2025-05-25 06:28:30.327869", "model_uuid": "b2da57c32d5544afaaa2d9568362930c", "flavors": {"python_function": {"model_path": "model.pkl", "predict_fn": "predict", "loader_module": "mlflow.sklearn", "python_version": "3.10.11", "env": {"conda": "conda.yaml", "virtualenv": "python_env.yaml"}}, "sklearn": {"pickled_model": "model.pkl", "sklearn_version": "1.6.1", "serialization_format": "cloudpickle", "code": null}}}] 
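The `mlflow.log-model.history` tag above is a JSON list with one entry per logged model, each describing its flavors (here `python_function` and `sklearn`). A small sketch of pulling the useful fields out of such a tag — the JSON literal is abridged from the run above to just the keys read, and the snippet is illustrative rather than part of the project:

```python
import json

# Abridged copy of the mlflow.log-model.history tag for run f4fd50b7...
tag_text = (
    '[{"run_id": "f4fd50b718204024b76c852c30f223e9", "artifact_path": "model", '
    '"flavors": {"python_function": {"loader_module": "mlflow.sklearn", "python_version": "3.10.11"}, '
    '"sklearn": {"pickled_model": "model.pkl", "sklearn_version": "1.6.1", '
    '"serialization_format": "cloudpickle"}}}]'
)

history = json.loads(tag_text)   # a list: one dict per logged model
entry = history[0]
sklearn_info = entry["flavors"]["sklearn"]
print(entry["artifact_path"], sklearn_info["sklearn_version"])  # → model 1.6.1
```

Knowing `artifact_path` is what lets a model logged this way be reloaded later with `mlflow.pyfunc.load_model(f"runs:/{entry['run_id']}/model")`.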
-------------------------------------------------------------------------------- /1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/mlruns/0/f4fd50b718204024b76c852c30f223e9/tags/mlflow.runName: -------------------------------------------------------------------------------- 1 | bedecked-robin-840 -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/mlruns/0/f4fd50b718204024b76c852c30f223e9/tags/mlflow.source.git.commit: -------------------------------------------------------------------------------- 1 | 14614ef76149e18c4c54474eea488f8cd3a68bdc -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/mlruns/0/f4fd50b718204024b76c852c30f223e9/tags/mlflow.source.name: -------------------------------------------------------------------------------- 1 | main.py -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/mlruns/0/f4fd50b718204024b76c852c30f223e9/tags/mlflow.source.type: -------------------------------------------------------------------------------- 1 | LOCAL -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/mlruns/0/f4fd50b718204024b76c852c30f223e9/tags/mlflow.user: -------------------------------------------------------------------------------- 1 | Tanmay Chakraborty -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/mlruns/0/meta.yaml: 
-------------------------------------------------------------------------------- 1 | artifact_location: mlflow-artifacts:/0 2 | creation_time: 1748154232882 3 | experiment_id: '0' 4 | last_update_time: 1748154232882 5 | lifecycle_stage: active 6 | name: Default 7 | -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/requirements.txt: -------------------------------------------------------------------------------- 1 | pandas 2 | numpy 3 | ipykernel 4 | mysql-connector-python 5 | sqlalchemy 6 | pyyaml 7 | scikit-learn 8 | imblearn 9 | mlflow 10 | 11 | -e . -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/setup.py: -------------------------------------------------------------------------------- 1 | from setuptools import find_packages, setup 2 | from typing import List 3 | 4 | 5 | def gather_requirements(filename: str = "requirements.txt") -> List[str]: 6 | """ 7 | Reads and parses dependencies from a requirements file. 8 | 9 | Args: 10 | filename (str): Path to the requirements file. 11 | 12 | Returns: 13 | List[str]: A list of required packages, excluding editable installs. 14 | """ 15 | try: 16 | # Open the requirements file and read all lines 17 | with open(filename, "r") as file: 18 | return [ 19 | line.strip() # Remove leading/trailing whitespace 20 | for line in file.readlines() # Read each line 21 | if line.strip() and line.strip() != "-e ." # Ignore empty lines and '-e .' 
22 | ] 23 | except FileNotFoundError: 24 | # Raise a specific error if the file doesn't exist 25 | raise FileNotFoundError(f"'{filename}' not found.") 26 | except Exception as e: 27 | # Catch and raise any other exceptions 28 | raise RuntimeError(f"An error occurred while reading {filename}: {e}") 29 | 30 | 31 | # The setup() function defines metadata and configuration for the package 32 | setup( 33 | name="insurance-claim-prediction", # Package name 34 | version="0.0.1", # Initial version 35 | author="Tanmay Chakraborty", # Author name 36 | author_email="chakrabortytanmay326@gmail.com", # Contact email 37 | packages=find_packages(), # Automatically discover all packages and subpackages 38 | install_requires=gather_requirements(), # List of dependencies from requirements.txt 39 | ) 40 | -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/012_MLops end-to-end project/insurance -claim-prediction-mlops/templates/base.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | Insurance Claim Prediction 5 | 38 | 39 | 40 |

Insurance Claim Prediction Form

41 |
42 | {% for field in fields %} 43 | 44 | 45 | {% if field.type == "select" %} 46 | 51 | {% elif field.type == "float" %} 52 | 53 | {% elif field.type == "date" %} 54 | 55 | {% else %} 56 | 57 | {% endif %} 58 | {% endfor %} 59 | 60 |
61 | 62 | 63 | -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/012_MLops end-to-end project/live-session.rar: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/012_MLops end-to-end project/live-session.rar -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/013_Project Lab/.DS_Store: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/013_Project Lab/.DS_Store -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/013_Project Lab/resume_analyzer/.gitignore: -------------------------------------------------------------------------------- 1 | venv/ -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/013_Project Lab/resume_analyzer/README.md: -------------------------------------------------------------------------------- 1 | # AI powered resume analyzer 2 | 3 | - upload pdf 4 | - extract structured information using NLP (spaCy) 5 | - present everything using flask 6 | 7 | ## Features 8 | 9 | - resume upload 10 | - data extraction: name, email, phone number, skills, education, work experience 11 | - resume scoring: skill relevance based on the Job Description 12 | 13 | ## use cases 14 | 15 | - hiring automation - screen resumes to shortlist candidates 16 | - career counseling 17 | - job matching 18 | 19 | ## Vendor Solution 20 | 21 | - https://www.affinda.com/resume-parser 22 | - https://www.textkernel.com/products-solutions/parser/ 23 | - https://www.hireability.com/ 24 | 25 | ## 
Implementation 26 | 27 | - NLP 28 | > Named Entity Recognition 29 | > Tokenization 30 | > Text classification 31 | 32 | ['Text', 'classification'] 33 | 34 | stop words -> and, the, a, an 35 | 36 | - LLMs 37 | 38 | > GPT 39 | > Llama 40 | 41 | > resume summarization 42 | > generating personalized feedback 43 | 44 | 45 | ## Steps 46 | 47 | 1. set up the environment 48 | 2. download an English model for spaCy (NER): python -m spacy download en_core_web_sm 49 | 3. create project files and folders 50 | 4. PDF extraction using pdfplumber 51 | 5. NLP logic - resume parser 52 | 6. Flask app 53 | 7. HTML templates and static CSS 54 | 8. run the project 55 | 56 | 57 | 58 | https://github.com/hemansnation 59 | scheme 60 | - ftp 61 | - http 62 | - https (secure - SSL) 63 | 64 | dns 65 | maps a domain name to an IP address 66 | 67 | route 68 | /hemansnation -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/013_Project Lab/resume_analyzer/__pycache__/resume_parser.cpython-39.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/013_Project Lab/resume_analyzer/__pycache__/resume_parser.cpython-39.pyc -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/013_Project Lab/resume_analyzer/__pycache__/utils.cpython-39.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/013_Project Lab/resume_analyzer/__pycache__/utils.cpython-39.pyc -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/013_Project Lab/resume_analyzer/app.py: 
-------------------------------------------------------------------------------- 1 | from flask import Flask, request, jsonify, render_template 2 | from werkzeug.utils import secure_filename 3 | from resume_parser import ResumeParser 4 | from utils import extract_text_from_pdf 5 | import os 6 | 7 | 8 | app = Flask(__name__) 9 | app.config['UPLOAD_FOLDER'] = 'uploads' 10 | app.config['MAX_CONTENT_LENGTH'] = 16 * 1024 * 1024 # 16 MB max file size 11 | 12 | os.makedirs(app.config['UPLOAD_FOLDER'], exist_ok=True) 13 | 14 | parser = ResumeParser() 15 | 16 | @app.route('/', methods=['GET']) 17 | def index(): 18 | return render_template('index.html') 19 | 20 | @app.route('/upload', methods=['POST']) 21 | def upload_resume(): 22 | if 'resume' not in request.files: 23 | return render_template("error.html", message="No file uploaded.") 24 | 25 | file = request.files['resume'] 26 | if file.filename == '': 27 | return render_template("error.html", message="No file selected.") 28 | 29 | if file and file.filename.lower().endswith('.pdf'): 30 | filename = secure_filename(file.filename) 31 | file_path = os.path.join(app.config['UPLOAD_FOLDER'], filename) 32 | file.save(file_path) 33 | 34 | text = extract_text_from_pdf(file_path) 35 | if text.startswith("Error:"): 36 | return render_template("error.html", message=text) 37 | 38 | entities = parser.extract_entities(text) 39 | score_data = parser.score_resume(entities) 40 | 41 | os.remove(file_path) 42 | 43 | return render_template('results.html', entities=entities, score_data=score_data) 44 | 45 | return render_template("error.html", message="Invalid file format. 
Please upload a PDF file.") 46 | 47 | 48 | if __name__ == '__main__': 49 | app.run(debug=True) 50 | 51 | -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/013_Project Lab/resume_analyzer/requirements.txt: -------------------------------------------------------------------------------- 1 | blinker==1.9.0 2 | blis==0.7.11 3 | catalogue==2.0.10 4 | certifi==2025.4.26 5 | cffi==1.17.1 6 | charset-normalizer==3.4.2 7 | click==8.1.8 8 | confection==0.1.5 9 | cryptography==45.0.3 10 | cymem==2.0.11 11 | Flask==2.3.3 12 | idna==3.10 13 | importlib_metadata==8.7.0 14 | itsdangerous==2.2.0 15 | Jinja2==3.1.6 16 | langcodes==3.5.0 17 | language_data==1.3.0 18 | marisa-trie==1.2.1 19 | MarkupSafe==3.0.2 20 | murmurhash==1.0.13 21 | numpy==1.24.3 22 | packaging==25.0 23 | pandas==2.0.3 24 | pathlib_abc==0.1.1 25 | pathy==0.11.0 26 | pdfminer.six==20221105 27 | pdfplumber==0.10.2 28 | pillow==11.2.1 29 | preshed==3.0.10 30 | pycparser==2.22 31 | pydantic==1.10.22 32 | pypdfium2==4.30.1 33 | python-dateutil==2.9.0.post0 34 | pytz==2025.2 35 | requests==2.32.3 36 | six==1.17.0 37 | smart-open==6.4.0 38 | spacy==3.5.0 39 | spacy-legacy==3.0.12 40 | spacy-loggers==1.0.5 41 | srsly==2.5.1 42 | thinc==8.1.12 43 | tqdm==4.67.1 44 | typer==0.7.0 45 | typing_extensions==4.13.2 46 | tzdata==2025.2 47 | urllib3==2.4.0 48 | wasabi==1.1.3 49 | Werkzeug==3.1.3 50 | zipp==3.22.0 51 | -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/013_Project Lab/resume_analyzer/resume_parser.py: -------------------------------------------------------------------------------- 1 | import spacy 2 | import re 3 | 4 | class ResumeParser: 5 | def __init__(self): 6 | self.nlp = spacy.load("en_core_web_sm") 7 | self.skills_db = ['Python','Java', 'SQL', 'Machine Learning', 'Data Analysis'] 8 | 9 | def extract_entities(self, text): 10 | doc = self.nlp(text) 11 | 12 | entities = { 13 | 
'name': None, 14 | 'email': None, 15 | 'phone': None, 16 | 'skills': [], 17 | 'education': [] 18 | } 19 | 20 | for ent in doc.ents: 21 | if ent.label_ == 'PERSON' and not entities['name']: 22 | entities['name'] = ent.text 23 | break 24 | 25 | email_pattern = r'[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}' 26 | emails = re.findall(email_pattern, text) 27 | entities['email'] = emails[0] if emails else None 28 | 29 | phone_pattern = r"\(?\d{3}\)?[-.\s]?\d{3}[-.\s]?\d{4}" 30 | phones = re.findall(phone_pattern, text) 31 | entities['phone'] = phones[0] if phones else None 32 | 33 | for skill in self.skills_db: 34 | if skill.lower() in text.lower(): 35 | entities['skills'].append(skill) 36 | 37 | degree_keywords = ['bachelors', 'masters', 'phd', 'degree'] 38 | 39 | for sent in doc.sents: 40 | if any(keyword in sent.text.lower() for keyword in degree_keywords): 41 | entities['education'].append(sent.text.strip()) 42 | 43 | return entities 44 | 45 | def score_resume(self, entities, job_skills=['Python', 'SQL']): 46 | matched_skills = [skill for skill in entities['skills'] if skill in job_skills] 47 | score = len(matched_skills) / len(job_skills) * 100 48 | 49 | feedback = f'Matched {len(matched_skills)} out of {len(job_skills)} required skills.' 
50 | 
51 |         if score < 50:
52 |             feedback += " Consider adding skills like: " + ', '.join(set(job_skills) - set(entities['skills']))
53 | 
54 |         return {"score": round(score, 2), "feedback": feedback}
55 | 
--------------------------------------------------------------------------------
/1_Foundations of AI Engineering/013_Project Lab/resume_analyzer/static/css/styles.css:
--------------------------------------------------------------------------------
1 | body {
2 |     font-family: Arial, sans-serif;
3 |     margin: 20px;
4 |     background-color: #f4f4f4;
5 | }
6 | h1, h2 {
7 |     color: #333;
8 | }
9 | form {
10 |     margin: 20px 0;
11 | }
12 | button {
13 |     background-color: #007bff;
14 |     color: white;
15 |     padding: 10px 20px;
16 |     border: none;
17 |     cursor: pointer;
18 | }
19 | button:hover {
20 |     background-color: #0056b3;
21 | }
22 | p {
23 |     margin: 10px 0;
24 | }
25 | a {
26 |     color: #007bff;
27 |     text-decoration: none;
28 | }
29 | a:hover {
30 |     text-decoration: underline;
31 | }
--------------------------------------------------------------------------------
/1_Foundations of AI Engineering/013_Project Lab/resume_analyzer/static/js/scripts.js:
--------------------------------------------------------------------------------
1 | document.querySelector('form').addEventListener('submit', function(e) {
2 |     const fileInput = document.querySelector('#resume');
3 |     if (!fileInput.value) {
4 |         e.preventDefault();
5 |         alert('Please select a PDF file.');
6 |     }
7 | });
--------------------------------------------------------------------------------
/1_Foundations of AI Engineering/013_Project Lab/resume_analyzer/templates/error.html:
--------------------------------------------------------------------------------
1 | <!DOCTYPE html>
2 | <html>
3 | <head>
4 |     <title>Error</title>
5 |     <link rel="stylesheet" href="{{ url_for('static', filename='css/styles.css') }}">
6 | </head>
7 | <body>
8 |     <h1>Error</h1>
9 |     <p>{{ message }}</p>
10 |     <a href="/">Try again</a>
11 | </body>
12 | </html>
--------------------------------------------------------------------------------
/1_Foundations of AI Engineering/013_Project Lab/resume_analyzer/templates/index.html:
--------------------------------------------------------------------------------
1 | <!DOCTYPE html>
2 | <html>
3 | <head>
4 |     <title>Resume Analyzer</title>
5 |     <link rel="stylesheet" href="{{ url_for('static', filename='css/styles.css') }}">
6 | </head>
7 | <body>
8 |     <h1>AI Powered Resume Analyzer</h1>
9 |     <form action="/upload" method="post" enctype="multipart/form-data">
10 |         <input type="file" id="resume" name="resume" accept=".pdf">
11 |         <button type="submit">Upload Resume</button>
12 |     </form>
13 |     <script src="{{ url_for('static', filename='js/scripts.js') }}"></script>
14 | </body>
15 | </html>
--------------------------------------------------------------------------------
/1_Foundations of AI Engineering/013_Project Lab/resume_analyzer/templates/results.html:
--------------------------------------------------------------------------------
1 | <!DOCTYPE html>
2 | <html>
3 | <head>
4 |     <title>Resume Analysis Results</title>
5 |     <link rel="stylesheet" href="{{ url_for('static', filename='css/styles.css') }}">
6 | </head>
7 | <body>
8 |     <h1>Resume Analysis Results</h1>
9 |     <h2>Extracted Information</h2>
10 |     <p>Name: {{ entities.name or 'Not found' }}</p>
11 |     <p>Email: {{ entities.email or 'Not found' }}</p>
12 |     <p>Phone: {{ entities.phone or 'Not found' }}</p>
13 |     <p>Skills: {{ entities.skills | join(', ') or 'None' }}</p>
14 |     <p>Education: {{ entities.education | join(', ') or 'None' }}</p>
15 |     <h2>Resume Score</h2>
16 |     <p>Score: {{ score_data.score }}%</p>
17 |     <p>Feedback: {{ score_data.feedback }}</p>
18 |     <a href="/">Analyze another resume</a>
19 | </body>
20 | </html>
--------------------------------------------------------------------------------
/1_Foundations of AI Engineering/013_Project Lab/resume_analyzer/utils.py:
--------------------------------------------------------------------------------
1 | import pdfplumber
2 | 
3 | def extract_text_from_pdf(pdf_path):
4 |     """
5 |     Extracts text from a PDF file.
6 | 
7 |     Args:
8 |         pdf_path (str): The path to the PDF file.
9 | 
10 |     Returns:
11 |         str: The extracted text from the PDF.
12 |     """
13 |     try:
14 |         with pdfplumber.open(pdf_path) as pdf:
15 |             text = ''
16 |             for page in pdf.pages:
17 |                 text += page.extract_text() or ''
18 |             return text
19 |     except Exception as e:
20 |         return f"Error: could not extract text from PDF: {str(e)}"
--------------------------------------------------------------------------------
/1_Foundations of AI Engineering/4_Python Hands-On/__pycache__/indore.cpython-312.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/4_Python Hands-On/__pycache__/indore.cpython-312.pyc
--------------------------------------------------------------------------------
/1_Foundations of AI Engineering/4_Python Hands-On/indore.py:
--------------------------------------------------------------------------------
1 | msg = "Indore is the Best City in India"
2 | 
3 | def vijay():
4 |     return "Vijay is the Best"
5 | 
6 | def nagar(n):
7 |     return n + " Nagar is the Best"
--------------------------------------------------------------------------------
/1_Foundations of AI Engineering/4_Python Hands-On/main.py:
--------------------------------------------------------------------------------
1 | # import indore
2 | 
3 | # print(indore.msg)
4 | 
5 | # print(indore.vijay())
6 | 
7 | 
8 | # import statisticsmodels
9 | 
10 | from statisticsmodels import descriptive
11 | 
12 | object = 
descriptive.Descript() 13 | 14 | print(object.mean([1,2,3,4,5])) -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/4_Python Hands-On/statisticsmodels/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/4_Python Hands-On/statisticsmodels/__init__.py -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/4_Python Hands-On/statisticsmodels/__pycache__/__init__.cpython-312.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/4_Python Hands-On/statisticsmodels/__pycache__/__init__.cpython-312.pyc -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/4_Python Hands-On/statisticsmodels/__pycache__/descriptive.cpython-312.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/4_Python Hands-On/statisticsmodels/__pycache__/descriptive.cpython-312.pyc -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/4_Python Hands-On/statisticsmodels/descriptive.py: -------------------------------------------------------------------------------- 1 | import math 2 | 3 | class Descript: 4 | 5 | def mean(self, data): 6 | return sum(data) / len(data) 7 | 8 | def mode(self, data): 9 | return "mode: " + str(data) -------------------------------------------------------------------------------- /1_Foundations of AI 
Engineering/6_NumPy Pandas Hands-On/.DS_Store: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/6_NumPy Pandas Hands-On/.DS_Store -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/6_NumPy Pandas Hands-On/NumPy/.DS_Store: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/6_NumPy Pandas Hands-On/NumPy/.DS_Store -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/6_NumPy Pandas Hands-On/NumPy/NumPy Resources/.DS_Store: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/6_NumPy Pandas Hands-On/NumPy/NumPy Resources/.DS_Store -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/6_NumPy Pandas Hands-On/NumPy/NumPy Resources/NumPy Books/NumPy Cookbook, Second Edition.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/6_NumPy Pandas Hands-On/NumPy/NumPy Resources/NumPy Books/NumPy Cookbook, Second Edition.pdf -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/6_NumPy Pandas Hands-On/NumPy/NumPy Resources/NumPy Cheat Sheet/Numpy_Python_Cheat_Sheet.pdf: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/6_NumPy Pandas Hands-On/NumPy/NumPy Resources/NumPy Cheat Sheet/Numpy_Python_Cheat_Sheet.pdf -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/6_NumPy Pandas Hands-On/NumPy/NumPy Resources/NumPy Cheat Sheet/numpy-cheat-sheet.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/6_NumPy Pandas Hands-On/NumPy/NumPy Resources/NumPy Cheat Sheet/numpy-cheat-sheet.pdf -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/6_NumPy Pandas Hands-On/NumPy/NumPy Resources/NumPy Cheat Sheet/numpy.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/6_NumPy Pandas Hands-On/NumPy/NumPy Resources/NumPy Cheat Sheet/numpy.pdf -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/6_NumPy Pandas Hands-On/NumPy/NumPy Resources/NumPy Practice Questions/1-100 NumPy Exercises for Data Analysis.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/6_NumPy Pandas Hands-On/NumPy/NumPy Resources/NumPy Practice Questions/1-100 NumPy Exercises for Data Analysis.pdf -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/6_NumPy Pandas Hands-On/NumPy/NumPy Resources/NumPy Reference or 
Documentation/.ipynb_checkpoints/numpy-reference 1.18-checkpoint.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/6_NumPy Pandas Hands-On/NumPy/NumPy Resources/NumPy Reference or Documentation/.ipynb_checkpoints/numpy-reference 1.18-checkpoint.pdf -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/6_NumPy Pandas Hands-On/NumPy/NumPy Resources/NumPy Reference or Documentation/numpy-reference 1.18.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/6_NumPy Pandas Hands-On/NumPy/NumPy Resources/NumPy Reference or Documentation/numpy-reference 1.18.pdf -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/6_NumPy Pandas Hands-On/NumPy/NumPy Resources/NumPy Reference or Documentation/numpy-user 1.18.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/6_NumPy Pandas Hands-On/NumPy/NumPy Resources/NumPy Reference or Documentation/numpy-user 1.18.pdf -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/6_NumPy Pandas Hands-On/Pandas/.DS_Store: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/6_NumPy Pandas Hands-On/Pandas/.DS_Store 
-------------------------------------------------------------------------------- /1_Foundations of AI Engineering/6_NumPy Pandas Hands-On/Pandas/Pandas Books/Learning pandas.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/6_NumPy Pandas Hands-On/Pandas/Pandas Books/Learning pandas.pdf -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/6_NumPy Pandas Hands-On/Pandas/Pandas Books/Mastering Pandas.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/6_NumPy Pandas Hands-On/Pandas/Pandas Books/Mastering Pandas.pdf -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/6_NumPy Pandas Hands-On/Pandas/goa.xlsx: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/6_NumPy Pandas Hands-On/Pandas/goa.xlsx -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/6_NumPy Pandas Hands-On/Pandas/indore.csv: -------------------------------------------------------------------------------- 1 | ,day,temperature,windspeed,event 2 | 0,1/1/2024,32,6,Rain 3 | 1,1/1/2024,35,6,Snow 4 | 2,1/3/2024,30,6,Sunny 5 | -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/6_NumPy Pandas Hands-On/Pandas/nyc_weather.csv: -------------------------------------------------------------------------------- 1 | 
EST,Temperature,DewPoint,Humidity,Sea Level PressureIn,VisibilityMiles,WindSpeedMPH,PrecipitationIn,CloudCover,Events,WindDirDegrees 2 | 1/1/2016,38,23,52,30.03,10,8,0,5,,281 3 | 1/2/2016,36,18,46,30.02,10,7,0,3,,275 4 | 1/3/2016,40,21,47,29.86,10,8,0,1,,277 5 | 1/4/2016,25,9,44,30.05,10,9,0,3,,345 6 | 1/5/2016,20,-3,41,30.57,10,5,0,0,,333 7 | 1/6/2016,33,4,35,30.5,10,4,0,0,,259 8 | 1/7/2016,39,11,33,30.28,10,2,0,3,,293 9 | 1/8/2016,39,29,64,30.2,10,4,0,8,,79 10 | 1/9/2016,44,38,77,30.16,9,8,T,8,Rain,76 11 | 1/10/2016,50,46,71,29.59,4,,1.8,7,Rain,109 12 | 1/11/2016,33,8,37,29.92,10,,0,1,,289 13 | 1/12/2016,35,15,53,29.85,10,6,T,4,,235 14 | 1/13/2016,26,4,42,29.94,10,10,0,0,,284 15 | 1/14/2016,30,12,47,29.95,10,5,T,7,,266 16 | 1/15/2016,43,31,62,29.82,9,5,T,2,,101 17 | 1/16/2016,47,37,70,29.52,8,7,0.24,7,Rain,340 18 | 1/17/2016,36,23,66,29.78,8,6,0.05,6,Fog-Snow,345 19 | 1/18/2016,25,6,53,29.83,9,12,T,2,Snow,293 20 | 1/19/2016,22,3,42,30.03,10,11,0,1,,293 21 | 1/20/2016,32,15,49,30.13,10,6,0,2,,302 22 | 1/21/2016,31,11,45,30.15,10,6,0,1,,312 23 | 1/22/2016,26,6,41,30.21,9,,0.01,3,Snow,34 24 | 1/23/2016,26,21,78,29.77,1,16,2.31,8,Fog-Snow,42 25 | 1/24/2016,28,11,53,29.92,8,6,T,3,Snow,327 26 | 1/25/2016,34,18,54,30.25,10,3,0,2,,286 27 | 1/26/2016,43,29,56,30.03,10,7,0,2,,244 28 | 1/27/2016,41,22,45,30.03,10,7,T,3,Rain,311 29 | 1/28/2016,37,20,51,29.9,10,5,0,1,,234 30 | 1/29/2016,36,21,50,29.58,10,8,0,4,,298 31 | 1/30/2016,34,16,46,30.01,10,7,0,0,,257 32 | 1/31/2016,46,28,52,29.9,10,5,0,0,,241 33 | -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/6_NumPy Pandas Hands-On/Pandas/stock_data.csv: -------------------------------------------------------------------------------- 1 | tickers,eps,revenue,price,people 2 | GOOGL,27.82,87,845,larry page 3 | WMT,4.61,484,65,n.a. 
4 | MSFT,-1,85,64,bill gates 5 | RIL ,not available,50,1023,mukesh ambani 6 | TATA,5.6,-1,n.a.,ratan tata 7 | -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/6_NumPy Pandas Hands-On/Pandas/stocks_weather.xlsx: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/6_NumPy Pandas Hands-On/Pandas/stocks_weather.xlsx -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/6_NumPy Pandas Hands-On/Pandas/weather_by_cities.csv: -------------------------------------------------------------------------------- 1 | day,city,temperature,windspeed,event 2 | 1/1/2017,new york,32,6,Rain 3 | 1/2/2017,new york,36,7,Sunny 4 | 1/3/2017,new york,28,12,Snow 5 | 1/4/2017,new york,33,7,Sunny 6 | 1/1/2017,mumbai,90,5,Sunny 7 | 1/2/2017,mumbai,85,12,Fog 8 | 1/3/2017,mumbai,87,15,Fog 9 | 1/4/2017,mumbai,92,5,Rain 10 | 1/1/2017,paris,45,20,Sunny 11 | 1/2/2017,paris,50,13,Cloudy 12 | 1/3/2017,paris,54,8,Cloudy 13 | 1/4/2017,paris,42,10,Cloudy 14 | -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/6_NumPy Pandas Hands-On/Pandas/weather_data.csv: -------------------------------------------------------------------------------- 1 | day,temperature,windspeed,event 2 | 1/1/2017,32,6,Rain 3 | 1/2/2017,35,7,Sunny 4 | 1/3/2017,28,2,Snow 5 | 1/4/2017,24,7,Snow 6 | 1/5/2017,32,4,Rain 7 | 1/6/2017,31,2,Sunny 8 | -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/6_NumPy Pandas Hands-On/Pandas/weather_data.xlsx: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/6_NumPy Pandas Hands-On/Pandas/weather_data.xlsx -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/6_NumPy Pandas Hands-On/Pandas/weather_data2.csv: -------------------------------------------------------------------------------- 1 | day,temperature,windspeed,event 2 | 1/1/2017,32,6,Rain 3 | 1/4/2017,,9,Sunny 4 | 1/5/2017,28,,Snow 5 | 1/6/2017,,7, 6 | 1/7/2017,32,,Rain 7 | 1/8/2017,,,Sunny 8 | 1/9/2017,,, 9 | 1/10/2017,34,8,Cloudy 10 | 1/11/2017,40,12,Sunny 11 | -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/6_NumPy Pandas Hands-On/Pandas/weather_datamissing.csv: -------------------------------------------------------------------------------- 1 | day,temperature,windspeed,event 2 | 1/1/2017,32,6,Rain 3 | 1/2/2017,-99999,7,Sunny 4 | 1/3/2017,28,-99999,Snow 5 | 1/4/2017,-99999,7,0 6 | 1/5/2017,32,-99999,Rain 7 | 1/6/2017,31,2,Sunny 8 | 1/6/2017,34,5,0 9 | -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/7_AI Ecosystem/.DS_Store: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/7_AI Ecosystem/.DS_Store -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/7_AI Ecosystem/AI Ecosystem.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/7_AI Ecosystem/AI Ecosystem.pdf 
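The datasets above are clearly staged for practicing missing-data handling in pandas: weather_data2.csv has blank cells, while weather_datamissing.csv uses -99999 as a placeholder for missing readings. A minimal sketch of how such sentinels can be converted to NaN at read time with `na_values` (the data is inlined here so the snippet is self-contained; the cleanup choices are illustrative, not taken from the course notebooks):

```python
import io

import pandas as pd

# First rows of weather_datamissing.csv as shown above; -99999 marks a
# missing reading.
raw = """day,temperature,windspeed,event
1/1/2017,32,6,Rain
1/2/2017,-99999,7,Sunny
1/3/2017,28,-99999,Snow
"""

# na_values converts the -99999 sentinel to NaN while parsing.
df = pd.read_csv(io.StringIO(raw), na_values=["-99999"])

print(df["temperature"].isna().sum())  # 1 missing temperature
print(df["windspeed"].isna().sum())    # 1 missing windspeed

# Typical cleanup: fill numeric gaps with the column mean.
df["temperature"] = df["temperature"].fillna(df["temperature"].mean())
print(df["temperature"].tolist())      # [32.0, 30.0, 28.0]
```

Blank cells, as in weather_data2.csv, need no `na_values` at all: `read_csv` already parses empty fields as NaN.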
-------------------------------------------------------------------------------- /1_Foundations of AI Engineering/7_AI Ecosystem/AI Ecosystem.pptx: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/7_AI Ecosystem/AI Ecosystem.pptx -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/8_Mathematics for AI/.DS_Store: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/8_Mathematics for AI/.DS_Store -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/8_Mathematics for AI/.ipynb_checkpoints/8_Statistics-checkpoint.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "code", 5 | "execution_count": null, 6 | "id": "790bc5a5-7947-49eb-8676-6b52599e4f03", 7 | "metadata": {}, 8 | "outputs": [], 9 | "source": [ 10 | "Machine Learning\n", 11 | "\n", 12 | "- statistics\n", 13 | "- linear algebra\n", 14 | "- probability\n", 15 | "- calculus" 16 | ] 17 | }, 18 | { 19 | "cell_type": "markdown", 20 | "id": "37024fa1", 21 | "metadata": {}, 22 | "source": [ 23 | "# Statistics\n" 24 | ] 25 | }, 26 | { 27 | "cell_type": "code", 28 | "execution_count": null, 29 | "id": "6b83cea5", 30 | "metadata": {}, 31 | "outputs": [], 32 | "source": [ 33 | "data science = software development + statistics (research and experimentation)\n", 34 | "\n", 35 | "AI = software development + models (LLM and ML model) + math (research)" 36 | ] 37 | }, 38 | { 39 | "cell_type": "code", 40 | "execution_count": null, 41 | "id": "09d4bd42-1077-48a7-b34b-cd57edad06ca", 42 | 
"metadata": {}, 43 | "outputs": [], 44 | "source": [ 45 | " statistics\n", 46 | " Descriptive Inferential\n", 47 | " - univariate - Hypothesis testing\n", 48 | " - bivariate - model fitting\n", 49 | " - multivariate\n", 50 | "- central tendency\n", 51 | "- frequency\n", 52 | "- dispersion" 53 | ] 54 | }, 55 | { 56 | "cell_type": "code", 57 | "execution_count": null, 58 | "id": "2fa744f0-33e2-4254-af28-823181360c3f", 59 | "metadata": {}, 60 | "outputs": [], 61 | "source": [ 62 | " Data\n", 63 | "Qualitative Quantitative\n", 64 | "\"I am from India\" - discrete\n", 65 | " - number of students\n", 66 | " - categories\n", 67 | " - ratings\n", 68 | " - continuous\n", 69 | " - height (6.5)" 70 | ] 71 | }, 72 | { 73 | "cell_type": "code", 74 | "execution_count": null, 75 | "id": "2f5ed35d-565e-44ae-9d7b-0450e3c5bc60", 76 | "metadata": {}, 77 | "outputs": [], 78 | "source": [ 79 | "1. measure of frequency -> what is the count\n", 80 | "2. measure of central tendency - mean, median, mode\n", 81 | " - geometric mean -> ratio or rates\n", 82 | " - harmonic mean\n", 83 | "3. 
dispersion\n", 84 | " - range (max - min) -> sensitive to outliers\n", 85 | " - inter quartile range - (75th percentile - 25th percentile)\n", 86 | " - variance - how scattered your data is\n", 87 | " - standard deviation" 88 | ] 89 | }, 90 | { 91 | "cell_type": "code", 92 | "execution_count": null, 93 | "id": "d63a783a-7ec3-4521-8ff7-67aaebce0130", 94 | "metadata": {}, 95 | "outputs": [], 96 | "source": [ 97 | "# mean\n", 98 | "\n", 99 | "(sum of all values)/total number of values\n", 100 | "\n", 101 | "- it best represents data\n", 102 | "- it considers all the data points\n", 103 | "- discrete data - obtained by counting\n", 104 | " - number of students\n", 105 | " - number of chairs in a classroom\n", 106 | " - number of coins giving heads\n", 107 | "- continuous data \n", 108 | " - obtained by measuring\n", 109 | " - height of students\n", 110 | " - weights\n", 111 | " - time it takes for a session\n", 112 | " - distance from class to home\n", 113 | "- extremely sensitive to presence of outliers" 114 | ] 115 | }, 116 | { 117 | "cell_type": "code", 118 | "execution_count": null, 119 | "id": "bf78bc95-933e-4963-9188-bd685fc06745", 120 | "metadata": {}, 121 | "outputs": [], 122 | "source": [ 123 | "# median\n", 124 | "\n", 125 | "- arrange the data points in ascending order or descending order\n", 126 | "- less sensitive to outliers\n", 127 | "- 50% of the data is on either side of it\n", 128 | "- dealing with ordinal data (data with order)\n", 129 | "- categorical data - positive or negative reviews\n", 130 | "\n", 131 | "- decision tree - for splitting the data into 2 equal parts\n" 132 | ] 133 | }, 134 | { 135 | "cell_type": "code", 136 | "execution_count": null, 137 | "id": "d53154fb-3ffc-40e6-be7b-2ec3362a540b", 138 | "metadata": {}, 139 | "outputs": [], 140 | "source": [ 141 | "# mode\n", 142 | "\n", 143 | "- the value that occurs most often\n", 144 | "- sentiment from review data - what is the frequency of positive reviews\n", 145 | "- vote in an election\n",
146 | "- mode is not good for continuous data\n", 147 | "- recommendation system - " 148 | ] 149 | }, 150 | { 151 | "cell_type": "code", 152 | "execution_count": null, 153 | "id": "033bba57-8892-46b4-9b55-9043af921a76", 154 | "metadata": {}, 155 | "outputs": [], 156 | "source": [] 157 | }, 158 | { 159 | "cell_type": "code", 160 | "execution_count": null, 161 | "id": "8040c2c3-7653-4c11-a3d9-9ed6bc26f932", 162 | "metadata": {}, 163 | "outputs": [], 164 | "source": [] 165 | }, 166 | { 167 | "cell_type": "code", 168 | "execution_count": null, 169 | "id": "af307ef2-76cd-4b77-888b-3ea69c4b58c9", 170 | "metadata": {}, 171 | "outputs": [], 172 | "source": [] 173 | }, 174 | { 175 | "cell_type": "code", 176 | "execution_count": null, 177 | "id": "4bc57fc9-7847-48ab-afcb-177efcfaf876", 178 | "metadata": {}, 179 | "outputs": [], 180 | "source": [] 181 | }, 182 | { 183 | "cell_type": "code", 184 | "execution_count": null, 185 | "id": "6b3367f6-fd9a-4dd8-b18d-17c8d8510213", 186 | "metadata": {}, 187 | "outputs": [], 188 | "source": [] 189 | }, 190 | { 191 | "cell_type": "code", 192 | "execution_count": null, 193 | "id": "597c013b-f4ff-4714-8e0a-2e524b769705", 194 | "metadata": {}, 195 | "outputs": [], 196 | "source": [] 197 | }, 198 | { 199 | "cell_type": "code", 200 | "execution_count": null, 201 | "id": "6a9a8d69-9c3d-49b9-82a8-ed9a14a3a463", 202 | "metadata": {}, 203 | "outputs": [], 204 | "source": [] 205 | }, 206 | { 207 | "cell_type": "code", 208 | "execution_count": null, 209 | "id": "fd1220e9-1cbc-49e0-bcbc-fdf4ad8e84f7", 210 | "metadata": {}, 211 | "outputs": [], 212 | "source": [] 213 | } 214 | ], 215 | "metadata": { 216 | "kernelspec": { 217 | "display_name": "Python 3 (ipykernel)", 218 | "language": "python", 219 | "name": "python3" 220 | }, 221 | "language_info": { 222 | "codemirror_mode": { 223 | "name": "ipython", 224 | "version": 3 225 | }, 226 | "file_extension": ".py", 227 | "mimetype": "text/x-python", 228 | "name": "python", 229 | "nbconvert_exporter": 
"python", 230 | "pygments_lexer": "ipython3", 231 | "version": "3.12.9" 232 | } 233 | }, 234 | "nbformat": 4, 235 | "nbformat_minor": 5 236 | } 237 | -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/8_Mathematics for AI/datasets/.DS_Store: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/8_Mathematics for AI/datasets/.DS_Store -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/8_Mathematics for AI/datasets/Birthweight_reduced_kg_R.csv: -------------------------------------------------------------------------------- 1 | ID,Length,Birthweight,Headcirc,Gestation,smoker,mage,mnocig,mheight,mppwt,fage,fedyrs,fnocig,fheight,lowbwt,mage35 2 | 1360,56,4.55,34,44,0,20,0,162,57,23,10,35,179,0,0 3 | 1016,53,4.32,36,40,0,19,0,171,62,19,12,0,183,0,0 4 | 462,58,4.1,39,41,0,35,0,172,58,31,16,25,185,0,1 5 | 1187,53,4.07,38,44,0,20,0,174,68,26,14,25,189,0,0 6 | 553,54,3.94,37,42,0,24,0,175,66,30,12,0,184,0,0 7 | 1636,51,3.93,38,38,0,29,0,165,61,31,16,0,180,0,0 8 | 820,52,3.77,34,40,0,24,0,157,50,31,16,0,173,0,0 9 | 1191,53,3.65,33,42,0,21,0,165,61,21,10,25,185,0,0 10 | 1081,54,3.63,38,38,0,18,0,172,50,20,12,7,172,0,0 11 | 822,50,3.42,35,38,0,20,0,157,48,22,14,0,179,0,0 12 | 1683,53,3.35,33,41,0,27,0,164,62,37,14,0,170,0,0 13 | 1088,51,3.27,36,40,0,24,0,168,53,29,16,0,181,0,0 14 | 1107,52,3.23,36,38,0,31,0,164,57,35,16,0,183,0,0 15 | 755,53,3.2,33,41,0,21,0,155,55,25,14,25,183,0,0 16 | 1058,53,3.15,34,40,0,29,0,167,60,30,16,25,182,0,0 17 | 321,48,3.11,33,37,0,28,0,158,54,39,10,0,171,0,0 18 | 697,48,3.03,35,39,0,27,0,162,62,27,14,0,178,0,0 19 | 808,48,2.92,33,34,0,26,0,167,64,25,12,25,175,0,0 20 | 1600,53,2.9,34,39,0,19,0,165,57,23,14,2,193,0,0 21 | 
1313,43,2.65,32,33,0,24,0,149,45,26,16,0,169,1,0 22 | 792,53,3.64,38,40,1,20,2,170,59,24,12,12,185,0,0 23 | 1388,51,3.14,33,41,1,22,7,160,53,24,16,12,176,0,0 24 | 575,50,2.78,30,37,1,19,7,165,60,20,14,0,183,0,0 25 | 569,50,2.51,35,39,1,22,7,159,52,23,14,25,200,1,0 26 | 1363,48,2.37,30,37,1,20,7,163,47,20,10,35,185,1,0 27 | 300,46,2.05,32,35,1,41,7,166,57,37,14,25,173,1,1 28 | 431,48,1.92,30,33,1,20,7,161,50,20,10,35,180,1,0 29 | 1764,58,4.57,39,41,1,32,12,173,70,38,14,25,180,0,0 30 | 532,53,3.59,34,40,1,31,12,163,49,41,12,50,191,0,0 31 | 752,49,3.32,36,40,1,27,12,152,48,37,12,25,170,0,0 32 | 1023,52,3,35,38,1,30,12,165,64,38,14,50,180,0,0 33 | 57,51,3.32,38,39,1,23,17,157,48,32,12,25,169,0,0 34 | 1522,50,2.74,33,39,1,21,17,156,53,24,12,7,179,0,0 35 | 223,50,3.87,33,45,1,28,25,163,54,30,16,0,183,0,0 36 | 272,52,3.86,36,39,1,30,25,170,78,40,16,50,178,0,0 37 | 27,53,3.55,37,41,1,37,25,161,66,46,16,0,175,0,1 38 | 365,52,3.53,37,40,1,26,25,170,62,30,10,25,181,0,0 39 | 619,52,3.41,33,39,1,23,25,181,69,23,16,2,181,0,0 40 | 1369,49,3.18,34,38,1,31,25,162,57,32,16,50,194,0,0 41 | 1262,53,3.19,34,41,1,27,35,163,51,31,16,25,185,0,0 42 | 516,47,2.66,33,35,1,20,35,170,57,23,12,50,186,1,0 43 | 1272,53,2.75,32,40,1,37,50,168,61,31,16,0,173,0,1 44 | -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/8_Mathematics for AI/datasets/Crime_R.csv: -------------------------------------------------------------------------------- 1 | CrimeRate,Youth,Southern,Education,ExpenditureYear0,LabourForce,Males,MoreMales,StateSize,YouthUnemployment,MatureUnemployment,HighYouthUnemploy,Wage,BelowWage,CrimeRate10,Youth10,Education10,ExpenditureYear10,LabourForce10,Males10,MoreMales10,StateSize10,YouthUnemploy10,MatureUnemploy10,HighYouthUnemploy10,Wage10,BelowWage10 2 | 45.5,135,0,12.4,69,540,965,0,6,80,22,1,564,139,26.5,135,12.5,71,564,974,0,6,82,20,1,632,142 3 | 
52.3,140,0,10.9,55,535,1045,1,6,135,40,1,453,200,35.9,135,10.9,54,540,1039,1,7,138,39,1,521,210 4 | 56.6,157,1,11.2,47,512,962,0,22,97,34,0,288,276,37.1,153,11,44,529,959,0,24,98,33,0,359,256 5 | 60.3,139,1,11.9,46,480,968,0,19,135,53,0,457,249,42.7,139,11.8,41,497,983,0,20,131,50,0,510,235 6 | 64.2,126,0,12.2,106,599,989,0,40,78,25,1,593,171,46.7,125,12.2,97,602,989,0,42,79,24,1,660,162 7 | 67.6,128,0,13.5,67,624,972,0,28,77,25,1,507,206,47.9,128,13.8,60,621,983,0,28,81,24,1,571,199 8 | 70.5,130,0,14.1,63,641,984,0,14,70,21,1,486,196,50.6,153,14.1,57,641,993,0,14,71,23,1,556,176 9 | 73.2,143,0,12.9,66,537,977,0,10,114,35,1,487,166,55.9,143,13,63,549,973,0,11,119,36,1,561,168 10 | 75,141,0,12.9,56,523,968,0,4,107,37,0,489,170,61.8,153,12.9,54,538,968,0,5,110,36,1,550,126 11 | 78.1,133,0,11.4,51,599,1024,1,7,99,27,1,425,225,65.4,134,11.2,47,600,1024,1,7,97,28,1,499,215 12 | 79.8,142,1,12.9,45,533,969,0,18,94,33,0,318,250,71.4,142,13.1,44,552,969,0,19,93,36,0,378,247 13 | 82.3,123,0,12.5,97,526,948,0,113,124,50,0,572,158,75.4,134,12.4,87,529,949,0,117,125,49,0,639,146 14 | 83.1,135,0,13.6,62,595,986,0,22,77,27,0,529,190,77.3,137,13.7,61,599,993,0,23,80,28,0,591,189 15 | 84.9,121,0,13.2,118,547,964,0,25,84,29,0,689,126,78.6,132,13.3,115,538,968,0,25,82,30,0,742,127 16 | 85.6,166,1,11.4,58,521,973,0,46,72,26,0,396,237,80.6,153,11.2,54,543,983,0,47,76,25,1,568,246 17 | 88,140,0,12.9,71,632,1029,1,7,100,24,1,526,174,82.2,130,12.9,68,620,1024,1,8,104,25,1,570,182 18 | 92.3,126,0,12.7,74,602,984,0,34,102,33,1,557,195,87.5,134,12.9,67,599,982,0,33,107,34,1,621,199 19 | 94.3,130,0,13.3,128,536,934,0,51,78,34,0,627,135,92.9,127,13.3,128,530,949,0,52,79,33,0,692,140 20 | 95.3,125,0,12,90,586,964,0,97,105,43,0,617,163,94.1,134,11.9,81,571,971,0,99,106,41,0,679,162 21 | 96.8,151,1,10,58,510,950,0,33,108,41,0,394,261,96.2,161,10.1,56,515,1001,1,32,110,40,0,465,254 22 | 97.4,152,1,10.8,57,530,986,0,30,92,43,0,405,264,97.8,152,11,53,541,989,0,30,92,41,0,470,243 23 | 
98.7,162,1,12.1,75,522,996,0,40,73,27,0,496,224,99.9,162,12,70,533,992,0,41,80,28,0,562,229 24 | 99.9,149,1,10.7,61,515,953,0,36,86,35,0,395,251,101.4,150,10.7,54,520,952,0,35,84,32,0,476,249 25 | 103,177,1,11,58,638,974,0,24,76,28,0,382,254,103.5,164,10.9,56,638,978,0,25,79,28,0,456,257 26 | 104.3,134,0,12.5,75,595,972,0,47,83,31,0,580,172,104.5,133,12.7,71,599,982,0,50,87,32,0,649,182 27 | 105.9,130,0,13.4,90,623,1049,1,3,113,40,0,588,160,106.4,153,13.4,91,622,1050,1,3,119,41,0,649,159 28 | 106.6,157,1,11.1,65,553,955,0,39,81,28,0,421,239,107.8,156,11.2,62,562,956,0,39,85,29,0,499,243 29 | 107.2,148,0,13.7,72,601,998,0,9,84,20,1,590,144,110.1,134,13.9,66,602,999,0,9,87,15,0,656,151 30 | 108.3,126,0,13.8,97,542,990,0,18,102,35,0,589,166,110.5,126,13.8,97,549,993,0,19,103,34,1,659,160 31 | 109.4,135,1,11.4,123,537,978,0,31,89,34,0,631,165,113.5,134,11.3,115,529,978,0,32,93,35,0,703,175 32 | 112.1,142,1,10.9,81,497,956,0,33,116,47,0,427,247,116.3,147,10.7,77,501,962,0,33,117,44,0,500,256 33 | 114.3,127,1,12.8,82,519,982,0,4,97,38,0,620,168,119.7,125,12.9,79,510,945,0,4,99,39,0,696,170 34 | 115.1,131,0,13.7,78,574,1038,1,7,142,42,1,540,176,124.5,134,13.6,73,581,1029,1,7,143,41,1,615,177 35 | 117.2,136,0,12.9,95,574,1012,1,29,111,37,1,622,162,127.8,140,13,96,581,1011,1,29,115,36,1,691,169 36 | 119.7,119,0,11.9,166,521,938,0,168,92,36,0,637,154,129.8,120,11.9,157,524,935,0,180,93,27,1,698,169 37 | 121.6,147,1,13.9,63,560,972,0,23,76,24,1,462,233,130.7,139,14,64,571,970,0,24,78,24,1,511,220 38 | 123.4,145,1,11.7,82,560,981,0,96,88,31,0,488,228,132.5,154,11.8,74,563,980,0,99,89,29,1,550,230 39 | 127.2,132,0,10.4,87,564,953,0,43,83,32,0,513,227,134.6,135,10.2,83,560,948,0,44,83,32,0,589,234 40 | 132.4,152,0,12,82,571,1018,1,10,103,28,1,537,215,137.5,151,12.1,76,567,1079,1,11,105,27,1,617,204 41 | 135.5,125,0,12.5,113,567,985,0,78,130,58,0,626,166,140.5,140,12.5,105,571,993,0,77,131,59,0,684,174 42 | 
137.8,141,0,14.2,109,591,985,0,18,91,20,1,578,174,145.7,142,14.2,101,590,987,0,19,94,19,1,649,180 43 | 140.8,150,0,12,109,531,964,0,9,87,38,0,559,153,150.6,153,12,98,539,982,0,10,88,36,0,635,151 44 | 145.4,131,1,12.2,115,542,969,0,50,79,35,0,472,206,157.3,131,12.1,109,548,976,0,52,82,34,0,539,219 45 | 149.3,143,0,12.3,103,583,1012,1,13,96,36,0,557,194,162.7,142,12.2,95,612,1003,1,13,97,36,0,625,196 46 | 154.3,124,0,12.3,121,580,966,0,101,77,35,0,657,170,169.6,134,12.2,116,580,987,0,104,79,36,0,719,172 47 | 157.7,136,0,15.1,149,577,994,0,157,102,39,0,673,167,177.2,140,15.2,141,578,995,0,160,110,40,0,739,169 48 | 161.8,131,0,13.2,160,631,1071,1,3,102,41,0,674,152,178.2,132,13.2,143,632,1058,1,4,100,40,0,748,150 49 | -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/8_Mathematics for AI/datasets/SP_500_1987.csv: -------------------------------------------------------------------------------- 1 | Date,Open,High,Low,Close,AdjClose,Volume 2 | 31-Dec-1986,243.37,244.03,241.28,242.17,242.17,"13,92,00,000" 3 | 2-Jan-1987,242.17,246.45,242.17,246.45,246.45,"9,18,80,000" 4 | 5-Jan-1987,246.45,252.57,246.45,252.19,252.19,"18,19,00,000" 5 | 6-Jan-1987,252.2,253.99,252.14,252.78,252.78,"18,93,00,000" 6 | 7-Jan-1987,252.78,255.72,252.65,255.33,255.33,"19,09,00,000" 7 | 8-Jan-1987,255.36,257.28,254.97,257.28,257.28,"19,45,00,000" 8 | 9-Jan-1987,257.26,259.2,256.11,258.73,258.73,"19,30,00,000" 9 | 12-Jan-1987,258.72,261.36,257.92,260.3,260.3,"18,42,00,000" 10 | 13-Jan-1987,260.3,260.45,259.21,259.95,259.95,"17,09,00,000" 11 | 14-Jan-1987,259.95,262.72,259.62,262.64,262.64,"21,42,00,000" 12 | 15-Jan-1987,262.65,266.68,262.64,265.49,265.49,"25,31,00,000" 13 | 16-Jan-1987,265.46,267.24,264.31,266.28,266.28,"21,84,00,000" 14 | 19-Jan-1987,266.26,269.34,264,269.34,269.34,"16,28,00,000" 15 | 20-Jan-1987,269.34,271.03,267.65,269.04,269.04,"22,48,00,000" 16 | 21-Jan-1987,269.04,270.87,267.35,267.84,267.84,"18,42,00,000" 17 | 
22-Jan-1987,267.84,274.05,267.32,273.91,273.91,"18,87,00,000" 18 | 23-Jan-1987,273.91,280.96,268.41,270.1,270.1,"30,24,00,000" 19 | 26-Jan-1987,270.1,270.4,267.73,269.61,269.61,"13,89,00,000" 20 | 27-Jan-1987,269.61,274.31,269.61,273.75,273.75,"19,23,00,000" 21 | 28-Jan-1987,273.75,275.71,273.03,275.4,275.4,"19,58,00,000" 22 | 29-Jan-1987,275.4,276.85,272.54,274.24,274.24,"20,53,00,000" 23 | 30-Jan-1987,274.24,274.24,271.38,274.08,274.08,"16,34,00,000" 24 | 2-Feb-1987,274.08,277.35,273.16,276.45,276.45,"17,74,00,000" 25 | 3-Feb-1987,276.45,277.83,275.84,275.99,275.99,"19,81,00,000" 26 | 4-Feb-1987,275.99,279.65,275.35,279.64,279.64,"22,24,00,000" 27 | 5-Feb-1987,279.64,282.26,278.66,281.16,281.16,"25,67,00,000" 28 | 6-Feb-1987,281.16,281.79,279.87,280.04,280.04,"18,41,00,000" 29 | 09-Feb-1987,280.04,280.04,277.24,278.16,278.16,"14,33,00,000" 30 | 10-Feb-1987,278.16,278.16,273.49,275.07,275.07,"16,83,00,000" 31 | 11-Feb-1987,275.07,277.71,274.71,277.54,277.54,"17,24,00,000" 32 | 12-Feb-1987,277.54,278.04,273.89,275.62,275.62,"20,04,00,000" 33 | 13-Feb-1987,275.62,280.91,275.01,279.7,279.7,"18,44,00,000" 34 | 17-Feb-1987,279.7,285.49,279.7,285.49,285.49,"18,78,00,000" 35 | 18-Feb-1987,285.49,287.55,282.97,285.42,285.42,"21,82,00,000" 36 | 19-Feb-1987,285.42,286.24,283.84,285.57,285.57,"18,15,00,000" 37 | 20-Feb-1987,285.57,285.98,284.31,285.48,285.48,"17,58,00,000" 38 | 23-Feb-1987,285.48,285.5,279.37,282.38,282.38,"17,05,00,000" 39 | 24-Feb-1987,282.38,283.33,281.45,282.88,282.88,"15,13,00,000" 40 | 25-Feb-1987,282.88,285.35,282.14,284,284,"18,41,00,000" 41 | 26-Feb-1987,284,284.4,280.73,282.96,282.96,"16,58,00,000" 42 | 27-Feb-1987,282.96,284.55,282.77,284.2,284.2,"14,28,00,000" 43 | 2-Mar-1987,284.17,284.83,282.3,283,283,"15,67,00,000" 44 | 3-Mar-1987,283,284.19,282.92,284.12,284.12,"14,92,00,000" 45 | 4-Mar-1987,284.12,288.62,284.12,288.62,288.62,"19,84,00,000" 46 | 5-Mar-1987,288.62,291.24,288.6,290.52,290.52,"20,54,00,000" 47 | 
6-Mar-1987,290.52,290.67,288.77,290.66,290.66,"18,16,00,000" 48 | 9-Mar-1987,290.66,290.66,287.12,288.3,288.3,"16,54,00,000" 49 | 10-Mar-1987,288.3,290.87,287.89,290.86,290.86,"17,48,00,000" 50 | 11-Mar-1987,290.87,292.51,289.33,290.31,290.31,"18,69,00,000" 51 | 12-Mar-1987,290.33,291.91,289.66,291.22,291.22,"17,45,00,000" 52 | 13-Mar-1987,291.22,291.79,289.88,289.89,289.89,"15,09,00,000" 53 | 16-Mar-1987,289.88,289.89,286.64,288.23,288.23,"13,49,00,000" 54 | 17-Mar-1987,288.09,292.47,287.96,292.47,292.47,"17,73,00,000" 55 | 18-Mar-1987,292.49,294.58,290.87,292.78,292.78,"19,81,00,000" 56 | 19-Mar-1987,292.73,294.46,292.26,294.08,294.08,"16,61,00,000" 57 | 20-Mar-1987,294.08,298.17,294.08,298.17,298.17,"23,40,00,000" 58 | 23-Mar-1987,298.16,301.17,297.5,301.16,301.16,"18,91,00,000" 59 | 24-Mar-1987,301.17,301.92,300.14,301.64,301.64,"18,99,00,000" 60 | 25-Mar-1987,301.52,301.85,299.36,300.38,300.38,"17,13,00,000" 61 | 26-Mar-1987,300.39,302.72,300.38,300.93,300.93,"19,60,00,000" 62 | 27-Mar-1987,300.96,301.41,296.06,296.13,296.13,"18,44,00,000" 63 | 30-Mar-1987,296.1,296.13,286.69,289.2,289.2,"20,84,00,000" 64 | 31-Mar-1987,289.21,291.87,289.07,291.7,291.7,"17,18,00,000" 65 | 1-Apr-1987,291.59,292.38,288.34,292.38,292.38,"18,26,00,000" 66 | 2-Apr-1987,292.41,294.47,292.02,293.63,293.63,"18,30,00,000" 67 | 3-Apr-1987,293.64,301.3,292.3,300.41,300.41,"21,34,00,000" 68 | 6-Apr-1987,300.46,302.21,300.41,301.95,301.95,"17,37,00,000" 69 | 7-Apr-1987,301.94,303.65,296.67,296.69,296.69,"18,64,00,000" 70 | 8-Apr-1987,296.72,299.2,295.18,297.26,297.26,"17,98,00,000" 71 | 9-Apr-1987,297.25,297.71,291.5,292.86,292.86,"18,03,00,000" 72 | 10-April-1987,292.82,293.74,290.94,292.49,292.49,"16,95,00,000" 73 | 13-April-1987,292.48,293.36,285.62,285.62,285.62,"18,10,00,000" 74 | 14-April-1987,285.61,285.62,275.67,279.16,279.16,"26,65,00,000" 75 | 15-April-1987,279.17,285.14,279.16,284.44,284.44,"19,82,00,000" 76 | 16-April-1987,284.45,289.57,284.44,286.91,286.91,"18,96,00,000" 77 | 
20-April-1987,286.91,288.36,284.55,286.09,286.09,"13,91,00,000" 78 | 21-April-1987,285.88,293.07,282.89,293.07,293.07,"19,13,00,000" 79 | 22-April-1987,293.05,293.46,286.98,287.19,287.19,"18,59,00,000" 80 | 23-April-1987,287.19,289.12,284.28,286.82,286.82,"17,39,00,000" 81 | 24-April-1987,286.81,286.82,281.18,281.52,281.52,"17,80,00,000" 82 | 27-April-1987,281.52,284.45,276.22,281.83,281.83,"22,27,00,000" 83 | 28-April-1987,281.83,285.95,281.83,282.51,282.51,"18,01,00,000" 84 | 29-April-1987,282.58,286.42,282.58,284.57,284.57,"17,36,00,000" 85 | 30-April-1987,284.58,290.08,284.57,288.36,288.36,"18,31,00,000" 86 | 1-May-1987,286.99,289.71,286.52,288.03,288.03,"16,01,00,000" 87 | 4-May-1987,288.02,289.99,286.39,289.36,289.36,"14,06,00,000" 88 | 5-May-1987,289.36,295.4,289.34,295.34,295.34,"19,23,00,000" 89 | 6-May-1987,295.35,296.19,293.6,295.47,295.47,"19,66,00,000" 90 | 7-May-1987,295.45,296.8,294.07,294.71,294.71,"21,52,00,000" 91 | 8-May-1987,294.73,296.18,291.73,293.37,293.37,"16,19,00,000" 92 | 11-May-1987,293.37,298.69,291.55,291.57,291.57,"20,37,00,000" 93 | 12-May-1987,291.57,293.3,290.18,293.3,293.3,"15,53,00,000" 94 | 13-May-1987,293.31,294.54,290.74,293.98,293.98,"17,10,00,000" 95 | 14-May-1987,293.98,295.1,292.95,294.24,294.24,"15,20,00,000" 96 | 15-May-1987,294.23,294.24,287.11,287.43,287.43,"18,08,00,000" 97 | 18-May-1987,287.43,287.43,282.57,286.65,286.65,"17,42,00,000" 98 | 19-May-1987,286.66,287.39,278.83,279.62,279.62,"17,54,00,000" 99 | 20-May-1987,279.62,280.89,277.01,278.21,278.21,"20,68,00,000" 100 | 21-May-1987,278.23,282.31,278.21,280.17,280.17,"16,48,00,000" 101 | 22-May-1987,280.17,283.33,280.17,282.16,282.16,"13,58,00,000" 102 | 26-May-1987,282.16,289.11,282.16,289.11,289.11,"15,25,00,000" 103 | 27-May-1987,289.07,290.78,288.19,288.73,288.73,"17,14,00,000" 104 | 28-May-1987,288.73,291.5,286.33,290.76,290.76,"15,38,00,000" 105 | 29-May-1987,290.77,292.87,289.7,290.1,290.1,"15,35,00,000" 106 | 
1-Jun-1987,290.12,291.96,289.23,289.83,289.83,"14,93,00,000" 107 | 2-Jun-1987,289.82,290.94,286.93,288.46,288.46,"15,34,00,000" 108 | 3-Jun-1987,288.56,293.47,288.56,293.47,293.47,"16,42,00,000" 109 | 4-Jun-1987,293.46,295.09,292.76,295.09,295.09,"14,03,00,000" 110 | 5-Jun-1987,295.11,295.11,292.8,293.45,293.45,"12,91,00,000" 111 | 8-Jun-1987,293.46,297.03,291.55,296.72,296.72,"13,64,00,000" 112 | 9-Jun-1987,296.72,297.59,295.9,297.28,297.28,"16,42,00,000" 113 | 10-Jun-1987,297.28,300.81,295.66,297.47,297.47,"19,74,00,000" 114 | 11-Jun-1987,297.5,298.94,297.47,298.73,298.73,"13,89,00,000" 115 | 12-Jun-1987,298.77,302.26,298.73,301.62,301.62,"17,51,00,000" 116 | 15-Jun-1987,301.62,304.11,301.62,303.14,303.14,"15,69,00,000" 117 | 16-Jun-1987,303.12,304.86,302.6,304.76,304.76,"15,78,00,000" 118 | 17-Jun-1987,304.77,305.74,304.03,304.81,304.81,"18,47,00,000" 119 | 18-Jun-1987,304.78,306.13,303.38,305.69,305.69,"16,86,00,000" 120 | 19-Jun-1987,305.71,306.97,305.55,306.97,306.97,"22,05,00,000" 121 | 22-Jun-1987,306.98,310.2,306.97,309.65,309.65,"17,82,00,000" 122 | 23-Jun-1987,309.66,310.27,307.48,308.43,308.43,"19,42,00,000" 123 | 24-Jun-1987,308.44,308.91,306.32,306.86,306.86,"15,38,00,000" 124 | 25-Jun-1987,306.87,309.44,306.86,308.96,308.96,"17,35,00,000" 125 | 26-Jun-1987,308.94,308.96,306.36,307.16,307.16,"15,05,00,000" 126 | 29-Jun-1987,307.15,308.15,306.75,307.9,307.9,"14,25,00,000" 127 | 30-Jun-1987,307.89,308,303.01,304,304,"16,55,00,000" 128 | 1-Jul-1987,303.99,304,302.53,302.94,302.94,"15,70,00,000" 129 | 2-Jul-1987,302.96,306.34,302.94,305.63,305.63,"15,49,00,000" 130 | 6-Jul-1987,305.64,306.75,304.23,304.92,304.92,"15,50,00,000" 131 | 7-Jul-1987,304.91,308.63,304.73,307.4,307.4,"20,07,00,000" 132 | 8-Jul-1987,307.41,308.48,306.01,308.29,308.29,"20,75,00,000" 133 | 9-Jul-1987,308.3,309.56,307.42,307.52,307.52,"19,54,00,000" 134 | 10-Jul-1987,307.55,308.4,306.96,308.37,308.37,"17,21,00,000" 135 | 13-Jul-1987,308.41,308.41,305.49,307.63,307.63,"15,25,00,000" 
136 | 14-Jul-1987,307.67,310.69,307.46,310.68,310.68,"18,59,00,000" 137 | 15-Jul-1987,310.67,312.08,309.07,310.42,310.42,"20,23,00,000" 138 | 16-Jul-1987,311,312.83,310.42,312.7,312.7,"21,09,00,000" 139 | 17-Jul-1987,312.71,314.59,312.38,314.59,314.59,"21,00,00,000" 140 | 20-Jul-1987,314.56,314.59,311.24,311.39,311.39,"16,81,00,000" 141 | 21-Jul-1987,311.36,312.41,307.51,308.55,308.55,"18,66,00,000" 142 | 22-Jul-1987,308.56,309.12,307.22,308.47,308.47,"17,47,00,000" 143 | 23-Jul-1987,308.5,309.63,306.1,307.81,307.81,"16,37,00,000" 144 | 24-Jul-1987,307.82,309.28,307.78,309.27,309.27,"15,84,00,000" 145 | 27-Jul-1987,309.3,310.7,308.61,310.65,310.65,"15,20,00,000" 146 | 28-Jul-1987,310.65,312.33,310.28,312.33,312.33,"17,26,00,000" 147 | 29-Jul-1987,312.34,315.65,311.73,315.65,315.65,"19,62,00,000" 148 | 30-Jul-1987,315.69,318.53,315.65,318.05,318.05,"20,80,00,000" 149 | 31-Jul-1987,318.05,318.85,317.56,318.66,318.66,"18,19,00,000" 150 | 3-Aug-1987,318.62,320.26,316.52,317.57,317.57,"20,78,00,000" 151 | 4-Aug-1987,317.59,318.25,314.51,316.23,316.23,"16,65,00,000" 152 | 5-Aug-1987,316.25,319.74,316.23,318.45,318.45,"19,27,00,000" 153 | 6-Aug-1987,318.49,322.09,317.5,322.09,322.09,"19,20,00,000" 154 | 7-Aug-1987,322.1,324.15,321.82,323,323,"21,27,00,000" 155 | 10-Aug-1987,322.98,328,322.95,328,328,"18,72,00,000" 156 | 11-Aug-1987,328.02,333.4,328,333.33,333.33,"27,81,00,000" 157 | 12-Aug-1987,333.32,334.57,331.06,332.39,332.39,"23,58,00,000" 158 | 13-Aug-1987,332.38,335.52,332.38,334.65,334.65,"21,71,00,000" 159 | 14-Aug-1987,334.63,336.08,332.63,333.99,333.99,"19,61,00,000" 160 | 17-Aug-1987,333.98,335.43,332.88,334.11,334.11,"16,61,00,000" 161 | 18-Aug-1987,334.1,334.11,326.43,329.25,329.25,"19,84,00,000" 162 | 19-Aug-1987,329.26,329.89,326.54,329.83,329.83,"18,09,00,000" 163 | 20-Aug-1987,331.49,335.19,329.83,334.84,334.84,"19,66,00,000" 164 | 21-Aug-1987,334.85,336.37,334.3,335.9,335.9,"18,96,00,000" 165 | 24-Aug-1987,335.89,335.9,331.92,333.33,333.33,"14,94,00,000" 
166 | 25-Aug-1987,333.37,337.89,333.33,336.77,336.77,"21,35,00,000" 167 | 26-Aug-1987,336.77,337.39,334.46,334.57,334.57,"19,62,00,000" 168 | 27-Aug-1987,334.56,334.57,331.1,331.38,331.38,"16,36,00,000" 169 | 28-Aug-1987,331.37,331.38,327.03,327.04,327.04,"15,63,00,000" 170 | 31-Aug-1987,327.03,330.09,326.99,329.8,329.8,"16,58,00,000" 171 | 1-Sep-1987,329.81,332.18,322.83,323.4,323.4,"19,35,00,000" 172 | 2-Sep-1987,323.4,324.53,318.76,321.68,321.68,"19,99,00,000" 173 | 3-Sep-1987,321.47,324.29,317.39,320.21,320.21,"16,52,00,000" 174 | 4-Sep-1987,320.21,322.03,316.53,316.7,316.7,"12,91,00,000" 175 | 8-Sep-1987,316.68,316.7,308.56,313.56,313.56,"24,29,00,000" 176 | 9-Sep-1987,313.6,315.41,312.29,313.92,313.92,"16,49,00,000" 177 | 10-Sep-1987,313.92,317.59,313.92,317.13,317.13,"17,98,00,000" 178 | 11-Sep-1987,317.14,322.45,317.13,321.98,321.98,"17,80,00,000" 179 | 14-Sep-1987,322.02,323.81,320.4,323.08,323.08,"15,44,00,000" 180 | 15-Sep-1987,323.07,323.08,317.63,317.74,317.74,"13,62,00,000" 181 | 16-Sep-1987,317.75,319.5,314.61,314.86,314.86,"19,57,00,000" 182 | 17-Sep-1987,314.94,316.08,313.45,314.93,314.93,"15,07,00,000" 183 | 18-Sep-1987,314.98,316.99,314.86,314.86,314.86,"18,81,00,000" 184 | 21-Sep-1987,314.92,317.66,310.12,310.54,310.54,"17,01,00,000" 185 | 22-Sep-1987,310.54,319.51,308.69,319.5,319.5,"20,95,00,000" 186 | 23-Sep-1987,319.49,321.83,319.12,321.19,321.19,"22,03,00,000" 187 | 24-Sep-1987,321.09,322.01,319.12,319.72,319.72,"16,22,00,000" 188 | 25-Sep-1987,319.72,320.55,318.1,320.16,320.16,"13,80,00,000" 189 | 28-Sep-1987,320.16,325.33,320.16,323.2,323.2,"18,81,00,000" 190 | 29-Sep-1987,323.2,324.63,320.27,321.69,321.69,"17,35,00,000" 191 | 30-Sep-1987,321.69,322.53,320.16,321.83,321.83,"18,31,00,000" 192 | 1-Oct-1987,321.83,327.34,321.83,327.33,327.33,"19,32,00,000" 193 | 2-Oct-1987,327.33,328.94,327.22,328.07,328.07,"18,91,00,000" 194 | 5-Oct-1987,328.07,328.57,326.09,328.08,328.08,"15,97,00,000" 195 | 
6-Oct-1987,328.08,328.08,319.17,319.22,319.22,"17,56,00,000" 196 | 7-Oct-1987,319.22,319.39,315.78,318.54,318.54,"18,63,00,000" 197 | 8-Oct-1987,318.54,319.34,312.02,314.16,314.16,"19,87,00,000" 198 | 9-Oct-1987,314.16,315.04,310.97,311.07,311.07,"15,83,00,000" 199 | 12-Oct-1987,311.07,311.07,306.76,309.39,309.39,"14,19,00,000" 200 | 13-Oct-1987,309.39,314.53,309.39,314.52,314.52,"17,29,00,000" 201 | 14-Oct-1987,314.52,314.52,304.78,305.23,305.23,"20,74,00,000" 202 | 15-Oct-1987,305.21,305.23,298.07,298.08,298.08,"26,32,00,000" 203 | 16-Oct-1987,298.08,298.92,281.52,282.7,282.7,"33,85,00,000" 204 | 19-Oct-1987,282.7,282.7,224.83,224.84,224.84,"60,43,00,000" 205 | 20-Oct-1987,225.06,245.62,216.46,236.83,236.83,"60,81,00,000" 206 | 21-Oct-1987,236.83,259.27,236.83,258.38,258.38,"44,96,00,000" 207 | 22-Oct-1987,258.24,258.38,242.99,248.25,248.25,"39,22,00,000" 208 | 23-Oct-1987,248.29,250.7,242.76,248.22,248.22,"24,56,00,000" 209 | 26-Oct-1987,248.2,248.22,227.26,227.67,227.67,"30,88,00,000" 210 | 27-Oct-1987,227.67,237.81,227.67,233.19,233.19,"26,02,00,000" 211 | 28-Oct-1987,233.19,238.58,226.26,233.28,233.28,"27,94,00,000" 212 | 29-Oct-1987,233.31,246.69,233.28,244.77,244.77,"25,81,00,000" 213 | 30-Oct-1987,244.77,254.04,244.77,251.79,251.79,"30,34,00,000" 214 | 2-Nov-1987,251.73,255.75,249.15,255.75,255.75,"17,60,00,000" 215 | 3-Nov-1987,255.75,255.75,242.78,250.82,250.82,"22,78,00,000" 216 | 4-Nov-1987,250.81,251,246.34,248.96,248.96,"20,25,00,000" 217 | 5-Nov-1987,248.93,256.09,247.72,254.48,254.48,"22,60,00,000" 218 | 6-Nov-1987,254.49,257.21,249.68,250.41,250.41,"22,82,90,000" 219 | 9-Nov-1987,250.41,250.41,243.01,243.17,243.17,"16,06,90,000" 220 | 10-Nov-1987,243.14,243.17,237.64,239,239,"18,43,10,000" 221 | 11-Nov-1987,239.01,243.86,239,241.9,241.9,"14,78,50,000" 222 | 12-Nov-1987,241.93,249.9,241.9,248.52,248.52,"20,62,80,000" 223 | 13-Nov-1987,248.54,249.42,245.64,245.64,245.64,"17,49,20,000" 224 | 
16-Nov-1987,245.69,249.54,244.98,246.76,246.76,"16,43,40,000" 225 | 17-Nov-1987,246.73,246.76,240.81,243.04,243.04,"14,82,40,000" 226 | 18-Nov-1987,243.09,245.55,240.67,245.55,245.55,"15,82,70,000" 227 | 19-Nov-1987,245.54,245.55,239.7,240.05,240.05,"15,71,40,000" 228 | 20-Nov-1987,240.04,242.01,235.89,242,242,"18,91,70,000" 229 | 23-Nov-1987,242,242.99,240.5,242.99,242.99,"14,31,60,000" 230 | 24-Nov-1987,242.98,247.9,242.98,246.39,246.39,"19,95,20,000" 231 | 25-Nov-1987,246.42,246.54,244.08,244.1,244.1,"13,97,80,000" 232 | 27-Nov-1987,244.11,244.12,240.34,240.34,240.34,"8,63,60,000" 233 | 30-Nov-1987,240.27,240.34,225.75,230.3,230.3,"26,89,10,000" 234 | 1-Dec-1987,230.32,234.02,230.3,232,232,"14,98,70,000" 235 | 2-Dec-1987,232.01,234.56,230.31,233.45,233.45,"14,88,90,000" 236 | 3-Dec-1987,233.46,233.9,225.21,225.21,225.21,"20,41,60,000" 237 | 4-Dec-1987,225.2,225.77,221.24,223.92,223.92,"18,48,00,000" 238 | 7-Dec-1987,223.98,228.77,223.92,228.76,228.76,"14,66,60,000" 239 | 8-Dec-1987,228.77,234.92,228.69,234.91,234.91,"22,73,10,000" 240 | 9-Dec-1987,234.91,240.09,233.83,238.89,238.89,"23,14,30,000" 241 | 10-Dec-1987,238.89,240.05,233.4,233.57,233.57,"18,89,60,000" 242 | 11-Dec-1987,233.6,235.48,233.35,235.32,235.32,"15,16,80,000" 243 | 14-Dec-1987,235.3,242.34,235.04,242.19,242.19,"18,76,80,000" 244 | 15-Dec-1987,242.19,245.59,241.31,242.81,242.81,"21,49,70,000" 245 | 16-Dec-1987,242.81,248.11,242.8,248.08,248.08,"19,38,20,000" 246 | 17-Dec-1987,248.08,248.6,242.96,242.98,242.98,"19,17,80,000" 247 | 18-Dec-1987,243.01,249.18,243.01,249.16,249.16,"27,62,20,000" 248 | 21-Dec-1987,249.14,250.25,248.3,249.54,249.54,"16,17,90,000" 249 | 22-Dec-1987,249.56,249.97,247.01,249.95,249.95,"19,26,50,000" 250 | 23-Dec-1987,249.96,253.35,249.95,253.16,253.16,"20,31,10,000" 251 | 24-Dec-1987,253.13,253.16,251.68,252.03,252.03,"10,88,00,000" 252 | 28-Dec-1987,252.01,252.02,244.19,245.57,245.57,"13,12,20,000" 253 | 29-Dec-1987,245.58,245.88,244.28,244.59,244.59,"11,15,80,000" 254 
| 30-Dec-1987,244.63,248.06,244.59,247.86,247.86,"14,92,30,000" -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/9_Mathematics for AI/.DS_Store: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/9_Mathematics for AI/.DS_Store -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/9_Mathematics for AI/Tensors.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/9_Mathematics for AI/Tensors.png -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/9_Mathematics for AI/v1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/9_Mathematics for AI/v1.png -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/9_Mathematics for AI/v11.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/9_Mathematics for AI/v11.png -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/9_Mathematics for AI/v2.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/9_Mathematics for AI/v2.png -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/9_Mathematics for AI/v3.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/9_Mathematics for AI/v3.png -------------------------------------------------------------------------------- /1_Foundations of AI Engineering/9_Mathematics for AI/v4.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hemansnation/AI-Engineer-Headquarters/f7fc47d4dd42448660af6e1c5cce85f3915a1ae7/1_Foundations of AI Engineering/9_Mathematics for AI/v4.gif -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | [![Open Source Love](https://firstcontributions.github.io/open-source-badges/badges/open-source-v1/open-source.svg)](https://github.com/hemansnation/AI-Engineer-Headquarters) 2 | 3 |

# AI Engineer Headquarters

4 |

5 | A drill of scientific methods, processes, algorithms, and systems to build stories & models. An in-depth learning resource for humans. 6 | 7 | This is a drill for people who aim to be in the top 1% of Data and AI experts. 8 | 9 | You can do the drill by watching the video sessions or reading the text content. 10 | 11 | I recommend the video sessions, with the text content as go-to notes. 12 | 13 | You can be in one of the following categories: 14 | - either you are working in a leadership position 15 | - or you are working as a professional 16 | - or you are a student 17 | 18 | No matter what position you currently work in, you need to put in the same amount of effort to be in the top 1%. 19 | 20 | Spoiler alert: there are NO shortcuts in the tech field. 21 | 22 | This is for all humans who want to improve in the field and are courageous enough to take action. 23 | 24 | You will find all the topics explained here, along with whatever is needed to understand them completely. 25 | 26 | The drill is all action-oriented. 27 | 28 | To be the authority/best in the AI field, I created a routine that includes: 29 | - 4 hours of deep work sessions every day 30 | - Deep work session rules: 31 |     - no phone/notifications 32 |     - no talking to anyone 33 |     - coffee/chai allowed 34 | - 2 hours of shallow work sessions every day 35 | - Shallow work session rules: 36 |     - phone allowed 37 |     - talking allowed 38 |     - include sharing your work online 39 | 40 | You can customize the learning sessions according to your time availability. 41 | 42 | 

43 | 44 | ## the path 45 | 46 | 0. Prep 47 | 1. Foundations of AI Engineering 48 | 2. Mastering Large Language Models (LLMs) 49 | 3. Retrieval-Augmented Generation (RAG) 50 | 4. Fine-Tuning LLMs 51 | 5. Reinforcement Learning and Ethical AI 52 | 6. Agentic Workflows 53 | 7. Career Acceleration 54 | 8. Bonus 55 | --------------------------------------------------------------------------------
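As a companion to the mean/median/mode and dispersion notes in the statistics notebook above, here is a minimal sketch using only Python's standard library. The sample data is made up purely for illustration:

```python
import statistics

# Made-up sample data for illustration only
data = [4, 8, 8, 15, 16, 23, 42]

mean = sum(data) / len(data)                 # considers all points; outlier-sensitive
median = statistics.median(data)             # middle value; robust to outliers
mode = statistics.mode(data)                 # most frequent value
value_range = max(data) - min(data)          # max - min; outlier-sensitive
q1, _, q3 = statistics.quantiles(data, n=4)  # quartiles (default 'exclusive' method)
iqr = q3 - q1                                # interquartile range
std_dev = statistics.stdev(data)             # sample standard deviation

print(f"mean={mean:.2f} median={median} mode={mode} range={value_range} IQR={iqr}")
```

Note that `statistics.quantiles` defaults to the 'exclusive' method, so the quartiles (and hence the IQR) can differ slightly from NumPy's default interpolation on the same data.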