├── Activation_Functions └── SOFTMAX.md ├── DAYS_PROGRESS ├── DAY12.md ├── DAY13.md ├── DAY14.md ├── DAY15.md ├── DAY16.md ├── DAY17.md ├── DAY18.md ├── DAY19.md ├── DAY20.md ├── DAY21.md ├── DAY22.md ├── DAY23.md ├── DAY24.md ├── DAY25.md ├── DAY26.md ├── DAY27.md ├── DAY28.md ├── DAY29.md ├── DAY30.md ├── DAY31.md ├── DAY32.md ├── DAY33.md ├── DAY34.md ├── DAY35.md ├── DAY36.md ├── DAY37.md ├── DAY38.md ├── DAY39.md ├── DAY40.md ├── DAY41.md ├── DAY42.md ├── DAY43.md ├── DAY44.md ├── DAY45.md ├── DAY46.md ├── DAY47.md ├── DAY48.md ├── DAY49.md ├── DAY50.md ├── DAY51.md ├── DAY52.md ├── DAY53.md ├── DAY54.md ├── DAY55.md ├── DAY56.md ├── DAY57.md ├── DAY58.md ├── DAY59.md ├── README.md └── imgs │ ├── README.md │ ├── wrapper_shot.png │ ├── wrapper_shot2.png │ └── wrapper_shot3.png ├── Differential_Privacy ├── Diff_Attack.md ├── On_the_limits_of_DP.md └── Sens_Epsilon.md ├── FashionMNIST ├── README.md ├── Untitled.md ├── output_23_1.png ├── output_24_1.png ├── output_39_1.png ├── output_40_1.png └── output_41_1.png ├── Federated_Learning └── Remote_Execution_Overview.md ├── Fundamentals_of_Deep_Learning └── README.rst ├── Green_shade_classifier ├── README.md ├── img1.png ├── img10.png ├── img11.png ├── img12.png ├── img2.png ├── img3.png ├── img4.png ├── img5.png ├── img6.png ├── img7.png ├── img8.png ├── img9.png ├── nn_in_py.py └── shot_of_images.png ├── LICENSE ├── Matrixtools ├── README.md ├── matmul_intro.md └── matrixtools.py ├── ModelEncryptor └── encryptor.py ├── Power_Of_Math_In_Image_Analysis ├── README.md ├── image_correlation.py ├── samp1.jpeg ├── samp2.jpeg ├── samp3.jpeg └── samp4.jpeg ├── Python_Basics ├── Getting_Started.md └── Python_Tricks.md ├── Python_Halls_of_Fame └── Python_Uniqueness_Hall_Of_Fame.rst ├── README.rst ├── RGBYCM_Color_Classifier ├── README.md ├── architecture_nn_3.png ├── archtecture_nn_2.png ├── img1.png ├── img10.png ├── img11.png ├── img12.png ├── img13.png ├── img14.png ├── img15.png ├── img16.png ├── img2.png ├── img3.png 
├── img4.png ├── img5.png ├── img6.png ├── img7.png ├── img8.png ├── img9.png ├── nn_in_py3.py ├── nn_py3_shot.png ├── shot_of_images3.png └── snap_3.png ├── Red_Green_Blue_Classifier ├── RGB_Classifier.md ├── archtecture_nn_2.png ├── img1.png ├── img10.png ├── img11.png ├── img12.png ├── img13.png ├── img14.png ├── img15.png ├── img16.png ├── img2.png ├── img3.png ├── img4.png ├── img5.png ├── img6.png ├── img7.png ├── img8.png ├── img9.png ├── nn_in_py2.py ├── shot_of_images2.png └── shot_of_outcomes2.png ├── Tinkering_With_Tensors ├── Explaining_Tensors.md └── Spontaneous_Matrix.rst ├── Udacity_DL_With_Pytorch_Exercises ├── Part 1 - Tensors in PyTorch (Exercises).ipynb ├── Part 1 - Tensors in PyTorch (Exercises).md ├── Part 2 - Neural Networks in PyTorch (Exercises).ipynb ├── Part 2 - Neural Networks in PyTorch (Exercises).md ├── Part 3 - Training Neural Networks (Exercises).ipynb ├── Part 3 - Training Neural Networks (Exercises).md └── README.rst ├── cmap_gray_behavior ├── README.md ├── cmap_gray_demo.md ├── output_11_2.png ├── output_5_2.png ├── output_7_2.png └── output_9_2.png ├── docs ├── README.md ├── channels.md ├── flattening.md └── transpose.md ├── imgs ├── README.md ├── RGB_coms.png ├── flatten1.png ├── flatten_samp_image.png ├── image_transpose.png ├── samp_image_transpose.png ├── samp_image_transpose2.png ├── samp_image_transpose2b.png ├── samp_image_transpose3.png └── transpose.png ├── quicknets ├── README.md ├── wrapper1.png ├── wrapper2.png └── wrapper3.png └── xray ├── README.md ├── Starter_ Novice AI xrays v3 974b53f0-c f8d223.ipynb └── Starter_ Novice AI xrays v3 974b53f0-c f8d223.md /Activation_Functions/SOFTMAX.md: -------------------------------------------------------------------------------- 1 | THE SOFTMAX FUNCTION 2 | ==================== 3 | 4 | Primarily, the softmax function is a mathematical function which has been borrowed for use as activation functions in artificial neurons. 
5 | So the principles of the softmax function remain mathematical. 6 | 7 | 8 | WHAT DOES SOFTMAX DO? 9 | --------------------- 10 | At a basic level, the softmax function takes a sequence, or array, of numbers and transforms them so that they all add up to 1. 11 | 12 | **For example:** 13 | > for `X = [a, b, c]` 14 | > the sum of X, `a + b + c`, can be any number. 15 | 16 | > With the softmax function applied to X, we get `Xsoft = [m, n, o]`, such that the sum `m + n + o` equals 1. 17 | > So in simple terms, when we want to change a sequence of values so that they all fall within the range 0 to 1, and their sum equals 1, we can use the softmax function. 18 | 19 | HOW DOES IT WORK 20 | ---------------- 21 | Given `X = [a, b, c]`, 22 | 23 | 1. Apply the exponential function to every number in the sequence: 24 | `[exp(a), exp(b), exp(c)]` 25 | 26 | 2. Get the sum, SUM_OF_EXPS, of the "exponentials": 27 | `exp(a) + exp(b) + exp(c)` 28 | 29 | 3. Divide every "exponential" by the SUM_OF_EXPS obtained in step 2: 30 | `[exp(a)/SUM_OF_EXPS, exp(b)/SUM_OF_EXPS, exp(c)/SUM_OF_EXPS]` 31 | 32 | **Bingo!!!** 33 | 34 | 35 | HOW IS THIS USEFUL IN NEURAL NETWORKS? 36 | -------------------------------------- 37 | When a neural network has sifted through data and computed the weighted sums, we need an output that "says" something meaningful. The raw numbers, on their own, might not mean anything. 38 | In NNs, when we want to classify things, we must draw inferences from the probabilities of the given entity belonging to each of our target classes. This is especially useful when the entity can belong to more than two classes. 39 | 40 | **For example:** 41 | > Suppose we have 3 classes of colours, `[red, green, blue]`, and our neural network has 3 outputs, one for each colour. 42 | 43 | > For a particular colour which is to be determined, the outputs are as follows: `[0.758, -0.875, 0.654]` for red, green, blue respectively.
As which of these colours will we classify our unknown colour? 43 | > We cannot tell yet, as some values are negative whilst others are positive. Thus, we need to find level ground on which to compare them. We decide that we want to see where each of them will fall within the range 0 to 1. 44 | > Then, we can have a fair comparison. Softmax becomes useful because it will change all these numbers into positive numbers falling within the range of 0 to 1, and then we can take the highest number as the highest probability. 45 | 46 | > When we apply softmax to our output, `[0.758, -0.875, 0.654]`, we get `[0.477, 0.093, 0.430]`, for RED, GREEN, BLUE respectively. (As a quick check: `exp(0.758) = 2.134`, `exp(-0.875) = 0.417`, `exp(0.654) = 1.923`; their sum is `4.474`, and `2.134/4.474 = 0.477`, `0.417/4.474 = 0.093`, `1.923/4.474 = 0.430`.) Clearly, we can conclude that `RED` has the highest probability. 47 | > With this scenario, the NN can classify the colour as `RED`. 48 | 49 | 50 | 51 | ROUND UP 52 | -------- 53 | The softmax function makes it easy for an NN to get a class for an entity by transforming final outputs into a range of values between `0` and `1`. 54 | These transformed values become probabilities for the respective classes, and the class with the highest probability is assigned. It is especially useful when an entity can belong to several classes. 55 | -------------------------------------------------------------------------------- /DAYS_PROGRESS/DAY12.md: -------------------------------------------------------------------------------- 1 | 2 | DAY 12: 3 | ======= 4 | 5 | ACTIVITIES 6 | --------------------------------------------------------------------------------------------------------------- 7 | ### HIGHLIGHT: BUILT A SINGLE-LAYER NEURAL NETWORK WHICH CLASSIFIES SHADES OF GREEN FROM NON-SHADES OF GREEN, WITH CORE PYTHON 8 | 9 | 1. As almost always, I leveraged only python to build 10 | a simple shades of green classifier. It worked well, but for 11 | its confusion with yellow. It was an amazing experience to 12 | implement the workings of an NN from scratch. Snapshots below.
13 | Link to demo here: https://github.com/ayivima/AI-SURFS/edit/master/Green_shade_classifier/README.rst 14 | 15 | 2. Reverted to the beginning of the course and released a completed 16 | notebook to GitHub. https://github.com/ayivima/AI-SURFS/blob/master/Udacity_DL_With_Pytorch_Exercises/Part%201%20-%20Tensors%20in%20PyTorch%20(Exercises).ipynb 17 | 18 | 3. Resumed Pytorch tinkering :) 19 | 20 | 21 | REPO FOR 60DAYSOFUDACITY: 22 | ------------------------- 23 | https://github.com/ayivima/AI-SURFS/ 24 | 25 | 26 | ENCOURAGEMENTS 27 | -------------- 28 | Cheers to @nabhanpv, @Nana Aba T, @geekykant, @Shaam, @EPR, @Anshu Trivedi, @George Christopoulos, @Vivank Sharma, @Heather A, @Joyce Obi, @Aditya Kumar, @vivek, @Florence Njeri, @Jess, @J. Luis Samper, @gfred, @Erika Yoon. 29 | 30 | -------------------------------------------------------------------------------- /DAYS_PROGRESS/DAY13.md: -------------------------------------------------------------------------------- 1 | 2 | DAY 13: 3 | ======= 4 | 5 | ACTIVITIES 6 | --------------------------------------------------------------------------------------------------------------- 7 | ### HIGHLIGHT: STARTED BUILDING MATRIX TOOLS: THE SECOND MAJOR PROJECT FOR THE STREAK, ADDING UP TO VECTORKIT 8 | 9 | 1. Work has started on matrix tools: the sequel to VectorKit. Implemented 10 | Matrix multiplication, transposition, scalar multiplication...more to come! 11 | Link here: https://github.com/ayivima/AI-SURFS/blob/master/Matrixtools/matrixtools.py 12 | 13 | 2. Continued my second "epoch" through the course: 14 | Had fun with the MNIST dataset and the `torch.nn` module.
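The matrix operations listed in item 1 — multiplication and transposition in particular — reduce to a few lines of plain Python. Here is a minimal sketch of the idea (illustrative function names, not the actual `matrixtools.py` API):

```python
def transpose(matrix):
    """Turn rows into columns: the (i, j) entry becomes the (j, i) entry."""
    return [list(row) for row in zip(*matrix)]

def matmul(a, b):
    """Row-by-column multiplication of a (p x q) matrix by a (q x r) matrix."""
    if len(a[0]) != len(b):
        raise ValueError("inner dimensions must agree")
    b_cols = transpose(b)
    return [
        [sum(x * y for x, y in zip(row, col)) for col in b_cols]
        for row in a
    ]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(matmul(A, B))   # [[19, 22], [43, 50]]
print(transpose(A))   # [[1, 3], [2, 4]]
```

`zip(*matrix)` is the usual core-python idiom for pairing rows up into columns, which makes the row-by-column products easy to express.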
15 | 16 | 17 | REPO FOR 60DAYSOFUDACITY: 18 | ------------------------- 19 | https://github.com/ayivima/AI-SURFS/ 20 | 21 | PROGRESS: 22 | --------- 23 | https://github.com/ayivima/AI-SURFS/new/master/DAYS_PROGRESS 24 | 25 | 26 | ENCOURAGEMENTS 27 | -------------- 28 | Cheers to @Frida, @Samuela Anastasi, @nabhanpv, @Nana Aba T, @geekykant, @Shaam, @EPR, @Anshu Trivedi, @George Christopoulos, @Vivank Sharma, @Heather A, @Joyce Obi, @Aditya Kumar, @vivek, @Florence Njeri, @Jess, @J. Luis Samper, @gfred, @Erika Yoon. 29 | 30 | 31 | -------------------------------------------------------------------------------- /DAYS_PROGRESS/DAY14.md: -------------------------------------------------------------------------------- 1 | 2 | DAY 14: 3 | ======= 4 | 5 | ACTIVITIES 6 | --------------------------------------------------------------------------------------------------------------- 7 | ### HIGHLIGHT: CONTINUED MATRIX TOOLS: THE SECOND MAJOR PROJECT FOR THE STREAK, ADDING UP TO VECTORKIT 8 | 9 | 1. Work continues on matrix tools: the sequel to VectorKit. Implemented 10 | Matrix multiplication, Conjugate transposition, scalar multiplication, Hadamard Product, 11 | Matrix Addition, Matrix Subtraction, Matrix-Vector Multiplication, 12 | Generation of nxn identity matrix! 13 | Link here: https://github.com/ayivima/AI-SURFS/blob/master/Matrixtools/matrixtools.py 14 | 15 | 2. Continued my second "epoch" through the course, locally: 16 | Continually having fun with `torch`. 17 | 18 | 3. I met Laplace again...Haha...This time, not for 'Laplacian noise', 19 | but for Laplace (or cofactor) expansion, for deriving the determinant 20 | to be used for calculating the inverse of a matrix.
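The Laplace (cofactor) expansion mentioned in item 3 can be sketched recursively in plain Python. This is a minimal illustration of the idea, not the `matrixtools.py` implementation:

```python
def det(matrix):
    """Determinant by Laplace (cofactor) expansion along the first row."""
    n = len(matrix)
    if n == 1:
        return matrix[0][0]
    total = 0
    for j in range(n):
        # The minor drops row 0 and column j; the cofactor sign alternates with j.
        minor = [row[:j] + row[j + 1:] for row in matrix[1:]]
        sign = -1 if j % 2 else 1
        total += sign * matrix[0][j] * det(minor)
    return total

print(det([[4, 7], [2, 6]]))                    # 10
print(det([[1, 2, 3], [4, 5, 6], [7, 8, 10]]))  # -3
```

Cofactor expansion grows factorially with matrix size, so it is mainly of pedagogical value; practical libraries compute determinants from an LU decomposition instead.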
21 | 22 | 23 | REPO FOR 60DAYSOFUDACITY: 24 | ------------------------- 25 | https://github.com/ayivima/AI-SURFS/ 26 | 27 | PROGRESS: 28 | --------- 29 | https://github.com/ayivima/AI-SURFS/new/master/DAYS_PROGRESS 30 | 31 | 32 | ENCOURAGEMENTS 33 | -------------- 34 | Cheers to @Nirupama Singh, @Frida, @Lisa Crossman, @Nirupama Singh, @Stark, @Samuela Anastasi, @nabhanpv, @Nana Aba T, @geekykant, @Shaam, @EPR, @Anshu Trivedi, @George Christopoulos, @Vivank Sharma, @Heather A, @Joyce Obi, @Aditya kumar, @vivek, @Florence Njeri, @Jess, @J. Luis Samper, @gfred, @Erika Yoon. 35 | -------------------------------------------------------------------------------- /DAYS_PROGRESS/DAY15.md: -------------------------------------------------------------------------------- 1 | 2 | DAY 15: 3 | ======= 4 | 5 | ACTIVITIES 6 | --------------------------------------------------------------------------------------------------------------- 7 | ### HIGHLIGHT: BUILT A DOUBLE LAYER NEURAL NETWORK THAT DETECTS RED, GREEN, AND BLUE COLORS IN CORE PYTHON 8 | 9 | 1. Built a simple neural network with a single hidden layer, input layer and output layer. 10 | It detects red, green, blue colour shades. It did pretty fine :). Link: https://github.com/ayivima/AI-SURFS/blob/master/Red_Green_Blue_Classifier/RGB_Classifier.md 11 | 12 | Code available here: https://github.com/ayivima/AI-SURFS/blob/master/Red_Green_Blue_Classifier/nn_in_py2.py 13 | 14 | 2. 
Work continues on matrix tools as usual: 15 | Link here: https://github.com/ayivima/AI-SURFS/blob/master/Matrixtools/matrixtools.py 16 | 17 | 18 | REPO FOR 60DAYSOFUDACITY: 19 | ------------------------- 20 | https://github.com/ayivima/AI-SURFS/ 21 | 22 | PROGRESS: 23 | --------- 24 | https://github.com/ayivima/AI-SURFS/blob/master/DAYS_PROGRESS 25 | 26 | 27 | ENCOURAGEMENTS 28 | -------------- 29 | Cheers to @Nirupama Singh, @Frida, @Lisa Crossman, @Nirupama Singh, @Stark, @Samuela Anastasi, @nabhanpv, @Nana Aba T, @geekykant, @Shaam, @EPR, @Anshu Trivedi, @George Christopoulos, @Vivank Sharma, @Heather A, @Joyce Obi, @Aditya kumar, @vivek, @Florence Njeri, @Jess, @J. Luis Samper, @gfred, @Erika Yoon. 30 | -------------------------------------------------------------------------------- /DAYS_PROGRESS/DAY16.md: -------------------------------------------------------------------------------- 1 | 2 | DAY 16: 3 | ======= 4 | 5 | ACTIVITIES 6 | --------------------------------------------------------------------------------------------------------------- 7 | ### HIGHLIGHT: EXTENDED THE PYTHON BARE BONES NEURAL NETWORK WITH AN ADDITIONAL HIDDEN LAYER 8 | 9 | 1. Continued with exploration of building NNs from the scratch with core python. 10 | Classifier can additionally detect Yellow, Cyan and Magenta. 11 | Link: https://github.com/ayivima/AI-SURFS/blob/master/RGBYCM_Color_Classifier/README.md 12 | 13 | Code available here: https://github.com/ayivima/AI-SURFS/blob/master/RGBYCM_Color_Classifier/nn_in_py3.py 14 | 15 | 16 | 17 | 2. Work continues on matrix tools... 
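Stripped to its essentials, the forward pass of a bare-bones classifier like the one in item 1 is just repeated weighted sums and activations. A toy core-python sketch, with made-up layer sizes and random weights rather than the actual `nn_in_py3.py` code:

```python
import math
import random

def sigmoid(x):
    """Squash a weighted sum into the range (0, 1)."""
    return 1 / (1 + math.exp(-x))

def layer(inputs, weights, biases):
    """One fully connected layer: weighted sums, then the sigmoid activation."""
    return [
        sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
        for ws, b in zip(weights, biases)
    ]

def forward(inputs, layers):
    """Feed the inputs through every (weights, biases) layer in turn."""
    for weights, biases in layers:
        inputs = layer(inputs, weights, biases)
    return inputs

def random_layer(n_in, n_out):
    """Made-up weights for an n_in -> n_out layer (illustration only)."""
    weights = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]
    return weights, [0.0] * n_out

# Toy topology: 3 inputs -> two hidden layers of 4 -> 6 outputs
random.seed(0)
net = [random_layer(3, 4), random_layer(4, 4), random_layer(4, 6)]
outputs = forward([0.9, 0.1, 0.2], net)
print(len(outputs))  # 6 scores, each squashed into (0, 1)
```

Stacking one more `(weights, biases)` pair into `net` is all it takes to add another hidden layer, which is essentially what this day's extension did.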
18 | 19 | 20 | 21 | REPO FOR 60DAYSOFUDACITY: 22 | ------------------------- 23 | https://github.com/ayivima/AI-SURFS/ 24 | 25 | PROGRESS: 26 | --------- 27 | https://github.com/ayivima/AI-SURFS/blob/master/DAYS_PROGRESS 28 | 29 | 30 | ENCOURAGEMENTS 31 | -------------- 32 | Cheers to @Nirupama Singh, @Frida, @Lisa Crossman, @Stark, @Samuela Anastasi, @nabhanpv, @Nana Aba T, @geekykant, @Shaam, @EPR, @Anshu Trivedi, @George Christopoulos, @Vivank Sharma, @Heather A, @Joyce Obi, @Aditya kumar, @vivek, @Florence Njeri, @Jess, @J. Luis Samper, @gfred, @Erika Yoon. 33 | -------------------------------------------------------------------------------- /DAYS_PROGRESS/DAY17.md: -------------------------------------------------------------------------------- 1 | 2 | 3 | DAY 17: 4 | ======= 5 | 6 | ACTIVITIES 7 | --------------------------------------------------------------------------------------------------------------- 8 | ### FIRST VIRTUAL MEETUP FOR GHANAIAN SPAIC SCHOLARS 9 | 10 | 1. Attended the first virtual meetup for Ghanaian SPAIC scholars, organised by @Nana Aba T. 11 | Discussed the future of AI in healthcare, and Africa. 12 | 13 | 2. Reflected on, researched on, and delved deeper into Differential Privacy; a reaction to stimulating questions asked by @kumaraswamy in the 14 | #l2_intro_diff_privacy channel. 15 | 16 | 3. Work continues on matrix tools... 17 | 18 | 19 | 20 | REPO FOR 60DAYSOFUDACITY: 21 | ------------------------- 22 | https://github.com/ayivima/AI-SURFS/ 23 | 24 | PROGRESS: 25 | --------- 26 | https://github.com/ayivima/AI-SURFS/blob/master/DAYS_PROGRESS 27 | 28 | 29 | ENCOURAGEMENTS 30 | -------------- 31 | Cheers to @Nirupama Singh, @Frida, @Lisa Crossman, @Stark, @Samuela Anastasi, @nabhanpv, @Nana Aba T, @geekykant, @Shaam, @EPR, @Anshu Trivedi, @George Christopoulos, @Vivank Sharma, @Heather A, @Joyce Obi, @Aditya kumar, @vivek, @Florence Njeri, @Jess, @J. Luis Samper, @gfred, @Erika Yoon. 
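The Differential Privacy discussions mentioned in item 2 revolve around adding calibrated noise to query results. A minimal sketch of the Laplace mechanism, with `sensitivity` and `epsilon` as illustrative parameters:

```python
import math
import random

def laplace_noise(scale):
    """Draw from Laplace(0, scale) via the inverse CDF of a uniform sample."""
    u = random.random() - 0.5
    return -scale * math.copysign(1, u) * math.log(1 - 2 * abs(u))

def private_count(true_count, sensitivity=1.0, epsilon=0.5):
    """Laplace mechanism: noise scaled to sensitivity/epsilon hides any
    single individual's contribution to the count."""
    return true_count + laplace_noise(sensitivity / epsilon)

random.seed(42)
print(private_count(100))  # the true count, perturbed by some Laplace noise
```

A smaller `epsilon` means a larger noise scale, and hence stronger privacy at the cost of accuracy.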
32 | -------------------------------------------------------------------------------- /DAYS_PROGRESS/DAY18.md: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | DAY 18: 5 | ======= 6 | 7 | ACTIVITIES 8 | --------------------------------------------------------------------------------------------------------------- 9 | ### HIGHLIGHT: SPAIC WEBINAR WITH ROBERT WAGNER 10 | 11 | 1. Attended the SPAIC webinar with Robert Wagner. His "open-source contribution -> AI researcher" trajectory 12 | is particularly intriguing. 13 | 14 | 2. Had a discussion with @Stark about ##sg_novice-ai. Teaming up with him to 15 | facilitate Week 2 activities. 16 | 17 | 3. Updated AI-SURFS, with new AI explorations. 18 | 19 | 4. Work continues on matrix tools... 20 | 21 | 22 | REPO FOR 60DAYSOFUDACITY: 23 | ------------------------- 24 | https://github.com/ayivima/AI-SURFS/ 25 | 26 | PROGRESS: 27 | --------- 28 | https://github.com/ayivima/AI-SURFS/blob/master/DAYS_PROGRESS 29 | 30 | 31 | ENCOURAGEMENTS 32 | -------------- 33 | Cheers to @Anna Scott, @Nirupama Singh, @Frida, @Lisa Crossman, @Stark, @Samuela Anastasi, @nabhanpv, @Nana Aba T, @geekykant, @Shaam, @EPR, @Anshu Trivedi, @George Christopoulos, @Vivank Sharma, @Heather A, @Joyce Obi, @Aditya kumar, @vivek, @Florence Njeri, @Jess, @J. Luis Samper, @gfred, @Erika Yoon. 34 | -------------------------------------------------------------------------------- /DAYS_PROGRESS/DAY19.md: -------------------------------------------------------------------------------- 1 | DAY 19: 2 | ======= 3 | 4 | ACTIVITIES 5 | --------------------------------------------------------------------------------------------------------------- 6 | ### HIGHLIGHT: ROLLED OUT FIRST PYBYTE FOR SG_NOVICE-AI 7 | 8 | 1. Enjoying the team-up with @Stark to facilitate activities at #sg_novice-ai. Rolled out the first PYBYTE, and had the first virtual meetup. 
@Michael Sheinman @Eman @cibaca @Shudipto Trafder @Rosa Paccotacya @Agata [OR, USA] @Sayed Maheen Basheer @Olivia @Aarthi Alagammai @Poornima Venkatraman @SANMITRA 9 | 10 | 2. I and @Stark are back to work on planning next projects. 11 | 12 | 3. Work continues on matrix tools... 13 | 14 | 15 | REPO FOR 60DAYSOFUDACITY: 16 | ------------------------- 17 | https://github.com/ayivima/AI-SURFS/ 18 | 19 | PROGRESS: 20 | --------- 21 | https://github.com/ayivima/AI-SURFS/blob/master/DAYS_PROGRESS 22 | 23 | 24 | ENCOURAGEMENTS 25 | -------------- 26 | Cheers to @Varez.W, @THIYAGARAJAN R, @LauraT T, @Anna Scott, @Nirupama Singh, @Frida, @Lisa Crossman, @Stark, @Samuela Anastasi, @nabhanpv, @Nana Aba T, @geekykant, @Shaam, @EPR, @Anshu Trivedi, @George Christopoulos, @Vivank Sharma, @Heather A, @Joyce Obi, @Aditya kumar, @vivek, @Florence Njeri, @Jess, @J. Luis Samper, @gfred, @Erika Yoon. 27 | -------------------------------------------------------------------------------- /DAYS_PROGRESS/DAY20.md: -------------------------------------------------------------------------------- 1 | DAY 20: 2 | ======= 3 | 4 | ACTIVITIES 5 | --------------------------------------------------------------------------------------------------------------- 6 | ### HAD SO MUCH TO DO 7 | 8 | 1. Resumed reading this book, "The Mathematics of Language": http://linguistics.ucla.edu/people/Kracht/html/formal.alt.pdf. I am sharing it, especially, with NLP lovers. 9 | The day I accepted that the entire world could be described in math, was the day I had my AI epiphany. 10 | 11 | 2. Squeezed some time to spend in the Udacity workspace. 12 | https://github.com/ayivima/AI-SURFS/blob/master/Udacity_DL_With_Pytorch_Exercises/Part%203%20-%20Training%20Neural%20Networks%20(Exercises).ipynb 13 | 14 | 3. I and @Stark are busy on a side project...And working on stirring up activity in #sg_novice-ai. 15 | 16 | 4. 
...And, I am still at implementing concepts at a bare-bones level: Matrix tools and its "spin-offs" 17 | 18 | 19 | REPO FOR 60DAYSOFUDACITY: 20 | ------------------------- 21 | https://github.com/ayivima/AI-SURFS/ 22 | 23 | PROGRESS: 24 | --------- 25 | https://github.com/ayivima/AI-SURFS/blob/master/DAYS_PROGRESS 26 | 27 | 28 | ENCOURAGEMENTS 29 | -------------- 30 | Cheers to @Olivia, @Sharim, @Varez.W, @THIYAGARAJAN R, @LauraT , @Anna Scott, @Nirupama Singh, @Frida, @Lisa Crossman, @Stark, @Samuela Anastasi, @nabhanpv, @Nana Aba T, @geekykant, @Shaam, @EPR, @Anshu Trivedi, @George Christopoulos, @Vivank Sharma, @Heather A, @Joyce Obi, @Aditya kumar, @vivek, @Florence Njeri, @Jess, @J. Luis Samper, @gfred, @Erika Yoon. 31 | -------------------------------------------------------------------------------- /DAYS_PROGRESS/DAY21.md: -------------------------------------------------------------------------------- 1 | 2 | DAY 21: 3 | ======= 4 | 5 | ACTIVITIES 6 | --------------------------------------------------------------------------------------------------------------- 7 | ### PYBYTES IN SG_NOVICE-AI, BUILT A SIMPLE PYTORCH WRAPPER 8 | 9 | 1. Rolled out PYBYTES in #sg_novice-ai, which led us to demonstrate, and self implement tensor flattening. 10 | It is part of a series of explorations of seemingly scary AI concepts. 11 | Special shout out to @Olivia, @Eman, @Hung, @Aarthi Alagammai, and @Nirupama Singh for their amazing contributions to today's activities. 12 | 13 | 2. Still reading this book, "The Mathematics of Language": http://linguistics.ucla.edu/people/Kracht/html/formal.alt.pdf. 14 | 15 | 3. Built a minimalistic wrapper around Pytorch for creating networks. 
Network creation gets as easy as: 16 | `network(sequence of node counts for each layer, hidden nodes activation function, output function)` 17 | ``` 18 | >>> model3 = network((784,256,128,64,32,16,10,5,3), ReLU(), logsoftmax(0)) 19 | >>> model3 20 | Sequential( 21 | (0): Linear(in_features=784, out_features=256, bias=True) 22 | (1): ReLU() 23 | (2): Linear(in_features=256, out_features=128, bias=True) 24 | (3): ReLU() 25 | (4): Linear(in_features=128, out_features=64, bias=True) 26 | (5): ReLU() 27 | (6): Linear(in_features=64, out_features=32, bias=True) 28 | (7): ReLU() 29 | (8): Linear(in_features=32, out_features=16, bias=True) 30 | (9): ReLU() 31 | (10): Linear(in_features=16, out_features=10, bias=True) 32 | (11): ReLU() 33 | (12): Linear(in_features=10, out_features=5, bias=True) 34 | (13): ReLU() 35 | (14): Linear(in_features=5, out_features=3, bias=True) 36 | (15): ReLU() 37 | (16): LogSoftmax() 38 | ) 39 | ``` 40 | Snapshots of `quicknets` in action attached below. 41 | 42 | 4. @Stark's passion for #sg_novice-ai is amazing...We keep planning on stirring up empowering activity in #sg_novice-ai. 43 | We believe in building from the ground right up! @Stark says we will become "Pytorch beasts"...Haha! 44 | 45 | 5. ...And, I am still at implementing concepts at a bare-bones level: Matrix tools and its "spin-offs" 46 | 47 | 48 | REPO FOR 60DAYSOFUDACITY: 49 | ------------------------- 50 | https://github.com/ayivima/AI-SURFS/ 51 | 52 | PROGRESS: 53 | --------- 54 | https://github.com/ayivima/AI-SURFS/blob/master/DAYS_PROGRESS 55 | 56 | 57 | ENCOURAGEMENTS 58 | -------------- 59 | Cheers to @Eman, @Aarthi Alagammai, @Olivia, @Sharim, @Hung, @Varez.W, @THIYAGARAJAN R, @LauraT , @Anna Scott, @Nirupama Singh, @Frida, @Lisa Crossman, @Stark, @Samuela Anastasi, @nabhanpv, @Nana Aba T, @geekykant, @Shaam, @EPR, @Anshu Trivedi, @George Christopoulos, @Vivank Sharma, @Heather A, @Joyce Obi, @Aditya kumar, @vivek, @Florence Njeri, @Jess, @J. 
Luis Samper, @gfred, @Erika Yoon. 60 | -------------------------------------------------------------------------------- /DAYS_PROGRESS/DAY22.md: -------------------------------------------------------------------------------- 1 | 2 | 3 | DAY 22: 4 | ======= 5 | 6 | ACTIVITIES 7 | --------------------------------------------------------------------------------------------------------------- 8 | ### PYBYTES IN SG_NOVICE-AI 9 | 10 | 1. Rolled out PYBYTES in #sg_novice-ai, which led us to explore tensor dimensions.  11 | It is part of a series of explorations of seemingly scary AI concepts.  12 | Special shout out to @Eman, @Hung, @Aarthi Alagammai, and @Shudipto Trafder for their amazing contributions to today's activities......And there's more planning with @Stark 13 | 14 | 2. Still reading this book, "The Mathematics of Language": http://linguistics.ucla.edu/people/Kracht/html/formal.alt.pdf. 15 | 16 | 3. ...And, I am still at implementing concepts at a bare-bones level: Matrix tools and its "spin-offs".....and the Pytorch wrapper project. 17 | 18 | 19 | REPO FOR 60DAYSOFUDACITY: 20 | ------------------------- 21 | https://github.com/ayivima/AI-SURFS/ 22 | 23 | PROGRESS: 24 | --------- 25 | https://github.com/ayivima/AI-SURFS/blob/master/DAYS_PROGRESS 26 | 27 | 28 | ENCOURAGEMENTS 29 | -------------- 30 | Cheers to  @Eman, @Aarthi Alagammai, @Olivia, @Sharim, @Hung, @Varez.W, @THIYAGARAJAN R, @LauraT , @Anna Scott, @Nirupama Singh, @Frida, @Lisa Crossman, @Stark, @Samuela Anastasi, @nabhanpv, @Nana Aba T, @geekykant, @Shaam, @EPR, @Anshu Trivedi, @George Christopoulos, @Vivank Sharma, @Heather A, @Joyce Obi, @Aditya kumar, @vivek, @Florence Njeri, @Jess, @J. Luis Samper, @gfred, @Erika Yoon. 
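The tensor-dimension exploration from item 1 amounts to walking down nested lists and counting lengths at each depth. A vanilla-Python sketch of the idea (not the exact PYBYTES code):

```python
def shape(data):
    """Collect the length at each level of nesting, similar to what
    tensor.shape reports for a PyTorch tensor."""
    dims = []
    while isinstance(data, list):
        dims.append(len(data))
        data = data[0]
    return tuple(dims)

print(shape([[1, 2, 3], [4, 5, 6]]))    # (2, 3)
print(shape([[[1], [2]], [[3], [4]]]))  # (2, 2, 1)
```

Like a real tensor, the nesting is assumed rectangular; a ragged list would need extra checks.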
31 | -------------------------------------------------------------------------------- /DAYS_PROGRESS/DAY23.md: -------------------------------------------------------------------------------- 1 | 2 | 3 | DAY 23: 4 | ======= 5 | 6 | ACTIVITIES 7 | --------------------------------------------------------------------------------------------------------------- 8 | ### PYBYTES IN SG_NOVICE-AI, BAREBONES AI TOOLS IMPLEMENTATION 9 | 10 | 1. Still busy with PYBYTES in #sg_novice-ai  11 | The series of explorations of seemingly scary AI concepts.  12 | 13 | 2. Still reading this book, "The Mathematics of Language": http://linguistics.ucla.edu/people/Kracht/html/formal.alt.pdf. 14 | 15 | 3. ...Matrix tools and its "spin-offs".....and the Pytorch wrapper project...still ongoing 16 | 17 | 4. The weekend was tough. I took yesterday off :) 18 | 19 | 20 | 21 | REPO FOR 60DAYSOFUDACITY: 22 | ------------------------- 23 | https://github.com/ayivima/AI-SURFS/ 24 | 25 | PROGRESS: 26 | --------- 27 | https://github.com/ayivima/AI-SURFS/blob/master/DAYS_PROGRESS 28 | 29 | 30 | 31 | ENCOURAGEMENTS 32 | -------------- 33 | Cheers to  @Hung, @Eman, @Aarthi Alagammai, @Olivia, @Sharim, @Hung, @Varez.W, @THIYAGARAJAN R, @LauraT , @Anna Scott, @Nirupama Singh, @Frida, @Lisa Crossman, @Stark, @Samuela Anastasi, @nabhanpv, @Nana Aba T, @geekykant, @Shaam, @EPR, @Anshu Trivedi, @George Christopoulos, @Vivank Sharma, @Heather A, @Joyce Obi, @Aditya kumar, @vivek, @Florence Njeri, @Jess, @J. Luis Samper, @gfred, @Erika Yoon. 34 | -------------------------------------------------------------------------------- /DAYS_PROGRESS/DAY24.md: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | DAY 24: 5 | ======= 6 | 7 | ACTIVITIES 8 | --------------------------------------------------------------------------------------------------------------- 9 | ### DRIVING SG_NOVICE-AI DEEPER INTO PYTORCH WITH PYBYTES 10 | 11 | 1. 
Still busy with creating PYBYTES in #sg_novice-ai. We have successfully implemented equivalents of tensor normalization, tensor flattening, getting tensor shapes, and tensor reshaping. 12 | ------------ 13 | 14 | **A TEASER:** We successfully demonstrated, in vanilla code, how reshaping with `tensor.view` will change this `6 X 2` matrix: 15 | ``` 16 | P = [ 17 | [1, 2], 18 | [3, 4], 19 | [5, 6], 20 | [7, 8], 21 | [9, 10], 22 | [11, 12] 23 | ] 24 | ``` 25 | to this, a `4 X 3` matrix: 26 | 27 | ``` 28 | P = [ 29 | [1, 2, 3], 30 | [4, 5, 6], 31 | [7, 8, 9], 32 | [10, 11, 12] 33 | ] 34 | ``` 35 | 36 | What we have achieved: We have first-hand experience, through our own implementations, which lets us vividly describe reshaping with `tensor.view()`, normalization with `transforms.Normalize` etc. 37 | 38 | You can actually join us at #sg_novice-ai. We have more activities in the pipeline...One of them is **DLDANCE**...Can you visualize how Deep Learning dances?...Haha...Join us to see! 39 | 40 | **HONOURABLE MENTIONS: @Ingus Terbets, @Agata [OR, USA], @Hung, @Eman, @Aarthi Alagammai, @Olivia, @Shudipto Trafder. Thanks to you for your amazing contributions.** 41 | 42 | 43 | 2. Still reading this book, "The Mathematics of Language": http://linguistics.ucla.edu/people/Kracht/html/formal.alt.pdf...In smaller bits now as the activities pile. 44 | --------------------- 45 | 46 | 47 | 3.
...Matrix tools and its "spin-offs".....and the Pytorch wrapper project...still ongoing 48 | --------------------- 49 | 50 | 51 | REPO FOR 60DAYSOFUDACITY: 52 | ------------------------- 53 | https://github.com/ayivima/AI-SURFS/ 54 | 55 | PROGRESS: 56 | --------- 57 | https://github.com/ayivima/AI-SURFS/blob/master/DAYS_PROGRESS 58 | 59 | 60 | 61 | ENCOURAGEMENTS 62 | -------------- 63 | Cheers to  @Ingus Terbets, @Agata [OR, USA], @Hung, @Eman, @Aarthi Alagammai, @Olivia, @Sharim, @Hung, @Varez.W, @THIYAGARAJAN R, @LauraT , @Anna Scott, @Nirupama Singh, @Frida, @Lisa Crossman, @Stark, @Samuela Anastasi, @nabhanpv, @Nana Aba T, @geekykant, @Shaam, @EPR, @Anshu Trivedi, @George Christopoulos, @Vivank Sharma, @Heather A, @Joyce Obi, @Aditya kumar, @vivek, @Florence Njeri, @Jess, @J. Luis Samper, @gfred, @Erika Yoon. 64 | -------------------------------------------------------------------------------- /DAYS_PROGRESS/DAY25.md: -------------------------------------------------------------------------------- 1 | 2 | DAY 25: 3 | ======= 4 | 5 | ACTIVITIES 6 | --------------------------------------------------------------------------------------------------------------- 7 | ### SPENT MORE TIME ON QUICKNETS WRAPPER - NEW MILESTONES TODAY, PYBYTES CONTINUED 8 | 9 | 1. Added new functionality to my Pytorch wrapper: QUICKNETS. Tested it out on FashionMNIST and glad with the outcomes. Among a couple milestones, looking at training different models with different tunings and comparing them more conveniently, and tweaking outputs. 10 | Snapshots here: https://raw.githubusercontent.com/ayivima/AI-SURFS/master/DAYS_PROGRESS/imgs/wrapper_shot.png, https://raw.githubusercontent.com/ayivima/AI-SURFS/master/DAYS_PROGRESS/imgs/wrapper_shot2.png 11 | 12 | 2. Still busy with creating PYBYTES in #sg_novice-ai. We have explored dot product out of the usual way. 13 | **HONOURABLE MENTIONS: @Ingus Terbets, @Aarthi Alagammai, @Olivia. 
Thanks to you for your amazing contributions.** 14 | 15 | 3. No reading today...But it's still up..."The Mathematics of Language": http://linguistics.ucla.edu/people/Kracht/html/formal.alt.pdf 16 | 17 | 4. ...Matrix tools and its "spin-offs".....and the Pytorch wrapper project...still ongoing...MATRIXTOOLS - THERE'S A PARTICULAR ARITHMETIC THAT KEEPS OUTSMARTING ME :slightly_smiling_face: 18 | 19 | REPO FOR 60DAYSOFUDACITY: 20 | ------------------------- 21 | https://github.com/ayivima/AI-SURFS/ 22 | 23 | PROGRESS: 24 | --------- 25 | https://github.com/ayivima/AI-SURFS/blob/master/DAYS_PROGRESS 26 | 27 | ENCOURAGEMENTS 28 | -------------- 29 | Cheers to @Labiba, @iso., @Yemi, @Alexander Villasoto, @Ingus Terbets, @Agata [OR, USA], @Hung, @Eman, @Aarthi Alagammai, @Olivia, @Sharim, @Hung, @Varez.W, @THIYAGARAJAN R, @LauraT , @Anna Scott, @Nirupama Singh, @Frida, @Lisa Crossman, @Stark, @Samuela Anastasi, @nabhanpv, @Nana Aba T, @geekykant, @Shaam, @EPR, @Anshu Trivedi, @George Christopoulos, @Vivank Sharma, @Heather A, @Joyce Obi, @Aditya kumar, @vivek, @Florence Njeri, @Jess, @J. Luis Samper, @gfred, @Erika Yoon 30 | -------------------------------------------------------------------------------- /DAYS_PROGRESS/DAY26.md: -------------------------------------------------------------------------------- 1 | 2 | 3 | DAY 26: 4 | ======= 5 | 6 | ACTIVITIES 7 | --------------------------------------------------------------------------------------------------------------- 8 | ### A MODEST DAY 9 | 10 | 1. Still on Matrix tools and Pytorch wrapper project...slowly prepping for a run with all the SPAI stuff. 11 | 12 | 2. PYBYTES in #sg_novice-ai. Cheers to all contributors #sg_novice-ai. @Stark rolled out DLDANCE!!! 13 | 14 | 3. Spent time in the workspace 15 | 16 | 4. 
A small byte of this "The Mathematics of Language": http://linguistics.ucla.edu/people/Kracht/html/formal.alt.pdf 17 | 18 | 19 | REPO FOR 60DAYSOFUDACITY: 20 | ------------------------- 21 | https://github.com/ayivima/AI-SURFS/ 22 | 23 | PROGRESS: 24 | --------- 25 | https://github.com/ayivima/AI-SURFS/blob/master/DAYS_PROGRESS 26 | 27 | ENCOURAGEMENTS 28 | -------------- 29 | Cheers to @K.S., @Labiba, @Shudipto Trafder, @Marwa, @iso., @Yemi, @Alexander Villasoto, @Ingus Terbets, @Agata [OR, USA], @Hung, @Eman, @Aarthi Alagammai, @Olivia, @Sharim, @Hung, @Varez.W, @THIYAGARAJAN R, @LauraT , @Anna Scott, @Nirupama Singh, @Frida, @Lisa Crossman, @Stark, @Samuela Anastasi, @nabhanpv, @Nana Aba T, @geekykant, @Shaam, @EPR, @Anshu Trivedi, @George Christopoulos, @Vivank Sharma, @Heather A, @Joyce Obi, @Aditya kumar, @vivek, @Florence Njeri, @Jess, @J. Luis Samper, @gfred, @Erika Yoon 30 | -------------------------------------------------------------------------------- /DAYS_PROGRESS/DAY27.md: -------------------------------------------------------------------------------- 1 | 2 | 3 | DAY 27: 4 | ======= 5 | 6 | ACTIVITIES 7 | --------------------------------------------------------------------------------------------------------------- 8 | ### ANOTHER MODEST DAY, BUT EVENTFUL IN THE SHADOWS 9 | 10 | I had a stressful day...so I took a break to read up on the history of math. I am a fan of the mystery AI has brought to, and taken from, Math at the same time. From my experience, AI and Math exist within a space of alternating "Wows" and "That simple?" moments. But, what fascinates me most is how simple geometry, among other math disciplines, like the theorems of Pythagoras and Euclid, has evolved into intelligence comparable to the mind - Well, AI's power will always be a minuscule fraction of the human mind's power. And, I have been wondering: what other parts of math are we underrating?
There's probably something more to the simple `1 + 1 = 2` theory, which someone must find for the next breakthrough. Then, I woke up from my fantasy to type my progress...Haha 11 | 12 | 13 | 1. Still slowly prepping for a run with all the SPAI stuff...and on the personal projects I started already. For a few days more, I will have a pretty modest and repetitive routine. 14 | 15 | 2. PYBYTES had a break today...deliberately. 16 | 17 | 3. Spent time in the Udacity workspace 18 | 19 | 4. A small byte of this "The Mathematics of Language": http://linguistics.ucla.edu/people/Kracht/html/formal.alt.pdf 20 | 21 | 22 | 23 | REPO FOR 60DAYSOFUDACITY: 24 | ------------------------- 25 | https://github.com/ayivima/AI-SURFS/ 26 | 27 | PROGRESS: 28 | --------- 29 | https://github.com/ayivima/AI-SURFS/blob/master/DAYS_PROGRESS 30 | 31 | ENCOURAGEMENTS 32 | -------------- 33 | Cheers to @K.S., @Labiba, @Shudipto Trafder, @Marwa, @iso., @Yemi, @Alexander Villasoto, @Ingus Terbets, @Agata [OR, USA], @Hung, @Eman, @Aarthi Alagammai, @Olivia, @Sharim, @Hung, @Varez.W, @THIYAGARAJAN R, @LauraT , @Anna Scott, @Nirupama Singh, @Frida, @Lisa Crossman, @Stark, @Samuela Anastasi, @nabhanpv, @Nana Aba T, @geekykant, @Shaam, @EPR, @Anshu Trivedi, @George Christopoulos, @Vivank Sharma, @Heather A, @Joyce Obi, @Aditya kumar, @vivek, @Florence Njeri, @Jess, @J. Luis Samper, @gfred, @Erika Yoon 34 | -------------------------------------------------------------------------------- /DAYS_PROGRESS/DAY28.md: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | DAY 28: 5 | ======= 6 | 7 | ACTIVITIES 8 | --------------------------------------------------------------------------------------------------------------- 9 | ### ANOTHER MODEST DAY 10 | 11 | I had another AI fantasy...But it's too long to be written...Haha... 12 | 13 | 1. Did a quick write on the concept of Remote Execution, as related to PySyft. 
14 | https://secureprivataischolar.slack.com/archives/CJCHVPLGK/p1564142947091000 15 | https://github.com/ayivima/AI-SURFS/blob/master/Federated_Learning/Remote_Execution_Overview.md/ 16 | 17 | 2. PYBYTES resumed, and we are delving deeper into image analysis. 18 | https://secureprivataischolar.slack.com/archives/CL5KWHXR6/p1564157720187100 19 | 20 | 3. Spent time in the Udacity workspace 21 | 22 | 4. A small byte of this "The Mathematics of Language": http://linguistics.ucla.edu/people/Kracht/html/formal.alt.pdf 23 | 24 | 25 | 26 | REPO FOR 60DAYSOFUDACITY: 27 | ------------------------- 28 | https://github.com/ayivima/AI-SURFS/ 29 | 30 | PROGRESS: 31 | --------- 32 | https://github.com/ayivima/AI-SURFS/blob/master/DAYS_PROGRESS 33 | 34 | ENCOURAGEMENTS 35 | -------------- 36 | Cheers to @K.S., @Labiba, @Shudipto Trafder, @Marwa, @iso., @Yemi, @Alexander Villasoto, @Ingus Terbets, @Agata [OR, USA], @Hung, @Eman, @Aarthi Alagammai, @Olivia, @Sharim, @Hung, @Varez.W, @THIYAGARAJAN R, @LauraT , @Anna Scott, @Nirupama Singh, @Frida, @Lisa Crossman, @Stark, @Samuela Anastasi, @nabhanpv, @Nana Aba T, @geekykant, @Shaam, @EPR, @Anshu Trivedi, @George Christopoulos, @Vivank Sharma, @Heather A, @Joyce Obi, @Aditya kumar, @vivek, @Florence Njeri, @Jess, @J. Luis Samper, @gfred, @Erika Yoon 37 | -------------------------------------------------------------------------------- /DAYS_PROGRESS/DAY29.md: -------------------------------------------------------------------------------- 1 | 2 | 3 | DAY 29: 4 | ======= 5 | 6 | ACTIVITIES 7 | --------------------------------------------------------------------------------------------------------------- 8 | ### ANOTHER MODEST DAY 9 | 10 | Glad to see scholars gone past the 30-day mark already. Congratulations!!! 11 | 12 | 1. Spent time on the course notebooks. 13 | 14 | 2. Gearing up for my phase 2 of the 60daysofudacity challenge. 
15 | 16 | 17 | 18 | REPO FOR 60DAYSOFUDACITY: 19 | ------------------------- 20 | https://github.com/ayivima/AI-SURFS/ 21 | 22 | PROGRESS: 23 | --------- 24 | https://github.com/ayivima/AI-SURFS/blob/master/DAYS_PROGRESS 25 | 26 | ENCOURAGEMENTS 27 | -------------- 28 | Cheers to @K.S., @Labiba, @Shudipto Trafder, @Marwa, @iso., @Yemi, @Alexander Villasoto, @Ingus Terbets, @Agata [OR, USA], @Hung, @Eman, @Aarthi Alagammai, @Olivia, @Sharim, @Hung, @Varez.W, @THIYAGARAJAN R, @LauraT , @Anna Scott, @Nirupama Singh, @Frida, @Lisa Crossman, @Stark, @Samuela Anastasi, @nabhanpv, @Nana Aba T, @geekykant, @Shaam, @EPR, @Anshu Trivedi, @George Christopoulos, @Vivank Sharma, @Heather A, @Joyce Obi, @Aditya kumar, @vivek, @Florence Njeri, @Jess, @J. Luis Samper, @gfred, @Erika Yoon 29 | -------------------------------------------------------------------------------- /DAYS_PROGRESS/DAY30.md: -------------------------------------------------------------------------------- 1 | 2 | DAY 30: 3 | ======= 4 | ACTIVITIES 5 | --------------------------------------------------------------------------------------------------------------- 6 | RETROSPECTION OF MY 30-DAY JOURNEY 7 | 8 | PROJECTS: 9 | 1. VectorKit - https://github.com/ayivima/vectorkit 10 | 11 | 2. MatrixKit - https://github.com/ayivima/matrixkit 12 | 13 | 3. Pytorch Wrappers (More of Personal Gain) 14 | 15 | SOME NOTABLE MOMENTS: 16 | 17 | 1. Proof of Concept for CNN Architecture/Algorithms 18 | Thanks to @Shaam ...He had an interesting project that was to test correlation between images. That was the beginning of this particular experimental journey. There were three images of apples. Using core Python, NumPy and PIL, I sought to compute the correlation between them. It did well with two of the images...They all had white backgrounds and similar-sized apples...The third had an apple similar to those in the other images, but the code reported poor correlation.
19 | And, CNNs made sense...a blind multilayer perceptron, of course, would be helpless and hopeless. 20 | Materials from my first exploit here: https://github.com/ayivima/AI-SURFS/blob/master/Power_Of_Math_In_Image_Analysis/ 21 | 22 | 2. I further explored the weaknesses of Perceptrons by using toy Multilayer Perceptrons to determine colours within colours. That was a furtherance of the toy Multilayer Perceptron here: https://github.com/ayivima/AI-SURFS/blob/master/RGBYCM_Color_Classifier/ 23 | 24 | 3. A further dive into CNNs involved exploration of Mathematical Dynamical Systems Theory (it has interesting applications in human behaviour, and makes sense in CNN applications and neural networks), and the power behind convolutions. I share some of my resources: It is quite easy on the brain to start with these... https://www.britannica.com/science/analysis-mathematics/Dynamical-systems-theory-and-chaos#ref732355, https://www.researchgate.net/publication/23786997_A_view_of_Neural_Networks_as_dynamical_systems/link/0fcfd50a90b0b1a417000000/download 25 | , https://core.ac.uk/download/pdf/82349505.pdf, http://eprints.whiterose.ac.uk/78723/1/acse%20research%20report%20436.....pdf. 26 | 27 | 4. Then, I was fascinated by the adoption of CNNs for Natural Language Processing...and the debate on their preference over RNNs. 28 | This was how I got back to the book "Mathematics of Language" [http://linguistics.ucla.edu/people/Kracht/html/formal.alt.pdf], to be able to appreciate the notions better. But, I have been more fascinated by how there is a link between language acquisition and dynamical systems theory: https://www.rug.nl/staff/c.l.j.de.bot/debotetal2007-bilingualism.pdf 29 | 30 | With that surface retrospection, I think I am poised to push further to my next phase, Phase 2, of this challenge!
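The image-correlation proof of concept described above can be sketched in a few lines. This is a minimal illustration, not the original script: it computes a plain Pearson correlation over pixel values, and uses small synthetic arrays as stand-ins for the apple images.

```python
import numpy as np

def image_correlation(img_a, img_b):
    """Pearson correlation between two equal-sized grayscale images.

    Values near 1.0 suggest strong pixel-wise similarity. A raw
    measure like this is easily fooled by shifts and scale changes,
    which is exactly the weakness that motivates convolutions.
    """
    a = np.asarray(img_a, dtype=float).ravel()
    b = np.asarray(img_b, dtype=float).ravel()
    a -= a.mean()
    b -= b.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy stand-ins for images: a diagonal "object" and a shifted copy of it.
base = np.eye(8) * 255
shifted = np.roll(base, 3, axis=1)

print(image_correlation(base, base))     # ~1.0 (identical images)
print(image_correlation(base, shifted))  # low: same object, different position
```

The shifted pattern is "the same apple, moved", yet raw pixel correlation collapses - precisely the blindness that convolutional feature extraction addresses.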
31 | 32 | REPO FOR 60DAYSOFUDACITY: 33 | ------------------------- 34 | https://github.com/ayivima/AI-SURFS/ 35 | 36 | PROGRESS: 37 | --------- 38 | https://github.com/ayivima/AI-SURFS/blob/master/DAYS_PROGRESS 39 | 40 | ENCOURAGEMENTS 41 | -------------- 42 | Cheers to @K.S., @Labiba, @Shudipto Trafder, @Marwa, @iso., @Yemi, @Alexander Villasoto, @Ingus Terbets, @Agata [OR, USA], @Hung, @Eman, @Aarthi Alagammai, @Olivia, @Sharim, @Hung, @Varez.W, @THIYAGARAJAN R, @LauraT , @Anna Scott, @Nirupama Singh, @Frida, @Lisa Crossman, @Stark, @Samuela Anastasi, @nabhanpv, @Nana Aba T, @geekykant, @Shaam, @EPR, @Anshu Trivedi, @George Christopoulos, @Vivank Sharma, @Heather A, @Joyce Obi, @Aditya kumar, @vivek, @Florence Njeri, @Jess, @J. Luis Samper, @gfred, @Erika Yoon 43 | -------------------------------------------------------------------------------- /DAYS_PROGRESS/DAY31.md: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | DAY 31: 5 | ======= 6 | 7 | ACTIVITIES 8 | --------------------------------------------------------------------------------------------------------------- 9 | ### A NEW DAWN BEGINS 10 | 11 | A step into the second half of this challenge...Exciting and Frightening :) at the same time. I hoped this wouldn't end...Haha. 12 | 13 | 1. Continued exploring CNNs and Natural Language Processing...It appears as though there are more points for LSTMs and RNNs but, I think, "convolutions" offer better hope. 14 | Language is chaotic: it is random and, yet, has a pattern. One word can mean many things...but within a context, the differentials narrow down. A practical dynamic system; language is very dynamic. And, convolutional neural networks are great at extracting information from dynamic systems. 15 | 16 | 2. Explored Bag-of-words models, N-gram models, Markov models and some challenges of NLP that have persisted since classical ML. Trying to get soaked in all the perspectives, from classic ML to DL, to grasp all of this.
This is a pretty old paper, but the writer was futuristic: https://www.aclweb.org/anthology/J95-1009. 18 | 19 | 3. I believe, from my exploration so far, that NLP is in some kind of winter. I was wondering why a lot of competitions are geared toward CV; CV is more popular. 20 | Interestingly, the challenges with language even start with humans: we have yet to conquer natural language processing ourselves: https://www.researchgate.net/publication/227006847_Challenges_in_natural_language_processing_The_case_of_metaphor_commentary/citation/download 21 | 22 | 4. I also explored how we would apply differential privacy in NLP...Looks like it's in winter too...But, language is probably our greatest liability when it comes to privacy. This was an interesting read along the line: http://aclweb.org/anthology/P18-2005 23 | 24 | 5. With all that is going on, I just have to continue being suffocated by the teachings of this book..."The Mathematics of Language": http://linguistics.ucla.edu/people/Kracht/html/formal.alt.pdf 25 | 26 | 6. Spent time on the course notebooks. 27 | 28 | 29 | 30 | REPO FOR 60DAYSOFUDACITY: 31 | ------------------------- 32 | https://github.com/ayivima/AI-SURFS/ 33 | 34 | PROGRESS: 35 | --------- 36 | https://github.com/ayivima/AI-SURFS/blob/master/DAYS_PROGRESS 37 | 38 | ENCOURAGEMENTS 39 | -------------- 40 | Cheers to @Aisha Javed, @Temitope Oladokun , @K.S., @Labiba, @Shudipto Trafder, @Marwa, @iso., @Yemi, @Alexander Villasoto, @Ingus Terbets, @Agata [OR, USA], @Hung, @Eman, @Aarthi Alagammai, @Olivia, @Sharim, @Hung, @Varez.W, @THIYAGARAJAN R, @LauraT , @Anna Scott, @Nirupama Singh, @Frida, @Lisa Crossman, @Stark, @Samuela Anastasi, @nabhanpv, @Nana Aba T, @geekykant, @Shaam, @EPR, @Anshu Trivedi, @George Christopoulos, @Vivank Sharma, @Heather A, @Joyce Obi, @Aditya kumar, @vivek, @Florence Njeri, @Jess, @J.
Luis Samper, @gfred, @Erika Yoon 41 | -------------------------------------------------------------------------------- /DAYS_PROGRESS/DAY32.md: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | DAY 32: 6 | ======= 7 | 8 | ACTIVITIES 9 | --------------------------------------------------------------------------------------------------------------- 10 | ### THE NEW DAWN CONTINUES 11 | 12 | 1. Still investigating CNNs and NLP... I implemented a simple Bag-of-words model using spaCy and scikit-learn to seek a deeper understanding of the concepts. And the blindness of the bag-of-words approach to the structure or sequence of words adds validation to the CNN concept. 13 | 14 | 2. I took a curious look at the APTOS Diabetic Retinopathy Classification endeavour by Maria Camila Alvarez Triviño, Jérémie Despraz, Jesús Alfonso López Sotelo and Carlos Andrés Peña. I read their paper: https://arxiv.org/ftp/arxiv/papers/1807/1807.09232.pdf ...and followed to their repo here: https://github.com/mcamila777/DL-to-retina-images 15 | 16 | 3. Spent time on the course notebooks... Been exploring the possibilities of feature unions in PyTorch. And, pipelines that go beyond "transforms.Compose" 17 | 18 | 4.
We keep reading, even if it's just a sentence a day :) --> https://linguistics.ucla.edu/people/Kracht/courses/compling2-2007/formal.pdf 19 | 20 | 21 | REPO FOR 60DAYSOFUDACITY: 22 | ------------------------- 23 | https://github.com/ayivima/AI-SURFS/ 24 | 25 | PROGRESS: 26 | --------- 27 | https://github.com/ayivima/AI-SURFS/blob/master/DAYS_PROGRESS 28 | 29 | ENCOURAGEMENTS 30 | -------------- 31 | Cheers to @Aisha Javed, @Temitope Oladokun , @K.S., @Labiba, @Shudipto Trafder, @Marwa, @iso., @Yemi, @Alexander Villasoto, @Ingus Terbets, @Agata [OR, USA], @Hung, @Eman, @Aarthi Alagammai, @Olivia, @Sharim, @Hung, @Varez.W, @THIYAGARAJAN R, @LauraT , @Anna Scott, @Nirupama Singh, @Frida, @Lisa Crossman, @Stark, @Samuela Anastasi, @nabhanpv, @Nana Aba T, @geekykant, @Shaam, @EPR, @Anshu Trivedi, @George Christopoulos, @Vivank Sharma, @Heather A, @Joyce Obi, @Aditya kumar, @vivek, @Florence Njeri, @Jess, @J. Luis Samper, @gfred, @Erika Yoon 32 | -------------------------------------------------------------------------------- /DAYS_PROGRESS/DAY33.md: -------------------------------------------------------------------------------- 1 | 2 | 3 | DAY 33: 4 | ======= 5 | 6 | ACTIVITIES 7 | --------------------------------------------------------------------------------------------------------------- 8 | ### THE NEW DAWN CONTINUES 9 | 10 | 1. Attended an inspiring webinar by Udacity: "Putting Humans at the Center of AI", with Dr Fei-Fei Li and Sebastian Thrun, Founder of Udacity. 11 | She was a pioneer involved in rebooting AI from winter. She is pursuing some inspiring things for the improvement of healthcare. For instance, using sensors to monitor infection control and patient well-being. Obviously, there is a high attrition rate now for healthcare workers, and AI could do a good job of minimizing the burden. 12 | AI is about 60 years old; that's interesting. 13 | 14 | Major takeaways: 15 | - Algorithmic advances are as critical as data publishing.
As we better algorithms, we must also better data collection and publishing. I guess privacy comes in here as well. 16 | - AI is an interdisciplinary field. A healthy synergistic relationship between the various domains is critical for the unfolding AI revolution. 17 | - Interdisciplinary AI will foster oversight for ethics, values and issues that threaten the constructive adoption of AI. If we have the social sciences, for instance, actively involved, we could find better ways to guide AI toward collaborating with humans to enhance humans, rather than replace them. 18 | - Essentially, AI was created for humans, and must be centered around humans. As practitioners, we need to have this footprint boldly engineered into our endeavours. If any AI will not help humans, it is not a good idea to pursue. 19 | 20 | 2. Still investigating CNNs and NLP... 21 | 22 | 3. Spent time on the course notebooks... Still exploring the possibilities of feature unions and extensive pipelines for PyTorch. 23 | 24 | 4. I read through a scholar's syft-js implementation: https://github.com/vvmnnnkv/syft-js-worker. It is a nice one. One of the best things to do in tech is to read a lot of code, and not just write it. 25 | 26 | 27 | REPO FOR 60DAYSOFUDACITY: 28 | ------------------------- 29 | https://github.com/ayivima/AI-SURFS/ 30 | 31 | PROGRESS: 32 | --------- 33 | https://github.com/ayivima/AI-SURFS/blob/master/DAYS_PROGRESS 34 | 35 | ENCOURAGEMENTS 36 | -------------- 37 | Cheers to @Aisha Javed, @Temitope Oladokun, @K.S., @Labiba, @Shudipto Trafder, @Marwa, @iso., @Yemi, @Alexander Villasoto, @Ingus Terbets, @Agata [OR, USA], @Hung, @Eman, @Aarthi Alagammai, @Olivia, @Sharim, @Hung, @Varez.W, @THIYAGARAJAN R, @LauraT , @Anna Scott, @Nirupama Singh, @Frida, @Lisa Crossman, @Stark, @Samuela Anastasi, @nabhanpv, @Nana Aba T, @geekykant, @Shaam, @EPR, @Anshu Trivedi, @George Christopoulos, @Vivank Sharma, @Heather A, @Joyce Obi, @Aditya kumar, @vivek, @Florence Njeri, @Jess, @J.
Luis Samper, @gfred, @Erika Yoon 38 | -------------------------------------------------------------------------------- /DAYS_PROGRESS/DAY34.md: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | DAY 34: 5 | ======= 6 | 7 | ACTIVITIES 8 | --------------------------------------------------------------------------------------------------------------- 9 | ### LANGUAGE UNDERSTANDING PURSUIT CONTINUES 10 | 11 | 1. Deeper dive into the mathematical structure of language. A sample literature - http://web.mit.edu/6.441/spring06/projects/1/mwillsey_Report.pdf 12 | 13 | 2. Seeking deeper understanding of Formal Language Theory and regular expressions. Meditating through the basics: grammars, terminal and non-terminal symbols, productions, etc. Still investigating CNNs and NLP... 14 | Some places I started with: http://mathworld.wolfram.com/RegularExpression.html, http://mathworld.wolfram.com/Grammar.html, and of course, "The Mathematics of Language" 15 | 16 | 3. Engaged my curiosity towards what Udacity had to say about NLP. I covered the first 20 videos of Lesson 5 of ud730... Embeddings, word2vec, LSTMs, t-SNE versus PCA, memory and backpropagation issues in regular RNNs, etc. 17 | https://eu.udacity.com/course/deep-learning--ud730 18 | 19 | 4. On my exploration of pipelines and possible feature unions for PyTorch, I reviewed the `transforms` source code and documentation.
20 | https://pytorch.org/docs/stable/_modules/torchvision/transforms/transforms.html, 21 | https://pytorch.org/docs/stable/torchvision/transforms.html 22 | 23 | 24 | REPO FOR 60DAYSOFUDACITY: 25 | ------------------------- 26 | https://github.com/ayivima/AI-SURFS/ 27 | 28 | PROGRESS: 29 | --------- 30 | https://github.com/ayivima/AI-SURFS/blob/master/DAYS_PROGRESS 31 | 32 | ENCOURAGEMENTS 33 | -------------- 34 | Cheers to @erinSnPAI , @Andzelika Balyseviene , @PaulBruce, @Aisha Javed, @Temitope Oladokun, @K.S., @Labiba, @Shudipto Trafder, @Marwa, @iso., @Yemi, @Alexander Villasoto, @Ingus Terbets, @Agata [OR, USA], @Hung, @Eman, @Aarthi Alagammai, @Olivia, @Sharim, @Hung, @Varez.W, @THIYAGARAJAN R, @LauraT , @Anna Scott, @Nirupama Singh, @Frida, @Lisa Crossman, @Stark, @Samuela Anastasi, @nabhanpv, @Nana Aba T, @geekykant, @Shaam, @EPR, @Anshu Trivedi, @George Christopoulos, @Vivank Sharma, @Heather A, @Joyce Obi, @Aditya kumar, @vivek, @Florence Njeri, @Jess, @J. Luis Samper, @gfred, @Erika Yoon 35 | -------------------------------------------------------------------------------- /DAYS_PROGRESS/DAY35.md: -------------------------------------------------------------------------------- 1 | 2 | DAY35 3 | ====== 4 | 5 | STILL INVESTIGATING NLP 6 | 7 | 1. Dug deep into Markov chains. 8 | 9 | 2. Spent time on course notebooks. 10 | 11 | 3. 
Read more of the “Mathematics of Language”: http://linguistics.ucla.edu/people/Kracht/html/formal.alt.pdf 12 | 13 | REPO 14 | ===== 15 | https://github.com/ayivima/AI-SURFS 16 | 17 | PROGRESS 18 | ========= 19 | https://github.com/ayivima/AI-SURFS/tree/master/DAYS_PROGRESS 20 | -------------------------------------------------------------------------------- /DAYS_PROGRESS/DAY36.md: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | DAY 36: 6 | ======= 7 | 8 | ACTIVITIES 9 | --------------------------------------------------------------------------------------------------------------- 10 | ### A FOCUSED DAY 11 | 12 | 1. Had a meetup with @Seeratpal K. Jaura @Stark @Hung @Archit @Oudarjya Sen Sarma @Labiba @Agata [OR, USA] @Ingus Terbets @cibaca @Apoorva Patil @Alexander Villasoto @Shudipto Trafder 13 | We are poised to improve healthcare...each one of us. Health is one of the industries in chaos, and we make the case that we should leverage AI more to help streamline processes, arrive at better diagnoses in less time, and ease the burden of clinicians. We are about it. 14 | 15 | 2. On the quest for datasets for our cause above. 16 | 17 | 3. 
And I am still on my deep exploration of CNNs and NLP 18 | 19 | REPO FOR 60DAYSOFUDACITY: 20 | ------------------------- 21 | https://github.com/ayivima/AI-SURFS/ 22 | 23 | PROGRESS: 24 | --------- 25 | https://github.com/ayivima/AI-SURFS/blob/master/DAYS_PROGRESS 26 | 27 | ENCOURAGEMENTS 28 | -------------- 29 | Cheers to @erinSnPAI , @Andzelika Balyseviene , @PaulBruce, @Aisha Javed, @Temitope Oladokun, @K.S., @Labiba, @Shudipto Trafder, @Marwa, @iso., @Yemi, @Alexander Villasoto, @Ingus Terbets, @Agata [OR, USA], @Hung, @Eman, @Aarthi Alagammai, @Olivia, @Sharim, @Hung, @Varez.W, @THIYAGARAJAN R, @LauraT , @Anna Scott, @Nirupama Singh, @Frida, @Lisa Crossman, @Stark, @Samuela Anastasi, @nabhanpv, @Nana Aba T, @geekykant, @Shaam, @EPR, @Anshu Trivedi, @George Christopoulos, @Vivank Sharma, @Heather A, @Joyce Obi, @Aditya kumar, @vivek, @Florence Njeri, @Jess, @J. Luis Samper, @gfred, @Erika Yoon 30 | -------------------------------------------------------------------------------- /DAYS_PROGRESS/DAY37.md: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | DAY 37: 5 | ======= 6 | 7 | ACTIVITIES 8 | --------------------------------------------------------------------------------------------------------------- 9 | ### A PACKED DAY: BLOSSOMING FINDINGS ON CNN AND NLP 10 | 11 | 1. Investigating BERT and OpenAI's GPT-2. These two have an interesting rivalry. Commentary has it that BERT outperformed GPT, until GPT-2 emerged to muddy the waters this year. 12 | So, my curiosity regards the optimal between the left-to-right next-word approach of GPT-2 versus the bidirectional what-is-before-and-after approach of BERT. 13 | https://github.com/openai/gpt-2, https://d4mucfpksywv.cloudfront.net/better-language-models/language-models.pdf, https://arxiv.org/pdf/1810.04805 14 | 15 | 2. Accessed integration options for spaCy and Pytorch. I came across this interesting library: https://pypi.org/project/spacy-pytorch-transformers/. 
16 | But, foremost the CNN-NLP alignment was vindicated: spacy 2.0 uses CNN for tagging, parsing and entity recognition. https://spacy.io/usage/v2#features-models 17 | It's a huge boost to the non-RNN path. 18 | 19 | 3. I went over some of the course notebooks and miniprojects. I have heightened interest in production-grade differential privacy and federated learning. 20 | 21 | 22 | REPO FOR 60DAYSOFUDACITY: 23 | ------------------------- 24 | https://github.com/ayivima/AI-SURFS/ 25 | 26 | PROGRESS: 27 | --------- 28 | https://github.com/ayivima/AI-SURFS/blob/master/DAYS_PROGRESS 29 | 30 | ENCOURAGEMENTS 31 | -------------- 32 | Cheers to @erinSnPAI , @Andzelika Balyseviene , @Aleksandra Deis @PaulBruce, @Aisha Javed, @Temitope Oladokun, @K.S., @Labiba, @Shudipto Trafder, @Marwa, @iso., @Yemi, @Alexander Villasoto, @Ingus Terbets, @Agata [OR, USA], @Hung, @Eman, @Aarthi Alagammai, @Olivia, @Sharim, @Hung, @Varez.W, @THIYAGARAJAN R, @LauraT , @Anna Scott, @Nirupama Singh, @Frida, @Lisa Crossman, @Stark, @Samuela Anastasi, @nabhanpv, @Nana Aba T, @geekykant, @Shaam, @EPR, @Anshu Trivedi, @George Christopoulos, @Vivank Sharma, @Heather A, @Joyce Obi, @Aditya kumar, @vivek, @Florence Njeri, @Jess, @J. Luis Samper, @gfred, @Erika Yoon 33 | -------------------------------------------------------------------------------- /DAYS_PROGRESS/DAY38.md: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | DAY 38: 5 | ======= 6 | 7 | ACTIVITIES 8 | --------------------------------------------------------------------------------------------------------------- 9 | ### CNN, NLP, CRYPTOGRAPHY, SECURE DEEP LEARNING 10 | 11 | 1. Implemented an adhoc CNN model for my NLP exploration - a proof of concept. 12 | 13 | 2. Started crawling appropriate websites for requisite data for my corpus. 14 | 15 | 3. Spent time going over Secure Federated Learning and Encrypted Deep Learning. 16 | 17 | 4. 
Dug into cryptography concepts, especially secret sharing. Some reads: http://www1.cs.columbia.edu/~tal/4261/F16/secretsharing.pdf, 18 | https://users.cs.duke.edu/~ashwin/pubs/Kotsogiannis-Pythia-SIGMOD2017.pdf 19 | 20 | 5. And then, a little for Differential privacy 21 | https://petsymposium.org/2019/files/papers/issue1/popets-2019-0011.pdf 22 | 23 | 24 | 25 | REPO FOR 60DAYSOFUDACITY: 26 | ------------------------- 27 | https://github.com/ayivima/AI-SURFS/ 28 | 29 | PROGRESS: 30 | --------- 31 | https://github.com/ayivima/AI-SURFS/blob/master/DAYS_PROGRESS 32 | 33 | ENCOURAGEMENTS 34 | -------------- 35 | Cheers to @erinSnPAI , @Andzelika Balyseviene , @Aleksandra Deis @PaulBruce, @Aisha Javed, @Temitope Oladokun, @K.S., @Labiba, @Shudipto Trafder, @Marwa, @iso., @Yemi, @Alexander Villasoto, @Ingus Terbets, @Agata [OR, USA], @Hung, @Eman, @Aarthi Alagammai, @Olivia, @Sharim, @Hung, @Varez.W, @THIYAGARAJAN R, @LauraT , @Anna Scott, @Nirupama Singh, @Frida, @Lisa Crossman, @Stark, @Samuela Anastasi, @nabhanpv, @Nana Aba T, @geekykant, @Shaam, @EPR, @Anshu Trivedi, @George Christopoulos, @Vivank Sharma, @Heather A, @Joyce Obi, @Aditya kumar, @vivek, @Florence Njeri, @Jess, @J. Luis Samper, @gfred, @Erika Yoon 36 | -------------------------------------------------------------------------------- /DAYS_PROGRESS/DAY39.md: -------------------------------------------------------------------------------- 1 | 2 | 3 | DAY 39: 4 | ======= 5 | 6 | ACTIVITIES 7 | --------------------------------------------------------------------------------------------------------------- 8 | ### MEETUP, PYBYTES, NLP/CNN 9 | 10 | 1. Rolled out PYBYTES: looking at loss function, and data serialization - serialization is behind our ability to save model states and be able to use later or use in transfer learning. 11 | Cheers to @Aarthi Alagammai, @Lina, @Ingus Terbets, @Kapil Chandorikar. More contributions coming, we are not through yet. 12 | 13 | 2. Joined the meetup of #sg_real_world_ai. 
Had some fun, talking about real world privacy, federated learning, GANs. Cheers! @Ebinbin Ajagun @Evi @Biswajit Banerjee @Ankur Bhatia @Md. Mahedi Hasan Riday @Oudarjya Sen Sarma 14 | 15 | 3. Still exploring NLP and CNN, and building a corpus. 16 | 17 | 4. Read a little of the "Mathematics of Language". http://linguistics.ucla.edu/people/Kracht/html/formal.alt.pdf 18 | 19 | 20 | REPO FOR 60DAYSOFUDACITY: 21 | ------------------------- 22 | https://github.com/ayivima/AI-SURFS/ 23 | 24 | PROGRESS: 25 | --------- 26 | https://github.com/ayivima/AI-SURFS/blob/master/DAYS_PROGRESS 27 | 28 | ENCOURAGEMENTS 29 | -------------- 30 | Cheers to @erinSnPAI , @Andzelika Balyseviene , @Aleksandra Deis @PaulBruce, @Aisha Javed, @Temitope Oladokun, @K.S., @Labiba, @Shudipto Trafder, @Marwa, @iso., @Yemi, @Alexander Villasoto, @Ingus Terbets, @Agata [OR, USA], @Hung, @Eman, @Aarthi Alagammai, @Olivia, @Sharim, @Hung, @Varez.W, @THIYAGARAJAN R, @LauraT , @Anna Scott, @Nirupama Singh, @Frida, @Lisa Crossman, @Stark, @Samuela Anastasi, @nabhanpv, @Nana Aba T, @geekykant, @Shaam, @EPR, @Anshu Trivedi, @George Christopoulos, @Vivank Sharma, @Heather A, @Joyce Obi, @Aditya kumar, @vivek, @Florence Njeri, @Jess, @J. Luis Samper, @gfred, @Erika Yoon 31 | -------------------------------------------------------------------------------- /DAYS_PROGRESS/DAY40.md: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | DAY 40: 5 | ======= 6 | 7 | ACTIVITIES 8 | --------------------------------------------------------------------------------------------------------------- 9 | ### MEETUP, GROUP PROJECT 10 | 11 | 1. #sg_novice-ai meetup for showcase project. It was a lengthy one, well over 2 hours. We made headway with some data exploration. 12 | 13 | 2. Still exploring NLP and CNN, and building a corpus. It feels crazier each new day. But, I am poised for the outcome.
14 | 15 | 16 | 17 | REPO FOR 60DAYSOFUDACITY: 18 | ------------------------- 19 | https://github.com/ayivima/AI-SURFS/ 20 | 21 | PROGRESS: 22 | --------- 23 | https://github.com/ayivima/AI-SURFS/blob/master/DAYS_PROGRESS 24 | 25 | ENCOURAGEMENTS 26 | -------------- 27 | Cheers to @erinSnPAI , @Andzelika Balyseviene , @Aleksandra Deis @PaulBruce, @Aisha Javed, @Temitope Oladokun, @K.S., @Labiba, @Shudipto Trafder, @Marwa, @iso., @Yemi, @Alexander Villasoto, @Ingus Terbets, @Agata [OR, USA], @Hung, @Eman, @Aarthi Alagammai, @Olivia, @Sharim, @Hung, @Varez.W, @THIYAGARAJAN R, @LauraT , @Anna Scott, @Nirupama Singh, @Frida, @Lisa Crossman, @Stark, @Samuela Anastasi, @nabhanpv, @Nana Aba T, @geekykant, @Shaam, @EPR, @Anshu Trivedi, @George Christopoulos, @Vivank Sharma, @Heather A, @Joyce Obi, @Aditya kumar, @vivek, @Florence Njeri, @Jess, @J. Luis Samper, @gfred, @Erika Yoon 28 | -------------------------------------------------------------------------------- /DAYS_PROGRESS/DAY41.md: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | DAY 41: 6 | ======= 7 | 8 | ACTIVITIES 9 | --------------------------------------------------------------------------------------------------------------- 10 | ### MEETUP, GROUP PROJECT 11 | 12 | 1. Meetup @ #sg_novice-ai 13 | 14 | 2. We started our group project implementations. 
15 | 16 | 17 | 18 | REPO FOR 60DAYSOFUDACITY: 19 | ------------------------- 20 | https://github.com/ayivima/AI-SURFS/ 21 | 22 | PROGRESS: 23 | --------- 24 | https://github.com/ayivima/AI-SURFS/blob/master/DAYS_PROGRESS 25 | 26 | ENCOURAGEMENTS 27 | -------------- 28 | Cheers to @erinSnPAI , @Andzelika Balyseviene , @Aleksandra Deis @PaulBruce, @Aisha Javed, @Temitope Oladokun, @K.S., @Labiba, @Shudipto Trafder, @Marwa, @iso., @Yemi, @Alexander Villasoto, @Ingus Terbets, @Agata [OR, USA], @Hung, @Eman, @Aarthi Alagammai, @Olivia, @Sharim, @Hung, @Varez.W, @THIYAGARAJAN R, @LauraT , @Anna Scott, @Nirupama Singh, @Frida, @Lisa Crossman, @Stark, @Samuela Anastasi, @nabhanpv, @Nana Aba T, @geekykant, @Shaam, @EPR, @Anshu Trivedi, @George Christopoulos, @Vivank Sharma, @Heather A, @Joyce Obi, @Aditya kumar, @vivek, @Florence Njeri, @Jess, @J. Luis Samper, @gfred, @Erika Yoon 29 | -------------------------------------------------------------------------------- /DAYS_PROGRESS/DAY42.md: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | DAY 42: 5 | ======= 6 | 7 | ACTIVITIES 8 | --------------------------------------------------------------------------------------------------------------- 9 | ### CNN NLP PROJECT, EXERCISES FOR REINFORCEMENT OF COURSE CONCEPTS 10 | 11 | 1. Group Showcase project underway. 12 | 13 | 2. Spent time on my "crazy" CNN NLP Project. Investigated Semantic Segmentation. Wrapping my head around how the concepts could be transferred to NLP. 14 | I had the opportunity to read about FCNs: https://people.eecs.berkeley.edu/~jonlong/long_shelhamer_fcn.pdf 15 | 16 | 3.
I went through some of the course exercises...for reinforcement 17 | 18 | 19 | 20 | 21 | REPO FOR 60DAYSOFUDACITY: 22 | ------------------------- 23 | https://github.com/ayivima/AI-SURFS/ 24 | 25 | PROGRESS: 26 | --------- 27 | https://github.com/ayivima/AI-SURFS/blob/master/DAYS_PROGRESS 28 | 29 | ENCOURAGEMENTS 30 | -------------- 31 | Cheers to @erinSnPAI , @Andzelika Balyseviene , @Aleksandra Deis @PaulBruce, @Aisha Javed, @Temitope Oladokun, @K.S., @Labiba, @Shudipto Trafder, @Marwa, @iso., @Yemi, @Alexander Villasoto, @Ingus Terbets, @Agata [OR, USA], @Hung, @Eman, @Aarthi Alagammai, @Olivia, @Sharim, @Hung, @Varez.W, @THIYAGARAJAN R, @LauraT , @Anna Scott, @Nirupama Singh, @Frida, @Lisa Crossman, @Stark, @Samuela Anastasi, @nabhanpv, @Nana Aba T, @geekykant, @Shaam, @EPR, @Anshu Trivedi, @George Christopoulos, @Vivank Sharma, @Heather A, @Joyce Obi, @Aditya kumar, @vivek, @Florence Njeri, @Jess, @J. Luis Samper, @gfred, @Erika Yoon 32 | -------------------------------------------------------------------------------- /DAYS_PROGRESS/DAY43.md: -------------------------------------------------------------------------------- 1 | 2 | 3 | DAY 43: 4 | ======= 5 | 6 | ACTIVITIES 7 | --------------------------------------------------------------------------------------------------------------- 8 | ### SHOWCASE CHALLENGE PLANNING, CNN NLP PROJECT 9 | 10 | 1. We had a lengthy planning session and discussion for our data strategy. Cheers! to @Olivia, @Ingus Terbets, @Hung, @Alexander Villasoto, @Stark, @George Christopoulos. It was a lot of brainstorming for a delicate dataset. 11 | 12 | 2. My CNN NLP Project is still in view... 13 | 14 | 3. Continued my reinforcement sweep over the course. 
15 | 16 | 17 | 18 | 19 | REPO FOR 60DAYSOFUDACITY: 20 | ------------------------- 21 | https://github.com/ayivima/AI-SURFS/ 22 | 23 | PROGRESS: 24 | --------- 25 | https://github.com/ayivima/AI-SURFS/blob/master/DAYS_PROGRESS 26 | 27 | ENCOURAGEMENTS 28 | -------------- 29 | Cheers to @Suparna S Nair, @erinSnPAI, @Andzelika Balyseviene , @Aleksandra Deis @PaulBruce, @Aisha Javed, @Temitope Oladokun, @K.S., @Labiba, @Shudipto Trafder, @Marwa, @iso., @Yemi, @Alexander Villasoto, @Ingus Terbets, @Agata [OR, USA], @Hung, @Eman, @Aarthi Alagammai, @Olivia, @Sharim, @Hung, @Varez.W, @THIYAGARAJAN R, @LauraT , @Anna Scott, @Nirupama Singh, @Frida, @Lisa Crossman, @Stark, @Samuela Anastasi, @nabhanpv, @Nana Aba T, @geekykant, @Shaam, @EPR, @Anshu Trivedi, @George Christopoulos, @Vivank Sharma, @Heather A, @Joyce Obi, @Aditya kumar, @vivek, @Florence Njeri, @Jess, @J. Luis Samper, @gfred, @Erika Yoon 30 | -------------------------------------------------------------------------------- /DAYS_PROGRESS/DAY44.md: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | DAY 44: 5 | ======= 6 | 7 | ACTIVITIES 8 | --------------------------------------------------------------------------------------------------------------- 9 | GROUP SHOWCASE CHALLENGE CONTINUES, CNN NLP PROJECT 10 | 11 | 1. We had a virtual meetup...It was actually a stand-up...we are using a customized Scrum...Haha...To review our progress, and plan the next phases. We are making headway. 12 | @Stark, @George Christopoulos, @Ingus Terbets, @Alexander Villasoto, @ayivima, @Olivia, @Hung, @Marwa, @Shudipto Trafder, @Aarthi Alagammai, @Kapil Chandorikar, @Archit, @Oudarjya Sen Sarma 13 | 14 | 2. My CNN NLP Project is still in view... 15 | 16 | 3. Continued my reinforcement sweep over the course... 
17 | 18 | 19 | 20 | 21 | REPO FOR 60DAYSOFUDACITY: 22 | ------------------------- 23 | https://github.com/ayivima/AI-SURFS/ 24 | 25 | PROGRESS: 26 | --------- 27 | https://github.com/ayivima/AI-SURFS/blob/master/DAYS_PROGRESS 28 | 29 | ENCOURAGEMENTS 30 | -------------- 31 | Cheers to @Suparna S Nair, @erinSnPAI, @Andzelika Balyseviene , @Aleksandra Deis @PaulBruce, @Aisha Javed, @Temitope Oladokun, @K.S., @Labiba, @Shudipto Trafder, @Marwa, @iso., @Yemi, @Alexander Villasoto, @Ingus Terbets, @Agata [OR, USA], @Hung, @Eman, @Aarthi Alagammai, @Olivia, @Sharim, @Hung, @Varez.W, @THIYAGARAJAN R, @LauraT , @Anna Scott, @Nirupama Singh, @Frida, @Lisa Crossman, @Stark, @Samuela Anastasi, @nabhanpv, @Nana Aba T, @geekykant, @Shaam, @EPR, @Anshu Trivedi, @George Christopoulos, @Vivank Sharma, @Heather A, @Joyce Obi, @Aditya kumar, @vivek, @Florence Njeri, @Jess, @J. Luis Samper, @gfred, @Erika Yoon 32 | -------------------------------------------------------------------------------- /DAYS_PROGRESS/DAY45.md: -------------------------------------------------------------------------------- 1 | 2 | DAY 45: 3 | ======= 4 | ACTIVITIES 5 | --------------------------------------------------------------------------------------------------------------- 6 | SEVERAL MEETINGS, SESSIONS OF BRAINSTORMING, AND STAND-UP FOR GROUP SHOWCASE CHALLENGE 7 | 1. Progress upon progress at #sg_novice-ai. And, we have been having sessions upon sessions of brainstorming since the beginning of the GMT Day...Haha...And we ended with a Stand-up! 8 | Cheers! @Stark, @George Christopoulos, @Ingus Terbets, @Alexander Villasoto, @Olivia, @Hung, @Marwa, @Shudipto Trafder, @Aarthi Alagammai, @Kapil Chandorikar, @Archit, @cibaca, @Oudarjya Sen Sarma, @Pooja Vinod 9 | NB. The entire day was for our showcase challenge project!!!
10 | REPO FOR 60DAYSOFUDACITY: 11 | ------------------------- 12 | https://github.com/ayivima/AI-SURFS/ 13 | PROGRESS: 14 | --------- 15 | https://github.com/ayivima/AI-SURFS/blob/master/DAYS_PROGRESS 16 | ENCOURAGEMENTS 17 | -------------- 18 | Cheers to @Suparna S Nair, @erinSnPAI, @Andzelika Balyseviene , @Aleksandra Deis @PaulBruce, @Aisha Javed, @Temitope Oladokun, @K.S., @Labiba, @Shudipto Trafder, @Marwa, @iso., @Yemi, @Alexander Villasoto, @Ingus Terbets, @Agata [OR, USA], @Hung, @Eman, @Aarthi Alagammai, @Olivia, @Sharim, @Hung, @Varez.W, @THIYAGARAJAN R, @LauraT , @Anna Scott, @Nirupama Singh, @Frida, @Lisa Crossman, @Stark, @Samuela Anastasi, @nabhanpv, @Nana Aba T, @geekykant, @Shaam, @EPR, @Anshu Trivedi, @George Christopoulos, @Vivank Sharma, @Heather A, @Joyce Obi, @Aditya kumar, @vivek, @Florence Njeri, @Jess, @J. Luis Samper, @gfred, @Erika Yoon 19 | -------------------------------------------------------------------------------- /DAYS_PROGRESS/DAY46.md: -------------------------------------------------------------------------------- 1 | 2 | DAY 46: 3 | ======= 4 | 5 | ACTIVITIES 6 | --------------------------------------------------------------------------------------------------------------- 7 | GROUP SHOWCASE CHALLENGE PROJECT CONTINUES 8 | 9 | 1. We had some brainstorming sessions for staging the model #sg_novice-ai. 10 | 11 | 2. Special shout out to @George Christopoulus for leading us in implementing and reaching our preliminary model staging goal - the model was really dumb yesterday :smile:. And, to @Alexander Villasoto, and @Ingus Terbets for the tireless efforts at Data Control. 12 | 13 | Long live #sg_novice-ai!!! 
14 | 15 | 16 | REPO FOR 60DAYSOFUDACITY: 17 | ------------------------- 18 | https://github.com/ayivima/AI-SURFS/ 19 | 20 | PROGRESS: 21 | --------- 22 | https://github.com/ayivima/AI-SURFS/blob/master/DAYS_PROGRESS 23 | 24 | 25 | ENCOURAGEMENTS 26 | -------------- 27 | Cheers to @Suparna S Nair, @erinSnPAI, @Andzelika Balyseviene , @Aleksandra Deis @PaulBruce, @Aisha Javed, @Temitope Oladokun, @K.S., @Labiba, @Shudipto Trafder, @Marwa, @iso., @Yemi, @Alexander Villasoto, @Ingus Terbets, @Agata [OR, USA], @Hung, @Eman, @Aarthi Alagammai, @Olivia, @Sharim, @Hung, @Varez.W, @THIYAGARAJAN R, @LauraT , @Anna Scott, @Nirupama Singh, @Frida, @Lisa Crossman, @Stark, @Samuela Anastasi, @nabhanpv, @Nana Aba T, @geekykant, @Shaam, @EPR, @Anshu Trivedi, @George Christopoulos, @Vivank Sharma, @Heather A, @Joyce Obi, @Aditya kumar, @vivek, @Florence Njeri, @Jess, @J. Luis Samper, @gfred, @Erika Yoon 28 | -------------------------------------------------------------------------------- /DAYS_PROGRESS/DAY47.md: -------------------------------------------------------------------------------- 1 | 2 | 3 | DAY 47: 4 | ======= 5 | 6 | ACTIVITIES 7 | --------------------------------------------------------------------------------------------------------------- 8 | GROUP SHOWCASE CHALLENGE PROJECT CONTINUES 9 | 10 | 1. 2 meetups for Documentation Team #sg_novice-ai. @cibaca @Pooja Vinod @Rosa Paccotacya @Stark @ayivima @Hung @Olivia 11 | 12 | 2. Brainstorming sessions for model strategy. @Alexander Villasoto @Ingus Terbets @Stark @ayivima @Hung @Olivia @George Christopoulus 13 | 14 | 3. Discussions for second model team spearheaded by @Anju Mercian. 15 | 16 | 4. Worked on preprocessing with @Aarthi Alagammai. 17 | 18 | 19 | Long live #sg_novice-ai!!! 
20 | 21 | 22 | REPO FOR 60DAYSOFUDACITY: 23 | ------------------------- 24 | https://github.com/ayivima/AI-SURFS/ 25 | 26 | PROGRESS: 27 | --------- 28 | https://github.com/ayivima/AI-SURFS/blob/master/DAYS_PROGRESS 29 | 30 | 31 | ENCOURAGEMENTS 32 | -------------- 33 | Cheers to @Suparna S Nair, @erinSnPAI, @Andzelika Balyseviene , @Aleksandra Deis @PaulBruce, @Aisha Javed, @Temitope Oladokun, @K.S., @Labiba, @Shudipto Trafder, @Marwa, @iso., @Yemi, @Alexander Villasoto, @Ingus Terbets, @Agata [OR, USA], @Hung, @Eman, @Aarthi Alagammai, @Olivia, @Sharim, @Hung, @Varez.W, @THIYAGARAJAN R, @LauraT , @Anna Scott, @Nirupama Singh, @Frida, @Lisa Crossman, @Stark, @Samuela Anastasi, @nabhanpv, @Nana Aba T, @geekykant, @Shaam, @EPR, @Anshu Trivedi, @George Christopoulos, @Vivank Sharma, @Heather A, @Joyce Obi, @Aditya kumar, @vivek, @Florence Njeri, @Jess, @J. Luis Samper, @gfred, @Erika Yoon 34 | -------------------------------------------------------------------------------- /DAYS_PROGRESS/DAY48.md: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | DAY 48: 7 | ======= 8 | 9 | ACTIVITIES 10 | --------------------------------------------------------------------------------------------------------------- 11 | GROUP SHOWCASE CHALLENGE PROJECT CONTINUES 12 | 13 | 1. We had multiple stand-up sessions for various sub teams at #sg_novice-ai. 14 | 15 | 2. Took part in implementations for model. 16 | 17 | 18 | Long live #sg_novice-ai!!! 
19 | 20 | 21 | REPO FOR 60DAYSOFUDACITY: 22 | ------------------------- 23 | https://github.com/ayivima/AI-SURFS/ 24 | 25 | PROGRESS: 26 | --------- 27 | https://github.com/ayivima/AI-SURFS/blob/master/DAYS_PROGRESS 28 | 29 | 30 | ENCOURAGEMENTS 31 | -------------- 32 | Cheers to @Suparna S Nair, @erinSnPAI, @Andzelika Balyseviene , @Aleksandra Deis @PaulBruce, @Aisha Javed, @Temitope Oladokun, @K.S., @Labiba, @Shudipto Trafder, @Marwa, @iso., @Yemi, @Alexander Villasoto, @Ingus Terbets, @Agata [OR, USA], @Hung, @Eman, @Aarthi Alagammai, @Olivia, @Sharim, @Hung, @Varez.W, @THIYAGARAJAN R, @LauraT , @Anna Scott, @Nirupama Singh, @Frida, @Lisa Crossman, @Stark, @Samuela Anastasi, @nabhanpv, @Nana Aba T, @geekykant, @Shaam, @EPR, @Anshu Trivedi, @George Christopoulos, @Vivank Sharma, @Heather A, @Joyce Obi, @Aditya kumar, @vivek, @Florence Njeri, @Jess, @J. Luis Samper, @gfred, @Erika Yoon 33 | -------------------------------------------------------------------------------- /DAYS_PROGRESS/DAY49.md: -------------------------------------------------------------------------------- 1 | 2 | 3 | DAY 49: 4 | ======= 5 | 6 | ACTIVITIES 7 | --------------------------------------------------------------------------------------------------------------- 8 | GROUP SHOWCASE CHALLENGE PROJECT CONTINUES 9 | 10 | 1. We had a general stand-up to wrap up for the showcase challenge #sg_novice-ai. 11 | 12 | 2. We had a prior stand-up for documentation sub-team #sg_novice-ai. 13 | 14 | 3. Worked on select documentation sections. 15 | 16 | 17 | 18 | 19 | Long live #sg_novice-ai!!! 
20 | 21 | 22 | REPO FOR 60DAYSOFUDACITY: 23 | ------------------------- 24 | https://github.com/ayivima/AI-SURFS/ 25 | 26 | PROGRESS: 27 | --------- 28 | https://github.com/ayivima/AI-SURFS/blob/master/DAYS_PROGRESS 29 | 30 | 31 | ENCOURAGEMENTS 32 | -------------- 33 | Cheers to @Suparna S Nair, @erinSnPAI, @Andzelika Balyseviene , @Aleksandra Deis @PaulBruce, @Aisha Javed, @Temitope Oladokun, @K.S., @Labiba, @Shudipto Trafder, @Marwa, @iso., @Yemi, @Alexander Villasoto, @Ingus Terbets, @Agata [OR, USA], @Hung, @Eman, @Aarthi Alagammai, @Olivia, @Sharim, @Hung, @Varez.W, @THIYAGARAJAN R, @LauraT , @Anna Scott, @Nirupama Singh, @Frida, @Lisa Crossman, @Stark, @Samuela Anastasi, @nabhanpv, @Nana Aba T, @geekykant, @Shaam, @EPR, @Anshu Trivedi, @George Christopoulos, @Vivank Sharma, @Heather A, @Joyce Obi, @Aditya kumar, @vivek, @Florence Njeri, @Jess, @J. Luis Samper, @gfred, @Erika Yoon 34 | -------------------------------------------------------------------------------- /DAYS_PROGRESS/DAY50.md: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | DAY 50: 5 | ======= 6 | 7 | ACTIVITIES 8 | --------------------------------------------------------------------------------------------------------------- 9 | GROUP SHOWCASE CHALLENGE PROJECT CONTINUES 10 | 11 | 1. We are wrapping up for the showcase challenge submission #sg_novice-ai. We had climaxing brainstorming sessions within sub-teams. 12 | 13 | 2. Made contributions to documentation sub-team #sg_novice-ai. 14 | 15 | 3. Worked on security functionality for our model and data. 16 | 17 | 4. Great to hit the 50th day mark too...finally! 
18 | 19 | 20 | REPO FOR 60DAYSOFUDACITY: 21 | ------------------------- 22 | https://github.com/ayivima/AI-SURFS/ 23 | 24 | PROGRESS: 25 | --------- 26 | https://github.com/ayivima/AI-SURFS/blob/master/DAYS_PROGRESS 27 | 28 | -------------------------------------------------------------------------------- /DAYS_PROGRESS/DAY51.md: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | DAY 51: 6 | ======= 7 | 8 | ACTIVITIES 9 | --------------------------------------------------------------------------------------------------------------- 10 | GROUP SHOWCASE CHALLENGE PROJECT CONTINUES 11 | 12 | 1. Been working on final touches for our project across teams. 13 | 14 | 2. We finally submitted our project. It's been an amazing journey with an amazing team. Lots of love to every single soul in #sg_novice-ai - @Ingus Terbets @Anju Mercian, @Pooja Vinod, @Alexander Villasoto, @Olivia, @Hung, @Marwa, @Shudipto Trafder, 15 | @Aarthi Alagammai, @Agata [OR, USA], @Kapil Chandorikar, @Oudarjya Sen Sarma @Rosa Paccotacya @George Christopoulos @Stark. We made it!!!! The future is exciting!!!! 16 | 17 | 3. It was a roller-coaster, but fulfilling day!!! 18 | 19 | 20 | REPO FOR 60DAYSOFUDACITY: 21 | ------------------------- 22 | https://github.com/ayivima/AI-SURFS/ 23 | 24 | PROGRESS: 25 | --------- 26 | https://github.com/ayivima/AI-SURFS/blob/master/DAYS_PROGRESS 27 | -------------------------------------------------------------------------------- /DAYS_PROGRESS/DAY52.md: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | DAY 52: 7 | ======= 8 | 9 | ACTIVITIES 10 | --------------------------------------------------------------------------------------------------------------- 11 | RELAXATION AND REGURGITATION OF COURSE CONTENT 12 | 13 | 1. Had a second look at Encrypted Deep Learning, and explored some external resources on Cryptography. 
Sample --> http://library.open.oregonstate.edu/cryptography/chapter/chapter-3-secret-sharing/ 14 | 15 | 2. Did some labs in IBM cloud on Keras...As part of cooling off mode. 16 | 17 | 18 | REPO FOR 60DAYSOFUDACITY: 19 | ------------------------- 20 | https://github.com/ayivima/AI-SURFS/ 21 | 22 | PROGRESS: 23 | --------- 24 | https://github.com/ayivima/AI-SURFS/blob/master/DAYS_PROGRESS 25 | 26 | 27 | -------------------------------------------------------------------------------- /DAYS_PROGRESS/DAY53.md: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | 7 | DAY 53: 8 | ======= 9 | 10 | ACTIVITIES 11 | --------------------------------------------------------------------------------------------------------------- 12 | RELAXATION AND REGURGITATION OF COURSE CONTENT 13 | 14 | 1. Explored PySyft Source code...exploring the logic behind the implementations 15 | 16 | 2. Completed some exercises on Keras as part of an IBM course. 17 | 18 | 3. Took time to relax...and enjoy the good tidings floating in the air about our project #sg_novice-ai 19 | 20 | 21 | REPO FOR 60DAYSOFUDACITY: 22 | ------------------------- 23 | https://github.com/ayivima/AI-SURFS/ 24 | 25 | PROGRESS: 26 | --------- 27 | https://github.com/ayivima/AI-SURFS/blob/master/DAYS_PROGRESS 28 | 29 | 30 | -------------------------------------------------------------------------------- /DAYS_PROGRESS/DAY54.md: -------------------------------------------------------------------------------- 1 | 2 | DAY 54 3 | ====== 4 | 5 | 1. Worked on Logistic Regression multi classification problems using Scikit-Learn. 6 | 7 | 2. Worked on keras in IBM Cloud as part of an IBM AI course 8 | 9 | 3. 
Had a meetup in #sg_novice-ai 10 | -------------------------------------------------------------------------------- /DAYS_PROGRESS/DAY55.md: -------------------------------------------------------------------------------- 1 | 2 | DAY 55 3 | ====== 4 | 5 | I had a tragic miss yesterday. 6 | 7 | 1. Today was graced with a webinar on Deep Learning with Pytorch by @Labiba, @K.S., @Pooja Vinod 8 | 9 | 2. Worked on my ML course, and wrote extensions for sklearn as part of course exercises. 10 | 11 | 3. Special shout out to all those who have made it through the entire 60-Day challenge. It's a remarkable milestone to celebrate. 12 | -------------------------------------------------------------------------------- /DAYS_PROGRESS/DAY56.md: -------------------------------------------------------------------------------- 1 | 2 | DAY 56 3 | ====== 4 | 5 | 1. Continued with my extensions for sklearn. 6 | 7 | 2. Wrote a new script to retrieve new data for the corpus for my NLP with CNN project 8 | 9 | 3. Came across this meme and I thought I should share it: https://www.linkedin.com/posts/deeplearningai_aifun-activity-6571813392447795202-FoxG 10 | -------------------------------------------------------------------------------- /DAYS_PROGRESS/DAY57.md: -------------------------------------------------------------------------------- 1 | 2 | 3 | DAY 57 4 | ====== 5 | 6 | 1. I attended the second of our end-of-course webinars on SPAI - "Differential Privacy" - with presentations by @Alexander Villasoto @Ingus Terbets. 7 | 8 | 2. Continued with writing extensions for sklearn as part of an ML course exercise. 9 | 10 | 3. Making some headway on my corpus for NLP with CNN stunt. 11 | 12 | *NB: Forgot to mention that we worked on our social poll posting yesterday. Check our project here: https://secureprivataischolar.slack.com/archives/CMHLLUAE5/p1566829378175700 13 | 14 | S P E C I A L C O N G R A T U L A T I O N S ! to all those who completed the 60 DAYS OF UDACITY CHALLENGE. 
15 | -------------------------------------------------------------------------------- /DAYS_PROGRESS/DAY58.md: -------------------------------------------------------------------------------- 1 | 2 | 3 | DAY 58 4 | ====== 5 | 6 | 1. I attended the final of our end-of-course webinars on SPAI - "The future of Secure Federated learning and Encrypted Deep Learning" - presented by @Anju Mercian @Shudipto Trafder @Oudarjya Sen Sarma. 7 | 8 | 9 | 2. Special Shout Out to all our presenters for all three enlightening webinars on 25 Aug, 27 Aug, and today, 28 Aug: 10 | 11 | @Pooja Vinod, @Labiba, @K.S - Deep Learning with Pytorch. 12 | 13 | @Alexander Villasoto, @Ingus Terbets - Differential Privacy 14 | 15 | @Anju Mercian @Shudipto Trafder @Oudarjya Sen Sarma - The future of Secure Federated learning and Encrypted Deep Learning 16 | 17 | 18 | 3. Special Thank You to @Yemi for the honorable recognition certificate. I dedicate this to you @Suparna S Nair and @Marwa; Thank you very much. 19 | 20 | 21 | S P E C I A L C O N G R A T U L A T I O N S !...to all those who completed the 60 DAYS OF UDACITY CHALLENGE. 22 | 23 | -------------------------------------------------------------------------------- /DAYS_PROGRESS/DAY59.md: -------------------------------------------------------------------------------- 1 | 2 | DAY 59 3 | ====== 4 | 5 | ONE DAY MORE... 6 | 7 | 1. Wrapping up on the course. Started a final run through, to reinforce the concepts even more. 8 | 9 | 2. I have been working on keras as part of an IBM AI course. 10 | 11 | 3. And, continued writing machine learning extensions for sklearn as part of another ML course. 12 | 13 | S P E C I A L C O N G R A T U L A T I O N S ! to the winning teams from the showcase challenge, and to all those who completed the 60 DAYS OF UDACITY CHALLENGE. 
14 | -------------------------------------------------------------------------------- /DAYS_PROGRESS/README.md: -------------------------------------------------------------------------------- 1 | 2 | 09/07/2019 DAY12 -> https://github.com/ayivima/AI-SURFS/blob/master/DAYS_PROGRESS/DAY12.md 3 | 4 | 10/07/2019 DAY13 -> https://github.com/ayivima/AI-SURFS/blob/master/DAYS_PROGRESS/DAY13.md 5 | 6 | 11/07/2019 DAY14 -> https://github.com/ayivima/AI-SURFS/blob/master/DAYS_PROGRESS/DAY14.md 7 | 8 | 12/07/2019 DAY15 -> https://github.com/ayivima/AI-SURFS/blob/master/DAYS_PROGRESS/DAY15.md 9 | 10 | 13/07/2019 DAY16 -> https://github.com/ayivima/AI-SURFS/blob/master/DAYS_PROGRESS/DAY16.md 11 | 12 | 14/07/2019 DAY17 -> https://github.com/ayivima/AI-SURFS/blob/master/DAYS_PROGRESS/DAY17.md 13 | 14 | 15/07/2019 DAY18 -> https://github.com/ayivima/AI-SURFS/blob/master/DAYS_PROGRESS/DAY18.md 15 | 16 | 16/07/2019 DAY19 -> https://github.com/ayivima/AI-SURFS/blob/master/DAYS_PROGRESS/DAY19.md 17 | 18 | 17/07/2019 DAY20 -> https://github.com/ayivima/AI-SURFS/blob/master/DAYS_PROGRESS/DAY20.md 19 | 20 | 18/07/2019 DAY21 -> https://github.com/ayivima/AI-SURFS/blob/master/DAYS_PROGRESS/DAY21.md 21 | 22 | 19/07/2019 DAY22 -> https://github.com/ayivima/AI-SURFS/blob/master/DAYS_PROGRESS/DAY22.md 23 | 24 | 21/07/2019 DAY23 -> https://github.com/ayivima/AI-SURFS/blob/master/DAYS_PROGRESS/DAY23.md 25 | 26 | 22/07/2019 DAY24 -> https://github.com/ayivima/AI-SURFS/blob/master/DAYS_PROGRESS/DAY24.md 27 | 28 | 23/07/2019 DAY25 -> https://github.com/ayivima/AI-SURFS/blob/master/DAYS_PROGRESS/DAY25.md 29 | 30 | 24/07/2019 DAY26 -> https://github.com/ayivima/AI-SURFS/blob/master/DAYS_PROGRESS/DAY26.md 31 | 32 | 25/07/2019 DAY27 -> https://github.com/ayivima/AI-SURFS/blob/master/DAYS_PROGRESS/DAY27.md 33 | 34 | 26/07/2019 DAY28 -> https://github.com/ayivima/AI-SURFS/blob/master/DAYS_PROGRESS/DAY28.md 35 | 36 | 27/07/2019 DAY29 -> 
https://github.com/ayivima/AI-SURFS/blob/master/DAYS_PROGRESS/DAY29.md 37 | 38 | 29/07/2019 DAY30 -> https://github.com/ayivima/AI-SURFS/blob/master/DAYS_PROGRESS/DAY30.md 39 | 40 | 30/07/2019 DAY31 -> https://github.com/ayivima/AI-SURFS/blob/master/DAYS_PROGRESS/DAY31.md 41 | 42 | 31/07/2019 DAY32 -> https://github.com/ayivima/AI-SURFS/blob/master/DAYS_PROGRESS/DAY32.md 43 | 44 | 1/08/2019 DAY33 -> https://github.com/ayivima/AI-SURFS/blob/master/DAYS_PROGRESS/DAY33.md 45 | 46 | 2/08/2019 DAY34 -> https://github.com/ayivima/AI-SURFS/blob/master/DAYS_PROGRESS/DAY34.md 47 | 48 | 3/08/2019 DAY35 -> https://github.com/ayivima/AI-SURFS/blob/master/DAYS_PROGRESS/DAY35.md 49 | 50 | 4/08/2019 DAY36 -> https://github.com/ayivima/AI-SURFS/blob/master/DAYS_PROGRESS/DAY36.md 51 | 52 | 5/08/2019 DAY37 -> https://github.com/ayivima/AI-SURFS/blob/master/DAYS_PROGRESS/DAY37.md 53 | 54 | 6/08/2019 DAY38 -> https://github.com/ayivima/AI-SURFS/blob/master/DAYS_PROGRESS/DAY38.md 55 | 56 | 16/08/2019 DAY48 -> https://github.com/ayivima/AI-SURFS/blob/master/DAYS_PROGRESS/DAY48.md 57 | 58 | 18/08/2019 DAY49 -> https://github.com/ayivima/AI-SURFS/blob/master/DAYS_PROGRESS/DAY49.md 59 | -------------------------------------------------------------------------------- /DAYS_PROGRESS/imgs/README.md: -------------------------------------------------------------------------------- 1 | 2 | -------------------------------------------------------------------------------- /DAYS_PROGRESS/imgs/wrapper_shot.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ayivima/AI-SURFS/1b8f731cd38b7b3e8091e1ff8bb7929413ef739c/DAYS_PROGRESS/imgs/wrapper_shot.png -------------------------------------------------------------------------------- /DAYS_PROGRESS/imgs/wrapper_shot2.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/ayivima/AI-SURFS/1b8f731cd38b7b3e8091e1ff8bb7929413ef739c/DAYS_PROGRESS/imgs/wrapper_shot2.png -------------------------------------------------------------------------------- /DAYS_PROGRESS/imgs/wrapper_shot3.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ayivima/AI-SURFS/1b8f731cd38b7b3e8091e1ff8bb7929413ef739c/DAYS_PROGRESS/imgs/wrapper_shot3.png -------------------------------------------------------------------------------- /Differential_Privacy/Diff_Attack.md: -------------------------------------------------------------------------------- 1 | DIFFERENCING ATTACK: A SIMPLE OVERVIEW 2 | ====================================== 3 | 4 | At a very basic level, a DIFFERENCING ATTACK is when you use the DIFFERENCE between the results of two queries to find out information about an individual. 5 | 6 | FOR EXAMPLE: 7 | Consider the database of earnings of four people below. Assume we want to find out how much Jane earns, but we cannot access Jane's data directly. We can instead find out how the sum of earnings changes with and without Jane. 8 | 9 | ``` 10 | database = {"Jane":10000, "Doe":2000, "John":2500, "Dovy":3000} 11 | 12 | sum1 = 0 13 | for key in database: 14 | sum1 += database.get(key) 15 | 16 | sum2 = 0 17 | for key in database: 18 | if key!="Jane": 19 | sum2 += database.get(key) 20 | 21 | print("Total Sum: ", sum1) 22 | print("Sum without Jane: ", sum2) 23 | print("Earnings of Jane: ", sum1-sum2) 24 | ``` 25 | OUTPUT: 26 | ``` 27 | Total Sum: 17500 28 | Sum without Jane: 7500 29 | Earnings of Jane: 10000 30 | ``` 31 | That simple...And, we just found out Jane's earnings without querying them directly: rightly, 10000. This is how the DIFFERENCING ATTACK works.
32 | If the data about people are different enough, when we take one person out, the query result can change and we can leverage this change to find out information about a targeted individual, without having direct access to the individual's data. 33 | 34 | -------------------------------------------------------------------------------- /Differential_Privacy/On_the_limits_of_DP.md: -------------------------------------------------------------------------------- 1 | 2 | ON THE LIMITS OF DIFFERENTIAL PRIVACY 3 | ===================================== 4 | 5 | 6 | 7 | HOW HIGH MUST THE EXPECTATIONS FOR DIFFERENTIAL PRIVACY BE? 8 | ---------------------------------------------------------- 9 | 10 | Firstly, we must be appreciative of the limits of differential privacy. DP is not a fix to all our privacy problems. Neither is the combination of differential privacy and its predecessors. 11 | However, we can safely say, it should provide the best privacy-preserving approach as of now, matching with the level at which data hungry systems keep "sniffing" our private data today. Additionally, it is a pretty "young" technology, and it is definitely still 12 | learning to "chew the bones" of data privacy issues. 13 | 14 | A deeper understanding of DP also unveils the true essence of the PROMISE. It is not an absolute assurance of non-maleficence. Differential privacy is not absolutely assuring us that we will not be harmed by allowing our sensitive data to be accessed. It rather promises that 15 | it will provide an amount of coverage for our data, in a manner that compares with having our data in private storage, and kept from public access. 16 | Subsequently, with DP, we can access private data and prevent extra harm that would be a result of this access. So that, whether we accessed the private data or not, 17 | the cost of privacy remains the same. 18 | 19 | 20 | WHAT CAN WE SAY ABOUT LIMITS OF DIFFERENTIAL PRIVACY? 
21 | ----------------------------------------------------- 22 | 23 | We expect that some privacy costs will be immune to the strategies of DP and its predecessors, foremost because 24 | DP is still a growing approach to privacy. Then, DP cannot deal with problems of data storage and other faults of data strategy outside of access. DP is blind to them. 25 | 26 | The other twist, probably the most important, is that there are always issues that are resistant to technologies. 27 | At best, we can model these cases as "outliers". They may be considered as special cases, currently resistant to the best of technologies. 28 | Then, proactively, these should be the drivers for research into better privacy-preserving industry approaches - it may take the form of a better DP, 29 | or another strategy with or without the two augmenting each other. The former will be cool. 30 | -------------------------------------------------------------------------------- /Differential_Privacy/Sens_Epsilon.md: -------------------------------------------------------------------------------- 1 | 2 | 3 | SENSITIVITY & EPSILON IN DIFFERENTIAL PRIVACY 4 | ============================================= 5 | 6 | SENSITIVITY 7 | ----------- 8 | The maximum extent to which the output of a query to a database will change with the removal of a datapoint. 9 | 10 | Example: 11 | 12 | - For a database `A = [1, 6, 7, 19]`, the sensitivity of a sum query becomes `19` 13 | - For a database `B = [-1, -2, -5, -10]`, the sensitivity of a sum query becomes `10` 14 | - For a database `C = [0, 1, 1, 0, 1, 1]`, the sensitivity of a sum query becomes `1` 15 | 16 | Taking `A = [1, 6, 7, 19]` as a case study, these are the sums for each round of summation with one item removed. 17 | 18 | Remove 1: `6 + 7 + 19 = 32` 19 | Remove 6: `1 + 7 + 19 = 27` 20 | Remove 7: `1 + 6 + 19 = 26` 21 | Remove 19: `1 + 6 + 7 = 14` 22 | 23 | The total of all numbers is `33`.
Since `14` is our smallest sum, we get the highest drop by subtracting it from `33`: `33 - 14 = 19`, which is exactly the sensitivity of `A`. 24 | 25 | Already, it is evident that our highest drop in the sums happens when we remove 19. 26 | 27 | 28 | 29 | 30 | EPSILON 31 | ------- 32 | 33 | To simplify things, remember that in differential privacy we pick a random number from a range of numbers and add it to the output of our query as noise. 34 | Now, we do not want just any range. We want a range of numbers that is centered around a certain number, and we choose the Laplacian distribution for this purpose. 35 | 36 | Let's visualise. 37 | 38 | We want to pick a random number to add to `3`, anytime someone asks for `3`. And, we want our random numbers to be centered around `0`. 39 | 40 | We can decide on any of these ranges of numbers: 41 | 42 | ``` 43 | X = [-0.2, -0.1, 0, 0.1, 0.2] 44 | Y = [-0.5, -0.4, -0.3, -0.2, -0.1, 0, 0.1, 0.2, 0.3, 0.4, 0.5] 45 | Z = [-0.9, -0.8, -0.7, -0.6, -0.5, -0.4, -0.3, -0.2, -0.1, 0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9] 46 | ``` 47 | 48 | Notice that all these ranges of numbers center around `0`. 49 | 50 | In `X`, we have a narrow range of numbers. When we pick a number from `X` to add to `3`, the greatest changes will be `2.8` and `3.2`, which are relatively close to `3`. The accuracy of the result will not be too affected. But, because the noised output will not be too far from the real 3, it will be easier to guess it...meaning less privacy. 51 | 52 | `Z` has the widest range of numbers. Using Z for your noise will mean `3` may be output as anything from `2.1` to `3.9`. Accuracy reduces, as `2.1` and `3.9` are relatively far from `3`. 53 | 54 | 55 | To be clear, I used X, Y, and Z to stand for the Laplacian distribution...For easy understanding, just consider the Laplacian distribution as a range of numbers centered around a number for now. 56 | 57 | Then, Epsilon determines how narrow the distribution range is and, hence, how accurate or private the output will be.
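The link between sensitivity and epsilon can be sketched in code. In the Laplace mechanism, the noise scale is `sensitivity / epsilon`, so a higher epsilon means narrower noise and less privacy. Below is a minimal sketch in plain Python; the helper names `laplace_noise` and `noisy_sum` are invented for illustration:

```python
import math
import random

def laplace_noise(scale):
    # Inverse-CDF sampling from a Laplace distribution centered at 0.
    u = random.uniform(-0.5, 0.5)
    return -scale * math.copysign(1, u) * math.log(1 - 2 * abs(u))

def noisy_sum(database, epsilon):
    # For a sum query, removing one value changes the result by at most
    # the largest absolute value in the database.
    sensitivity = max(abs(x) for x in database)
    scale = sensitivity / epsilon  # higher epsilon -> narrower noise
    return sum(database) + laplace_noise(scale)

A = [1, 6, 7, 19]
print(noisy_sum(A, epsilon=0.5))   # noisy; varies from run to run
print(noisy_sum(A, epsilon=10.0))  # stays much closer to the true sum, 33
```

With a large epsilon the output stays close to the true sum of `33`; with a small epsilon it wanders much further, giving more privacy at the cost of accuracy.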
58 | 59 | EPSILON, PRIVACY, SENSITIVITY 60 | ----------------------------- 61 | 62 | We can think of Epsilon in this light: 63 | 64 | - High Epsilon - High accuracy, Low privacy...because we are choosing noise from a narrower range of numbers. 65 | 66 | - Low Epsilon - High privacy, Low accuracy...because we are choosing noise from a wider range of numbers. 67 | 68 | Thus, in the light of privacy, the higher the sensitivity of the database query, the lower the epsilon should be, and vice versa. 69 | 70 | Bringing it to the technical definition, epsilon is what we use to control how much information leak we can permit. If it is high, we are prioritizing accuracy; if it is low, we are prioritizing privacy. In between, we get a balance. It becomes, in effect, a threshold for information leak. 71 | -------------------------------------------------------------------------------- /FashionMNIST/README.md: -------------------------------------------------------------------------------- 1 | 2 | -------------------------------------------------------------------------------- /FashionMNIST/output_23_1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ayivima/AI-SURFS/1b8f731cd38b7b3e8091e1ff8bb7929413ef739c/FashionMNIST/output_23_1.png -------------------------------------------------------------------------------- /FashionMNIST/output_24_1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ayivima/AI-SURFS/1b8f731cd38b7b3e8091e1ff8bb7929413ef739c/FashionMNIST/output_24_1.png -------------------------------------------------------------------------------- /FashionMNIST/output_39_1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ayivima/AI-SURFS/1b8f731cd38b7b3e8091e1ff8bb7929413ef739c/FashionMNIST/output_39_1.png
-------------------------------------------------------------------------------- /FashionMNIST/output_40_1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ayivima/AI-SURFS/1b8f731cd38b7b3e8091e1ff8bb7929413ef739c/FashionMNIST/output_40_1.png -------------------------------------------------------------------------------- /FashionMNIST/output_41_1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ayivima/AI-SURFS/1b8f731cd38b7b3e8091e1ff8bb7929413ef739c/FashionMNIST/output_41_1.png -------------------------------------------------------------------------------- /Federated_Learning/Remote_Execution_Overview.md: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | AN OVERVIEW OF REMOTE EXECUTION IN PYSYFT 5 | ========================================= 6 | 7 | An attempt at explaining, in simple terms, Remote Execution with PySyft. It becomes quite a simple process if we break it down in steps. 8 | 9 | 10 | What is remote execution? 11 | ------------------------- 12 | Remote computing allows us to perform tasks on another person's computer on our network. More simply, if I wanted to do a computation, and I cannot run it on my computer, I can do it remotely on another computer. A close example is how we train our models in the Udacity workspace or Google colab. We can see how mainstream this has become. 13 | 14 | When it comes to federated learning, we want to use training outcomes from several devices to make predictions. The only way we can do this is to be able to train and send outcomes remotely. 15 | This is what is demonstrated by the Bob example. 16 | 17 | 18 | What are Pointers? 19 | ------------------ 20 | 21 | When we store something in a computer's memory, how do we know where it is? 22 | 23 | That's the work of pointers. 
Pointers store the addresses, or locations, of the things we store in memory. When it is time to get back a stored value, the pointer for that value is checked for the address, and then the value can be fetched. 24 | 25 | In federated learning, this is particularly useful because our resources - tensors, model resources and so on - are stored on different devices. This means we need to know locally where everything is, so we can get it when needed. Thus, pointers reference our various connections - the other devices we communicate with - so that our models can correspond with them when needed. 26 | 27 | 28 | What are Virtual Workers? 29 | ------------------------- 30 | 31 | Virtual workers are the heart of federated learning. They are responsible for running all the "errands". In other words, they are the objects that do the real communication between devices, sending commands and receiving outcomes. 32 | 33 | 34 | What is garbage collection? 35 | --------------------------- 36 | Garbage is simply something we no longer need. When our programs run, the variables and data we store in memory must be removed once we no longer need them. This is what garbage collection achieves, ultimately. In federated learning, this is even more important because we do not want to congest users' devices with data we no longer need. Thus, if garbage collection is activated, when we retrieve outcomes from a remote device, those outcomes are deleted on the remote side to save memory. 37 | 38 | 39 | What's the link between all these? 40 | ---------------------------------- 41 | 42 | Deep learning is the trend today. But privacy issues continue to linger. Thus, we need to go to great lengths to ensure that we do not cause harm to people by using their data to train our models. One of the emerging ways to achieve this is federated learning. Through federated learning, we give users control over their data.
Instead of asking them to send us their data for our models to use in training and in making predictions or classifications, we ask them to keep their data, perform the training and other tasks on their devices, and send us the outcomes. This way, we do not get to invade their privacy. Thus, we execute the tasks remotely on their devices, and everybody is happy. 43 | -------------------------------------------------------------------------------- /Fundamentals_of_Deep_Learning/README.rst: -------------------------------------------------------------------------------- 1 | 2 | -------------------------------------------------------------------------------- /Green_shade_classifier/README.md: -------------------------------------------------------------------------------- 1 | SINGLE LAYER SHADES OF GREEN CLASSIFIER IMPLEMENTED IN STANDARD/CORE PYTHON 2 | =========================================================================== 3 | 4 | 5 | A simple neural network built with core python, that classifies shades of green from non-shades of green. 6 | This was built as a demonstration and exploration of the basic functionality of neural networks. 7 | 8 | This demonstrates the inherent power of python and the amazing workings of artificial neural networks.
9 | 10 | *Victor mawusi Ayi* 11 | 12 | Sample Images 13 | ------------- 14 | 15 | 16 | Code 17 | ---- 18 | 19 | ``` 20 | from math import exp 21 | from PIL import Image 22 | import numpy as np 23 | 24 | 25 | def nn(features, weights): 26 | 27 | #weights = [-1.7,1.8,-1.7] 28 | bias = 1 29 | 30 | # sigmoid activation function 31 | def activation(x): 32 | return ( 33 | 1/(1 + exp(1-x)) 34 | ) 35 | 36 | # get inner product of features and weights 37 | feature_weight_dot = sum( 38 | [x*y for x,y in zip(features, weights)] 39 | ) 40 | 41 | # add dot product of features and weight to bias 42 | linear_result = feature_weight_dot + bias 43 | 44 | 45 | return(bool(round(activation(linear_result),3))) 46 | 47 | 48 | def load_images(list_of_image_paths, weights): 49 | # 50 | results = [] 51 | 52 | for path in list_of_image_paths: 53 | image = Image.open(path) 54 | arrays_from_image = np.array(image) 55 | 56 | results.append((path, nn(arrays_from_image[25][0], weights))) 57 | 58 | print("\n\n----", weights, "\n-----") 59 | return results 60 | 61 | 62 | 63 | img_paths = ["img{}.png".format(i) for i in range(1,13)] 64 | 65 | # Load images and classify them using different weights 66 | for x,y in load_images(img_paths, [-1.6,1.8,-1.6]): 67 | print(x.upper(), y) 68 | 69 | for x,y in load_images(img_paths, [-1.7,1.8,-1.7]): 70 | print(x.upper(), y) 71 | 72 | for x,y in load_images(img_paths, [-1.8,1.8,-1.8]): 73 | print(x.upper(), y) 74 | 75 | ``` 76 | 77 | How it works 78 | ------------ 79 | 80 | The load_images() function loads the image and gets the RGB values for its pixels. This system works on only homogenously coloured images. While this condition is satisfied, the load_images() function picks a pixel's RGB, and serves it as input to the neural network, with a specified array of corresponding weights. The bias is always 1. 
81 | 82 | The summation of the bias and the inner product of the RGB features and weights, are passed through a sigmoid activation function and a boolean equivalent of the output is returned. `True` is returned if the image is a shade of green, and `False` is returned if it is not. 83 | 84 | 85 | Outcomes and Performance 86 | ------------------------ 87 | 88 | 1. Using weights [-1.6,1.8,-1.6] 89 | 90 | ``` 91 | [-1.6, 1.8, -1.6] 92 | ================== 93 | [x] IMG1.PNG True 94 | [x] IMG2.PNG True 95 | [x] IMG3.PNG True 96 | [x] IMG4.PNG False 97 | [x] IMG5.PNG False 98 | [ ] IMG6.PNG True 99 | [x] IMG7.PNG False 100 | [x] IMG8.PNG False 101 | [x] IMG9.PNG True 102 | [x] IMG10.PNG False 103 | [x] IMG11.PNG True 104 | [x] IMG12.PNG True 105 | ``` 106 | 107 | 108 | 109 | 110 | 2. Using weights [-1.7,1.8,-1.7] 111 | 112 | ``` 113 | [-1.7, 1.8, -1.7] 114 | ================== 115 | [x] IMG1.PNG True 116 | [x] IMG2.PNG True 117 | [x] IMG3.PNG True 118 | [x] IMG4.PNG False 119 | [x] IMG5.PNG False 120 | [ ] IMG6.PNG True 121 | [x] IMG7.PNG False 122 | [x] IMG8.PNG False 123 | [x] IMG9.PNG True 124 | [x] IMG10.PNG False 125 | [ ] IMG11.PNG False 126 | [ ] IMG12.PNG False 127 | ``` 128 | 129 | The weights `[-1.6, 1.8, -1.6]` produced the best classification. 130 | 131 | 132 | Conclusion 133 | ---------- 134 | 135 | - But for the trouble with IMG6.PNG, the neural network did great classifying shades of green from the sample images. 136 | - Changing the weights changed the classification and this demonstrates the refinement that is automatically achieved using pytorch's autograd and backpropagation. 137 | - Neural networks are amazing! 
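To make the forward pass described above concrete, here is a minimal, self-contained sketch of it. The exponent clamp is my addition for numerical safety - with raw 0-255 pixel values, `exp(1 - x)` can overflow for strongly red/blue pixels - and the sample RGB triples are illustrative, not taken from the sample images.

```python
from math import exp


def activation(x):
    # shifted sigmoid used above: 1 / (1 + exp(1 - x)).
    # Clamp the exponent so raw 0-255 RGB inputs cannot
    # push exp() past the float range.
    z = 1 - x
    if z > 700:
        return 0.0
    return 1 / (1 + exp(z))


def classify(rgb, weights=(-1.6, 1.8, -1.6), bias=1):
    # weighted sum of the pixel's RGB features plus the bias,
    # squashed by the activation; True means "shade of green"
    linear = sum(f * w for f, w in zip(rgb, weights)) + bias
    return bool(round(activation(linear), 3))


print(classify((0, 255, 0)))    # True  (pure green)
print(classify((255, 0, 0)))    # False (pure red)
print(classify((255, 0, 255)))  # False (magenta)
```

The negative weights on the red and blue channels, and the positive weight on green, are what make the neuron fire only when green dominates the pixel.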
138 | 139 | 140 | -------------------------------------------------------------------------------- /Green_shade_classifier/img1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ayivima/AI-SURFS/1b8f731cd38b7b3e8091e1ff8bb7929413ef739c/Green_shade_classifier/img1.png -------------------------------------------------------------------------------- /Green_shade_classifier/img10.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ayivima/AI-SURFS/1b8f731cd38b7b3e8091e1ff8bb7929413ef739c/Green_shade_classifier/img10.png -------------------------------------------------------------------------------- /Green_shade_classifier/img11.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ayivima/AI-SURFS/1b8f731cd38b7b3e8091e1ff8bb7929413ef739c/Green_shade_classifier/img11.png -------------------------------------------------------------------------------- /Green_shade_classifier/img12.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ayivima/AI-SURFS/1b8f731cd38b7b3e8091e1ff8bb7929413ef739c/Green_shade_classifier/img12.png -------------------------------------------------------------------------------- /Green_shade_classifier/img2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ayivima/AI-SURFS/1b8f731cd38b7b3e8091e1ff8bb7929413ef739c/Green_shade_classifier/img2.png -------------------------------------------------------------------------------- /Green_shade_classifier/img3.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ayivima/AI-SURFS/1b8f731cd38b7b3e8091e1ff8bb7929413ef739c/Green_shade_classifier/img3.png 
-------------------------------------------------------------------------------- /Green_shade_classifier/img4.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ayivima/AI-SURFS/1b8f731cd38b7b3e8091e1ff8bb7929413ef739c/Green_shade_classifier/img4.png -------------------------------------------------------------------------------- /Green_shade_classifier/img5.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ayivima/AI-SURFS/1b8f731cd38b7b3e8091e1ff8bb7929413ef739c/Green_shade_classifier/img5.png -------------------------------------------------------------------------------- /Green_shade_classifier/img6.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ayivima/AI-SURFS/1b8f731cd38b7b3e8091e1ff8bb7929413ef739c/Green_shade_classifier/img6.png -------------------------------------------------------------------------------- /Green_shade_classifier/img7.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ayivima/AI-SURFS/1b8f731cd38b7b3e8091e1ff8bb7929413ef739c/Green_shade_classifier/img7.png -------------------------------------------------------------------------------- /Green_shade_classifier/img8.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ayivima/AI-SURFS/1b8f731cd38b7b3e8091e1ff8bb7929413ef739c/Green_shade_classifier/img8.png -------------------------------------------------------------------------------- /Green_shade_classifier/img9.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ayivima/AI-SURFS/1b8f731cd38b7b3e8091e1ff8bb7929413ef739c/Green_shade_classifier/img9.png 
-------------------------------------------------------------------------------- /Green_shade_classifier/nn_in_py.py: -------------------------------------------------------------------------------- 1 | """ 2 | A simple neural network built with core python, 3 | that classifies shades of green from non-shades of green. 4 | 5 | Built as a demonstration and exploration of the basic 6 | functionality of neural networks. 7 | 8 | Victor mawusi Ayi 9 | """ 10 | 11 | import random 12 | from math import exp 13 | from PIL import Image 14 | import numpy as np 15 | 16 | 17 | def nn(features, weights): 18 | 19 | #weights = [-1.7,1.8,-1.7] 20 | bias = 1 21 | 22 | # sigmoid activation function 23 | def activation(x): 24 | return ( 25 | 1/(1 + exp(1-x)) 26 | ) 27 | 28 | # get inner product of features and weights 29 | feature_weight_dot = sum( 30 | [x*y for x,y in zip(features, weights)] 31 | ) 32 | 33 | # add dot product of features and weight to bias 34 | linear_result = feature_weight_dot + bias 35 | 36 | 37 | return(bool(round(activation(linear_result),3))) 38 | 39 | 40 | def load_images(list_of_image_paths, weights): 41 | # 42 | results = [] 43 | 44 | for path in list_of_image_paths: 45 | image = Image.open(path) 46 | arrays_from_image = np.array(image) 47 | 48 | results.append((path, nn(arrays_from_image[25][0], weights))) 49 | 50 | print("\n\n----", weights, "\n-----") 51 | return results 52 | 53 | 54 | 55 | img_paths = ["img{}.png".format(i) for i in range(1,13)] 56 | 57 | # Load images and classify them using different weights 58 | for x,y in load_images(img_paths, [-1.6,1.8,-1.6]): 59 | print(x.upper(), y) 60 | 61 | for x,y in load_images(img_paths, [-1.7,1.8,-1.7]): 62 | print(x.upper(), y) 63 | 64 | for x,y in load_images(img_paths, [-1.8,1.8,-1.8]): 65 | print(x.upper(), y) 66 | 67 | for x,y in load_images(img_paths, [-1.6,1.8,-1.6]): 68 | print(x.upper(), y) 69 | -------------------------------------------------------------------------------- 
/Green_shade_classifier/shot_of_images.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ayivima/AI-SURFS/1b8f731cd38b7b3e8091e1ff8bb7929413ef739c/Green_shade_classifier/shot_of_images.png -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | BSD 2-Clause License 2 | 3 | Copyright (c) 2019, Victor Mawusi Ayi 4 | All rights reserved. 5 | 6 | Redistribution and use in source and binary forms, with or without 7 | modification, are permitted provided that the following conditions are met: 8 | 9 | 1. Redistributions of source code must retain the above copyright notice, this 10 | list of conditions and the following disclaimer. 11 | 12 | 2. Redistributions in binary form must reproduce the above copyright notice, 13 | this list of conditions and the following disclaimer in the documentation 14 | and/or other materials provided with the distribution. 15 | 16 | THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" 17 | AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE 18 | IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE 19 | DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE 20 | FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL 21 | DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR 22 | SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER 23 | CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, 24 | OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE 25 | OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
26 | -------------------------------------------------------------------------------- /Matrixtools/README.md: -------------------------------------------------------------------------------- 1 | 2 | Matrix Arithmetic 3 | ----------------- 4 | 5 | Leveraging the flexibility and speed of standard python data structures to make matrix 6 | arithmetic handy for the beginner or pro python programmer. 7 | 8 | 9 | ``` 10 | D:\60AI\matrixtools\matrixkit>python 11 | Python 3.7.1 (v3.7.1:260ec2c36a, Oct 20 2018, 14:05:16) [MSC v.1915 32 bit (Intel)] on win32 12 | Type "help", "copyright", "credits" or "license" for more information. 13 | >>> 14 | >>> from matrixtools import Matrix 15 | >>> 16 | >>> x = Matrix([[1,2],[2,1]]) 17 | >>> 18 | >>> x.flattened() 19 | Matrix([1.0 2.0 2.0 1.0]) 20 | >>> 21 | >>> x 22 | Matrix([ 1.0 2.0 23 | 2.0 1.0]) 24 | >>> 25 | >>> y = Matrix([[2,1],[1,2]]) 26 | >>> 27 | >>> x.matmul(y) 28 | Matrix([ 4.0 5.0 29 | 5.0 4.0]) 30 | >>> 31 | >>> x.hadmul(y) 32 | Matrix([ 2.0 2.0 33 | 2.0 2.0]) 34 | >>> 35 | >>> x.add(y) 36 | Matrix([ 3.0 3.0 37 | 3.0 3.0]) 38 | >>> 39 | >>> x + y 40 | Matrix([ 3.0 3.0 41 | 3.0 3.0]) 42 | >>> 43 | >>> x**2 44 | Matrix([ 1.0 4.0 45 | 4.0 1.0]) 46 | >>> 47 | >>> from matrixtools import idmatrix 48 | >>> 49 | >>> idmatrix(3) 50 | Matrix([1.0 0.0 0.0 51 | 0.0 1.0 0.0 52 | 0.0 0.0 1.0]) 53 | >>> 54 | >>> idmatrix(7) 55 | Matrix([1.0 0.0 0.0 0.0 0.0 0.0 0.0 56 | 0.0 1.0 0.0 0.0 0.0 0.0 0.0 57 | 0.0 0.0 1.0 0.0 0.0 0.0 0.0 58 | 0.0 0.0 0.0 1.0 0.0 0.0 0.0 59 | 0.0 0.0 0.0 0.0 1.0 0.0 0.0 60 | 0.0 0.0 0.0 0.0 0.0 1.0 0.0 61 | 0.0 0.0 0.0 0.0 0.0 0.0 1.0]) 62 | >>> 63 | ``` 64 | -------------------------------------------------------------------------------- /Matrixtools/matmul_intro.md: -------------------------------------------------------------------------------- 1 | 2 | 3 | MATRIX MULTIPLICATION: 4 | ===================== 5 | 6 | Matrix multiplication is core to our operations in DL, ML and AI in general. 
7 | 8 | It is also, comparably, an expensive operation, and investigations into its optimization have been ongoing. 9 | 10 | A more optimized approach, called the `Strassen algorithm`, attains a time complexity of O(n^2.8...). However, it is not as useful for small matrices and some other types of matrices. 11 | 12 | The asymptotically fastest algorithm, the `Coppersmith–Winograd algorithm`, attains O(n^2.3...). In recent years, improvements have resulted in additional small gains in speed. However, it is rarely useful in industry, as it is thought to be practical only for very large matrices, which may not be realistic for our systems now. 13 | 14 | The standard operation, which we demonstrate, is more widely used, as it fits more use cases than the optimized options, even though its complexity is O(n^3). 15 | 16 | 17 | Key thing to note 18 | ----------------- 19 | 20 | Optimizing matrix multiplication requires knowing when to use which algorithm. The most practical algorithms now are the standard, or naive, algorithm and the Strassen algorithm. For smaller matrices, the standard algorithm outperforms Strassen. Then, as the size of the matrices grows, Strassen shines - for given types of matrices. 21 | 22 | 23 | THE STANDARD ALGORITHM 24 | ====================== 25 | 26 | Two matrices can only be multiplied if the number of columns of the first matrix equals the number of rows of the second matrix. 27 | 28 | For a matrix represented as an n-dimensional array, the number of arrays in the parent array is the number of rows of the matrix, and the number of items in each sub-array is the number of columns of the matrix. 29 | 30 | We are only concerned with the naive implementation for our use case. This can be made more elegant but, for the sake of time, we wrap it up with this. 31 | 32 | ``` 33 | def mm(X, Y): 34 | 35 | # We get the number of rows and columns of the matrices.
36 | 37 | # the number of columns of matrices 38 | X_cols = len(X[0]) 39 | Y_cols = len(Y[0]) 40 | 41 | # the number of rows of both matrices 42 | X_rows = len(X) 43 | Y_rows = len(Y) 44 | 45 | # Number of columns of X, should be same as the number of rows of Y, to be able to perform the operation. 46 | if(X_cols == Y_rows): 47 | new_matrix = [] 48 | 49 | for i in range(X_rows): 50 | new_row = [] 51 | for j in range(Y_cols): 52 | new_val = 0 53 | for k in range(Y_rows): 54 | new_val += (X[i][k] * Y[k][j]) 55 | 56 | new_row.append(new_val) 57 | 58 | new_matrix.append(new_row) 59 | 60 | return new_matrix 61 | else: 62 | return "Arrays can NOT be multiplied" 63 | 64 | ``` 65 | 66 | We test this out for the following matrices: 67 | ``` 68 | P = [[1, 2, 1], [2, 1, 2]] 69 | Q = [[1, 1], [2, 2], [3, 3]] 70 | R = [[3, 3], [1, 1]] 71 | ``` 72 | 73 | OUTCOMES: 74 | ``` 75 | >>> mm(P, Q) 76 | [[8, 8], [10, 10]] 77 | >>> 78 | >>> mm(R, P) 79 | [[9, 9, 9], [3, 3, 3]] 80 | >>> 81 | >>> mm(P, R) 82 | Arrays can NOT be multiplied 83 | ``` 84 | 85 | 86 | -------------------------------------------------------------------------------- /Matrixtools/matrixtools.py: -------------------------------------------------------------------------------- 1 | """ 2 | 3 | (c) Copyright, Victor Mawusi Ayi. 2019 4 | All Rights Reserved! 
5 | 6 | """ 7 | 8 | 9 | from vectorkit import Vector, isovector 10 | 11 | class Matrix(): 12 | 13 | def __init__(self, array_of_arrays): 14 | 15 | self.column_size = 0 16 | self.longest_value = 0 17 | self.row_size = 0 18 | self.rows = [] 19 | 20 | try: 21 | for array in array_of_arrays: 22 | arr_to_vec = Vector(array) 23 | self.rows.append(arr_to_vec) 24 | 25 | self.row_size += 1 26 | if arr_to_vec.dimensions > self.column_size: 27 | self.column_size = arr_to_vec.dimensions 28 | 29 | except ValueError: 30 | raise ValueError( 31 | "Matrices must contain arrays of numbers" 32 | ) 33 | 34 | self.__smoothen__() 35 | self.shape = (self.row_size, self.column_size) 36 | 37 | def __add__(self, other): 38 | if isinstance(other, Matrix): 39 | if self.shape == other.shape: 40 | return Matrix([ 41 | a.add(b).components for a, b in zip( 42 | self.rows, other.rows 43 | ) 44 | ]) 45 | else: 46 | raise ValueError( 47 | "Matrices of different dimensions cannot be added" 48 | ) 49 | 50 | else: 51 | raise TypeError( 52 | "Matrix addition can only occur between matrices" 53 | ) 54 | 55 | def __eq__(self, other): 56 | return self.rows==other.rows 57 | 58 | def __getitem__(self, key): 59 | return self.rows[key].components 60 | 61 | def __hash__(self): 62 | return hash(self.components) 63 | 64 | 65 | def __ne__(self, operand): 66 | return self.rows!=operand.rows 67 | 68 | def __neg__(self): 69 | return Matrix( 70 | [vector.reversed().components for vector in self.rows] 71 | ) 72 | 73 | def __pow__(self, power): 74 | return Matrix( 75 | [pow(vector, power).components for vector in self.rows] 76 | ) 77 | 78 | def __repr__(self): 79 | 80 | return "Matrix([{}])".format( 81 | "\n".ljust(9).join([ 82 | " ".join([ 83 | str(round(num, 2)).ljust(self.longest_value) for num in vector 84 | ]).rjust(10, " ") for vector in self.rows 85 | ]) 86 | ) 87 | 88 | def __smoothen__(self): 89 | long_num_len = 0 90 | for index in range(self.row_size): 91 | self.rows[index].pad(self.column_size) 92 | 93 | for 
component in self.rows[index]: 94 | component_len = len(str(round(component,2))) 95 | if component_len>long_num_len: 96 | long_num_len=component_len 97 | 98 | self.longest_value=long_num_len 99 | 100 | def matmul(self, other): 101 | if isinstance(other, Matrix): 102 | if self.shape[1]==other.shape[0]: 103 | 104 | result = [] 105 | other_components = [vector.components for vector in other.rows] 106 | 107 | other_prep = Matrix([ 108 | list(tuple) for tuple in zip(*other_components) 109 | ]) 110 | 111 | for x in self.rows: 112 | result.append( 113 | [x*y for y in other_prep.rows] 114 | ) 115 | else: 116 | raise ValueError( 117 | "The matrices do not have the right shapes to be multiplied" 118 | ) 119 | elif isinstance(other, Vector): 120 | if other.dimensions==self.shape[0]: 121 | result = [ 122 | other.dotmul(Vector(b)) for b in zip( 123 | *self.rows 124 | ) 125 | ] 126 | 127 | else: 128 | raise TypeError( 129 | "A matrix multiplication requires vectors or matrices." 130 | ) 131 | 132 | return Matrix(result) 133 | 134 | def hadmul(self, other): 135 | if isinstance(other, Matrix): 136 | if self.shape == other.shape: 137 | return Matrix([ 138 | [x*y for x,y in zip(a, b)] for a, b in zip( 139 | self.rows, other.rows 140 | ) 141 | ]) 142 | else: 143 | raise ValueError( 144 | "We cannot derive a Hadamard product" 145 | " of matrices of different dimensions" 146 | ) 147 | else: 148 | raise TypeError( 149 | "Hadamard product requires only matrices" 150 | ) 151 | 152 | def smul(self, other): 153 | if type(other) in (int, float): 154 | return ( 155 | Matrix([vector.smul(other).components for vector in self.rows]) 156 | ) 157 | else: 158 | raise ValueError( 159 | "Scalar multiplication must involve a sccalar and a matrix" 160 | ) 161 | 162 | def transposed(self): 163 | return ( 164 | Matrix([ 165 | tuple for tuple in zip( 166 | *[vector.components for vector in self.rows] 167 | ) 168 | ]) 169 | ) 170 | 171 | def add(self, other): 172 | return self.__add__(other) 173 | 174 | 
def subtract(self, other): 175 | if isinstance(other, Matrix): 176 | if self.shape == other.shape: 177 | return Matrix([ 178 | a.subtract(b).components for a, b in zip( 179 | self.rows, other.rows 180 | ) 181 | ]) 182 | else: 183 | raise ValueError( 184 | "Matrices of different dimensions cannot be subtracted" 185 | ) 186 | 187 | else: 188 | raise TypeError( 189 | "Matrix subtraction can only occur between matrices" 190 | ) 191 | 192 | def flattened(self): 193 | flat_matrix = self.rows[0] 194 | 195 | for index in range(1, self.row_size): 196 | flat_matrix = flat_matrix.concat(self.rows[index]) 197 | 198 | return Matrix([flat_matrix.components]) 199 | 200 | 201 | 202 | def idmatrix(n): 203 | rows = [] 204 | for i in range(n): 205 | rows.append( 206 | [0 if x!=i else 1 for x in range(n)] 207 | ) 208 | 209 | return Matrix(rows) 210 | 211 | 212 | -------------------------------------------------------------------------------- /ModelEncryptor/encryptor.py: -------------------------------------------------------------------------------- 1 | 2 | 3 | import syft 4 | import torch 5 | 6 | 7 | __author__ = "Victor Mawusi Ayi " 8 | 9 | 10 | class ModelEncryptor: 11 | 12 | def __init__(self, model, num_of_shares): 13 | 14 | hook = syft.TorchHook(torch) 15 | 16 | # Function for generating a share 17 | share = lambda share_id: ( 18 | syft.VirtualWorker(hook, id=share_id).add_worker(syft.local_worker) 19 | ) 20 | 21 | # Generate ids for shares based on number of shares 22 | share_ids = [ 23 | "".join(["share", str(num + 1)]) 24 | for num in range(num_of_shares + 1) 25 | ] 26 | 27 | # Generate shares based on number of shares specified 28 | self.shares = list(map(share, share_ids)) 29 | 30 | # Encrypt model 31 | self.model = ( 32 | model.fix_precision().share( 33 | *self.shares[:-1], crypto_provider=self.shares[-1] 34 | ) 35 | ) 36 | 37 | def encrypt_data(self, data): 38 | """Encrypts data.""" 39 | return ( 40 | data.fix_precision().share( 41 | *self.shares[:-1], 
crypto_provider=self.shares[-1] 42 | ) 43 | ) 44 | 45 | def predict(self, data): 46 | """Encrypts data, and returns the outcome of 47 | an encrypted prediction or classification. 48 | 49 | """ 50 | e_data = self.encrypt_data(data) 51 | 52 | return self.model(e_data).get().float_precision() 53 | 54 | 55 | -------------------------------------------------------------------------------- /Power_Of_Math_In_Image_Analysis/README.md: -------------------------------------------------------------------------------- 1 | # BRUTE FORCE DEMONSTRATION OF HOW CORRELATION BETWEEN RGB VALUES OF DIFFERENT IMAGES CAN BE USED FOR MATCHING THEM, AND HOW WE CAN JUSTIFY CONVOLUTIONS AS NEEDFUL FOR IMAGE ANALYSIS 2 | 3 | *NB: Acknowledgement: This exercise was initiated by a question by @Shaam from SPAIC.* 4 | 5 | ## OUTLINE 6 | - Code 7 | - Images 8 | - Explanation of Outcomes 9 | 10 | 11 | ## CODE 12 | ``` 13 | from PIL import Image 14 | import numpy as np 15 | 16 | 17 | def get_image_array(image_path): 18 | 19 | image = Image.open(image_path) 20 | array_from_image = np.array(image) 21 | 22 | return array_from_image 23 | 24 | 25 | def coalesce_into_column(multidir_image_array): 26 | single_array = [] 27 | a,b,c = multidir_image_array.shape 28 | 29 | # we are hitting an interesting time complexity here 30 | # we could possibly look out for a snappier way 31 | # not found one yet 32 | for i in range(a): 33 | for j in range(b): 34 | for k in range(c): 35 | single_array.append(multidir_image_array[i][j][k]) 36 | return single_array, len(single_array) 37 | 38 | 39 | def get_corr(array_of_image_arrays, slice_): 40 | # slice_ allows us to slice images into the same length somehow :) 41 | 42 | return np.corrcoef( 43 | [array[:slice_] for array in array_of_image_arrays] 44 | ) 45 | 46 | 47 | def corr_of_multiple_images(list_of_paths): 48 | array_of_image_arrays = [] 49 | minimum_length = float("+inf") 50 | 51 | # hitting another interesting chain here 52 | # 53 | for image_path in list_of_paths: 54 |
55 | array, length = coalesce_into_column( 56 | get_image_array(image_path) 57 | ) 58 | 59 | # minimum_length will be useful for us to compare images 60 | # using the rough size of the smallest image 61 | minimum_length = ( 62 | minimum_length if length>minimum_length else length 63 | ) 64 | 65 | array_of_image_arrays.append(array) 66 | 67 | # In case the images are not the same, the 68 | # the minimum length helps us to slice all images to 69 | # same size. Would be better if the shapes are same. 70 | return get_corr(array_of_image_arrays, minimum_length) 71 | 72 | 73 | # You just need to submit a list of the image names to 74 | # corr_of_multiple_images() function if the images are many 75 | 76 | # change the image names available in your folder 77 | # this was for my demo 78 | 79 | print(corr_of_multiple_images(["samp1.jpeg","samp2.jpeg","samp3.jpeg","samp4.jpeg"])) 80 | ``` 81 | 82 | ## IMAGES 83 | ### SAMP1.jpeg 84 | 85 | 86 | ### SAMP2.jpeg 87 | 88 | 89 | ### SAMP3.jpeg 90 | 91 | 92 | ### SAMP4.jpeg 93 | 94 | 95 | ## EXPLANATION OF OUTCOMES 96 | 97 | The above code yielded the following results: 98 | 99 | ``` 100 | [[1. 0.25312495 0.54060884 0.52533697] 101 | [0.25312495 1. 0.33294508 0.07833567] 102 | [0.54060884 0.33294508 1. 0.39304165] 103 | [0.52533697 0.07833567 0.39304165 1. ]] 104 | ``` 105 | 106 | There was a positive correlation between SAMP1, SAMP3, and SAMP4. On visual inspection, we can confirm that this is true. The apple in SAMP3 resembles the frontmost apple in SAMP1, and the two green apples resemble the apples in SAMP4. We would expect that the apples in SAMP2 would also resemble the apple in SAMP3; this demonstrates the limitation of this brute force method, and the need for feature extraction, and tuning methods in Deep Learning and Computer Vision. 107 | 108 | ## TAKEAWAY 109 | 110 | This exercise explores correlation as a fundamental resource for image analysis, and then...the need for other methods in analysing and matching images. 
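On the efficiency point: the comments in `coalesce_into_column()` wished for a snappier way to flatten the pixel array. Since NumPy is already imported, `ravel()` gives the same element ordering as the triple loop in a single call - a sketch (the function name is mine):

```python
import numpy as np


def coalesce_fast(image_array):
    # np.ravel flattens in C order (row, then column, then channel),
    # which visits elements in exactly the order of the triple loop
    # in coalesce_into_column()
    flat = np.asarray(image_array).ravel()
    return flat, flat.size


# quick check on a tiny 2x2x3 "image"
tiny = np.arange(12).reshape(2, 2, 3)
flat, n = coalesce_fast(tiny)
print(n)              # 12
print(flat.tolist())  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]
```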
The notion of edges, long championed by proponents of CNNs, is particularly important if models are to assess similarities and dissimilarities deeply. 111 | 112 | 113 | 114 | 115 | -------------------------------------------------------------------------------- /Power_Of_Math_In_Image_Analysis/image_correlation.py: -------------------------------------------------------------------------------- 1 | 2 | from PIL import Image 3 | import numpy as np 4 | 5 | 6 | def get_image_array(image_path): 7 | 8 | image = Image.open(image_path) 9 | array_from_image = np.array(image) 10 | 11 | return array_from_image 12 | 13 | 14 | def coalesce_into_column(multidir_image_array): 15 | single_array = [] 16 | a,b,c = multidir_image_array.shape 17 | 18 | # we are hitting an interesting time complexity here 19 | # we could possibly look out for a snappier way 20 | # not found one yet 21 | for i in range(a): 22 | for j in range(b): 23 | for k in range(c): 24 | single_array.append(multidir_image_array[i][j][k]) 25 | return single_array, len(single_array) 26 | 27 | 28 | def get_corr(array_of_image_arrays, slice_): 29 | # slice_ allows us to slice images to the same length 30 | 31 | return np.corrcoef( 32 | [array[:slice_] for array in array_of_image_arrays] 33 | ) 34 | 35 | 36 | def corr_of_multiple_images(list_of_paths): 37 | array_of_image_arrays = [] 38 | minimum_length = float("+inf") 39 | 40 | # chain the helper functions for each image 41 | # 42 | for image_path in list_of_paths: 43 | 44 | array, length = coalesce_into_column( 45 | get_image_array(image_path) 46 | ) 47 | 48 | # minimum_length will be useful for us to compare images 49 | # using the rough size of the smallest image 50 | minimum_length = ( 51 | minimum_length if length>minimum_length else length 52 | ) 53 | 54 | array_of_image_arrays.append(array) 55 | 56 | # In case the images are not the same size, the 57 | # minimum length helps us to slice all images to the 58 | # same size.
It would be better if the shapes were the same. 59 | return get_corr(array_of_image_arrays, minimum_length) 60 | 61 | 62 | # You just need to submit a list of the image names to 63 | # the corr_of_multiple_images() function if the images are many 64 | 65 | # change the image names to ones available in your folder 66 | # these were for my demo 67 | 68 | print(corr_of_multiple_images(["samp1.jpeg","samp2.jpeg","samp3.jpeg","samp4.jpeg"])) 69 | 70 | 71 | -------------------------------------------------------------------------------- /Power_Of_Math_In_Image_Analysis/samp1.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ayivima/AI-SURFS/1b8f731cd38b7b3e8091e1ff8bb7929413ef739c/Power_Of_Math_In_Image_Analysis/samp1.jpeg -------------------------------------------------------------------------------- /Power_Of_Math_In_Image_Analysis/samp2.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ayivima/AI-SURFS/1b8f731cd38b7b3e8091e1ff8bb7929413ef739c/Power_Of_Math_In_Image_Analysis/samp2.jpeg -------------------------------------------------------------------------------- /Power_Of_Math_In_Image_Analysis/samp3.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ayivima/AI-SURFS/1b8f731cd38b7b3e8091e1ff8bb7929413ef739c/Power_Of_Math_In_Image_Analysis/samp3.jpeg -------------------------------------------------------------------------------- /Power_Of_Math_In_Image_Analysis/samp4.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ayivima/AI-SURFS/1b8f731cd38b7b3e8091e1ff8bb7929413ef739c/Power_Of_Math_In_Image_Analysis/samp4.jpeg -------------------------------------------------------------------------------- /Python_Basics/Python_Tricks.md: -------------------------------------------------------------------------------- 1
| 2 | # Python Tricks 3 | 4 | This is a compilation of a few common tricks that are comparatively easier in Python than in other languages. 5 | Feel free to contribute to these. 6 | 7 | ## Strings 8 | 9 | ### Reverse a string 10 | 11 | 12 | ```python 13 | s = "Reverse" 14 | # To reverse a string in python, use the following. 15 | s = s[::-1] # this will reverse the string 16 | print(s) 17 | ``` 18 | 19 | esreveR 20 | 21 | 22 | ### Separating a sentence into words 23 | 24 | We can use the str.split() method to split a long sentence into a list of substrings. 25 | https://docs.python.org/3/library/stdtypes.html#str.split 26 | 27 | 28 | ```python 29 | s = "one two three" 30 | print(s.split()) 31 | ``` 32 | 33 | ['one', 'two', 'three'] 34 | 35 | 36 | 37 | ```python 38 | s = "1,2,3" 39 | # splitting by comma 40 | s.split(",") 41 | ``` 42 | 43 | 44 | 45 | 46 | ['1', '2', '3'] 47 | 48 | 49 | 50 | 51 | ```python 52 | # the above can be extended to any substring 53 | s="1/ /2/ /3/ /4/ /5" 54 | s.split('/ /') 55 | ``` 56 | 57 | 58 | 59 | 60 | ['1', '2', '3', '4', '5'] 61 | 62 | 63 | -------------------------------------------------------------------------------- /Python_Halls_of_Fame/Python_Uniqueness_Hall_Of_Fame.rst: -------------------------------------------------------------------------------- 1 | 2 | Python is a general-purpose, versatile programming language first authored and released by Guido van Rossum in 1991. 3 | It has risen tremendously in popularity, thanks to its inherent design's encouragement of simple, readable, 4 | elegant code. Today, Python is useful for small and big projects, for advanced and very simple tasks, and for 5 | the beginner and the pro programmer alike. 6 | 7 | Remarkably, it has become a must-go-to for Data Science, Machine Learning, Deep Learning, and Artificial Intelligence. 8 | Let's explore some of the unique traits of core Python (CPython) that are upheld by Python lovers.
9 | 10 | 11 | 12 | PYTHON UNIQUENESS HALL OF FAME 13 | ============================== 14 | 15 | DO NOT DECLARE VARIABLE TYPES 16 | ---------------------------------------------------------- 17 | 18 | Unlike many classic C-style languages, python variables do not need declaration; they just need to be assigned. 19 | Thus, you do not need to specify variable types. 20 | 21 | :Python 3: 22 | 23 | :: 24 | 25 | a_number = 1 26 | print(a_number) 27 | 28 | # OUTPUT: 1 29 | 30 | :Java: 31 | 32 | :: 33 | 34 | public class PythonVrsJava{ 35 | 36 | public static void main(String []args){ 37 | int a_number = 1; 38 | System.out.println(a_number); 39 | } 40 | 41 | } 42 | 43 | // OUTPUT: 1 44 | 45 | 46 | DO NOT USE CURLY BRACKETS - "{}" - FOR BLOCKS 47 | --------------------------------------------- 48 | 49 | Unlike many classic C-style languages, python delineates blocks with just indentation(using spaces or tabs) and colon(":"), 50 | instead of curly brackets. 51 | 52 | :Python 3: 53 | 54 | :: 55 | 56 | a_number = 1 57 | 58 | if a_number == 1: 59 | print("Number is 1.") 60 | else: 61 | print("Number is not 1.") 62 | 63 | # OUTPUT: 64 | # Number is 1. 65 | 66 | :Java: 67 | 68 | :: 69 | 70 | public class PythonVrsJava{ 71 | 72 | public static void main(String []args){ 73 | int a_number = 1; 74 | 75 | if (a_number == 1){ 76 | System.out.println("Number is 1."); 77 | } else { 78 | System.out.println("Number is not 1."); 79 | } 80 | } 81 | 82 | } 83 | 84 | // OUTPUT: Number is 1. 85 | 86 | 87 | A VARIABLE CAN STORE DIFFERENT TYPES OF VALUES DURING ITS LIFETIME 88 | ----------------------------------------------------------------- 89 | 90 | Unlike many classic C-style languages, python allows dynamic typing. Thus, you can assign a variable that previously stored an integer value, to a string value without throwing an error. Doing same in Java will result in an error. 
91 | 92 | :Python 3: 93 | 94 | :: 95 | 96 | a_var = 1 97 | print(a_var) 98 | 99 | a_var = "Just a string" 100 | print(a_var) 101 | 102 | # OUTPUT: 1, then: Just a string 103 | 104 | :Java: 105 | 106 | :: 107 | 108 | public class PythonVrsJava{ 109 | 110 | public static void main(String []args){ 111 | int a_var = 1; 112 | System.out.println(a_var); 113 | 114 | a_var = "Just a string"; 115 | System.out.println(a_var); 116 | } 117 | 118 | } 119 | 120 | // OUTPUT: 121 | // PythonVrsJava.java:7: error: incompatible types: String cannot be converted to int 122 | // a_var = "Just a string"; 123 | 124 | 125 | LOOPING THROUGH AN ITERABLE WITH FOR...IN RETURNS VALUES INSTEAD OF INDEXES 126 | --------------------------------------------------------------------------- 127 | 128 | Unlike some classic C-style languages, like JavaScript, which return indexes, Python returns values in ``for...in`` loops. 129 | 130 | :Python 3: 131 | 132 | :: 133 | 134 | list1 = [1, 2, 3] 135 | 136 | for number in list1: 137 | print(number) 138 | 139 | 140 | # OUTPUT: 141 | # 1 142 | # 2 143 | # 3 144 | 145 | 146 | :JavaScript: 147 | 148 | :: 149 | 150 | let list1 = [1, 2, 3]; 151 | 152 | for (let number in list1){ 153 | console.log(number) 154 | } 155 | 156 | 157 | // OUTPUT: 158 | // 0 159 | // 1 160 | // 2 161 | 162 | 163 | 164 | AN IMMUTABLE VALUE MAY BE STORED IN ONLY ONE MEMORY LOCATION EVEN IF IT IS ASSIGNED TO SEPARATE VARIABLES 165 | ----------------------------------------------------------------------------------------------------------- 166 | 167 | :Python 3: 168 | 169 | :: 170 | 171 | num1 = 1 172 | num2 = 1 173 | 174 | str1 = "string" 175 | str2 = "string" 176 | 177 | bool1 = 3 == 2 178 | bool2 = "the" == "not" 179 | 180 | tuple1 = (1, 2, 3) 181 | tuple2 = (1, 2, 3) 182 | 183 | print("num1 address is {}".format(hex(id(num1)))) 184 | print("num2 address is {}".format(hex(id(num2)))) 185 | print("str1 address is {}".format(hex(id(str1)))) 186 | print("str2 address is {}".format(hex(id(str2)))) 187 |
print("bool1 address is {}".format(hex(id(bool1)))) 188 | print("bool2 address is {}".format(hex(id(bool2)))) 189 | print("tuple1 address is {}".format(hex(id(tuple1)))) 190 | print("tuple2 address is {}".format(hex(id(tuple2)))) 191 | 192 | # OUTPUT: 193 | # num1 address is 0x5fefc880 194 | # num2 address is 0x5fefc880 195 | # str1 address is 0x3137b20 196 | # str2 address is 0x3137b20 197 | # bool1 address is 0x5fec71c0 198 | # bool2 address is 0x5fec71c0 199 | # tuple1 address is 0x32a26c0 200 | # tuple2 address is 0x32a26c0 201 | 202 | 203 | 204 | UNLIKE IMMUTABLE VALUES, VALUES OF MUTABLE TYPES, LIKE LISTS AND DICTIONARIES, HAVE SEPARATE MEMORY ADDRESSES EVEN WHEN THEY ARE THE SAME FOR SEPARATE VARIABLES 205 | ------------------------------------------------------------------------------------------------------------------------------------ 206 | 207 | :Python 3: 208 | 209 | :: 210 | 211 | list1 = [1, 2, 3] 212 | list2 = [1, 2, 3] 213 | 214 | dict1 = {"a":1, "b":2} 215 | dict2 = {"a":1, "b":2} 216 | 217 | print("list1 address is {}".format(hex(id(list1)))) 218 | print("list2 address is {}".format(hex(id(list2)))) 219 | print("dict1 address is {}".format(hex(id(dict1)))) 220 | print("dict2 address is {}".format(hex(id(dict2)))) 221 | 222 | # OUTPUT: 223 | # list1 address is 0xb445d0 224 | # list2 address is 0xb44a58 225 | # dict1 address is 0xb955d0 226 | # dict2 address is 0xb95630 227 | 228 | 229 | A work in Progress...To be Continued 230 | 231 | 232 | *Copyright 2019, Victor Mawusi Ayi. 
All Rights Reserved.* 233 | -------------------------------------------------------------------------------- /README.rst: -------------------------------------------------------------------------------- 1 | ARTIFICIAL INTELLIGENCE SURFS 2 | ================================ 3 | 4 | Artificial Intelligence is our journey into the wonderful workings of one of the most complex 5 | mysteries in human physiology, the work of the brain, as much as it is an endeavour to change 6 | the world and improve lives. 7 | 8 | :: 9 | 10 | A single neuron in the brain is an incredibly complex machine that even today we don't understand. 11 | A single 'neuron' in a neural network is an incredibly simple mathematical function that 12 | captures a minuscule fraction of the complexity of a biological neuron. 13 | - Andrew Ng 14 | 15 | 16 | Through Artificial Intelligence, we seek to tap into the incredible power of our human brains - 17 | we only need a fraction of that power - to make us more efficient and effective at doing the 18 | things we love. 19 | 20 | Let's explore Artificial Intelligence, and especially Secure and Private AI, through the power of 21 | the python programming language and facebook's pytorch. 22 | 23 | SURF LIST 24 | ----------- 25 | 26 | `>>> Python Uniqueness Hall Of Fame `_ 27 | 28 | `>>> Getting Started with Python Data Types `_ 29 | 30 | `>>> Tensor Exploration In Words `_ 31 | 32 | `>>> Exploring Implementation of Matrix Multiplication `_ 33 | 34 | `>>> A spontaneous, and miniscule, Matrix class `_ 35 | 36 | `>>> Understanding Image Channels `_ 37 | 38 | `>>> Understanding Image Flattening `_ 39 | 40 | `>>> Understanding Image Transposition `_ 41 | 42 | `>>> Python demonstration of how correlation between RGB values from pixels of images can be used to match images `_. 
43 | 44 | `>>> Demonstrating display patterns of images with monotonous and heterogeneous intensities, using the gray color map in matplotlib `_ 45 | 46 | `>>> The Basics of Neural Networks: Softmax Activation Functions `_ 47 | 48 | `>>> Shades of Green classifier implemented in raw python `_. 49 | 50 | `>>> A Red, Green, Blue Double Layer classifier in core python `_ 51 | 52 | `>>> A Red, Green, Blue, Yellow, Magenta, Cyan Triple Layer classifier demonstrated in core python `_ 53 | 54 | `>>> Training a model on FashionMNIST: A walkthrough `_ 55 | 56 | `>>> On the limits of Differential Privacy `_ 57 | 58 | `>>> Differencing Attack: A simple overview `_ 59 | 60 | `>>> Differential Privacy: Sensitivity and Epsilon `_ 61 | 62 | `>>> An overview of Federated Learning Concepts `_ 63 | 64 | 65 | Experiments and Projects 66 | ------------------------ 67 | 68 | `>>> Exploring Vectors arithmetic with Python `_ 69 | 70 | `>>> Explore Vectorkit 0.1.3 `_. Link to PYPI: `` 71 | 72 | `>>> Delving deeper into Matrix Arithmetic with core python - I `_ 73 | 74 | `>>> A snappy class for encrypted deep learning `_ 75 | -------------------------------------------------------------------------------- /RGBYCM_Color_Classifier/README.md: -------------------------------------------------------------------------------- 1 | 2 | 3 | RED, GREEN, BLUE, YELLOW, CYAN, MAGENTA CLASSIFIER IMPLEMENTED IN STANDARD/CORE PYTHON 4 | ====================================================================================== 5 | *Victor Mawusi Ayi* 6 | 7 | An advancement on the red-green-blue classifier, with an additional hidden layer of 6 nodes. 8 | 9 | Architecture 10 | ------------ 11 | 12 | 13 | Each selected pixel goes through each of the nodes of the first hidden layer, each node selective for one of the red, green, and blue primary colors. Then, the outputs from the first hidden layer proceed to the second hidden layer, with 6 nodes, each selective for one of red, green, blue, yellow, cyan, and magenta.
Finally, the outputs from the hidden nodes are passed through the softmax function at the output node, and the class is assigned based on the maximum probability, and on a first-in selection principle in the case of a tie. This is just a exploration of the possibilities with building neural networks in core python. 14 | 15 | Sample Images 16 | ------------- 17 | 18 | 19 | Code 20 | ---- 21 | 22 | ``` 23 | """ 24 | 25 | A simple neural network built with core python, 26 | that classifies shades of red, green, blue, yellow, magenta, cyan. 27 | 28 | Built as a demonstration and exploration of the basic 29 | functionality of neural networks. 30 | 31 | Victor Mawusi Ayi 32 | 33 | """ 34 | from math import exp 35 | from PIL import Image 36 | import numpy as np 37 | import random 38 | 39 | 40 | def node(features, weights): 41 | 42 | bias = 1 43 | 44 | # sigmoid activation function 45 | def activation(x): 46 | return ( 47 | 1/(1 + exp(-x)) 48 | ) 49 | 50 | # get inner product of features and weights 51 | feature_weight_dot = sum( 52 | [x*y for x,y in zip(features, weights)] 53 | ) 54 | 55 | # add dot product of features and weight to bias 56 | linear_result = feature_weight_dot + bias 57 | 58 | 59 | return round(activation(linear_result),3) 60 | 61 | 62 | def softmax(X): 63 | 64 | exps_of_x = [exp(i) for i in X] 65 | sum_of_exps = sum(exps_of_x) 66 | 67 | return [x_i/sum_of_exps for x_i in exps_of_x] 68 | 69 | 70 | def classifier(list_of_image_paths): 71 | classes = [ 72 | "RED", 73 | "GREEN", 74 | "BLUE", 75 | "YELLOW", 76 | "CYAN", 77 | "MAGENTA" 78 | ] 79 | 80 | weights1 = [ 81 | [0.8,-0.6,-0.6], 82 | [-0.6,0.8,-0.6], 83 | [-0.6,-0.6,0.8] 84 | ] 85 | 86 | weights2 = [ 87 | [0.8,-0.6,-0.6], 88 | [-0.6,0.8,-0.6], 89 | [-0.6,-0.6,0.8], 90 | [0.8,0.8,-0.6], 91 | [-0.6,0.8,0.8], 92 | [0.8,-0.6,0.8] 93 | ] 94 | 95 | results = [] 96 | 97 | for path in list_of_image_paths: 98 | image = Image.open(path) 99 | arrays_from_image = np.array(image) 100 | 101 | # select random pixel. 
This is for demonstration purposes. 102 | select_point = random.randint(0,49) 103 | 104 | input = arrays_from_image[select_point][0] 105 | 106 | # hidden layers 107 | hiddens1 = [node(input, weight) for weight in weights1] 108 | hiddens2 = [node(hiddens1, weight) for weight in weights2] 109 | 110 | # output layer 111 | output = softmax(hiddens2) 112 | 113 | # get the maximum probability 114 | max_probability = max(output) 115 | 116 | # get the first index of the maximum probability 117 | shade_index = output.index(max_probability) 118 | 119 | # get the class of the layer 120 | color_class = classes[shade_index] 121 | 122 | # add the class of colour to results 123 | results.append((path, classes[shade_index])) 124 | 125 | return results 126 | 127 | 128 | if __name__ == "__main__": 129 | img_paths = ["img{}.png".format(i) for i in range(1,17)] 130 | 131 | # Load and classify images 132 | for x,y in classifier(img_paths): 133 | print(x.upper(), y) 134 | 135 | ``` 136 | 137 | 138 | Outcomes and Performance 139 | ------------------------ 140 | 141 | 142 | 143 | 144 | 145 | 146 | -------------------------------------------------------------------------------- /RGBYCM_Color_Classifier/architecture_nn_3.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ayivima/AI-SURFS/1b8f731cd38b7b3e8091e1ff8bb7929413ef739c/RGBYCM_Color_Classifier/architecture_nn_3.png -------------------------------------------------------------------------------- /RGBYCM_Color_Classifier/archtecture_nn_2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ayivima/AI-SURFS/1b8f731cd38b7b3e8091e1ff8bb7929413ef739c/RGBYCM_Color_Classifier/archtecture_nn_2.png -------------------------------------------------------------------------------- /RGBYCM_Color_Classifier/img1.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/ayivima/AI-SURFS/1b8f731cd38b7b3e8091e1ff8bb7929413ef739c/RGBYCM_Color_Classifier/img1.png -------------------------------------------------------------------------------- /RGBYCM_Color_Classifier/img10.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ayivima/AI-SURFS/1b8f731cd38b7b3e8091e1ff8bb7929413ef739c/RGBYCM_Color_Classifier/img10.png -------------------------------------------------------------------------------- /RGBYCM_Color_Classifier/img11.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ayivima/AI-SURFS/1b8f731cd38b7b3e8091e1ff8bb7929413ef739c/RGBYCM_Color_Classifier/img11.png -------------------------------------------------------------------------------- /RGBYCM_Color_Classifier/img12.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ayivima/AI-SURFS/1b8f731cd38b7b3e8091e1ff8bb7929413ef739c/RGBYCM_Color_Classifier/img12.png -------------------------------------------------------------------------------- /RGBYCM_Color_Classifier/img13.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ayivima/AI-SURFS/1b8f731cd38b7b3e8091e1ff8bb7929413ef739c/RGBYCM_Color_Classifier/img13.png -------------------------------------------------------------------------------- /RGBYCM_Color_Classifier/img14.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ayivima/AI-SURFS/1b8f731cd38b7b3e8091e1ff8bb7929413ef739c/RGBYCM_Color_Classifier/img14.png -------------------------------------------------------------------------------- /RGBYCM_Color_Classifier/img15.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/ayivima/AI-SURFS/1b8f731cd38b7b3e8091e1ff8bb7929413ef739c/RGBYCM_Color_Classifier/img15.png -------------------------------------------------------------------------------- /RGBYCM_Color_Classifier/img16.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ayivima/AI-SURFS/1b8f731cd38b7b3e8091e1ff8bb7929413ef739c/RGBYCM_Color_Classifier/img16.png -------------------------------------------------------------------------------- /RGBYCM_Color_Classifier/img2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ayivima/AI-SURFS/1b8f731cd38b7b3e8091e1ff8bb7929413ef739c/RGBYCM_Color_Classifier/img2.png -------------------------------------------------------------------------------- /RGBYCM_Color_Classifier/img3.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ayivima/AI-SURFS/1b8f731cd38b7b3e8091e1ff8bb7929413ef739c/RGBYCM_Color_Classifier/img3.png -------------------------------------------------------------------------------- /RGBYCM_Color_Classifier/img4.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ayivima/AI-SURFS/1b8f731cd38b7b3e8091e1ff8bb7929413ef739c/RGBYCM_Color_Classifier/img4.png -------------------------------------------------------------------------------- /RGBYCM_Color_Classifier/img5.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ayivima/AI-SURFS/1b8f731cd38b7b3e8091e1ff8bb7929413ef739c/RGBYCM_Color_Classifier/img5.png -------------------------------------------------------------------------------- /RGBYCM_Color_Classifier/img6.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/ayivima/AI-SURFS/1b8f731cd38b7b3e8091e1ff8bb7929413ef739c/RGBYCM_Color_Classifier/img6.png -------------------------------------------------------------------------------- /RGBYCM_Color_Classifier/img7.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ayivima/AI-SURFS/1b8f731cd38b7b3e8091e1ff8bb7929413ef739c/RGBYCM_Color_Classifier/img7.png -------------------------------------------------------------------------------- /RGBYCM_Color_Classifier/img8.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ayivima/AI-SURFS/1b8f731cd38b7b3e8091e1ff8bb7929413ef739c/RGBYCM_Color_Classifier/img8.png -------------------------------------------------------------------------------- /RGBYCM_Color_Classifier/img9.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ayivima/AI-SURFS/1b8f731cd38b7b3e8091e1ff8bb7929413ef739c/RGBYCM_Color_Classifier/img9.png -------------------------------------------------------------------------------- /RGBYCM_Color_Classifier/nn_in_py3.py: -------------------------------------------------------------------------------- 1 | """ 2 | 3 | A simple neural network built with core python, 4 | that classifies shades of red, green, blue, yellow, magenta, cyan. 5 | 6 | Built as a demonstration and exploration of the basic 7 | functionality of neural networks. 
8 | 9 | Victor Mawusi Ayi 10 | 11 | """ 12 | 13 | from math import exp 14 | from PIL import Image 15 | import numpy as np 16 | import random 17 | 18 | 19 | def node(features, weights): 20 | 21 | bias = 1 22 | 23 | # sigmoid activation function 24 | def activation(x): 25 | return ( 26 | 1/(1 + exp(-x)) 27 | ) 28 | 29 | # get inner product of features and weights 30 | feature_weight_dot = sum( 31 | [x*y for x,y in zip(features, weights)] 32 | ) 33 | 34 | # add dot product of features and weight to bias 35 | linear_result = feature_weight_dot + bias 36 | 37 | 38 | return round(activation(linear_result),3) 39 | 40 | 41 | def softmax(X): 42 | 43 | exps_of_x = [exp(i) for i in X] 44 | sum_of_exps = sum(exps_of_x) 45 | 46 | return [x_i/sum_of_exps for x_i in exps_of_x] 47 | 48 | 49 | def classifier(list_of_image_paths): 50 | classes = [ 51 | "RED", 52 | "GREEN", 53 | "BLUE", 54 | "YELLOW", 55 | "CYAN", 56 | "MAGENTA" 57 | ] 58 | 59 | weights1 = [ 60 | [0.8,-0.6,-0.6], 61 | [-0.6,0.8,-0.6], 62 | [-0.6,-0.6,0.8] 63 | ] 64 | 65 | weights2 = [ 66 | [0.8,-0.6,-0.6], 67 | [-0.6,0.8,-0.6], 68 | [-0.6,-0.6,0.8], 69 | [0.8,0.8,-0.6], 70 | [-0.6,0.8,0.8], 71 | [0.8,-0.6,0.8] 72 | ] 73 | 74 | results = [] 75 | 76 | for path in list_of_image_paths: 77 | image = Image.open(path) 78 | arrays_from_image = np.array(image) 79 | 80 | # select random pixel. This is for demonstration purposes. 
81 | select_point = random.randint(0,49) 82 | 83 | input = arrays_from_image[select_point][0] 84 | 85 | # hidden layers 86 | hiddens1 = [node(input, weight) for weight in weights1] 87 | hiddens2 = [node(hiddens1, weight) for weight in weights2] 88 | 89 | # output layer 90 | output = softmax(hiddens2) 91 | 92 | # get the maximum probability 93 | max_probability = max(output) 94 | 95 | # get the first index of the maximum probability 96 | shade_index = output.index(max_probability) 97 | 98 | # get the class of the layer 99 | color_class = classes[shade_index] 100 | 101 | # add the class of colour to results 102 | results.append((path, classes[shade_index])) 103 | 104 | return results 105 | 106 | 107 | if __name__ == "__main__": 108 | img_paths = ["img{}.png".format(i) for i in range(1,17)] 109 | 110 | # Load and classify images 111 | for x,y in classifier(img_paths): 112 | print(x.upper(), y) 113 | 114 | -------------------------------------------------------------------------------- /RGBYCM_Color_Classifier/nn_py3_shot.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ayivima/AI-SURFS/1b8f731cd38b7b3e8091e1ff8bb7929413ef739c/RGBYCM_Color_Classifier/nn_py3_shot.png -------------------------------------------------------------------------------- /RGBYCM_Color_Classifier/shot_of_images3.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ayivima/AI-SURFS/1b8f731cd38b7b3e8091e1ff8bb7929413ef739c/RGBYCM_Color_Classifier/shot_of_images3.png -------------------------------------------------------------------------------- /RGBYCM_Color_Classifier/snap_3.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ayivima/AI-SURFS/1b8f731cd38b7b3e8091e1ff8bb7929413ef739c/RGBYCM_Color_Classifier/snap_3.png 
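Before moving on to the Red, Green, Blue classifier, the forward pass of nn_in_py3.py above can be traced without any image files. The sketch below reuses the `node`, `softmax`, weight, and class definitions from the script, and feeds in a hand-made pure-red pixel (the pixel itself is an invented input for illustration). Because the sigmoid saturates on raw 0-255 values, the RED, YELLOW, and MAGENTA nodes tie, and the first-in tie-breaking described in the README picks RED.

```python
from math import exp

# node() and softmax() are copied from nn_in_py3.py above
def node(features, weights):
    bias = 1
    # inner product of features and weights, plus bias, through a sigmoid
    linear_result = sum(x * y for x, y in zip(features, weights)) + bias
    return round(1 / (1 + exp(-linear_result)), 3)

def softmax(X):
    exps_of_x = [exp(i) for i in X]
    sum_of_exps = sum(exps_of_x)
    return [x_i / sum_of_exps for x_i in exps_of_x]

classes = ["RED", "GREEN", "BLUE", "YELLOW", "CYAN", "MAGENTA"]
weights1 = [
    [0.8, -0.6, -0.6],
    [-0.6, 0.8, -0.6],
    [-0.6, -0.6, 0.8],
]
weights2 = [
    [0.8, -0.6, -0.6],
    [-0.6, 0.8, -0.6],
    [-0.6, -0.6, 0.8],
    [0.8, 0.8, -0.6],
    [-0.6, 0.8, 0.8],
    [0.8, -0.6, 0.8],
]

# hand-made pure-red pixel, standing in for arrays_from_image[select_point][0]
pixel = [255, 0, 0]

hiddens1 = [node(pixel, w) for w in weights1]   # saturates to [1.0, 0.0, 0.0]
hiddens2 = [node(hiddens1, w) for w in weights2]
output = softmax(hiddens2)

# RED, YELLOW, and MAGENTA all receive 0.8 from the saturated red node,
# so their probabilities tie; list.index() returns the first maximum: RED
print(classes[output.index(max(output))])  # RED
```

Swapping in another saturating pixel, such as [0, 255, 0], shows the same tie pattern among the second-layer nodes that weight that channel positively.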
-------------------------------------------------------------------------------- /Red_Green_Blue_Classifier/RGB_Classifier.md: -------------------------------------------------------------------------------- 1 | 2 | RED, GREEN, BLUE CLASSIFIER IMPLEMENTED IN STANDARD/CORE PYTHON 3 | =========================================================================== 4 | *Victor Mawusi Ayi* 5 | 6 | A simple neural network built with core python, that classifies shades of red, green, and blue. 7 | An advancement on the 'shades of green' classifier, this neural network has one input layer, 8 | one hidden layer with 3 nodes, and an output layer with a softmax activation function. 9 | This is a continuation of the demonstration and exploration of the basic functionality of neural networks 10 | using core python. Consequently, this will strengthen the appreciation of PyTorch. 11 | 12 | Architecture 13 | ------------ 14 | 15 | 16 | Each selected pixel goes through each of the nodes of the hidden layer, each node selective for a particular color. Then, the outputs from the hidden nodes are passed through the softmax function at the output node, and the class is assigned based on the maximum probability, and on a first-in selection principle in the case of a tie. This is just the beginning; it will get refined as I add more hidden layers.
17 | 18 | Sample Images 19 | ------------- 20 | 21 | 22 | Code 23 | ---- 24 | 25 | ``` 26 | from math import exp 27 | from PIL import Image 28 | import numpy as np 29 | import random 30 | 31 | 32 | def node(features, weights): 33 | 34 | bias = 1 35 | 36 | # sigmoid activation function 37 | def activation(x): 38 | return ( 39 | 1/(1 + exp(-x)) 40 | ) 41 | 42 | # get inner product of features and weights 43 | feature_weight_dot = sum( 44 | [x*y for x,y in zip(features, weights)] 45 | ) 46 | 47 | # add dot product of features and weight to bias 48 | linear_result = feature_weight_dot + bias 49 | 50 | 51 | return round(activation(linear_result),3) 52 | 53 | 54 | def softmax(X): 55 | 56 | exps_of_x = [exp(i) for i in X] 57 | sum_of_exps = sum(exps_of_x) 58 | 59 | return [x_i/sum_of_exps for x_i in exps_of_x] 60 | 61 | 62 | def classifier(list_of_image_paths): 63 | classes = [ 64 | "RED", 65 | "GREEN", 66 | "BLUE", 67 | ] 68 | 69 | weights = [ 70 | [0.8,-0.6,-0.6], 71 | [-0.6,0.8,-0.6], 72 | [-0.6,-0.6,0.8] 73 | ] 74 | results = [] 75 | 76 | for path in list_of_image_paths: 77 | image = Image.open(path) 78 | arrays_from_image = np.array(image) 79 | 80 | # select random pixel. This is for demonstration purposes. 
81 | select_point = random.randint(0,49) 82 | 83 | input = arrays_from_image[select_point][0] 84 | 85 | # hidden layer 86 | hiddens = [node(input, weight) for weight in weights] 87 | 88 | # output layer 89 | output = softmax(hiddens) 90 | 91 | # get the maximum probability 92 | max_probability = max(output) 93 | 94 | # get the first index of the maximum probability 95 | shade_index = output.index(max_probability) 96 | 97 | # get the color class 98 | color_class = classes[shade_index] 99 | 100 | # add the class of colour to results 101 | results.append((path, classes[shade_index])) 102 | 103 | return results 104 | 105 | 106 | if __name__ == "__main__": 107 | img_paths = ["img{}.png".format(i) for i in range(1,17)] 108 | 109 | # Load and classify images 110 | for x,y in classifier(img_paths): 111 | print(x.upper(), y) 112 | 113 | ``` 114 | 115 | 116 | Outcomes and Performance 117 | ------------------------ 118 | 119 | 120 | 121 | 1. Using the weights [0.8,-0.6,-0.6], [-0.6,0.8,-0.6], [-0.6,-0.6,0.8] for the RED, GREEN, BLUE nodes of the hidden layer. 122 | 123 | ``` 124 | [x] IMG1.PNG GREEN 125 | [x] IMG2.PNG GREEN 126 | [x] IMG3.PNG GREEN 127 | [x] IMG4.PNG BLUE 128 | [x] IMG5.PNG BLUE 129 | [x] IMG6.PNG RED 130 | [x] IMG7.PNG BLUE 131 | [x] IMG8.PNG RED 132 | 133 | ``` 134 | 135 | 136 | 137 | 138 | ``` 139 | [x] IMG9.PNG GREEN 140 | [x] IMG10.PNG GREEN 141 | [x] IMG11.PNG RED 142 | [x] IMG12.PNG GREEN 143 | [x] IMG13.PNG RED 144 | [x] IMG14.PNG BLUE 145 | [x] IMG15.PNG GREEN 146 | [x] IMG16.PNG RED 147 | 148 | ``` 149 | 150 | Conclusion 151 | ---------- 152 | 153 | - It does well on red, green, and blue colors; they are all it knows for now 154 | - Python and Neural networks are amazing!
:) 155 | -------------------------------------------------------------------------------- /Red_Green_Blue_Classifier/archtecture_nn_2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ayivima/AI-SURFS/1b8f731cd38b7b3e8091e1ff8bb7929413ef739c/Red_Green_Blue_Classifier/archtecture_nn_2.png -------------------------------------------------------------------------------- /Red_Green_Blue_Classifier/img1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ayivima/AI-SURFS/1b8f731cd38b7b3e8091e1ff8bb7929413ef739c/Red_Green_Blue_Classifier/img1.png -------------------------------------------------------------------------------- /Red_Green_Blue_Classifier/img10.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ayivima/AI-SURFS/1b8f731cd38b7b3e8091e1ff8bb7929413ef739c/Red_Green_Blue_Classifier/img10.png -------------------------------------------------------------------------------- /Red_Green_Blue_Classifier/img11.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ayivima/AI-SURFS/1b8f731cd38b7b3e8091e1ff8bb7929413ef739c/Red_Green_Blue_Classifier/img11.png -------------------------------------------------------------------------------- /Red_Green_Blue_Classifier/img12.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ayivima/AI-SURFS/1b8f731cd38b7b3e8091e1ff8bb7929413ef739c/Red_Green_Blue_Classifier/img12.png -------------------------------------------------------------------------------- /Red_Green_Blue_Classifier/img13.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ayivima/AI-SURFS/1b8f731cd38b7b3e8091e1ff8bb7929413ef739c/Red_Green_Blue_Classifier/img13.png 
-------------------------------------------------------------------------------- /Red_Green_Blue_Classifier/img14.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ayivima/AI-SURFS/1b8f731cd38b7b3e8091e1ff8bb7929413ef739c/Red_Green_Blue_Classifier/img14.png -------------------------------------------------------------------------------- /Red_Green_Blue_Classifier/img15.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ayivima/AI-SURFS/1b8f731cd38b7b3e8091e1ff8bb7929413ef739c/Red_Green_Blue_Classifier/img15.png -------------------------------------------------------------------------------- /Red_Green_Blue_Classifier/img16.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ayivima/AI-SURFS/1b8f731cd38b7b3e8091e1ff8bb7929413ef739c/Red_Green_Blue_Classifier/img16.png -------------------------------------------------------------------------------- /Red_Green_Blue_Classifier/img2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ayivima/AI-SURFS/1b8f731cd38b7b3e8091e1ff8bb7929413ef739c/Red_Green_Blue_Classifier/img2.png -------------------------------------------------------------------------------- /Red_Green_Blue_Classifier/img3.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ayivima/AI-SURFS/1b8f731cd38b7b3e8091e1ff8bb7929413ef739c/Red_Green_Blue_Classifier/img3.png -------------------------------------------------------------------------------- /Red_Green_Blue_Classifier/img4.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ayivima/AI-SURFS/1b8f731cd38b7b3e8091e1ff8bb7929413ef739c/Red_Green_Blue_Classifier/img4.png 
-------------------------------------------------------------------------------- /Red_Green_Blue_Classifier/img5.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ayivima/AI-SURFS/1b8f731cd38b7b3e8091e1ff8bb7929413ef739c/Red_Green_Blue_Classifier/img5.png -------------------------------------------------------------------------------- /Red_Green_Blue_Classifier/img6.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ayivima/AI-SURFS/1b8f731cd38b7b3e8091e1ff8bb7929413ef739c/Red_Green_Blue_Classifier/img6.png -------------------------------------------------------------------------------- /Red_Green_Blue_Classifier/img7.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ayivima/AI-SURFS/1b8f731cd38b7b3e8091e1ff8bb7929413ef739c/Red_Green_Blue_Classifier/img7.png -------------------------------------------------------------------------------- /Red_Green_Blue_Classifier/img8.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ayivima/AI-SURFS/1b8f731cd38b7b3e8091e1ff8bb7929413ef739c/Red_Green_Blue_Classifier/img8.png -------------------------------------------------------------------------------- /Red_Green_Blue_Classifier/img9.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ayivima/AI-SURFS/1b8f731cd38b7b3e8091e1ff8bb7929413ef739c/Red_Green_Blue_Classifier/img9.png -------------------------------------------------------------------------------- /Red_Green_Blue_Classifier/nn_in_py2.py: -------------------------------------------------------------------------------- 1 | """ 2 | 3 | A simple neural network built with core python, 4 | that classifies shades of red, green, and blue. 
5 | 6 | Built as a demonstration and exploration of the basic 7 | functionality of neural networks. 8 | 9 | Victor Mawusi Ayi 10 | 11 | """ 12 | 13 | from math import exp 14 | from PIL import Image 15 | import numpy as np 16 | import random 17 | 18 | 19 | def node(features, weights): 20 | 21 | bias = 1 22 | 23 | # sigmoid activation function 24 | def activation(x): 25 | return ( 26 | 1/(1 + exp(-x)) 27 | ) 28 | 29 | # get inner product of features and weights 30 | feature_weight_dot = sum( 31 | [x*y for x,y in zip(features, weights)] 32 | ) 33 | 34 | # add dot product of features and weight to bias 35 | linear_result = feature_weight_dot + bias 36 | 37 | 38 | return round(activation(linear_result),3) 39 | 40 | 41 | def softmax(X): 42 | 43 | exps_of_x = [exp(i) for i in X] 44 | sum_of_exps = sum(exps_of_x) 45 | 46 | return [x_i/sum_of_exps for x_i in exps_of_x] 47 | 48 | 49 | def classifier(list_of_image_paths): 50 | classes = [ 51 | "RED", 52 | "GREEN", 53 | "BLUE", 54 | ] 55 | 56 | weights = [ 57 | [0.8,-0.6,-0.6], 58 | [-0.6,0.8,-0.6], 59 | [-0.6,-0.6,0.8] 60 | ] 61 | results = [] 62 | 63 | for path in list_of_image_paths: 64 | image = Image.open(path) 65 | arrays_from_image = np.array(image) 66 | 67 | # select random pixel. This is for demonstration purposes. 
68 |         select_point = random.randint(0,49)
69 | 
70 |         input = arrays_from_image[select_point][0]
71 | 
72 |         # hidden layers
73 |         hiddens = [node(input, weight) for weight in weights]
74 | 
75 |         # output layers
76 |         output = softmax(hiddens)
77 | 
78 |         # get the maximum probability
79 |         max_probability = max(output)
80 | 
81 |         # get the first index of the maximum probability
82 |         shade_index = output.index(max_probability)
83 | 
84 |         # map the index to its color class
85 |         color_class = classes[shade_index]
86 | 
87 |         # add the color class to results
88 |         results.append((path, color_class))
89 | 
90 |     return results
91 | 
92 | 
93 | if __name__ == "__main__":
94 |     img_paths = ["img{}.png".format(i) for i in range(1,17)]
95 | 
96 |     # Load and classify images
97 |     for x,y in classifier(img_paths):
98 |         print(x.upper(), y)
99 | 
100 | 
--------------------------------------------------------------------------------
/Red_Green_Blue_Classifier/shot_of_images2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ayivima/AI-SURFS/1b8f731cd38b7b3e8091e1ff8bb7929413ef739c/Red_Green_Blue_Classifier/shot_of_images2.png
--------------------------------------------------------------------------------
/Red_Green_Blue_Classifier/shot_of_outcomes2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ayivima/AI-SURFS/1b8f731cd38b7b3e8091e1ff8bb7929413ef739c/Red_Green_Blue_Classifier/shot_of_outcomes2.png
--------------------------------------------------------------------------------
/Tinkering_With_Tensors/Explaining_Tensors.md:
--------------------------------------------------------------------------------
1 | TENSORS: ARE THEY SCALARS, VECTORS, MATRICES?
2 | ===============================================
3 | 
4 | In the past, tensors were probably most popular among students of the human body, especially of the musculoskeletal system. As modern technology keeps breaking limits through artificial intelligence, however, the term is becoming mainstream, and the trend frequently stirs up the question: What is a tensor?
5 | 
6 | What is a tensor?
7 | =================
8 | 
9 | Given the complexity of defining a tensor, I will approach it in steps.
10 | 
11 | Step 1
12 | ------
13 | Firstly, appreciate a tensor as the mathematical representation of an interest.
14 | An interest can be the color of an object, the speed of a car, the price of an ice cream, the nationality of a person, or the shape of someone's nose.
15 | 
16 | Can everything be expressed in numbers? Certainly, yes! But how?
17 | Supposing we want to categorise 4 types of noses:
18 | * Fleshy Nose
19 | * Bumpy Nose
20 | * Snub Nose
21 | * Hawk Nose
22 | 
23 | Fleshy Nose, being the first type, can be represented by `1`, Bumpy Nose by `2`, Snub Nose by `3`, and Hawk Nose by `4`. Afterwards, if I were sent the number `1` and told it was a type of nose, I would automatically know it was referring to a Fleshy Nose.
24 | 
25 | Why do we have to represent things in numbers anyway?
26 | 
27 | Firstly, numbers are language agnostic. In other words, irrespective of whether a person understands only Greek, Latin, French, English, or Swahili, among others, we all understand numbers. Thus, it becomes easier to pass information across varied backgrounds using numbers, as they are a sort of universal language.
28 | 
29 | Secondly, computers understand only zeros and ones (numbers), and at the end of it all, whatever we want to do with our computers gets converted to numbers. So numbers are foundational in computing.
30 | 
31 | Back to the noses example:
32 | `1` stands for Fleshy Nose. Thus, `1` is a tensor representing a type of nose.
Furthermore, `1` is a type of tensor called a scalar, because it has just one dimension - it is just one number. Step 2 talks about another type of tensor that can have more dimensions.
33 | 
34 | Step 2
35 | ------
36 | Appreciate that interests are multifaceted in real life. In other words, interests are made of several parts or dimensions.
37 | 
38 | Let's take the price of an ice cream. Assume it is $5.00.
39 | You might think it has just one dimension, but it has several dimensions:
40 | * the currency sign ($)
41 | * the first group of numbers before the dot (5)
42 | * the dot (it could have been a comma in another locale)
43 | * the group of numbers after the dot
44 | 
45 | We can rewrite the price as a group of numbers.
46 | * Let's assign a number to the currency. There are about 180 currencies in the world, and we can decide to assign `1` to the US dollar. Subsequently, anytime we see `1` representing a currency, it is a US dollar.
47 | * We can leave the group of numbers before the dot or comma as is, since they are already numbers.
48 | * We can assume that the position of the dot can be occupied by just two symbols (technically, decimal separators): a dot or a comma. Thus, a dot becomes `1`, and a comma `2`.
49 | * We will leave the numbers after the dot as is.
50 | * Finally, let's use the format ``(currency, numbers before dot/comma, dot/comma, numbers after dot/comma)`` to rewrite the price of the ice cream, in its four dimensions, like this:
51 | ```
52 | price_of_ice_cream = (1, 5, 1, 0)
53 | ```
54 | In a locale like Canada (French), where a comma is used as the decimal separator, an ice cream costing 5 dollars could be priced as $5,00. Let's assign `2` to the Canadian dollar.
The price could be written as:
55 | ```
56 | price_of_ice_cream = (2, 5, 2, 0)
57 | ```
58 | The first `2` is for the Canadian dollar, the second value `5` is for the first part of the price value, the third value `2` is for the comma (decimal separator), and `0` is for the last part of the price value.
59 | 
60 | What we just did was represent the price of an ice cream as a tensor. This time it is a vector, not a scalar, and it is a four-dimensional vector because it uses four numbers (dimensions). Depending on what we want to do, it could become a two-dimensional vector (x, y), a three-dimensional vector (x, y, z), or even a seven-dimensional vector (x, y, z, a, b, c, d), if we gleaned more dimensions from the price.
61 | 
62 | Step 3
63 | ------
64 | Appreciate that an object or event can have multiple points of interest, which themselves have several dimensions.
65 | 
66 | In describing an ice cream, points of interest can include:
67 | * price
68 | * producer
69 | * flavor
70 | 
71 | Let's assume the following for one ice cream (Ice Cream A):
72 | * Price = US$5.00
73 | * Producer is Dovry Ice Creams, with a producer ID of 0052
74 | * The flavor is strawberry - the strawberry flavor has a code of 2153
75 | 
76 | Let's assume these for another ice cream (Ice Cream B):
77 | * Price = CAN$5,00
78 | * Producer is Resty Ice Creams, with a producer ID of 1045
79 | * The flavor is strawberry, maintaining the code of 2153
80 | 
81 | We already have the vectors for price. We can easily create vectors for the producer from the digits of the ID, and for the flavor from the digits of the flavor code.
82 | 
83 | Ice cream A description in vectors:
84 | ```
85 | # price is given in step 2 above
86 | price_of_ice_cream_A = (1, 5, 1, 0)
87 | producer_of_ice_cream_A = (0, 0, 5, 2)
88 | flavor_of_ice_cream_A = (2, 1, 5, 3)
89 | 
90 | ```
91 | Ice cream B description in vectors:
92 | ```
93 | # price is given in step 2 above
94 | price_of_ice_cream_B = (2, 5, 2, 0)
95 | producer_of_ice_cream_B = (1, 0, 4, 5)
96 | flavor_of_ice_cream_B = (2, 1, 5, 3)
97 | 
98 | ```
99 | But we need to be able to transport all the vectors related to a particular ice cream together.
100 | 
101 | We might end up with these:
102 | ```
103 | ice_cream_A = [(1, 5, 1, 0), (0, 0, 5, 2), (2, 1, 5, 3)]
104 | ice_cream_B = [(2, 5, 2, 0), (1, 0, 4, 5), (2, 1, 5, 3)]
105 | ```
106 | And we can rewrite this more beautifully:
107 | ```
108 | ice_cream_A = [(1, 5, 1, 0),
109 |                (0, 0, 5, 2),
110 |                (2, 1, 5, 3)]
111 | 
112 | ice_cream_B = [(2, 5, 2, 0),
113 |                (1, 0, 4, 5),
114 |                (2, 1, 5, 3)]
115 | 
116 | ```
117 | These are also tensors. This time they are tensors carrying information from several vectors, and they look like tables. Each vector for a particular ice cream becomes a row in the tensor, and the dimensions form the columns. This type of tensor is called a matrix (plural: matrices).
118 | 
119 | We can remove the brackets and commas to clarify the matrices:
120 | ```
121 | ice_cream_A = [ 1 5 1 0
122 |                 0 0 5 2
123 |                 2 1 5 3 ]
124 | 
125 | ice_cream_B = [ 2 5 2 0
126 |                 1 0 4 5
127 |                 2 1 5 3 ]
128 | 
129 | ```
130 | There we go! Tensors can be matrices formed from several vectors. If it helps, you can look at matrices as tables of information.
131 | 
132 | Step 4
133 | ------
134 | Appreciate that often we want to carry information about more than one thing around.
135 | 
136 | In step 3, we used a matrix to carry several pieces of information about only one particular ice cream. However, what is frequently done is to use matrices to represent information about the same interest from different objects.
For example, if we are interested in comparing prices, a matrix can carry information about the prices of different things.
137 | 
138 | Let's illustrate using the two ice cream prices in step 2:
139 | 
140 | We can have a matrix called prices_of_ice_creams, and it will carry the two prices, US$5.00 and CAN$5,00, as demonstrated below:
141 | ```
142 | prices_of_ice_creams = [(1, 5, 1, 0),
143 |                         (2, 5, 2, 0)]
144 | 
145 | ```
146 | Let's clean it up:
147 | ```
148 | prices_of_ice_creams = [ 1 5 1 0
149 |                          2 5 2 0 ]
150 | 
151 | ```
152 | And we have another matrix, with 2 vectors (rows) having four dimensions (columns). This is a tensor, a 2 X 4 matrix, as it has 2 rows and 4 columns.
153 | 
154 | Takeaways
155 | =========
156 | * Tensors represent things or objects mathematically.
157 | * If the number representing an object is single, in other words, it has one dimension, it is called a `scalar`. For example:
158 | ```
159 | dollar_sign = 1
160 | ```
161 | * If a particular representation has more than one dimension, the tensor is called a vector. For example:
162 | ```
163 | price_of_ice_cream_A = (1, 5, 1, 0)
164 | ```
165 | * When a tensor has information about several things, like the prices of several ice creams, it is called a matrix. A matrix can be seen as a table, with each row carrying information about a particular item. For example:
166 | ```
167 | prices_of_ice_creams = [ 1 5 1 0
168 |                          2 5 2 0 ]
169 | ```
170 | Here each row contains the price information of one ice cream.
171 | * Tensors can be scalars, vectors, matrices, or a combination of them.
172 | * You can look at vectors as a group of scalars.
173 | * You can look at matrices as a group of vectors.
174 | * A vector is a sequence of numbers; an array, list, or tuple, depending on the programming language used. The number of dimensions of a vector refers to the number of items in it. `[1, 2, 3]` is a 3-dimensional vector, and `[2, 3, 4, 5, 6]` is a 5-dimensional vector.
175 | * A matrix is like a table and can have rows and columns.
176 | * A matrix can be described by the number of its rows and columns. The following has `2` rows and `2` columns, and is called a 2X2 matrix. The first row is `1, 5` and the second row is `2, 3`. The first column, from top to bottom, has `1, 2` and the second column has `5, 3`.
177 | ```
178 | a_matrix = [ 1 5
179 |              2 3 ]
180 | ```
181 | 
182 | Given this mathematical introduction to tensors, we can further explore their adoption in computer science. For example, in Tensorflow, we might come across a tensor that contains text and not numbers...Oops! Fret not! It all comes down to numbers along the line anyway, and with an understanding of the numerical foundations of computing, it will all fall into place.
183 | 
184 | Happy AI programming!
185 | 
186 | ----
187 | 
188 | *Copyright 2019, Victor Mawusi Ayi. All Rights Reserved.*
189 | 
--------------------------------------------------------------------------------
/Tinkering_With_Tensors/Spontaneous_Matrix.rst:
--------------------------------------------------------------------------------
1 | 
2 | PLAYING WITH MATRICES
3 | =====================
4 | 
5 | [ Victor Mawusi Ayi ]
6 | 
7 | Disclaimer! This is a spontaneous play-around with tensors in
8 | Python, which may need some optimisation. This is just for
9 | demonstration, and a glimpse of a project underway.
10 | 
11 | Just thought if we played around with code a bit, we would get to the place where we could quickly mobilise a small neural network when we do not have the resources to run powerful libraries... And we could do it from scratch.
12 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
13 | 
14 | Let's see a rough sketch of how we could quickly mobilise a matrix class, and possibly optimise it later if applicable.
15 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
16 | 
17 | MATRIX CLASS
18 | ============
19 | 
20 | Let's write a small matrix class
21 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
22 | 
23 | .. code:: ipython3
24 | 
25 |     class Matrix():
26 | 
27 |         def __init__(self, seq_of_seq):
28 | 
29 |             # seq_of_seq stands for sequence of sequences,
30 |             # my fancy way of referring to N-dimensional arrays
31 |             self.value = seq_of_seq
32 | 
33 |             # for an N-dimensional array,
34 |             # the number of rows is simply its length
35 |             self.num_of_rows = len(seq_of_seq)
36 | 
37 |             if self.num_of_rows < 2:
38 |                 raise TypeError(
39 |                     "A matrix cannot have just one row!"
40 |                 )
41 | 
42 |             # for an array of arrays, which translates to a table or matrix,
43 |             # the number of columns should be equal to the
44 |             # length of its longest sub-array,
45 |             # with the assumption that the shorter arrays could be extended
46 |             # with zeros into the dimensional space of the longest array.
47 |             self.num_of_cols = len(
48 |                 sorted(seq_of_seq, key = lambda seq: len(seq))[-1]
49 |             )
50 | 
51 |             max_length = lambda seq: max([len(str(i)) for i in seq])
52 |             long_num_len = 1
53 | 
54 |             for t in seq_of_seq:
55 |                 new_max = max_length(t)
56 |                 if new_max > long_num_len:
57 |                     long_num_len = new_max
58 | 
59 |             self.long_num_len = long_num_len + 2
60 | 
61 |             if self.num_of_cols < 2:
62 |                 raise TypeError(
63 |                     "A matrix cannot have just one column!"
64 | ) 65 | 66 | self.shape = (self.num_of_rows, self.num_of_cols) 67 | 68 | def __repr__(self): 69 | return "matrix [{}]".format( 70 | "\n".ljust(9).join([ 71 | " ".join([ 72 | str(num).ljust(self.long_num_len) for num in self.extend(array) 73 | ]).rjust(10, " ") for array in self.value 74 | ]).rstrip() 75 | ) 76 | 77 | def describe(self): 78 | return "{} X {} Matrix".format(*self.shape) 79 | 80 | def extend(self, array): 81 | 82 | extension_list = [0] * (self.num_of_cols - len(array)) 83 | return array + extension_list 84 | 85 | 86 | ------------------------------------------------------------------------------------------ 87 | ========================================================================================== 88 | 89 | PLAYGROUND STARTS HERE... 90 | ========================= 91 | 92 | Example 1 93 | ~~~~~~~~~ 94 | 95 | :: 96 | 97 | >>> a = Matrix([[2,3], [4,5], [5, 6, 7]]) 98 | >>> a 99 | matrix [2 3 0 100 | 4 5 0 101 | 5 6 7] 102 | 103 | :: 104 | 105 | >>> a.shape 106 | (3, 3) 107 | 108 | 109 | 110 | Example 2 111 | ~~~~~~~~~ 112 | 113 | .. 
code:: ipython3
114 | 
115 |     >>> b = Matrix([[0.2,0.4567,0.34], [0.657, 8.9, 7], [90.8762, 89736.09, 562.89]])
116 |     >>> b
117 |     matrix [0.2        0.4567     0.34
118 |             0.657      8.9        7
119 |             90.8762    89736.09   562.89]
120 | 
121 | 
122 | ::
123 | 
124 |     >>> b.describe()
125 |     '3 X 3 Matrix'
126 | 
127 | 
128 | 
129 | Example 3
130 | ---------
131 | 
132 | ::
133 | 
134 |     >>> c = Matrix([[0.2,0.4567], [0.657, 8.9, 7], [90.8762, 89736.09, 562.89, 9983.654]])
135 |     >>> c
136 |     matrix [0.2        0.4567     0          0
137 |             0.657      8.9        7          0
138 |             90.8762    89736.09   562.89     9983.654]
139 | 
140 | ::
141 | 
142 |     >>> c.shape
143 |     (3, 4)
144 | 
145 | 
--------------------------------------------------------------------------------
/Udacity_DL_With_Pytorch_Exercises/Part 2 - Neural Networks in PyTorch (Exercises).md:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ayivima/AI-SURFS/1b8f731cd38b7b3e8091e1ff8bb7929413ef739c/Udacity_DL_With_Pytorch_Exercises/Part 2 - Neural Networks in PyTorch (Exercises).md
--------------------------------------------------------------------------------
/Udacity_DL_With_Pytorch_Exercises/README.rst:
--------------------------------------------------------------------------------
1 | 
2 | 
--------------------------------------------------------------------------------
/cmap_gray_behavior/README.md:
--------------------------------------------------------------------------------
1 | 
2 | 
--------------------------------------------------------------------------------
/cmap_gray_behavior/cmap_gray_demo.md:
--------------------------------------------------------------------------------
1 | 
2 | # Demonstrating the display patterns of images with uniform and heterogeneous intensities, using the gray color map in matplotlib
3 | 
4 | Victor Mawusi Ayi
5 | 
6 | A mask obtained using cv2.inRange, in which all pixels of the reference image passed, was displayed by `plt.imshow` as black when using `cmap="gray"`.
The contention was that the pixel intensities in the image were all 255. In other words, the mask was an all-white image.
7 | 
8 | On experimentation, as demonstrated below, the observation was that images with uniform pixel intensities always get displayed as black by `plt.imshow` with `cmap="gray"`. If we display an image whose pixels all have an intensity of 150, it will display as black. Likewise, if we display an image with all its pixels at an intensity of 200, it will display as black. If we display an all-white image, where all pixel intensities are 255, it will still display as black. Once an image has a uniform intensity, as in our "all white" case, `plt.imshow()` with `cmap="gray"` displays a black image.
9 | 
10 | # Understanding the behaviour of plt.imshow(img, cmap="gray")
11 | 
12 | ```python
13 | import matplotlib.pyplot as plt
14 | import numpy as np
15 | 
16 | import random
17 | 
18 | %matplotlib inline
19 | ```
20 | 
21 | 
22 | #### Image with all pixels having intensity of 100
23 | 
24 | 
25 | ```python
26 | img = [[100 for x in range(10)] for y in range(10)]
27 | 
28 | # changing to a numpy array just for its better display of arrays;
29 | # pyplot would still display the image as a regular python list
30 | img = np.array(img)
31 | print(img)
32 | 
33 | plt.imshow(img, cmap="gray")
34 | ```
35 | 
36 |     [[100 100 100 100 100 100 100 100 100 100]
37 |      [100 100 100 100 100 100 100 100 100 100]
38 |      [100 100 100 100 100 100 100 100 100 100]
39 |      [100 100 100 100 100 100 100 100 100 100]
40 |      [100 100 100 100 100 100 100 100 100 100]
41 |      [100 100 100 100 100 100 100 100 100 100]
42 |      [100 100 100 100 100 100 100 100 100 100]
43 |      [100 100 100 100 100 100 100 100 100 100]
44 |      [100 100 100 100 100 100 100 100 100 100]
45 |      [100 100 100 100 100 100 100 100 100 100]]
46 | 
47 | 
48 | 
49 | 
50 | 
51 | 
52 | 
53 | 
54 | 
55 | 
56 | ![png](output_5_2.png)
57 | 
58 | 
59 | #### Image with all pixels having intensity of 255
60 | 
61 | 
62 | ```python
63 | img = [[255 for x in range(10)] for
y in range(10)] 64 | 65 | # changing to numpy array just for its better display of arrays 66 | # pyplot would still display the image as a regular python list 67 | img = np.array(img) 68 | 69 | print(img) 70 | plt.imshow(img, cmap="gray") 71 | ``` 72 | 73 | [[255 255 255 255 255 255 255 255 255 255] 74 | [255 255 255 255 255 255 255 255 255 255] 75 | [255 255 255 255 255 255 255 255 255 255] 76 | [255 255 255 255 255 255 255 255 255 255] 77 | [255 255 255 255 255 255 255 255 255 255] 78 | [255 255 255 255 255 255 255 255 255 255] 79 | [255 255 255 255 255 255 255 255 255 255] 80 | [255 255 255 255 255 255 255 255 255 255] 81 | [255 255 255 255 255 255 255 255 255 255] 82 | [255 255 255 255 255 255 255 255 255 255]] 83 | 84 | 85 | 86 | 87 | 88 | 89 | 90 | 91 | 92 | 93 | ![png](output_7_2.png) 94 | 95 | 96 | #### Image with all pixels having intensity of 50 97 | 98 | 99 | ```python 100 | img = [[50 for x in range(10)] for y in range(10)] 101 | 102 | # changing to numpy array just for its better display of arrays 103 | # pyplot would still display the image as a regular python list 104 | img = np.array(img) 105 | 106 | print(img) 107 | plt.imshow(img, cmap="gray") 108 | ``` 109 | 110 | [[50 50 50 50 50 50 50 50 50 50] 111 | [50 50 50 50 50 50 50 50 50 50] 112 | [50 50 50 50 50 50 50 50 50 50] 113 | [50 50 50 50 50 50 50 50 50 50] 114 | [50 50 50 50 50 50 50 50 50 50] 115 | [50 50 50 50 50 50 50 50 50 50] 116 | [50 50 50 50 50 50 50 50 50 50] 117 | [50 50 50 50 50 50 50 50 50 50] 118 | [50 50 50 50 50 50 50 50 50 50] 119 | [50 50 50 50 50 50 50 50 50 50]] 120 | 121 | 122 | 123 | 124 | 125 | 126 | 127 | 128 | 129 | 130 | ![png](output_9_2.png) 131 | 132 | 133 | #### Image with non-uniform intensities 134 | 135 | 136 | ```python 137 | img = [] 138 | 139 | for i in (50, 100, 255): 140 | img += [[i for x in range(10)] for y in range(4)] 141 | 142 | random.shuffle(img) 143 | 144 | # changing to numpy array just for its better display of arrays 145 | # pyplot would still 
display the image as a regular python list 146 | img = np.array(img) 147 | print(img) 148 | 149 | plt.imshow(img, cmap="gray") 150 | ``` 151 | 152 | [[100 100 100 100 100 100 100 100 100 100] 153 | [100 100 100 100 100 100 100 100 100 100] 154 | [ 50 50 50 50 50 50 50 50 50 50] 155 | [ 50 50 50 50 50 50 50 50 50 50] 156 | [ 50 50 50 50 50 50 50 50 50 50] 157 | [100 100 100 100 100 100 100 100 100 100] 158 | [255 255 255 255 255 255 255 255 255 255] 159 | [255 255 255 255 255 255 255 255 255 255] 160 | [255 255 255 255 255 255 255 255 255 255] 161 | [100 100 100 100 100 100 100 100 100 100] 162 | [ 50 50 50 50 50 50 50 50 50 50] 163 | [255 255 255 255 255 255 255 255 255 255]] 164 | 165 | 166 | 167 | 168 | 169 | 170 | 171 | 172 | 173 | 174 | ![png](output_11_2.png) 175 | 176 | 177 | 178 | ```python 179 | 180 | ``` 181 | -------------------------------------------------------------------------------- /cmap_gray_behavior/output_11_2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ayivima/AI-SURFS/1b8f731cd38b7b3e8091e1ff8bb7929413ef739c/cmap_gray_behavior/output_11_2.png -------------------------------------------------------------------------------- /cmap_gray_behavior/output_5_2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ayivima/AI-SURFS/1b8f731cd38b7b3e8091e1ff8bb7929413ef739c/cmap_gray_behavior/output_5_2.png -------------------------------------------------------------------------------- /cmap_gray_behavior/output_7_2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ayivima/AI-SURFS/1b8f731cd38b7b3e8091e1ff8bb7929413ef739c/cmap_gray_behavior/output_7_2.png -------------------------------------------------------------------------------- /cmap_gray_behavior/output_9_2.png: 
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ayivima/AI-SURFS/1b8f731cd38b7b3e8091e1ff8bb7929413ef739c/cmap_gray_behavior/output_9_2.png
--------------------------------------------------------------------------------
/docs/README.md:
--------------------------------------------------------------------------------
1 | 
2 | 
--------------------------------------------------------------------------------
/docs/channels.md:
--------------------------------------------------------------------------------
1 | ##### Victor Mawusi Ayi
2 | 
3 | ### What are channels?
4 | 
5 | The first simple answer: channels control the colors displayed in an image.
6 | 
7 | ### How do channels control image colors?
8 | 
9 | First, all images can be thought of as several dots of different colors, or shades of colors. These dots are called pixels. For the commonest system we use, the colors of pixels are obtained by combining different amounts of RED, GREEN and BLUE (RGB).
10 | For ease of illustration, the amount of each of these three colors can be taken as 255 (full) or 0 (empty). For example, if we want pure RED, we will have 255 (full) of RED, 0 of GREEN and 0 of BLUE.
11 | 
12 | ![](/imgs/RGB_coms.png)
13 | 
14 | For the RGB system, the RED, GREEN and BLUE values we use to tune the colors are called CHANNELS. You can look at them like three knobs we turn to obtain a given color.
15 | 
16 | There are several other systems, and thus other channels. RED, GREEN and BLUE channels are just one of the many sets of channels there can be.
17 | For example, black and white images have only one channel, where 255 (full) is white and 0 is black. There is also the HSV system, where the channels represent HUE, SATURATION and VALUE, and can also be tuned just like RGB channels to obtain desired colors.
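To make the knob idea concrete, here is a small sketch (assuming `numpy` is available; the variable names are mine, for illustration) that builds single-pixel images by tuning the three channels:

```python
import numpy as np

# One pixel, three channels: (RED, GREEN, BLUE)
pure_red = np.array([[[255, 0, 0]]], dtype=np.uint8)
pure_green = np.array([[[0, 255, 0]]], dtype=np.uint8)

# Turning up both the RED and GREEN knobs gives yellow
yellow = np.array([[[255, 255, 0]]], dtype=np.uint8)

# The last axis of the image tensor holds the channels
print(pure_red.shape)  # (1, 1, 3)
print(yellow[0, 0])    # [255 255   0]
```

Each of these arrays can be passed directly to `plt.imshow` to see the resulting color.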
18 | 19 | -------------------------------------------------------------------------------- /docs/flattening.md: -------------------------------------------------------------------------------- 1 | ##### Victor Mawusi Ayi 2 | 3 | ## What is flattening? 4 | 5 | When you flatten an image, you simply convert it to a vector or, in some cases, a matrix with only one row. 6 | 7 | ![](/imgs/flatten1.png) 8 | 9 | As an example, consider a tensor, `X = [[1,2,3],[4,5,6],[7,8,9]]`. When we flatten X, we end up with `[1,2,3,4,5,6,7,8,9]` or `[[1,2,3,4,5,6,7,8,9]]`. This is the simplest way to look at it. 10 | 11 | ## How can we do it in code? 12 | 13 | Most of the libraries have these methods available: 14 | 15 | + flatten() 16 | + view() 17 | + reshape() 18 | 19 | We will use numpy to demonstrate these methods (note that `view()` is PyTorch's counterpart; numpy provides `flatten()` and `reshape()`). First of all, we get our simple red and green rectangular image and display it. Then, subsequently, we flatten it. 20 | 21 | ``` 22 | 23 | >>> import matplotlib.pyplot as plt 24 | >>> import numpy as np 25 | >>> 26 | >>> # Image tensor 27 | >>> image = np.array( 28 | ... [[[255, 0, 0], [255, 0, 0]], 29 | ... [[0, 255, 0], [0, 255, 0]]] 30 | ... ) 31 | >>> 32 | >>> 33 | >>> # Display image 34 | >>> plt.imshow(image) 35 | ``` 36 | OUTPUT: 37 | 38 | ![](/imgs/flatten_samp_image.png) 39 | 40 | We can now flatten this image and see what it becomes. 41 | ``` 42 | >>> image 43 | array([[[255, 0, 0], 44 | [255, 0, 0]], 45 | 46 | [[ 0, 255, 0], 47 | [ 0, 255, 0]]]) 48 | >>> 49 | >>> image.flatten() 50 | array([255, 0, 0, 255, 0, 0, 0, 255, 0, 0, 255, 0]) 51 | >>> 52 | ``` 53 | We can do the same as above using `reshape()`. Because we are looking at a vector or a simple array, we must pass the total number of values (items) in the tensor as a parameter. 54 | We have 12 values in our rectangular image above (count all the numbers in the image tensor). Therefore, we can alternatively flatten by doing `reshape(12)`.
55 | 56 | ``` 57 | >>> image 58 | array([[[255, 0, 0], 59 | [255, 0, 0]], 60 | 61 | [[ 0, 255, 0], 62 | [ 0, 255, 0]]]) 63 | >>> 64 | >>> image.reshape(12) 65 | array([255, 0, 0, 255, 0, 0, 0, 255, 0, 0, 255, 0]) 66 | >>> 67 | ``` 68 | 69 | Sometimes, we will not know how many values (items) there are in an image tensor. In reality, we will almost always not know, and even for large images we cannot afford to count all the items. 70 | But, not knowing should not get in the way of what we want to do, because we can ask a library to count the number of items for us automatically. 71 | The way we ask a library to figure out the number of items for us is to specify `-1` as the parameter to `reshape` or `view` (if present in the given library). 72 | 73 | ``` 74 | >>> image 75 | array([[[255, 0, 0], 76 | [255, 0, 0]], 77 | 78 | [[ 0, 255, 0], 79 | [ 0, 255, 0]]]) 80 | >>> 81 | >>> image.reshape(-1) 82 | array([255, 0, 0, 255, 0, 0, 0, 255, 0, 0, 255, 0]) 83 | >>> 84 | ``` 85 | And, Bingo!!! We have the same outcome. 86 | 87 | 88 | Sometimes, we do not want a plain vector (or simple array). We may prefer a matrix with a single row carrying all the items. That means we want as many columns as needed to accommodate all the items in the image tensor. 89 | In this case, we specify `1` (row) as the first parameter, and `-1` (column) as the second parameter. As we know, for the shape of a tensor, the first value represents the number of rows and the second value represents the number of columns. Additionally, as explained above, 90 | we specify `-1` for the columns so that the library will figure it out automatically.
Let's try that in code: 91 | 92 | ``` 93 | >>> image 94 | array([[[255, 0, 0], 95 | [255, 0, 0]], 96 | 97 | [[ 0, 255, 0], 98 | [ 0, 255, 0]]]) 99 | >>> 100 | >>> image.reshape(1,-1) 101 | array([[255, 0, 0, 255, 0, 0, 0, 255, 0, 0, 255, 0]]) 102 | ``` 103 | 104 | 105 | 106 | 107 | 108 | 109 | -------------------------------------------------------------------------------- /docs/transpose.md: -------------------------------------------------------------------------------- 1 | ##### Victor Mawusi Ayi 2 | 3 | What is transpose? 4 | ------------------ 5 | 6 | Most of the operations in AI depend on matrix math, and several concepts, including `Transpose`, pertain to matrices. 7 | In matrix operations, we use transpose to flip a matrix: its rows become columns, and its columns become rows. 8 | 9 | ![](/imgs/transpose.png) 10 | 11 | This can be demonstrated in code: 12 | 13 | ``` 14 | >>> import numpy as np 15 | >>> 16 | >>> matrix = np.array([[1,1,1],[2,2,2],[3,3,3]]) 17 | >>> 18 | >>> matrix 19 | array([[1, 1, 1], 20 | [2, 2, 2], 21 | [3, 3, 3]]) 22 | >>> 23 | >>> matrix.transpose() 24 | array([[1, 2, 3], 25 | [1, 2, 3], 26 | [1, 2, 3]]) 27 | ``` 28 | 29 | How does transpose become useful in image analysis? 30 | --------------------------------------------------- 31 | 32 | There are different ways of storing image information for computers. Generally, however, images are stored as big nested matrices which hold information on the height, width and [channels](/docs/channels.md) of the given images.
33 | For example, we can create a simple rectangle having red, blue, and green colors as shown below: 34 | 35 | ``` 36 | import numpy as np 37 | import matplotlib.pyplot as plt 38 | 39 | image = np.array( 40 | [[[255,0,0],[255,0,0]], 41 | [[0,255,0],[0,255,0]], 42 | [[0,0,255],[0,0,255]]] 43 | ) 44 | 45 | plt.imshow(image) 46 | ``` 47 | OUTPUT: 48 | 49 | ![](/imgs/samp_image_transpose.png) 50 | 51 | When we take a peek at the information of the image we just created, using `.shape`, we realize that its storage format is in this order: 52 | + height (number of pixel rows), 53 | + width (number of pixel columns), and then 54 | + channels (number of color channels) 55 | 56 | ``` 57 | >>> print("Image shape: ", image.shape) 58 | Image shape: (3, 2, 3) 59 | ``` 60 | This format is abbreviated as (H X W X C). 61 | 62 | We can look at it graphically as below: 63 | 64 | ![](/imgs/samp_image_transpose2b.png) 65 | 66 | Some libraries store images using the format `channels`, `height` and `width` (C X H X W). 67 | Therefore, when we are using several libraries together, we can have a problem passing images around, unless we change the format in which image information is stored to suit the library we want to use at a given time. 68 | This is what we achieve with transpose. We can use it to reorder the axes of the image matrices. As an example, if the format was `H X W X C`, we can get `C X H X W` and vice versa. 69 | For computers, everything eventually becomes numbers and math. So, we can use the indices `0`, `1`, `2` to represent `height`, `width`, `channels`. Then, when we want to change the format to `channels`, `height`, `width`, we specify the order `2`, `0`, `1`. 70 | 71 | ![](/imgs/image_transpose.png) 72 | 73 | This is demonstrated in code below.
74 | 75 | ``` 76 | >>> 77 | >>> image.shape 78 | (3, 2, 3) 79 | >>> 80 | >>> image 81 | array([[[255, 0, 0], 82 | [255, 0, 0]], 83 | 84 | [[ 0, 255, 0], 85 | [ 0, 255, 0]], 86 | 87 | [[ 0, 0, 255], 88 | [ 0, 0, 255]]]) 89 | >>> 90 | >>> transposed_image = image.transpose(2,0,1) 91 | >>> transposed_image 92 | array([[[255, 255], 93 | [ 0, 0], 94 | [ 0, 0]], 95 | 96 | [[ 0, 0], 97 | [255, 255], 98 | [ 0, 0]], 99 | 100 | [[ 0, 0], 101 | [ 0, 0], 102 | [255, 255]]]) 103 | >>> 104 | >>> transposed_image.shape 105 | (3, 3, 2) 106 | >>> 107 | ``` 108 | 109 | -------------------------------------------------------------------------------- /imgs/README.md: -------------------------------------------------------------------------------- 1 | 2 | -------------------------------------------------------------------------------- /imgs/RGB_coms.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ayivima/AI-SURFS/1b8f731cd38b7b3e8091e1ff8bb7929413ef739c/imgs/RGB_coms.png -------------------------------------------------------------------------------- /imgs/flatten1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ayivima/AI-SURFS/1b8f731cd38b7b3e8091e1ff8bb7929413ef739c/imgs/flatten1.png -------------------------------------------------------------------------------- /imgs/flatten_samp_image.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ayivima/AI-SURFS/1b8f731cd38b7b3e8091e1ff8bb7929413ef739c/imgs/flatten_samp_image.png -------------------------------------------------------------------------------- /imgs/image_transpose.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ayivima/AI-SURFS/1b8f731cd38b7b3e8091e1ff8bb7929413ef739c/imgs/image_transpose.png
-------------------------------------------------------------------------------- /imgs/samp_image_transpose.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ayivima/AI-SURFS/1b8f731cd38b7b3e8091e1ff8bb7929413ef739c/imgs/samp_image_transpose.png -------------------------------------------------------------------------------- /imgs/samp_image_transpose2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ayivima/AI-SURFS/1b8f731cd38b7b3e8091e1ff8bb7929413ef739c/imgs/samp_image_transpose2.png -------------------------------------------------------------------------------- /imgs/samp_image_transpose2b.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ayivima/AI-SURFS/1b8f731cd38b7b3e8091e1ff8bb7929413ef739c/imgs/samp_image_transpose2b.png -------------------------------------------------------------------------------- /imgs/samp_image_transpose3.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ayivima/AI-SURFS/1b8f731cd38b7b3e8091e1ff8bb7929413ef739c/imgs/samp_image_transpose3.png -------------------------------------------------------------------------------- /imgs/transpose.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ayivima/AI-SURFS/1b8f731cd38b7b3e8091e1ff8bb7929413ef739c/imgs/transpose.png -------------------------------------------------------------------------------- /quicknets/README.md: -------------------------------------------------------------------------------- 1 | 2 | PYTORCH WRAPPER PROJECT 3 | -------------------------------------------------------------------------------- /quicknets/wrapper1.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/ayivima/AI-SURFS/1b8f731cd38b7b3e8091e1ff8bb7929413ef739c/quicknets/wrapper1.png -------------------------------------------------------------------------------- /quicknets/wrapper2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ayivima/AI-SURFS/1b8f731cd38b7b3e8091e1ff8bb7929413ef739c/quicknets/wrapper2.png -------------------------------------------------------------------------------- /quicknets/wrapper3.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ayivima/AI-SURFS/1b8f731cd38b7b3e8091e1ff8bb7929413ef739c/quicknets/wrapper3.png -------------------------------------------------------------------------------- /xray/README.md: -------------------------------------------------------------------------------- 1 | 2 | -------------------------------------------------------------------------------- /xray/Starter_ Novice AI xrays v3 974b53f0-c f8d223.ipynb: -------------------------------------------------------------------------------- 1 | {"cells":[{"metadata":{},"cell_type":"markdown","source":"## Introduction"},{"metadata":{},"cell_type":"markdown","source":"## Importing Relevant Libraries and Models"},{"metadata":{"_kg_hide-input":false,"trusted":true},"cell_type":"code","source":"import os\nimport pathlib\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\nimport torch\nfrom torch import nn\nfrom torch import optim\nfrom torch.nn import functional as F\nfrom torchvision import (\n    models, \n    datasets, \n    transforms\n)\n\n","execution_count":1,"outputs":[]},{"metadata":{},"cell_type":"markdown","source":"## Transformation Pipelines"},{"metadata":{"trusted":true},"cell_type":"code","source":"\nsizing = 256\n\n# We are keeping minimalistic transforms\n# To preserve effects in the xrays as much as possible\ntrain_transform = transforms.Compose([\n
transforms.RandomRotation(10, expand=True),\n transforms.Resize(sizing),\n transforms.ToTensor()\n])\n\nvalid_transform = transforms.Compose([\n transforms.Resize(sizing),\n transforms.ToTensor()\n]) \n \ntest_transform = transforms.Compose([\n transforms.Resize(sizing),\n transforms.ToTensor(),\n])\n \n","execution_count":2,"outputs":[]},{"metadata":{},"cell_type":"markdown","source":"## Setting Up Loaders"},{"metadata":{"trusted":true},"cell_type":"code","source":"\n# Setting Data Sets for Train, Test, Validation Generators\ntrain_data = datasets.ImageFolder(\n '../input/x_ray_v3/content/x_ray/train',\n transform=train_transform\n)\n\nvalid_data = datasets.ImageFolder(\n '../input/x_ray_v3/content/x_ray/validation',\n transform=valid_transform\n)\n\ntest_data = datasets.ImageFolder(\n '../input/x_ray_v3/content/x_ray/test',\n transform=test_transform\n)\n","execution_count":3,"outputs":[]},{"metadata":{"trusted":true},"cell_type":"code","source":"batch_size = 20","execution_count":null,"outputs":[]},{"metadata":{"trusted":true},"cell_type":"code","source":"train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size)\nvalid_loader = torch.utils.data.DataLoader(valid_data, batch_size=batch_size)\ntest_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size)","execution_count":null,"outputs":[]},{"metadata":{},"cell_type":"markdown","source":"## Validating Classes and Image Samples"},{"metadata":{"trusted":true},"cell_type":"code","source":"# Printing out the classes and assigned indexes\n\nclasses_to_idx = train_data.class_to_idx.items()\nclasses = []\n\nprint(\"--Classes & Numerical Labels--\")\nfor key, value in classes_to_idx:\n print(value, key)\n classes.append(key)\n\nprint(\"\\n\", \"No_of_classes: \", len(classes))","execution_count":null,"outputs":[]},{"metadata":{"trusted":true,"_kg_hide-input":true},"cell_type":"code","source":"def visualize(loader, classes, num_of_image=5, fig_size=(25, 5)):\n images, labels = 
next(iter(loader))\n    \n    fig = plt.figure(figsize=fig_size)\n    for idx in range(num_of_image):\n        ax = fig.add_subplot(1, 5, idx + 1, xticks=[], yticks=[])\n\n        img = images[idx]\n        npimg = img.numpy()\n        img = np.transpose(npimg, (1, 2, 0)) \n        ax.imshow(img, cmap='gray')\n        ax.set_title(classes[labels[idx]])","execution_count":null,"outputs":[]},{"metadata":{"trusted":true},"cell_type":"code","source":"visualize(train_loader, classes)\n","execution_count":null,"outputs":[]},{"metadata":{},"cell_type":"markdown","source":"## Setting Up The Model"},{"metadata":{"trusted":true},"cell_type":"code","source":"# Setting up pre-trained model\n# NOTE: torchvision has no models.resnext; using densenet121, whose classifier\n# takes 1024 input features, matching the custom fc classifier below\nmodel = models.densenet121(pretrained=True)\n","execution_count":null,"outputs":[]},{"metadata":{"trusted":true},"cell_type":"code","source":"# Preventing adjustment of model weights above our custom classifier layer\n\nfor param in model.parameters():\n    param.requires_grad = False","execution_count":null,"outputs":[]},{"metadata":{"trusted":true},"cell_type":"code","source":"model.classifier","execution_count":null,"outputs":[]},{"metadata":{"trusted":true,"_kg_hide-input":true},"cell_type":"code","source":"\ndef milan(input, beta=-0.25):\n    '''\n    Applies the Mila function element-wise:\n    Mila(x) = x * tanh(softplus(x + β)) = x * tanh(ln(1 + exp(x+β)))\n    See additional documentation for mila class.\n    '''\n    return input * torch.tanh(F.softplus(input+beta))\n\nclass mila(nn.Module):\n    '''\n    Applies the Mila function element-wise:\n    Mila(x) = x * tanh(softplus(x + β)) = x * tanh(ln(1 + exp(x+β)))\n    Shape:\n        - Input: (N, *) where * means, any number of additional\n        dimensions\n        - Output: (N, *), same shape as the input\n    Examples:\n        >>> m = mila(beta=1.0)\n        >>> input = torch.randn(2)\n        >>> output = m(input)\n    '''\n    def __init__(self, beta=-0.25):\n        '''\n        Init method.\n        '''\n        super().__init__()\n        self.beta = beta\n\n    def forward(self, input):\n        '''\n        Forward pass of the function.\n        '''\n        return milan(input,
self.beta)","execution_count":null,"outputs":[]},{"metadata":{"trusted":true},"cell_type":"code","source":"class fc(nn.Module):\n def __init__(self):\n super().__init__()\n self.fc1 = nn.Linear(1024, 500)\n self.fc2 = nn.Linear(500, 256)\n self.fc3 = nn.Linear(256, 8)\n self.dropout = nn.Dropout(0.5)\n self.logsoftmax = nn.LogSoftmax(dim=1)\n self.relu = nn.ReLU()\n def forward(self,x):\n x = x.view(x.size(0), -1)\n x = self.dropout(self.relu(self.fc1(x)))\n x = self.dropout(self.relu(self.fc2(x)))\n\n x = self.logsoftmax(self.fc3(x))\n return x\n","execution_count":null,"outputs":[]},{"metadata":{"trusted":true},"cell_type":"code","source":"model.classifier = fc()","execution_count":null,"outputs":[]},{"metadata":{},"cell_type":"markdown","source":"## Training The Model"},{"metadata":{"trusted":true},"cell_type":"code","source":"# setting up for possible use of GPU.\n# sacrificing short code for readability.\n\ndef device():\n if torch.cuda.is_available():\n devtype = \"cuda\"\n else:\n devtype = \"cpu\"\n return torch.device(devtype)\n","execution_count":null,"outputs":[]},{"metadata":{"trusted":true},"cell_type":"code","source":"criterion = nn.NLLLoss()\noptimizer = optim.Adam(model.classifier.parameters(), lr=0.2)\ndevice = device()\n\nmodel.to(device);","execution_count":null,"outputs":[]},{"metadata":{"trusted":true},"cell_type":"code","source":"def training(model, epochs=5):\n running_loss = 0\n\n for epoch in range(epochs):\n\n print(f\"EPOCH {epoch+1}/{epochs}...Training...\")\n\n for inputs, labels in train_loader:\n inputs, labels = inputs.to(device), labels.to(device)\n\n optimizer.zero_grad()\n\n logps = model.forward(inputs)\n loss = criterion(logps, labels)\n loss.backward()\n optimizer.step()\n\n running_loss += loss.item()\n\n else:\n test_loss = 0\n accuracy = 0\n model.eval()\n with torch.no_grad():\n for inputs, labels in valid_loader:\n inputs, labels = inputs.to(device), labels.to(device)\n logps = model.forward(inputs)\n batch_loss = 
criterion(logps, labels)\n\n test_loss += batch_loss.item()\n\n # Calculate accuracy\n ps = torch.exp(logps)\n top_p, top_class = ps.topk(1, dim=1)\n equals = top_class == labels.view(*top_class.shape)\n accuracy += torch.mean(equals.type(torch.FloatTensor))\n\n print(\n f\"Complete --> \"\n f\"Train loss: {running_loss/len(train_loader):.3f}.. \"\n f\"Validation loss: {test_loss/len(valid_loader):.3f}.. \"\n f\"Validation accuracy: {accuracy/len(valid_loader):.3f} \\n\"\n )\n running_loss = 0\n model.train()","execution_count":null,"outputs":[]},{"metadata":{"trusted":true},"cell_type":"code","source":"# Actual Training of model\n\ntraining(model, epochs=2)","execution_count":null,"outputs":[]},{"metadata":{},"cell_type":"markdown","source":"## Conclusion\nThis concludes your starter analysis! To go forward from here, click the blue \"Fork Notebook\" button at the top of this kernel. This will create a copy of the code and environment for you to edit. Delete, modify, and add code as you please. 
Happy Kaggling!"}],"metadata":{"language_info":{"name":"python","version":"3.6.6","mimetype":"text/x-python","codemirror_mode":{"name":"ipython","version":3},"pygments_lexer":"ipython3","nbconvert_exporter":"python","file_extension":".py"},"kernelspec":{"display_name":"Python 3","language":"python","name":"python3"}},"nbformat":4,"nbformat_minor":1} 2 | -------------------------------------------------------------------------------- /xray/Starter_ Novice AI xrays v3 974b53f0-c f8d223.md: -------------------------------------------------------------------------------- 1 | 2 | # STAGING FOR CHEST XRAY DATASET 3 | 4 | ## Importing Relevant Libraries and Models 5 | 6 | 7 | ```python 8 | import os 9 | import pathlib 10 | 11 | import matplotlib.pyplot as plt 12 | import numpy as np 13 | import pandas as pd 14 | import torch 15 | from torch import nn 16 | from torch import optim 17 | from torch.nn import functional as F 18 | from torchvision import ( 19 | models, 20 | datasets, 21 | transforms 22 | ) 23 | 24 | 25 | ``` 26 | 27 | ## Transformation Pipelines 28 | 29 | 30 | ```python 31 | 32 | sizing = 256 33 | 34 | # We are keeping minimalistic transforms 35 | # To preserve effects in the xrays as much as possible 36 | train_transform = transforms.Compose([ 37 | transforms.RandomRotation(10, expand=True), 38 | transforms.Resize(sizing), 39 | transforms.ToTensor() 40 | ]) 41 | 42 | valid_transform = transforms.Compose([ 43 | transforms.Resize(sizing), 44 | transforms.ToTensor() 45 | ]) 46 | 47 | test_transform = transforms.Compose([ 48 | transforms.Resize(sizing), 49 | transforms.ToTensor(), 50 | ]) 51 | 52 | 53 | ``` 54 | 55 | ## Setting Up Loaders 56 | 57 | 58 | ```python 59 | 60 | # Setting Data Sets for Train, Test, Validation Generators 61 | train_data = datasets.ImageFolder( 62 | '../input/x_ray_v3/content/x_ray/train', 63 | transform=train_transform 64 | ) 65 | 66 | valid_data = datasets.ImageFolder( 67 | '../input/x_ray_v3/content/x_ray/validation', 68 |
transform=valid_transform 69 | ) 70 | 71 | test_data = datasets.ImageFolder( 72 | '../input/x_ray_v3/content/x_ray/test', 73 | transform=test_transform 74 | ) 75 | 76 | ``` 77 | 78 | 79 | ```python 80 | batch_size = 20 81 | ``` 82 | 83 | 84 | ```python 85 | train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size) 86 | valid_loader = torch.utils.data.DataLoader(valid_data, batch_size=batch_size) 87 | test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size) 88 | ``` 89 | 90 | ## Validating Classes and Image Samples 91 | 92 | 93 | ```python 94 | # Printing out the classes and assigned indexes 95 | 96 | classes_to_idx = train_data.class_to_idx.items() 97 | classes = [] 98 | 99 | print("--Classes & Numerical Labels--") 100 | for key, value in classes_to_idx: 101 | print(value, key) 102 | classes.append(key) 103 | 104 | print("\n", "No_of_classes: ", len(classes)) 105 | ``` 106 | 107 | 108 | ```python 109 | def visualize(loader, classes, num_of_image=5, fig_size=(25, 5)): 110 | images, labels = next(iter(loader)) 111 | 112 | fig = plt.figure(figsize=fig_size) 113 | for idx in range(num_of_image): 114 | ax = fig.add_subplot(1, 5, idx + 1, xticks=[], yticks=[]) 115 | 116 | img = images[idx] 117 | npimg = img.numpy() 118 | img = np.transpose(npimg, (1, 2, 0)) 119 | ax.imshow(img, cmap='gray') 120 | ax.set_title(classes[labels[idx]]) 121 | ``` 122 | 123 | 124 | ```python 125 | visualize(train_loader, classes) 126 | 127 | ``` 128 | 129 | ## Setting Up The Model 130 | 131 | 132 | ```python 133 | # Setting up pre-trained model 134 | # NOTE: torchvision has no models.resnext; using densenet121, whose classifier takes 1024 input features, matching the custom fc classifier below 135 | model = models.densenet121(pretrained=True) 136 | 137 | ``` 138 | 139 | 140 | ```python 141 | # Preventing adjustment of model weights above our custom classifier layer 142 | 143 | for param in model.parameters(): 144 | param.requires_grad = False 145 | ``` 146 | 147 | 148 | ```python 149 | model.classifier 150 | ``` 151 | 152 | ### Setting up mila for possible use 153 | 154 | 155 | ```python 156 | 157 | def
milan(input, beta=-0.25): 158 | ''' 159 | Applies the Mila function element-wise: 160 | Mila(x) = x * tanh(softplus(x + β)) = x * tanh(ln(1 + exp(x+β))) 161 | See additional documentation for mila class. 162 | ''' 163 | return input * torch.tanh(F.softplus(input+beta)) 164 | 165 | class mila(nn.Module): 166 | ''' 167 | Applies the Mila function element-wise: 168 | Mila(x) = x * tanh(softplus(x + β)) = x * tanh(ln(1 + exp(x+β))) 169 | Shape: 170 | - Input: (N, *) where * means, any number of additional 171 | dimensions 172 | - Output: (N, *), same shape as the input 173 | Examples: 174 | >>> m = mila(beta=1.0) 175 | >>> input = torch.randn(2) 176 | >>> output = m(input) 177 | ''' 178 | def __init__(self, beta=-0.25): 179 | ''' 180 | Init method. 181 | ''' 182 | super().__init__() 183 | self.beta = beta 184 | 185 | def forward(self, input): 186 | ''' 187 | Forward pass of the function. 188 | ''' 189 | return milan(input, self.beta) 190 | ``` 191 | 192 | 193 | ```python 194 | class fc(nn.Module): 195 | def __init__(self): 196 | super().__init__() 197 | self.fc1 = nn.Linear(1024, 500) 198 | self.fc2 = nn.Linear(500, 256) 199 | self.fc3 = nn.Linear(256, 8) 200 | self.dropout = nn.Dropout(0.5) 201 | self.logsoftmax = nn.LogSoftmax(dim=1) 202 | self.relu = nn.ReLU() 203 | def forward(self,x): 204 | x = x.view(x.size(0), -1) 205 | x = self.dropout(self.relu(self.fc1(x))) 206 | x = self.dropout(self.relu(self.fc2(x))) 207 | 208 | x = self.logsoftmax(self.fc3(x)) 209 | return x 210 | 211 | ``` 212 | 213 | 214 | ```python 215 | model.classifier = fc() 216 | ``` 217 | 218 | ## Training The Model 219 | 220 | 221 | ```python 222 | # setting up for possible use of GPU. 223 | # sacrificing short code for readability.
224 | 225 | def device(): 226 | if torch.cuda.is_available(): 227 | devtype = "cuda" 228 | else: 229 | devtype = "cpu" 230 | return torch.device(devtype) 231 | 232 | ``` 233 | 234 | 235 | ```python 236 | criterion = nn.NLLLoss() 237 | optimizer = optim.Adam(model.classifier.parameters(), lr=0.2) 238 | device = device() 239 | 240 | model.to(device); 241 | ``` 242 | 243 | 244 | ```python 245 | def training(model, epochs=5): 246 | running_loss = 0 247 | 248 | for epoch in range(epochs): 249 | 250 | print(f"EPOCH {epoch+1}/{epochs}...Training...") 251 | 252 | for inputs, labels in train_loader: 253 | inputs, labels = inputs.to(device), labels.to(device) 254 | 255 | optimizer.zero_grad() 256 | 257 | logps = model.forward(inputs) 258 | loss = criterion(logps, labels) 259 | loss.backward() 260 | optimizer.step() 261 | 262 | running_loss += loss.item() 263 | 264 | else: 265 | test_loss = 0 266 | accuracy = 0 267 | model.eval() 268 | with torch.no_grad(): 269 | for inputs, labels in valid_loader: 270 | inputs, labels = inputs.to(device), labels.to(device) 271 | logps = model.forward(inputs) 272 | batch_loss = criterion(logps, labels) 273 | 274 | test_loss += batch_loss.item() 275 | 276 | # Calculate accuracy 277 | ps = torch.exp(logps) 278 | top_p, top_class = ps.topk(1, dim=1) 279 | equals = top_class == labels.view(*top_class.shape) 280 | accuracy += torch.mean(equals.type(torch.FloatTensor)) 281 | 282 | print( 283 | f"Complete --> " 284 | f"Train loss: {running_loss/len(train_loader):.3f}.. " 285 | f"Validation loss: {test_loss/len(valid_loader):.3f}.. " 286 | f"Validation accuracy: {accuracy/len(valid_loader):.3f} \n" 287 | ) 288 | running_loss = 0 289 | model.train() 290 | ``` 291 | 292 | 293 | ```python 294 | # Actual Training of model 295 | 296 | training(model, epochs=2) 297 | ``` 298 | 299 | 300 | ```python 301 | 302 | ``` 303 | --------------------------------------------------------------------------------