├── README.md ├── Week-0 └── README.md ├── Week-1 ├── Programming-Quizzes │ ├── MATLAB │ │ ├── README.md │ │ ├── solution-1.md │ │ ├── solution-10.md │ │ ├── solution-11.md │ │ ├── solution-12.md │ │ ├── solution-13.md │ │ ├── solution-14.md │ │ ├── solution-2.md │ │ ├── solution-3.md │ │ ├── solution-4.md │ │ ├── solution-5.md │ │ ├── solution-6.md │ │ ├── solution-7.md │ │ ├── solution-8.md │ │ └── solution-9.md │ ├── Python │ │ ├── README.md │ │ └── Solutions │ │ │ ├── 1.md │ │ │ ├── 10.md │ │ │ ├── 11.md │ │ │ ├── 12.md │ │ │ ├── 13.md │ │ │ ├── 14.md │ │ │ ├── 2.md │ │ │ ├── 3.md │ │ │ ├── 4.md │ │ │ ├── 5.md │ │ │ ├── 6.md │ │ │ ├── 7.md │ │ │ ├── 8.md │ │ │ └── 9.md │ └── README.md ├── notes │ ├── 1- overview.md │ ├── 2 - Computational NeuroScience: Descriptive Models.md │ ├── 3 - Computational NeuroScience: Mechanistic and Interpretive Models.md │ └── 4-NeuroBiology-101 │ │ ├── 4-1-Neurons.md │ │ ├── 4-2-Synapses.md │ │ ├── 4-3-Nervous System and Brain regions.md │ │ ├── 4-4-Conclusion.md │ │ └── README.md └── shared │ ├── acronyms.md │ └── lecture.md ├── Week-2 ├── Quiz │ ├── Programming │ │ ├── questions │ │ │ ├── 1.md │ │ │ ├── 2.md │ │ │ ├── 3.md │ │ │ └── 4.md │ │ ├── resources │ │ │ └── references.md │ │ ├── shared_codes │ │ │ ├── Python │ │ │ │ ├── compute_sta.py │ │ │ │ └── quiz2.py │ │ │ ├── README.md │ │ │ └── matlab │ │ │ │ ├── README.md │ │ │ │ ├── compute_sta.m │ │ │ │ └── quiz2.m │ │ └── solutions │ │ │ ├── README.md │ │ │ └── mathlab │ │ │ ├── compute_sta.m │ │ │ ├── output.md │ │ │ └── quiz2.m │ ├── README.md │ └── Theory │ │ ├── 1.md │ │ ├── 2.md │ │ ├── 3.md │ │ ├── 4.md │ │ └── 5.md ├── notes │ ├── 1 - Overview.md │ ├── 2- What is the Neural code?.md │ ├── 3-Neural Coding : Simple Models.md │ ├── 4 - Neural coding : Feature Selection.md │ └── 5 - Neural Coding : Variability.md └── shared │ └── resources.md ├── Week-3 ├── notes │ ├── notes │ │ ├── 1 - Neural Decoding and Signal Detection Theory.md │ │ ├── 2 - Population Coding and Bayesian 
Estimation.md │ │ ├── 3 - Reading minds : Stimulus Reconstruction.md │ │ └── guest-lecture.md │ └── shared │ │ └── resources.md └── requirements.txt ├── Week-4 ├── Quiz │ ├── Programming │ │ ├── 1.md │ │ ├── 2.md │ │ ├── 3.md │ │ ├── MATLAB │ │ │ └── README.md │ │ ├── Python │ │ │ └── README.md │ │ └── plotting │ │ │ ├── README.md │ │ │ └── output.md │ ├── README.md │ └── Theory │ │ ├── 1.md │ │ ├── 10.md │ │ ├── 2.md │ │ ├── 3.md │ │ ├── 4.md │ │ ├── 5.md │ │ ├── 6.md │ │ ├── 7.md │ │ ├── 8.md │ │ └── 9.md └── notes │ ├── notes │ ├── 1 - Information and Entropy.md │ ├── 2 - Calculating Information in Spike Trains.md │ └── 3 - Coding Principles.md │ └── shared │ └── resources.md ├── Week-5 ├── notes │ ├── notes │ │ ├── 1 - Modeling Neurons.md │ │ ├── 2 - Spikes.md │ │ ├── 3---Simplified-Modeled-Neurons │ │ │ ├── 1 - overview.md │ │ │ ├── 2 - introduction.md │ │ │ ├── 3 - Capturing the basic dynamics of neurons.md │ │ │ ├── 4 - Two-dimensional models.md │ │ │ └── 5 - The Simple Model.md │ │ ├── 4 - A forest of dendrites.md │ │ └── README.md │ └── shared │ │ └── resources.md └── quiz │ └── README.md ├── Week-6 ├── README.md └── notes │ ├── notes │ ├── 1 - Modelling connections between neurons.md │ ├── 2 - Introduction to network models.md │ └── 3 - The World of Recurrent networks.md │ └── shared │ └── resources.md ├── Week-7 ├── notes │ ├── notes │ │ ├── 1 - LTP and LTD.md │ │ ├── 2 - Hebb's rule.md │ │ ├── 3 - Covariance rule.md │ │ ├── 4 - Analyzing learning rules.md │ │ ├── 5 - Oja's Rule.md │ │ ├── 6 - Summary of Hebbian Learning.md │ │ ├── 7 - Statistical Learning.md │ │ ├── 8 - Introduction to unsupervised learning.md │ │ ├── 9 - Sparse coding and Predictive coding.md │ │ └── README.md │ └── shared │ │ └── resources.md └── quizzes │ ├── README.md │ └── programming │ ├── alpha_neuron.m │ └── alpha_neuron.py ├── Week-8 ├── notes │ ├── notes │ │ ├── 1 - Neurons as Classifiers and Supervised Learning.md │ │ ├── 2 - Reinforcement learning: Predicting rewards.md │ │ 
└── 3 - Reinforcement Learning: Time actions.md │ └── shared │ │ └── resources.md └── quiz │ └── README.md └── stuffs ├── installation.md ├── my-background.txt ├── shared-resources.md └── stacking ├── matlab.md └── other-notes.md /README.md: -------------------------------------------------------------------------------- 1 | # Computational-NeuroScience 2 | Computational NeuroScience is a rigorous 8-week course on Coursera from the University of Washington that focuses on basic computational techniques for analyzing, modelling, and understanding the behaviour of cells and circuits in the brain. 3 |

4 | Added notes, quizzes, resources and more... 5 | ``` 6 | Course duration: May 1st 2015 to June 29th 2015 7 | ``` 8 | -------------------------------------------------------------------------------- /Week-0/README.md: -------------------------------------------------------------------------------- 1 | ``` 2 | Syllabus 3 | ``` 4 | ``` 5 | Topics that we will cover in this course: 6 | 7 | Basic Neurobiology 8 | Neural Encoding 9 | Neural Decoding 10 | Information Theory 11 | Modeling Single Neurons 12 | Synapse and Network Models: Feedforward and Recurrent Networks 13 | Synaptic Plasticity and Learning 14 | ``` 15 | ``` 16 | Schedule 17 | 18 | Week 1: Course Introduction and Basic Neurobiology (Rajesh Rao) 19 | Week 2: What do Neurons Encode? Neural Encoding Models (Adrienne Fairhall) 20 | Week 3: Extracting Information from Neurons: Neural Decoding (Adrienne Fairhall) 21 | Week 4: Information and Coding Principles (Adrienne Fairhall) 22 | Week 5: Simulating the Brain from the Ground Up: Models of Single Neurons (Adrienne Fairhall) 23 | Week 6: Modeling Synapses and Networks of Neurons (Rajesh Rao) 24 | Week 7: How do Brains Learn? 
Modeling Synaptic Plasticity and Learning (Rajesh Rao) 25 | Week 8: Learning to Act: Reinforcement Learning (Rajesh Rao) 26 | ``` 27 | ``` 28 | Recommended reading 29 | 30 | Week 1: Dayan and Abbott, Theoretical Neuroscience, Chapter 1 and Mathematical Appendix 31 | Week 2: Dayan and Abbott, Theoretical Neuroscience, Chapter 2; Spikes, Rieke et al, appendices 32 | Week 3: Dayan and Abbott, Theoretical Neuroscience, Chapter 3 33 | Week 4: Dayan and Abbott, Theoretical Neuroscience, Chapter 4 34 | Week 5: Dayan and Abbott, Theoretical Neuroscience, Chapters 5 & 6 35 | Week 6: Dayan and Abbott, Theoretical Neuroscience, Chapter 7 36 | Week 7: Dayan and Abbott, Theoretical Neuroscience, Chapters 8 & 10 37 | Week 8: Dayan and Abbott, Theoretical Neuroscience, Chapter 9 38 | ``` 39 | -------------------------------------------------------------------------------- /Week-1/Programming-Quizzes/MATLAB/README.md: -------------------------------------------------------------------------------- 1 | Score: 14/14 2 | -------------------------------------------------------------------------------- /Week-1/Programming-Quizzes/MATLAB/solution-1.md: -------------------------------------------------------------------------------- 1 | ``` 2 | Which of the following expressions generates the column vector 3 | ``` 4 | 5 | 6 | ```m 7 | >> [1,2,3] 8 | 9 | ans = 10 | 11 | 1 2 3 12 | 13 | >> [1;2;3] 14 | 15 | ans = 16 | 17 | 1 18 | 2 19 | 3 20 | 21 | >> [1,2,3]' 22 | 23 | ans = 24 | 25 | 1 26 | 2 27 | 3 28 | 29 | >> [1 2 3] 30 | 31 | ans = 32 | 33 | 1 2 3 34 | ``` 35 | -------------------------------------------------------------------------------- /Week-1/Programming-Quizzes/MATLAB/solution-10.md: -------------------------------------------------------------------------------- 1 | ``` 2 | Suppose we wish to generate a 2x3 matrix that contains only zeros. 3 | Which of the following expressions would achieve this goal? 4 | Select all the correct answers. 
5 | ``` 6 | ```m 7 | >> A=[0,0,0;0,0,0] 8 | 9 | A = 10 | 11 | 0 0 0 12 | 0 0 0 13 | 14 | >> A=zeros(2) 15 | 16 | A = 17 | 18 | 0 0 19 | 0 0 20 | 21 | >> A=zeros(3) 22 | 23 | A = 24 | 25 | 0 0 0 26 | 0 0 0 27 | 0 0 0 28 | 29 | >> A=zeros(3,2) 30 | 31 | A = 32 | 33 | 0 0 34 | 0 0 35 | 0 0 36 | 37 | >> A=eye(2,3) 38 | 39 | A = 40 | 41 | 1 0 0 42 | 0 1 0 43 | 44 | >> A=zeros(2,3) 45 | 46 | A = 47 | 48 | 0 0 0 49 | 0 0 0 50 | 51 | >> A=[0 0 0; 0 0 0] 52 | 53 | A = 54 | 55 | 0 0 0 56 | 0 0 0 57 | 58 | >> A=[0 0; 0 0; 0 0] 59 | 60 | A = 61 | 62 | 0 0 63 | 0 0 64 | 0 0 65 | ``` 66 | -------------------------------------------------------------------------------- /Week-1/Programming-Quizzes/MATLAB/solution-11.md: -------------------------------------------------------------------------------- 1 | ``` 2 | Which of the following expressions could be used "to set all of the negative entries in A to zero"? 3 | (read this question carefully) 4 | ``` 5 | ```m 6 | >> A=[5,-2,3;2,-3,4;3,4,-8] 7 | 8 | A = 9 | 10 | 5 -2 3 11 | 2 -3 4 12 | 3 4 -8 13 | 14 | >> A<0=0 15 | A<0=0 16 | | 17 | Error: The expression to the left of the equals sign is not a valid target for an assignment. 18 | 19 | >> (A<0)=0 20 | (A<0)=0 21 | | 22 | Error: The expression to the left of the equals sign is not a valid target for an assignment. 
23 | 24 | >> A(A<0)=0 25 | 26 | A = 27 | 28 | 5 0 3 29 | 2 0 4 30 | 3 4 0 31 | 32 | >> A(:)=0 33 | 34 | A = 35 | 36 | 0 0 0 37 | 0 0 0 38 | 0 0 0 39 | ``` 40 | -------------------------------------------------------------------------------- /Week-1/Programming-Quizzes/MATLAB/solution-12.md: -------------------------------------------------------------------------------- 1 | ```m 2 | >> A=[1;2;3] 3 | 4 | A = 5 | 6 | 1 7 | 2 8 | 3 9 | 10 | >> B=[-1,-2,-3] 11 | 12 | B = 13 | 14 | -1 -2 -3 15 | 16 | >> B=[-1;-2;-3] 17 | 18 | B = 19 | 20 | -1 21 | -2 22 | -3 23 | 24 | >> C=[-1;-4;-9] 25 | 26 | C = 27 | 28 | -1 29 | -4 30 | -9 31 | 32 | >> whos 33 | Name Size Bytes Class Attributes 34 | 35 | A 3x1 24 double 36 | B 3x1 24 double 37 | C 3x1 24 double 38 | 39 | >> C=A.*B 40 | 41 | C = 42 | 43 | -1 44 | -4 45 | -9 46 | 47 | >> C=A*B 48 | Error using * 49 | Inner matrix dimensions must agree. 50 | 51 | >> C=A'*B 52 | 53 | C = 54 | 55 | -14 56 | ``` 57 | -------------------------------------------------------------------------------- /Week-1/Programming-Quizzes/MATLAB/solution-13.md: -------------------------------------------------------------------------------- 1 | ``` 2 | Suppose x=[1 1 2 2 1 3 2 2 3 1]. Which expression returns the "index of the first element of x equal to 3"? 
3 | (read this question carefully) 4 | ``` 5 | ```m 6 | >> x=[1 1 2 2 1 3 2 2 3 1] 7 | 8 | x = 9 | 10 | 1 1 2 2 1 3 2 2 3 1 11 | 12 | >> x==3 13 | 14 | ans = 15 | 16 | 0 0 0 0 0 1 0 0 1 0 17 | 18 | >> find(x==3) 19 | 20 | ans = 21 | 22 | 6 9 23 | 24 | >> find(x==3,1) 25 | 26 | ans = 27 | 28 | 6 29 | 30 | >> x=3 31 | 32 | x = 33 | 34 | 3 35 | ``` 36 | -------------------------------------------------------------------------------- /Week-1/Programming-Quizzes/MATLAB/solution-14.md: -------------------------------------------------------------------------------- 1 | ``` 2 | What does the keyboard command do when placed inside a MATLAB script 3 | ``` 4 | ``` 5 | (for example: 6 | ``` 7 | ```matlab 8 | x = 5; 9 | y = [3 5 7]; 10 | z = x * y; 11 | keyboard; 12 | w = z .^ 2; 13 | ``` 14 | Solution: 15 | ```m 16 | >> x=5; 17 | >> y=[3 5 7]; 18 | >> z=x*y; 19 | >> keyboard; 20 | K>> w=z.^2; 21 | K>> dbquit 22 | 23 | Stops execution of program and gives control to the keyboard. 24 | ``` 25 | -------------------------------------------------------------------------------- /Week-1/Programming-Quizzes/MATLAB/solution-2.md: -------------------------------------------------------------------------------- 1 | 2 | ```m 3 | >> A=[1,2,3;2,3,4;3,4,5;4,5,6] 4 | 5 | A = 6 | 7 | 1 2 3 8 | 2 3 4 9 | 3 4 5 10 | 4 5 6 11 | 12 | >> A(:,2) 13 | 14 | ans = 15 | 16 | 2 17 | 3 18 | 4 19 | 5 20 | 21 | >> A(2:3,:) 22 | 23 | ans = 24 | 25 | 2 3 4 26 | 3 4 5 27 | 28 | >> A(:,2:3) 29 | 30 | ans = 31 | 32 | 2 3 33 | 3 4 34 | 4 5 35 | 5 6 36 | 37 | >> A(:,:) 38 | 39 | ans = 40 | 41 | 1 2 3 42 | 2 3 4 43 | 3 4 5 44 | 4 5 6 45 | ``` 46 | -------------------------------------------------------------------------------- /Week-1/Programming-Quizzes/MATLAB/solution-3.md: -------------------------------------------------------------------------------- 1 | ``` 2 | Which of the following commands will NOT give an error? Select all the correct answers. 
3 | ``` 4 | ```m 5 | >> A=[1,2;3,4] 6 | 7 | A = 8 | 9 | 1 2 10 | 3 4 11 | 12 | >> B=[2,2;3,3;4,4] 13 | 14 | B = 15 | 16 | 2 2 17 | 3 3 18 | 4 4 19 | 20 | >> C=eye(3) 21 | 22 | C = 23 | 24 | 1 0 0 25 | 0 1 0 26 | 0 0 1 27 | 28 | >> D=[1,2,3] 29 | 30 | D = 31 | 32 | 1 2 3 33 | 34 | >> E=zeros(3,3) 35 | 36 | E = 37 | 38 | 0 0 0 39 | 0 0 0 40 | 0 0 0 41 | 42 | >> A*B' 43 | 44 | ans = 45 | 46 | 6 9 12 47 | 14 21 28 48 | 49 | >> C.*E 50 | 51 | ans = 52 | 53 | 0 0 0 54 | 0 0 0 55 | 0 0 0 56 | 57 | >> D*B 58 | 59 | ans = 60 | 61 | 20 20 62 | 63 | >> C*E 64 | 65 | ans = 66 | 67 | 0 0 0 68 | 0 0 0 69 | 0 0 0 70 | 71 | >> A-B 72 | Error using - 73 | Matrix dimensions must agree. 74 | 75 | >> A*B 76 | Error using * 77 | Inner matrix dimensions must agree. 78 | ``` 79 | -------------------------------------------------------------------------------- /Week-1/Programming-Quizzes/MATLAB/solution-4.md: -------------------------------------------------------------------------------- 1 | ``` 2 | Which of the following commands will NOT give an error? Select all the correct answers. 3 | ``` 4 | ```m 5 | >> B=[2,2;3,3;4,4] 6 | 7 | B = 8 | 9 | 2 2 10 | 3 3 11 | 4 4 12 | 13 | >> d=[1,2,3] 14 | 15 | d = 16 | 17 | 1 2 3 18 | 19 | >> f=[8;9] 20 | 21 | f = 22 | 23 | 8 24 | 9 25 | 26 | >> B-[d' d'*2] 27 | 28 | ans = 29 | 30 | 1 0 31 | 1 -1 32 | 1 -2 33 | 34 | >> B+repmat(f',3,1) 35 | 36 | ans = 37 | 38 | 10 11 39 | 11 12 40 | 12 13 41 | 42 | >> B-repmat(f,1,3) 43 | Error using - 44 | Matrix dimensions must agree. 45 | 46 | >> B+[f;f;f] 47 | Error using + 48 | Matrix dimensions must agree. 49 | ``` 50 | -------------------------------------------------------------------------------- /Week-1/Programming-Quizzes/MATLAB/solution-5.md: -------------------------------------------------------------------------------- 1 | ``` 2 | Suppose we wish to generate a 4x1 vector that contains the number 5 in every position. 3 | Which of the following expressions will accomplish this task? 
Select all the correct answers. 4 | ``` 5 | ```m 6 | >> a=[5;5;5;5] 7 | 8 | a = 9 | 10 | 5 11 | 5 12 | 5 13 | 5 14 | 15 | >> a=fives(4,1) 16 | Undefined function or variable 'fives'. 17 | 18 | >> a=ones(4,1) 19 | 20 | a = 21 | 22 | 1 23 | 1 24 | 1 25 | 1 26 | 27 | >> a=ones(4)*5 28 | 29 | a = 30 | 31 | 5 5 5 5 32 | 5 5 5 5 33 | 5 5 5 5 34 | 5 5 5 5 35 | 36 | >> a=eye(4)*5 37 | 38 | a = 39 | 40 | 5 0 0 0 41 | 0 5 0 0 42 | 0 0 5 0 43 | 0 0 0 5 44 | 45 | >> a=ones(4,1)*5 46 | 47 | a = 48 | 49 | 5 50 | 5 51 | 5 52 | 5 53 | ``` 54 | -------------------------------------------------------------------------------- /Week-1/Programming-Quizzes/MATLAB/solution-6.md: -------------------------------------------------------------------------------- 1 | ```m 2 | >> A=[1,2,3,4,5;2,3,4,5,6;3,4,5,6,7;4,5,6,7,8;5,6,7,8,9] 3 | 4 | A = 5 | 6 | 1 2 3 4 5 7 | 2 3 4 5 6 8 | 3 4 5 6 7 9 | 4 5 6 7 8 10 | 5 6 7 8 9 11 | 12 | >> A(:,1:2:5) 13 | 14 | ans = 15 | 16 | 1 3 5 17 | 2 4 6 18 | 3 5 7 19 | 4 6 8 20 | 5 7 9 21 | 22 | >> [A(:,1) A(:,3) A(:,5)] 23 | 24 | ans = 25 | 26 | 1 3 5 27 | 2 4 6 28 | 3 5 7 29 | 4 6 8 30 | 5 7 9 31 | 32 | >> A(:,1:3) 33 | 34 | ans = 35 | 36 | 1 2 3 37 | 2 3 4 38 | 3 4 5 39 | 4 5 6 40 | 5 6 7 41 | 42 | >> [A(1,:) A(1,:) A(1,:)] 43 | 44 | ans = 45 | 46 | 1 2 3 4 5 1 2 3 4 5 1 2 3 4 5 47 | 48 | >> A(1:2:5,:) 49 | 50 | ans = 51 | 52 | 1 2 3 4 5 53 | 3 4 5 6 7 54 | 5 6 7 8 9 55 | ``` 56 | -------------------------------------------------------------------------------- /Week-1/Programming-Quizzes/MATLAB/solution-7.md: -------------------------------------------------------------------------------- 1 | ```m 2 | >> x=0:0.05:5;y=sin(x.^2);plot(x,y); 3 | ``` 4 | ![](http://geekresearchlab.net/coursera/neuro/figure-1_.jpg) 5 | -------------------------------------------------------------------------------- /Week-1/Programming-Quizzes/MATLAB/solution-8.md: -------------------------------------------------------------------------------- 1 | ``` 2 | What is the vector b equal 
to after this block of code is executed? 3 | ``` 4 | ```matlab 5 | A = [1 0 -4 8 3; 4 -2 3 3 1]; 6 | b = zeros(1,5); 7 | for index = 1:size(A,2) 8 | if A(1,index) > A(2,index) 9 | b(index) = A(1,index); 10 | else 11 | b(index) = A(2,index); 12 | end 13 | end 14 | ``` 15 | Solution: 16 | ```m 17 | >> A=[1,0,-4,8,3;4,-2,3,3,1]; 18 | >> A 19 | 20 | A = 21 | 22 | 1 0 -4 8 3 23 | 4 -2 3 3 1 24 | 25 | >> B=zeros(1,5); 26 | >> B 27 | 28 | B = 29 | 30 | 0 0 0 0 0 31 | 32 | >> for index=1:size(A,2) 33 | if A(1,index)>A(2,index) 34 | B(index)=A(1,index); 35 | else 36 | B(index)=A(2,index); 37 | end 38 | end 39 | >> index 40 | 41 | index = 42 | 43 | 5 44 | 45 | >> B 46 | 47 | B = 48 | 49 | 4 0 3 8 3 50 | ``` 51 | -------------------------------------------------------------------------------- /Week-1/Programming-Quizzes/MATLAB/solution-9.md: -------------------------------------------------------------------------------- 1 | ``` 2 | What is the value (rounded to three significant figures) of x after this block of code is executed? 3 | ``` 4 | ```matlab 5 | x = 1; 6 | while x > 1e-5 7 | x = x / 2; 8 | end 9 | ``` 10 | Solution: 11 | ```m 12 | >> x=1; 13 | >> while x>1e-5 14 | x=x/2; 15 | end 16 | >> x 17 | 18 | x = 19 | 20 | 7.6294e-06 21 | ``` 22 | -------------------------------------------------------------------------------- /Week-1/Programming-Quizzes/Python/README.md: -------------------------------------------------------------------------------- 1 | Score: 14/14 2 | -------------------------------------------------------------------------------- /Week-1/Programming-Quizzes/Python/Solutions/1.md: -------------------------------------------------------------------------------- 1 | Question:
2 | ``` 3 | A = np.array([[1, 2, 3], [2, 3, 4], [3, 4, 5], [4, 5, 6]]) 4 | ``` 5 | Answer: 6 | ```py 7 | >>> A=np.array([[1, 2, 3], [2, 3, 4], [3, 4, 5], [4, 5, 6]]) 8 | >>> A 9 | array([[1, 2, 3], 10 | [2, 3, 4], 11 | [3, 4, 5], 12 | [4, 5, 6]]) 13 | ``` 14 | -------------------------------------------------------------------------------- /Week-1/Programming-Quizzes/Python/Solutions/10.md: -------------------------------------------------------------------------------- 1 | Answer: 2 | ```py 3 | >>> import numpy as np 4 | >>> import matplotlib.pyplot as plt 5 | >>> x=np.arange(0,5,step=0.05) 6 | >>> y=np.sin(x**2) 7 | >>> plt.plot(x,y) 8 | [] 9 | >>> plt.show() 10 | ``` 11 | ![](http://geekresearchlab.net/coursera/neuro/figure_2.jpeg) 12 | -------------------------------------------------------------------------------- /Week-1/Programming-Quizzes/Python/Solutions/11.md: -------------------------------------------------------------------------------- 1 | Question: 2 | ``` 3 | Given the array x = np.array([1,2,3,4,5]), how do you create an array y that contains the cubes of all the elements of x? 4 | 5 | y = x**3 6 | 7 | y = x.^3 8 | 9 | y = x^3 10 | ``` 11 | Answer: 12 | ```py 13 | >>> x=np.array([1,2,3,4,5]) 14 | >>> x 15 | array([1, 2, 3, 4, 5]) 16 | >>> y=x**3 17 | >>> y 18 | array([ 1, 8, 27, 64, 125], dtype=int32) 19 | >>> y=x.^3 20 | SyntaxError: invalid syntax 21 | >>> y=x^3 22 | >>> y 23 | array([2, 1, 0, 7, 6], dtype=int32) 24 | ``` 25 | -------------------------------------------------------------------------------- /Week-1/Programming-Quizzes/Python/Solutions/12.md: -------------------------------------------------------------------------------- 1 | Question: 2 | ``` 3 | What is the mathematical representation of x after this sequence of commands? 
4 | ``` 5 | ```py 6 | x = np.array([[1,2,3],[2,3,4]]) 7 | x *= 5 8 | x -= 1 9 | x[x > 10] = 0 10 | x = x.T 11 | ``` 12 | Answer: 13 | ```py 14 | >>> x=np.array([[1,2,3],[2,3,4]]) 15 | >>> x 16 | array([[1, 2, 3], 17 | [2, 3, 4]]) 18 | >>> x*=5 19 | >>> x-=1 20 | >>> x[x>10]=0 21 | >>> x=x.T 22 | >>> x 23 | array([[4, 9], 24 | [9, 0], 25 | [0, 0]]) 26 | ``` 27 | -------------------------------------------------------------------------------- /Week-1/Programming-Quizzes/Python/Solutions/13.md: -------------------------------------------------------------------------------- 1 | Question: 2 | ``` 3 | Which of the following pieces of code sets the value of y to True if the value of x is either 2, 5, or 9, and to False otherwise? Check all that apply. 4 | 5 | ``` 6 | option 1 7 | ```py 8 | if x in [2, 5, 9]: 9 | y = True 10 | else: 11 | y = False 12 | ``` 13 | option 2 14 | ```py 15 | y = x in [2, 5, 9] 16 | ``` 17 | option 3 18 | ```py 19 | if x == [2, 5, 9]: 20 | y = True 21 | else: 22 | y = False 23 | ``` 24 | option 4 25 | ```py 26 | y = False 27 | if x in [2, 5, 9]: 28 | y = True 29 | ``` 30 | Answer-- 1,2,4 31 | -------------------------------------------------------------------------------- /Week-1/Programming-Quizzes/Python/Solutions/14.md: -------------------------------------------------------------------------------- 1 | Question: 2 | ``` 3 | What do the commands 4 | import pdb; pdb.set_trace() 5 | do when placed inside a Python script? 6 | ``` 7 | E.g., 8 | ```py 9 | x = np.arange(5) 10 | y = -np.arange(5) 11 | x[y < 2] = 0 12 | import pdb; pdb.set_trace() 13 | x *= 9 14 | print x 15 | ``` 16 | Answer:
17 | It pauses execution at that line and drops into the interactive pdb debugger, where you can inspect variables (here x and y, but not yet w) before continuing with c or quitting with q. Give it a try in a Python shell. =) -------------------------------------------------------------------------------- /Week-1/Programming-Quizzes/Python/Solutions/2.md: -------------------------------------------------------------------------------- 1 | Answer: 2 | ```py 3 | >>> A=np.array([[1,2,3,4],[2,3,4,5],[3,4,5,6]]) 4 | >>> A 5 | array([[1, 2, 3, 4], 6 | [2, 3, 4, 5], 7 | [3, 4, 5, 6]]) 8 | >>> B=A[[0,2],1:] 9 | >>> B 10 | array([[2, 3, 4], 11 | [4, 5, 6]]) 12 | ``` 13 | -------------------------------------------------------------------------------- /Week-1/Programming-Quizzes/Python/Solutions/3.md: -------------------------------------------------------------------------------- 1 | Question:
2 | Suppose you have a script that contains the line 3 | ```py 4 | A = np.array([1, 2, 3]) 5 | ``` 6 | but when you run it, the following error occurs: 7 | ```py 8 | Traceback (most recent call last): 9 | File "", line 1, in 10 | NameError: name 'np' is not defined 11 | ``` 12 | Answer: 13 | ```py 14 | >>> import numpy as np 15 | >>> A=np.array([1,2,3]) 16 | >>> A 17 | array([1, 2, 3]) 18 | ``` 19 | -------------------------------------------------------------------------------- /Week-1/Programming-Quizzes/Python/Solutions/4.md: -------------------------------------------------------------------------------- 1 | Question: 2 | ``` 3 | Given that numpy is imported as np, and that you have defined the one-dimensional array a=[1234], 4 | which of the following commands will not raise an error? 5 | 6 | Check all that apply. 7 | Choices--- 8 | 9 | b = a[4] 10 | 11 | b = a[:2, :2] 12 | 13 | b = np.ones(5, ) 14 | 15 | b = a[4:] 16 | 17 | b = a[:5] 18 | 19 | b = np.ones(5, 5) 20 | 21 | b = a[:2] 22 | 23 | b = np.ones((5, 5)) 24 | ``` 25 | Answers: 26 | ```py 27 | >>> a=np.array([1,2,3,4]) 28 | >>> a 29 | array([1, 2, 3, 4]) 30 | >>> b=a[4] 31 | Traceback (most recent call last): 32 | File "", line 1, in 33 | b=a[4] 34 | IndexError: index 4 is out of bounds for axis 0 with size 4 35 | >>> b=a[:2,:2] 36 | Traceback (most recent call last): 37 | File "", line 1, in 38 | b=a[:2,:2] 39 | IndexError: too many indices for array 40 | >>> b=np.ones(5,) 41 | >>> b 42 | array([ 1., 1., 1., 1., 1.]) 43 | >>> b=a[4:] 44 | >>> b 45 | array([], dtype=int32) 46 | >>> b=a[:5] 47 | >>> b 48 | array([1, 2, 3, 4]) 49 | >>> b=np.ones(5,5) 50 | Traceback (most recent call last): 51 | File "", line 1, in 52 | b=np.ones(5,5) 53 | File "D:\WinPython-64bit-3.4.3.2\python-3.4.3.amd64\lib\site-packages\numpy\core\numeric.py", line 183, in ones 54 | a = empty(shape, dtype, order) 55 | TypeError: data type not understood 56 | >>> b=a[:2] 57 | >>> b 58 | array([1, 2]) 59 | >>> b=np.ones((5,5)) 60 | >>> b 
61 | array([[ 1., 1., 1., 1., 1.], 62 | [ 1., 1., 1., 1., 1.], 63 | [ 1., 1., 1., 1., 1.], 64 | [ 1., 1., 1., 1., 1.], 65 | [ 1., 1., 1., 1., 1.]]) 66 | ``` 67 | -------------------------------------------------------------------------------- /Week-1/Programming-Quizzes/Python/Solutions/5.md: -------------------------------------------------------------------------------- 1 | Question: 2 | ``` 3 | Which piece of code generates an array x of 100 random numbers between 0 and 1? 4 | 5 | x = np.random(100) 6 | 7 | x = random(100) 8 | 9 | x = np.random.rand(100) 10 | ``` 11 | Answer: 12 | ```py 13 | >>> x=np.random(100) 14 | Traceback (most recent call last): 15 | File "", line 1, in 16 | x=np.random(100) 17 | TypeError: 'module' object is not callable 18 | >>> x=random(100) 19 | Traceback (most recent call last): 20 | File "", line 1, in 21 | x=random(100) 22 | NameError: name 'random' is not defined 23 | >>> x=np.random.rand(100) 24 | >>> x 25 | array([ 0.97242505, 0.8997873 , 0.05948199, 0.11541465, 0.19513624, 26 | 0.68711198, 0.76703181, 0.72983176, 0.70180359, 0.99995405, 27 | 0.14501983, 0.45442001, 0.95071072, 0.33446144, 0.32608089, 28 | 0.53551475, 0.42375901, 0.13759822, 0.46987233, 0.86715963, 29 | 0.17227361, 0.46108722, 0.9639906 , 0.24375939, 0.75630169, 30 | 0.1564429 , 0.83430317, 0.39852616, 0.57194232, 0.09946557, 31 | 0.57102112, 0.32118764, 0.64416744, 0.07610198, 0.34983916, 32 | 0.37975925, 0.95427101, 0.98277691, 0.80269886, 0.64525006, 33 | 0.90640652, 0.16676009, 0.0479203 , 0.28569842, 0.47810725, 34 | 0.84556138, 0.62178504, 0.32685945, 0.79664272, 0.62528386, 35 | 0.0290814 , 0.04944595, 0.4422104 , 0.37343932, 0.38800376, 36 | 0.860599 , 0.57148282, 0.24807951, 0.3895091 , 0.18609951, 37 | 0.571622 , 0.49702227, 0.34605323, 0.13895267, 0.3492419 , 38 | 0.45501214, 0.81307873, 0.3732032 , 0.78694034, 0.93890971, 39 | 0.13387857, 0.15762817, 0.68141378, 0.05876083, 0.22312342, 40 | 0.95464074, 0.62419264, 0.3606512 , 0.93171365, 0.86525317, 
41 | 0.86694366, 0.64842315, 0.32099752, 0.15535852, 0.57295248, 42 | 0.4876184 , 0.31922459, 0.26911208, 0.75989353, 0.00884791, 43 | 0.110637 , 0.0082132 , 0.25970633, 0.88824515, 0.40500321, 44 | 0.11752287, 0.5549363 , 0.22658992, 0.306906 , 0.27096286]) 45 | ``` 46 | -------------------------------------------------------------------------------- /Week-1/Programming-Quizzes/Python/Solutions/6.md: -------------------------------------------------------------------------------- 1 | Question: 2 | ``` 3 | Suppose x is an array of 100 random numbers between 0 and 1. Which piece of code sets to 1 all elements of x that are greater than 0.5? 4 | 5 | x[> 0.5] = 1 6 | 7 | [x > 0.5] = 1 8 | 9 | if x > 0.5: 10 | x = 1 11 | 12 | x[x > 0.5] = 1 13 | ``` 14 | Answer: 15 | ```py 16 | >>> x=np.random.rand(100) 17 | >>> x 18 | array([ 0.97242505, 0.8997873 , 0.05948199, 0.11541465, 0.19513624, 19 | 0.68711198, 0.76703181, 0.72983176, 0.70180359, 0.99995405, 20 | 0.14501983, 0.45442001, 0.95071072, 0.33446144, 0.32608089, 21 | 0.53551475, 0.42375901, 0.13759822, 0.46987233, 0.86715963, 22 | 0.17227361, 0.46108722, 0.9639906 , 0.24375939, 0.75630169, 23 | 0.1564429 , 0.83430317, 0.39852616, 0.57194232, 0.09946557, 24 | 0.57102112, 0.32118764, 0.64416744, 0.07610198, 0.34983916, 25 | 0.37975925, 0.95427101, 0.98277691, 0.80269886, 0.64525006, 26 | 0.90640652, 0.16676009, 0.0479203 , 0.28569842, 0.47810725, 27 | 0.84556138, 0.62178504, 0.32685945, 0.79664272, 0.62528386, 28 | 0.0290814 , 0.04944595, 0.4422104 , 0.37343932, 0.38800376, 29 | 0.860599 , 0.57148282, 0.24807951, 0.3895091 , 0.18609951, 30 | 0.571622 , 0.49702227, 0.34605323, 0.13895267, 0.3492419 , 31 | 0.45501214, 0.81307873, 0.3732032 , 0.78694034, 0.93890971, 32 | 0.13387857, 0.15762817, 0.68141378, 0.05876083, 0.22312342, 33 | 0.95464074, 0.62419264, 0.3606512 , 0.93171365, 0.86525317, 34 | 0.86694366, 0.64842315, 0.32099752, 0.15535852, 0.57295248, 35 | 0.4876184 , 0.31922459, 0.26911208, 0.75989353, 0.00884791, 
36 | 0.110637 , 0.0082132 , 0.25970633, 0.88824515, 0.40500321, 37 | 0.11752287, 0.5549363 , 0.22658992, 0.306906 , 0.27096286]) 38 | >>> x[>0.5]=1 39 | SyntaxError: invalid syntax 40 | >>> [x>0.5]=1 41 | SyntaxError: can't assign to comparison 42 | >>> if x>0.5: 43 | x=1 44 | 45 | 46 | Traceback (most recent call last): 47 | File "", line 1, in 48 | if x>0.5: 49 | ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all() 50 | 51 | 52 | >>> x[x>0.5]=1 53 | >>> x 54 | array([ 1. , 1. , 0.05948199, 0.11541465, 0.19513624, 55 | 1. , 1. , 1. , 1. , 1. , 56 | 0.14501983, 0.45442001, 1. , 0.33446144, 0.32608089, 57 | 1. , 0.42375901, 0.13759822, 0.46987233, 1. , 58 | 0.17227361, 0.46108722, 1. , 0.24375939, 1. , 59 | 0.1564429 , 1. , 0.39852616, 1. , 0.09946557, 60 | 1. , 0.32118764, 1. , 0.07610198, 0.34983916, 61 | 0.37975925, 1. , 1. , 1. , 1. , 62 | 1. , 0.16676009, 0.0479203 , 0.28569842, 0.47810725, 63 | 1. , 1. , 0.32685945, 1. , 1. , 64 | 0.0290814 , 0.04944595, 0.4422104 , 0.37343932, 0.38800376, 65 | 1. , 1. , 0.24807951, 0.3895091 , 0.18609951, 66 | 1. , 0.49702227, 0.34605323, 0.13895267, 0.3492419 , 67 | 0.45501214, 1. , 0.3732032 , 1. , 1. , 68 | 0.13387857, 0.15762817, 1. , 0.05876083, 0.22312342, 69 | 1. , 1. , 0.3606512 , 1. , 1. , 70 | 1. , 1. , 0.32099752, 0.15535852, 1. , 71 | 0.4876184 , 0.31922459, 0.26911208, 1. , 0.00884791, 72 | 0.110637 , 0.0082132 , 0.25970633, 1. , 0.40500321, 73 | 0.11752287, 1. , 0.22658992, 0.306906 , 0.27096286]) 74 | ``` 75 | -------------------------------------------------------------------------------- /Week-1/Programming-Quizzes/Python/Solutions/7.md: -------------------------------------------------------------------------------- 1 | Question: 2 | ``` 3 | Which piece of code returns the numerical indices of the first three elements of the one-dimensional array x that are greater than 1? 
4 | 5 | x[:3] > 1 6 | 7 | (x > 1)[:3] 8 | 9 | (x > 1).nonzero()[0][:3] 10 | 11 | x[x > 1][:3] 12 | ``` 13 | Answer: 14 | ```py 15 | >>> x=np.array([1,2,3,4,5]) 16 | >>> x 17 | array([1, 2, 3, 4, 5]) 18 | >>> x[:3]>1 19 | array([False, True, True], dtype=bool) 20 | >>> (x>1)[:3] 21 | array([False, True, True], dtype=bool) 22 | >>> (x>1).nonzero()[0][:3] 23 | array([1, 2, 3], dtype=int64) 24 | >>> x[x>1][:3] 25 | array([2, 3, 4]) 26 | ``` 27 | -------------------------------------------------------------------------------- /Week-1/Programming-Quizzes/Python/Solutions/8.md: -------------------------------------------------------------------------------- 1 | Question: 2 | ``` 3 | What piece of code loads the file 'data.pickle', which contains a dict object, into the variable data? You can assume that the directory containing 'data.pickle' is in your path (i.e., is accessible). 4 | 5 | *The end result should be that the variable data is a dict object. 6 | ``` 7 | ```py 8 | import pickle 9 | with open('data.pickle') as f: 10 | data = pickle.load(f) 11 | 12 | import pickle 13 | with open('data.pickle') as f: 14 | data = f.open() 15 | 16 | import pickle 17 | data = pickle.open(f) 18 | 19 | import pickle 20 | with open('data.pickle') as f: 21 | data = f 22 | ``` 23 | Working:
24 | option_1.py 25 | ```py 26 | import pickle 27 | with open('data.pickle') as f: 28 | data = pickle.load(f) 29 | ``` 30 | option_2.py 31 | ```py 32 | import pickle 33 | with open('data.pickle') as f: 34 | data = f.open() 35 | ``` 36 | option_3.py 37 | ```py 38 | import pickle 39 | data = pickle.open(f) 40 | ``` 41 | option_4.py 42 | ```py 43 | import pickle 44 | with open('data.pickle') as f: 45 | data = f 46 | ``` 47 | Execution along with Git: 48 | ```py 49 | dell@DELL3521 /d/WinPython-64bit-3.4.3.2 50 | $ python option_1.py 51 | Traceback (most recent call last): 52 | File "option_1.py", line 3, in 53 | data=pickle.load(f) 54 | TypeError: 'str' does not support the buffer interface 55 | 56 | dell@DELL3521 /d/WinPython-64bit-3.4.3.2 57 | $ python option_2.py 58 | Traceback (most recent call last): 59 | File "option_2.py", line 3, in 60 | data=f.open() 61 | AttributeError: '_io.TextIOWrapper' object has no attribute 'open' 62 | 63 | dell@DELL3521 /d/WinPython-64bit-3.4.3.2 64 | $ python option_3.py 65 | Traceback (most recent call last): 66 | File "option_3.py", line 2, in 67 | data=pickle.open(f) 68 | AttributeError: 'module' object has no attribute 'open' 69 | 70 | dell@DELL3521 /d/WinPython-64bit-3.4.3.2 71 | $ python option_4.py 72 | $ 73 | ``` 74 | But... this seems to work as correct answer 75 | ```py 76 | with open('data.pickle', 'rb') as f: 77 | ``` 78 | Source: https://docs.python.org/3.1/library/pickle.html 79 | -------------------------------------------------------------------------------- /Week-1/Programming-Quizzes/Python/Solutions/9.md: -------------------------------------------------------------------------------- 1 | Question: 2 | ``` 3 | Suppose the dict called data has been set to {'a': 3, 'c': 9, 'b': 5}. How do you set the value corresponding to the key 'b' to 100? 
4 | 5 | data.b = 100 6 | 7 | data['b'] = 100 8 | 9 | set(data, b, 100) 10 | 11 | data('b') = 100 12 | ``` 13 | Answer: 14 | ```py 15 | >>> data={'a':3,'c':9,'b':5} 16 | >>> data 17 | {'c': 9, 'b': 5, 'a': 3} 18 | >>> data.b=100 19 | Traceback (most recent call last): 20 | File "", line 1, in 21 | data.b=100 22 | AttributeError: 'dict' object has no attribute 'b' 23 | >>> data['b']=100 24 | >>> data 25 | {'c': 9, 'b': 100, 'a': 3} 26 | >>> set(data,b,100) 27 | Traceback (most recent call last): 28 | File "", line 1, in 29 | set(data,b,100) 30 | NameError: name 'b' is not defined 31 | >>> data('b')=100 32 | SyntaxError: can't assign to function call 33 | ``` 34 | -------------------------------------------------------------------------------- /Week-1/Programming-Quizzes/README.md: -------------------------------------------------------------------------------- 1 | Only for my future reference... Not for the purpose to share solutions with anyone.. 2 | -------------------------------------------------------------------------------- /Week-1/notes/1- overview.md: -------------------------------------------------------------------------------- 1 | ![](http://geekresearchlab.net/coursera/neuro/1-1-outline.jpg) 2 |

3 | ![](http://geekresearchlab.net/coursera/neuro/1-2-goals.jpg) 4 | -------------------------------------------------------------------------------- /Week-1/notes/2 - Computational NeuroScience: Descriptive Models.md: -------------------------------------------------------------------------------- 1 | ``` 2 | 1. Computational Neuroscience 3 | -- Characterizing "what" nervous systems do 4 | -- Determining "how" they function 5 | -- Understanding "why" they operate in particular ways 6 | ``` 7 | ``` 8 | 2. Types of computational models 9 | -- Descriptive Models ("what") 10 | -- Mechanistic Models ("How") 11 | -- Interpretive Models ("Why") 12 | ``` 13 | ``` 14 | 3. Examples: 15 | Models of Receptive Fields 16 | (Responses of a neuron in an intact cat brain) 17 | Mechanism 18 | -- Using an electrode, electrical signals are recorded from the brain. 19 | -- The electrical signals generated as outputs take the form of discrete pulses known as spikes. 20 | ``` 21 | ``` 22 | 4. Question: 23 | This describes a model of a specific neuron in a cat responding to visual stimuli. 24 | Which of the following functions most accurately depicts the model we are talking about here? 25 | Remember: y = f(x) means that y is a function of (depends on) x. 26 | Answer: 27 | Frequency of spikes = f(Light bar's orientation) 28 | Explanation: 29 | In computational terms, this model is defined by a function, which we can estimate mathematically. 30 | In this case, we are trying to estimate an encoding function - one which converts a stimulus into a neurological response. 31 | A greater response corresponds to more frequent "spikes" (also known as "action potentials") being generated by the cat's neuron. 32 | So the function we would estimate in this case is: 33 | Frequency of spikes = f(Light bar's orientation).
34 | We refer to the "receptive field" of a neuron as the particular orientation of a bar of light that produces the best response (that is, maximizes f(Light Bar's orientation)). 35 | ``` 36 | ``` 37 | 5. Receptive field: 38 | -- Specific properties of a sensory stimulus generating a strong response from the cell. 39 | Examples: 40 | - Spot of light over particular location on the retina 41 | - Bar of light over particular orientation and location on the retina. 42 | ``` 43 | Receptive field of the Descriptive Model:
44 | ![](http://geekresearchlab.net/coursera/neuro/receptive-des.jpg)

45 | ![](http://geekresearchlab.net/coursera/neuro/receptive-des-c.jpg) 46 | ``` 47 | 6. Question: 48 | When a cell has an "on-center, off-surround" receptive field, 49 | which center is being referred to? 50 | Answer: 51 | The center of the small patch of retina associated with the cell. 52 | Explanation: 53 | Each cell tends to respond to light input in only a small area of the retina and visual field. 54 | An on-center, off-surround cell becomes more active when only the center of this area is illuminated and less active when only the edges of this area (the surround) are illuminated. 55 | Generally, a cell is not affected much by input in far away areas of the retina (although recent studies have begun to show that some subtle long-range communication may exist). 56 | ``` 57 | ``` 58 | 7. Question: 59 | The On-Center / Off-Surround receptive field can be thought of as a filter. 60 | This filter results in more activation due to certain stimuli, and a depression in activation due to other stimuli. 61 | What is this particular filter doing? 62 | Answer: 63 | Causing activation with stimuli concentrated on the center of the receptive field, 64 | and depressing activation with stimuli which are concentrated in the surround 65 | Explanation: 66 | This filter is causing more activation when the light is concentrated in the center of the receptive field, and less when the light is concentrated on the outside. 67 | Later, we will show how these filters can be thought of as simple mathematical operations. 68 | ``` 69 | ![](http://geekresearchlab.net/coursera/neuro/receptive-des-cort.jpg)
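The explanation of question 7 says these filters can be thought of as simple mathematical operations. A common way to sketch an on-center/off-surround filter is a difference of Gaussians — a narrow excitatory center minus a broad inhibitory surround. The grid size and Gaussian widths below are illustrative assumptions, not values from the lecture:

```python
import numpy as np

def difference_of_gaussians(size=15, sigma_center=1.0, sigma_surround=3.0):
    """On-center/off-surround receptive field as a difference of Gaussians:
    narrow excitatory center minus broad inhibitory surround."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx ** 2 + yy ** 2
    center = np.exp(-r2 / (2 * sigma_center ** 2)) / (2 * np.pi * sigma_center ** 2)
    surround = np.exp(-r2 / (2 * sigma_surround ** 2)) / (2 * np.pi * sigma_surround ** 2)
    return center - surround

rf = difference_of_gaussians()

# The cell's response is (roughly) the filter multiplied element-wise with
# the stimulus patch and summed:
center_spot = np.zeros((15, 15)); center_spot[7, 7] = 1.0   # light on the center
surround_spot = np.zeros((15, 15)); surround_spot[7, 11] = 1.0  # light on the surround
print((rf * center_spot).sum() > 0, (rf * surround_spot).sum() < 0)
```

Light on the center drives the response up; light on the surround drives it down, matching the answer to question 7.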

70 | ![](http://geekresearchlab.net/coursera/neuro/receptive-des-cort-c.jpg) 71 | ``` 72 | How are these oriented receptive fields obtained from center-surround receptive fields? 73 | ``` 74 | The answer comes in the next chapter... 75 | -------------------------------------------------------------------------------- /Week-1/notes/3 - Computational NeuroScience: Mechanistic and Interpretive Models.md: -------------------------------------------------------------------------------- 1 | ``` 2 | Continuation from previous chapter... 3 | Question: 4 | How are these oriented receptive fields obtained from center-surround receptive fields? 5 | Answer: 6 | (Primary Visual Cortex, V1 model) 7 | Using Mechanistic Models, 8 | Arrange the (center-surround) inputs (LGN cells) along a line at the preferred orientation (e.g., 45 degrees), 9 | then make feed-forward connections converging from these inputs onto a particular (V1) cell. 10 | -- Proposed in the 1960s by Hubel & Wiesel. 11 | -- Controversial model (it does not account for recurrent inputs, though those inputs also contribute to the V1 cell's response). 12 | -- Two kinds of inputs:- Feed-forward and recurrent networks. 13 | ``` 14 | ![](http://geekresearchlab.net/coursera/neuro/receptive-mech.jpg) 15 |
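The Hubel & Wiesel feedforward idea above can be sketched as a toy model: a V1 cell sums the outputs of LGN cells whose receptive-field centers lie along its preferred orientation, so a bar aligned with that line drives all of them at once. Here the LGN outputs are crudely approximated by pixel values, and the threshold is an arbitrary assumption:

```python
import numpy as np

# LGN receptive-field centers laid out along a 45-degree diagonal
lgn_centers = [(i, i) for i in range(5)]

def v1_response(image, threshold=3.0):
    """Feedforward V1 sketch: sum the LGN drives at the aligned centers,
    then rectify against a threshold."""
    drive = sum(image[r, c] for r, c in lgn_centers)
    return max(drive - threshold, 0.0)

aligned = np.eye(5)                 # bar of light along the preferred (45-degree) line
orthogonal = np.fliplr(aligned)     # bar at the orthogonal orientation
print(v1_response(aligned), v1_response(orthogonal))   # 2.0 0.0
```

The aligned bar excites every LGN input and crosses threshold; the orthogonal bar hits only one center and produces no output — the orientation selectivity the model is meant to explain.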
16 | ``` 17 | Interpretive Models: 18 | RF1, RF2, RF3 and RF4 --> neurons 19 | ``` 20 | ![](http://geekresearchlab.net/coursera/neuro/receptive-inter.jpg)

21 | ![](http://geekresearchlab.net/coursera/neuro/receptive-inter-2.jpg)
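The reconstruction idea in these figures — building an image back out of the neurons' receptive fields — is a linear combination: each receptive field is weighted by the corresponding neuron's response strength and the results are added. The receptive fields and response values below are toy placeholders:

```python
import numpy as np

# Toy 2x2 "receptive fields" (made-up values, purely illustrative)
rf1 = np.array([[1.0, 0.0], [0.0, 0.0]])
rf2 = np.array([[0.0, 1.0], [0.0, 0.0]])
rf3 = np.array([[0.0, 0.0], [1.0, 1.0]])

# Response strengths of the corresponding neurons
responses = [3.0, 2.0, 5.0]

# I = 3*RF1 + 2*RF2 + 5*RF3 -- a linear combination of receptive fields
reconstruction = sum(r * rf for r, rf in zip(responses, (rf1, rf2, rf3)))
print(reconstruction)
```

This is exactly the form the upcoming question asks about: I = 3*RF1 + 2*RF2 + 5*RF3.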
22 | ``` 23 | Question: 24 | When we say "linear combination" we are talking about a specific mathematical way of combining several things together. 25 | Which of the following looks most like an example of a linear combination of receptive fields to form an image reconstruction, 26 | assuming I is the image, and the RFs are the receptive fields we are combining? 27 | Answer: 28 | I = 3*RF1 + 2*RF2 + 5*RF3 29 | Explanation: 30 | Put simply, a linear combination involves adding multiples of many things together. 31 | For example, 32 | if we linearly combine X1, X2, X3, ... together, with multipliers equal to a, b, c, ..., we get: a*X1 + b*X2 + c*X3 + ... 33 | Likewise, 34 | we can linearly combine receptive fields where the multipliers are the strength of the response of a neuron (with a particular receptive field) to the visual stimulus. 35 | ``` 36 | ``` 37 | (contd..) Interpretive models of receptive field: 38 | -- When it comes to reconstruction of images and errors... 39 | Start out with random RF running an efficient coding algorithm over natural image patches. 40 | -- Types of efficient coding algorithms: 41 | (i) Sparse coding [Olshausen & Field, 1996] 42 | (ii) Independent Component Analysis (ICA) [Bell & Sejnowski, 1997] 43 | (iii) Predictive coding 44 | ``` 45 | ![](http://geekresearchlab.net/coursera/neuro/receptive-inter-3.jpg) 46 | ``` 47 | Question: 48 | With this interpretive model of V1 receptive fields, our purpose is to: 49 | Answers: 50 | (1) Give an explanation why V1 receptive fields are formed the way we observe them to be... 51 | (2) Demonstrate that the perceptual processes in the brain may be formed in a way that allows for a faithful and efficient encoding of the natural environment. 52 | (3)Provide a computational model of one aspect of the brain's organization, which gives us insights that may be difficult to derive from strict empirical measurements alone. 
53 | ``` 54 | -------------------------------------------------------------------------------- /Week-1/notes/4-NeuroBiology-101/4-1-Neurons.md: -------------------------------------------------------------------------------- 1 | ``` 2 | Neuron Doctrine: 3 | -- The neuron is the fundamental structural & functional unit of the brain. 4 | -- Neurons are discrete cells, not continuous with one another. 5 | -- Information flows from the dendrites to the axon via the cell body. 6 | ``` 7 | ![](http://geekresearchlab.net/coursera/neuro/neurons-1.jpg)

8 | ![](http://geekresearchlab.net/coursera/neuro/neurons-2.jpg)

9 | ![](http://geekresearchlab.net/coursera/neuro/neurons-3.jpg)
10 | ``` 11 | Question: 12 | What are the characteristics of the idealized brain model? 13 | Answers: 14 | (i) The brain is broken into individual, discrete parts called 'neurons.' 15 | (ii) The shape of neurons varies in some general way from one area of the brain to another. 16 | (iii) Dendrites are the input ends of the neuron, whereas axons are the output ends. 17 | ``` 18 | ![](http://geekresearchlab.net/coursera/neuro/neurons-4.jpg)
19 | ``` 20 | Question: 21 | Spikes (output) from a neuron occur when? 22 | Answer: 23 | The sum of inputs from neighboring neurons reaches a certain threshold. 24 | Explanation: 25 | Generally, you can think of the neuron as summing over all of its inputs over time and location along its dendrites. 26 | When that sum reaches a high enough point, the cell is said to "fire" a spike. 27 | ``` 28 | ``` 29 | Neuron: 30 | -- The neuron is a leaky bag of charged liquid. 31 | -- The contents are enclosed within a cell membrane. 32 | -- The cell membrane is a lipid bilayer, 33 | where it is impermeable to charged ion species such as Sodium (Na), Chloride (Cl) & Potassium (K). 34 | -- Ionic channels embedded in the cell membrane allow ions to flow in or out. 35 | -- Each neuron maintains a potential difference across its membrane, 36 | where the inside is at about -70mV relative to the outside. 37 | Na & Cl concentrations are higher outside, while K & organic anions (A) are higher inside. 38 | Ionic pumps maintain the -70mV difference by expelling Na out & letting K ions in. 39 | ``` 40 | ``` 41 | How can the electrical potential be changed in local regions of a neuron? 42 | Answer: Ionic channels 43 | ``` 44 | ``` 45 | Ionic channels: 46 | -- The ionic channels are embedded in the cell membrane, where ions are allowed to flow in or out. 47 | -- The ionic channels are gated, as follows: 48 | (i) Voltage-gated 49 | Probability of the opening depends upon the membrane voltage. 50 | Voltage-gated channels-- 51 | - Depolarization: a positive change in voltage; a sufficiently large depolarization triggers a spike (action potential). 52 | - Hyperpolarization: a negative change in voltage. 53 | (ii) Chemically-gated 54 | Binding to a chemical causes the channel to open. 55 | E.g.: synapses 56 | (iii) Mechanically gated 57 | Sensitive to pressure or stretch. 58 | ``` 59 | ``` 60 | Question: 61 | Spikes (output) from a neuron occur when?
62 | Answer: 63 | Other neurons' outputs cause some of this neuron's gates to open and allow in a different concentration of ions, which leads to a strong depolarization and increases the chance of a spike. 64 | ``` 65 | ``` 66 | Propagation of a Spike along an Axon: 67 | Refer to this link --> http://psych.hanover.edu/krantz/neural/actpotanim.html 68 | ``` 69 |
70 | ![](http://geekresearchlab.net/coursera/neuro/neurons-5.jpg)

71 | ![](http://geekresearchlab.net/coursera/neuro/neurons-6.jpg)
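The threshold idea from the question above — spikes occur when the summed input reaches a certain threshold — is often sketched as a leaky integrate-and-fire model: the membrane leaks toward its resting potential while integrating input, and fires and resets when a threshold is crossed. All parameter values here are illustrative, not from the lecture:

```python
import numpy as np

def simulate_lif(input_current, dt=0.1, tau=10.0, v_rest=-70.0,
                 v_threshold=-55.0, v_reset=-70.0):
    """Leaky integrate-and-fire sketch: leaky integration of input,
    spike + reset when the membrane potential crosses threshold."""
    v = v_rest
    spikes = []
    for t, i_in in enumerate(input_current):
        dv = (-(v - v_rest) + i_in) / tau   # leak toward rest + input drive
        v += dv * dt
        if v >= v_threshold:                # threshold crossed -> fire a spike
            spikes.append(t)
            v = v_reset                     # reset after the spike
    return spikes

strong = simulate_lif(np.full(2000, 30.0))  # sustained supra-threshold input
weak = simulate_lif(np.full(2000, 5.0))     # input too weak to reach threshold
print(len(strong) > 0, len(weak) == 0)      # strong input spikes, weak never does
```

With the weak input the membrane settles at -65 mV, below threshold, so no spikes occur; the strong input pushes it past -55 mV repeatedly.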
72 | ``` 73 | What happens to the spike (action potential) when it reaches the end of an axon? 74 | This will be continued in the next chapter... 75 | ``` 76 | -------------------------------------------------------------------------------- /Week-1/notes/4-NeuroBiology-101/4-2-Synapses.md: -------------------------------------------------------------------------------- 1 | ``` 2 | Synapse: 3 | -- A synapse is the connection or junction between two neurons. 4 | -- Synapse doctrine: Synapses form the basis for "memory" and "learning". 5 | -- Types: Electrical synapses and Chemical synapses. 6 | -- The electrical synapses use gap junctions. 7 | -- The chemical synapses use neuro-transmitters. 8 | ``` 9 | ``` 10 | Electrical Synapses:- 11 | -- The electrical synapses use gap junctions. 12 | -- The mechanism can be likened to the direct connectivity between devices such as cell phones or computers. 13 | ``` 14 | ``` 15 | -- Mechanism of Electrical Synapses: 16 | ``` 17 | ![](http://geekresearchlab.net/coursera/neuro/synapse-1.jpg)
18 | ``` 19 | Explanation of the mechanism: 20 | (i) Neurons A & B are connected, with the gap junctions serving as an interface between them. 21 | (ii) The yellow structures are ionic channels, through which ions such as Sodium (Na) migrate from Neuron A to Neuron B by passing through the gap junction. 22 | (iii) This provides fast connections. 23 | ``` 24 | ``` 25 | Chemical Synapses:- 26 | ``` 27 | ![](http://geekresearchlab.net/coursera/neuro/synapse-2.jpg)
28 | ``` 29 | Explanation of the mechanism: 30 | Suppose we have Neuron A and Neuron B, 31 | and a spike (action potential) travels down Neuron A. 32 | The bags found on Neuron A are known as vesicles, and they are filled with neurotransmitter molecules. 33 | When the spike reaches the vesicles, they release the molecules into the junction. 34 | The junction area filled with these molecules is known as the synaptic cleft. 35 | These molecules bind to the receptors, which are chemically-gated ionic channels. 36 | The channels then open, allowing ions to flow in or out depending upon the concentration. 37 | ``` 38 | ``` 39 | Chemical Synapses (vs) Electrical Synapses: 40 | The chemical synapses are the ones predominantly used in things like memory formation, as these chemicals also have the ability to alter synaptic interactions in the synaptic cleft, 41 | whereas the electrical synapses are a means by which networks of neurons can coordinate, and in some cases synchronize, their outputs. 42 | ``` 43 | ``` 44 | Beautiful picture: 45 | ``` 46 | ![](http://geekresearchlab.net/coursera/neuro/synapse-0.jpg)
47 | ``` 48 | Explaining the beautiful picture: 49 | Each of the bright spots corresponds to a synapse, 50 | where a typical neuron has around 10,000 synapses on its dendrites and cell body. 51 | The synapses can be either excitatory or inhibitory. 52 | An excitatory synapse increases the postsynaptic membrane potential, 53 | whereas an inhibitory synapse decreases it. 54 | ``` 55 | ![](http://geekresearchlab.net/coursera/neuro/synapse-3.jpg)

56 | ![](http://geekresearchlab.net/coursera/neuro/synapse-4.jpg)
57 | ``` 58 | How do synapses play an important role in memory and learning? 59 | The brain learns through synaptic plasticity. 60 | ``` 61 | Hebbian Plasticity:

62 | ![](http://geekresearchlab.net/coursera/neuro/synapse-5.jpg)
63 | ``` 64 | -- Proposed by the Canadian psychologist Donald Hebb. 65 | -- If Neuron A repeatedly takes part in firing Neuron B, then the synapse from A to B is strengthened. 66 | -- Popular saying: Neurons that fire together wire together. 67 | ``` 68 | ``` 69 | Examples where plasticity plays an important role: 70 | 1. A young child learning its first language 71 | 2. Practicing your kung fu enough to make it second nature 72 | 3. Studying for your examinations 73 | 4. Bicycling more timidly after a bad bicycle accident 74 | ``` 75 | ![](http://geekresearchlab.net/coursera/neuro/synapse-6.jpg)

76 | ![](http://geekresearchlab.net/coursera/neuro/synapse-7.jpg)

77 | ![](http://geekresearchlab.net/coursera/neuro/synapse-8.jpg)

78 | ![](http://geekresearchlab.net/coursera/neuro/synapse-9.jpg)
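Hebbian plasticity — "neurons that fire together wire together" — can be written as a simple weight-update rule: increment the synaptic weight only when pre- and post-synaptic activity coincide. The learning rate and activity traces below are toy values for illustration:

```python
import numpy as np

eta = 0.1                                        # learning rate (toy value)
pre = np.array([1, 1, 0, 1, 0], dtype=float)     # pre-synaptic activity over time
post = np.array([1, 0, 0, 1, 1], dtype=float)    # post-synaptic activity over time

w = 0.5                                          # initial synaptic weight
for x, y in zip(pre, post):
    w += eta * x * y    # Hebb's rule: strengthen only on coincident activity
print(round(w, 2))      # 0.7 -- two coincidences, each adding eta
```

Rules like LTP/LTD and spike-timing dependent plasticity (shown in the slides above) refine this basic idea by also allowing weights to decrease.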
79 | ``` 80 | What do we know about how networks of neurons give rise to perception, behavior and consciousness? 81 | This will be continued in the next chapter... 82 | ``` 83 | -------------------------------------------------------------------------------- /Week-1/notes/4-NeuroBiology-101/4-3-Nervous System and Brain regions.md: -------------------------------------------------------------------------------- 1 | ``` 2 | Nervous Systems: 3 | ``` 4 | ``` 5 | Types:- 6 | Peripheral Nervous System and Central Nervous System 7 | ``` 8 | ``` 9 | The Peripheral Nervous System consists of the Somatic and Autonomic nervous systems. 10 | ``` 11 | ``` 12 | The Somatic system is where the nerves connect to the "voluntary skeletal muscles and sensory receptors". 13 | Nerves are bundles of axons. 14 | Somatic nerves contain Afferent Nerve Fibers (incoming) and Efferent Nerve Fibers (outgoing). 15 | Afferent Nerve Fibers are the incoming ones, where the axons carry information away from the periphery to the Central Nervous System (CNS). 16 | Efferent Nerve Fibers are the outgoing ones, where the axons carry information from the CNS outward to the periphery. 17 | ``` 18 | ``` 19 | The Autonomic nervous system is where the nerves connect to the internal organs such as the heart, blood vessels, smooth muscles and glands. 20 | It operates below the level of consciousness. 21 | It regulates functions such as heart rate, digestion, respiration rate and so on. 22 | ``` 23 | ``` 24 | The Central nervous system consists of the spinal cord and brain. 25 | ``` 26 | ``` 27 | Spinal Cord: 28 | -- Local feedback loops control reflexes known as the reflex arcs. 29 | -- Descending motor control signals from the brain activate the spinal motor neurons. 30 | -- Ascending sensory axons convey the sensory information from the muscles and the skin back to the brain. 31 | ``` 32 | ``` 33 | BRAIN REGIONS: 34 | ``` 35 | ``` 36 | 1.
Hindbrain 37 | (i) Medulla Oblongata: Controls breathing, muscle tone and blood pressure. 38 | (ii) Pons: Connected to the cerebellum. It's involved in sleep and arousal. 39 | (iii) Cerebellum: Coordination and timing of voluntary movements, sense of equilibrium, language, attention, etc. 40 | ``` 41 | ![](http://geekresearchlab.net/coursera/neuro/brain-1.jpg)
42 | ``` 43 | 2. Midbrain and Reticular formation 44 | (i) Midbrain focus on eye movements, visual and auditory reflexes. 45 | (ii) Reticular formation modulates the muscle reflexes, breathing & pain perception. Also, sleep regulation, wakefulness & arousal. 46 | ``` 47 | ![](http://geekresearchlab.net/coursera/neuro/brain-2.jpg)
48 | ``` 49 | 3. Thalamus and Hypothalamus 50 | (i) Thalamus is the relay station for all the sensory information (except smell) to the cortex and also, regulates sleep/wakefulness. 51 | (ii) Hypothalamus regulates the basic needs such as Fighting, Fleeing, Feeding and Mating. 52 | ``` 53 | ![](http://geekresearchlab.net/coursera/neuro/brain-3.jpg)
54 | ``` 55 | 4. Cerebrum 56 | -- It consists of the cerebral cortex, basal ganglia, hippocampus and amygdala. 57 | -- It's involved in perception and motor control, cognitive functions, emotion, memory and learning. 58 | ``` 59 | ``` 60 | -- Discussing parts of the cerebrum in detail: 61 | (a) Cerebral cortex: 62 | - The cerebral cortex is a layered sheet of neurons. 63 | - It's about 1/8th of an inch thick. 64 | - Approximately 30 billion neurons. 65 | - Each neuron makes about 10,000 synapses, for a total of approximately 300 trillion connections. 66 | - The cerebral cortex has 6 layers of neurons with a relatively uniform structure. 67 | - As a computational neuroscience researcher, 68 | Is there a common computational principle operating across the cortex? (A question to ask ourselves and research) 69 | ``` 70 | ![](http://geekresearchlab.net/coursera/neuro/brain-4.jpg)
71 | ``` 72 | How do all of these brain regions interact to produce cognition and behavior? 73 | The answer is not yet known, but there are a number of techniques for probing it, such as: 74 | (i) Electrophysiological 75 | (ii) Optical 76 | (iii) Molecular 77 | (iv) Functional imaging 78 | (v) Psychophysical 79 | (vi) Anatomical 80 | (vii) Connectomic 81 | (viii) Lesion (brain damage) studies 82 | ``` 83 | ``` 84 | Neural Computing (vs) Digital Computing 85 | (i) Device count: 86 | Human brain: 10^11 neurons (each neuron with approximately 10^4 connections) 87 | Silicon chip: 10^10 transistors with sparse connectivity. 88 | (ii) Device speed: 89 | Biology: roughly 100 microseconds (µs) temporal resolution. 90 | Digital circuits: 100 ps clock (10 GHz). 91 | (iii) Computing Paradigm: 92 | Brain: Massively parallel computation & adaptive connectivity. 93 | Digital computers: Sequential information processing via CPUs with fixed connectivity. 94 | (iv) Capabilities: 95 | Digital computers excel at math & symbol processing. 96 | Brains: Better at solving ill-posed problems such as speech & vision. 97 | ``` 98 | -------------------------------------------------------------------------------- /Week-1/notes/4-NeuroBiology-101/4-4-Conclusion.md: -------------------------------------------------------------------------------- 1 | Conclusions and Summary: 2 | ``` 3 | The structure and organization of the brain suggest computational analogies such as: 4 | (i) Information storage: 5 | Physical/Chemical structure of neurons and synapses. 6 | (ii) Information transmission: 7 | Electrical and Chemical Signalling. 8 | (iii) Primary computing elements:- Neurons 9 | (iv) Computational basis:- currently unknown.
10 | ``` 11 | -------------------------------------------------------------------------------- /Week-1/notes/4-NeuroBiology-101/README.md: -------------------------------------------------------------------------------- 1 | NeuroBiology-101 consists of the portions that will discuss about the Neurons, Synapses and Brain regions. 2 | -------------------------------------------------------------------------------- /Week-1/shared/acronyms.md: -------------------------------------------------------------------------------- 1 | ``` 2 | CNS Central Nervous System (Lecture 1-6, 3:23) 3 | 4 | CPU Central Processing Unit (Lecture 1-6, 15:03) 5 | 6 | EPSP Excitatory Post-Synaptic Potential (Lecture 1-4, 4:28; Lecture 1-5, 8:30) 7 | 8 | GABA gamma-aminobutyric acid, C4H9NO2, a neurotransmitter (Lecture 1-5, 10:07) 9 | 10 | GABAB GABA-B metabotropic receptors that use G-protein intermediaries to open/close K+ ion channels (Lecture 1-5, 10:24; Wikipedia) 11 | 12 | ICA Independent Component Analysis (Bell and Sejnowski, 1997) (Lecture 1-3, 9:45) 13 | 14 | IPSP Inhibitory Post-Synaptic Potential (Lecture 1-5, 9:47) 15 | 16 | LGN Lateral Geniculate Nucleus (Lecture 1-2, 8:50) 17 | 18 | LTD Long-Term Depression (Lecture 1-5, 16:22) 19 | 20 | LTP Long-Term Potentiation (Lecture 1-5, 14:45) 21 | 22 | RF Receptive Field (Lecture 1-3, 1:28) 23 | 24 | PNS Peripheral Nervous System (Lecture 1-6, 0:35) 25 | 26 | STDP Spike-Timing Dependent Plasticity (Lecture 1-5, 19:25) 27 | 28 | V1 Primary Visual Cortex (Lecture 1-2, 9:04) 29 | ``` 30 | -------------------------------------------------------------------------------- /Week-1/shared/lecture.md: -------------------------------------------------------------------------------- 1 | Lecture notes for week 1 --- click here 2 | -------------------------------------------------------------------------------- /Week-2/Quiz/Programming/questions/1.md: -------------------------------------------------------------------------------- 1 | Question: 2 | 
``` 3 | The data set we have given you comprises a stimulus vector (named stim) and a binary vector (named rho). 4 | These two vectors are the same length because they represent measurements of two different quantities over the same time period. 5 | The binary vector has a 1 if a spike occurred in the time bin corresponding to that index and a 0 otherwise. 6 | The sampling rate for the data set was 500 Hz. 7 | 8 | How many milliseconds are there between adjacent samples (what is the sampling period)? Only enter the number, not the units. 9 | If your answer is not an integer, round to the nearest integer value. 10 | Set the variable named sampling_period in quiz2.m (quiz2.py) equal to this value. 11 | ``` 12 | Solution: 13 | See my checked-in solutions 14 | -------------------------------------------------------------------------------- /Week-2/Quiz/Programming/questions/2.md: -------------------------------------------------------------------------------- 1 | Question: 2 | ``` 3 | We wish to compute the spike-triggered average for this neuron over a window of width 300 ms. 4 | Suppose we do not care about the value exactly 300 ms before the spike. 5 | How many elements (time steps) will be in our resulting spike-triggered average vector? 6 | Set the variable named num_timesteps in quiz2.m (quiz2.py) equal to this value and enter it below. 7 | 8 | Hint: Your answer should be an even number. 9 | ``` 10 | Solution: 11 | ```py 12 | Hint--- 13 | 14 | num_timesteps = time_window / sampling_period 15 | print(num_timesteps) 16 | ``` 17 | See my checked-in solutions and write the answer. 18 | -------------------------------------------------------------------------------- /Week-2/Quiz/Programming/questions/3.md: -------------------------------------------------------------------------------- 1 | Question: 2 | ``` 3 | In order to calculate the average, it is necessary for us to know how many time windows (stimulus vectors) we are averaging over.
This is equal to the number of observed spikes. Write code to calculate the total number of spikes in the data set c1p8.mat. 4 | How many spikes were observed in this recording? 5 | You should not count any spikes that occur before 300 ms from the beginning of the recording. 6 | 7 | Set the variable named num_spikes in compute_sta equal to this value, 8 | or (better yet) use the expression/variable/code you used to calculate this value and set it equal to num_spikes so that your code will work for any set of parameters (different sampling rate, different time window in which average is calculated etc.) passed to compute_sta. 9 | ``` 10 | Solution: 11 | ```py 12 | Hint --- 13 | 14 | rho = mat['rho'][num_timesteps:] 15 | 16 | spike_times = rho.nonzero()[0] 17 | num_spikes = spike_times.size 18 | print(num_spikes) 19 | ``` 20 | ``` 21 | 53583 22 | ``` 23 | For more, see my checked-in solution 24 | -------------------------------------------------------------------------------- /Week-2/Quiz/Programming/questions/4.md: -------------------------------------------------------------------------------- 1 | Question: 2 | ``` 3 | Now we may compute the spike-triggered average. 4 | To do this, add code to compute_sta. 5 | Remember that the spike-triggered average is the element-wise mean of the time windows starting 300 ms before (exclusive) and ending 0 ms before a spike. 6 | 7 | Note that we have given you code to find all of the indices in the stimulus vector that correspond to the spike times (labeled as the variable spike_times in compute_sta). 8 | 9 | Which of these plots most closely matches the spike-triggered average for this data set? 10 | ``` 11 | Solution: 12 | 13 | Output generated... 
---> check here 14 | 15 | For more, check-in my solution 16 | -------------------------------------------------------------------------------- /Week-2/Quiz/Programming/resources/references.md: -------------------------------------------------------------------------------- 1 | [1] Shared Resources in Coursera
2 | [2] Preprocessing and analysis of spike-train data
3 | [3] Preprocessing and analysis of spike and local field potential data
4 | [4] Spike Triggered Average (STA)
5 | [5] Neural Networks - Tutorial
6 | [6] 7 | -------------------------------------------------------------------------------- /Week-2/Quiz/Programming/shared_codes/Python/compute_sta.py: -------------------------------------------------------------------------------- 1 | """ 2 | Created on Wed Apr 22 15:21:11 2015 3 | 4 | Code to compute spike-triggered average. 5 | """ 6 | 7 | from __future__ import division 8 | import numpy as np 9 | #import matplotlib.pyplot as plt 10 | 11 | 12 | def compute_sta(stim, rho, num_timesteps): 13 | """Compute the spike-triggered average from a stimulus and spike-train. 14 | 15 | Args: 16 | stim: stimulus time-series 17 | rho: spike-train time-series 18 | num_timesteps: how many timesteps to use in STA 19 | 20 | Returns: 21 | spike-triggered average for num_timesteps timesteps before spike""" 22 | 23 | sta = np.zeros((num_timesteps,)) 24 | 25 | # This command finds the indices of all of the spikes that occur 26 | # after 300 ms into the recording. 27 | spike_times = rho[num_timesteps+1:].nonzero()[0] + num_timesteps 28 | 29 | # Count only the spikes found above, i.e. those that occur 30 | # at least 300 ms into the recording. 31 | num_spikes = spike_times.size 32 | 33 | # Compute the spike-triggered average of the spikes found. 34 | # To do this, compute the average of all of the vectors 35 | # starting 300 ms (exclusive) before a spike and ending at the time of 36 | # the event (inclusive). Each of these vectors defines a list of 37 | # samples that is contained within a window of 300 ms before each 38 | # spike. The average of these vectors should be completed in an 39 | # element-wise manner. 40 | # 41 | # Your code goes here. 42 | 43 | return sta 44 | -------------------------------------------------------------------------------- /Week-2/Quiz/Programming/shared_codes/Python/quiz2.py: -------------------------------------------------------------------------------- 1 | """ 2 | Created on Wed Apr 22 15:15:16 2015 3 | 4 | Quiz 2 code.
5 | """ 6 | 7 | from __future__ import division 8 | import numpy as np 9 | import scipy 10 | 11 | try: 12 | import matplotlib.pyplot as plt 13 | except ImportError: 14 | pass 15 | 16 | try: 17 | import numpy.matlib as matlib 18 | except ImportError: 19 | pass 20 | 21 | import pickle 22 | 23 | from compute_sta import compute_sta 24 | 25 | 26 | FILENAME = 'neuro/week2/c1p8.pickle' 27 | 28 | with open(FILENAME, 'rb') as f: 29 | data = pickle.load(f) 30 | 31 | stim = data['stim'] 32 | rho = data['rho'] 33 | 34 | 35 | # Filled in these values 36 | sampling_period = 2 # in ms 37 | num_timesteps = 150 38 | 39 | sta = compute_sta(stim, rho, num_timesteps) 40 | 41 | time = (np.arange(-num_timesteps, 0) + 1) * sampling_period 42 | 43 | plt.plot(time, sta) 44 | plt.xlabel('Time (ms)') 45 | plt.ylabel('Stimulus') 46 | plt.title('Spike-Triggered Average') 47 | 48 | plt.show() 49 | -------------------------------------------------------------------------------- /Week-2/Quiz/Programming/shared_codes/README.md: -------------------------------------------------------------------------------- 1 | These are incomplete codes that were shared in the quiz. I've edited only few lines. 2 | -------------------------------------------------------------------------------- /Week-2/Quiz/Programming/shared_codes/matlab/README.md: -------------------------------------------------------------------------------- 1 |
2 | Solutions can be found ---> here! 3 |
4 | -------------------------------------------------------------------------------- /Week-2/Quiz/Programming/shared_codes/matlab/compute_sta.m: -------------------------------------------------------------------------------- 1 | % function [ sta ] = compute_sta( stim, rho, num_timesteps ) 2 | 3 | function [ sta ] = compute_sta( stim, rho, num_timesteps ) 4 | 5 | 6 | 7 | 8 | %COMPUTE_STA Calculates the spike-triggered average for a neuron that 9 | % is driven by a stimulus defined in stim. The spike- 10 | % triggered average is computed over num_timesteps timesteps. 11 | sta = zeros(num_timesteps, 1); 12 | 13 | % This command finds the indices of all of the spikes that occur 14 | % after 300 ms into the recording. 15 | spike_times = find(rho(num_timesteps+1:end)) + num_timesteps; 16 | 17 | % Fill in this value. Note that you should not count spikes that occur 18 | % before 300 ms into the recording. 19 | 20 | num_spikes = size(spike_times); 21 | 22 | % Compute the spike-triggered average of the spikes found using the 23 | % find command. To do this, compute the average of all of the vectors 24 | % starting 300 ms (exclusive) before a spike and ending at the time of 25 | % the event (inclusive). Each of these vectors defines a list of 26 | % samples that is contained within a window of 300 ms before the each 27 | % spike. The average of these vectors should be completed in an 28 | % element-wise manner. 29 | % 30 | % Your code goes here. 
31 | -------------------------------------------------------------------------------- /Week-2/Quiz/Programming/shared_codes/matlab/quiz2.m: -------------------------------------------------------------------------------- 1 | close all; 2 | %clear all; 3 | clc; 4 | load('c1p8.mat'); 5 | 6 | % Fill in these values 7 | sampling_period = 2; % in ms 8 | num_timesteps = 150; 9 | 10 | sta = compute_sta(stim, rho, num_timesteps); 11 | 12 | time = -sampling_period*(num_timesteps-1):sampling_period:0; % in ms 13 | 14 | figure(1); 15 | plot(time, sta); 16 | xlabel('Time (ms)'); 17 | ylabel('Stimulus'); 18 | title('Spike-Triggered Average'); 19 | -------------------------------------------------------------------------------- /Week-2/Quiz/Programming/solutions/README.md: -------------------------------------------------------------------------------- 1 | Either the MATLAB version or the Python version alone is enough to compute the solution...
2 | For now, I have generated solutions in MATLAB.

3 | I'll be generating the solutions for Python 3.4.3.2 later on...

4 | Hint: the MATLAB and Python versions are essentially the same...
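Since the two versions mirror each other, the promised Python port of `compute_sta` can be sketched as follows. This is my own sketch, not the instructors' file; it assumes `stim` and `rho` are 1-D NumPy arrays on the same 2 ms time base, so 150 samples span 300 ms:

```python
import numpy as np

def compute_sta(stim, rho, num_timesteps):
    """Spike-triggered average over num_timesteps samples.

    stim: 1-D stimulus array; rho: 0/1 spike train sampled at the
    same rate as the stimulus.
    """
    sta = np.zeros(num_timesteps)
    # Spikes in the first num_timesteps samples are skipped so that a
    # full stimulus window precedes every spike we average over.
    spike_times = np.nonzero(rho[num_timesteps:])[0] + num_timesteps
    num_spikes = len(spike_times)
    # Average the stimulus windows ending at each spike (inclusive).
    for t in spike_times:
        sta += stim[t - num_timesteps + 1 : t + 1]
    return sta / num_spikes
```

This is the same loop as the MATLAB solution, with `find` replaced by `np.nonzero` and MATLAB's 1-based inclusive slicing translated to Python's 0-based half-open slicing.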
5 | Only a few additional lines need to be added to the shared code provided by the instructors. 6 | -------------------------------------------------------------------------------- /Week-2/Quiz/Programming/solutions/mathlab/compute_sta.m: -------------------------------------------------------------------------------- 1 | function [ sta ] = compute_sta( stim, rho, num_timesteps ) 2 | 3 | %COMPUTE_STA Calculates the spike-triggered average for a neuron that 4 | % is driven by a stimulus defined in stim. The spike- 5 | % triggered average is computed over num_timesteps timesteps. 6 | sta = zeros(num_timesteps, 1); 7 | 8 | % This command finds the indices of all of the spikes that occur 9 | % after 300 ms into the recording. 10 | spike_times = find(rho(num_timesteps+1:end)) + num_timesteps; 11 | 12 | % Filling in this value. Note that you should not count spikes that occur 13 | % before 300 ms into the recording. 14 | 15 | num_spikes = size(spike_times,1); 16 | 17 | % Compute the spike-triggered average of the spikes found using the 18 | % find command. To do this, compute the average of all of the vectors 19 | % starting 300 ms (exclusive) before a spike and ending at the time of 20 | % the event (inclusive). Each of these vectors defines a list of 21 | % samples that is contained within a window of 300 ms before each 22 | % spike. The average of these vectors should be computed in an 23 | % element-wise manner. 24 | 25 | for i=1:num_spikes 26 | window=stim(spike_times(i)-num_timesteps+1:spike_times(i)); 27 | sta=sta+window; 28 | end 29 | 30 | sta=sta/num_spikes; 31 | end 32 | -------------------------------------------------------------------------------- /Week-2/Quiz/Programming/solutions/mathlab/output.md: -------------------------------------------------------------------------------- 1 | ``` 2 | Generated Output while executing... 
3 | ``` 4 | ![](http://geekresearchlab.net/coursera/neuro/spike.jpg) 5 | -------------------------------------------------------------------------------- /Week-2/Quiz/Programming/solutions/mathlab/quiz2.m: -------------------------------------------------------------------------------- 1 | close all; 2 | %clear all; 3 | clc; 4 | load('c1p8.mat'); 5 | 6 | sampling_period = 2; % in ms 7 | num_timesteps = 150; 8 | 9 | sta = compute_sta(stim, rho, num_timesteps); 10 | 11 | time = -sampling_period*(num_timesteps-1):sampling_period:0; %ms 12 | 13 | figure(1); 14 | plot(time, sta); 15 | xlabel('Time (ms)'); 16 | ylabel('Stimulus'); 17 | title('Spike-Triggered Average'); 18 | -------------------------------------------------------------------------------- /Week-2/Quiz/README.md: -------------------------------------------------------------------------------- 1 | No. of Attempts: 1/10
2 | Score: 9/9 3 | -------------------------------------------------------------------------------- /Week-2/Quiz/Theory/1.md: -------------------------------------------------------------------------------- 1 | Question: 2 | ``` 3 | What is the definition of a spike-triggered average for a neuron? 4 | 5 | (i) The set of stimuli preceding a spike, each averaged over time. 6 | (ii) The stimuli preceding a spike, averaged over all stimuli that elicited a spike. 7 | (iii) None of these. 8 | (iv) The average time between spikes in a recording. 9 | (v) The set of all stimuli that elicit a spike. 10 | ``` 11 | Solution: 12 | ``` 13 | The STA is computed as an average over the spike-triggered ensemble. 14 | 15 | For example, if each stimulus in the spike-triggered ensemble is a 100 ms time-series, 16 | then the STA will also be a 100 ms time-series. 17 | 18 | So, the answer is (ii): the stimuli preceding a spike, averaged over all stimuli that elicited a spike. 19 | ``` 20 | For details,
21 | Refer STA-Tutorials 22 | -------------------------------------------------------------------------------- /Week-2/Quiz/Theory/2.md: -------------------------------------------------------------------------------- 1 | Question: 2 | ``` 3 | What is the nature of this neuron? That is, what mathematical operation of the stimulus does it compute? 4 | 5 | (i) Differentiation. 6 | (ii) Running average/sum. 7 | (iii) No response. 8 | (iv) Leaky integration. 9 | ``` 10 | Solution: 11 |

12 | "This neuron" refers to the one whose spike-triggered average was computed in the programming question.
13 | The STA acts as a temporal filter whose shape corresponds to leaky integration, so the answer is (iv): leaky integration.
14 |
15 | Refer notes-3 16 | -------------------------------------------------------------------------------- /Week-2/Quiz/Theory/3.md: -------------------------------------------------------------------------------- 1 | Question: 2 | ``` 3 | Which of the following stimuli would you expect this neuron to respond most strongly to? 4 | You may assume that all non-zero values of the stimulus have the same magnitude. 5 | That is, assume that all positive stimuli have a value of c and all negative stimuli have a value of −c where c>0. 6 | 7 | (i) No difference. 8 | (ii) A constant positive value. 9 | (iii) A positive value followed by a negative value. 10 | (iv) A constant negative value. 11 | (v) A negative value followed by a positive value. 12 | ``` 13 | Solution: 14 | ``` 15 | (ii) A constant positive value. 16 | ``` 17 | Refer notes-3. 18 | -------------------------------------------------------------------------------- /Week-2/Quiz/Theory/4.md: -------------------------------------------------------------------------------- 1 | Question: 2 | ``` 3 | Suppose we had reason to suspect that this neuron responded to two modes (features) of the stimulus. 4 | Which of the following methods is most likely to help us determine those two modes? 5 | 6 | (i) Principal component analysis/covariance analysis 7 | (ii) Dividing the spikes into two disjoint sets and computing the spike-triggered average for each of those sets independently. 8 | (iii) Computing the spike-triggered average to get the first mode and then subtracting it from the stimulus in each time window before a spike and then computing the spike-triggered average for the resulting signal to get the second mode. 9 | (iv) Computing the spike-triggered average normally. 
10 | ``` 11 | Solution: 12 | (i) Principal component analysis/covariance analysis. Analyzing the covariance of the spike-triggered stimuli can recover multiple relevant modes, whereas the spike-triggered average yields only a single one. -------------------------------------------------------------------------------- /Week-2/Quiz/Theory/5.md: -------------------------------------------------------------------------------- 1 | Question: 2 | ``` 3 | Which of the following is not an example of a linear filtering system? 4 | 5 | Let x(t) denote the input signal and y(t) denote the output signal. 6 | 7 | (i) y(t)=3x(t)−5x(t−τ), where τ is positive. 8 | (ii) y(t)=cos[x(t−θ)] 9 | (iii) y(t)=∫_0^∞ e^(−τ) x(t−τ) dτ 10 | (iv) y(t)=∑_(n=0)^∞ a^n x(t−nτ), where a is between 0 and 1, and τ is positive. 11 | ``` 12 | Solution: 13 | ``` 14 | (ii). The cosine makes the output a nonlinear function of the input, so scaling and superposition fail; (i), (iii) and (iv) are all weighted sums or integrals of delayed copies of the input, which is exactly what a linear filter is. 15 | ``` 16 | -------------------------------------------------------------------------------- /Week-2/notes/1 - Overview.md: -------------------------------------------------------------------------------- 1 | ``` 2 | Overview: 3 | ``` 4 | ``` 5 | What will be discussed? 6 | -- Techniques for recording from the brain. 7 | -- Tools for discovering how the brain represents information. 8 | -- Models that express our understanding of this representation. 9 | -- Some methods for inferring what the brain is doing based upon its activity (Week-3). 10 | -- Using Information Theory to quantify neural representation (Week-4). 11 | -- The Bio-Physical basis of how the brain processes inputs and performs complex computations (Week-5). 12 | ``` 13 | -------------------------------------------------------------------------------- /Week-2/notes/2- What is the Neural code?.md: -------------------------------------------------------------------------------- 1 | ``` 2 | Recording from the brain: 3 | Examples: 4 | 1. functional Magnetic Resonance Imaging (fMRI) 5 | fMRI helps in approximately locating the regions of neural activity. 6 | The recorded responses are slow. 7 | 2. 
Electroencephalography (EEG) 8 | Here, the recorded responses are fast, 9 | because EEG directly captures the electrical activity of neural circuits in the brain. 10 | ``` 11 | ``` 12 | Question: 13 | fMRI and EEG are useful because they can noninvasively record neural activity in awake and behaving humans. 14 | What is a limitation of these techniques? 15 | Answer: 16 | Neither of them can record the activity of individual neurons. 17 | Explanation: 18 | These techniques are useful because they are easy to perform on all sorts of test subjects non-invasively as both kinds of signals can be read through the skull and scalp. 19 | Also, all organisms with brains, including humans, produce these kinds of signals, which is useful for researchers in that they can compare similar activity in anatomically homologous regions of the brain across individuals of the same species. 20 | However, as we are not recording directly from single neurons, the signals are averages of the approximate activity of large groups of neurons and so the exact timing and location of responses can be difficult to discern. 21 | ``` 22 | ``` 23 | Reading out the neural code -- Examples: 24 | (i) Electrode arrays 25 | (ii) Calcium imaging 26 | ``` 27 | ``` 28 | Looking inside single cells to see how the signals are being generated:- 29 | ----- 30 | | =======> electrode 31 | ----- 32 | | | | =====> Micropipette 33 | | | | 34 | | | 35 | -------- 36 | | | ======> Sodium (Na) ion channel 37 | |------- ======> cell membrane 38 | ``` 39 | ``` 40 | Question: 41 | Performing the patch clamp technique is difficult and requires quite a bit of finesse. 42 | Can you speculate why a researcher would want to record the internal activity of a single neuron instead of merely recording external signals? 43 | 44 | Yes 45 | 46 | Explanation: 47 | Action potentials (and other activity) recorded internally have higher recorded amplitudes as a result of the proximity to the changes in voltage. 
Activity recorded externally is of lesser amplitude and requires much amplification because voltage changes decay quickly over short distances. 48 | Also, when patched onto a small portion of the plasma membrane of a neuron, a researcher has the ability to discern the properties of single ion channels, as seen in the diagram. 49 | Additionally, some internal recording techniques allow for the injection of certain chemicals and ions to which the researcher can record a change in internal activity of the patched cell. 50 | ``` 51 | Retina:-

52 | ![](http://geekresearchlab.net/coursera/neuro/retina-1.jpg)

53 | ![](http://geekresearchlab.net/coursera/neuro/retina-2.jpg)
54 | ``` 55 | Question: 56 | In a raster plot such as the one just shown, 57 | in which a short experiment is repeated many times, 58 | what does a vertical structuring of the dots at one point in time indicate? 59 | Answer: 60 | The cell fires very consistently at that time (with respect to each iteration of the stimulus). 61 | Explanation: 62 | A vertical bar in the raster plot indicates a strong response, meaning that one cell fires very reliably at a particular time during the stimulus presentation, and suggesting that the cell is responding to the particular feature of the stimulus that occurs at that time. Weaker responses are represented by thin red bars (where the cell fires during some iterations but not others). 63 | As such, a weak response indicates that there is something about the feature (orientation of a bar of light in the video for example) that the cell likes, but is not the feature it responds to best (the orientation is off by a few degrees from what the cell prefers, for example). 64 | ``` 65 | Example -- while watching a movie:

66 | ![](http://geekresearchlab.net/coursera/neuro/retina-3.jpg)
67 | ``` 68 | Encoding and Decoding: 69 | -- Encoding 70 | How does a stimulus cause a pattern of responses? 71 | Answer: Building quasi-mechanistic models. 72 | P(response | stimulus) 73 | -- Decoding 74 | What do these responses tell us about the stimulus? 75 | How can we reconstruct what the brain is doing? 76 | P(stimulus | response) 77 | ``` 78 | Challenges:
79 | What is the response? What is the stimulus?
80 | What is the relationship between them?
81 |
82 | Diagrammatic representation:
83 | ![](http://geekresearchlab.net/coursera/neuro/neural.jpg) 84 |
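The two distributions P(response | stimulus) and P(stimulus | response) are linked by Bayes' rule, which is what makes decoding possible once an encoding model is known. A toy numerical illustration (the stimuli, prior, and response probabilities are made up for this sketch):

```python
import numpy as np

# Hypothetical example: two stimuli, responses are spike counts 0..5.
p_stim = np.array([0.5, 0.5])                  # prior P(stimulus)
# Encoding model P(response | stimulus): one row per stimulus.
p_resp_given_stim = np.array([
    [0.30, 0.35, 0.20, 0.10, 0.04, 0.01],      # stimulus A
    [0.01, 0.04, 0.10, 0.20, 0.35, 0.30],      # stimulus B
])

def decode(r):
    """Posterior P(stimulus | response r) via Bayes' rule."""
    joint = p_stim * p_resp_given_stim[:, r]   # P(stim) * P(r | stim)
    return joint / joint.sum()                 # normalize by P(r)
```

Observing few spikes makes stimulus A far more probable, and many spikes favors B — decoding inverts the encoding model.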
85 | ``` 86 | Question: 87 | If the stimulus in question is an audio recording of someone speaking a certain syllable, 88 | which of the following could be an appropriate choice of stimulus parameter? 89 | Answers: 90 | (i) The average amplitude of the signal. 91 | (ii) The relative amplitude of two frequency components in the signal. 92 | Explanation: 93 | In general there are a multitude of different stimulus parameters that could affect a neuron's response. 94 | Part of designing a good experiment involves picking a natural set of parameters to investigate while holding all others as constant as possible. 95 | With neurons in lower cortical areas, it is often relatively easy to figure out what parameters are important (e.g., the location of a spot of light for a neuron in LGN). 96 | In higher cortical areas, however, it can be quite difficult to understand exactly what parameter of a stimulus a neuron is tuned to. 97 | ``` 98 | ![](http://geekresearchlab.net/coursera/neuro/tuning-curves-1.jpg)

99 | ![](http://geekresearchlab.net/coursera/neuro/tuning-curves-2.jpg)

100 | ![](http://geekresearchlab.net/coursera/neuro/neural-2.jpg)
101 | ``` 102 | How reponse models will be constructed? 103 | This will be continued in the next chapter... 104 | ``` 105 | -------------------------------------------------------------------------------- /Week-2/notes/3-Neural Coding : Simple Models.md: -------------------------------------------------------------------------------- 1 | ``` 2 | Basic coding models: 3 | ``` 4 | ``` 5 | 1. linear response 6 | The response of the time is highly dependent upon the stimulus of the time. 7 | s(t) -> r(t) 8 | ``` 9 | ``` 10 | 2. Temporal filtering 11 | ``` 12 | ![](http://geekresearchlab.net/coursera/neuro/simple-models-1.jpg)
13 | ``` 14 | Question: 15 | Since this is a linear system, 16 | which of the following will always be true? 17 | Answers: 18 | (i) If you scale the input by a constant, the output will be scaled by the same constant. 19 | (ii) The output of a sum of different inputs is equal to the sum of the outputs of each of the individual inputs. 20 | ``` 21 | ``` 22 | Examples of temporal filtering: 23 | 1. Calculating a running average 24 | 2. Calculating a leaky average 25 | ``` 26 | ``` 27 | 3. Spatial filtering 28 | Hint: apply the same filtering idea over space instead of time 29 | ``` 30 | ![](http://geekresearchlab.net/coursera/neuro/simple-models-2.jpg)
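The two temporal-filtering examples above (a running average and a leaky average) are both linear filters — weighted sums of the recent stimulus. A small sketch (the filter length and decay constant are arbitrary choices of mine), which also makes the linearity properties from the question above directly testable:

```python
import numpy as np

def linear_filter_response(stim, f):
    """r(t) = sum_k f[k] * s(t - k): a weighted sum of the recent stimulus."""
    return np.convolve(stim, f, mode="full")[: len(stim)]

n = 10                                    # filter length in samples
running_avg = np.ones(n) / n              # equal weight on the last n samples
leaky_avg = np.exp(-np.arange(n) / 3.0)   # most weight on the most recent samples
leaky_avg /= leaky_avg.sum()

stim = np.random.default_rng(0).normal(size=200)
r = linear_filter_response(stim, leaky_avg)
# Linearity: scaling the input scales the output by the same constant,
# and the response to a sum of inputs is the sum of the responses.
```

Both quiz answers — scaling and superposition — hold exactly for any filter `f` plugged into this function.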
31 | ``` 32 | Spatial filtering and retinal receptive fields: 33 | ``` 34 | [1] WikiBooks
35 | [2] Spatial Filtering
36 |
37 | ![](http://geekresearchlab.net/coursera/neuro/simple-models-3.jpg)
38 | ``` 39 | 4. Spatiotemporal filtering 40 | ``` 41 | ![](http://geekresearchlab.net/coursera/neuro/simple-models-4.jpg) 42 | ``` 43 | Question: 44 | Which of the following inputs might cause a linear system with a positive filter to predict a negative firing rate? 45 | Answer: 46 | An input that slowly varies between a large positive value and a large negative value. 47 | ``` 48 | ![](http://geekresearchlab.net/coursera/neuro/simple-models-5.jpg) 49 |
50 | ``` 51 | How to find the components of this model? 52 | This will be continued in the next chapter... 53 | ``` 54 | -------------------------------------------------------------------------------- /Week-2/notes/4 - Neural coding : Feature Selection.md: -------------------------------------------------------------------------------- 1 | ``` 2 | How to find the components of the model? 3 | 4 | As discussed earlier, we model P(response | stimulus); relating this to the movie-watching example and how the retina sees it, 5 | we face a "dimensionality" problem, especially with frames made up of many pixels. 6 | 7 | So, we need to sample the responses of the system to stimuli. 8 | From these samples, we can characterize the input features that trigger the responses. 9 | 10 | P(response | stimulus) -> P(response | s1) 11 | where s1 -> stimulus 12 | 13 | ``` 14 | ``` 15 | Dimensionality reduction: 16 | Let's start out with a very high-dimensional description... 17 | For example: an image or a time-varying waveform. 18 | From that description, just pick out a small set of relevant dimensions. 19 | ``` 20 | ``` 21 | Let s be the stimulus and the plot points be s1, s2,... 22 | Let t be the time and the plot points be t1, t2,... 23 | ``` 24 | ![](http://geekresearchlab.net/coursera/neuro/neural-feature-1.jpg) 25 |
26 | ``` 27 | Question: 28 | If we discretize a stimulus waveform in time, 29 | we can represent it as a vector in some vector space. 30 | What is the dimensionality of this vector space? 31 | Answer: 32 | The number of points used in the discretization. 33 | Explanation: 34 | When we represent a discretized stimulus as a vector, the first axis corresponds to the value of the stimulus at the first time point, the second axis to the value of the stimulus at the second time point, and so on. 35 | Thus, the dimensionality of the vector space (and all the vectors within it) is the number of time points used to discretize the stimulus. Note that we often label the axes by their corresponding times, even though the elements of the stimulus vector have the same units as the y-axis of the original stimulus waveform. 36 | ``` 37 | ``` 38 | Determining the right stimulus to use:- 39 | As sampling the reponses over variety of stimuli upon featuring the input that triggers, 40 | P(reponse | stimulus) -> P(response | s1,s2,s3... sn) 41 | The cool useful method that can be used is "Gaussian white noise". 42 | ``` 43 | ![](http://geekresearchlab.net/coursera/neuro/neural-feature-2.jpg) 44 |
45 | ``` 46 | Question: 47 | When we represent Gaussian distributions using the classic bell-curve shape, 48 | what do the axes represent? 49 | Answer: 50 | The x-axis is the stimulus parameter and the y-axis is the probability (density) of the stimulus parameter. 51 | Explanation: 52 | When we plot a one-dimensional distribution the x-axis represents some stimulus parameter (such as the projection of the stimulus onto a feature), and the y-axis represents the probability density of sampling a stimulus with that parameter value from the distribution. 53 | If we were interested in the probability density over two features, we would let the x- and y-axes be the projections of the stimulus onto those two different features and the z-axis be the probability. 54 | ``` 55 | ![](http://geekresearchlab.net/coursera/neuro/neural-feature-3.jpg)

56 | ![](http://geekresearchlab.net/coursera/neuro/neural-feature-4.jpg)

57 | ![](http://geekresearchlab.net/coursera/neuro/neural-feature-5.jpg)

58 | ![](http://geekresearchlab.net/coursera/neuro/neural-feature-6.jpg)
59 | ``` 60 | Now we have found how to find the components of this model. 61 | The components of the model can be found by using 62 | (i) Gaussian white noise and (ii) the Spike-Triggered Average (STA). 63 | ``` 64 |

65 | ![](http://geekresearchlab.net/coursera/neuro/neural-feature-7.jpg) 66 |
67 | ``` 68 | Now, the next question is: "How do we find the input/output of the system" with respect to the feature? 69 | ``` 70 | ![](http://geekresearchlab.net/coursera/neuro/neural-feature-8.jpg) 71 |
72 | ``` 73 | Question: 74 | When we plot P(spike | s1) as a function of s1, why does P(spike) only act as a scaling factor, rather than as something that changes the general shape of the function? 75 | Answer: 76 | P(spike) is not a function of s1. 77 | Explanation: 78 | P(spike) is the prior probability of observing a spike. 79 | This is calculated independent of the stimulus. 80 | ``` 81 | ![](http://geekresearchlab.net/coursera/neuro/neural-feature-9.jpg)

82 | ![](http://geekresearchlab.net/coursera/neuro/neural-feature-10.jpg)

83 | ![](http://geekresearchlab.net/coursera/neuro/neural-feature-11.jpg)
84 | 85 | ``` 86 | Principal Component Analysis (PCA): 87 | ``` 88 | ![](http://geekresearchlab.net/coursera/neuro/neural-feature-12.jpg) 89 |
90 | ``` 91 | Let x, y, z be the axes, with points plotted in three-dimensional space. 92 | Suppose the data actually lies in a two-dimensional plane. 93 | If we run PCA, then we'll find two principal components. 94 | These components correspond to the set of vectors that span the two-dimensional plane. 95 | 96 | But what if we had 100 dimensions? We can't plot every coordinate against every other. 97 | 98 | For those of you with a linear algebra background, PCA gives a basis that represents the data 99 | using far fewer dimensions than the original representation. 100 | Hence, we get compression, and the basis is well matched to that particular dataset. 101 | ``` 102 | ![](http://geekresearchlab.net/coursera/neuro/neural-feature-13.jpg)
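The plane-in-3-D example above can be checked numerically: PCA computed from the eigendecomposition of the covariance matrix finds exactly two directions with non-zero variance. The spanning vectors are my own made-up example:

```python
import numpy as np

rng = np.random.default_rng(1)
# 200 points that live in a 2-D plane embedded in 3-D space.
basis = np.array([[1.0, 0.0, 1.0],
                  [0.0, 1.0, -1.0]])    # two vectors spanning the plane
X = rng.normal(size=(200, 2)) @ basis   # data matrix, shape (200, 3)

Xc = X - X.mean(axis=0)                 # center the data
cov = Xc.T @ Xc / (len(Xc) - 1)         # 3x3 sample covariance
eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
# Only two eigenvalues are non-zero: PCA has recovered the plane, and
# the two corresponding eigenvectors are the principal components.
```

With 100-dimensional data the code is unchanged — the eigenvalue spectrum tells you how many dimensions actually carry variance.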

103 | ![](http://geekresearchlab.net/coursera/neuro/neural-feature-14.jpg)
104 | ``` 105 | Question: 106 | Principal component analysis (PCA) gives us a method to: 107 | Answers: 108 | (i) Find a representation of our data which has lower dimensionality, giving us a computationally easier problem to work with. 109 | (ii) Find the vectors along which the variation of our data is maximal in our feature space. 110 | Explanation: 111 | The spike-triggered average gives us a simple view of the stimuli that lead up to a spike, but because of its simplicity it cannot capture some of the interesting dynamics that can occur. 112 | In this case, if a cell responds to both positive/negative changes and negative/positive changes, the spike-triggering stimuli will average to zero, making the spike-triggered average look like a flat line. 113 | That does not give us much useful description! 114 | Principal component analysis can help us pull out the right number of dimensions we need to describe the stimulus features the neuron is looking for. 115 | ``` 116 | ![](http://geekresearchlab.net/coursera/neuro/neural-feature-15.jpg) 117 | -------------------------------------------------------------------------------- /Week-2/notes/5 - Neural Coding : Variability.md: -------------------------------------------------------------------------------- 1 | ![](http://geekresearchlab.net/coursera/neuro/v-1.jpg) 2 | ``` 3 | When have you found a good feature? 4 | (i) When the input/output curve over your variable is interesting. 5 | (ii) How to quantify interesting? 6 | ``` 7 | ![](http://geekresearchlab.net/coursera/neuro/v-2-1.jpg)

8 | ![](http://geekresearchlab.net/coursera/neuro/v-2-2.jpg)

9 | ![](http://geekresearchlab.net/coursera/neuro/v-3.jpg)

10 | ![](http://geekresearchlab.net/coursera/neuro/v-4.jpg)

11 | ``` 12 | Question: 13 | In summary, some advantages of maximally-informative dimensions are (check all that apply): 14 | Answers: 15 | (i) It gives us a way of seeking filters that maximize the discriminability of the spike-conditioned distribution and the prior. 16 | (ii) It does not require a specific structure for the distributions, such as Gaussians. 17 | (iii) You sound super-smart when you mention it at a party. 18 | ``` 19 |
20 | ![](http://geekresearchlab.net/coursera/neuro/v-5.jpg)

21 | ![](http://geekresearchlab.net/coursera/neuro/v-6.jpg)
22 | ``` 23 | Example 1: Bernoulli trials 24 | ``` 25 | ``` 26 | Example 2: Binomial spiking 27 | ``` 28 | ![](http://geekresearchlab.net/coursera/neuro/v-7.jpg)
29 | ``` 30 | Example 3: Poisson spiking 31 | ``` 32 | ![](http://geekresearchlab.net/coursera/neuro/v-8.jpg)
33 | ``` 34 | Question: 35 | Suppose that while a stimulus is present, a neuron's mean firing rate is r=4 spikes/second. 36 | If this neuron's spiking is characterized by Poisson spiking, then the probability that the neuron fires k spikes in T seconds is given by: p(k)=((rT)^k e^−rT)/k! 37 | What is the probability that when this stimulus is shown for one second the neuron does not fire any spikes? 38 | Answer: 39 | e^-4 40 | Explanation: 41 | The probability that the neuron fires no spikes in one second is given by p(0), 42 | where we set T=1 second. 43 | Thus, rT=4 and 44 | p(0)=(4)^0 e^−4/0!=e^−4 45 | ``` 46 |
47 | ![](http://geekresearchlab.net/coursera/neuro/v-9.jpg)

48 | ![](http://geekresearchlab.net/coursera/neuro/v-10.jpg)
49 | ``` 50 | Question: 51 | Poisson models are accurate descriptions of some neurons but poor descriptions of others. 52 | Which of the following neurons is least likely to be characterized by Poisson spiking? 53 | Answer: 54 | A neuron that fires many hundreds of times per second. 55 | Explanation: 56 | Poisson spiking assumes that each spike time is independent of all the others. 57 | However, in real neurons there exists a refractory period (usually on the order of a millisecond or so) that prevents the cell from spiking immediately after a previous spike has just occurred. 58 | The more times a cell spikes per second, the larger the role of this effect. 59 | There can also be more complex effects that render a neuron's spiking non-Poisson. 60 | ``` 61 | ![](http://geekresearchlab.net/coursera/neuro/v-11.jpg)
62 | ``` 63 | Question: 64 | Which of the following additions to our spike-generation model would improve its accuracy? 65 | Answers: 66 | (i) Taking into account the history of the cell's own firing. 67 | (ii) Taking into account the cell's interaction with other cells. 68 | Explanation: 69 | In the real world, the activity of a neuron is often affected by its own spiking history as well as the activity of the neurons that it is connected to. 70 | We will now examine models that try account for these effects. 71 | ``` 72 | ![](http://geekresearchlab.net/coursera/neuro/v-12.jpg)

73 | ![](http://geekresearchlab.net/coursera/neuro/v-13.jpg)

74 | ![](http://geekresearchlab.net/coursera/neuro/v-14.jpg)
75 | 76 | ``` 77 | The sweet lecturer says that i deserve a puppy for watching this week videos... =P 78 | She is right... the theory was too deep and the math problems related to it were challenging.. 79 | Next week is on decoding... 80 | ``` 81 | -------------------------------------------------------------------------------- /Week-2/shared/resources.md: -------------------------------------------------------------------------------- 1 | [1] Lecture notes
2 | [2] Vector basics
3 | [3] STA Tutorial
4 | -------------------------------------------------------------------------------- /Week-3/notes/notes/1 - Neural Decoding and Signal Detection Theory.md: -------------------------------------------------------------------------------- 1 | ``` 2 | How well can we learn what the stimulus is by looking at the neural responses? 3 | This is "Decoding". 4 | ``` 5 | ``` 6 | Making a decision: 7 | ``` 8 | ![](http://geekresearchlab.net/coursera/neuro/neuro-decision-1.jpg) 9 | ``` 10 | From the diagram, 11 | if a monkey is looking at these dot patterns, 12 | then as the dot patterns move in one direction, 13 | the monkey's eyes move in the same direction. 14 | 15 | The challenging thing is predicting the direction of the dot patterns. 16 | Sometimes, we can't say or predict which way they are going. 17 | 18 | When each dot moves randomly in its own direction, that's 0% coherence. 19 | At the other extreme, when the dots all move together, that's 100% coherence. 20 | ``` 21 | ``` 22 | Predicting from neural activity: 23 | ``` 24 | ![](http://geekresearchlab.net/coursera/neuro/neuro-decision-2.jpg) 25 | ``` 26 | The experiment counted spikes in the monkey's brain in order to predict the choice from its neural activity. 27 | From the diagram, 28 | distribution plotting:- 29 | the "number of trials" is plotted against the "number of spikes"; the dark black portion represents the "upward choices" 30 | and the checked portion represents the "downward choices". 31 | 32 | Question: 33 | Can you speculate what might happen to these 2 distributions as the coherence decreases? 34 | Answer: 35 | They move towards each other. 36 | Explanation: 37 | As the number of dots that move together (coherence) decreases the ability of the monkey to distinguish the direction of the group as a whole decreases as well, causing his decisions to be no better than a guess. 
38 | 39 | ``` 40 | ![](http://geekresearchlab.net/coursera/neuro/neuro-decision-3.jpg) 41 | ``` 42 | Now the stimulus has changed so that there is less visual information for discriminating the motion, here from left to right. 43 | The firing rates are similar to the triggered responses. 44 | ``` 45 | ![](http://geekresearchlab.net/coursera/neuro/neuro-decision-4.jpg) 46 | ``` 47 | For this diagram, where the coherence is almost 0%, 48 | left-to-right motion is hard to discriminate and the two distributions overlap except for small regions. 49 | ``` 50 | ``` 51 | Now the question is: 52 | How should one decode the firing rate in order to get the best guess about whether the stimulus is moving upward or downward? 53 | ``` 54 | ``` 55 | The above can be answered from the "behavioral performance", which does the decoding. 56 | ``` 57 | ![](http://geekresearchlab.net/coursera/neuro/neuro-decision-5.jpg) 58 | ``` 59 | For determining the decoding areas of the behavioral performance, we will use "signal detection theory". 60 | ``` 61 | ![](http://geekresearchlab.net/coursera/neuro/neuro-decision-6.jpg) 62 | ``` 63 | Here, r is the number of spikes in a single trial. 64 | The probability of responses when the motion is "upward" is shown as the red curve. 65 | The probability of responses when the motion is "downward" is shown as the blue curve. 66 | The upward case is represented as p(r|+) 67 | The downward case is represented as p(r|-) 68 | ``` 69 | ``` 70 | Question: 71 | Where would you put the threshold value? 72 | Answer: 73 | At the midpoint between the 2 curves, where their probabilities are equal. 74 | ``` 75 | Explanation to the answer:
76 | ![](http://geekresearchlab.net/coursera/neuro/neuro-decision-7.jpg)
77 | ![](http://geekresearchlab.net/coursera/neuro/neuro-decision-8.jpg)
78 | ``` 79 | Question: 80 | Suppose that there is a very large penalty for false alarms, 81 | i.e., if you decide that the signal is + when it is in fact -, 82 | you pay a very high price (but not necessarily the other way around: there is no penalty for deciding - when the signal is +). 83 | Which way would you move the threshold z in order to decrease the chances of having to pay this penalty? 84 | Answer: 85 | To the right. 86 | Explanation: 87 | Pay attention to the explanation given in the last answer. 88 | ``` 89 | ![](http://geekresearchlab.net/coursera/neuro/neuro-decision-9.jpg)
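The threshold logic above can be sketched numerically. Assuming Gaussian response distributions for the two stimuli (the means and standard deviation here are made up for illustration), the maximum-likelihood rule reports "+" whenever p(r|+) exceeds p(r|-), and a penalty on false alarms shifts the threshold z to the right:

```python
import numpy as np

def gaussian_pdf(r, mu, sigma):
    return np.exp(-((r - mu) ** 2) / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

mu_minus, mu_plus, sigma = 10.0, 20.0, 4.0   # hypothetical spike-count statistics

def decide(r, penalty=1.0):
    """Report '+' iff p(r|+) > penalty * p(r|-).

    penalty = 1 is the maximum-likelihood rule: with equal-variance
    Gaussians its threshold z sits at the midpoint of the two means.
    penalty > 1 punishes false alarms, which is equivalent to moving
    z to the right.
    """
    return gaussian_pdf(r, mu_plus, sigma) > penalty * gaussian_pdf(r, mu_minus, sigma)
```

With these numbers the midpoint threshold is r = 15; raising the penalty pushes the effective threshold well above it, so fewer "-" trials are mistakenly called "+".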

90 | ![](http://geekresearchlab.net/coursera/neuro/neuro-decision-10.jpg)

91 | ![](http://geekresearchlab.net/coursera/neuro/neuro-decision-11.jpg) 92 | ``` 93 | where s is the likelihood. 94 | ``` 95 | ![](http://geekresearchlab.net/coursera/neuro/neuro-decision-12.jpg)

96 | ![](http://geekresearchlab.net/coursera/neuro/neuro-decision-13.jpg)

97 | ![](http://geekresearchlab.net/coursera/neuro/neuro-decision-14.jpg)

98 | ``` 99 | The diagrams below show observations from retinal cells. 100 | ``` 101 | ![](http://geekresearchlab.net/coursera/neuro/neuro-decision-15.jpg)

102 | ![](http://geekresearchlab.net/coursera/neuro/neuro-decision-16.jpg) 103 | ``` 104 | Now, the distributions look more like this: 105 | ``` 106 | ![](http://geekresearchlab.net/coursera/neuro/neuro-decision-17.jpg) 107 |

108 | ![](http://geekresearchlab.net/coursera/neuro/neuro-decision-18.jpg)

109 | ``` 110 | To learn more about retinal cells, see my notes on the guest lecture by Dr. Fred Rieke: 111 | https://github.com/ashumeow/Computational-NeuroScience/blob/master/Week-3/notes/notes/guest-lecture.md 112 | ``` 113 | ![](http://geekresearchlab.net/coursera/neuro/neuro-decision-19.jpg) 114 |

115 | ![](http://geekresearchlab.net/coursera/neuro/neuro-decision-20.jpg) 116 | -------------------------------------------------------------------------------- /Week-3/notes/notes/2 - Population Coding and Bayesian Estimation.md: -------------------------------------------------------------------------------- 1 | ![](http://geekresearchlab.net/coursera/neuro/neuro-decision-21.jpg) 2 | ``` 3 | Here, we will discuss decoding from many neurons through population coding. 4 | ``` 5 | ![](http://geekresearchlab.net/coursera/neuro/neuro-decision-22.jpg) 6 | 7 | ![](http://geekresearchlab.net/coursera/neuro/neuro-decision-23.jpg) 8 | ``` 9 | The a-th neuron has a preferred direction, the direction in which its response is maximal, denoted by the angle s_a or the preferred-direction vector c_a. 10 | ``` 11 | ![](http://geekresearchlab.net/coursera/neuro/neuro-decision-24.jpg)

12 | ![](http://geekresearchlab.net/coursera/neuro/neuro-decision-25.jpg)

13 | ![](http://geekresearchlab.net/coursera/neuro/neuro-decision-26.jpg) 14 | ``` 15 | Question: 16 | In this equation, 17 | we normalize the contribution of each neuron to the population vector by its maximum firing rate. 18 | Why do we do this? 19 | Answer: 20 | Some neurons have an intrinsically higher firing rate and we want each neuron to contribute to the population vector in a way that is proportional to its relative activation. 21 | Explanation: 22 | When combining the effects of several neurons, we want to take into account that the neurons have intrinsic differences independent of the stimulus. 23 | For instance, if a neuron's maximum firing rate is 10 Hz, and we observe an average of 9 Hz in the presence of a particular stimulus, we should recognize this as a stronger response than if we had observed a 9 Hz average rate for a neuron whose maximum firing rate is 100 Hz. 24 | ``` 25 | ![](http://geekresearchlab.net/coursera/neuro/neuro-decision-27.jpg) 26 | ``` 27 | Population coding also has some limitations. 28 | ``` 29 | ![](http://geekresearchlab.net/coursera/neuro/neuro-decision-28.jpg) 30 | ``` 31 | First, we will focus on the a posteriori distribution and the likelihood function. 32 | ``` 33 | ![](http://geekresearchlab.net/coursera/neuro/neuro-decision-29.jpg)
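The normalized population-vector sum can be sketched as follows. The four preferred directions and the firing rates below are invented for illustration (the real values come from the lecture's cercal-organ example); the formula is the one discussed above, with each neuron contributing (r_a / r_max,a) · c_a.

```python
import math

def population_vector(rates, r_max, preferred):
    # Population vector: sum over neurons of (r_a / r_max_a) * c_a
    x = sum(r / m * c[0] for r, m, c in zip(rates, r_max, preferred))
    y = sum(r / m * c[1] for r, m, c in zip(rates, r_max, preferred))
    return x, y

# Hypothetical 4-neuron setup with preferred directions every 90 degrees:
angles = [45.0, 135.0, 225.0, 315.0]
preferred = [(math.cos(math.radians(a)), math.sin(math.radians(a))) for a in angles]
rates = [40.0, 10.0, 0.0, 10.0]      # strongest response from the 45-degree neuron
r_max = [40.0, 40.0, 40.0, 40.0]     # maximum firing rate of each neuron

x, y = population_vector(rates, r_max, preferred)
direction = math.degrees(math.atan2(y, x)) % 360
print(direction)   # ≈ 45: the decoded stimulus direction
```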

34 | ![](http://geekresearchlab.net/coursera/neuro/neuro-decision-30.jpg)

35 | ![](http://geekresearchlab.net/coursera/neuro/neuro-decision-31.jpg)

36 | ![](http://geekresearchlab.net/coursera/neuro/neuro-decision-32.jpg) 37 | ``` 38 | Question: 39 | What is the standard deviation of a Poisson neuron with an average firing rate of r? 40 | Answer: 41 | √r 42 | Explanation: 43 | The variance of a Poisson distribution is equal to the mean, and the standard deviation is the square root of the variance. 44 | Therefore, if the mean firing rate is r, the variance is equal to r, and the standard deviation is equal to √r. 45 | ``` 46 | ![](http://geekresearchlab.net/coursera/neuro/neuro-decision-33.jpg)
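The mean = variance property can be checked directly from the Poisson probability mass function. The mean of 9 is an arbitrary example value, and the sum is truncated where the remaining probability mass is negligible.

```python
import math

def poisson_pmf(k, lam):
    # P(K = k) for a Poisson distribution with mean lam
    return math.exp(-lam) * lam ** k / math.factorial(k)

lam = 9.0        # example mean spike count
ks = range(60)   # the tail beyond k = 60 is negligible for lam = 9

mean = sum(k * poisson_pmf(k, lam) for k in ks)
var = sum((k - mean) ** 2 * poisson_pmf(k, lam) for k in ks)

# mean ≈ 9, variance ≈ 9, so the standard deviation is sqrt(9) = 3 = √mean
print(mean, var, math.sqrt(var))
```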

47 | ![](http://geekresearchlab.net/coursera/neuro/neuro-decision-34.jpg)
48 | ![](http://geekresearchlab.net/coursera/neuro/neuro-decision-34-1.jpg)

49 | ![](http://geekresearchlab.net/coursera/neuro/neuro-decision-35.jpg)
50 | ![](http://geekresearchlab.net/coursera/neuro/neuro-decision-35-1.jpg)

51 | ![](http://geekresearchlab.net/coursera/neuro/neuro-decision-36.jpg)

52 | ![](http://geekresearchlab.net/coursera/neuro/neuro-decision-37.jpg) 53 | ``` 54 | Question: 55 | If the likelihood distribution is P(A|B), 56 | what is the a posteriori distribution? 57 | Answer: 58 | P(B|A) 59 | Explanation: 60 | Sorry for all the Latin phrases! Mathematicians love that stuff. 61 | Remember that with Bayesian reasoning, we start with an initial belief about a distribution of something; we call that the "prior" distribution. 62 | As we collect new evidence, we can adjust our belief. 63 | Our belief after adjusting for the new evidence is called the "a posteriori" distribution. The likelihood tells us how likely we are to observe our evidence, given the various possible values of the thing we are concerned with, which is something we take into account while estimating the posterior distribution. 64 | Bayes' Rule tells us precisely how we can take that likelihood into account to come up with the posterior. 65 | 66 | In the case of this question, our prior would be P(B). Our evidence is represented by A, and the likelihood of our evidence is P(A|B). 67 | In the Bayes' Rule formula, it is P(B|A) that plays the role of the posterior. 68 | ``` 69 | ``` 70 | So, we can use Bayes' Rule to write the posterior in terms of the likelihood, the prior, and a normalizing sum. 71 | ``` 72 | ![](http://geekresearchlab.net/coursera/neuro/neuro-decision-38.jpg)
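A minimal numeric sketch of that recipe, with made-up numbers: two hypothetical stimulus values with a flat prior, and a likelihood that favors the first. The posterior is the normalized product of prior and likelihood.

```python
def posterior(prior, likelihood):
    # Bayes' Rule over discrete hypotheses:
    # p(s|r) = p(r|s) p(s) / sum over s' of p(r|s') p(s')
    unnorm = [p * l for p, l in zip(prior, likelihood)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

# Two hypothetical stimuli, equally likely a priori; the observed response
# is four times more likely under the first stimulus.
post = posterior([0.5, 0.5], [0.8, 0.2])
print(post)   # ≈ [0.8, 0.2]
```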

73 | ![](http://geekresearchlab.net/coursera/neuro/neuro-decision-39.jpg)

74 | ![](http://geekresearchlab.net/coursera/neuro/neuro-decision-40.jpg) 75 | -------------------------------------------------------------------------------- /Week-3/notes/notes/3 - Reading minds : Stimulus Reconstruction.md: -------------------------------------------------------------------------------- 1 | ![](http://geekresearchlab.net/coursera/neuro/neuro-decision-41.jpg)

2 | ![](http://geekresearchlab.net/coursera/neuro/neuro-decision-42.jpg)

3 | ![](http://geekresearchlab.net/coursera/neuro/neuro-decision-43.jpg)

4 | ![](http://geekresearchlab.net/coursera/neuro/neuro-decision-44.jpg)
5 | ``` 6 | Stimulus Reconstruction 7 | ``` 8 | ``` 9 | Imagine that this is our Spike-Triggered Average (STA). 10 | Every time there is a spike, the stimulus around that spike is measured and averaged into the STA. 11 | ``` 12 | ![](http://geekresearchlab.net/coursera/neuro/neuro-decision-45.jpg) 13 | ``` 14 | As the spike train is measured and the firing rate varies, 15 | the results range from lower to higher firing rates. 16 | ``` 17 | ![](http://geekresearchlab.net/coursera/neuro/neuro-decision-46.jpg) 18 | ``` 19 | This will create issues. 20 | ``` 21 | ``` 22 | The fly has two such motion-sensitive neurons: 23 | one that encodes leftward motion, 24 | and another that encodes rightward motion. 25 | 26 | We can also reconstruct velocity with both positive and negative inputs. 27 | ``` 28 | ![](http://geekresearchlab.net/coursera/neuro/neuro-decision-47.jpg)
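One crude way to picture the reconstruction described above is to drop one copy of a kernel at each spike time and sum the copies. The three-point kernel and the spike times below are invented, and a real decoder would use the optimal (time-shifted) filter from the lectures rather than the raw STA; this is only a sketch of the superposition idea.

```python
def reconstruct(spike_times, kernel, length):
    # Superimpose one copy of the kernel at each spike time and sum
    estimate = [0.0] * length
    for t in spike_times:
        for i, k in enumerate(kernel):
            if 0 <= t + i < length:
                estimate[t + i] += k
    return estimate

sta = [0.5, 1.0, 0.5]              # toy kernel standing in for the measured STA
est = reconstruct([1, 2], sta, 6)  # two nearby spikes -> overlapping kernel copies
print(est)
```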

29 | ``` 30 | Another experiment 31 | ``` 32 | ![](http://geekresearchlab.net/coursera/neuro/neuro-decision-48.jpg)

33 | ``` 34 | Question: 35 | In this experiment, what do r and s represent? 36 | Answer: 37 | the fMRI signal, the video clip 38 | Explanation: 39 | r represents the response, in this case brain activity. 40 | s represents the stimuli that elicited that brain activity. 41 | In this case, the fMRI was used to measure the brain activity, and the video clips were used to elicit it. 42 | How would you figure out the dimensionalities of the vectors r and s? 43 | ``` 44 | ![](http://geekresearchlab.net/coursera/neuro/neuro-decision-49.jpg)
45 | ![](http://geekresearchlab.net/coursera/neuro/neuro-decision-50.jpg)
46 | ![](http://geekresearchlab.net/coursera/neuro/neuro-decision-51.jpg) 47 | -------------------------------------------------------------------------------- /Week-3/notes/notes/guest-lecture.md: -------------------------------------------------------------------------------- 1 | Guest lecture on Retina -- Dr. Fred Rieke 2 | ``` 3 | The retina --- where visual processing begins. 4 | Photoreceptors transduce light, converting it into electrical responses. 5 | The rod photoreceptors are sensitive to dim light. 6 | The cone photoreceptors extend vision to higher light levels. 7 | The signals produced in the rod and cone segments are processed by several layers of cells within the retina before passing along the optic nerve to the brain. 8 | The few incoming photons, absorbed within a pool of several hundreds to thousands of rods, generate electrical signals that are not only reliably detected within the retina, but also sent to the brain reliably enough to support perception. 9 | ``` 10 | ![](http://geekresearchlab.net/coursera/neuro/retina-g-1.jpg) 11 | ``` 12 | So, what we are looking for is 13 | a thresholding non-linearity applied to the rod signals, which rejects the noise coming from the remaining rods. 14 | This is a common situation in nervous systems: many inputs converge on a downstream cell, only a small subset of those inputs carry signal, and the remaining outputs contribute noise. 15 | ``` 16 | ![](http://geekresearchlab.net/coursera/neuro/retina-g-2.jpg) 17 | ``` 18 | Advantages: 19 | ``` 20 | ``` 21 | 1. Access to rod signal and noise properties, 22 | where we can measure the responses of single rods to single photons. 23 | We can measure the noise of the rod responses, and 24 | we can summarize it by constructing distributions that capture the probability that a given rod generates a response of a given amplitude, 25 | which we can plot as probability versus amplitude. 
26 | The black curve shows trials where the rod failed to absorb a photon and generated only noise. 27 | The red curve shows trials where the rod absorbed a photon and generated a single-photon response. 28 | 29 | This gives a theoretical prediction of how to separate the signals from the noise. 30 | ``` 31 | ![](http://geekresearchlab.net/coursera/neuro/retina-g-3.jpg) 32 | ``` 33 | 2. Circuitry 34 | -- The non-linearity is likely located at the synapse between the rods and the rod bipolar cells. 35 | -- Each rod bipolar cell receives multiple inputs, 36 | which are combined in a linear manner. 37 | ``` 38 | ``` 39 | 3. Direct recording 40 | -- The majority of rod single-photon responses were rejected. 41 | ``` 42 | ![](http://geekresearchlab.net/coursera/neuro/retina-g-4.jpg) 43 | ``` 44 | The plots above were somewhat misleading... 45 | ``` 46 | ``` 47 | Here is an improved way of representing the distributions. 48 | ``` 49 | ![](http://geekresearchlab.net/coursera/neuro/retina-g-5.jpg) 50 | ``` 51 | Now, the question is: 52 | what happens at visual threshold (when something like 1 in 1000 rods absorbs a photon)? 53 | The second chart (in the above diagram) shows it. 54 | The maximum amplitude comes from the rods associated with noise and then from those associated with signal. 55 | This helps in predicting the position of the non-linearity. 56 | ``` 57 |
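The benefit of thresholding before pooling can be seen with a deterministic toy example (all numbers invented): one rod out of 1000 carries a single-photon response of about 1.0, while the rest carry small noise. Linear pooling buries the photon signal in summed noise; zeroing sub-threshold rods first preserves it.

```python
def pool(rod_signals, threshold=None):
    # Sum rod responses; if a threshold is given, zero sub-threshold rods first
    if threshold is None:
        return sum(rod_signals)
    return sum(r for r in rod_signals if r >= threshold)

rods = [1.0] + [0.1] * 999   # one photon absorption plus noise in 999 rods

linear = pool(rods)          # ≈ 100.9: the photon signal is swamped by noise
nonlinear = pool(rods, 0.5)  # 1.0: only the signal-carrying rod survives
print(linear, nonlinear)
```

Real rod noise is of course random rather than a constant 0.1; the constant just makes the comparison reproducible.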
58 | ![](http://geekresearchlab.net/coursera/neuro/retina-g-6.jpg) 59 | -------------------------------------------------------------------------------- /Week-3/notes/shared/resources.md: -------------------------------------------------------------------------------- 1 | [1] Lecture notes
2 | [2] Math notes - Basis functions 3 | -------------------------------------------------------------------------------- /Week-3/requirements.txt: -------------------------------------------------------------------------------- 1 | additional background of calculus required 2 | -------------------------------------------------------------------------------- /Week-4/Quiz/Programming/1.md: -------------------------------------------------------------------------------- 1 | Question: 2 | ``` 3 | In the following three questions, 4 | we will explore Poisson neuron models and population coding. 5 | 6 | This exercise is based on a set of artificial "experiments" that we've run on four simulated neurons that emulate the behavior found in the cercal organs of a cricket. 7 | Please note that all the supplied data is synthetic. 8 | Any resemblance to a real cricket is purely coincidental. 9 | 10 | In the first set of experiments, we probed each neuron with a range of air velocity stimuli of uniform intensity and differing direction. 11 | We recorded the firing rate of each of the neurons in response to each of the stimulus values. 12 | Each of these recordings lasted 10 seconds and we repeated this process 100 times for each neuron-stimulus combination. 13 | 14 | We've supplied you with a file containing data for each of the neurons that contains the recorded firing rates (in Hz). 15 | These are named neuron1, neuron2, neuron3, and neuron4. 16 | The stimulus, that is, the direction of the air velocity, is in the vector named stim. 17 | ``` 18 | Download the file here, and save it into your MATLAB/Octave directory. 19 | ``` 20 | To load the data, use the following command: 21 | ``` 22 | ``` 23 | load('tuning.mat') 24 | ``` 25 | ``` 26 | The equivalent data files for Python 2.7 and Python 3.4 are: 27 | ``` 28 | tuning.pickle (Python 2.7) or tuning.pickle (Python 3.4). 
29 | ``` 30 | To load the data in tuning.pickle, make sure you are in the same directory you saved it in and add the following to your script: 31 | ``` 32 | ``` 33 | import pickle 34 | with open('tuning.pickle', 'rb') as f: 35 | data = pickle.load(f) 36 | ``` 37 | ``` 38 | This will load everything into a dict called data, 39 | and you'll be able to access the stim and neuron responses using data['stim'], data['neuron1'], etc. 40 | (In general, data.keys() will show you all the keys available in the dict.) 41 | ``` 42 | ``` 43 | The matrices contain the results of running a set of experiments in which we probed the synthetic neuron with the stimuli in stim. 44 | Each column of a neuron matrix contains the firing rate of that neuron (in Hz) in response to the corresponding stimulus value in stim. 45 | That is, the nth column of neuron1 contains the 100 trials in which we applied the stimulus of value stim(n) to neuron1. 46 | ``` 47 | ``` 48 | Plot, for each of the neurons, the tuning curve: 49 | the mean firing rate of the neuron 50 | as a function of the stimulus. 51 | ``` 52 | ``` 53 | Which of the following functions best describes the tuning curve? 54 | 55 | (i) Linear function. 56 | (ii) Gaussian. 57 | (iii) Unrectified cosine. 58 | (iv) Half-wave rectified cosine. 59 | ``` 60 | Solution: 61 | ``` 62 | (iii) Unrectified cosine 63 | ``` 64 | Explanation: 65 | ``` 66 | After executing the program and generating a graph (link found below), 67 | the graph helps you determine which type of function describes the tuning curve. 68 | 69 | If you're not very familiar with these functions, check some sample graphs of all the choices, 70 | then see which one matches the output graph (link found below). 71 | ``` 72 | Here are the output graphs --> click here
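A sketch of the tuning-curve computation in plain Python. The two-trial, four-stimulus matrix below is a synthetic stand-in for the real data['neuron1'], which would be 100 trials deep; the column layout (one column per stimulus value) matches the description above.

```python
def tuning_curve(trials):
    # Mean firing rate per stimulus value: trials is a list of rows
    # (one row per trial), each row holding the rate for every stimulus
    n = len(trials)
    return [sum(col) / n for col in zip(*trials)]

# Tiny synthetic stand-in (2 trials x 4 stimulus directions):
neuron1 = [[10.0, 20.0, 10.0, 0.0],
           [12.0, 18.0,  8.0, 0.0]]

curve = tuning_curve(neuron1)
print(curve)   # → [11.0, 19.0, 9.0, 0.0]
```

Plotting curve against the stimulus values (e.g. with matplotlib) then gives the tuning curve to compare against the answer choices.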
73 | // TODO -- share the code that has been executed 74 | -------------------------------------------------------------------------------- /Week-4/Quiz/Programming/2.md: -------------------------------------------------------------------------------- 1 | Question: 2 | ``` 3 | Finally, we ran an additional set of experiments in which we exposed each of the neurons to a single stimulus of unknown direction for 10 trials of 10 seconds each. 4 | We have placed the results of this experiment in pop_coding.mat, 5 | ``` 6 | which you may download here. 7 | ``` 8 | You should save the file into your MATLAB/Octave directory and import the data using the following command: 9 | ``` 10 | ``` 11 | load('pop_coding.mat') 12 | ``` 13 | ``` 14 | The equivalent python files are: 15 | ``` 16 | pop_coding.pickle (Python 2.7) and pop_coding.pickle (Python 3.4), 17 | ``` 18 | and can be loaded in the same way as 'tuning.pickle' was loaded in question 11. 19 | ``` 20 | ``` 21 | pop_coding contains four vectors named r1, r2, r3, and r4 that contain the responses 22 | (firing rate in Hz) of the four neurons to this mystery stimulus. 23 | 24 | It also contains four vectors named c1, c2, c3, and c4. 25 | These are the basis vectors corresponding to neuron 1, neuron 2, neuron 3, and neuron 4. 26 | 27 | Decode the neural responses and recover the mystery stimulus vector by computing the population vector for these neurons. 28 | You should use the maximum average firing rate (over any of the stimulus values in 'tuning.mat') for a neuron as the value of rmax for that neuron. 29 | That is, rmax should be the maximum value in the tuning curve for that neuron. 30 | 31 | What is the direction, in degrees, of the population vector? 32 | You should round your answer to the nearest degree. 33 | Your answer should contain the value only (no units!) and should be between 0 and 360 degrees. 
34 | If your calculations give a negative number or a number greater than or equal to 360, convert it to a number in the proper range 35 | (you may use the mod function to do this). 36 | 37 | You may need to convert your resulting vector from Cartesian coordinates to polar coordinates to find the angle. 38 | You may use the atan() function in MATLAB to do this. 39 | 40 | Note that the convention we're using defines 0 degrees to point in the direction of the positive y-axis and 90 degrees to point in the direction of the positive x-axis 41 | (i.e., 0 degrees is north, 90 degrees is east). 42 | ``` 43 | -------------------------------------------------------------------------------- /Week-4/Quiz/Programming/3.md: -------------------------------------------------------------------------------- 1 | Question: 2 | ``` 3 | We have reason to suspect that one of the neurons is not like the others. 4 | Three of the neurons are Poisson neurons 5 | (they are accurately modeled using a Poisson process), 6 | but we believe that the remaining one might not be. 7 | Which of the neurons (if any) is NOT Poisson? 8 | ``` 9 | ``` 10 | Hint: 11 | Think carefully about what it means for a neuron to be Poisson. 12 | You may find it useful to review the last lecture of week 2. 13 | Note that we give you the firing rate of each of the neurons, not the spike count. 14 | You may find it useful to convert the firing rates to spike counts in order to test for "Poisson-ness", 15 | however this is not necessary. 16 | 17 | In order to realize why this might be helpful, 18 | consider the fact that, for a constant a and a random variable X, 19 | E[aX] = aE[X] but Var(aX) = a²Var(X). 20 | What might this imply about the Poisson statistics 21 | (like the Fano factor) 22 | when we convert the spike counts (the raw output of the Poisson spike generator) into a firing rate 23 | (what we gave you)? 24 | ``` 25 | ``` 26 | (i) Neuron 2. 27 | (ii) Neuron 4. 28 | (iii) Neuron 3. 29 | (iv) None. 30 | (v) Neuron 1. 
31 | ``` 32 | Solution:
33 | Check the output graphs that were generated and you will find the solution. 34 | -------------------------------------------------------------------------------- /Week-4/Quiz/Programming/MATLAB/README.md: -------------------------------------------------------------------------------- 1 | // TODO -- Add 2 | -------------------------------------------------------------------------------- /Week-4/Quiz/Programming/Python/README.md: -------------------------------------------------------------------------------- 1 | // TODO -- Add 2 | -------------------------------------------------------------------------------- /Week-4/Quiz/Programming/plotting/README.md: -------------------------------------------------------------------------------- 1 | Used MATLAB for plotting :) 2 | -------------------------------------------------------------------------------- /Week-4/Quiz/Programming/plotting/output.md: -------------------------------------------------------------------------------- 1 | ![](http://geekresearchlab.net/coursera/neuro/x/11.jpg)
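The "Poisson-ness" check behind question 3 can be sketched with the Fano factor (variance over mean of the spike counts, which is approximately 1 for a Poisson process). The counts below are invented purely to show the contrast between Poisson-like and overly regular firing.

```python
def fano(counts):
    # Fano factor = variance / mean; approximately 1 for Poisson spike counts
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / n
    return var / mean

print(fano([5, 15, 10, 13, 7]))    # ≈ 1.4 -- in the Poisson-like range
print(fano([10, 10, 10, 10, 10]))  # 0.0 -- far too regular to be Poisson
```

Per the hint in question 3, rates were obtained by scaling counts by 1/T, so the Fano factor of the given rates is scaled as well.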

2 | ![](http://geekresearchlab.net/coursera/neuro/x/12.jpg)

3 | ![](http://geekresearchlab.net/coursera/neuro/x/13.jpg)

4 | -------------------------------------------------------------------------------- /Week-4/Quiz/README.md: -------------------------------------------------------------------------------- 1 | The quiz is the combination of activities done during Week-3 and Week-4. 2 | ``` 3 | No. of attempts: 2/10 4 | Final Score: 13/13 5 | ``` 6 | // TODO -- add solutions as well as explanations and related links by the end of the course :) 7 | -------------------------------------------------------------------------------- /Week-4/Quiz/Theory/1.md: -------------------------------------------------------------------------------- 1 | Question: 2 | ``` 3 | Likelihood ratio test with asymmetric costs 4 | 5 | Suppose we have a stimulus defined by a single variable called s. 6 | s can take one of two values, which we will call s1 and s2. 7 | You could think of these as lights flashing in the eyes at one of two possible frequencies. 8 | Or perhaps listening to punk rock vs. listening to Dvorak. 9 | 10 | Let's call the firing rate response of a neuron to this stimulus r. 11 | 12 | Suppose that under stimulus s1 the response rate of the neuron can be roughly approximated 13 | with a Gaussian distribution with the following parameters: 14 | μ (mean): 5 15 | σ (standard deviation): 0.5 16 | 17 | And likewise for s2: 18 | μ: 7 19 | σ: 1 20 | 21 | Let's say that both stimuli are equally likely and we are given no other prior information. 22 | 23 | Now let's throw in another twist. 24 | Let's say that we receive a measurement of the neuron's response and want to guess which stimulus was presented, 25 | but that to us, 26 | it is twice as bad to mistakenly think it is s2 as to mistakenly think it is s1. 27 | ``` 28 | ``` 29 | Which of these firing rates would make the best decision threshold for us in determining the value of s given a neuron's firing rate? 30 | ``` 31 | ``` 32 | Hint: 33 | There are several functions available to help you evaluate Gaussian distributions. 
34 | In Octave and in Matlab's stats toolbox you can use the 'normpdf' function. 35 | If you know how to set the problem up, you will be able to try all the answers below to find the one that works best. 36 | 37 | If you decide to challenge yourself to solve this algebraically instead, 38 | you can use the univariate Gaussian PDF, 39 | ``` 40 | given at the top of this wiki. 41 | ``` 42 | ``` 43 | -------------------------------------------------------------------------------- /Week-4/Quiz/Theory/10.md: -------------------------------------------------------------------------------- 1 | Question:

2 | Continued from Question 7: 3 | ``` 4 | What does λ represent? 5 | 6 | (i) The importance of coding efficiency. 7 | (ii) The level of transparency vs. opacity/influence of each piece. 8 | (iii) The pieces that make up the image. 9 | (iv) The difference between the actual image and the representation. 10 | ``` 11 | -------------------------------------------------------------------------------- /Week-4/Quiz/Theory/2.md: -------------------------------------------------------------------------------- 1 | Question: 2 | ``` 3 | Suppose we are diagnosing a very rare illness, 4 | which happens only once in 100 million people on average. 5 | 6 | Luckily, we have a test for this illness but it is not perfectly accurate. 7 | 8 | If somebody has the disease, it will report positive 99% of the time. 9 | If somebody does not have the disease, it will report positive 2% of the time. 10 | 11 | Suppose a patient walks in and tests positive for the disease. 12 | 13 | Using the maximum likelihood (ML) criterion, would we diagnose them positive? 14 | 15 | (i) Yes 16 | (ii) No 17 | ``` 18 | -------------------------------------------------------------------------------- /Week-4/Quiz/Theory/3.md: -------------------------------------------------------------------------------- 1 | Question: 2 |

3 | Continued from Question 2: 4 | ``` 5 | What if we used the maximum a posteriori (MAP) criterion? 6 | 7 | (i) Yes 8 | (ii) No 9 | ``` 10 | -------------------------------------------------------------------------------- /Week-4/Quiz/Theory/4.md: -------------------------------------------------------------------------------- 1 | Question: 2 |

3 | Continued from Question 2: 4 | ``` 5 | Why do we see a difference between the two criteria, if there is one? 6 | 7 | (i) Since ML assumes a Gaussian distribution, unlike MAP, it oversimplifies the world. 8 | (ii) The role of the prior probability is different between the two. 9 | (iii) Unlike MAP, ML assumes the same model for all people. 10 | (iv) There is no difference between the two, because in this case they are equivalent. 11 | ``` 12 | -------------------------------------------------------------------------------- /Week-4/Quiz/Theory/5.md: -------------------------------------------------------------------------------- 1 | Question: 2 | ``` 3 | Suppose that we have a neuron 4 | which, in a given time period, will fire with probability 0.1, 5 | yielding a Bernoulli distribution for the neuron's firing 6 | (denoted by the random variable F = 0 or 1) with P(F = 1) = 0.1. 7 | 8 | Which of these is closest to the entropy H(F) of this distribution 9 | (calculated in bits, i.e., using the base 2 logarithm)? 10 | 11 | (i) 1.999 12 | (ii) 0.1954 13 | (iii) 0.4690 14 | (iv) -0.1954 15 | ``` 16 | -------------------------------------------------------------------------------- /Week-4/Quiz/Theory/6.md: -------------------------------------------------------------------------------- 1 | Question: 2 |

3 | Continued from Question 5: 4 | ``` 5 | Now let's add a stimulus to the picture. 6 | Suppose that we think this neuron's activity is related to a light flashing in the eye. 7 | Let us say that the light is flashing in a given time period with probability 0.10. 8 | Call this stimulus random variable S. 9 | 10 | If there is a flash, the neuron will fire with probability 1/2. 11 | If there is not a flash, the neuron will fire with probability 1/18. 12 | Call this random variable F (whether the neuron fires or not). 13 | 14 | Which of these is closest, in bits (log base 2 units), to the mutual information MI(S,F)? 15 | 16 | (Note: this question requires several calculations.) 17 | 18 | (i) 0.0904 19 | (ii) -0.3786 20 | (iii) 0.8476 21 | (iv) 0.3786 22 | ``` 23 | -------------------------------------------------------------------------------- /Week-4/Quiz/Theory/7.md: -------------------------------------------------------------------------------- 1 | Question: 2 |

3 | ![](http://geekresearchlab.net/coursera/neuro/quizo.jpg) 4 | ``` 5 | This math from lecture 4-3 could potentially be intimidating, but in fact the concept is really simple. 6 | Getting an intuition for it will help with many types of problems. 7 | Let's work out a metaphor to understand it. 8 | 9 | Suppose we want to build a complex image. 10 | We could do that by layering a whole bunch of pieces together (mathematically - summing). 11 | This is like drawing on transparencies with various levels of opacity and putting them on top of each other. 12 | Those familiar with Photoshop or Gimp will recognize that concept. 13 | If we had to build an image in Photoshop with a bicycle on a road, 14 | for instance, perhaps we could have an image of a sky, and one of the road, and one of the bike. 15 | We could "add" these pieces together to make our target image. 16 | 17 | Of course, if our neural system was trying to make visual fields that worked for any sort of input, 18 | we would want more than just roads, skies, and bikes to work with! 19 | One possibility is to have a bunch of generic shapes of various sizes, orientations, and locations within the image. 20 | If we chose the right variety, we could blend/sum these primitive pieces together to make just about any image! 21 | One way to blend them is to let them have varying transparencies/opacities, and to set them on top of each other. 22 | That is what we would call a weighted sum, where the weights are how transparent each piece is. 23 | 24 | Of course, we may not want to have too many possible shapes to use. 25 | As mentioned in the video, the organism likely wants to conserve energy. 26 | That means having as few neurons firing as possible at once. 27 | If we conceptually make a correlation between these shapes and the neurons, 28 | then we can point out we would want to use as few shapes as we could while maintaining an accurate image. 
29 | 30 | This math gives us a way of summing a bunch of pieces together to represent an image, to attempt to make that representation look as much like the image as possible, 31 | and to make that representation efficient - using as few pieces as possible. 32 | That is a lot of work for two lines of math! 33 | 34 | Now let's put this metaphor into action to understand what all these symbols mean. 35 | I'll give you one to start with. 36 | The vector x in the equation above represents the coordinates of a point in the image. 37 | 38 | Now you fill in the rest: 39 | 40 | What do the ϕis, called the "basis functions," represent in our metaphor? 41 | 42 | (i) The importance of coding efficiency. 43 | (ii) The level of transparency vs. opacity/influence of each piece. 44 | (iii) The difference between the actual image and the representation. 45 | (iv) The pieces that make up the image. 46 | ``` 47 | -------------------------------------------------------------------------------- /Week-4/Quiz/Theory/8.md: -------------------------------------------------------------------------------- 1 | Question:

2 | Continued from Question 7: 3 | ``` 4 | What does ϵ represent? 5 | 6 | (i) The level of transparency vs. opacity/influence of each piece. 7 | (ii) The pieces that make up the image. 8 | (iii) The importance of coding efficiency. 9 | (iv) The difference between the actual image and the representation. 10 | ``` 11 | -------------------------------------------------------------------------------- /Week-4/Quiz/Theory/9.md: -------------------------------------------------------------------------------- 1 | Question:

2 | Continued from Question 7: 3 | ``` 4 | What do the ai's represent? 5 | 6 | (i) The difference between the actual image and the representation. 7 | (ii) The pieces that make up the image. 8 | (iii) The importance of coding efficiency. 9 | (iv) The level of transparency vs. opacity/influence of each piece. 10 | ``` 11 | -------------------------------------------------------------------------------- /Week-4/notes/notes/1 - Information and Entropy.md: -------------------------------------------------------------------------------- 1 | ![](http://geekresearchlab.net/coursera/neuro/n-1.jpg) 2 | ``` 3 | Each bit of information specifies the location by an additional factor of 2. 4 | ``` 5 | ``` 6 | Entropy is the average information of a random variable. 7 | It measures variability. 8 | It corresponds to the number of yes/no questions needed. 9 | ``` 10 | ![](http://geekresearchlab.net/coursera/neuro/n-2.jpg) 11 | ``` 12 | Question: 13 | Can you speculate how many bits of information are required to locate your car (the old-fashioned bug) either by calculating or by intuition? 14 | Answer: 15 | 3 16 | ``` 17 | Explanation:
18 | ![](http://geekresearchlab.net/coursera/neuro/n-3.jpg) 19 |
20 | In the above diagram, there was a minus missing in the 4th line before 1/8. 21 |
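The 3-bit answer can be verified directly: for 8 equally likely locations, the entropy is −Σ p log₂ p = log₂ 8 = 3 bits, i.e. three yes/no questions.

```python
import math

def entropy(probs):
    # Shannon entropy in bits; terms with p = 0 contribute nothing
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([1/8] * 8))   # 3.0 bits: three yes/no questions locate the car
```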
22 |
23 | ![](http://geekresearchlab.net/coursera/neuro/n-4.jpg)

24 | ![](http://geekresearchlab.net/coursera/neuro/n-5.jpg)

25 | ![](http://geekresearchlab.net/coursera/neuro/n-6.jpg) 26 | ``` 27 | Back to spike code: Stimulus?? 28 | From the below diagram, 29 | Entropy tells us about the intrinsic variability of outputs. 30 | But, we need to consider the stimulus and how it's driving the responses. 31 | The stimulus can one in perfect direction where each of them is perfectly encoded. 32 | Let r be spiking response and s be the stimulus. 33 | Everytime, there is a stimulus, we will get a spike. 34 | ``` 35 | ![](http://geekresearchlab.net/coursera/neuro/n-7.jpg)

36 | ``` 37 | Let q be the error probability. 38 | ``` 39 | ![](http://geekresearchlab.net/coursera/neuro/n-8.jpg) 40 |

41 | ![](http://geekresearchlab.net/coursera/neuro/n-9.jpg) 42 | ``` 43 | In the diagram below, 44 | we go back to the binomial calculations 45 | and see how the mutual information depends upon the noise. 46 | p = 1/2 maximizes the entropy. 47 | Assume the noise is the same for spike and silence, with error probability q. 48 | 49 | The result should be intuitive: 50 | 51 | as the error q grows larger, spiking becomes a less reliable signal, 52 | and the mutual information decreases. 53 | 54 | When q = 0.5, there is no mutual information between r and s. 55 | ``` 56 | ![](http://geekresearchlab.net/coursera/neuro/n-10.jpg) 57 | ``` 58 | Question: 59 | If the stimulus and response are independent events with their own probabilities, p(s) & p(r), 60 | what is p(r|s)? 61 | ``` 62 | ![](http://geekresearchlab.net/coursera/neuro/n-11.jpg) 63 | ``` 64 | Answer: 65 | p(r) 66 | 67 | Explanation: 68 | Because there is no relationship between response and stimulus. 69 | ``` 70 | ``` 71 | From the same diagram, 72 | Question: 73 | If p(r|s) = p(r), 74 | what is the mutual information (MI) of r and s? 75 | Answer: 76 | 0 77 | Explanation: 78 | Recall that when p(r|s) = p(r), 79 | r and s are independent because the statement tells us that knowing s does not help us narrow down what r could be. 80 | Because s does not reduce the level of uncertainty about r, it is said that s gives us no information about r. 81 | Thus, the mutual information of r and s is 0. 82 | 83 | Note that the word "mutual" refers to the fact that the relationship between two variables can be viewed both ways: 84 | if knowing r reduces uncertainty about s, then knowing s necessarily reduces uncertainty about r by the same amount. 85 | Thus, when we quantify this reduction in uncertainty we call it "mutual information." 86 | ``` 87 | ``` 88 | The two questions above refer to Part 1 in that diagram... 89 | Summarizing Part 1, 90 | the noise entropy is equal to the total entropy, 91 | so the MI is zero.
92 | Coming to Part 2, 93 | at the opposite extreme, 94 | the response is perfectly predicted by the stimulus. 95 | 96 | In this case, the noise entropy is 0, 97 | so the MI is given by the total entropy of the response: 98 | all of the response's coding capacity is used in encoding the stimulus. 99 | ``` 100 | ![](http://geekresearchlab.net/coursera/neuro/n-12.jpg) 101 |
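The q-dependence described above can be made concrete. For a maximum-entropy stimulus (p = 1/2) with symmetric error probability q, the mutual information is the total entropy (1 bit) minus the noise entropy h2(q). A small sketch (illustrative, not course code):

```python
import math

def h2(q):
    """Binary entropy in bits."""
    if q in (0.0, 1.0):
        return 0.0
    return -q * math.log2(q) - (1 - q) * math.log2(1 - q)

# MI = total entropy - noise entropy = 1 - h2(q) when p = 0.5
for q in (0.0, 0.1, 0.25, 0.5):
    print(q, 1.0 - h2(q))
# MI falls from 1 bit at q = 0 down to 0 bits at q = 0.5
```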

102 | ![](http://geekresearchlab.net/coursera/neuro/n-13.jpg) 103 | ``` 104 | In the diagram below, 105 | P(r,s) = joint distribution 106 | P(r)P(s) = product of the marginal distributions 107 | The mutual information can be rewritten using the conditional distribution, 108 | and since the expression treats r and s in the same way, 109 | the information is completely symmetric. 110 | ``` 111 | ![](http://geekresearchlab.net/coursera/neuro/n-14.jpg) 112 | ``` 113 | A small recipe for calculating... 114 | ``` 115 | ![](http://geekresearchlab.net/coursera/neuro/n-15.jpg) 116 | ``` 117 | In the next part, we will discuss calculating information in spike trains... 118 | Two methods are used: 119 | --- Information in spike patterns 120 | --- Information in single spikes 121 | ``` 122 | -------------------------------------------------------------------------------- /Week-4/notes/notes/2 - Calculating Information in Spike Trains.md: -------------------------------------------------------------------------------- 1 | ![](http://geekresearchlab.net/coursera/neuro/i-1.jpg)

2 | ![](http://geekresearchlab.net/coursera/neuro/i-2.jpg)

3 | ![](http://geekresearchlab.net/coursera/neuro/i-3.jpg)

4 | ![](http://geekresearchlab.net/coursera/neuro/i-4.jpg)

5 | ![](http://geekresearchlab.net/coursera/neuro/i-5.jpg) 6 | ``` 7 | Question: 8 | Take a moment to imagine the possible effects of this jitter. 9 | Take a guess about which statement is true (or all of the above): 10 | 11 | Answers: 12 | Some is to be expected as a result of noise inherent in neural activity. 13 | 14 | Information about a stimulus can still be reliably transmitted if jitter exists. 15 | 16 | The greater the width of this jitter, the less confident we are that the cell is accurately representing the stimulus. 17 | 18 | Explanation: 19 | All of these statements are true. Biological systems, including neural systems, exhibit variability. 20 | Excess variability can corrupt neural signals, making them less reliable. 21 | However, neural systems deal with this variability in fascinating ways, sometimes reducing this "noise" as much as possible within fundamental physical limits. 22 | Certain neural systems even use noise to their advantage. 23 | We will touch on just a small number of examples of these incredible adaptations in this course. 24 | ``` 25 | ![](http://geekresearchlab.net/coursera/neuro/i-6.jpg)

26 | ![](http://geekresearchlab.net/coursera/neuro/i-7.jpg)

27 | ![](http://geekresearchlab.net/coursera/neuro/i-8.jpg) 28 | ``` 29 | Question: 30 | Can you recall how to compute the information in the response? 31 | Answer: 32 | Total entropy - noise entropy 33 | ``` 34 | ![](http://geekresearchlab.net/coursera/neuro/i-9.jpg)
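The total-minus-noise recipe can be sketched on a toy example. The distributions below are made up for illustration: two equally likely stimuli, each flipping the spike/no-spike response with probability 0.1:

```python
import numpy as np

def entropy(p):
    """Entropy in bits of a discrete distribution."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

p_s = [0.5, 0.5]                   # P(s): two equally likely stimuli
p_r_given_s = {0: [0.9, 0.1],      # P(r|s): response distribution per stimulus
               1: [0.1, 0.9]}
# Total response distribution: P(r) = sum_s P(s) P(r|s)
p_r = np.sum([ps * np.asarray(p_r_given_s[s]) for s, ps in enumerate(p_s)], axis=0)

total_entropy = entropy(p_r)       # 1 bit here
noise_entropy = sum(ps * entropy(p_r_given_s[s]) for s, ps in enumerate(p_s))
information = total_entropy - noise_entropy
print(information)                 # ~0.53 bits
```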
35 | ![](http://geekresearchlab.net/coursera/neuro/i-10.jpg)
36 | ![](http://geekresearchlab.net/coursera/neuro/i-11.jpg) 37 | -------------------------------------------------------------------------------- /Week-4/notes/notes/3 - Coding Principles.md: -------------------------------------------------------------------------------- 1 | ![](http://geekresearchlab.net/coursera/neuro/o-1.jpg)

2 | ![](http://geekresearchlab.net/coursera/neuro/o-2.jpg)

3 | ![](http://geekresearchlab.net/coursera/neuro/o-3.jpg)

4 | ![](http://geekresearchlab.net/coursera/neuro/o-4.jpg)

5 | ![](http://geekresearchlab.net/coursera/neuro/o-5.jpg)

6 | ![](http://geekresearchlab.net/coursera/neuro/o-6.jpg)

7 | ![](http://geekresearchlab.net/coursera/neuro/o-7.jpg)

8 | ![](http://geekresearchlab.net/coursera/neuro/o-8.jpg) 9 | ``` 10 | Question: 11 | If fluctuations in some stimulus start out large and suddenly decrease in amplitude 12 | (without changing the stimulus average), 13 | the optimal input-output curve for a sensory neuron encoding the stimulus will: 14 | Answer: 15 | contract (become more steep) 16 | Explanation: 17 | If the amplitude of stimulus fluctuations decreases, 18 | the probability distribution of stimulus values will become skinnier. 19 | To adjust for this change in stimulus statistics, 20 | a sensory neuron responding to the stimulus should correspondingly squeeze its input-output curve to best encode a smaller range of stimulus values. 21 | This way, the input-output curve will go back to being similar to the cumulative integral of the stimulus distribution (as described earlier in this lecture), 22 | and the neuron will be able to efficiently encode the range of the stimulus. 23 | ``` 24 | ![](http://geekresearchlab.net/coursera/neuro/o-9.jpg)
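The "cumulative integral" idea in the explanation above can be checked numerically: the efficient input-output curve tracks the stimulus CDF, so a narrower stimulus distribution yields a steeper curve. A sketch with assumed Gaussian stimuli (the widths are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two stimulus ensembles with the same mean but different fluctuation size
stim_wide = rng.normal(0.0, 2.0, 100000)    # large fluctuations
stim_narrow = rng.normal(0.0, 0.5, 100000)  # small fluctuations

# Empirical CDF sampled on a grid = the "optimal" input-output curve
x = np.linspace(-6, 6, 13)
io_wide = [np.mean(stim_wide <= v) for v in x]
io_narrow = [np.mean(stim_narrow <= v) for v in x]
# The narrow-stimulus curve rises from 0 to 1 over a smaller input range,
# i.e. it is steeper, matching the quiz answer above.
```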

25 | ![](http://geekresearchlab.net/coursera/neuro/o-10.jpg)

26 | ![](http://geekresearchlab.net/coursera/neuro/o-11.jpg)

27 | ![](http://geekresearchlab.net/coursera/neuro/o-12.jpg)

28 | ![](http://geekresearchlab.net/coursera/neuro/o-13.jpg)

29 | ![](http://geekresearchlab.net/coursera/neuro/o-14.jpg)
30 | ``` 31 | Coding Principles: 32 | --- Coding efficiency 33 | --- Adaptation to stimulus statistics 34 | --- Sparseness 35 | ``` 36 | ![](http://geekresearchlab.net/coursera/neuro/o-15.jpg) 37 | ``` 38 | What have we missed?? 39 | ``` 40 | ![](http://geekresearchlab.net/coursera/neuro/o-16.jpg) 41 | ``` 42 | In the next lecture, the biophysics of coding will be discussed. 43 | ``` 44 | -------------------------------------------------------------------------------- /Week-4/notes/shared/resources.md: -------------------------------------------------------------------------------- 1 | [1] Lectures
2 | [2] Derivation of Definition of Entropy 3 | -------------------------------------------------------------------------------- /Week-5/notes/notes/1 - Modeling Neurons.md: -------------------------------------------------------------------------------- 1 | ![](http://geekresearchlab.net/coursera/neuro/model-1.jpg)

2 | ![](http://geekresearchlab.net/coursera/neuro/model-2.jpg)

3 | ![](http://geekresearchlab.net/coursera/neuro/model-3.jpg)

4 | ![](http://geekresearchlab.net/coursera/neuro/model-4.jpg)

5 | ![](http://geekresearchlab.net/coursera/neuro/model-5.jpg)

6 | ![](http://geekresearchlab.net/coursera/neuro/model-6.jpg)

7 | ![](http://geekresearchlab.net/coursera/neuro/model-7.jpg)

8 | ![](http://geekresearchlab.net/coursera/neuro/model-8.jpg)

9 | ![](http://geekresearchlab.net/coursera/neuro/model-9.jpg)

10 | ![](http://geekresearchlab.net/coursera/neuro/model-10.jpg)

11 | ![](http://geekresearchlab.net/coursera/neuro/model-11.jpg)

12 | ![](http://geekresearchlab.net/coursera/neuro/model-12.jpg)

13 | ![](http://geekresearchlab.net/coursera/neuro/model-13.jpg)

14 | ![](http://geekresearchlab.net/coursera/neuro/model-14.jpg)
15 | -------------------------------------------------------------------------------- /Week-5/notes/notes/2 - Spikes.md: -------------------------------------------------------------------------------- 1 | ![](http://geekresearchlab.net/coursera/neuro/spi-1.jpg)

2 | ![](http://geekresearchlab.net/coursera/neuro/spi-2.jpg)

3 | ![](http://geekresearchlab.net/coursera/neuro/spi-3.jpg)

4 | ![](http://geekresearchlab.net/coursera/neuro/spi-4.jpg)

5 | ![](http://geekresearchlab.net/coursera/neuro/spi-5.jpg)

6 | ![](http://geekresearchlab.net/coursera/neuro/spi-6.jpg)

7 | ![](http://geekresearchlab.net/coursera/neuro/spi-7.jpg)

8 | ![](http://geekresearchlab.net/coursera/neuro/spi-8.jpg)

9 | ![](http://geekresearchlab.net/coursera/neuro/spi-9.jpg)

10 | ![](http://geekresearchlab.net/coursera/neuro/spi-10.jpg)

11 | ![](http://geekresearchlab.net/coursera/neuro/spi-11.jpg)

12 | ![](http://geekresearchlab.net/coursera/neuro/spi-12.jpg)

13 | ![](http://geekresearchlab.net/coursera/neuro/spi-13.jpg)

14 | ![](http://geekresearchlab.net/coursera/neuro/spi-14.jpg)

15 | ![](http://geekresearchlab.net/coursera/neuro/spi-15.jpg)

16 | ![](http://geekresearchlab.net/coursera/neuro/spi-16.jpg)
17 | -------------------------------------------------------------------------------- /Week-5/notes/notes/3---Simplified-Modeled-Neurons/1 - overview.md: -------------------------------------------------------------------------------- 1 | ![](http://geekresearchlab.net/coursera/neuro/spik-1.jpg) 2 | -------------------------------------------------------------------------------- /Week-5/notes/notes/3---Simplified-Modeled-Neurons/2 - introduction.md: -------------------------------------------------------------------------------- 1 | ![](http://geekresearchlab.net/coursera/neuro/spik-2.jpg) 2 | -------------------------------------------------------------------------------- /Week-5/notes/notes/3---Simplified-Modeled-Neurons/3 - Capturing the basic dynamics of neurons.md: -------------------------------------------------------------------------------- 1 | ![](http://geekresearchlab.net/coursera/neuro/spik-3.jpg)

2 | ![](http://geekresearchlab.net/coursera/neuro/spik-3-1.jpg)

3 | ![](http://geekresearchlab.net/coursera/neuro/spik-3-2.jpg)

4 | ![](http://geekresearchlab.net/coursera/neuro/spik-3-3.jpg)

5 | ![](http://geekresearchlab.net/coursera/neuro/spik-4.jpg)

6 | ![](http://geekresearchlab.net/coursera/neuro/spik-5.jpg)

7 | ![](http://geekresearchlab.net/coursera/neuro/spik-5-1.jpg)

8 | ![](http://geekresearchlab.net/coursera/neuro/spik-6.jpg)

9 | ![](http://geekresearchlab.net/coursera/neuro/spik-7.jpg)

10 | ![](http://geekresearchlab.net/coursera/neuro/spik-8.jpg)
11 | ![](http://geekresearchlab.net/coursera/neuro/spik-8-1.jpg)
12 | -------------------------------------------------------------------------------- /Week-5/notes/notes/3---Simplified-Modeled-Neurons/4 - Two-dimensional models.md: -------------------------------------------------------------------------------- 1 | ![](http://geekresearchlab.net/coursera/neuro/spik-9.jpg)

2 | ![](http://geekresearchlab.net/coursera/neuro/spik-9-1.jpg)

3 | ![](http://geekresearchlab.net/coursera/neuro/spik-9-2.jpg)

4 | ![](http://geekresearchlab.net/coursera/neuro/spik-9-3.jpg)

5 | ![](http://geekresearchlab.net/coursera/neuro/spik-9-4.jpg)
6 | -------------------------------------------------------------------------------- /Week-5/notes/notes/3---Simplified-Modeled-Neurons/5 - The Simple Model.md: -------------------------------------------------------------------------------- 1 | ![](http://geekresearchlab.net/coursera/neuro/spik-10.jpg)

2 | ![](http://geekresearchlab.net/coursera/neuro/spik-10-1.jpg)
3 | -------------------------------------------------------------------------------- /Week-5/notes/notes/4 - A forest of dendrites.md: -------------------------------------------------------------------------------- 1 | ![](http://geekresearchlab.net/coursera/neuro/dend-1.jpg)

2 | ![](http://geekresearchlab.net/coursera/neuro/dend-2.jpg)

3 | ![](http://geekresearchlab.net/coursera/neuro/dend-3.jpg)

4 | ![](http://geekresearchlab.net/coursera/neuro/dend-4.jpg)

5 | ![](http://geekresearchlab.net/coursera/neuro/dend-5.jpg)

6 | ![](http://geekresearchlab.net/coursera/neuro/dend-6.jpg)

7 | ![](http://geekresearchlab.net/coursera/neuro/dend-7.jpg)

8 | ![](http://geekresearchlab.net/coursera/neuro/dend-8.jpg)

9 | ![](http://geekresearchlab.net/coursera/neuro/dend-9.jpg)

10 | ![](http://geekresearchlab.net/coursera/neuro/dend-10.jpg)

11 | ![](http://geekresearchlab.net/coursera/neuro/dend-11.jpg)

12 | ![](http://geekresearchlab.net/coursera/neuro/dend-12.jpg)

13 | ![](http://geekresearchlab.net/coursera/neuro/dend-13.jpg)

14 | ![](http://geekresearchlab.net/coursera/neuro/dend-14.jpg)
15 | -------------------------------------------------------------------------------- /Week-5/notes/notes/README.md: -------------------------------------------------------------------------------- 1 | ``` 2 | Computing in Carbon 3 | ``` 4 | ``` 5 | Synopsis: 6 | 7 | Neuroelectronics 8 | -- membranes 9 | -- ion channels 10 | -- wiring 11 | 12 | Simplified neuron models 13 | -- basic dynamics of neuronal excitability 14 | 15 | Neuronal geometry 16 | -- Dendrites and dendritic computing 17 | ``` 18 | ![](http://geekresearchlab.net/coursera/neuro/dend-15.jpg)
19 | -------------------------------------------------------------------------------- /Week-5/notes/shared/resources.md: -------------------------------------------------------------------------------- 1 | [1] Lectures
2 | [2] RC circuits and solving first-order differential equations
3 | -------------------------------------------------------------------------------- /Week-5/quiz/README.md: -------------------------------------------------------------------------------- 1 | ``` 2 | No. of attempts: 2/10 3 | Attempt #1: 15/16 4 | Attempt #2: 16/16 5 | 6 | Final Score: 16/16 7 | ``` 8 | // TODO --- Add solutions, explanation and related links by the end of the course. 9 | -------------------------------------------------------------------------------- /Week-6/README.md: -------------------------------------------------------------------------------- 1 | no quiz this week (^_^) 2 | -------------------------------------------------------------------------------- /Week-6/notes/notes/1 - Modelling connections between neurons.md: -------------------------------------------------------------------------------- 1 | ![](http://geekresearchlab.net/coursera/neuro/net-1.jpg) 2 | ``` 3 | How do neurons connect to form networks? 4 | 5 | Neurons use synapses 6 | ``` 7 | ``` 8 | What do these chemical "synapses" do? 9 | 10 | -- Increase or Decrease postsynaptic membrane potential. 11 | ``` 12 | ``` 13 | In an excitatory synapse, 14 | (Steps) 15 | [1] Input spike 16 | [2] Neurotransmitter release (Eg.Glutamate) 17 | [3] Binds to receptors 18 | [4] Ion channels open 19 | [5] Positive ions (Eg.Na+) enter cell 20 | [6] Depolarization (increase local membrane potential) 21 | ``` 22 | ``` 23 | In an inhibitory synapse, 24 | (Steps) 25 | [1] Input spike 26 | [2] Neurotransmitter release (Eg.GABA) 27 | [3] Binds to receptors 28 | [4] Ion channels open 29 | [5] Positive ions (Eg.K+) leave the cell 30 | [6] Hyperpolarization (decrease local membrane potential) 31 | ``` 32 | ``` 33 | What we want to do is... 34 | a computational model of the effects of a synapse on the membrane potential V. 35 | 36 | So.. how do we do this? 37 | ``` 38 | ![](http://geekresearchlab.net/coursera/neuro/net-2.jpg)

39 | ![](http://geekresearchlab.net/coursera/neuro/net-3.jpg) 40 | ``` 41 | Question: 42 | What is the effect of τm, the "membrane time constant", 43 | on how fast the cell's voltage changes in response to an input? 44 | Answer: 45 | As τm increases, 46 | it takes longer for the cell to reach steady state when an input is turned on, 47 | and longer to decrease to equilibrium when it is turned off. 48 | Explanation: 49 | If you divide both sides of the membrane potential equation by τm, 50 | you can see that if the time constant increases, 51 | the terms on the right hand side governing the rate of change of voltage decrease. 52 | ``` 53 | ``` 54 | How do we model the effects of a synapse on the membrane potential V? 55 | ``` 56 | ![](http://geekresearchlab.net/coursera/neuro/net-4.jpg)
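The τm answer above can be verified with a forward-Euler sketch of the passive membrane equation τm·dV/dt = -(V - V∞), where V∞ stands in for the steady-state drive (the constants here are illustrative, not from the lecture):

```python
import numpy as np

def simulate(tau_m, V_inf=10.0, T=100.0, dt=0.1):
    """Euler integration of tau_m * dV/dt = -(V - V_inf), starting from V = 0.
    V_inf is the assumed steady-state value set by the constant input."""
    V = 0.0
    trace = [V]
    for _ in range(int(T / dt)):
        V += dt * (-(V - V_inf) / tau_m)
        trace.append(V)
    return np.array(trace)

# Larger tau_m -> slower approach to the same steady state
fast = simulate(tau_m=5.0)
slow = simulate(tau_m=20.0)
print(fast[100], slow[100])  # at t = 10 ms the fast cell is closer to V_inf
```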

57 | ![](http://geekresearchlab.net/coursera/neuro/net-5.jpg)

58 | ![](http://geekresearchlab.net/coursera/neuro/net-6.jpg) 59 | ``` 60 | Question: 61 | What is the value of Ps at equilibrium? 62 | That is, what is the value of Ps such that dPs/dt is 0? 63 | Answer: 64 | ``` 65 | ![](http://geekresearchlab.net/coursera/neuro/net-7.jpg)

66 | ![](http://geekresearchlab.net/coursera/neuro/net-8.jpg) 67 | ``` 68 | How do we model the effects of multiple spikes in synaptic conductance? 69 | ``` 70 | ![](http://geekresearchlab.net/coursera/neuro/net-9.jpg)
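One common answer (and the one used in this course's alpha_neuron quiz code) is the alpha function: each input spike contributes a conductance kernel that rises to g_peak at t_peak and then decays, and the total synaptic conductance is the sum of kernels over past spikes. A sketch using the quiz code's parameter values, written in the equivalent form g_peak·(t/t_peak)·exp(1 - t/t_peak); the spike times are invented:

```python
import numpy as np

dt = 0.1
t = np.arange(0, 50, dt)
t_peak, g_peak = 1.0, 0.05  # ms, nS (values from the alpha_neuron code)
alpha = g_peak * (t / t_peak) * np.exp(1 - t / t_peak)  # peaks at g_peak

# Total conductance from multiple input spikes = sum of time-shifted kernels
spike_times = [5.0, 7.0, 20.0]
g_syn = np.zeros_like(t)
for ts in spike_times:
    shifted = t - ts
    kernel = np.where(shifted >= 0,
                      g_peak * (shifted / t_peak) * np.exp(1 - shifted / t_peak),
                      0.0)
    g_syn += kernel
# Overlapping kernels from the spikes at 5 and 7 ms sum above g_peak
```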

71 | ![](http://geekresearchlab.net/coursera/neuro/net-10.jpg)
72 | -------------------------------------------------------------------------------- /Week-6/notes/notes/2 - Introduction to network models.md: -------------------------------------------------------------------------------- 1 | ![](http://geekresearchlab.net/coursera/neuro/x/model-1.jpg) 2 | ``` 3 | Modeling networks: Spiking versus firing rate 4 | 5 | Option 1: Model networks using spiking neurons 6 | (i) Advantages: Model computation and learning based on, 7 | -- Spike Timing 8 | -- Spike Correlations/Synchrony between neurons 9 | (ii) Disadvantages: 10 | -- Computationally expensive 11 | 12 | Option 2: Use neurons with firing rate outputs (real valued outputs) 13 | (i) Advantages: 14 | -- Greater efficiency 15 | -- Scales well to large networks 16 | (ii) Disadvantages: 17 | -- Ignores spike timing issues 18 | ``` 19 | ``` 20 | Question is... 21 | 22 | How are these 2 approaches related? 23 | 24 | Answer is... 25 | ``` 26 | ![](http://geekresearchlab.net/coursera/neuro/x/model-2.jpg)

27 | ![](http://geekresearchlab.net/coursera/neuro/x/model-3.jpg)

28 | ![](http://geekresearchlab.net/coursera/neuro/x/model-4.jpg)

29 | ![](http://geekresearchlab.net/coursera/neuro/x/model-5.jpg)

30 | ![](http://geekresearchlab.net/coursera/neuro/x/model-6.jpg)

31 | ![](http://geekresearchlab.net/coursera/neuro/x/model-7.jpg)

32 | ![](http://geekresearchlab.net/coursera/neuro/x/model-8.jpg)

33 | ![](http://geekresearchlab.net/coursera/neuro/x/model-9.jpg)

34 | ![](http://geekresearchlab.net/coursera/neuro/x/model-10.jpg)

35 | ![](http://geekresearchlab.net/coursera/neuro/x/model-11.jpg)

36 | ![](http://geekresearchlab.net/coursera/neuro/x/model-12.jpg)

37 | ![](http://geekresearchlab.net/coursera/neuro/x/model-13.jpg)
38 | ![](http://geekresearchlab.net/coursera/neuro/x/model-13-1.jpg)
39 | ![](http://geekresearchlab.net/coursera/neuro/x/model-13-2.jpg)

40 | ![](http://geekresearchlab.net/coursera/neuro/x/model-14.jpg)

41 | ![](http://geekresearchlab.net/coursera/neuro/x/model-15.jpg)

42 | ![](http://geekresearchlab.net/coursera/neuro/x/model-16.jpg)

43 | ![](http://geekresearchlab.net/coursera/neuro/x/model-17.jpg)

44 | ![](http://geekresearchlab.net/coursera/neuro/x/model-18.jpg)
45 | -------------------------------------------------------------------------------- /Week-6/notes/notes/3 - The World of Recurrent networks.md: -------------------------------------------------------------------------------- 1 | ![](http://geekresearchlab.net/coursera/neuro/recurrent-1.jpg)

2 | ![](http://geekresearchlab.net/coursera/neuro/recurrent-2.jpg)

3 | ![](http://geekresearchlab.net/coursera/neuro/recurrent-3.jpg)

4 | ![](http://geekresearchlab.net/coursera/neuro/recurrent-4.jpg)

5 | ![](http://geekresearchlab.net/coursera/neuro/recurrent-5.jpg) 6 | ``` 7 | Question: 8 | We have used the term "steady state" several times now over the past few weeks. 9 | What do we mean by "steady state value" here? 10 | Answer: 11 | The value of v, given our weight matrices W and M, such that v does not change further over time. 12 | ``` 13 | ![](http://geekresearchlab.net/coursera/neuro/recurrent-6.jpg)
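For the linear firing-rate network τ·dv/dt = -v + W u + M v, that steady state has a closed form: setting dv/dt = 0 gives v_ss = (I - M)^-1 W u, valid when the recurrent weight matrix M has eigenvalues of magnitude below 1. A sketch with made-up weights and inputs:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 5
W = rng.uniform(0, 0.2, (N, N))           # feedforward weights (assumed values)
M = 0.1 * (np.ones((N, N)) - np.eye(N))   # weak recurrent weights (assumed)
u = rng.uniform(0, 1, N)                  # input firing rates

# Steady state: solve (I - M) v_ss = W u
v_ss = np.linalg.solve(np.eye(N) - M, W @ u)

# Check that v_ss is a fixed point of dv/dt = (-v + W u + M v) / tau
residual = -v_ss + W @ u + M @ v_ss
print(np.allclose(residual, 0))
```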

14 | ![](http://geekresearchlab.net/coursera/neuro/recurrent-7.jpg)

15 | ![](http://geekresearchlab.net/coursera/neuro/recurrent-8.jpg)

16 | ![](http://geekresearchlab.net/coursera/neuro/recurrent-9.jpg)

17 | ![](http://geekresearchlab.net/coursera/neuro/recurrent-10.jpg)

18 | ![](http://geekresearchlab.net/coursera/neuro/recurrent-11.jpg)

19 | ![](http://geekresearchlab.net/coursera/neuro/recurrent-12.jpg)

20 | ![](http://geekresearchlab.net/coursera/neuro/recurrent-13.jpg)

21 | ![](http://geekresearchlab.net/coursera/neuro/recurrent-14.jpg) 22 | ``` 23 | Question: 24 | Why might gain modulation be a useful property for a network to have? 25 | Answer: 26 | It might help the network discriminate among several signals that are close together. 27 | Explanation: 28 | An input set of neurons might have a response vector that changes only a little bit over a certain range of stimuli, making it difficult to read out. 29 | However, if a downstream network performs gain modulation, then little changes in the input response will yield large changes in the output response, so looking at the output response will better allow you to discriminate among stimuli. 30 | ``` 31 | ![](http://geekresearchlab.net/coursera/neuro/recurrent-15.jpg)

32 | ![](http://geekresearchlab.net/coursera/neuro/recurrent-16.jpg)

33 | ![](http://geekresearchlab.net/coursera/neuro/recurrent-17.jpg)

34 | ![](http://geekresearchlab.net/coursera/neuro/recurrent-18.jpg)

35 | ![](http://geekresearchlab.net/coursera/neuro/recurrent-19.jpg)
36 | -------------------------------------------------------------------------------- /Week-6/notes/shared/resources.md: -------------------------------------------------------------------------------- 1 | [1] Lecture Notes
2 | [2] Dr. Rao's Notes on Recurrent Networks! 3 | -------------------------------------------------------------------------------- /Week-7/notes/notes/1 - LTP and LTD.md: -------------------------------------------------------------------------------- 1 | ![](http://geekresearchlab.net/coursera/neuro/xx/ls-1-1.jpg)

2 | ![](http://geekresearchlab.net/coursera/neuro/xx/ls-1-2.jpg)

3 | ![](http://geekresearchlab.net/coursera/neuro/xx/ls-1-3.jpg) 4 | -------------------------------------------------------------------------------- /Week-7/notes/notes/2 - Hebb's rule.md: -------------------------------------------------------------------------------- 1 | ![](http://geekresearchlab.net/coursera/neuro/xx/ls-2-1.jpg)

2 | ![](http://geekresearchlab.net/coursera/neuro/xx/ls-2-2.jpg)

3 | ![](http://geekresearchlab.net/coursera/neuro/xx/ls-2-3.jpg) 4 | -------------------------------------------------------------------------------- /Week-7/notes/notes/3 - Covariance rule.md: -------------------------------------------------------------------------------- 1 | ![](http://geekresearchlab.net/coursera/neuro/xx/ls-2-4.jpg)

2 | ![](http://geekresearchlab.net/coursera/neuro/xx/ls-2-5.jpg)

3 | ![](http://geekresearchlab.net/coursera/neuro/xx/ls-2-6.jpg)

4 | ![](http://geekresearchlab.net/coursera/neuro/xx/ls-2-7.jpg)

5 | ![](http://geekresearchlab.net/coursera/neuro/xx/ls-2-8.jpg)

6 | -------------------------------------------------------------------------------- /Week-7/notes/notes/4 - Analyzing learning rules.md: -------------------------------------------------------------------------------- 1 | ![](http://geekresearchlab.net/coursera/neuro/xx/ls-3-1.jpg) 2 | -------------------------------------------------------------------------------- /Week-7/notes/notes/5 - Oja's Rule.md: -------------------------------------------------------------------------------- 1 | ![](http://geekresearchlab.net/coursera/neuro/xx/ls-3-2.jpg) 2 | -------------------------------------------------------------------------------- /Week-7/notes/notes/6 - Summary of Hebbian Learning.md: -------------------------------------------------------------------------------- 1 | ![](http://geekresearchlab.net/coursera/neuro/xx/ls-3-3.jpg)

2 | ![](http://geekresearchlab.net/coursera/neuro/xx/ls-3-5.jpg)

3 | ![](http://geekresearchlab.net/coursera/neuro/xx/ls-3-4.jpg)

4 | ![](http://geekresearchlab.net/coursera/neuro/xx/ls-3-4-1.jpg)
5 | -------------------------------------------------------------------------------- /Week-7/notes/notes/7 - Statistical Learning.md: -------------------------------------------------------------------------------- 1 | ![](http://geekresearchlab.net/coursera/neuro/xx/ls-4-1.jpg)

2 | ![](http://geekresearchlab.net/coursera/neuro/xx/ls-4-2.jpg)

3 | ![](http://geekresearchlab.net/coursera/neuro/xx/ls-4-3.jpg) 4 | -------------------------------------------------------------------------------- /Week-7/notes/notes/8 - Introduction to unsupervised learning.md: -------------------------------------------------------------------------------- 1 | ![](http://geekresearchlab.net/coursera/neuro/ul-1.jpg)

2 | ![](http://geekresearchlab.net/coursera/neuro/ul-2.jpg)

3 | ![](http://geekresearchlab.net/coursera/neuro/ul-3.jpg)

4 | ![](http://geekresearchlab.net/coursera/neuro/ul-4.jpg)

5 | ![](http://geekresearchlab.net/coursera/neuro/ul-5.jpg)

6 | ![](http://geekresearchlab.net/coursera/neuro/ul-6.jpg)

7 | ![](http://geekresearchlab.net/coursera/neuro/ul-7.jpg)

8 | ![](http://geekresearchlab.net/coursera/neuro/ul-8.jpg)

9 | ![](http://geekresearchlab.net/coursera/neuro/ul-9.jpg)

10 | ![](http://geekresearchlab.net/coursera/neuro/ul-10.jpg)

11 | ![](http://geekresearchlab.net/coursera/neuro/ul-11.jpg)

12 | ![](http://geekresearchlab.net/coursera/neuro/ul-12.jpg)

13 | ![](http://geekresearchlab.net/coursera/neuro/ul-13.jpg)

14 | ![](http://geekresearchlab.net/coursera/neuro/ul-14.jpg)

15 | ![](http://geekresearchlab.net/coursera/neuro/ul-15.jpg)

16 | ![](http://geekresearchlab.net/coursera/neuro/ul-16.jpg)

17 | ![](http://geekresearchlab.net/coursera/neuro/ul-17.jpg) 18 | ``` 19 | How many dimensions does a 100x100 image have? It's 10000. 20 | Explanation:- (the lecture didn't provide a proper one) 21 | Think of the image as a matrix. 22 | Every matrix has the form rows x columns (height x width). 23 | A 1x1 image => 1 pixel => 1 dimension. 24 | A 100x100 image => 100*100 pixels => 10000 dimensions. 25 | Treating each pixel's intensity as one coordinate 26 | turns the image into a point in a 10000-dimensional vector space, 27 | which is the space these unsupervised learning methods work in. 28 | ``` 29 | -------------------------------------------------------------------------------- /Week-7/notes/notes/9 - Sparse coding and Predictive coding.md: -------------------------------------------------------------------------------- 1 | ![](http://geekresearchlab.net/coursera/neuro/ev-1.jpg)

2 | ![](http://geekresearchlab.net/coursera/neuro/ev-2.jpg)

3 | ![](http://geekresearchlab.net/coursera/neuro/ev-3.jpg)

4 | ![](http://geekresearchlab.net/coursera/neuro/ev-4.jpg)

5 | ![](http://geekresearchlab.net/coursera/neuro/ev-5.jpg)

6 | ![](http://geekresearchlab.net/coursera/neuro/ev-6.jpg)

7 | ![](http://geekresearchlab.net/coursera/neuro/ev-7.jpg)

8 | ![](http://geekresearchlab.net/coursera/neuro/ev-8.jpg)

9 | ![](http://geekresearchlab.net/coursera/neuro/ev-9.jpg)

10 | ![](http://geekresearchlab.net/coursera/neuro/ev-10.jpg)

11 | ![](http://geekresearchlab.net/coursera/neuro/ev-11.jpg)

12 | ![](http://geekresearchlab.net/coursera/neuro/ev-12.jpg)

13 | ![](http://geekresearchlab.net/coursera/neuro/ev-13.jpg)

14 | ![](http://geekresearchlab.net/coursera/neuro/ev-14.jpg)

15 | ![](http://geekresearchlab.net/coursera/neuro/ev-15.jpg)

16 | ![](http://geekresearchlab.net/coursera/neuro/ev-16.jpg) 17 | -------------------------------------------------------------------------------- /Week-7/notes/notes/README.md: -------------------------------------------------------------------------------- 1 | ``` 2 | Topics 3 | ``` 4 | ``` 5 | 1: Synaptic Plasticity, Hebb's Rule, and Statistical Learning 6 | 7 | 2: Introduction to Unsupervised Learning 8 | 9 | 3: Sparse Coding and Predictive Coding 10 | ``` 11 | -------------------------------------------------------------------------------- /Week-7/notes/shared/resources.md: -------------------------------------------------------------------------------- 1 | [1] Lecture Notes
2 | [2] Papers
3 | [2.A] Olshausen & Field (1996)
4 | [2.B] Rao (1999)
5 | [2.C] Rao & Ballard (1999)
6 | [2.D] Koch & Poggio (1999)
7 | [2.E] Huang & Rao (2011)
8 | -------------------------------------------------------------------------------- /Week-7/quizzes/README.md: -------------------------------------------------------------------------------- 1 | Quizzes based on Week-6 and Week-7. 2 | ``` 3 | Difficulty level: 10/10 4 | (contains heavy math, programming and neuroscience theory) 5 | No. of attempts: 3/10 6 | 7 | Final Score: 14/15 8 | 9 | I still have 7 more attempts, but I am okay with this score itself. :) 10 | *takes a deep breath* 11 | ``` 12 | -------------------------------------------------------------------------------- /Week-7/quizzes/programming/alpha_neuron.m: -------------------------------------------------------------------------------- 1 | % Fire a neuron via alpha function synapse and random input spike train 2 | % R Rao 2007 3 | 4 | clear 5 | rand('state',0) 6 | % I & F implementation dV/dt = - V/RC + I/C 7 | h = 1; % step size, Euler method, = dt ms 8 | t_max= 200; % ms, simulation time period 9 | tstop = t_max/h; % number of time steps 10 | ref = 0; % refractory period counter 11 | 12 | % Generate random input spikes 13 | % Note: This is not entirely realistic - no refractory period 14 | % Also: if you change step size h, input spike train changes too...
15 | spike_train = rand(tstop,1); 16 | thr = 0.9; % threshold for random spikes 17 | spike_train(find(spike_train > thr)) = ones(size((find(spike_train > thr)))); 18 | spike_train(find(spike_train < thr)) = zeros(size((find(spike_train < thr)))); 19 | 20 | % alpha func synaptic conductance 21 | t_a = 100; % Max duration of syn conductance 22 | t_peak = 1; % ms 23 | g_peak = 0.05; % nS (peak synaptic conductance) 24 | const = g_peak/(t_peak*exp(-1)); 25 | t_vec = 0:h:t_a; 26 | alpha_func = const*t_vec.*(exp(-t_vec/t_peak)); 27 | clf 28 | plot(t_vec(1:80),alpha_func(1:80)) 29 | xlabel('t (in ms)') 30 | title('Alpha Function (Synaptic Conductance for Spike at t=0)') 31 | pause(2) 32 | 33 | % capacitance and leak resistance 34 | C = 0.5 % nF 35 | R = 40 % M ohms 36 | 37 | % conductance and associated parameters to simulate spike rate adaptation 38 | g_ad = 0; 39 | G_inc = 1/h; 40 | tau_ad = 2; 41 | 42 | % Initialize basic parameters 43 | E_leak = -60; % mV, equilibrium potential 44 | E_syn = 0; % Excitatory synapse (why is this excitatory?) 
45 | g_syn = 0; % Current syn conductance
46 | V_th = -40; % spike threshold mV
47 | V_spike = 50; % spike value mV
48 | ref_max = 4/h; % Starting value of ref period counter
49 | t_list = [];
50 | V = E_leak;
51 | V_trace = [V];
52 | t_trace = [0];
53 | 
54 | clf
55 | subplot(2,1,1)
56 | plot(0:h:t_max,[0; spike_train])
57 | title('Input spike train')
58 | 
59 | for t = 1:tstop
60 | 
61 |   % Compute input
62 |   if (spike_train(t) > 0) % check for input spike
63 |     t_list = [t_list; 1];
64 |   end
65 |   % Calculate synaptic current due to current and past input spikes
66 |   g_syn = sum(alpha_func(t_list));
67 |   I_syn = g_syn*(E_syn - V);
68 | 
69 |   % Update spike times
70 |   if t_list
71 |     t_list = t_list + 1;
72 |     if (t_list(1) == t_a) % Reached max duration of syn conductance
73 |       t_list = t_list(2:max(size(t_list)));
74 |     end
75 |   end
76 | 
77 |   % Compute membrane voltage
78 |   % Euler method: V(t+h) = V(t) + h*dV/dt
79 |   if ~ref
80 |     V = V + h*(- ((V-E_leak)*(1+R*g_ad)/(R*C)) + (I_syn/C));
81 |     g_ad = g_ad + h*(- g_ad/tau_ad); % spike rate adaptation
82 |   else
83 |     ref = ref - 1;
84 |     V = V_th-10; % reset voltage after spike
85 |     g_ad = 0;
86 |   end
87 | 
88 |   % Generate spike
89 |   if ((V > V_th) && ~ref)
90 |     V = V_spike;
91 |     ref = ref_max;
92 |     g_ad = g_ad + G_inc;
93 |   end
94 | 
95 |   V_trace = [V_trace V];
96 |   t_trace = [t_trace t*h];
97 | end
98 | 
99 | subplot(2,1,2)
100 | plot(t_trace,V_trace)
101 | drawnow
102 | title('Output spike train')
103 | 
--------------------------------------------------------------------------------
/Week-7/quizzes/programming/alpha_neuron.py:
--------------------------------------------------------------------------------
1 | """
2 | Created on Wed Apr 22 16:13:18 2015
3 | 
4 | Fire a neuron via alpha function synapse and random input spike train
5 | R Rao 2007
6 | 
7 | translated to python by rkp 2015
8 | """
9 | from __future__ import print_function, division
10 | 
11 | import time
12 | import numpy as np
13 | from numpy import concatenate as cc
14 | import matplotlib.pyplot as plt
15 | 
16 | np.random.seed(0)
17 | # I & F implementation dV/dt = - V/RC + I/C
18 | h = 1.  # step size, Euler method, = dt ms
19 | t_max = 200  # ms, simulation time period
20 | tstop = int(t_max/h)  # number of time steps
21 | ref = 0  # refractory period counter
22 | 
23 | # Generate random input spikes
24 | # Note: This is not entirely realistic - no refractory period
25 | # Also: if you change step size h, input spike train changes too...
26 | thr = 0.9  # threshold for random spikes
27 | spike_train = np.random.rand(tstop) > thr
28 | 
29 | # alpha func synaptic conductance
30 | t_a = 100  # Max duration of syn conductance
31 | t_peak = 1  # ms
32 | g_peak = 0.05  # nS (peak synaptic conductance)
33 | const = g_peak / (t_peak*np.exp(-1))
34 | t_vec = np.arange(0, t_a + h, h)
35 | alpha_func = const * t_vec * (np.exp(-t_vec/t_peak))
36 | 
37 | plt.plot(t_vec[:80], alpha_func[:80])
38 | plt.xlabel('t (in ms)')
39 | plt.title('Alpha Function (Synaptic Conductance for Spike at t=0)')
40 | plt.draw()
41 | time.sleep(2)
42 | 
43 | # capacitance and leak resistance
44 | C = 0.5  # nF
45 | R = 40  # M ohms
46 | print('C = {}'.format(C))
47 | print('R = {}'.format(R))
48 | 
49 | # conductance and associated parameters to simulate spike rate adaptation
50 | g_ad = 0
51 | G_inc = 1/h
52 | tau_ad = 2
53 | 
54 | # Initialize basic parameters
55 | E_leak = -60  # mV, equilibrium potential
56 | E_syn = 0  # Excitatory synapse (why is this excitatory?)
57 | g_syn = 0  # Current syn conductance
58 | V_th = -40  # spike threshold mV
59 | V_spike = 50  # spike value mV
60 | ref_max = 4/h  # Starting value of ref period counter
61 | t_list = np.array([], dtype=int)
62 | V = E_leak
63 | V_trace = [V]
64 | t_trace = [0]
65 | 
66 | fig, axs = plt.subplots(2, 1)
67 | axs[0].plot(np.arange(0, t_max, h), spike_train)
68 | axs[0].set_title('Input spike train')
69 | 
70 | for t in range(tstop):
71 | 
72 |     # Compute input
73 |     if spike_train[t]:  # check for input spike
74 |         t_list = cc([t_list, [1]])
75 | 
76 |     # Calculate synaptic current due to current and past input spikes
77 |     g_syn = np.sum(alpha_func[t_list])
78 |     I_syn = g_syn*(E_syn - V)
79 | 
80 |     # Update spike times
81 |     if np.any(t_list):
82 |         t_list = t_list + 1
83 |         if t_list[0] == t_a:  # Reached max duration of syn conductance
84 |             t_list = t_list[1:]
85 | 
86 |     # Compute membrane voltage
87 |     # Euler method: V(t+h) = V(t) + h*dV/dt
88 |     if not ref:
89 |         V = V + h*(-((V-E_leak)*(1+R*g_ad)/(R*C)) + (I_syn/C))
90 |         g_ad = g_ad + h*(-g_ad/tau_ad)  # spike rate adaptation
91 |     else:
92 |         ref -= 1
93 |         V = V_th - 10  # reset voltage after spike
94 |         g_ad = 0
95 | 
96 |     # Generate spike
97 |     if (V > V_th) and not ref:
98 |         V = V_spike
99 |         ref = ref_max
100 |         g_ad = g_ad + G_inc
101 | 
102 |     V_trace += [V]
103 |     t_trace += [t*h]
104 | 
105 | 
106 | axs[1].plot(t_trace, V_trace)
107 | plt.draw()
108 | axs[1].set_title('Output spike train')
109 | plt.show()
110 | 
--------------------------------------------------------------------------------
/Week-8/notes/notes/1 - Neurons as Classifiers and Supervised Learning.md:
--------------------------------------------------------------------------------
1 | ```
2 | The Classification Problem
3 | 
4 | Some random images are displayed.
5 | So, how do we build a classifier that can distinguish between faces & other objects?
6 | ```
7 | ![](http://geekresearchlab.net/coursera/neuro/cll-1.jpg)

8 | ![](http://geekresearchlab.net/coursera/neuro/cll-2.jpg)

9 | ![](http://geekresearchlab.net/coursera/neuro/cll-3.jpg)

10 | ![](http://geekresearchlab.net/coursera/neuro/cll-4.jpg)

11 | ![](http://geekresearchlab.net/coursera/neuro/cll-5.jpg)

12 | ![](http://geekresearchlab.net/coursera/neuro/cll-6.jpg)

13 | ![](http://geekresearchlab.net/coursera/neuro/cll-7.jpg)

14 | ![](http://geekresearchlab.net/coursera/neuro/cll-8.jpg)

15 | ![](http://geekresearchlab.net/coursera/neuro/cll-9.jpg)

16 | ![](http://geekresearchlab.net/coursera/neuro/cll-10.jpg)

17 | ![](http://geekresearchlab.net/coursera/neuro/cll-11.jpg)
18 | ```
19 | Question:
20 | Which of the following could you use to minimize the error function with respect to W & w?
21 | (Hint: Where have we seen the use of gradients before with respect to optimization?)
22 | Answer:
23 | Gradient descent
24 | Explanation:
25 | We used gradient ascent for maximizing the log posterior function in the Sparse Coding and Predictive Coding lecture.
26 | Here we use gradient descent for minimizing a function.
27 | ```
28 | ![](http://geekresearchlab.net/coursera/neuro/cll-12.jpg)
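Since the derivation itself lives in the slide images, here is a minimal, self-contained sketch of the quiz's answer: a single sigmoid "neuron" whose weights are nudged *against* the gradient of a squared-error function. The toy dataset, learning rate, and iteration count are illustrative assumptions, not the lecture's exact setup.

```python
import numpy as np

# Hypothetical toy setup (NOT the lecture's network): one sigmoid unit
# classifying 2-D points, trained by gradient descent on squared error.

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # linearly separable labels

w = np.zeros(2)   # weights
b = 0.0           # bias
eta = 0.5         # learning rate

for _ in range(500):
    out = sigmoid(X @ w + b)
    err = out - y                        # dE/d(out) for E = 0.5 * sum (out - y)^2
    grad_u = err * out * (1.0 - out)     # chain rule through the sigmoid
    w -= eta * (X.T @ grad_u) / len(X)   # descend: step AGAINST the gradient
    b -= eta * grad_u.mean()

acc = ((sigmoid(X @ w + b) > 0.5) == (y > 0.5)).mean()
```

With these assumptions the unit reaches near-perfect accuracy on the separable toy data; flipping the sign of the update would be gradient ascent, the variant used to maximize the log posterior in the sparse coding lecture.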

29 | ![](http://geekresearchlab.net/coursera/neuro/cll-13.jpg)
30 | 
--------------------------------------------------------------------------------
/Week-8/notes/notes/2 - Reinforcement learning: Predicting rewards.md:
--------------------------------------------------------------------------------
1 | ![](http://geekresearchlab.net/coursera/neuro/rl-1.jpg)

2 | ![](http://geekresearchlab.net/coursera/neuro/rl-2.jpg)

3 | ![](http://geekresearchlab.net/coursera/neuro/rl-3.jpg)

4 | ![](http://geekresearchlab.net/coursera/neuro/rl-4.jpg)

5 | ![](http://geekresearchlab.net/coursera/neuro/rl-5.jpg)

6 | ![](http://geekresearchlab.net/coursera/neuro/rl-6.jpg)

7 | ![](http://geekresearchlab.net/coursera/neuro/rl-7.jpg)

8 | ![](http://geekresearchlab.net/coursera/neuro/rl-8.jpg)

9 | ![](http://geekresearchlab.net/coursera/neuro/rl-9.jpg)

10 | ![](http://geekresearchlab.net/coursera/neuro/rl-10.jpg)

11 | ![](http://geekresearchlab.net/coursera/neuro/rl-11.jpg)
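These slides (per the lecture title) concern predicting future reward with the temporal-difference (TD) rule, V(t) ← V(t) + ε[r(t) + V(t+1) − V(t)]. As the notes are image-only, here is a hedged sketch under assumed task details: each trial is 20 time steps, a reward always arrives at step 15, and there is no discounting.

```python
import numpy as np

# Assumed task: a fixed reward at step 15 of every 20-step trial.
T = 20
reward_time = 15
eps = 0.2                # learning rate
V = np.zeros(T + 1)      # value estimate per time step; V[T] = 0 (end of trial)

for trial in range(500):
    r = np.zeros(T)
    r[reward_time] = 1.0
    for t in range(T):
        delta = r[t] + V[t + 1] - V[t]   # TD prediction error
        V[t] += eps * delta

# After learning, V rises toward 1 well before the reward arrives (the
# "prediction"), and the TD error at the reward time itself shrinks to 0.
```

Early in training the prediction error delta is large when the reward arrives; once V has converged, the fully predicted reward elicits no error — the behavior these lectures compare to dopamine responses.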

12 | ![](http://geekresearchlab.net/coursera/neuro/rl-12.jpg)
13 | 
--------------------------------------------------------------------------------
/Week-8/notes/notes/3 - Reinforcement Learning: Time actions.md:
--------------------------------------------------------------------------------
1 | ![](http://geekresearchlab.net/coursera/neuro/xxx/rlt-1.jpg)

2 | ![](http://geekresearchlab.net/coursera/neuro/xxx/rlt-2.jpg)

3 | ![](http://geekresearchlab.net/coursera/neuro/xxx/rlt-3.jpg)

4 | ![](http://geekresearchlab.net/coursera/neuro/xxx/rlt-4.jpg)

5 | ![](http://geekresearchlab.net/coursera/neuro/xxx/rlt-5.jpg)

6 | ![](http://geekresearchlab.net/coursera/neuro/xxx/rlt-6.jpg)

7 | ![](http://geekresearchlab.net/coursera/neuro/xxx/rlt-7.jpg)

8 | ![](http://geekresearchlab.net/coursera/neuro/xxx/rlt-8.jpg)

9 | ![](http://geekresearchlab.net/coursera/neuro/xxx/rlt-9.jpg)
10 | 
--------------------------------------------------------------------------------
/Week-8/notes/shared/resources.md:
--------------------------------------------------------------------------------
1 | [1] Lecture notes<br/>
2 | [2] Backpropagation Algorithm for Multilayer Networks
3 | [3] Reinforcement Learning textbook
4 | [4] Actor-critic models of brain functions
5 | [4.A.] Barto's 1995 article on the model
6 | [4.B.] Scholarpedia review article by Jim Houk
7 | [4.C.] Recent probabilistic model (Rao, 2010)
8 | [5] Reinforcement learning of autonomous helicopter flight
9 | 
--------------------------------------------------------------------------------
/Week-8/quiz/README.md:
--------------------------------------------------------------------------------
1 | ```
2 | Final Score: 10/10
3 | ```
4 | //TODO Add solutions/explanations later if possible.
5 | 
--------------------------------------------------------------------------------
/stuffs/installation.md:
--------------------------------------------------------------------------------
1 | ## Python -- NumPy and Matplotlib
2 | For Windows Users:<br/>
3 | There is an open-source Python distribution called WinPython.<br/>
4 | Make sure to install the latest version,
5 | which includes all the necessary packages
6 | (such as NumPy and matplotlib) required for the course.<br/>
7 | Also, star that repository. It deserves more stars.
8 |
9 | To test whether the installation works:<br/>
10 | Open a Python 3.4.3 shell and type the following commands...
11 | ```py
12 | >>> import numpy as np
13 | >>> import matplotlib.pyplot as plt
14 | >>> x = np.array([1,-1,2,-2,3])
15 | >>> plt.plot(x)
16 | [<matplotlib.lines.Line2D object at 0x...>]
17 | >>> plt.show()
18 | ```
19 | ![](http://geekresearchlab.net/coursera/neuro/figure_1.jpeg)
20 | ```py
21 | >>> 3/5
22 | 0.6
23 | >>> from __future__ import division
24 | >>> 3/5
25 | 0.6
26 | ```
27 | Now, all is well. (On Python 3, `3/5` is already true division; the `__future__` import is only needed on Python 2.)<br/>
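For a non-interactive version of the same sanity check (useful on a machine without a display), the snippet below uses matplotlib's Agg backend and saves the figure to a temporary file; the backend choice and output filename are incidental assumptions, not course requirements.

```python
import os
import tempfile

import numpy as np
import matplotlib
matplotlib.use("Agg")  # render without a display
import matplotlib.pyplot as plt

x = np.array([1, -1, 2, -2, 3])
fig, ax = plt.subplots()
lines = ax.plot(x)  # same zig-zag line as the interactive session
out_path = os.path.join(tempfile.gettempdir(), "figure_1.png")
fig.savefig(out_path)

# On Python 3, 3/5 is already true division; the __future__ import in the
# shell transcript above only matters on Python 2.
ok = len(lines) == 1 and 3 / 5 == 0.6 and os.path.exists(out_path)
print("all is well" if ok else "check your installation")
```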
28 | ## MATLAB
29 | A MATLAB trial license, sponsored by the Computational Neuroscience MOOC, is available until the end of the course.<br/>
30 | Alternatively, you can purchase a license directly.<br/>

31 | There is also an alternative option --- Octave.
32 | 
--------------------------------------------------------------------------------
/stuffs/my-background.txt:
--------------------------------------------------------------------------------
1 | Self-evaluation... Ignore this!
2 | 
3 | -- Had knowledge in general science (includes basic biology) [1996-2001]
4 | -- Had knowledge in biology (school level) [2001-2006]
5 | -- Had knowledge in biology (difficult level - core subject in high school) [2006-2008]
6 | 
7 | Levels:
8 | 1: Neurology (100% desire, 80% capable)
9 | 2: Cardiology (80% desire, 60% capable)
10 | 3: Reproductive System (70% desire, 50% capable)
11 | 4: Dissecting parts (60% desire, 50% capable)
12 | 5: MicroBiology (50% desire, 40% capable)
13 | 6: Nutrition (40% desire, 30% capable)
14 | 7: Bio-chemistry (30% desire, 20% capable)
15 | 8: Molecular Biology (20% desire, 40% capable)
16 | 9: Muscles and Tissues (10% desire, 10% capable)
17 | 10: Bio-related math (10% desire, 20% capable)
18 | 
19 | Have a good discrete math background; related to it:
20 | https://github.com/ashumeow/cryptography-I/blob/master/stuffs/my-background.txt
21 | 
22 | Have some programming background too.
23 | 
24 | Math appears to be one of the core subjects for this course.
25 | 
--------------------------------------------------------------------------------
/stuffs/shared-resources.md:
--------------------------------------------------------------------------------
1 | ```
2 | Shared resources during the course... To see more, navigate to stuffs/stacking
3 | ```
4 | 1. Textbook<br/>
5 | 1.a. Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems (only exercises)<br/>
6 | 1.b. Tutorial on Neural Systems Modeling<br/>
7 | 1.c. Foundations of Cellular Neurophysiology => The classic text for quantitative neurophysiology<br/>
8 | 1.d. The Biophysics of Computation => A compendium of neuronal hardware, its dynamics, and functional implications for coding.<br/>
9 | 1.e. Dynamical Systems in Neuroscience => A highly recommended introduction to nonlinear dynamics applied to neuronal excitability.<br/>
10 | 1.f. Spikes: Exploring the Neural Code => Classic introductory book on neural coding.<br/>
11 | 1.g. Brain-Computer Interfacing: An Introduction<br/>
12 |
13 | 2. Math resources -- Khan Academy<br/>
14 | 2.a. Linear Algebra<br/>
15 | 2.b. Calculus<br/>
16 | 2.c. Probability<br/>
17 |
18 | 3. A Geometric Review of Linear Algebra
19 |
20 | Useful papers:
21 | [1] Two-Dimensional Time Coding in the Auditory Brainstem
22 | [2] Selectivity for Multiple Stimulus Features in Retinal Ganglion Cells
23 | [3] Characterization of neural responses with stochastic stimuli
24 | [4] Analyzing Neural Responses to Natural Signals: Maximally Informative Dimensions
25 | [5] A Mathematical Theory of Communication
26 | [6] A Neural Substrate of Prediction and Reward
27 | 
--------------------------------------------------------------------------------
/stuffs/stacking/matlab.md:
--------------------------------------------------------------------------------
1 | [1] A Practical Introduction to Matlab<br/>
2 | [2] Matlab® Tutorial
3 | 
--------------------------------------------------------------------------------
/stuffs/stacking/other-notes.md:
--------------------------------------------------------------------------------
1 | [1] Goethe University Frankfurt<br/>
2 | [2] An introduction to feature selection
3 | [3] Paper - Different Origins of Gamma Rhythm and High-Gamma Activity in Macaque Visual Cortex
4 | [4] Models that contain the Modeling Application : Python
5 | [5] FULL TEXTBOOK [PDF] Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems
6 | [6] Paul's online Math notes
7 |
8 | Other stuff:<br/>
9 | [1] Stephen Wolfram's publications
10 | --------------------------------------------------------------------------------