├── Learning-Detection-and-Tracking ├── Learning and Detection │ ├── SubmissionBallLocation.mat │ ├── SubmissionBallSegmentation.mat │ ├── _d544576262dde3240e03488364d81cae_CourseraProgWeek1Instruction.pdf │ ├── detectBall.m │ ├── eval_progW1.p │ ├── example_bw.m │ ├── example_rgb.m │ ├── example_test.m │ ├── runeval.m │ ├── samples.mat │ ├── test │ │ ├── test001.mat │ │ ├── test002.mat │ │ ├── test003.mat │ │ └── test004.mat │ └── train │ │ ├── 001.png │ │ ├── 002.png │ │ ├── 003.png │ │ ├── 004.png │ │ ├── 005.png │ │ ├── 006.png │ │ ├── 007.png │ │ ├── 008.png │ │ ├── 009.png │ │ ├── 010.png │ │ ├── 011.png │ │ ├── 012.png │ │ ├── 013.png │ │ ├── 014.png │ │ ├── 015.png │ │ ├── 016.png │ │ ├── 017.png │ │ ├── 018.png │ │ └── 019.png └── Tracking │ ├── SubmissionFilter.mat │ ├── _42dffae470c61153d40215a304ab2bd3_CourseraProgWeek2Instruction_same_copy__2.pdf │ ├── eval_progW2.p │ ├── example_test.m │ ├── kalmanFilter.m │ ├── runeval.m │ ├── solution5.mat │ ├── solution6.mat │ ├── testing.mat │ ├── training5.mat │ └── training6.mat ├── Occupancy Grid Mapping ├── SubmissionMap1.mat ├── SubmissionMap2.mat ├── _d21f28106d84e29d4ed77b1d7e7a7a5e_CourseraProgWeek3Instruction.pdf ├── bresenham.p ├── eval_progW3.p ├── example_bresenham.m ├── example_lidar.m ├── example_test.m ├── myMap.fig ├── myMap.png ├── occGridMapping.m ├── practice.mat ├── runeval.m ├── test.mat └── yourTestResult.mat ├── Pose tracking ├── SubmissionLocalization.mat ├── _52665b49b13fca9e5103efe032c36977_CourseraProgWeek4Instruction.pdf ├── bresenham.p ├── eval_progW4.p ├── example_test.m ├── particleLocalization.m ├── particleLocalization1.m ├── practice-answer.mat ├── practice.mat ├── runeval.m ├── testing.mat └── untitled.png └── README.md /Learning-Detection-and-Tracking/Learning and Detection/SubmissionBallLocation.mat: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kaku289/Robotics-Estimation-and-Learning/bc3a910870317023f6dbe89e266248bb47ee25ff/Learning-Detection-and-Tracking/Learning and Detection/SubmissionBallLocation.mat -------------------------------------------------------------------------------- /Learning-Detection-and-Tracking/Learning and Detection/SubmissionBallSegmentation.mat: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kaku289/Robotics-Estimation-and-Learning/bc3a910870317023f6dbe89e266248bb47ee25ff/Learning-Detection-and-Tracking/Learning and Detection/SubmissionBallSegmentation.mat -------------------------------------------------------------------------------- /Learning-Detection-and-Tracking/Learning and Detection/_d544576262dde3240e03488364d81cae_CourseraProgWeek1Instruction.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kaku289/Robotics-Estimation-and-Learning/bc3a910870317023f6dbe89e266248bb47ee25ff/Learning-Detection-and-Tracking/Learning and Detection/_d544576262dde3240e03488364d81cae_CourseraProgWeek1Instruction.pdf -------------------------------------------------------------------------------- /Learning-Detection-and-Tracking/Learning and Detection/detectBall.m: -------------------------------------------------------------------------------- 1 | % Robotics: Estimation and Learning 2 | % WEEK 1 3 | % 4 | % Complete this function following the instruction. 
5 | function [segI, loc] = detectBall(I) 6 | % function [segI, loc] = detectBall(I) 7 | % 8 | % INPUT 9 | % I 120x160x3 numeric array 10 | % 11 | % OUTPUT 12 | % segI 120x160 numeric array 13 | % loc 1x2 or 2x1 numeric array 14 | 15 | 16 | 17 | %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% 18 | % Hard code your learned model parameters here 19 | % 20 | mu = [150.2015 144.7533 60.0073]'; 21 | covar = [166.7231 109.4645 -173.8674; 22 | 109.4645 126.5266 -158.6514; 23 | -173.8674 -158.6514 303.4564]; 24 | thre = 1/((2*pi)^1.5*det(covar)^0.5); 25 | 26 | 27 | %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% 28 | % Find ball-color pixels using your model 29 | % 30 | I_d = double(I); 31 | p = zeros(size(I_d,1),size(I_d,2)); 32 | for ct_r=1:size(I_d,1) 33 | for ct_c=1:size(I_d,2) 34 | dummy = I_d(ct_r,ct_c,:); 35 | difference = dummy(:)-mu; 36 | p(ct_r,ct_c) = exp(-0.5*difference'*((covar)\difference))/((2*pi)^1.5*det(covar)^0.5); 37 | % segI(ct_r,ct_c) = p>thre; 38 | end 39 | end 40 | 41 | %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% 42 | % Do more processing to segment out the right cluster of pixels. 43 | % You may use the following functions. 44 | % bwconncomp 45 | % regionprops 46 | % Please see example_bw.m if you need example code. 47 | 48 | 49 | %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% 50 | % Compute the location of the ball center 51 | % 52 | 53 | segI = p>thre/10; 54 | % figure; 55 | % imshow(segI); 56 | % figure; 57 | % imshow(I); 58 | 59 | % Compute connected components 60 | CC = bwconncomp(segI); 61 | numPixels = cellfun(@numel,CC.PixelIdxList); 62 | [biggest,idx] = max(numPixels); 63 | 64 | % show the centroid 65 | S = regionprops(CC,'Centroid'); 66 | loc = S(idx).Centroid; 67 | % 68 | % Note: In this assignment, the center of the segmented ball area will be considered for grading. 69 | % (You don't need to consider the whole ball shape if the ball is occluded.)
70 | 71 | end 72 | -------------------------------------------------------------------------------- /Learning-Detection-and-Tracking/Learning and Detection/eval_progW1.p: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kaku289/Robotics-Estimation-and-Learning/bc3a910870317023f6dbe89e266248bb47ee25ff/Learning-Detection-and-Tracking/Learning and Detection/eval_progW1.p -------------------------------------------------------------------------------- /Learning-Detection-and-Tracking/Learning and Detection/example_bw.m: -------------------------------------------------------------------------------- 1 | % Robotics: Estimation and Learning 2 | % WEEK 1 3 | % 4 | % This is example code for bwconncomp 5 | % http://www.mathworks.com/help/images/ref/regionprops.html 6 | a = imread('circlesBrightDark.png'); 7 | bw = a < 100; 8 | figure, imshow(bw); 9 | title('Image with Circles') 10 | 11 | % create new empty binary image 12 | bw_biggest = false(size(bw)); 13 | 14 | % http://www.mathworks.com/help/images/ref/bwconncomp.html 15 | CC = bwconncomp(bw); 16 | numPixels = cellfun(@numel,CC.PixelIdxList); 17 | [biggest,idx] = max(numPixels); 18 | bw_biggest(CC.PixelIdxList{idx}) = true; 19 | figure, 20 | imshow(bw_biggest); hold on; 21 | 22 | % show the centroid 23 | % http://www.mathworks.com/help/images/ref/regionprops.html 24 | S = regionprops(CC,'Centroid'); 25 | loc = S(idx).Centroid; 26 | plot(loc(1), loc(2),'r+'); -------------------------------------------------------------------------------- /Learning-Detection-and-Tracking/Learning and Detection/example_rgb.m: -------------------------------------------------------------------------------- 1 | % Robotics: Estimation and Learning 2 | % WEEK 1 3 | % 4 | % This is example code for collecting ball sample colors using roipoly 5 | close all 6 | 7 | imagepath = './train'; 8 | Samples = []; 9 | for k=1:15 10 | % Load image 11 | I = imread(sprintf('%s/%03d.png',imagepath,k)); 12 | 13 | % You may consider color spaces other than RGB 14 | R = I(:,:,1); 15 | G = I(:,:,2); 16 | B = I(:,:,3); 17 | 18 | % Collect samples 19 | disp(''); 20 | disp('INSTRUCTION: Click along the boundary of the ball. Double-click when you get back to the initial point.') 21 | disp('INSTRUCTION: You can maximize the window size of the figure for precise clicks.') 22 | figure(1), 23 | mask = roipoly(I); 24 | figure(2), imshow(mask); title('Mask'); 25 | sample_ind = find(mask > 0); 26 | 27 | R = R(sample_ind); 28 | G = G(sample_ind); 29 | B = B(sample_ind); 30 | 31 | Samples = [Samples; [R G B]]; 32 | 33 | disp('INSTRUCTION: Press any key to continue. (Ctrl+c to exit)') 34 | pause 35 | end 36 | 37 | % visualize the sample distribution 38 | figure, 39 | scatter3(Samples(:,1),Samples(:,2),Samples(:,3),'.'); 40 | title('Pixel Color Distribution'); 41 | xlabel('Red'); 42 | ylabel('Green'); 43 | zlabel('Blue'); 44 | 45 | %% 46 | %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% 47 | % [IMPORTANT] 48 | % 49 | % Now choose your model type and estimate the parameters (mu and Sigma) from 50 | % the sample data.
51 | % 52 | % Trying MLE 53 | Samples_d = double(Samples); 54 | mu = mean(Samples_d); 55 | difference = Samples_d - ones(length(Samples_d),1)*mu; 56 | dummy = zeros(length(mu),length(mu)); 57 | for ct = 1:length(Samples_d) 58 | dummy = dummy + difference(ct,:)'*difference(ct,:); 59 | end 60 | covar = dummy/length(Samples_d); 61 | -------------------------------------------------------------------------------- /Learning-Detection-and-Tracking/Learning and Detection/example_test.m: -------------------------------------------------------------------------------- 1 | % Robotics: Estimation and Learning 2 | % WEEK 1 3 | % 4 | % This script is to help run your algorithm and visualize the result from it. 5 | close all 6 | clear; 7 | 8 | imagepath = './train'; 9 | figure(1); figure(2); 10 | for k=1:19 11 | % Load image 12 | I = imread(sprintf('%s/%03d.png',imagepath,k)); 13 | 14 | % Implement your own detectBall function 15 | [segI, loc] = detectBall(I); 16 | 17 | figure(1); 18 | imshow(segI); hold on; 19 | plot(loc(1), loc(2), '+b','MarkerSize',7); 20 | 21 | figure(2); 22 | imshow(I); 23 | disp('Press any key to continue. (Ctrl+c to exit)') 24 | pause(1) 25 | end 26 | 27 | 28 | -------------------------------------------------------------------------------- /Learning-Detection-and-Tracking/Learning and Detection/runeval.m: -------------------------------------------------------------------------------- 1 | imagepath = './test'; % This is the folder where test images are saved. Change the path if needed. 2 | eval_progW1(imagepath) 3 | 4 | 5 | -------------------------------------------------------------------------------- /Learning-Detection-and-Tracking/Learning and Detection/samples.mat: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kaku289/Robotics-Estimation-and-Learning/bc3a910870317023f6dbe89e266248bb47ee25ff/Learning-Detection-and-Tracking/Learning and Detection/samples.mat -------------------------------------------------------------------------------- /Learning-Detection-and-Tracking/Learning and Detection/test/test001.mat: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kaku289/Robotics-Estimation-and-Learning/bc3a910870317023f6dbe89e266248bb47ee25ff/Learning-Detection-and-Tracking/Learning and Detection/test/test001.mat -------------------------------------------------------------------------------- /Learning-Detection-and-Tracking/Learning and Detection/test/test002.mat: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kaku289/Robotics-Estimation-and-Learning/bc3a910870317023f6dbe89e266248bb47ee25ff/Learning-Detection-and-Tracking/Learning and Detection/test/test002.mat -------------------------------------------------------------------------------- /Learning-Detection-and-Tracking/Learning and Detection/test/test003.mat: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kaku289/Robotics-Estimation-and-Learning/bc3a910870317023f6dbe89e266248bb47ee25ff/Learning-Detection-and-Tracking/Learning and Detection/test/test003.mat -------------------------------------------------------------------------------- /Learning-Detection-and-Tracking/Learning and Detection/test/test004.mat: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/kaku289/Robotics-Estimation-and-Learning/bc3a910870317023f6dbe89e266248bb47ee25ff/Learning-Detection-and-Tracking/Learning and Detection/test/test004.mat -------------------------------------------------------------------------------- /Learning-Detection-and-Tracking/Learning and Detection/train/001.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kaku289/Robotics-Estimation-and-Learning/bc3a910870317023f6dbe89e266248bb47ee25ff/Learning-Detection-and-Tracking/Learning and Detection/train/001.png -------------------------------------------------------------------------------- /Learning-Detection-and-Tracking/Learning and Detection/train/002.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kaku289/Robotics-Estimation-and-Learning/bc3a910870317023f6dbe89e266248bb47ee25ff/Learning-Detection-and-Tracking/Learning and Detection/train/002.png -------------------------------------------------------------------------------- /Learning-Detection-and-Tracking/Learning and Detection/train/003.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kaku289/Robotics-Estimation-and-Learning/bc3a910870317023f6dbe89e266248bb47ee25ff/Learning-Detection-and-Tracking/Learning and Detection/train/003.png -------------------------------------------------------------------------------- /Learning-Detection-and-Tracking/Learning and Detection/train/004.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kaku289/Robotics-Estimation-and-Learning/bc3a910870317023f6dbe89e266248bb47ee25ff/Learning-Detection-and-Tracking/Learning and Detection/train/004.png -------------------------------------------------------------------------------- /Learning-Detection-and-Tracking/Learning and Detection/train/005.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kaku289/Robotics-Estimation-and-Learning/bc3a910870317023f6dbe89e266248bb47ee25ff/Learning-Detection-and-Tracking/Learning and Detection/train/005.png -------------------------------------------------------------------------------- /Learning-Detection-and-Tracking/Learning and Detection/train/006.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kaku289/Robotics-Estimation-and-Learning/bc3a910870317023f6dbe89e266248bb47ee25ff/Learning-Detection-and-Tracking/Learning and Detection/train/006.png -------------------------------------------------------------------------------- /Learning-Detection-and-Tracking/Learning and Detection/train/007.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kaku289/Robotics-Estimation-and-Learning/bc3a910870317023f6dbe89e266248bb47ee25ff/Learning-Detection-and-Tracking/Learning and Detection/train/007.png -------------------------------------------------------------------------------- /Learning-Detection-and-Tracking/Learning and Detection/train/008.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kaku289/Robotics-Estimation-and-Learning/bc3a910870317023f6dbe89e266248bb47ee25ff/Learning-Detection-and-Tracking/Learning and Detection/train/008.png 
-------------------------------------------------------------------------------- /Learning-Detection-and-Tracking/Learning and Detection/train/009.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kaku289/Robotics-Estimation-and-Learning/bc3a910870317023f6dbe89e266248bb47ee25ff/Learning-Detection-and-Tracking/Learning and Detection/train/009.png -------------------------------------------------------------------------------- /Learning-Detection-and-Tracking/Learning and Detection/train/010.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kaku289/Robotics-Estimation-and-Learning/bc3a910870317023f6dbe89e266248bb47ee25ff/Learning-Detection-and-Tracking/Learning and Detection/train/010.png -------------------------------------------------------------------------------- /Learning-Detection-and-Tracking/Learning and Detection/train/011.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kaku289/Robotics-Estimation-and-Learning/bc3a910870317023f6dbe89e266248bb47ee25ff/Learning-Detection-and-Tracking/Learning and Detection/train/011.png -------------------------------------------------------------------------------- /Learning-Detection-and-Tracking/Learning and Detection/train/012.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kaku289/Robotics-Estimation-and-Learning/bc3a910870317023f6dbe89e266248bb47ee25ff/Learning-Detection-and-Tracking/Learning and Detection/train/012.png -------------------------------------------------------------------------------- /Learning-Detection-and-Tracking/Learning and Detection/train/013.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kaku289/Robotics-Estimation-and-Learning/bc3a910870317023f6dbe89e266248bb47ee25ff/Learning-Detection-and-Tracking/Learning and Detection/train/013.png -------------------------------------------------------------------------------- /Learning-Detection-and-Tracking/Learning and Detection/train/014.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kaku289/Robotics-Estimation-and-Learning/bc3a910870317023f6dbe89e266248bb47ee25ff/Learning-Detection-and-Tracking/Learning and Detection/train/014.png -------------------------------------------------------------------------------- /Learning-Detection-and-Tracking/Learning and Detection/train/015.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kaku289/Robotics-Estimation-and-Learning/bc3a910870317023f6dbe89e266248bb47ee25ff/Learning-Detection-and-Tracking/Learning and Detection/train/015.png -------------------------------------------------------------------------------- /Learning-Detection-and-Tracking/Learning and Detection/train/016.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kaku289/Robotics-Estimation-and-Learning/bc3a910870317023f6dbe89e266248bb47ee25ff/Learning-Detection-and-Tracking/Learning and Detection/train/016.png -------------------------------------------------------------------------------- /Learning-Detection-and-Tracking/Learning and Detection/train/017.png: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/kaku289/Robotics-Estimation-and-Learning/bc3a910870317023f6dbe89e266248bb47ee25ff/Learning-Detection-and-Tracking/Learning and Detection/train/017.png -------------------------------------------------------------------------------- /Learning-Detection-and-Tracking/Learning and Detection/train/018.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kaku289/Robotics-Estimation-and-Learning/bc3a910870317023f6dbe89e266248bb47ee25ff/Learning-Detection-and-Tracking/Learning and Detection/train/018.png -------------------------------------------------------------------------------- /Learning-Detection-and-Tracking/Learning and Detection/train/019.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kaku289/Robotics-Estimation-and-Learning/bc3a910870317023f6dbe89e266248bb47ee25ff/Learning-Detection-and-Tracking/Learning and Detection/train/019.png -------------------------------------------------------------------------------- /Learning-Detection-and-Tracking/Tracking/SubmissionFilter.mat: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kaku289/Robotics-Estimation-and-Learning/bc3a910870317023f6dbe89e266248bb47ee25ff/Learning-Detection-and-Tracking/Tracking/SubmissionFilter.mat -------------------------------------------------------------------------------- /Learning-Detection-and-Tracking/Tracking/_42dffae470c61153d40215a304ab2bd3_CourseraProgWeek2Instruction_same_copy__2.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kaku289/Robotics-Estimation-and-Learning/bc3a910870317023f6dbe89e266248bb47ee25ff/Learning-Detection-and-Tracking/Tracking/_42dffae470c61153d40215a304ab2bd3_CourseraProgWeek2Instruction_same_copy__2.pdf -------------------------------------------------------------------------------- /Learning-Detection-and-Tracking/Tracking/eval_progW2.p: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kaku289/Robotics-Estimation-and-Learning/bc3a910870317023f6dbe89e266248bb47ee25ff/Learning-Detection-and-Tracking/Tracking/eval_progW2.p -------------------------------------------------------------------------------- /Learning-Detection-and-Tracking/Tracking/example_test.m: -------------------------------------------------------------------------------- 1 | % Robotics: Estimation and Learning 2 | % WEEK 2 3 | % 4 | % This script is to help run your algorithm and visualize the result from it. 5 | 6 | %% Load data 7 | clear all; 8 | close all; 9 | clc; clear; 10 | load training6.mat 11 | % This will load three variables: t, ball, rgb 12 | % [1] t is K-by-1 array containing time in second. (K=3701) 13 | % You may not use time info for implementation. 14 | % [2] ball is a 2-by-K array of ball position readings. 15 | % e.g. ball(1,k) is the x position at time index k, and ball(2,k) is 16 | % the y position at time index k 17 | % [3] rgb is a cell array of images containing the image of the scene at 18 | % the time of image recording. 
Only used to help you visualize the 19 | % scene; not used in the function 20 | 21 | %% Plot the path of the ball 22 | figure(1); 23 | clf; 24 | plot(ball(1, :), ball(2, :), 'bo-'); 25 | hold on; 26 | % End at red 27 | plot(ball(1, end), ball(2, end), 's', ... 28 | 'MarkerSize', 10, 'MarkerEdgeColor', [.5 0 0], 'MarkerFaceColor', 'r'); 29 | % start at green 30 | plot(ball(1, 1), ball(2, 1), 's', ... 31 | 'MarkerSize', 10, 'MarkerEdgeColor', [0 .5 0], 'MarkerFaceColor', 'g'); 32 | hold off; 33 | axis equal; 34 | title('Ball Position tracks'); 35 | xlabel('X (meters)'); 36 | ylabel('Y (meters)'); 37 | 38 | %% Run algorithm 39 | % Call your mapping function here. 40 | % Running time could take long depending on the efficiency of your code. 41 | % This overlays the predicted positions against the observed positions 42 | state = [0,0,0,0]; 43 | last_t = -1; 44 | N = numel(t); 45 | myPredictions = zeros(2, N); 46 | param = {}; 47 | for i=1:N 48 | [ px, py, state, param ] = kalmanFilter( t(i), ball(1,i), ball(2,i), state, param, last_t); 49 | if numel(state)~=4 50 | error('Your state should be four dimensions.'); 51 | end 52 | last_t = t(i); 53 | myPredictions(1, i) = px; 54 | myPredictions(2, i) = py; 55 | end 56 | clear px py; 57 | 58 | %% Overlay the predictions 59 | figure(1); 60 | hold on; 61 | plot(myPredictions(1, :), myPredictions(2, :), 'k+-'); 62 | hold off; 63 | 64 | %% Show the error 65 | nSkip = 10; 66 | myError = myPredictions(:, 1:end-nSkip) - ball(:, 1+nSkip:end); 67 | myError_dist = sqrt(myError(1,:).^2 + myError(2,:).^2); 68 | myError_mean = mean(myError_dist); 69 | figure(2); 70 | clf; 71 | plot(myError_dist); 72 | title('Prediction Error Over Time'); 73 | xlabel('Frame'); 74 | xlim([1, numel(myError_dist)]); 75 | ylabel('Error (meters)'); 76 | legend(sprintf('Your Prediction: %.2f mean', myError_mean)); 77 | 78 | %% Load the solution 79 | load solution6.mat 80 | 81 | % Error 82 | error = predictions(:, 1:end-nSkip) - ball(:, 1+nSkip:end); 83 | error_dist = sqrt(error(1,:).^2 + error(2,:).^2); 84 | error_mean = mean(error_dist); 85 | figure(2); 86 | hold on; 87 | plot(error_dist); 88 | hold off; 89 | %title(sprintf('Kalman Prediction Error: %.2f mean', error_mean)); 90 | legend(sprintf('Your Prediction: %.2f mean', myError_mean),... 91 | sprintf('Kalman Prediction: %.2f mean', error_mean)); 92 | 93 | figure(1); 94 | hold on; 95 | plot(predictions(1, :), predictions(2, :), 'mo-'); 96 | hold off; 97 | legend('Observed','End','Start','Your Prediction','Kalman Prediction'); 98 | 99 | % figure(20); 100 | % clf;hold on; 101 | % plot(ball(1, 1+nSkip:end));plot(predictions(1, 1:end-nSkip)); 102 | -------------------------------------------------------------------------------- /Learning-Detection-and-Tracking/Tracking/kalmanFilter.m: -------------------------------------------------------------------------------- 1 | function [ predictx, predicty, state, param ] = kalmanFilter( t, x, y, state, param, previous_t ) 2 | %UNTITLED Summary of this function goes here 3 | % Four dimensional state:[position_x, position_y, velocity_x, 4 | % velocity_y]' 5 | 6 | %% Place parameters like covarainces, etc. 
here: 7 | % P = eye(4) 8 | % R = eye(2) 9 | 10 | % Check if this is the first time running this function 11 | if previous_t<0 12 | state = [x, y, 0, 0]'; 13 | param.P = diag([10,10,10000,10000]); 14 | param.Q = diag([0.2,0.2,1,1]); 15 | param.R = 0.5*eye(2); 16 | param.Phi = [1 0 0.0334 0; 17 | 0 1 0 0.0334; 18 | 0 0 1 0; 19 | 0 0 0 1]; 20 | param.Gamma = [0.0334 0 0.0006 0; 21 | 0 0.0334 0 0.0006; 22 | 0 0 0.0334 0; 23 | 0 0 0 0.0334]; 24 | param.C = [1 0 0 0; 25 | 0 1 0 0]; 26 | predictx = x; 27 | predicty = y; 28 | param.dt = 0.0334; 29 | return; 30 | end 31 | 32 | %% Kalman filter updates 33 | % Prediction (time update) followed by correction (measurement update) 34 | xk1_k = param.Phi*state; 35 | Pk1_k = param.Phi*param.P*param.Phi'+param.Gamma*param.Q*param.Gamma'; 36 | K = Pk1_k*param.C'/(param.R+param.C*Pk1_k*param.C'); 37 | state = xk1_k+K*([x;y] - param.C*xk1_k); 38 | param.P = Pk1_k - K*param.C*Pk1_k; 39 | % Alternative naive estimate (kept for reference): 40 | % % vx = (x - state(1)) / (t - previous_t); 41 | % % vy = (y - state(2)) / (t - previous_t); 42 | % Predict 330ms into the future 43 | % predictx = state(1) + state(3) * 0.330; 44 | % predicty = state(2) + state(4) * 0.330; 45 | predictx = state(1); 46 | predicty = state(2); 47 | % % predictx = x + vx * 0.330; 48 | % % predicty = y + vy * 0.330; 49 | % % % State is a four dimensional element 50 | % % state = [x, y, vx, vy]; 51 | end 52 | -------------------------------------------------------------------------------- /Learning-Detection-and-Tracking/Tracking/runeval.m: -------------------------------------------------------------------------------- 1 | testpath = '.'; % This is the folder where testing.mat is saved. Change the path if needed. 2 | eval_progW2(testpath) -------------------------------------------------------------------------------- /Learning-Detection-and-Tracking/Tracking/solution5.mat: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kaku289/Robotics-Estimation-and-Learning/bc3a910870317023f6dbe89e266248bb47ee25ff/Learning-Detection-and-Tracking/Tracking/solution5.mat -------------------------------------------------------------------------------- /Learning-Detection-and-Tracking/Tracking/solution6.mat: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kaku289/Robotics-Estimation-and-Learning/bc3a910870317023f6dbe89e266248bb47ee25ff/Learning-Detection-and-Tracking/Tracking/solution6.mat -------------------------------------------------------------------------------- /Learning-Detection-and-Tracking/Tracking/testing.mat: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kaku289/Robotics-Estimation-and-Learning/bc3a910870317023f6dbe89e266248bb47ee25ff/Learning-Detection-and-Tracking/Tracking/testing.mat -------------------------------------------------------------------------------- /Learning-Detection-and-Tracking/Tracking/training5.mat: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kaku289/Robotics-Estimation-and-Learning/bc3a910870317023f6dbe89e266248bb47ee25ff/Learning-Detection-and-Tracking/Tracking/training5.mat -------------------------------------------------------------------------------- /Learning-Detection-and-Tracking/Tracking/training6.mat: --------------------------------------------------------------------------------
https://raw.githubusercontent.com/kaku289/Robotics-Estimation-and-Learning/bc3a910870317023f6dbe89e266248bb47ee25ff/Learning-Detection-and-Tracking/Tracking/training6.mat -------------------------------------------------------------------------------- /Occupancy Grid Mapping/SubmissionMap1.mat: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kaku289/Robotics-Estimation-and-Learning/bc3a910870317023f6dbe89e266248bb47ee25ff/Occupancy Grid Mapping/SubmissionMap1.mat -------------------------------------------------------------------------------- /Occupancy Grid Mapping/SubmissionMap2.mat: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kaku289/Robotics-Estimation-and-Learning/bc3a910870317023f6dbe89e266248bb47ee25ff/Occupancy Grid Mapping/SubmissionMap2.mat -------------------------------------------------------------------------------- /Occupancy Grid Mapping/_d21f28106d84e29d4ed77b1d7e7a7a5e_CourseraProgWeek3Instruction.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kaku289/Robotics-Estimation-and-Learning/bc3a910870317023f6dbe89e266248bb47ee25ff/Occupancy Grid Mapping/_d21f28106d84e29d4ed77b1d7e7a7a5e_CourseraProgWeek3Instruction.pdf -------------------------------------------------------------------------------- /Occupancy Grid Mapping/bresenham.p: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kaku289/Robotics-Estimation-and-Learning/bc3a910870317023f6dbe89e266248bb47ee25ff/Occupancy Grid Mapping/bresenham.p -------------------------------------------------------------------------------- /Occupancy Grid Mapping/eval_progW3.p: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kaku289/Robotics-Estimation-and-Learning/bc3a910870317023f6dbe89e266248bb47ee25ff/Occupancy Grid Mapping/eval_progW3.p -------------------------------------------------------------------------------- /Occupancy Grid Mapping/example_bresenham.m: -------------------------------------------------------------------------------- 1 | % Robotics: Estimation and Learning 2 | % WEEK 3 3 | % 4 | % This script is to show how to use bresenham. 5 | map = zeros(30,30); 6 | orig = [10,5]; % start point 7 | occ = [20,20]; % end point 8 | % get cells in between 9 | [freex, freey] = bresenham(orig(1),orig(2),occ(1),occ(2)); 10 | % convert to 1d index 11 | free = sub2ind(size(map),freey,freex); 12 | % set end point value 13 | map(occ(2),occ(1)) = 3; 14 | % set free cell values 15 | map(free) = 1; 16 | 17 | figure(1), 18 | imagesc(map); hold on; 19 | plot(orig(1),orig(2),'rx','LineWidth',3); % indicate start point 20 | axis equal; 21 | -------------------------------------------------------------------------------- /Occupancy Grid Mapping/example_lidar.m: -------------------------------------------------------------------------------- 1 | clear all; 2 | close all; 3 | 4 | load practice.mat 5 | % This will load four variables: ranges, scanAngles, t, pose 6 | % [1] t is K-by-1 array containing time in second. (K=3701) 7 | % You may not need this time info for implementation. 8 | % [2] ranges is 1081-by-K lidar sensor readings. 9 | % e.g. ranges(:,k) is the lidar measurement at time index k. 
10 | % [3] scanAngles is 1081-by-1 array containing at what angles (in radian) the 1081-by-1 lidar 11 | % values ranges(:,k) were measured. This holds for any time index k. The 12 | % angles are with respect to the body coordinate frame. 13 | % [4] pose is 3-by-K array containing the pose of the mobile robot over time. 14 | % e.g. pose(:,k) is the [x,y,theta(in radian)] at time index k. 15 | 16 | lidar_local = [ranges(:,1).*cos(scanAngles) -ranges(:,1).*sin(scanAngles)]; 17 | 18 | figure, 19 | plot(0,0,'rs'); hold on; 20 | plot(lidar_local(:,1),lidar_local(:,2),'.-'); 21 | axis equal; 22 | set(gca,'YDir','reverse'); 23 | xlabel('x'); 24 | ylabel('y'); 25 | grid on; 26 | title('Lidar measurement in the body frame'); 27 | 28 | % Note: There are some noise close to the robot, but they should not affect 29 | % the mapping result. 30 | -------------------------------------------------------------------------------- /Occupancy Grid Mapping/example_test.m: -------------------------------------------------------------------------------- 1 | % Robotics: Estimation and Learning 2 | % WEEK 3 3 | % 4 | % This script is to help run your algorithm and visualize the result from it. 5 | % 6 | % Please see example_lidar first to understand the lidar measurements, 7 | % and see example_bresenham to understand how to use it. 8 | clear all; 9 | close all; 10 | 11 | load practice.mat 12 | % This will load four variables: ranges, scanAngles, t, pose 13 | % [1] t is K-by-1 array containing time in second. (K=3701) 14 | % You may not use time info for implementation. 15 | % [2] ranges is 1081-by-K lidar sensor readings. 16 | % e.g. ranges(:,k) is the lidar measurement (in meter) at time index k. 17 | % [3] scanAngles is 1081-by-1 array containing at what angles (in radian) the 1081-by-1 lidar 18 | % values ranges(:,k) were measured. This holds for any time index k. The 19 | % angles are with respect to the body coordinate frame. 20 | % [4] pose is 3-by-K array containing the pose of the mobile robot over time. 21 | % e.g. pose(:,k) is the [x(meter),y(meter),theta(in radian)] at time index k. 22 | 23 | % 1. Decide map resolution, i.e., the number of grids for 1 meter. 24 | param.resol = 25; 25 | 26 | % 2. Decide the initial map size in pixels 27 | param.size = [900, 900]; 28 | 29 | % 3. Indicate where you will put the origin in pixels 30 | param.origin = [700,600]'; 31 | 32 | % 4. Log-odd parameters 33 | param.lo_occ = 1; 34 | param.lo_free = 0.5; 35 | param.lo_max = 100; 36 | param.lo_min = -100; 37 | 38 | 39 | % Call your mapping function here. 40 | % Running time could take long depending on the efficiency of your code. 41 | % For a quicker test, you may take some hundreds frames as input arguments as 42 | % shown. 
43 | myMap = occGridMapping(ranges(:,1:end), scanAngles, pose(:,1:end), param); 44 | 45 | % The final grid map: 46 | figure, 47 | imagesc(myMap); 48 | % colormap('gray'); 49 | axis equal; 50 | -------------------------------------------------------------------------------- /Occupancy Grid Mapping/myMap.fig: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kaku289/Robotics-Estimation-and-Learning/bc3a910870317023f6dbe89e266248bb47ee25ff/Occupancy Grid Mapping/myMap.fig -------------------------------------------------------------------------------- /Occupancy Grid Mapping/myMap.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kaku289/Robotics-Estimation-and-Learning/bc3a910870317023f6dbe89e266248bb47ee25ff/Occupancy Grid Mapping/myMap.png -------------------------------------------------------------------------------- /Occupancy Grid Mapping/occGridMapping.m: -------------------------------------------------------------------------------- 1 | % Robotics: Estimation and Learning 2 | % WEEK 3 3 | % 4 | % Complete this function following the instruction. 5 | function myMap = occGridMapping(ranges, scanAngles, pose, param) 6 | 7 | 8 | %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%5 9 | % Parameters 10 | % 11 | % the number of grids for 1 meter. 12 | myResol = param.resol; 13 | % the initial map size in pixels 14 | myMap = zeros(param.size); 15 | % the origin of the map in pixels 16 | myorigin = param.origin; 17 | % 18 | % % 4. Log-odd parameters 19 | lo_occ = param.lo_occ; 20 | lo_free = param.lo_free; 21 | lo_max = param.lo_max; 22 | lo_min = param.lo_min; 23 | 24 | % figure(1); 25 | % imagesc(myMap); hold on; 26 | 27 | 28 | N = length(pose); 29 | M = length(scanAngles); % No. 
of rays 30 | robotGrid = []; 31 | 32 | for j = 1:N % for each time, 33 | 34 | 35 | robot_grid = ceil([pose(1,j);pose(2,j)]*myResol)+ myorigin; 36 | 37 | % free = sub2ind(size(myMap),robot_grid(2),robot_grid(1)); 38 | % myMap(free) = 100; 39 | % Find grids hit by the rays (in the grid map coordinate) 40 | xocc = ranges(:,j).*cos(pose(3,j)+scanAngles) + pose(1,j); 41 | yocc = -ranges(:,j).*sin(pose(3,j)+scanAngles) + pose(2,j); 42 | 43 | 44 | % for ct=1:length(scanAngles) % for each scanAngle 45 | % dummy = [ranges(ct,j)*cos(pose(3,j)+scanAngles(ct)); -ranges(ct,j)*sin(pose(3,j)+scanAngles(ct))] + [pose(1,j);pose(2,j)]; 46 | % occ(:,ct) = ceil(dummy/myResol) + myorigin; 47 | % 48 | % % get free cells in between 49 | % [freex, freey] = bresenham(robot_grid(1),robot_grid(2),occ(1,ct),occ(2,ct)); 50 | % % convert to 1d index 51 | % free = sub2ind(size(myMap),freey,freex); 52 | % % set end point value 53 | % myMap(occ(2,ct),occ(1,ct)) = myMap(occ(2,ct),occ(1,ct))+lo_occ; 54 | % % set free cell values 55 | % myMap(free) = myMap(free)-lo_free; 56 | % 57 | % % Saturate the log-odd values 58 | % myMap(myMap>lo_max) = lo_max; 59 | % myMap(myMaplo_max) = lo_max; 81 | myMap(myMap size(map,2) | occ_y_ > size(map,1); 66 | 67 | occ_x_(del_occ) = []; 68 | occ_y_(del_occ) = []; 69 | 70 | 71 | occ_index = sub2ind(size(map),occ_y_',occ_x_'); 72 | %disp(sum(sum(map(occ_index) >= 0.5))); 73 | %disp(sum(sum(map(occ_index) < 0.5))); 74 | w = w + sum(sum(map(occ_index) >= 0.5)) * 10; 75 | w = w - sum(sum(map(occ_index) < -0.2)) * 2; 76 | 77 | % x_o = ranges(angle,j) * cos(scanAngles(angle,1) + P(3,m)) + P(1,m); 78 | % y_o = -1*ranges(angle,j) * sin(scanAngles(angle,1) + P(3,m)) + P(2,m); 79 | % 80 | % occ= [ceil(x_o*r)+myOrigin(1) ceil(y_o*r) + myOrigin(2)]; 81 | % car = [ceil(P(1,m)*r) + myOrigin(1) ceil(P(2,m)*r) + myOrigin(2)]; 82 | % 83 | % if occ(2)>0 && occ(1)>0 && occ(2) < size(map,1)+1 && occ(1) < size(map,2)+1 84 | % w = w + (map(occ(2),occ(1)) > 0.5) * 10; 85 | % w = w - (map(occ(2),occ(1)) < 0.5) * 5; 86 | % end 87 | 88 | % % 2-2) For each particle, calculate the correlation scores of the particles 89 | % [freex,freey] = bresenham(car(1),car(2),occ(1),occ(2)); 90 | 91 | % if size(freex,2)>0 92 | % freex_ = freex'; 93 | % freey_ = freey'; 94 | % del_index = freex_< 1 | freex_()> size(map,2) | freey_<1 | freey_>size(map,1); 95 | 96 | % freex_(del_index) = []; 97 | % freey_(del_index) = []; 98 | % freex = freex_'; 99 | % freey = freey_'; 100 | % free = sub2ind(size(map),freey,freex); 101 | 102 | % w = w - sum((map(free)>0.5) * 5); 103 | % w = w + sum((map(free)<0.5) * 1); 104 | %end 105 | % % 2-3) Update the particle weights 106 | 107 | %Weights(1,m) = Weights(1,m) * w; 108 | Weights(1,m) = w; 109 | end 110 | % % 2-4) Choose the best particle to update the pose 111 | Weights = Weights/sum(Weights); 112 | [Max_,Ind_] = max(Weights); 113 | myPose(:,j) = P(:,Ind_); 114 | %disp(Weights); 115 | % % 3) Resample if the effective number of particles is smaller than a threshold 116 | n_effective = sum(Weights) * sum(Weights) / sum(Weights)^2; 117 | % disp(n_effective); 118 | % disp(j); 119 | %disp(Weights) 120 | % 121 | % if n_effective <50 122 | % c_w = cumsum(Weights); 123 | % P_new = repmat([0;0;0], [1, M]); 124 | % Weights_new = ones(1,M) * (1/M); 125 | % for k = 1:min(M) 126 | % rand_n = rand(); 127 | % index_ = check_index(rand_n, c_w); 128 | % P_new(:,k) = P(:,index_); 129 | % Weights_new(k) = Weights(index_); 130 | % end 131 | % P = P_new; 132 | % Weights = Weights_new; 133 | % end 134 | 135 | 136 | 137 | 
%disp(Weights); 138 | %n_effective = sum(Weights) * sum(Weights) / sumsqr(Weights); 139 | %disp(n_effective); 140 | 141 | 142 | %disp(n_effective); 143 | % % 4) Visualize the pose on the map as needed 144 | 145 | end -------------------------------------------------------------------------------- /Pose tracking/particleLocalization1.m: -------------------------------------------------------------------------------- 1 | % Robotics: Estimation and Learning 2 | % WEEK 4 3 | % 4 | % Complete this function following the instruction. 5 | function myPose = particleLocalization(ranges, scanAngles, map, param) 6 | 7 | % Number of poses to calculate 8 | N = size(ranges, 2); 9 | % Output format is [x1 x2, ...; y1, y2, ...; z1, z2, ...] 10 | myPose = zeros(3, N); 11 | 12 | %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%5 13 | % Map Parameters 14 | % 15 | % the number of grids for 1 meter. 16 | myResol = param.resol; 17 | % the origin of the map in pixels 18 | myOrigin = param.origin; 19 | 20 | % The initial pose is given 21 | myPose(:,1) = param.init_pose; 22 | % You should put the given initial pose into myPose for j=1, ignoring the j=1 ranges. 23 | % The pose(:,1) should be the pose when ranges(:,j) were measured. 24 | 25 | % Used for making video 26 | t = param.t; % Time vector 27 | pose = param.pose; % Actual pose 28 | 29 | % Decide the number of particles, M. 30 | %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% 31 | M = 200; % Please decide a reasonable number of M, 32 | % based on your experiment using the practice data. 33 | Q = diag([0.0015,0.0015,0.0005]); % Segment 1 34 | % Q = diag([0.025 0.025 0.03]); 35 | %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% 36 | % Create M number of particles 37 | % P = mvnrnd(myPose(:,1),[0.01,0.01,0.005],M)'; 38 | P = repmat(myPose(:,1), [1, M]); 39 | 40 | w = ones(1,M)/M; % weights for each particle 41 | correlation = zeros(1,M); 42 | 43 | figure(10); 44 | imagesc(map); hold on; 45 | colormap('gray'); 46 | axis equal; 47 | hold on; 48 | particlesPlot = scatter(P(1,:)*myResol+myOrigin(1),P(2,:)*myResol+myOrigin(2), 6, 'MarkerEdgeColor',[0 .5 .5],... 49 | 'MarkerFaceColor',[0 .7 .7],... 50 | 'LineWidth',1.5); 51 | 52 | figure(100); 53 | imagesc(map); hold on; 54 | colormap('gray'); 55 | axis equal; 56 | hold on; 57 | lidar_global(:,1) = (ranges(:,1).*cos(scanAngles + myPose(3,1)) + myPose(1,1))*myResol + myOrigin(1); 58 | lidar_global(:,2) = (-ranges(:,1).*sin(scanAngles + myPose(3,1)) + myPose(2,1))*myResol + myOrigin(2); 59 | lidarPlot = plot(lidar_global(:,1), lidar_global(:,2), 'g.'); 60 | posPlot = plot(myPose(1,1)*param.resol+param.origin(1), ... 
61 | myPose(2,1)*param.resol+param.origin(2), 'r.-'); 62 | 63 | 64 | % Pose compared (used for making video) 65 | figure(1000); 66 | subplot(3,1,1); grid; 67 | hold on; xActual = plot(t(1), pose(1,1),'k', 'LineWidth', 2); 68 | xCalc = plot(t(1), myPose(1,1),'r', 'LineWidth', 1); 69 | ylabel('$x~$', 'FontSize', 26, 'Interpreter', 'latex'); 70 | title('Pose comparison in local co-ordinate frame', 'FontSize', 26, 'Interpreter', 'latex'); 71 | 72 | h1 = legend('Actual pose', 'Estimated pose' ); 73 | set(h1,'FontSize',18); 74 | 75 | subplot(3,1,2); grid; 76 | hold on; yActual = plot(t(1), pose(2,1),'k', 'LineWidth', 2); 77 | yCalc = plot(t(1), myPose(2,1),'r', 'LineWidth', 2); 78 | ylabel('$y~$', 'FontSize', 26, 'Interpreter', 'latex'); 79 | 80 | subplot(3,1,3); grid; 81 | hold on; thetaActual = plot(t(1), pose(3,1),'k', 'LineWidth', 2); 82 | thetaCalc = plot(t(1), myPose(3,1),'r', 'LineWidth', 2); 83 | ylabel('$\theta~$', 'FontSize', 26, 'Interpreter', 'latex'); 84 | xlabel('$time~(s)$', 'FontSize', 20, 'Interpreter', 'latex'); 85 | set(findobj('type','axes'),'fontsize',18); 86 | 87 | pause; 88 | 89 | for j = 2:N % You will start estimating myPose from j=2 using ranges(:,2). 90 | 91 | % 1) Propagate the particles 92 | P = diag(myPose(:,j-1))*ones(3,M) + mvnrnd([0;0;0],Q,M)'; 93 | 94 | % 2) Measurement Update 95 | for p = 1:M 96 | % closest 80% ranges 97 | nRanges = ceil(1.0*size(ranges,1)); 98 | % [~, idx] = sort(ranges(:,j)); 99 | idx = 1:nRanges; 100 | % robot_grid = ceil([P(1,p);P(2,p)]*myResol)+ myOrigin; 101 | 102 | % 2-1) Find grid cells hit by the rays (in the grid map coordinate 103 | % frame) 104 | xocc = ranges(idx,j).*cos(P(3,p)+scanAngles(idx)) + P(1,p); 105 | yocc = -ranges(idx,j).*sin(P(3,p)+scanAngles(idx)) + P(2,p); 106 | occ = ceil([xocc';yocc']*myResol) + myOrigin*ones(1,nRanges); 107 | del_occ = occ(1,:)<1 | occ(2,:)<1 | occ(1,:) > size(map,2) | occ(2,:) > size(map,1); 108 | 109 | occ(:,del_occ) = []; 110 | 111 | % 2-2) For each particle, calculate the correlation scores of the particles 112 | 113 | % num = [occ(1,:)>size(map,2); occ(2,:)>size(map,1)]; 114 | % sumnum = logical(sum(num,1)); 115 | % occ(:,sumnum) = []; 116 | occ_index = sub2ind(size(map),occ(2,:),occ(1,:)); 117 | % occ_index = unique(occ_index); 118 | 119 | % % free = []; 120 | % % for ct1=1:length(occ_index) 121 | % % % get cells in between 122 | % % [freex, freey] = bresenham(robot_grid(1),robot_grid(2),occ(1,(ct1)),occ(2,(ct1))); 123 | % % % convert to 1d index 124 | % % if ~isempty(freex) 125 | % % free = [free;sub2ind(size(map),freey,freex)]; 126 | % % end 127 | % % end 128 | % % free = unique(free); 129 | occ_values = map(occ_index); 130 | % % free_values = map(free); 131 | correlation(1,p) = sum(occ_values(occ_values>=0.5)*10) + sum(occ_values(occ_values<=-0.2)*2);% + sum(free_values(free_values<0)*-3) + sum(free_values(free_values>0)*-5);% - sum(sumnum)*0.05; 132 | end 133 | % 2-3) Update the particle weights 134 | correlation(correlation<0)= 0; 135 | w = correlation; 136 | w = w./sum(w); 137 | if sum(w<0)>0 138 | pause 139 | end 140 | 141 | % 2-4) Choose the best particle to update the pose 142 | [~,ind]=max(w); 143 | myPose(:,j) = P(:,ind); 144 | 145 | 146 | % 3) Resample if the effective number of particles is smaller than a threshold 147 | Neff = 1/sum(w.*w); 148 | if Neff < 0.1*M 149 | edges = min([0 cumsum(w)],1); % protect against accumulated round-off 150 | edges(end) = 1; % get the upper edge exact 151 | % edges correspond to the c variable in Algorthm 2 in reference 1 152 | u1 = rand/M; 153 | 
% this works like the inverse of the empirical distribution and returns 154 | % the interval where the sample is to be found 155 | [~, idx] = histc(u1:1/M:1,edges); 156 | P = P(:,idx); % extract new particles 157 | w = w(:,idx); 158 | w = w./sum(w); 159 | % w = repmat(1/M, 1, M); % now all particles have the same weight 160 | end 161 | % myPose(:,j)=sum(repmat(w,3,1).*P,2); 162 | 163 | % 4) Visualize the pose on the map as needed 164 | particlesPlot.XData = P(1,:)*myResol+myOrigin(1); 165 | particlesPlot.YData = P(2,:)*myResol+myOrigin(2); 166 | 167 | lidarPlot.XData = (ranges(:,j).*cos(scanAngles + myPose(3,j)) + myPose(1,j))*myResol + myOrigin(1); 168 | lidarPlot.YData = (-ranges(:,j).*sin(scanAngles + myPose(3,j)) + myPose(2,j))*myResol + myOrigin(2); 169 | 170 | dummyx = myPose(1,j)*param.resol+param.origin(1); 171 | dummyy = myPose(2,j)*param.resol+param.origin(2); 172 | posPlot.XData = [posPlot.XData dummyx]; 173 | posPlot.YData = [posPlot.YData dummyy]; 174 | 175 | figure(10); 176 | xlim([dummyx-40 dummyx+40]) 177 | ylim([dummyy-40 dummyy+40]) 178 | 179 | % For video 180 | xActual.XData = t(1:j)'; xActual.YData = [xActual.YData pose(1,j)]; 181 | yActual.XData = t(1:j)'; yActual.YData = [yActual.YData pose(2,j)]; 182 | thetaActual.XData = t(1:j)'; thetaActual.YData = [thetaActual.YData pose(3,j)]; 183 | xCalc.XData = t(1:j)'; xCalc.YData = [xCalc.YData myPose(1,j)]; 184 | yCalc.XData = t(1:j)'; yCalc.YData = [yCalc.YData myPose(2,j)]; 185 | thetaCalc.XData = t(1:j)'; thetaCalc.YData = [thetaCalc.YData myPose(3,j)]; 186 | 187 | drawnow; 188 | 189 | 190 | end 191 | 192 | end 193 | 194 | -------------------------------------------------------------------------------- /Pose tracking/practice-answer.mat: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kaku289/Robotics-Estimation-and-Learning/bc3a910870317023f6dbe89e266248bb47ee25ff/Pose tracking/practice-answer.mat -------------------------------------------------------------------------------- /Pose tracking/practice.mat: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kaku289/Robotics-Estimation-and-Learning/bc3a910870317023f6dbe89e266248bb47ee25ff/Pose tracking/practice.mat -------------------------------------------------------------------------------- /Pose tracking/runeval.m: -------------------------------------------------------------------------------- 1 | testpath = '.'; % This is the folder where test.mat is saved. Change the path if needed. 
2 | eval_progW4(testpath) -------------------------------------------------------------------------------- /Pose tracking/testing.mat: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kaku289/Robotics-Estimation-and-Learning/bc3a910870317023f6dbe89e266248bb47ee25ff/Pose tracking/testing.mat -------------------------------------------------------------------------------- /Pose tracking/untitled.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kaku289/Robotics-Estimation-and-Learning/bc3a910870317023f6dbe89e266248bb47ee25ff/Pose tracking/untitled.png -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Robotics-Estimation-and-Learning 2 | Coursera MOOC on Robotics: Estimation and Learning 3 | 4 | This repository contains the MATLAB code for the projects completed during the Coursera course Robotics: Estimation and Learning. 5 | 6 | ### 1. Learning, Detection and Tracking 7 | A multivariate Gaussian color model is learned to detect a yellow ball, and a Kalman filter is implemented to track the ball in 2D space. [Video of the result][ref1] 8 | [ref1]: https://www.youtube.com/watch?v=e05qmVSwBko 9 | 10 | ### 2. Robotic Mapping - Occupancy Grid Mapping 11 | The occupancy grid mapping algorithm is implemented for a 2D floor plan. Range sensor readings and the known robot pose at each measurement time are combined to build the map. [Video of the result][ref2] 12 | [ref2]: https://www.youtube.com/watch?v=QFJehL9-pNo 13 | 14 | ### 3. Pose tracking using a Particle Filter 15 | A particle filter is implemented for robot localization from 2D LIDAR measurements against a known occupancy grid map. [Video of the result][ref3] 16 | [ref3]: https://www.youtube.com/watch?v=eQhzn74np0g 17 | --------------------------------------------------------------------------------
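For quick reference, below is a minimal sketch of the per-ray log-odds update performed by the occupancy grid mapping project (item 2 of the README), assuming the `param` fields (`resol`, `origin`, `lo_occ`, `lo_free`, `lo_min`, `lo_max`) and the `bresenham.p` helper used in `occGridMapping.m`. The function name `updateRayLogOdds` is illustrative only and is not a file in this repository.

```matlab
% Illustrative sketch only (not a repository file): log-odds update of the
% occupancy grid for a single lidar ray, mirroring occGridMapping.m above.
function map = updateRayLogOdds(map, pose, range, angle, param)
% pose = [x; y; theta] (meters, radians); range in meters; angle in radians.
% World coordinates of the hit point (note the negated y used throughout this repo).
xocc =  range*cos(pose(3) + angle) + pose(1);
yocc = -range*sin(pose(3) + angle) + pose(2);
% Convert the hit point and the robot position to grid cells.
occ   = ceil([xocc; yocc]*param.resol) + param.origin;
robot = ceil(pose(1:2)*param.resol)    + param.origin;
% Cells traversed by the ray are observed as free (bresenham.p is provided).
[fx, fy] = bresenham(robot(1), robot(2), occ(1), occ(2));
free = sub2ind(size(map), fy, fx);
% Increase the log-odds of the hit cell, decrease it along the ray, then saturate.
map(occ(2), occ(1)) = map(occ(2), occ(1)) + param.lo_occ;
map(free) = map(free) - param.lo_free;
map = min(max(map, param.lo_min), param.lo_max);
end
```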