├── SLIC-Algorithm_Superpixel-Segmentation.m
├── LICENSE
├── Color_Space_Lab-Representation.m
├── README.md
├── svm_binary.m
├── svm_classifier.m
├── SVM_Classification-Models(Plot-Posterior-Probability-Regions).m
├── Superpixel_Texture-Extraction(SURF_Features).m
├── SURF-Features_Texture_Extraction.m
├── SVM-Classifier_Training(Gaussian-Kernel).m
├── SVM-Classifier_Fit(Bayesian-Optimization).m
├── SVM-Classifier_Training(Custom-Kernel).m
├── SVM-Analyzation(Linear).m
├── Color-Based_Segmentation(single).m
├── Color-Based_Segmentation(multi).m
├── Gabor-Features_Texture_Extraction(gray).m
├── Gabor-Features_Texture_Extraction(rgb).m
└── Superpixel_Texture-Extraction(Gabor-Features).m
/SLIC-Algorithm_Superpixel-Segmentation.m:
--------------------------------------------------------------------------------
1 | %acquire image (the input image is assumed here; "vegetables.jpg" matches the other scripts in this repo)
2 | source = imread("vegetables.jpg");
3 | 
4 | %Convert your source RGB image into an L*a*b* image using rgb2lab
5 | labImage = rgb2lab(source);
6 | 
7 | B = superpixels(labImage,100,'IsInputLab',true,'Method','slic');
8 | bw = boundarymask(B);
9 | imshow(imoverlay(source,bw,'cyan'),'InitialMagnification',67);
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 | 
3 | Copyright (c) 2019 Panagiotis Prattis
4 | 
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 | 
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 | 
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 | 
--------------------------------------------------------------------------------
/Color_Space_Lab-Representation.m:
--------------------------------------------------------------------------------
1 | %acquire image
2 | source = imread("vegetables.jpg");
3 | 
4 | %Convert your source RGB image into an L*a*b* image using rgb2lab
5 | labImage = rgb2lab(source);
6 | 
7 | %Get each channel 'L*', 'a*' and 'b*' for the L*a*b* image
8 | LImage = labImage(:, :, 1);
9 | AImage = labImage(:, :, 2);
10 | BImage = labImage(:, :, 3);
11 | 
12 | %Show the L*a*b* image (span the two top cells so the image sits centered)
13 | subplot(4, 2, [1 2]);
14 | imshow(labImage);
15 | title('L*a*b* Image', 'FontSize', 15);
16 | 
17 | %Show each of the channels individually
18 | %and by scaling the display based on the range of pixel values
19 | subplot(4, 2, 3);
20 | imshow(LImage);
21 | title('L channel Image', 'FontSize', 15);
22 | subplot(4, 2, 4);
23 | imshow(LImage, []);
24 | title('L channel scaled Image', 'FontSize', 15);
25 | subplot(4, 2, 5);
26 | imshow(AImage);
27 | title('A channel Image', 'FontSize', 15);
28 | subplot(4, 2, 6);
29 | imshow(AImage, []);
30 | title('A channel scaled Image', 'FontSize', 15);
31 | subplot(4, 2, 7);
32 | imshow(BImage);
33 | title('B channel Image', 'FontSize', 15);
34 | subplot(4, 2, 8);
35 | imshow(BImage, []);
36 | title('B channel scaled Image', 'FontSize', 15);
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # A Matlab Exercise / Project
2 | 
3 | **This is a Matlab project from my early days as a Computer Science student**
4 | 
5 | _These scripts were created for the seventh-semester class Image Analysis,
6 | and they form the final project required to pass the class_
7 | 
8 | > #### Description of project
9 | >
10 | >>Matlab scripts that implement the algorithmic procedures needed to automatically colorize a black-and-white image. In particular, the code performs the following computing tasks:
11 | >
12 | 
13 | > #### Implementation of project
14 | >
15 | > 1. Represent the image in the L*a*b* color space.
16 | > 2. Partition the L*a*b* color space based on a set of related training images.
17 | > 3. Segment the image into superpixels using the SLIC algorithm.
18 | > 4. Extract SURF features and Gabor features per superpixel.
19 | > 5. Learn local color prediction models using SVM classifiers.
20 | > 6. Estimate the color content of the black-and-white image using graph-cut algorithms.
21 | 
22 | > #### About this project
23 | >
24 | > - Steps 5 & 6 are partly implemented
25 | > - This program was written in the Matlab IDE
26 | > - This repository was created to show the variety of the work I did and experience I gained as a student
27 | >
28 | 
--------------------------------------------------------------------------------
/svm_binary.m:
--------------------------------------------------------------------------------
1 | %Build a training set from the color-based segmentation results:
2 | %the a*b* values are the predictors and the k-means cluster labels the responses
3 | %(ab and cluster_idx are assumed to be left in the workspace by Color-Based_Segmentation)
4 | 
5 | X = ab;
6 | Y = cluster_idx;
7 | X = X(1:100000,:);
8 | Y = Y(1:100000,:);
9 | 
10 | 
11 | 
12 | load carsmall
13 | rng 'default' % For reproducibility
14 | %Specify Horsepower and Weight as the predictor variables (X) and MPG as the response variable (Y).
15 | X = [Horsepower Weight];
16 | Y = MPG;
17 | %Cross-validate two SVM regression models using 5-fold cross-validation. For both models, specify to standardize the predictors.
18 | %For one of the models, specify to train using the default linear kernel, and the Gaussian kernel for the other model. 19 | MdlLin = fitrsvm(X,Y,'Standardize',true,'KFold',5) 20 | MdlGau = fitrsvm(X,Y,'Standardize',true,'KFold',5,'KernelFunction','gaussian') 21 | MdlLin.Trained 22 | %Compare the generalization error of the models. In this case, the generalization error is the out-of-sample mean-squared error. 23 | mseLin = kfoldLoss(MdlLin) 24 | mseGau = kfoldLoss(MdlGau) 25 | %The SVM regression model using the Gaussian kernel performs better than the one using the linear kernel. 26 | %Create a model suitable for making predictions by passing the entire data set to fitrsvm, 27 | %and specify all name-value pair arguments that yielded the better-performing model. However, do not specify any cross-validation options. 28 | MdlGau = fitrsvm(X,Y,'Standardize',true,'KernelFunction','gaussian'); 29 | rng default 30 | %Find hyperparameters that minimize five-fold cross-validation loss by using automatic hyperparameter optimization. 31 | %For reproducibility, set the random seed and use the 'expected-improvement-plus' acquisition function. 32 | Mdl = fitrsvm(X,Y,'OptimizeHyperparameters','auto',... 33 | 'HyperparameterOptimizationOptions',struct('AcquisitionFunctionName',... 34 | 'expected-improvement-plus')) -------------------------------------------------------------------------------- /svm_classifier.m: -------------------------------------------------------------------------------- 1 | %% preparing dataset 2 | 3 | load fisheriris 4 | 5 | species_num = grp2idx(species); 6 | %% 7 | 8 | % binary classification 9 | X = randn(100,10); 10 | X(:,[1,3,5,7]) = meas(1:100,:); % 1, 3, 5, 7 11 | y = species_num(1:100); 12 | 13 | rand_num = randperm(size(X,1)); 14 | X_train = X(rand_num(1:round(0.8*length(rand_num))),:); 15 | y_train = y(rand_num(1:round(0.8*length(rand_num))),:); 16 | 17 | X_test = X(rand_num(round(0.8*length(rand_num))+1:end),:); 18 | y_test = y(rand_num(round(0.8*length(rand_num))+1:end),:); 19 | %% CV partition 20 | 21 | c = cvpartition(y_train,'k',5); 22 | %% feature selection 23 | 24 | opts = statset('display','iter'); 25 | classf = @(train_data, train_labels, test_data, test_labels)... 26 | sum(predict(fitcsvm(train_data, train_labels,'KernelFunction','rbf'), test_data) ~= test_labels); 27 | 28 | [fs, history] = sequentialfs(classf, X_train, y_train, 'cv', c, 'options', opts,'nfeatures',2); 29 | %% Best hyperparameter 30 | 31 | X_train_w_best_feature = X_train(:,fs); 32 | 33 | Md1 = fitcsvm(X_train_w_best_feature,y_train,'KernelFunction','rbf','OptimizeHyperparameters','auto',... 34 | 'HyperparameterOptimizationOptions',struct('AcquisitionFunctionName',... 
35 | 'expected-improvement-plus','ShowPlots',true)); % Bayes' Optimization 36 | 37 | 38 | %% Final test with test set 39 | X_test_w_best_feature = X_test(:,fs); 40 | test_accuracy_for_iter = sum((predict(Md1,X_test_w_best_feature) == y_test))/length(y_test)*100 41 | 42 | %% hyperplane 43 | 44 | figure; 45 | hgscatter = gscatter(X_train_w_best_feature(:,1),X_train_w_best_feature(:,2),y_train); 46 | hold on; 47 | h_sv=plot(Md1.SupportVectors(:,1),Md1.SupportVectors(:,2),'ko','markersize',8); 48 | 49 | 50 | % test set data 51 | 52 | gscatter(X_test_w_best_feature(:,1),X_test_w_best_feature(:,2),y_test,'rb','xx') 53 | 54 | % decision plane 55 | XLIMs = get(gca,'xlim'); 56 | YLIMs = get(gca,'ylim'); 57 | [xi,yi] = meshgrid([XLIMs(1):0.01:XLIMs(2)],[YLIMs(1):0.01:YLIMs(2)]); 58 | dd = [xi(:), yi(:)]; 59 | pred_mesh = predict(Md1, dd); 60 | redcolor = [1, 0.8, 0.8]; 61 | bluecolor = [0.8, 0.8, 1]; 62 | pos = find(pred_mesh == 1); 63 | h1 = plot(dd(pos,1), dd(pos,2),'s','color',redcolor,'Markersize',5,'MarkerEdgeColor',redcolor,'MarkerFaceColor',redcolor); 64 | pos = find(pred_mesh == 2); 65 | h2 = plot(dd(pos,1), dd(pos,2),'s','color',bluecolor,'Markersize',5,'MarkerEdgeColor',bluecolor,'MarkerFaceColor',bluecolor); 66 | uistack(h1,'bottom'); 67 | uistack(h2,'bottom'); 68 | legend([hgscatter;h_sv],{'setosa','versicolor','support vectors'}) -------------------------------------------------------------------------------- /SVM_Classification-Models(Plot-Posterior-Probability-Regions).m: -------------------------------------------------------------------------------- 1 | %Plot Posterior Probability Regions for SVM Classification Models 2 | 3 | %{ 4 | This example shows how to predict posterior probabilities of SVM models over a grid of observations, 5 | and then plot the posterior probabilities over the grid. Plotting posterior probabilities exposes decision boundaries. 6 | %} 7 | 8 | %Load Fisher's iris data set. Train the classifier using the petal lengths and widths, and remove the virginica species from the data. 9 | 10 | load fisheriris 11 | classKeep = ~strcmp(species,'virginica'); 12 | X = meas(classKeep,3:4); 13 | y = species(classKeep); 14 | 15 | %Train an SVM classifier using the data. It is good practice to specify the order of the classes. 16 | SVMModel = fitcsvm(X,y,'ClassNames',{'setosa','versicolor'}); 17 | 18 | %Estimate the optimal score transformation function. 19 | rng(1); % For reproducibility 20 | [SVMModel,ScoreParameters] = fitPosterior(SVMModel); 21 | 22 | %Warning: Classes are perfectly separated. The optimal score-to-posterior transformation is a step function. 23 | ScoreParameters 24 | 25 | %{ 26 | The optimal score transformation function is the step function because the classes are separable. The fields LowerBound 27 | and UpperBound of ScoreParameters indicate the lower and upper end points of the interval of scores corresponding to observations 28 | within the class-separating hyperplanes (the margin). No training observation falls within the margin. If a new score is in the interval, 29 | then the software assigns the corresponding observation a positive class posterior probability, i.e., the value in the PositiveClassProbability field of ScoreParameters. 30 | 31 | Define a grid of values in the observed predictor space. Predict the posterior probabilities for each instance in the grid. 
32 | %}
33 | xMax = max(X);
34 | xMin = min(X);
35 | d = 0.01;
36 | [x1Grid,x2Grid] = meshgrid(xMin(1):d:xMax(1),xMin(2):d:xMax(2));
37 | 
38 | [~,PosteriorRegion] = predict(SVMModel,[x1Grid(:),x2Grid(:)]);
39 | 
40 | %Plot the positive class posterior probability region and the training data
41 | figure;
42 | contourf(x1Grid,x2Grid,...
43 | reshape(PosteriorRegion(:,2),size(x1Grid,1),size(x1Grid,2)));
44 | h = colorbar;
45 | h.Label.String = 'P({\it{versicolor}})';
46 | h.YLabel.FontSize = 16;
47 | caxis([0 1]);
48 | colormap jet;
49 | 
50 | hold on
51 | gscatter(X(:,1),X(:,2),y,'mc','.x',[15,10]);
52 | sv = X(SVMModel.IsSupportVector,:);
53 | plot(sv(:,1),sv(:,2),'yo','MarkerSize',15,'LineWidth',2);
54 | axis tight
55 | hold off
56 | 
57 | %{
58 | In two-class learning, if the classes are separable, then there are three regions: one where observations have positive class posterior probability 0, one where it is 1,
59 | and the other where it is the positive class prior probability.
60 | %}
--------------------------------------------------------------------------------
/Superpixel_Texture-Extraction(SURF_Features).m:
--------------------------------------------------------------------------------
1 | %acquire image
2 | source = imread("vegetables.jpg");
3 | 
4 | I = rgb2gray(source);
5 | %Ilab = rgb2lab(source);
6 | 
7 | [L,N] = superpixels(source,100);
8 | 
9 | BW = boundarymask(L);
10 | 
11 | outputImage = zeros(size(source), 'like', source);
12 | 
13 | %{
14 | I'll use the function label2idx to compute the indices of the pixels in each superpixel cluster.
15 | That will let me access the red, green, and blue component values using linear indexing
16 | %}
17 | idx = label2idx(L);
18 | numRows = size(source,1);
19 | numCols = size(source,2);
20 | 
21 | %{
22 | For each of the N superpixel clusters, use linear indexing to access the red, green, and blue components,
23 | reconstruct the corresponding pixels while detecting SURF Features on the grayscale version of the image,
24 | and show the 10 strongest of them in the output image
25 | %}
26 | for labelVal = 1:N
27 | redIdx = idx{labelVal};
28 | greenIdx = idx{labelVal}+numRows*numCols;
29 | blueIdx = idx{labelVal}+2*numRows*numCols;
30 | outputImage(redIdx) = source(redIdx);
31 | outputImage(greenIdx) = source(greenIdx);
32 | outputImage(blueIdx) = source(blueIdx);
33 | points = detectSURFFeatures(rgb2gray(outputImage));
34 | %imshow(rgb2gray(outputImage)); hold on;
35 | imshow(outputImage); hold on;
36 | plot(points.selectStrongest(10));
37 | end
38 | 
39 | %-------------------------------------------------------------%
40 | %Find Corresponding Points Between Two Images Using SURF Features
41 | %Read images
42 | img = imread('p1.jpg');
43 | source1 = rgb2gray(img);
44 | source2 = imread('pg.jpg');
45 | 
46 | %Detect SURF features
47 | points1 = detectSURFFeatures(source1);
48 | points2 = detectSURFFeatures(source2);
49 | 
50 | %Extract features
51 | [f1, vpts1] = extractFeatures(source1, points1);
52 | [f2, vpts2] = extractFeatures(source2, points2);
53 | 
54 | %Match features
55 | indexPairs = matchFeatures(f1, f2);
56 | matchedPoints1 = vpts1(indexPairs(:, 1));
57 | matchedPoints2 = vpts2(indexPairs(:, 2));
58 | 
59 | %Visualize candidate matches
60 | figure; ax = axes;
61 | showMatchedFeatures(source1,source2,matchedPoints1,matchedPoints2,'Parent',ax);
62 | title(ax, 'Putative point matches');
63 | legend(ax,'Matched points 1','Matched points 2');
64 | 
65 | figure; ax = axes;
66 | showMatchedFeatures(source1,source2,matchedPoints1,matchedPoints2,'montage','Parent',ax);
67 | title(ax, 'Candidate point matches');
68 | legend(ax, 'Matched points 1','Matched points 2');
69 | 
70 | %-------------------------------------------------------------%
71 | %acquire image
72 | source = imread("pg.jpg");
73 | 
74 | %Extract SURF features from an image
75 | points = detectSURFFeatures(source);
76 | [features, valid_points] = extractFeatures(source,points);
77 | 
78 | %Visualize 10 strongest SURF features, including their scales and orientation which were determined during the descriptor extraction process.
79 | imshow(source);
80 | hold on;
81 | strongestPoints = valid_points.selectStrongest(10);
82 | strongestPoints.plot('showOrientation',true);
--------------------------------------------------------------------------------
/SURF-Features_Texture_Extraction.m:
--------------------------------------------------------------------------------
1 | %acquire images
2 | source1 = imread("vg.jpg");
3 | img = imread("v1.jpg");
4 | source2 = rgb2gray(img);
5 | 
6 | [L1,N1] = superpixels(source1,100);
7 | [L2,N2] = superpixels(source2,100);
8 | 
9 | BW1 = boundarymask(L1);
10 | BW2 = boundarymask(L2);
11 | 
12 | outputImage1 = zeros(size(source1), 'like', source1);
13 | outputImage2 = zeros(size(img), 'like', img);
14 | 
15 | %{
16 | I'll use the function label2idx to compute the indices of the pixels in each superpixel cluster.
17 | That will let me access the red, green, and blue component values using linear indexing
18 | %}
19 | idx1 = label2idx(L1);
20 | 
21 | idx2 = label2idx(L2);
22 | numRows = size(img,1);
23 | numCols = size(img,2);
24 | 
25 | %{
26 | For each of the N superpixel clusters, use linear indexing to reconstruct the corresponding pixels,
27 | while detecting/extracting SURF Features and showing the 10 strongest of them in the grayscale image
28 | %}
29 | for labelVal1 = 1:N1
30 | Idx = idx1{labelVal1};
31 | outputImage1(Idx) = source1(Idx);
32 | points1 = detectSURFFeatures(outputImage1);
33 | [f1, vpts1] = extractFeatures(outputImage1, points1);
34 | figure(1);
35 | imshow(outputImage1); hold on;
36 | strongestPoints1 = points1.selectStrongest(10);
37 | strongestPoints1.plot('showOrientation',true);
38 | grid;
39 | end
40 | 
41 | %{
42 | For each of the N superpixel clusters, use linear indexing to access the red, green, and blue components,
43 | reconstruct the corresponding pixels while detecting/extracting SURF Features on the grayscale version of the image,
44 | and show the 10 strongest of them in the output image
45 | %}
46 | for labelVal = 1:N2
47 | redIdx = idx2{labelVal};
48 | greenIdx = idx2{labelVal}+numRows*numCols;
49 | blueIdx = idx2{labelVal}+2*numRows*numCols;
50 | outputImage2(redIdx) = img(redIdx);
51 | outputImage2(greenIdx) = img(greenIdx);
52 | outputImage2(blueIdx) = img(blueIdx);
53 | points2 = detectSURFFeatures(rgb2gray(outputImage2));
54 | [f2, vpts2] = extractFeatures(rgb2gray(outputImage2), points2);
55 | figure(2);
56 | %imshow(img); hold on;
57 | imshow(outputImage2); hold on;
58 | strongestPoints2 = points2.selectStrongest(10);
59 | strongestPoints2.plot('showOrientation',true);
60 | grid;
61 | end
62 | 
63 | %Match features both ways
64 | indexPairs1 = matchFeatures(f1, f2);
65 | indexPairs2 = matchFeatures(f2, f1);
66 | matchedPoints1 = vpts1(indexPairs1(:, 1));
67 | matchedPoints2 = vpts2(indexPairs1(:, 2));
68 | matchedPoints3 = vpts1(indexPairs2(:, 2));
69 | matchedPoints4 = vpts2(indexPairs2(:, 1));
70 | 
71 | %Visualize candidate
matches
72 | figure; ax = axes;
73 | showMatchedFeatures(source1,source2,matchedPoints1,matchedPoints2,'Parent',ax);
74 | showMatchedFeatures(source1,source2,matchedPoints3,matchedPoints4,'Parent',ax);
75 | title(ax, 'Putative point matches');
76 | legend(ax,'Matched points 1','Matched points 2');
77 | 
78 | figure; ax = axes;
79 | showMatchedFeatures(source1,source2,matchedPoints1,matchedPoints2,'montage','Parent',ax);
80 | showMatchedFeatures(source1,source2,matchedPoints3,matchedPoints4,'montage','Parent',ax);
81 | title(ax, 'Candidate point matches');
82 | legend(ax, 'Matched points 1','Matched points 2');
--------------------------------------------------------------------------------
/SVM-Classifier_Training(Gaussian-Kernel).m:
--------------------------------------------------------------------------------
1 | %Train SVM Classifiers Using a Gaussian Kernel
2 | 
3 | %{
4 | This example shows how to generate a nonlinear classifier with Gaussian kernel function.
5 | First, generate one class of points inside the unit disk in two dimensions, and another class of points
6 | in the annulus from radius 1 to radius 2. Then, generate a classifier based on the data
7 | with the Gaussian radial basis function kernel. The default linear classifier is obviously unsuitable
8 | for this problem, since the model is circularly symmetric. Set the box constraint parameter to Inf
9 | to make a strict classification, meaning no misclassified training points. Other kernel functions
10 | might not work with this strict box constraint, since they might be unable to provide a strict classification.
11 | Even though the rbf classifier can separate the classes, the result can be overtrained.
12 | 
13 | Generate 100 points uniformly distributed in the unit disk. To do so, generate a radius r as the square root
14 | of a uniform random variable, generate an angle t uniformly in (0, 2*pi), and put the point at (r*cos(t), r*sin(t)).
15 | %}
16 | rng(1); % For reproducibility
17 | r = sqrt(rand(100,1)); % Radius
18 | t = 2*pi*rand(100,1); % Angle
19 | data1 = [r.*cos(t), r.*sin(t)]; % Points
20 | 
21 | %{
22 | Generate 100 points uniformly distributed in the annulus. The radius is again proportional to a square root,
23 | this time a square root of the uniform distribution from 1 through 4.
24 | %}
25 | r2 = sqrt(3*rand(100,1)+1); % Radius
26 | t2 = 2*pi*rand(100,1); % Angle
27 | data2 = [r2.*cos(t2), r2.*sin(t2)]; % Points
28 | 
29 | %Plot the points, and plot circles of radii 1 and 2 for comparison
30 | figure;
31 | plot(data1(:,1),data1(:,2),'r.','MarkerSize',15)
32 | hold on
33 | plot(data2(:,1),data2(:,2),'b.','MarkerSize',15)
34 | ezpolar(@(x)1);ezpolar(@(x)2);
35 | axis equal
36 | hold off
37 | 
38 | %Put the data in one matrix, and make a vector of classifications
39 | data3 = [data1;data2];
40 | theclass = ones(200,1);
41 | theclass(1:100) = -1;
42 | 
43 | %{
44 | Train an SVM classifier with KernelFunction set to 'rbf' and BoxConstraint set to Inf.
45 | Plot the decision boundary and flag the support vectors
46 | %}
47 | %Train the SVM Classifier
48 | cl = fitcsvm(data3,theclass,'KernelFunction','rbf',...
49 | 'BoxConstraint',Inf,'ClassNames',[-1,1]);
50 | 
51 | % Predict scores over the grid
52 | d = 0.02;
53 | [x1Grid,x2Grid] = meshgrid(min(data3(:,1)):d:max(data3(:,1)),...
54 | min(data3(:,2)):d:max(data3(:,2))); 55 | xGrid = [x1Grid(:),x2Grid(:)]; 56 | [~,scores] = predict(cl,xGrid); 57 | 58 | % Plot the data and the decision boundary 59 | figure; 60 | h(1:2) = gscatter(data3(:,1),data3(:,2),theclass,'rb','.'); 61 | hold on 62 | ezpolar(@(x)1); 63 | h(3) = plot(data3(cl.IsSupportVector,1),data3(cl.IsSupportVector,2),'ko'); 64 | contour(x1Grid,x2Grid,reshape(scores(:,2),size(x1Grid)),[0 0],'k'); 65 | legend(h,{'-1','+1','Support Vectors'}); 66 | axis equal 67 | hold off 68 | 69 | %fitcsvm generates a classifier that is close to a circle of radius 1. The difference is due to the random training data. 70 | 71 | %{ 72 | Training with the default parameters makes a more nearly circular classification boundary, 73 | but one that misclassifies some training data. Also, the default value of BoxConstraint is 1, 74 | and, therefore, there are more support vectors. 75 | %} 76 | cl2 = fitcsvm(data3,theclass,'KernelFunction','rbf'); 77 | [~,scores2] = predict(cl2,xGrid); 78 | 79 | figure; 80 | h(1:2) = gscatter(data3(:,1),data3(:,2),theclass,'rb','.'); 81 | hold on 82 | ezpolar(@(x)1); 83 | h(3) = plot(data3(cl2.IsSupportVector,1),data3(cl2.IsSupportVector,2),'ko'); 84 | contour(x1Grid,x2Grid,reshape(scores2(:,2),size(x1Grid)),[0 0],'k'); 85 | legend(h,{'-1','+1','Support Vectors'}); 86 | axis equal 87 | hold off -------------------------------------------------------------------------------- /SVM-Classifier_Fit(Bayesian-Optimization).m: -------------------------------------------------------------------------------- 1 | %Optimize an SVM Classifier Fit Using Bayesian Optimization 2 | %{ 3 | This example shows how to optimize an SVM classification using the fitcsvm function and OptimizeHyperparameters name-value pair. 4 | The classification works on locations of points from a Gaussian mixture model. 5 | In The Elements of Statistical Learning, Hastie, Tibshirani, and Friedman (2009), page 17 describes the model. 6 | The model begins with generating 10 base points for a "green" class, distributed as 2-D independent normals with mean (1,0) 7 | and unit variance. It also generates 10 base points for a "red" class, distributed as 2-D independent normals with mean (0,1) 8 | and unit variance. For each class (green and red), generate 100 random points as follows: 9 | 10 | -Choose a base point m of the appropriate color uniformly at random. 11 | 12 | -Generate an independent random point with 2-D normal distribution with mean m and variance I/5, where I is the 2-by-2 identity matrix. 13 | In this example, use a variance I/50 to show the advantage of optimization more clearly. 14 | %} 15 | 16 | %Generate the Points and Classifier 17 | %Generate the 10 base points for each class 18 | 19 | rng default % For reproducibility 20 | grnpop = mvnrnd([1,0],eye(2),10); 21 | redpop = mvnrnd([0,1],eye(2),10); 22 | 23 | %View the base points 24 | plot(grnpop(:,1),grnpop(:,2),'go') 25 | hold on 26 | plot(redpop(:,1),redpop(:,2),'ro') 27 | hold off 28 | 29 | %Since some red base points are close to green base points, it can be difficult to classify the data points based on location alone. 30 | 31 | %Generate the 100 data points of each class. 32 | redpts = zeros(100,2);grnpts = redpts; 33 | for i = 1:100 34 | grnpts(i,:) = mvnrnd(grnpop(randi(10),:),eye(2)*0.02); 35 | redpts(i,:) = mvnrnd(redpop(randi(10),:),eye(2)*0.02); 36 | end 37 | 38 | %View the data points. 
39 | figure 40 | plot(grnpts(:,1),grnpts(:,2),'go') 41 | hold on 42 | plot(redpts(:,1),redpts(:,2),'ro') 43 | hold off 44 | 45 | %Prepare Data For Classification 46 | 47 | %Put the data into one matrix, and make a vector grp that labels the class of each point. 48 | cdata = [grnpts;redpts]; 49 | grp = ones(200,1); 50 | % Green label 1, red label -1 51 | grp(101:200) = -1; 52 | 53 | %Prepare Cross-Validation 54 | %Set up a partition for cross-validation. This step fixes the train and test sets that the optimization uses at each step. 55 | 56 | c = cvpartition(200,'KFold',10); 57 | 58 | %Optimize the Fit 59 | 60 | %{ 61 | To find a good fit, meaning one with a low cross-validation loss, set options to use Bayesian optimization. 62 | Use the same cross-validation partition c in all optimizations. 63 | %} 64 | 65 | %For reproducibility, use the 'expected-improvement-plus' acquisition function. 66 | 67 | opts = struct('Optimizer','bayesopt','ShowPlots',true,'CVPartition',c,... 68 | 'AcquisitionFunctionName','expected-improvement-plus'); 69 | svmmod = fitcsvm(cdata,grp,'KernelFunction','rbf',... 70 | 'OptimizeHyperparameters','auto','HyperparameterOptimizationOptions',opts) 71 | 72 | %Find the loss of the optimized model. 73 | lossnew = kfoldLoss(fitcsvm(cdata,grp,'CVPartition',c,'KernelFunction','rbf',... 74 | 'BoxConstraint',svmmod.HyperparameterOptimizationResults.XAtMinObjective.BoxConstraint,... 75 | 'KernelScale',svmmod.HyperparameterOptimizationResults.XAtMinObjective.KernelScale)) 76 | 77 | %This loss is the same as the loss reported in the optimization output under "Observed objective function value". 78 | 79 | %Visualize the optimized classifier. 80 | d = 0.02; 81 | [x1Grid,x2Grid] = meshgrid(min(cdata(:,1)):d:max(cdata(:,1)),... 82 | min(cdata(:,2)):d:max(cdata(:,2))); 83 | xGrid = [x1Grid(:),x2Grid(:)]; 84 | [~,scores] = predict(svmmod,xGrid); 85 | figure; 86 | h = nan(3,1); % Preallocation 87 | h(1:2) = gscatter(cdata(:,1),cdata(:,2),grp,'rg','+*'); 88 | hold on 89 | h(3) = plot(cdata(svmmod.IsSupportVector,1),... 90 | cdata(svmmod.IsSupportVector,2),'ko'); 91 | contour(x1Grid,x2Grid,reshape(scores(:,2),size(x1Grid)),[0 0],'k'); 92 | legend(h,{'-1','+1','Support Vectors'},'Location','Southeast'); 93 | axis equal 94 | hold off -------------------------------------------------------------------------------- /SVM-Classifier_Training(Custom-Kernel).m: -------------------------------------------------------------------------------- 1 | %Train SVM Classifier Using Custom Kernel 2 | 3 | %{ 4 | This example shows how to use a custom kernel function, such as the sigmoid kernel, to train SVM classifiers, 5 | and adjust custom kernel function parameters. 6 | 7 | Generate a random set of points within the unit circle. Label points in the first and third quadrants 8 | as belonging to the positive class, and those in the second and fourth quadrants in the negative class. 9 | %} 10 | rng(1); % For reproducibility 11 | n = 100; % Number of points per quadrant 12 | 13 | r1 = sqrt(rand(2*n,1)); % Random radii 14 | t1 = [pi/2*rand(n,1); (pi/2*rand(n,1)+pi)]; % Random angles for Q1 and Q3 15 | X1 = [r1.*cos(t1) r1.*sin(t1)]; % Polar-to-Cartesian conversion 16 | 17 | r2 = sqrt(rand(2*n,1)); 18 | t2 = [pi/2*rand(n,1)+pi/2; (pi/2*rand(n,1)-pi/2)]; % Random angles for Q2 and Q4 19 | X2 = [r2.*cos(t2) r2.*sin(t2)]; 20 | 21 | X = [X1; X2]; % Predictors 22 | Y = ones(4*n,1); 23 | Y(2*n + 1:end) = -1; % Labels 24 | 25 | %Plot the data. 
26 | figure;
27 | gscatter(X(:,1),X(:,2),Y);
28 | title('Scatter Diagram of Simulated Data')
29 | 
30 | %Write a function that accepts two matrices in the feature space as inputs, and transforms them into a Gram matrix using the sigmoid kernel. (In a script, local functions like this must appear at the end of the file; alternatively, save it as mysigmoid.m on the MATLAB path.)
31 | function G = mysigmoid(U,V)
32 | % Sigmoid kernel function with slope gamma and intercept c
33 | gamma = 1;
34 | c = -1;
35 | G = tanh(gamma*U*V' + c);
36 | end
37 | 
38 | %Train an SVM classifier using the sigmoid kernel function. It is good practice to standardize the data.
39 | 
40 | Mdl1 = fitcsvm(X,Y,'KernelFunction','mysigmoid','Standardize',true);
41 | 
42 | %Mdl1 is a ClassificationSVM classifier containing the estimated parameters.
43 | 
44 | %Plot the data, and identify the support vectors and the decision boundary.
45 | 
46 | % Compute the scores over a grid
47 | d = 0.02; % Step size of the grid
48 | [x1Grid,x2Grid] = meshgrid(min(X(:,1)):d:max(X(:,1)),...
49 | min(X(:,2)):d:max(X(:,2)));
50 | xGrid = [x1Grid(:),x2Grid(:)]; % The grid
51 | [~,scores1] = predict(Mdl1,xGrid); % The scores
52 | 
53 | figure;
54 | h(1:2) = gscatter(X(:,1),X(:,2),Y);
55 | hold on
56 | h(3) = plot(X(Mdl1.IsSupportVector,1),...
57 | X(Mdl1.IsSupportVector,2),'ko','MarkerSize',10);
58 | % Support vectors
59 | contour(x1Grid,x2Grid,reshape(scores1(:,2),size(x1Grid)),[0 0],'k');
60 | % Decision boundary
61 | title('Scatter Diagram with the Decision Boundary')
62 | legend({'-1','1','Support Vectors'},'Location','Best');
63 | hold off
64 | 
65 | %{
66 | You can adjust the kernel parameters in an attempt to improve the shape of the decision boundary. This might also
67 | decrease the within-sample misclassification rate, but you should first determine the out-of-sample misclassification rate.
68 | %}
69 | %Determine the out-of-sample misclassification rate by using 10-fold cross-validation
70 | CVMdl1 = crossval(Mdl1);
71 | misclass1 = kfoldLoss(CVMdl1);
72 | misclass1
73 | 
74 | %Write another sigmoid function, but set gamma = 0.5. (Again a local function; it belongs at the end of the script, or in its own file mysigmoid2.m.)
75 | function G = mysigmoid2(U,V)
76 | % Sigmoid kernel function with slope gamma and intercept c
77 | gamma = 0.5;
78 | c = -1;
79 | G = tanh(gamma*U*V' + c);
80 | end
81 | 
82 | %Train another SVM classifier using the adjusted sigmoid kernel. Plot the data and the decision region, and determine the out-of-sample misclassification rate.
83 | Mdl2 = fitcsvm(X,Y,'KernelFunction','mysigmoid2','Standardize',true);
84 | [~,scores2] = predict(Mdl2,xGrid);
85 | 
86 | figure;
87 | h(1:2) = gscatter(X(:,1),X(:,2),Y);
88 | hold on
89 | h(3) = plot(X(Mdl2.IsSupportVector,1),...
90 | X(Mdl2.IsSupportVector,2),'ko','MarkerSize',10);
91 | title('Scatter Diagram with the Decision Boundary')
92 | contour(x1Grid,x2Grid,reshape(scores2(:,2),size(x1Grid)),[0 0],'k');
93 | legend({'-1','1','Support Vectors'},'Location','Best');
94 | hold off
95 | 
96 | CVMdl2 = crossval(Mdl2);
97 | misclass2 = kfoldLoss(CVMdl2);
98 | misclass2
99 | 
100 | 
101 | %After the sigmoid slope adjustment, the new decision boundary seems to provide a better within-sample fit, and the cross-validation rate contracts by more than 66%.
--------------------------------------------------------------------------------
/SVM-Analyzation(Linear).m:
--------------------------------------------------------------------------------
1 | %Analyze Images Using Linear Support Vector Machines
2 | %{
3 | This example shows how to determine which quadrant of an image a shape occupies by training an error-correcting output codes (ECOC) model
4 | comprised of linear SVM binary learners.
This example also illustrates the disk-space consumption of ECOC models that store support vectors,
5 | their labels, and the estimated coefficients.
6 | %}
7 | 
8 | %Create the Data Set
9 | %{
10 | Randomly place a circle with radius five in a 50-by-50 image. Make 50,000 images. Create a label for each image indicating the quadrant that the circle occupies.
11 | Quadrant 1 is in the upper right, quadrant 2 is in the upper left, quadrant 3 is in the lower left, and quadrant 4 is in the lower right.
12 | The predictors are the intensities of each pixel.
13 | %}
14 | d = 50; % Height and width of the images in pixels
15 | n = 5e4; % Sample size
16 | 
17 | X = zeros(n,d^2); % Predictor matrix preallocation
18 | Y = zeros(n,1); % Label preallocation
19 | theta = 0:(1/d):(2*pi);
20 | r = 5; % Circle radius
21 | rng(1); % For reproducibility
22 | 
23 | for j = 1:n
24 | figmat = zeros(d); % Empty image
25 | c = datasample((r + 1):(d - r - 1),2); % Random circle center
26 | x = r*cos(theta) + c(1); % Make the circle
27 | y = r*sin(theta) + c(2);
28 | idx = sub2ind([d d],round(y),round(x)); % Convert to linear indexing
29 | figmat(idx) = 1; % Draw the circle
30 | X(j,:) = figmat(:); % Store the data
31 | Y(j) = (c(2) >= floor(d/2)) + 2*(c(2) < floor(d/2)) + ...
32 | (c(1) < floor(d/2)) + ...
33 | 2*((c(1) >= floor(d/2)) & (c(2) < floor(d/2))); % Determine the quadrant
34 | end
35 | 
36 | %Plot an observation.
37 | figure;
38 | imagesc(figmat);
39 | h = gca;
40 | h.YDir = 'normal';
41 | title(sprintf('Quadrant %d',Y(end)));
42 | 
43 | %Train the ECOC Model
44 | %Use a 25% holdout sample and specify the training and holdout sample indices.
45 | p = 0.25;
46 | CVP = cvpartition(Y,'Holdout',p); % Cross-validation data partition
47 | isIdx = training(CVP); % Training sample indices
48 | oosIdx = test(CVP); % Test sample indices
49 | 
50 | %{
51 | Create an SVM template that specifies storing the support vectors of the binary learners. Pass it and the training data to fitcecoc to train the model.
52 | Determine the training sample classification error.
53 | %}
54 | t = templateSVM('SaveSupportVectors',true);
55 | MdlSV = fitcecoc(X(isIdx,:),Y(isIdx),'Learners',t);
56 | isLoss = resubLoss(MdlSV)
57 | 
58 | %{
59 | MdlSV is a trained ClassificationECOC multiclass model. It stores the training data and the support vectors of each binary learner.
60 | For large data sets, such as those in image analysis, the model can consume a lot of memory.
61 | %}
62 | %Determine the amount of disk space that the ECOC model consumes.
63 | infoMdlSV = whos('MdlSV');
64 | mbMdlSV = infoMdlSV.bytes/1.049e6
65 | 
66 | %Improve Model Efficiency
67 | %{
68 | You can assess out-of-sample performance. You can also assess whether the model has been overfit with a compacted model
69 | that does not contain the support vectors, their related parameters, and the training data.
70 | %}
71 | 
72 | %Discard the support vectors and related parameters from the trained ECOC model. Then, discard the training data from the resulting model by using compact.
73 | Mdl = discardSupportVectors(MdlSV);
74 | CMdl = compact(Mdl);
75 | info = whos('Mdl','CMdl');
76 | [bytesCMdl,bytesMdl] = info.bytes;
77 | memReduction = 1 - [bytesMdl bytesCMdl]/infoMdlSV.bytes
78 | 
79 | %{
80 | In this case, discarding the support vectors reduces the memory consumption by about 3%. Compacting and discarding support vectors reduces the size by about 99.99%.
81 | 
82 | An alternative way to manage support vectors is to reduce their numbers during training by specifying a larger box constraint, such as 100.
83 | Though SVM models that use fewer support vectors are more desirable and consume less memory, increasing the value of the box constraint tends to increase the training time.
84 | %}
85 | 
86 | %Remove MdlSV and Mdl from the workspace.
87 | clear Mdl MdlSV;
88 | 
89 | %Assess Holdout Sample Performance
90 | 
91 | %Calculate the classification error of the holdout sample. Plot a sample of the holdout sample predictions.
92 | oosLoss = loss(CMdl,X(oosIdx,:),Y(oosIdx))
93 | yHat = predict(CMdl,X(oosIdx,:));
94 | nVec = 1:size(X,1);
95 | oosIdx = nVec(oosIdx);
96 | 
97 | figure;
98 | for j = 1:9
99 | subplot(3,3,j)
100 | imagesc(reshape(X(oosIdx(j),:),[d d]));
101 | h = gca;
102 | h.YDir = 'normal';
103 | title(sprintf('Quadrant: %d',yHat(j)))
104 | end
105 | text(-1.33*d,4.5*d + 1,'Predictions','FontSize',17)
106 | 
107 | %The model does not misclassify any holdout sample observations.
--------------------------------------------------------------------------------
/Color-Based_Segmentation(single).m:
--------------------------------------------------------------------------------
1 | %acquire image
2 | source = imread("vegetables.jpg");
3 | imshow(source);
4 | title("Vegetables");
5 | 
6 | %Calculate Sample Colors in L*a*b* Color Space for Each Region
7 | %{
8 | You can see six major colors in the image: the background color, red, green, purple, yellow, and magenta.
9 | The L*a*b* color space (also known as CIELAB or CIE L*a*b*) enables you to quantify these visual differences.
10 | 
11 | The L*a*b* color space is derived from the CIE XYZ tristimulus values. The L*a*b* space consists of a luminosity 'L*'
12 | or brightness layer, chromaticity layer 'a*' indicating where color falls along the red-green axis, and chromaticity layer 'b*'
13 | indicating where the color falls along the blue-yellow axis.
14 | 
15 | Your approach is to choose a small sample region for each color and to calculate each sample region's average color in 'a*b*' space.
16 | You will use these color markers to classify each pixel.
17 | %}
18 | 
19 | load regioncoordinates;
20 | 
21 | nColors = 6;
22 | sample_regions = false([size(source,1) size(source,2) nColors]);
23 | 
24 | for count = 1:nColors
25 | sample_regions(:,:,count) = roipoly(source,region_coordinates(:,1,count), ...
26 | region_coordinates(:,2,count));
27 | end
28 | 
29 | imshow(sample_regions(:,:,2))
30 | title('Sample Region for Red')
31 | 
32 | %Convert your source RGB image into an L*a*b* image using rgb2lab
33 | labImage = rgb2lab(source);
34 | 
35 | %Calculate the mean 'a*' and 'b*' value for each area that you extracted with roipoly.
36 | %These values serve as your color markers in 'a*b*' space
37 | AImage = labImage(:, :, 2);
38 | BImage = labImage(:, :, 3);
39 | 
40 | color_markers = zeros([nColors, 2]);
41 | 
42 | for count = 1:nColors
43 | color_markers(count,1) = mean2(AImage(sample_regions(:,:,count)));
44 | color_markers(count,2) = mean2(BImage(sample_regions(:,:,count)));
45 | end
46 | 
47 | %For example, the average color of the red sample region in 'a*b*' space is:
48 | fprintf('[%0.3f,%0.3f] \n',color_markers(2,1),color_markers(2,2));
49 | 
50 | %Classify Each Pixel Using the Nearest Neighbor Rule
51 | %{
52 | Each color marker now has an 'a*' and a 'b*' value. You can classify each pixel in the L*a*b* image by calculating the Euclidean distance
53 | between that pixel and each color marker.
The smallest distance will tell you that the pixel most closely matches that color marker.
54 | For example, if the distance between a pixel and the red color marker is the smallest, then the pixel would be labeled as a red pixel.
55 | 
56 | Create an array that contains your color labels, i.e., 0 = background, 1 = red, 2 = green, 3 = purple, 4 = magenta, and 5 = yellow.
57 | %}
58 | color_labels = 0:nColors-1;
59 | 
60 | %Initialize matrices to be used in the nearest neighbor classification
61 | AImage = double(AImage);
62 | BImage = double(BImage);
63 | distance = zeros([size(AImage), nColors]);
64 | 
65 | %Perform classification
66 | for count = 1:nColors
67 | distance(:,:,count) = ( (AImage - color_markers(count,1)).^2 + ...
68 | (BImage - color_markers(count,2)).^2 ).^0.5;
69 | end
70 | 
71 | [~,label] = min(distance,[],3);
72 | label = color_labels(label);
73 | clear distance;
74 | 
75 | %Display Results of Nearest Neighbor Classification
76 | %{
77 | The label matrix contains a color label for each pixel in the source image. Use the label matrix to separate objects in the original
78 | source image by color. Display the five segmented colors as a montage. Also display the background pixels in the image that are not
79 | classified as a color.
80 | %}
81 | rgb_label = repmat(label,[1 1 3]);
82 | segmented_images = zeros([size(source), nColors],'uint8');
83 | 
84 | for count = 1:nColors
85 | color = source;
86 | color(rgb_label ~= color_labels(count)) = 0;
87 | segmented_images(:,:,:,count) = color;
88 | end
89 | 
90 | montage({segmented_images(:,:,:,2),segmented_images(:,:,:,3) ...
91 | segmented_images(:,:,:,4),segmented_images(:,:,:,5) ...
92 | segmented_images(:,:,:,6),segmented_images(:,:,:,1)});
93 | title("Montage of Red, Green, Purple, Magenta, and Yellow Objects, and Background")
94 | 
95 | %Display 'a*' and 'b*' Values of the Labeled Colors
96 | %{
97 | You can see how well the nearest neighbor classification separated the different color populations by plotting the 'a*' and 'b*' values
98 | of pixels that were classified into separate colors. For display purposes, label each point with its color label.
99 | %}
100 | purple = [119/255 73/255 152/255];
101 | plot_labels = {'k', 'r', 'g', purple, 'm', 'y'};
102 | 
103 | figure
104 | for count = 1:nColors
105 | plot(AImage(label==count-1),BImage(label==count-1),'.','MarkerEdgeColor', ...
106 | plot_labels{count}, 'MarkerFaceColor', plot_labels{count});
107 | hold on;
108 | end
109 | 
110 | title('Scatterplot of the segmented pixels in ''a*b*'' space');
111 | xlabel('''a*'' values');
112 | ylabel('''b*'' values');
113 | %--------------------------------------------------------------------------%
114 | % Read the image and convert to L*a*b* color space
115 | I = imread('vegetables.jpg');
116 | Ilab = rgb2lab(I);
117 | % Extract a* and b* channels and reshape
118 | ab = double(Ilab(:,:,2:3));
119 | nrows = size(ab,1);
120 | ncols = size(ab,2);
121 | ab = reshape(ab,nrows*ncols,2);
122 | % Segmentation using k-means
123 | nColors = 6;
124 | [cluster_idx, cluster_center] = kmeans(ab,nColors,...
125 | 'distance', 'sqEuclidean', ...
126 | 'Replicates', 3);
127 | % Show the result
128 | pixel_labels = reshape(cluster_idx,nrows,ncols);
129 | imshow(pixel_labels,[]), title('image labeled by cluster index')
--------------------------------------------------------------------------------
/Color-Based_Segmentation(multi).m:
--------------------------------------------------------------------------------
1 | %acquire images
2 | source1 = imread("p1.jpg");
3 | source2 = imread("p2.jpg");
4 | source3 = imread("p3.jpg");
5 | subplot(1,3,1), imshow(source1);
6 | title("Parthenon 1");
7 | subplot(1,3,2), imshow(source2);
8 | title("Parthenon 2");
9 | subplot(1,3,3), imshow(source3);
10 | title("Parthenon 3");
11 | 
12 | %Calculate Sample Colors in L*a*b* Color Space for Each Region
13 | %{
14 | You can see six major colors in the image: the background color, red, green, purple, yellow, and magenta.
15 | The L*a*b* color space (also known as CIELAB or CIE L*a*b*) enables you to quantify these visual differences.
16 | 
17 | The L*a*b* color space is derived from the CIE XYZ tristimulus values. The L*a*b* space consists of a luminosity 'L*'
18 | or brightness layer, chromaticity layer 'a*' indicating where color falls along the red-green axis, and chromaticity layer 'b*'
19 | indicating where the color falls along the blue-yellow axis.
20 | 
21 | Your approach is to choose a small sample region for each color and to calculate each sample region's average color in 'a*b*' space.
22 | You will use these color markers to classify each pixel.
23 | %}
24 | 
25 | load regioncoordinates;
26 | 
27 | nColors = 6;
28 | sample_regions1 = false([size(source1,1) size(source1,2) nColors]);
29 | sample_regions2 = false([size(source2,1) size(source2,2) nColors]);
30 | sample_regions3 = false([size(source3,1) size(source3,2) nColors]);
31 | %Process the first image (source1) below; repeat the same steps for source2 and source3 (the original script left source/sample_regions undefined)
32 | for count = 1:nColors
33 | sample_regions1(:,:,count) = roipoly(source1,region_coordinates(:,1,count), ...
34 | region_coordinates(:,2,count));
35 | end
36 | 
37 | imshow(sample_regions1(:,:,2))
38 | title('Sample Region for Red')
39 | 
40 | %Convert your source RGB image into an L*a*b* image using rgb2lab
41 | labImage = rgb2lab(source1);
42 | 
43 | %Calculate the mean 'a*' and 'b*' value for each area that you extracted with roipoly.
44 | %These values serve as your color markers in 'a*b*' space
45 | AImage = labImage(:, :, 2);
46 | BImage = labImage(:, :, 3);
47 | 
48 | color_markers = zeros([nColors, 2]);
49 | 
50 | for count = 1:nColors
51 | color_markers(count,1) = mean2(AImage(sample_regions1(:,:,count)));
52 | color_markers(count,2) = mean2(BImage(sample_regions1(:,:,count)));
53 | end
54 | 
55 | %For example, the average color of the red sample region in 'a*b*' space is:
56 | fprintf('[%0.3f,%0.3f] \n',color_markers(2,1),color_markers(2,2));
57 | 
58 | %Classify Each Pixel Using the Nearest Neighbor Rule
59 | %{
60 | Each color marker now has an 'a*' and a 'b*' value. You can classify each pixel in the L*a*b* image by calculating the Euclidean distance
61 | between that pixel and each color marker. The smallest distance will tell you that the pixel most closely matches that color marker.
62 | For example, if the distance between a pixel and the red color marker is the smallest, then the pixel would be labeled as a red pixel.
63 | 
64 | Create an array that contains your color labels, i.e., 0 = background, 1 = red, 2 = green, 3 = purple, 4 = magenta, and 5 = yellow.
65 | %}
66 | color_labels = 0:nColors-1;
67 | 
68 | %Initialize matrices to be used in the nearest neighbor classification
69 | AImage = double(AImage);
70 | BImage = double(BImage);
71 | distance = zeros([size(AImage), nColors]);
72 | 
73 | %Perform classification
74 | for count = 1:nColors
75 | distance(:,:,count) = ( (AImage - color_markers(count,1)).^2 + ...
76 | (BImage - color_markers(count,2)).^2 ).^0.5;
77 | end
78 | 
79 | [~,label] = min(distance,[],3);
80 | label = color_labels(label);
81 | clear distance;
82 | 
83 | %Display Results of Nearest Neighbor Classification
84 | %{
85 | The label matrix contains a color label for each pixel in the source image. Use the label matrix to separate objects in the original
86 | source image by color. Display the five segmented colors as a montage. Also display the background pixels in the image that are not
87 | classified as a color.
88 | %}
89 | rgb_label = repmat(label,[1 1 3]);
90 | segmented_images = zeros([size(source1), nColors],'uint8');
91 | 
92 | for count = 1:nColors
93 | color = source1;
94 | color(rgb_label ~= color_labels(count)) = 0;
95 | segmented_images(:,:,:,count) = color;
96 | end
97 | 
98 | montage({segmented_images(:,:,:,2),segmented_images(:,:,:,3) ...
99 | segmented_images(:,:,:,4),segmented_images(:,:,:,5) ...
100 | segmented_images(:,:,:,6),segmented_images(:,:,:,1)});
101 | title("Montage of Red, Green, Purple, Magenta, and Yellow Objects, and Background")
102 | 
103 | %Display 'a*' and 'b*' Values of the Labeled Colors
104 | %{
105 | You can see how well the nearest neighbor classification separated the different color populations by plotting the 'a*' and 'b*' values
106 | of pixels that were classified into separate colors. For display purposes, label each point with its color label.
107 | %}
108 | purple = [119/255 73/255 152/255];
109 | plot_labels = {'k', 'r', 'g', purple, 'm', 'y'};
110 | 
111 | figure
112 | for count = 1:nColors
113 | plot(AImage(label==count-1),BImage(label==count-1),'.','MarkerEdgeColor', ...
114 | plot_labels{count}, 'MarkerFaceColor', plot_labels{count});
115 | hold on;
116 | end
117 | 
118 | title('Scatterplot of the segmented pixels in ''a*b*'' space');
119 | xlabel('''a*'' values');
120 | ylabel('''b*'' values');
121 | 
122 | %--------------------------------------------------------------------------%
123 | % Read the image and convert to L*a*b* color space
124 | I = imread('vegetables.jpg');
125 | Ilab = rgb2lab(I);
126 | % Extract a* and b* channels and reshape
127 | ab = double(Ilab(:,:,2:3));
128 | nrows = size(ab,1);
129 | ncols = size(ab,2);
130 | ab = reshape(ab,nrows*ncols,2);
131 | % Segmentation using k-means
132 | nColors = 4;
133 | [cluster_idx, cluster_center] = kmeans(ab,nColors,...
134 | 'distance', 'sqEuclidean', ...
135 | 'Replicates', 3);
136 | % Show the result
137 | pixel_labels = reshape(cluster_idx,nrows,ncols);
138 | imshow(pixel_labels,[]), title('image labeled by cluster index')
139 | 
Regularly sample orientations
9 | between [0,135] degrees in steps of 45 degrees. Sample wavelength in increasing powers of two starting
10 | from 4/sqrt(2) up to the hypotenuse length of the input image.
11 | %}
12 | isize = size(source);
13 | numRows = isize(1);
14 | numCols = isize(2);
15 | 
16 | wavelengthMin = 4/sqrt(2);
17 | wavelengthMax = hypot(numRows,numCols);
18 | n = floor(log2(wavelengthMax/wavelengthMin));
19 | wavelength = 2.^(0:(n-2)) * wavelengthMin;
20 | 
21 | deltaTheta = 45;
22 | orientation = 0:deltaTheta:(180-deltaTheta);
23 | 
24 | c = length(wavelength);
25 | r = length(orientation);
26 | 
27 | g = gabor(wavelength,orientation);
28 | 
29 | %Visualize the real part of the spatial convolution kernel of each Gabor filter in the array
30 | figure(1);
31 | subplot(c,r,1)
32 | for p = 1:length(g)
33 | subplot(c,r,p);
34 | imshow(real(g(p).SpatialKernel),[]);
35 | lambda = g(p).Wavelength;
36 | theta = g(p).Orientation;
37 | title(sprintf('Re[h(x,y)], \\lambda = %d, \\theta = %d',lambda,theta));
38 | end
39 | 
40 | %Display the magnitude results
41 | gabormag = imgaborfilt(source,g);
42 | outSize = size(gabormag);
43 | gm = reshape(gabormag,[outSize(1:2),1,outSize(3)]);
44 | figure(2), montage(gm,'DisplayRange',[]);
45 | title('Montage of gabor magnitude output images.');
46 | 
47 | %Display the magnitude calculated by the Gabor filter
48 | figure(3);
49 | subplot(c,r,1)
50 | for p = 1:length(g)
51 | [mag,phase] = imgaborfilt(source,g(p));
52 | subplot(c,r,p);
53 | imshow(mag,[])
54 | theta = g(p).Orientation;
55 | lambda = g(p).Wavelength;
56 | title(sprintf('Gabor magnitude\nOrientation=%d, Wavelength=%d',theta,lambda));
57 | end
58 | 
59 | %Display the phase calculated by the Gabor filter
60 | figure(4);
61 | subplot(c,r,1)
62 | for p = 1:length(g)
63 | [mag,phase] = imgaborfilt(source,g(p));
64 | subplot(c,r,p);
65 | imshow(phase,[]);
66 | theta = g(p).Orientation;
67 | lambda = g(p).Wavelength;
68 | title(sprintf('Gabor phase\nOrientation=%d, Wavelength=%d',theta,lambda));
69 | end
70 | 
71 | %Post-process the Gabor Magnitude Images into Gabor Features.
72 | %{
73 | To use Gabor magnitude responses as features for use in classification, some post-processing is required.
74 | This post processing includes Gaussian smoothing, adding additional spatial information to the feature set,
75 | reshaping our feature set to the form expected by the pca and kmeans functions, and normalizing the feature
76 | information to a common variance and mean.
77 | 
78 | Each Gabor magnitude image contains some local variations, even within well segmented regions of constant texture.
79 | These local variations will throw off the segmentation. We can compensate for these variations using simple
80 | Gaussian low-pass filtering to smooth the Gabor magnitude information. We choose a sigma that is matched
81 | to the Gabor filter that extracted each feature. We introduce a smoothing term K that controls how much smoothing
82 | is applied to the Gabor magnitude responses.
83 | %}
84 | for i = 1:length(g)
85 | sigma = 0.5*g(i).Wavelength;
86 | K = 3;
87 | gabormag(:,:,i) = imgaussfilt(gabormag(:,:,i),K*sigma);
88 | end
89 | 
90 | %{
91 | When constructing Gabor feature sets for classification, it is useful to add a map of spatial location information in both X and Y.
92 | This additional information allows the classifier to prefer groupings which are close together spatially.
93 | %}
94 | X = 1:numCols;
95 | Y = 1:numRows;
96 | [X,Y] = meshgrid(X,Y);
97 | featureSet = cat(3,gabormag,X);
98 | featureSet = cat(3,featureSet,Y);
99 | 
100 | %{
101 | Reshape data into a matrix X of the form expected by the kmeans function. Each pixel in the image grid is a separate datapoint,
102 | and each plane in the variable featureSet is a separate feature. In this example, there is a separate feature for each filter
103 | in the Gabor filter bank, plus two additional features from the spatial information that was added in the previous step.
104 | In total, there are 24 Gabor features and 2 spatial features for each pixel in the input image.
105 | %}
106 | numPoints = numRows*numCols;
107 | X = reshape(featureSet,numRows*numCols,[]);
108 | 
109 | %Normalize the features to be zero mean, unit variance.
110 | X = bsxfun(@minus, X, mean(X));
111 | X = bsxfun(@rdivide,X,std(X));
112 | 
113 | %{
114 | Visualize the feature set. To get a sense of what the Gabor magnitude features look like, Principal Component Analysis can be used
115 | to move from a 26-D representation of each pixel in the input image into a 1-D intensity value for each pixel.
116 | %}
117 | coeff = pca(X);
118 | feature2DImage = reshape(X*coeff(:,1),numRows,numCols);
119 | figure(5)
120 | imshow(feature2DImage,[])
121 | 
122 | %Classify Gabor Texture Features using kmeans
123 | %{
124 | Repeat k-means clustering five times to avoid local minima when searching for means that minimize objective function.
125 | The only prior information assumed in this example is how many distinct regions of texture are present in the image being segmented.
126 | There are two distinct regions in this case.
127 | %}
128 | L = kmeans(X,2,'Replicates',5);
129 | 
130 | %Visualize segmentation using label2rgb.
131 | L = reshape(L,[numRows numCols]);
132 | figure(6)
133 | imshow(label2rgb(L))
134 | 
135 | %{
136 | Visualize the segmented image using imshowpair. Examine the foreground and background images that result from the mask BW that is associated
137 | with the label matrix L.
138 | %}
139 | Aseg1 = zeros(size(source),'like',source);
140 | Aseg2 = zeros(size(source),'like',source);
141 | BW = L == 2;
142 | Aseg1(BW) = source(BW);
143 | Aseg2(~BW) = source(~BW);
144 | figure(7)
145 | imshowpair(Aseg1,Aseg2,'montage');
--------------------------------------------------------------------------------
/Gabor-Features_Texture_Extraction(rgb).m:
--------------------------------------------------------------------------------
1 | %read image
2 | img = imread('vegetables.jpg');
3 | 
4 | source = rgb2gray(img);
5 | 
6 | %Design Array of Gabor Filters
7 | %{
8 | Design an array of Gabor Filters which are tuned to different frequencies and orientations.
9 | The set of frequencies and orientations is designed to localize different, roughly orthogonal,
10 | subsets of frequency and orientation information in the input image. Regularly sample orientations
11 | between [0,135] degrees in steps of 45 degrees. Sample wavelength in increasing powers of two starting
12 | from 4/sqrt(2) up to the hypotenuse length of the input image.
13 | %} 14 | isize = size(source); 15 | numRows = isize(1); 16 | numCols = isize(2); 17 | 18 | wavelengthMin = 4/sqrt(2); 19 | wavelengthMax = hypot(numRows,numCols); 20 | n = floor(log2(wavelengthMax/wavelengthMin)); 21 | wavelength = 2.^(0:(n-2)) * wavelengthMin; 22 | 23 | deltaTheta = 45; 24 | orientation = 0:deltaTheta:(180-deltaTheta); 25 | 26 | c = length(wavelength); 27 | r = length(orientation); 28 | 29 | g = gabor(wavelength,orientation); 30 | 31 | %Visualize the real part of the spatial convolution kernel of each Gabor filter in the array 32 | figure(1); 33 | subplot(c,r,1) 34 | for p = 1:length(g) 35 | subplot(c,r,p); 36 | imshow(real(g(p).SpatialKernel),[]); 37 | lambda = g(p).Wavelength; 38 | theta = g(p).Orientation; 39 | title(sprintf('Re[h(x,y)], \\lambda = %d, \\theta = %d',lambda,theta)); 40 | end 41 | 42 | %Display the magnitude results 43 | gabormag = imgaborfilt(source,g); 44 | outSize = size(gabormag); 45 | gm = reshape(gabormag,[outSize(1:2),1,outSize(3)]); 46 | figure(2), montage(gm,'DisplayRange',[]); 47 | title('Montage of gabor magnitude output images.'); 48 | 49 | %Display the magnitude calculated by the Gabor filter 50 | figure(3); 51 | subplot(c,r,1) 52 | for p = 1:length(g) 53 | [mag,phase] = imgaborfilt(source,g(p)); 54 | subplot(c,r,p); 55 | imshow(mag,[]) 56 | theta = g(p).Orientation; 57 | lambda = g(p).Wavelength; 58 | title(sprintf('Gabor magnitude\nOrientation=%d, Wavelength=%d',theta,lambda)); 59 | end 60 | 61 | %Display the phase calculated by the Gabor filter 62 | figure(4); 63 | subplot(c,r,1) 64 | for p = 1:length(g) 65 | [mag,phase] = imgaborfilt(source,g(p)); 66 | subplot(c,r,p); 67 | imshow(phase,[]); 68 | theta = g(p).Orientation; 69 | lambda = g(p).Wavelength; 70 | title(sprintf('Gabor phase\nOrientation=%d, Wavelength=%d',theta,lambda)); 71 | end 72 | 73 | %Post-process the Gabor Magnitude Images into Gabor Features. 74 | %{ 75 | To use Gabor magnitude responses as features for use in classification, some post-processing is required. 76 | This post processing includes Gaussian smoothing, adding additional spatial information to the feature set, 77 | reshaping our feature set to the form expected by the pca and kmeans functions, and normalizing the feature 78 | information to a common variance and mean. 79 | 80 | Each Gabor magnitude image contains some local variations, even within well segmented regions of constant texture. 81 | These local variations will throw off the segmentation. We can compensate for these variations using simple 82 | Gaussian low-pass filtering to smooth the Gabor magnitude information. We choose a sigma that is matched 83 | to the Gabor filter that extracted each feature. We introduce a smoothing term K that controls how much smoothing 84 | is applied to the Gabor magnitude responses. 85 | %} 86 | for i = 1:length(g) 87 | sigma = 0.5*g(i).Wavelength; 88 | K = 3; 89 | gabormag(:,:,i) = imgaussfilt(gabormag(:,:,i),K*sigma); 90 | end 91 | 92 | %{ 93 | When constructing Gabor feature sets for classification, it is useful to add a map of spatial location information in both X and Y. 94 | This additional information allows the classifier to prefer groupings which are close together spatially. 95 | %} 96 | X = 1:numCols; 97 | Y = 1:numRows; 98 | [X,Y] = meshgrid(X,Y); 99 | featureSet = cat(3,gabormag,X); 100 | featureSet = cat(3,featureSet,Y); 101 | 102 | %{ 103 | Reshape data into a matrix X of the form expected by the kmeans function. 
104 | and each plane in the variable featureSet is a separate feature. In this example, there is a separate feature for each filter
105 | in the Gabor filter bank, plus two additional features from the spatial information that was added in the previous step.
106 | In total, there is one Gabor feature per filter in the bank plus 2 spatial features for each pixel; the exact number of Gabor features depends on the size of the input image.
107 | %}
108 | numPoints = numRows*numCols;
109 | X = reshape(featureSet,numPoints,[]);
110 | 
111 | %Normalize the features to be zero mean, unit variance.
112 | X = bsxfun(@minus, X, mean(X));
113 | X = bsxfun(@rdivide,X,std(X));
114 | 
115 | %{
116 | Visualize the feature set. To get a sense of what the Gabor magnitude features look like, Principal Component Analysis can be used
117 | to project the high-dimensional feature representation of each pixel in the input image down to a 1-D intensity value for each pixel.
118 | %}
119 | coeff = pca(X);
120 | feature2DImage = reshape(X*coeff(:,1),numRows,numCols);
121 | figure(5)
122 | imshow(feature2DImage,[])
123 | 
124 | %Classify Gabor Texture Features using kmeans
125 | %{
126 | Repeat k-means clustering five times to avoid local minima when searching for the means that minimize the objective function.
127 | The only prior information assumed in this example is how many distinct regions of texture are present in the image being segmented.
128 | There are two distinct regions in this case.
129 | %}
130 | L = kmeans(X,2,'Replicates',5);
131 | 
132 | %Visualize the segmentation using label2rgb.
133 | L = reshape(L,[numRows numCols]);
134 | figure(6)
135 | imshow(label2rgb(L))
136 | 
137 | %{
138 | Visualize the segmented image using imshowpair. Examine the foreground and background images that result from the mask BW that is associated
139 | with the label matrix L.
140 | %}
141 | Aseg1 = zeros(size(img),'like',img);
142 | Aseg2 = zeros(size(img),'like',img);
143 | BW = L == 2;
144 | BW = repmat(BW,[1 1 3]);
145 | Aseg1(BW) = img(BW);
146 | Aseg2(~BW) = img(~BW);
147 | figure(7)
148 | imshowpair(Aseg1,Aseg2,'montage');
--------------------------------------------------------------------------------
/Superpixel_Texture-Extraction(Gabor-Features).m:
--------------------------------------------------------------------------------
1 | %read image
2 | source = imread('vegetables.jpg');
3 | 
4 | igray = rgb2gray(source);
5 | 
6 | %Design Array of Gabor Filters
7 | %{
8 | Design an array of Gabor Filters which are tuned to different frequencies and orientations.
9 | The set of frequencies and orientations is designed to localize different, roughly orthogonal,
10 | subsets of frequency and orientation information in the input image. Regularly sample orientations
11 | between [0,135] degrees in steps of 45 degrees. Sample wavelength in increasing powers of two starting
12 | from 4/sqrt(2) up to the hypotenuse length of the input image.
13 | %}
14 | isize = size(source);
15 | numRows = isize(1);
16 | numCols = isize(2);
17 | 
18 | wavelengthMin = 4/sqrt(2);
19 | wavelengthMax = hypot(numRows,numCols);
20 | n = floor(log2(wavelengthMax/wavelengthMin));
21 | wavelength = 2.^(0:(n-2)) * wavelengthMin;
22 | 
23 | deltaTheta = 45;
24 | orientation = 0:deltaTheta:(180-deltaTheta);
25 | 
26 | g = gabor(wavelength,orientation);
27 | 
28 | %{
29 | Extract Gabor magnitude features from the grayscale image. When working with Gabor filters, it is common to work
30 | with the magnitude response of each filter. Gabor magnitude response is also sometimes referred to as "Gabor Energy".
31 | Each MxN Gabor magnitude output image in gabormag(:,:,ind) is the output of the corresponding Gabor filter g(ind).
32 | %}
33 | gabormag = imgaborfilt(igray,g);
34 | 
35 | %Post-process the Gabor Magnitude Images into Gabor Features.
36 | %{
37 | To use Gabor magnitude responses as features for use in classification, some post-processing is required.
38 | This post-processing includes Gaussian smoothing, adding additional spatial information to the feature set,
39 | reshaping our feature set to the form expected by the pca and kmeans functions, and normalizing the feature
40 | information to a common variance and mean.
41 | 
42 | Each Gabor magnitude image contains some local variations, even within well-segmented regions of constant texture.
43 | These local variations will throw off the segmentation. We can compensate for these variations using simple
44 | Gaussian low-pass filtering to smooth the Gabor magnitude information. We choose a sigma that is matched
45 | to the Gabor filter that extracted each feature. We introduce a smoothing term K that controls how much smoothing
46 | is applied to the Gabor magnitude responses.
47 | %}
48 | for i = 1:length(g)
49 | sigma = 0.5*g(i).Wavelength;
50 | K = 3;
51 | gabormag(:,:,i) = imgaussfilt(gabormag(:,:,i),K*sigma);
52 | end
53 | 
54 | %{
55 | When constructing Gabor feature sets for classification, it is useful to add a map of spatial location information in both X and Y.
56 | This additional information allows the classifier to prefer groupings which are close together spatially.
57 | %}
58 | X = 1:numCols;
59 | Y = 1:numRows;
60 | [X,Y] = meshgrid(X,Y);
61 | featureSet = cat(3,gabormag,X);
62 | featureSet = cat(3,featureSet,Y);
63 | 
64 | %{
65 | Reshape data into a matrix X of the form expected by the kmeans function. Each pixel in the image grid is a separate datapoint,
66 | and each plane in the variable featureSet is a separate feature. In this example, there is a separate feature for each filter
67 | in the Gabor filter bank, plus two additional features from the spatial information that was added in the previous step.
68 | In total, there is one Gabor feature per filter in the bank plus 2 spatial features for each pixel; the exact number of Gabor features depends on the size of the input image.
69 | %}
70 | numPoints = numRows*numCols;
71 | X = reshape(featureSet,numPoints,[]);
72 | 
73 | %Normalize the features to be zero mean, unit variance.
74 | X = bsxfun(@minus, X, mean(X));
75 | X = bsxfun(@rdivide,X,std(X));
76 | 
77 | %{
78 | Visualize the feature set. To get a sense of what the Gabor magnitude features look like, Principal Component Analysis can be used
79 | to project the high-dimensional feature representation of each pixel in the input image down to a 1-D intensity value for each pixel.
80 | %}
81 | coeff = pca(X);
82 | feature2DImage = reshape(X*coeff(:,1),numRows,numCols);
83 | figure
84 | imshow(feature2DImage,[])
85 | 
86 | %Classify Gabor Texture Features using kmeans
87 | %{
88 | Repeat k-means clustering five times to avoid local minima when searching for the means that minimize the objective function.
89 | The only prior information assumed in this example is how many distinct regions of texture are present in the image being segmented.
90 | There are two distinct regions in this case.
91 | %}
92 | L = kmeans(X,2,'Replicates',5);
93 | 
94 | %Visualize the segmentation using label2rgb.
95 | L = reshape(L,[numRows numCols]);
96 | figure
97 | imshow(label2rgb(L))
98 | 
99 | %{
100 | Visualize the segmented image using imshowpair. Examine the foreground and background images that result from the mask BW that is associated
101 | with the label matrix L.
102 | %}
103 | Aseg1 = zeros(size(source),'like',source);
104 | Aseg2 = zeros(size(source),'like',source);
105 | BW = L == 2;
106 | BW = repmat(BW,[1 1 3]);
107 | Aseg1(BW) = source(BW);
108 | Aseg2(~BW) = source(~BW);
109 | figure
110 | imshowpair(Aseg1,Aseg2,'montage');
111 | 
112 | %--------------------------------------------------------------------------------------------------------------------------------------------%
113 | %Construct Gabor Filter Array and Apply to Input Image
114 | source = imread('pg.jpg');
115 | 
116 | %Create an array of Gabor filters
117 | wavelength = 20;
118 | orientation = [0 45 90 135];
119 | g = gabor(wavelength,orientation);
120 | 
121 | %Apply the filters to the image (imgaborfilt expects a 2-D grayscale image)
122 | outMag = imgaborfilt(source,g);
123 | 
124 | %Display the results
125 | outSize = size(outMag);
126 | outMag = reshape(outMag,[outSize(1:2),1,outSize(3)]);
127 | figure, montage(outMag,'DisplayRange',[]);
128 | title('Montage of Gabor magnitude output images.');
129 | 
130 | %--------------------------------------------------------------------------------------------------------------------------------------------%
131 | %Construct Gabor Filter Array and Visualize Wavelength and Orientation
132 | %Create array of Gabor filters
133 | g = gabor([5 10],[0 90]);
134 | 
135 | %Visualize the real part of the spatial convolution kernel of each Gabor filter in the array
136 | figure;
137 | subplot(2,2,1)
138 | for p = 1:length(g)
139 | subplot(2,2,p);
140 | imshow(real(g(p).SpatialKernel),[]);
141 | lambda = g(p).Wavelength;
142 | theta = g(p).Orientation;
143 | title(sprintf('Re[h(x,y)], \\lambda = %d, \\theta = %d',lambda,theta));
144 | end
145 | 
146 | %--------------------------------------------------------------------------------------------------------------------------------------------%
147 | %Apply Single Gabor Filter to Input Image
148 | %Read image into the workspace
149 | source = imread('p1.jpg');
150 | 
151 | %Convert image to grayscale.
152 | source = rgb2gray(source);
153 | 
154 | %Apply Gabor filter to image.
155 | wavelength = 4;
156 | orientation = 90;
157 | [mag,phase] = imgaborfilt(source,wavelength,orientation);
158 | 
159 | %Display the original image with plots of the magnitude and phase calculated by the Gabor filter
160 | figure
161 | subplot(1,3,1);
162 | imshow(source);
163 | title('Original Image');
164 | subplot(1,3,2);
165 | imshow(mag,[])
166 | title('Gabor magnitude');
167 | subplot(1,3,3);
168 | imshow(phase,[]);
169 | title('Gabor phase');
170 | 
171 | %--------------------------------------------------------------------------------------------------------------------------------------------%
172 | %Apply Array of Gabor Filters to Input Image
173 | %Read image into the workspace
174 | source = imread('pg.jpg');
175 | 
176 | %Create an array of Gabor filters, called a filter bank. This filter bank contains two orientations and two wavelengths.
177 | gaborArray = gabor([4 8],[0 90]);
178 | 
179 | %Apply the filters to the input image (grayscale input assumed)
180 | gaborMag = imgaborfilt(source,gaborArray);
181 | 
182 | %Display the results. The figure shows the magnitude response for each filter.
183 | figure
184 | subplot(2,2,1);
185 | for p = 1:length(gaborArray)
186 | subplot(2,2,p)
187 | imshow(gaborMag(:,:,p),[]);
188 | theta = gaborArray(p).Orientation;
189 | lambda = gaborArray(p).Wavelength;
190 | title(sprintf('Orientation=%d, Wavelength=%d',theta,lambda));
191 | end
--------------------------------------------------------------------------------
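
Because the wavelength ladder in the texture-extraction scripts is derived from the image diagonal, the size of the Gabor filter bank, and with it the per-pixel feature dimensionality, depends on the input image. A minimal worked check, assuming a hypothetical 600-by-800 input rather than any of the repository's images:

%Worked check (hypothetical 600-by-800 input; not one of the repository images)
numRows = 600; numCols = 800;
wavelengthMin = 4/sqrt(2);                     %about 2.83 pixels
wavelengthMax = hypot(numRows,numCols);        %image diagonal, 1000 pixels here
n = floor(log2(wavelengthMax/wavelengthMin));  %n = 8 for this size
wavelength = 2.^(0:(n-2)) * wavelengthMin;     %7 wavelengths
orientation = 0:45:135;                        %4 orientations
numFilters = numel(wavelength)*numel(orientation) %28 Gabor features, plus 2 spatial, per pixel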
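
Note also that Superpixel_Texture-Extraction(Gabor-Features).m clusters individual pixels and never actually pools the texture features per superpixel. One plausible pooling step, sketched under the assumption that source and the smoothed gabormag from that script are still in the workspace (spLabels, numSp and spFeatures are illustrative names, not from the repository):

%Hypothetical sketch, not part of the repository: average each smoothed
%Gabor magnitude plane over the pixels of every SLIC superpixel.
labImage = rgb2lab(source);                      %source is the RGB image read by the script
spLabels = superpixels(labImage,100,'IsInputLab',true,'Method','slic');
numSp = max(spLabels(:));                        %number of superpixels actually produced
numFilters = size(gabormag,3);
spFeatures = zeros(numSp,numFilters);            %one row of mean Gabor responses per superpixel
for f = 1:numFilters
    plane = gabormag(:,:,f);
    %accumarray averages the f-th filter response over each superpixel's pixels
    spFeatures(:,f) = accumarray(spLabels(:),double(plane(:)),[numSp 1],@mean);
end
%spFeatures can then replace the per-pixel matrix X as input to kmeans or an SVM.

Mean pooling is only the simplest choice; swapping @mean for @median, or accumulating a small histogram per superpixel, fits the same loop.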