├── Feature selection based on structured sparsity ├── Discriminative least squares regression for multiclass classification and feature selection │ ├── LEAST_~1.M │ ├── OPTIMI~1.M │ ├── OPTIMI~2.M │ ├── SOLVE_~1.M │ ├── TRAIN_~1.M │ ├── data_XX.mat │ ├── data_Y_id.mat │ ├── demo_for_feature_selection.m │ └── readme_for_feature_selection.txt ├── Efficient and robust feature selection via joint l2,1-norms minimization │ ├── FeatureSelection.m │ ├── L21R21.m │ ├── L21R21_inv.m │ └── readme.txt ├── Exact top-k feature selection via l2,0-norm constraint │ └── FSRobust_ALM.m ├── Joint embedding learning and sparse regression │ ├── compute_W.m │ ├── compute_Y.m │ └── jelsr.m ├── L1 │ ├── ProducePoNe.m │ └── ProducePoNeV1.m ├── L21-Norm Regularized Discriminative Feature Selection for Unsupervised Learning │ ├── LocalDisAna.m │ ├── LquadR21_reg.m │ └── call.m └── l2,1 regularized correntropy for robust feature selection │ ├── CRFS.m │ ├── RFS.m │ ├── SY2MY.m │ └── test.m ├── Multi-View Feature Selection for Heterogeneous Face Recognition ├── MvFS.m └── README.txt ├── README.txt └── Student's t-test └── fsTtest.m /Feature selection based on structured sparsity/Discriminative least squares regression for multiclass classification and feature selection/LEAST_~1.M: -------------------------------------------------------------------------------- 1 | % CORRESPONDENCE INFORMATION 2 | % This code is written by Shiming Xiang and Feiping Nie 3 | 4 | % Shiming Xiang : National Laboratory of Pattern Recognition, Institute of Automation, Academy of Sciences, Beijing 100190 5 | % Email: smxiang@gmail.com 6 | % Feiping Nie: FIT Building 3-120 Room, Tsinghua Univeristy, Beijing, China, 100084 7 | % Email: feipingnie@gmail.com 8 | 9 | 10 | % Comments and bug reports are welcome. Email to feipingnie@gmail.com OR smxiang@gmail.com 11 | 12 | % WORK SETTING: 13 | % This code has been compiled and tested by using matlab 7.0 14 | 15 | % For more detials, please see the manuscript: 16 | % Shiming Xiang, Feiping Nie, Gaofeng Meng, Chunhong Pan, and Changshui Zhang. 17 | % Discriminative Least Squares Regression for Multiclass Classification and Feature Selection. 18 | % IEEE Transactions on Neural Netwrok and Learning System (T-NNLS), volumn 23, issue 11, pages 1738-1754, 2012. 19 | 20 | % Last Modified: Nov. 2, 2012, By Shiming Xiang 21 | 22 | 23 | 24 | function [W, b] = least_squares_regression(X, Y, gamma) 25 | 26 | % X: each column is a data point 27 | % Y: each column is an target data point: such as [0, 1, 0, ..., 0]' 28 | % gamma: a positive scalar 29 | 30 | % return 31 | % W and b 32 | % here we use the following equivalent model: y = W' x + b 33 | 34 | [dim, N] = size (X); 35 | [dim_reduced, N] = size(Y); 36 | 37 | % first step, remove the mean! 38 | XMean = mean(X')'; % is a column vector 39 | XX = X - repmat(XMean, 1, N); % each column is a data point 40 | 41 | W = []; 42 | b = []; 43 | if dim < N 44 | 45 | % W = pinv( XX * XX' + gamma * eye(dim)) * (XX * Y'); 46 | % Note that the above sentence can be repalced by the following sentences. So, it is more fast. 
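% (The two branches of this if/else compute the same ridge solution W = (XX*XX' + gamma*I) \ (XX*Y').
% When dim >= N, the push-through identity (XX*XX' + gamma*I) \ XX = XX / (XX'*XX + gamma*I) lets the else branch below
% solve an N-by-N linear system instead of a dim-by-dim one, which is cheaper when there are fewer samples than features.)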
47 | t0 = XX * XX' + gamma * eye(dim); 48 | W = t0 \ (XX * Y'); 49 | 50 | b = Y - W' * X; % each column is an error vector 51 | b = mean(b')'; % now b is a column vector 52 | 53 | else 54 | t0 = XX' * XX + gamma * eye(N); 55 | W = XX * (t0 \ Y'); 56 | 57 | 58 | b = Y - W' * X; % each column is an error vector 59 | b = mean(b')'; % now b is a column vector 60 | 61 | end 62 | 63 | 64 | 65 | -------------------------------------------------------------------------------- /Feature selection based on structured sparsity/Discriminative least squares regression for multiclass classification and feature selection/OPTIMI~1.M: -------------------------------------------------------------------------------- 1 | % CORRESPONDENCE INFORMATION 2 | % This code is written by Shiming Xiang and Feiping Nie 3 | 4 | % Shiming Xiang : National Laboratory of Pattern Recognition, Institute of Automation, Academy of Sciences, Beijing 100190 5 | % Email: smxiang@gmail.com 6 | % Feiping Nie: Department of Computer Science and Engineering, University of Texas at Arlington, Arlington, TX 76019 USA 7 | % Email: feipingnie@gmail.com 8 | 9 | 10 | % Comments and bug reports are welcome. Email to feipingnie@gmail.com OR smxiang@gmail.com 11 | 12 | % WORK SETTING: 13 | % This code has been compiled and tested by using matlab 7.0 14 | 15 | % For more detials, please see the manuscript: 16 | % Shiming Xiang, Feiping Nie, Gaofeng Meng, Chunhong Pan, and Changshui Zhang. 17 | % Discriminative Least Squares Regression for Multiclass Classification and Feature Selection. 18 | % IEEE Transactions on Neural Netwrok and Learning System (T-NNLS), volumn 23, issue 11, pages 1738-1754, 2012. 19 | 20 | % Last Modified: Nov. 2, 2012, By Shiming Xiang 21 | 22 | function [X, obj] = optimize_L21(A, Y, lambda_para) 23 | %% 21-norm loss with 21-norm regularization 24 | %: each row is a data point 25 | 26 | % Note that: 27 | % min_X || A X - Y||_21 + lambda_para * ||X||_21 is equivalent to the following problem: 28 | 29 | % min_X ||X||_21 + ||E||_21 30 | % s.t. A X + lambda_para*E = Y 31 | 32 | 33 | 34 | 35 | 36 | [m n] = size(A); 37 | [X, obj] = solve_iteratively_L21([A, lambda_para * eye(m)], Y); 38 | 39 | X = X(1:n, :); 40 | obj = lambda_para * obj; 41 | -------------------------------------------------------------------------------- /Feature selection based on structured sparsity/Discriminative least squares regression for multiclass classification and feature selection/OPTIMI~2.M: -------------------------------------------------------------------------------- 1 | % CORRESPONDENCE INFORMATION 2 | % This code is written by Shiming Xiang and Feiping Nie 3 | 4 | % Shiming Xiang : National Laboratory of Pattern Recognition, Institute of Automation, Academy of Sciences, Beijing 100190 5 | % Email: smxiang@gmail.com 6 | % Feiping Nie: Department of Computer Science and Engineering, University of Texas at Arlington, Arlington, TX 76019 USA 7 | % Email: feipingnie@gmail.com 8 | 9 | 10 | % Comments and bug reports are welcome. Email to feipingnie@gmail.com OR smxiang@gmail.com 11 | 12 | % WORK SETTING: 13 | % This code has been compiled and tested by using matlab 7.0 14 | 15 | % For more detials, please see the manuscript: 16 | % Shiming Xiang, Feiping Nie, Gaofeng Meng, Chunhong Pan, and Changshui Zhang. 17 | % Discriminative Least Squares Regression for Multiclass Classification and Feature Selection. 18 | % IEEE Transactions on Neural Netwrok and Learning System (T-NNLS), volumn 23, issue 11, pages 1738-1754, 2012. 
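% (Context for the routine defined below: train_feature_selection relaxes the regression targets to Y' + B.*M with M >= 0,
% so the M-step should make every entry of the residual P - B.*M as small as possible. Since each entry of B is +1 or -1,
% (P - B.*M).^2 equals (B.*P - M).^2 elementwise, and the nonnegative minimizer of each entry is M = max(B.*P, 0),
% which is exactly what this function computes.)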
19 | 20 | % Last Modified: Nov. 2, 2012, By Shiming Xiang 21 | 22 | 23 | function M = optimize_m_matrix(P, B) 24 | 25 | % P: the residual matrix, each row is a residual vector 26 | % B: construction matrix related to class label, each row is a constructtion vector 27 | 28 | % return: The optimized matrix 29 | 30 | N = size(P, 1); 31 | num_class = size(B, 2); 32 | 33 | M1 = zeros(N, num_class); 34 | 35 | M = max( B .* P, M1); 36 | 37 | return; -------------------------------------------------------------------------------- /Feature selection based on structured sparsity/Discriminative least squares regression for multiclass classification and feature selection/SOLVE_~1.M: -------------------------------------------------------------------------------- 1 | % CORRESPONDENCE INFORMATION 2 | % This code is written by Shiming Xiang and Feiping Nie 3 | 4 | % Shiming Xiang : National Laboratory of Pattern Recognition, Institute of Automation, Academy of Sciences, Beijing 100190 5 | % Email: smxiang@gmail.com 6 | % Feiping Nie: Department of Computer Science and Engineering, University of Texas at Arlington, Arlington, TX 76019 USA 7 | % Email: feipingnie@gmail.com 8 | 9 | 10 | % Comments and bug reports are welcome. Email to feipingnie@gmail.com OR smxiang@gmail.com 11 | 12 | % WORK SETTING: 13 | % This code has been compiled and tested by using matlab 7.0 14 | 15 | % For more detials, please see the manuscript: 16 | % Shiming Xiang, Feiping Nie, Gaofeng Meng, Chunhong Pan, and Changshui Zhang. 17 | % Discriminative Least Squares Regression for Multiclass Classification and Feature Selection. 18 | % IEEE Transactions on Neural Netwrok and Learning System (T-NNLS), volumn 23, issue 11, pages 1738-1754, 2012. 19 | 20 | % Last Modified: Nov. 2, 2012, By Shiming Xiang 21 | 22 | 23 | function [X, obj] = solve_iteratively_L21(A, Y) 24 | %% minimize 21-norm with equality constraints 25 | 26 | % Solve the following equivalent problem: 27 | % min_X ||X||_21 28 | % s.t. A X = Y 29 | 30 | n = size(A, 2); 31 | m = size(A, 1); 32 | 33 | ITER = 10; 34 | obj = zeros(ITER,1); 35 | d = ones(n, 1); % initialization 36 | 37 | epsilon = 10^-5; 38 | obj1 = -1000; 39 | 40 | for iter = 1 : ITER 41 | D = spdiags(d, 0, n, n); 42 | lambda = ((A * D) * A') \ Y; 43 | X = D *(A' * lambda); 44 | d = sqrt(sum(X .* X,2)) + 0.00000001; 45 | % d = sqrt(sum(X .* X,2)); 46 | 47 | obj(iter) = sum(d); 48 | 49 | if abs( obj(iter) - obj1) < epsilon 50 | break; 51 | end 52 | obj1 = obj(iter); 53 | end 54 | 55 | 56 | return; -------------------------------------------------------------------------------- /Feature selection based on structured sparsity/Discriminative least squares regression for multiclass classification and feature selection/TRAIN_~1.M: -------------------------------------------------------------------------------- 1 | % CORRESPONDENCE INFORMATION 2 | % This code is written by Shiming Xiang and Feiping Nie 3 | 4 | % Shiming Xiang : National Laboratory of Pattern Recognition, Institute of Automation, Academy of Sciences, Beijing 100190 5 | % Email: smxiang@gmail.com 6 | % Feiping Nie: Department of Computer Science and Engineering, University of Texas at Arlington, Arlington, TX 76019 USA 7 | % Email: feipingnie@gmail.com 8 | 9 | 10 | % Comments and bug reports are welcome. 
Email to feipingnie@gmail.com OR smxiang@gmail.com 11 | 12 | % WORK SETTING: 13 | % This code has been compiled and tested by using matlab 7.0 14 | 15 | % For more detials, please see the manuscript: 16 | % Shiming Xiang, Feiping Nie, Gaofeng Meng, Chunhong Pan, and Changshui Zhang. 17 | % Discriminative Least Squares Regression for Multiclass Classification and Feature Selection. 18 | % IEEE Transactions on Neural Netwrok and Learning System (T-NNLS), volumn 23, issue 11, pages 1738-1754, 2012. 19 | 20 | % Last Modified: Nov. 2, 2012, By Shiming Xiang 21 | 22 | function [W, b] = train_feature_selection(X, class_id, lambda_para, u_para, iters, epsilon) 23 | 24 | % X: each column is a data point 25 | % class_id: a column vector, a column vector, such as [1, 2, 3, 4, 1, 3, 2, ...]' 26 | % lambda_para: The lambda parameter in Equation (29), in the paper 27 | % u_para: the parameter c in the theorem, it is a very large positive number, generally, it is infinite. 28 | % iters: the largest number of iterations 29 | % epsilon: For convergence control 30 | 31 | [dim, N] = size(X); 32 | num_class = max(class_id); 33 | 34 | Y = zeros(num_class, N); 35 | B = -1 * ones(N, num_class); 36 | 37 | 38 | for i = 1 : N 39 | Y( class_id(i), i) = 1.0; 40 | B(i, class_id(i) ) = 1.0; 41 | end 42 | 43 | [W0, b0] = least_squares_regression(X, Y, lambda_para); % Here we use the soultion to the standard least squares regreesion as the initial solution 44 | W = W0; 45 | b = b0; 46 | 47 | XX = [X; u_para * ones(1, N)]; % construct the new training data by using homogeneous coordinates 48 | 49 | for i = 1: iters 50 | 51 | %first, optimize matrix M. 52 | P = X' * W0 + ones(N, 1) * b0' - Y'; % each row is a residual vector 53 | 54 | M = optimize_m_matrix(P, B); 55 | 56 | % optimize W and b by using the theorem 57 | R = Y' + (B .* M); 58 | [TT, obj] = optimize_L21(XX', R, lambda_para); 59 | W = TT(1:dim, :); 60 | b = u_para * TT(dim + 1, :)'; % According to Lemma 1 61 | 62 | if ( trace ( (W - W0)' * (W - W0) ) + ( b - b0)' * (b - b0) < epsilon) % check if it reaches the convergence point 63 | break; 64 | end 65 | 66 | W0 = W; 67 | b0 = b; 68 | 69 | end 70 | 71 | 72 | return; -------------------------------------------------------------------------------- /Feature selection based on structured sparsity/Discriminative least squares regression for multiclass classification and feature selection/data_XX.mat: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/guijiejie/Feature-selection/5ddfde14288c6a3f3f5e2937a5d67d26d49f21ea/Feature selection based on structured sparsity/Discriminative least squares regression for multiclass classification and feature selection/data_XX.mat -------------------------------------------------------------------------------- /Feature selection based on structured sparsity/Discriminative least squares regression for multiclass classification and feature selection/data_Y_id.mat: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/guijiejie/Feature-selection/5ddfde14288c6a3f3f5e2937a5d67d26d49f21ea/Feature selection based on structured sparsity/Discriminative least squares regression for multiclass classification and feature selection/data_Y_id.mat -------------------------------------------------------------------------------- /Feature selection based on structured sparsity/Discriminative least squares regression for multiclass classification and feature 
selection/demo_for_feature_selection.m: -------------------------------------------------------------------------------- 1 | % CORRESPONDENCE INFORMATION 2 | % This code is written by Shiming Xiang and Feiping Nie 3 | 4 | % Shiming Xiang : National Laboratory of Pattern Recognition, Institute of Automation, Academy of Sciences, Beijing 100190 5 | % Email: smxiang@gmail.com 6 | % Feiping Nie: Department of Computer Science and Engineering, University of Texas at Arlington, Arlington, TX 76019 USA 7 | % Email: feipingnie@gmail.com 8 | 9 | 10 | % Comments and bug reports are welcome. Email to feipingnie@gmail.com OR smxiang@gmail.com 11 | 12 | % WORK SETTING: 13 | % This code has been compiled and tested by using matlab 7.0 and R2010a 14 | 15 | % For more detials, please see the manuscript: 16 | % Shiming Xiang, Feiping Nie, Gaofeng Meng, Chunhong Pan, and Changshui Zhang. 17 | % Discriminative Least Squares Regression for Multiclass Classification and Feature Selection. 18 | % IEEE Transactions on Neural Netwrok and Learning System (T-NNLS), volumn 23, issue 11, pages 1738-1754, 2012. 19 | 20 | % Last Modified: Nov. 2, 2012, By Shiming Xiang 21 | 22 | 23 | % ========================================================================= 24 | % A Demo Example to run the code for feature selection 25 | % ========================================================================= 26 | 27 | 28 | load data_XX; % the training data, each column is a data point 29 | load data_Y_id; %class IDs: a column vector, a column vector, such as [1, 2, 3, 4, 1, 3, 2, ...]' 30 | 31 | lambda_para = 1.0; 32 | u_para = 1000; 33 | iters = 30; 34 | epsilon = 0.0001; 35 | 36 | 37 | % The first step: Train the model 38 | tic 39 | [W, b] = train_feature_selection(XX, Y_id, lambda_para, u_para, iters, epsilon); 40 | toc 41 | 42 | % The second step: Select the features and output the selected features 43 | 44 | WW = W .^ 2; 45 | W_weight = sum(WW, 2); % sum the element row-by-row 46 | [Weight, index_sorted_features] = sort(-W_weight); % sort them from the largest to the smallest 47 | 48 | num_selected_features = 10; % for example, we want to select ten features among all of the source features 49 | 50 | % output the features 51 | index_features_finally_seelcted = index_sorted_features(1 : num_selected_features); 52 | 53 | % perform other tasks, ........... 54 | 55 | 56 | return; 57 | 58 | 59 | -------------------------------------------------------------------------------- /Feature selection based on structured sparsity/Discriminative least squares regression for multiclass classification and feature selection/readme_for_feature_selection.txt: -------------------------------------------------------------------------------- 1 | % ======================================================================================================== 2 | % CORRESPONDENCE INFORMATION 3 | % ======================================================================================================== 4 | 5 | % This code is written by Shiming Xiang and Feiping Nie 6 | 7 | % Shiming Xiang : National Laboratory of Pattern Recognition, Institute of Automation, Academy of Sciences, Beijing 100190 8 | % Email: smxiang@gmail.com 9 | % Feiping Nie: Department of Computer Science and Engineering, University of Texas at Arlington, Arlington, TX 76019 USA 10 | % Email: feipingnie@gmail.com 11 | 12 | 13 | % Comments and bug reports are welcome. 
Email to feipingnie@gmail.com OR smxiang@gmail.com 14 | 15 | % WORK SETTING: 16 | % This code has been compiled and tested by using matlab 7.0 and R2012a 17 | 18 | % For more detials, please see the manuscript: 19 | % Shiming Xiang, Feiping Nie, Gaofeng Meng, Chunhong Pan, and Changshui Zhang. 20 | % Discriminative Least Squares Regression for Multiclass Classification and Feature Selection. 21 | % IEEE Transactions on Neural Netwrok and Learning System (T-NNLS), volumn 23, issue 11, pages 1738-1754, 2012. 22 | 23 | % Last Modified: Nov. 2, 2012, By Shiming Xiang 24 | 25 | 26 | % ======================================================================================================== 27 | % How TO RUN THE CODE 28 | % ======================================================================================================== 29 | 30 | % Please derectly run the matlab file "demo_for_feature_selection.m" -------------------------------------------------------------------------------- /Feature selection based on structured sparsity/Efficient and robust feature selection via joint l2,1-norms minimization/FeatureSelection.m: -------------------------------------------------------------------------------- 1 | function feature_idx = FeatureSelection(X, Y, feature_num, r) 2 | % X: d*n training data matrix, each column is a data point 3 | % Y: n*c label matrix 4 | % feature_num: selected feature number 5 | % r: parameter 6 | % feature_idx: selected feature index 7 | 8 | % Ref: Feiping Nie, Heng Huang, Xiao Cai, Chris Ding. 9 | % Efficient and Robust Feature Selection via Joint L21-Norms Minimization. 10 | % Advances in Neural Information Processing Systems 23 (NIPS), 2010. 11 | 12 | 13 | W = L21R21_inv(X', Y, r); 14 | [dumb idx] = sort(sum(W.*W,2),'descend'); 15 | feature_idx = idx(1:feature_num); 16 | -------------------------------------------------------------------------------- /Feature selection based on structured sparsity/Efficient and robust feature selection via joint l2,1-norms minimization/L21R21.m: -------------------------------------------------------------------------------- 1 | function [X, obj]=L21R21(A, Y, r) 2 | %% 21-norm loss with 21-norm regularization 3 | 4 | %% Problem 5 | % 6 | % min_X || A X - Y||_21 + r * ||X||_21 is equivalent to: 7 | % 8 | % min_X ||X||_21 + ||E||_21 9 | % s.t. A X + r*E = Y 10 | 11 | % Ref: Feiping Nie, Heng Huang, Xiao Cai, Chris Ding. 12 | % Efficient and Robust Feature Selection via Joint L21-Norms Minimization. 13 | % Advances in Neural Information Processing Systems 23 (NIPS), 2010. 14 | 15 | 16 | 17 | 18 | 19 | [m n] = size(A); 20 | 21 | [X, obj] = O21EC_inv([A, r*eye(m)], Y); 22 | X = X(1:n,:); 23 | obj = r*obj; 24 | 25 | 26 | 27 | function [X, obj]=O21EC_inv(A, Y) 28 | %% minimize 21-norm with equality constraints 29 | % the row of A should be smaller than the column of A 30 | 31 | %% Problem 32 | % 33 | % min_X ||X||_21 34 | % s.t. 
A X = Y 35 | 36 | 37 | 38 | [n] = size(A,2); 39 | 40 | ITER = 50; 41 | obj = zeros(ITER,1); 42 | d = ones(n,1); % initialization 43 | for iter = 1:ITER 44 | D = spdiags(d,0,n,n); 45 | lambda = (A*D*A')\Y; 46 | X = D*(A'*lambda); 47 | d = sqrt(sum(X.*X,2)); 48 | 49 | obj(iter) = sum(d); 50 | end; 51 | -------------------------------------------------------------------------------- /Feature selection based on structured sparsity/Efficient and robust feature selection via joint l2,1-norms minimization/L21R21_inv.m: -------------------------------------------------------------------------------- 1 | function [X, obj]=L21R21_inv(A, Y, r, X0) 2 | %% 21-norm loss with 21-norm regularization 3 | 4 | %% Problem 5 | % 6 | % min_X || A X - Y||_21 + r * ||X||_21 7 | 8 | % Ref: Feiping Nie, Heng Huang, Xiao Cai, Chris Ding. 9 | % Efficient and Robust Feature Selection via Joint L21-Norms Minimization. 10 | % Advances in Neural Information Processing Systems 23 (NIPS), 2010. 11 | 12 | 13 | 14 | NIter = 50; 15 | [m n] = size(A); 16 | if nargin < 4 17 | d = ones(n,1); 18 | d1 = ones(m,1); 19 | else 20 | Xi = sqrt(sum(X0.*X0,2)); 21 | d = 2*Xi; 22 | AX = A*X0-Y; 23 | Xi1 = sqrt(sum(AX.*AX,2)+eps); 24 | d1 = 0.5./Xi1; 25 | end; 26 | 27 | if m>n 28 | for iter = 1:NIter 29 | D = spdiags(d,0,n,n); 30 | D1 = spdiags(d1,0,m,m); 31 | DAD = D*A'*D1; 32 | X = (DAD*A+r*eye(n))\(DAD*Y); 33 | 34 | Xi = sqrt(sum(X.*X,2)); 35 | d = 2*Xi; 36 | 37 | AX = A*X-Y; 38 | Xi1 = sqrt(sum(AX.*AX,2)+eps); 39 | d1 = 0.5./Xi1; 40 | 41 | obj(iter) = sum(Xi1) + r*sum(Xi); 42 | end; 43 | else 44 | for iter = 1:NIter 45 | D = spdiags(d,0,n,n); 46 | D1 = spdiags(d1,0,m,m); 47 | DAD = D*A'*D1; 48 | X = DAD*((A*DAD+r*eye(m))\Y); 49 | 50 | Xi = sqrt(sum(X.*X,2)); 51 | d = 2*Xi; 52 | 53 | AX = A*X-Y; 54 | Xi1 = sqrt(sum(AX.*AX,2)+eps); 55 | d1 = 0.5./Xi1; 56 | 57 | obj(iter) = sum(Xi1) + r*sum(Xi); 58 | end; 59 | end; 60 | 1; -------------------------------------------------------------------------------- /Feature selection based on structured sparsity/Efficient and robust feature selection via joint l2,1-norms minimization/readme.txt: -------------------------------------------------------------------------------- 1 | L21R21: 2 | The code of the algorthm described in the NIPS 2010 paper 3 | 4 | L21R21_inv: 5 | The code of another algorthm to solve the same problem -------------------------------------------------------------------------------- /Feature selection based on structured sparsity/Exact top-k feature selection via l2,0-norm constraint/FSRobust_ALM.m: -------------------------------------------------------------------------------- 1 | % min_{||W||_20=k} ||X'*W+1*b'-Y||_21 2 | function [feature_idx, W, b, obj] = FSRobust_ALM(X, Y, k, mu, rho, NITER) 3 | % X: d*n data matrix, each column is a data point 4 | % Y: n*c label matrix, Y(i,j)=1 if xi is labeled to j, and Y(i,j)=0 otherwise 5 | % k: number of selected features 6 | % mu, rho: parameters in the ALM optimization method 7 | % NITER: iteration number 8 | % feature_idx: indices of selected features 9 | % W: d*c embedding matrix 10 | % b: c*1 bias vector 11 | % obj: objective values in the iterations 12 | 13 | % Ref: 14 | % Xiao Cai, Feiping Nie, Heng Huang. 15 | % Exact Top-k Feature Selection via l2,0-Norm Constraint. 16 | % The 23rd International Joint Conference on Artificial Intelligence (IJCAI), 2013. 
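% A minimal usage sketch (synthetic sizes; the mu, rho and NITER values below are illustrative choices, not settings from the paper):
%   X = rand(80, 60);                        % 80 features, 60 samples
%   y = randi(3, 60, 1);                     % class ids in {1, 2, 3}
%   Y = full(sparse((1:60)', y, 1, 60, 3));  % 60 x 3 label indicator matrix
%   [feature_idx, W, b, obj] = FSRobust_ALM(X, Y, 10, 1e-3, 1.05, 50);
%   plot(obj);                               % objective values over the ALM iterations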
17 | 18 | 19 | obj = zeros(NITER,1); 20 | 21 | [d, n] = size(X); 22 | c = size(Y,2); 23 | Xm = X-mean(X,2)*ones(1,n); 24 | 25 | Lambda = zeros(d,c); 26 | Sigma = zeros(n,c); 27 | V = rand(d,c); 28 | E = rand(n,c); 29 | W = V; 30 | 31 | inXX = Xm*inv(Xm'*Xm+eye(n)); 32 | for iter = 1:NITER 33 | inmu = 1/mu; 34 | tem = Y+E-inmu*Sigma; 35 | b = mean(tem)'; 36 | V1 = (V-inmu*Lambda+Xm*tem); W = V1 - inXX*(Xm'*V1); 37 | %Wg = XX*W - (V-inmu*Lambda+Xm*tem); st = trace(Wg'*Wg)/trace(Wg'*(XX*Wg)); W = W - st*Wg; 38 | 39 | WL = W+inmu*Lambda; 40 | w = sum(WL.*WL,2); 41 | [~, idx] = sort(-w); 42 | V = zeros(d,c); 43 | V(idx(1:k),:) = WL(idx(1:k),:); 44 | 45 | XW = Xm'*W+ones(n,1)*b'-Y; 46 | XWY = XW+inmu*Sigma; 47 | for i = 1:n 48 | w = XWY(i,:); 49 | la = sqrt(w*w'); 50 | lam = 0; 51 | if la > inmu 52 | lam = 1-inmu/la; 53 | elseif la < -inmu 54 | lam = 1+inmu/la; 55 | end; 56 | E(i,:) = lam*w; 57 | end; 58 | 59 | Lambda = Lambda + mu*(W-V); 60 | Sigma = Sigma + mu*(XW-E); 61 | mu = min(10^10,rho*mu); 62 | 63 | err = Xm'*V+ones(n,1)*b'-Y; 64 | obj(iter) = sum(sqrt(sum(err.*err,2))); 65 | end; 66 | 67 | feature_idx = sort(idx(1:k)); 68 | -------------------------------------------------------------------------------- /Feature selection based on structured sparsity/Joint embedding learning and sparse regression/compute_W.m: -------------------------------------------------------------------------------- 1 | function [W] = compute_W(W,data,D_mhalf) 2 | 3 | [nSmp,nFea] = size(data); 4 | 5 | %%%%%%%%%%%%%%%%%%%% Normalize W 6 | if nSmp < 5000 7 | tmpD_mhalf = repmat(D_mhalf,1,nSmp); 8 | W = (tmpD_mhalf.*W).*tmpD_mhalf'; 9 | clear tmpD_mhalf; 10 | else 11 | [i_idx,j_idx,v_idx] = find(W); 12 | v1_idx = zeros(size(v_idx)); 13 | for i=1:length(v_idx) 14 | v1_idx(i) = v_idx(i)*D_mhalf(i_idx(i))*D_mhalf(j_idx(i)); 15 | end 16 | W = sparse(i_idx,j_idx,v1_idx); 17 | clear i_idx j_idx v_idx v1_idx 18 | end 19 | W = (W+W')/2; -------------------------------------------------------------------------------- /Feature selection based on structured sparsity/Joint embedding learning and sparse regression/compute_Y.m: -------------------------------------------------------------------------------- 1 | function Y = compute_Y(data, W, ReducedDim, D_mhalf) 2 | 3 | [nSmp,nFea] = size(data); 4 | 5 | dimMatrix = size(W,2); 6 | if (dimMatrix > 500 & ReducedDim < dimMatrix/10) 7 | option = struct('disp',0); 8 | [Y, eigvalue] = eigs(W,ReducedDim,'la',option); 9 | eigvalue = diag(eigvalue); 10 | else 11 | W = full(W); 12 | [Y, eigvalue] = eig(W); 13 | eigvalue = diag(eigvalue); 14 | 15 | [junk, index] = sort(-eigvalue); 16 | eigvalue = eigvalue(index); 17 | Y = Y(:,index); 18 | if ReducedDim < length(eigvalue) 19 | Y = Y(:, 1:ReducedDim); 20 | eigvalue = eigvalue(1:ReducedDim); 21 | end 22 | end 23 | 24 | eigIdx = find(abs(eigvalue) < 1e-6); 25 | eigvalue (eigIdx) = []; 26 | Y (:,eigIdx) = []; 27 | 28 | nGotDim = length(eigvalue); 29 | 30 | idx = 1; 31 | while(abs(eigvalue(idx)-1) < 1e-12) 32 | idx = idx + 1; 33 | if idx > nGotDim 34 | break; 35 | end 36 | end 37 | idx = idx - 1; 38 | 39 | if(idx > 1) 40 | % more than one eigenvector of 1 eigenvalue 41 | u = zeros(size(Y,1),idx); 42 | d_m = 1./D_mhalf; 43 | cc = 1/norm(d_m); 44 | u(:,1) = cc./D_mhalf; 45 | 46 | bDone = 0; 47 | for i = 1:idx 48 | if abs(Y(:,i)' * u(:,1) - 1) < 1e-14 49 | Y(:,i) = Y(:,1); 50 | Y(:,1) = u(:,1); 51 | bDone = 1; 52 | end 53 | end 54 | 55 | if ~bDone 56 | for i = 2:idx 57 | u(:,i) = Y(:,i); 58 | for j= 1:i-1 59 | u(:,i) = u(:,i) - (u(:,j)' * 
Y(:,i))*u(:,j); 60 | end 61 | u(:,i) = u(:,i)/norm(u(:,i)); 62 | end 63 | Y(:,1:idx) = u; 64 | end 65 | end 66 | 67 | if nGotDim < 5000 68 | Y = repmat(D_mhalf,1,nGotDim).*Y; 69 | else 70 | for k = 1:nGotDim 71 | Y(:,k) = Y(:,k).*D_mhalf; 72 | end 73 | end 74 | 75 | Y(:,1) = []; -------------------------------------------------------------------------------- /Feature selection based on structured sparsity/Joint embedding learning and sparse regression/jelsr.m: -------------------------------------------------------------------------------- 1 | function [W_compute, Y, obj] = jelsr(data, W_ori, ReducedDim,alpha,beta) 2 | 3 | %%%%%%%% Input: data: nSmp*nFea; 4 | %%% W_ori: The original local similarity matrix 5 | %%% ReducedDim: the dimensionality for low dimensionality 6 | %%% embedding $Y$ 7 | %%% alpha and beta ar two parameters 8 | 9 | [nSmp,nFea] = size(data); 10 | 11 | %%%%%%%%%%%%%%%%%%% Normalization of W_ori 12 | D_mhalf = full(sum(W_ori,2).^-.5); 13 | W = compute_W(W_ori,data,D_mhalf); 14 | 15 | %%%%%%%%%%%%%%%%%% Eigen_decomposition 16 | Y = compute_Y(data,W, ReducedDim, D_mhalf); 17 | if issparse(data) 18 | data = [data ones(size(data,1),1)]; 19 | [nSmp,nFea] = size(data); 20 | else 21 | sampleMean = mean(data); 22 | data = (data - repmat(sampleMean,nSmp,1)); 23 | end 24 | 25 | %%% To minimize squared loss with L21 normalization 26 | %%%%%%%%%%%% Initialization 27 | AA = data'*data; 28 | Ay = data'*Y; 29 | W_compute = (AA+alpha*eye(nFea))\Ay; 30 | d = sqrt(sum(W_compute.*W_compute,2)); 31 | 32 | itermax = 20; 33 | obj = zeros(itermax,1); 34 | 35 | for iter = 1:itermax 36 | %%%%%%%%%%%%%%%%%%% Fix D to updata W_compute, Y 37 | D = 2*spdiags(d,0,nFea,nFea); 38 | %%%%%%%%%%%%%%%% To updata Y 39 | A = (D*data'*data+alpha*eye(nFea)); 40 | Temp = A\(D*data'); 41 | Temp = data*Temp; 42 | Temp = W_ori-beta*eye(nSmp)+beta*Temp; 43 | 44 | %%%%% Normalization 45 | Temp = compute_W(Temp,data,D_mhalf); 46 | %%%%% Eigen_decomposition 47 | Y = compute_Y(data,Temp, ReducedDim, D_mhalf); 48 | 49 | %%%%%%%%%%%%%%%%% To updata W 50 | B = D*data'*Y; 51 | W_compute = A\B; 52 | 53 | %%%%%%%%%%%%%%%%%% Fix W and update D 54 | d = sqrt(sum(W_compute.*W_compute,2)); 55 | 56 | end 57 | end 58 | 59 | -------------------------------------------------------------------------------- /Feature selection based on structured sparsity/L1/ProducePoNe.m: -------------------------------------------------------------------------------- 1 | function [y,A] = ProducePoNe(gnd,fea); 2 | % Produce positive and negative examples 3 | % This code is written by Jie Gui (guijie@ustc.edu). 
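% (What this helper returns: each column of A is a pairwise difference fea(:,j) - fea(:,k); y is +1 for within-class
% pairs and -1 for between-class pairs, and the larger of the two pair sets is randomly subsampled so the two labels
% are balanced. The resulting (y, A) pair is presumably intended as input to an l1-regularized classifier, whose
% weight vector then scores the original features.)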
4 | % V1 --July 25th, 2014 5 | % V2 --March 24th, 2015 6 | [d,n] = size(fea); 7 | nClass = max(gnd);%length(unique(gnd)); 8 | nEachClass = zeros(nClass,1); 9 | nPos = 0; %the number of positive examples 10 | nNeg = 0; %the number of negative examples 11 | 12 | for i=1:nClass 13 | index = find(gnd == i); 14 | nEachClass(i,1) = length(index); 15 | end 16 | 17 | for i=1:nClass 18 | nPos = nPos+(nEachClass(i,1)*(nEachClass(i,1)-1))/2; 19 | nNeg = nNeg+nEachClass(i,1)*(n-nEachClass(i,1)); 20 | end 21 | 22 | PosExm = zeros(d,nPos);%positive examples 23 | NegExm = zeros(d,nNeg);%negative examples 24 | nPosTemp = 0; 25 | nNegTemp = 0; 26 | 27 | for j = 1:n 28 | for k = (j+1):n 29 | temp_fea = fea(:,j) - fea(:,k); 30 | if gnd(j)==gnd(k) 31 | nPosTemp = nPosTemp+1; 32 | PosExm(:,nPosTemp) = temp_fea; 33 | else 34 | nNegTemp = nNegTemp+1; 35 | NegExm(:,nNegTemp) = temp_fea; 36 | end 37 | end 38 | end 39 | 40 | if nPos <= nNeg 41 | b = randperm(nNeg); 42 | b = b(1:nPos); 43 | NegExm = NegExm(:,b); 44 | A = [PosExm,NegExm]; 45 | y = ones(nPos*2,1); 46 | y((nPos+1):(nPos*2),:) = -1; 47 | else 48 | b = randperm(nPos); 49 | b = b(1:nNeg); 50 | PosExm = PosExm(:,b); 51 | A = [PosExm,NegExm]; 52 | y = ones(nNeg*2,1); 53 | y((nNeg+1):(nNeg*2),:) = -1; 54 | end 55 | -------------------------------------------------------------------------------- /Feature selection based on structured sparsity/L1/ProducePoNeV1.m: -------------------------------------------------------------------------------- 1 | function [y,A] = ProducePoNe(gnd,fea); 2 | % Produce positive and negative examples 3 | % This code is written by Jie Gui (guijie@ustc.edu). 4 | % V1 --July 25th, 2014 5 | % V2 --March 24th, 2015 6 | [d,n] = size(fea); 7 | nClass = length(unique(gnd)); 8 | nEachClass = zeros(nClass,1); 9 | nPos = 0; %the number of positive examples 10 | nNeg = 0; %the number of negative examples 11 | 12 | for i=1:nClass 13 | index = find(gnd == i); 14 | nEachClass(i,1) = length(index); 15 | end 16 | 17 | for i=1:nClass 18 | nPos = nPos+(nEachClass(i,1)*(nEachClass(i,1)-1))/2; 19 | nNeg = nNeg+nEachClass(i,1)*(n-nEachClass(i,1)); 20 | end 21 | 22 | PosExm = zeros(d,nPos);%positive examples 23 | NegExm = zeros(d,nNeg);%negative examples 24 | nPosTemp = 0; 25 | nNegTemp = 0; 26 | 27 | for j = 1:n 28 | for k = (j+1):n 29 | temp_fea = fea(:,j) - fea(:,k); 30 | if gnd(j)==gnd(k) 31 | nPosTemp = nPosTemp+1; 32 | PosExm(:,nPosTemp) = temp_fea; 33 | else 34 | nNegTemp = nNegTemp+1; 35 | NegExm(:,nNegTemp) = temp_fea; 36 | end 37 | end 38 | end 39 | 40 | if nPos <= nNeg 41 | b = randperm(nNeg); 42 | b = b(1:nPos); 43 | NegExm = NegExm(:,b); 44 | A = [PosExm,NegExm]; 45 | y = ones(nPos*2,1); 46 | y((nPos+1):(nPos*2),:) = -1; 47 | else 48 | b = randperm(nPos); 49 | b = b(1:nNeg); 50 | PosExm = PosExm(:,b); 51 | A = [PosExm,NegExm]; 52 | y = ones(nNeg*2,1); 53 | y((nNeg+1):(nNeg*2),:) = -1; 54 | end -------------------------------------------------------------------------------- /Feature selection based on structured sparsity/L21-Norm Regularized Discriminative Feature Selection for Unsupervised Learning/LocalDisAna.m: -------------------------------------------------------------------------------- 1 | function L = LocalDisAna(X, para) 2 | % unsupervised local discriminative analysis 3 | % each column is a data 4 | 5 | 6 | 7 | [D, n] = size(X); 8 | 9 | if isfield(para, 'k') 10 | k = para.k+1; 11 | else 12 | k = 16; 13 | end; 14 | if isfield(para, 'lamda') 15 | lamda = para.lamda; 16 | else 17 | lamda = 1000; 18 | end; 19 | 20 | Lc = eye(k) - 
1/k*ones(k); 21 | A = spalloc(n*k,n*k,5*n*k); 22 | S = spalloc(n,n*k,5*n*k); 23 | for i = 1:n 24 | dis = repmat(X(:,i),1,n) - X; 25 | dis = sum(dis.*dis); 26 | [dumb, nnidx] = sort(dis); 27 | Xi = X(:,nnidx(1:k)); 28 | Xi = Xi*Lc; 29 | if D > k 30 | Ai = inv(lamda*eye(k) + Xi'*Xi); 31 | Ai = Lc*Ai*Lc; 32 | else 33 | Ai = Lc - lamda*Xi'*inv(eye(D) + lamda*Xi*Xi')*Xi; 34 | end; 35 | lidx = (i-1)*k+1:(i-1)*k+k; 36 | A(lidx, lidx) = Ai; 37 | S(nnidx(1:k),lidx) = eye(k); 38 | end; 39 | 40 | L = S*A*S'; 41 | 42 | 43 | 44 | 45 | -------------------------------------------------------------------------------- /Feature selection based on structured sparsity/L21-Norm Regularized Discriminative Feature Selection for Unsupervised Learning/LquadR21_reg.m: -------------------------------------------------------------------------------- 1 | function [X, obj]=LquadR21_reg(A, k, r, X0) 2 | %% quadratic loss with 21-norm regularization 3 | 4 | 5 | % 6 | % min_{X'*X=I} Tr(X'*A*X) + r * ||X||_21 7 | 8 | 9 | NIter = 36; 10 | [m n] = size(A); 11 | if nargin < 4 12 | d = ones(n,1); 13 | else 14 | Xi = sqrt(sum(X0.*X0,2)+eps); 15 | d = 0.5./(Xi); 16 | end; 17 | 18 | for iter = 1:NIter 19 | D = diag(d); 20 | M = A+r*D; 21 | M = max(M,M'); 22 | [evec eval] = eig(M); 23 | eval = diag(eval); 24 | [temp idx] = sort(eval); 25 | X = evec(:,idx(1:k)); 26 | 27 | Xi = sqrt(sum(X.*X,2)+eps); 28 | d = 0.5./(Xi); 29 | 30 | obj(iter) = trace(X'*A*X) + r*sum(Xi); 31 | end; 32 | 33 | -------------------------------------------------------------------------------- /Feature selection based on structured sparsity/L21-Norm Regularized Discriminative Feature Selection for Unsupervised Learning/call.m: -------------------------------------------------------------------------------- 1 | % The UDFS feature selection algorithm 2 | % Ref: 3 | % L21-Norm Regularized Discriminative Feature Selection for Unsupervised Learning. 4 | % Yi Yang, Heng Tao Shen, Zhigang Ma, Zi Huang and Xiaofang Zhou. 5 | % International Joint Conferences on Artificial Intelligence 2011, (IJCAI-2011). 6 | % contact: yiyang@cs.cmu.edu 7 | % 8 | % X: the input data matrix; 9 | % X_2: the output selected feature matrix; 10 | % fea_num: the number of selected features 11 | % para.k: the number of knn for local discriminative analysis 12 | % regu: the regularization parameter 13 | 14 | 15 | 16 | 17 | M = LocalDisAna(X', para); 18 | % unsupervised local discriminative analysis 19 | 20 | A = X'*L*X; 21 | [W, obj]=LquadR21_reg(A, class_num, regu); 22 | 23 | score= sum(W.*W,2); 24 | [res, idx] = sort(score,'descend'); 25 | X_2 = X (:,idx(1:fea_num)); 26 | 27 | -------------------------------------------------------------------------------- /Feature selection based on structured sparsity/l2,1 regularized correntropy for robust feature selection/CRFS.m: -------------------------------------------------------------------------------- 1 | %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% 2 | %Description 3 | % Robust feature selection via L21 reguarlized correntropy 4 | % feature selection via L21-norm 5 | % Robust learning via Correntropy 6 | %Input 7 | % Data d*n data matrix 8 | % label n*1 label vector 9 | % lambda regularization parameter 10 | %Output 11 | % W d*c projection matrix 12 | % feaind d*1 selected feature index vector 13 | % dd d*1 weight vector 14 | % T CPU time 15 | %Reference 16 | % Ran He, Tieniu Tan, Liang Wang and Wei-Shi Zheng. 17 | % L21 Regularized Correntropy for Robust Feature Selection. In IEEE CVPR,2012. 
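% (Sketch of the alternating scheme implemented below: in each pass every sample receives a correntropy weight
% exp(-r_i/(4*mean(r))), where r_i is its squared residual, so badly-fit (outlier) samples are down-weighted, while
% weight2 = 2./sqrt(sum(W.^2, 2)) re-weights the rows of W as in iterative L2,1-norm minimization, driving whole
% feature rows toward zero; the two weightings are alternated for a fixed small number of passes.)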
18 | %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% 19 | function [W,feaind,dd,T] = CRFS(Data,label,lambda) 20 | T = cputime; 21 | Y = SY2MY(label); 22 | Y(find(Y==-1))=0; 23 | 24 | 25 | [dim,num]=size(Data); 26 | weight = ones(1,num); 27 | weight2 = ones(dim,1); 28 | iter =5; 29 | 30 | for i=1:iter 31 | X1 = Data.*repmat(weight,dim,1); 32 | W = (X1*Data'+lambda*diag(weight2))\(X1*Y); 33 | %%%%%%%%%%%%distance 34 | vt = (Data'*W-Y)'; 35 | weight=sum(vt.^2); 36 | weight = exp(-weight/(4*mean(weight))); 37 | 38 | dd = W.*W; 39 | weight2 = 2./sqrt(sum(dd')+0.0000001); 40 | end 41 | 42 | dd = sqrt(sum(dd')); 43 | [v,feaind]=sort(dd,'descend'); 44 | T = cputime -T; 45 | -------------------------------------------------------------------------------- /Feature selection based on structured sparsity/l2,1 regularized correntropy for robust feature selection/RFS.m: -------------------------------------------------------------------------------- 1 | %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% 2 | %Description 3 | % Robust feature selection via compound norms minization (L21 and frobenius norms) 4 | % min_{U,E} ||U||_{21} + ||E||_F s.t. X^TU+\lambda E=Y 5 | % smooth relaxation 6 | % min_{U,E} \sum_i {\sqrt{\varepsilon+||u^i||_2^2}} + ||E||_F s.t. X^T U + \lambda E =Y 7 | %Input 8 | % Data d*n data matrix 9 | % label n*1 label vector 10 | % lambda regularization parameter 11 | %Output 12 | % W d*c projection matrix 13 | % feaind d*1 selected feature index vector 14 | % dd d*1 weight vector 15 | % T1 CPU time 16 | %Reference 17 | % Ran He, Tieniu Tan, Liang Wang and Wei-Shi Zheng. 18 | % L21 Regularized Correntropy for Robust Feature Selection. In IEEE CVPR,2012. 19 | %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% 20 | 21 | function [W,feaind,dd,T1] = RFS(Data,label,lambda) 22 | T1 = cputime; 23 | Y = SY2MY(label); 24 | Y(find(Y==-1))=0; 25 | mn=size(Data,1); 26 | dd= ones(mn,1); 27 | iter =5; 28 | 29 | for i=1:iter 30 | 31 | % calculate (A'*D*A) 32 | d1 = dd(1:size(Data,1)); 33 | T = Data'*(Data.*repmat(d1,1,size(Data,2)))+lambda*eye(size(Data,2)); 34 | 35 | Z = T\Y; 36 | 37 | %calculate U 38 | U= (repmat(dd,1,size(Data,2)).*Data)*Z; 39 | 40 | %calculate the D 41 | dd = U.*U; 42 | dd = 2*sqrt(sum(dd'))'; 43 | end 44 | 45 | q1 = dd; 46 | W = U; 47 | [v,feaind]=sort(q1,'descend'); 48 | T1 = cputime -T1; 49 | 50 | end -------------------------------------------------------------------------------- /Feature selection based on structured sparsity/l2,1 regularized correntropy for robust feature selection/SY2MY.m: -------------------------------------------------------------------------------- 1 | function Y = SY2MY(Y) 2 | %%%%%%%%%%%%%%%%%%%%%%%% 3 | % Description: 4 | % Transform Y with single column to Y with the matrix representation. 5 | % 6 | % Parameters: 7 | % 8 | % Y - Single column, marked by 1 - k. 9 | % 10 | % Output: 11 | % 12 | % Y - with matrix representation, has k columns the column with +1 is 13 | % the affiliation of the the instances. 
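% (Note on expected input: as stated above, the single-column labels must be the consecutive integers 1..k, because the
% assignment newY(Y==u(i), u(i)) = 1 uses the label value itself as the column index; other label values would silently
% grow newY past k columns and yield a malformed indicator matrix. If Y contains only one class (k < 2), newY is never
% created and the final assignment fails.)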
14 | % 15 | 16 | numI = size(Y,1); 17 | 18 | if size(Y,2) < 2 19 | u = unique(Y); 20 | k = length(u); 21 | 22 | if k >= 2 23 | newY = -ones(numI,k); 24 | for i = 1:length(u); 25 | newY(Y==u(i),u(i)) = 1; 26 | end 27 | else 28 | 29 | end 30 | Y = newY; 31 | end -------------------------------------------------------------------------------- /Feature selection based on structured sparsity/l2,1 regularized correntropy for robust feature selection/test.m: -------------------------------------------------------------------------------- 1 | %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% 2 | %Description 3 | % A classification problem with three classes 4 | %Code 5 | % 1. Robust feature selection via compound norms minization (L21 and 6 | % frobenius norms) 7 | % 2. Robust feature selection via L21 reguarlized correntropy 8 | % 9 | %Reference 10 | % Ran He, Tieniu Tan, Liang Wang and Wei-Shi Zheng. 11 | % L21 Regularized Correntropy for Robust Feature Selection. In IEEE CVPR,2012. 12 | %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% 13 | 14 | function test 15 | 16 | %Classification problem with three classes 17 | A= rand(50,300); 18 | B= rand(50,300)+2; 19 | C= rand(50,300)+3; 20 | 21 | % label vector for the three classes 22 | label=[ones(300,1);2*ones(300,1);3*ones(300,1)]; 23 | data =[A B C]; 24 | 25 | % generate a sparse matrix whose dimension is 600 26 | sdata =zeros(600,900); 27 | sdata(1:12:600,:) = data; 28 | 29 | %Robust feature selection via compound norms minization (L21 and frobenius norms) 30 | [W,feaind,dd,T1] = RFS(sdata,label,0.01); 31 | 32 | figure; % show select feature 33 | plot(1:600,dd); 34 | legend(['Sparsity:' num2str(length(find(dd>0)))]) 35 | %show the sparse data in a low 2D space 36 | figure; 37 | ds=W'*sdata; 38 | plot(ds(2,:),ds(3,:),'o'); 39 | 40 | %Robust feature selection via L21 reguarlized correntropy 41 | [W,feaind,dd,T] = CRFS(sdata,label,0.01); 42 | figure; % show select feature 43 | plot(1:600,dd); 44 | legend(['Sparsity:' num2str(length(find(dd>0)))]) 45 | %show the sparse data in a low 2D space 46 | figure; 47 | ds=W'*sdata; 48 | plot(ds(2,:),ds(3,:),'o'); -------------------------------------------------------------------------------- /Multi-View Feature Selection for Heterogeneous Face Recognition/MvFS.m: -------------------------------------------------------------------------------- 1 | function [C,fList] = MvFS(X_multiview,Label_multiview) 2 | num_class = length(unique(Label_multiview{1}));% For CUFSF, num_class = 700. 3 | num_view = size(X_multiview,2);% For CUFSF, num_view = 2. 
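% (Input sketch: X_multiview is a 1 x num_view cell array whose entry X_multiview{v} is a d x n matrix with one sample
% per column from view v, and Label_multiview{v} is the matching class-label vector with one entry per sample. The
% outputs C and fList are the per-feature scores and the feature indices sorted by descending score. GetEachClass is a
% helper assumed to be on the MATLAB path (it is not included in this folder), and the scoring loop below assumes that
% the views share the same feature dimensionality.)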
4 | %% ************** Sjr ******************************************** 5 | A = cell(num_view,3); 6 | for vi = 1:num_view 7 | [Xi ci li] = GetEachClass(X_multiview{vi},Label_multiview{vi},'x'); 8 | [Mi ci li] = GetEachClass(X_multiview{vi},Label_multiview{vi},'m'); 9 | [Numi ci li] = GetEachClass(X_multiview{vi},Label_multiview{vi},'num'); 10 | A{vi,1} = Xi; 11 | A{vi,2} = Mi; 12 | A{vi,3} = Numi; 13 | end 14 | Ni = zeros(1,num_class); 15 | dim = zeros(1,num_view); 16 | for i=1:num_view 17 | Ni = Ni + A{i,3}; 18 | dim(i) = size(X_multiview{i},1); 19 | end 20 | %% *********** LDA SW ******************************************* 21 | Sw = zeros(sum(dim),sum(dim)); 22 | for i=1:num_view 23 | Numi = A{i,3}; 24 | Mi = A{i,2}; 25 | for j=i:num_view 26 | Numj = A{j,3}; 27 | Mj = A{j,2}; 28 | Xj = A{j,1}; 29 | sij = zeros(dim(i),dim(j)); 30 | vij = zeros(dim(j),dim(j)); 31 | for ci = 1:num_class 32 | sij = sij - (Numi(ci)*Numj(ci)/Ni(ci))*(Mi(:,ci)*Mj(:,ci)'); 33 | vij = vij + Xj{ci} * (Xj{ci}'); 34 | end 35 | if j==i 36 | sij = vij + sij; 37 | end 38 | 39 | Sw(sum(dim(1:i-1))+1:sum(dim(1:i)), sum(dim(1:j-1))+1:sum(dim(1:j))) = sij; 40 | Sw(sum(dim(1:j-1))+1:sum(dim(1:j)), sum(dim(1:i-1))+1:sum(dim(1:i))) = sij'; 41 | end 42 | end 43 | 44 | %% *********** LDA SB ******************************************* 45 | Sb = zeros(sum(dim),sum(dim)); 46 | n = sum(Ni); 47 | 48 | for i=1:num_view 49 | mi = sum(X_multiview{i},2); 50 | Mi = A{i,2}; 51 | Numi = A{i,3}; 52 | for j=i:num_view 53 | Numj = A{j,3}; 54 | Mj = A{j,2}; 55 | sij = zeros(dim(i),dim(j)); 56 | 57 | mj = sum(X_multiview{j},2); 58 | for ci = 1:num_class 59 | sij = sij + (Numi(ci)*Numj(ci)/Ni(ci))*(Mi(:,ci)*Mj(:,ci)'); 60 | end 61 | sij = sij - mi*mj'/n; 62 | Sb(sum(dim(1:i-1))+1:sum(dim(1:i)), sum(dim(1:j-1))+1:sum(dim(1:j))) = sij; 63 | Sb(sum(dim(1:j-1))+1:sum(dim(1:j)), sum(dim(1:i-1))+1:sum(dim(1:i))) = sij'; 64 | end 65 | end 66 | 67 | %% LDA 68 | Sb = Sb.*num_view; 69 | 70 | dim = size(X_multiview{1},1); 71 | FeatureScore = zeros(1,dim); 72 | for i=1:dim 73 | index = [i (i+dim)]; 74 | if sum(sum(Sw(index,index)))==0 75 | FeatureScore(1,i)= 100;% The same as Fisher score 76 | else 77 | FeatureScore(1,i)= sum(sum(Sb(index,index)))/sum(sum(Sw(index,index))); 78 | end 79 | end 80 | [C, fList] = sort(FeatureScore, 'descend'); 81 | % The follow code is for the case that different views have different 82 | % dimensions. 83 | % View1Index = zeros(1,dim(1)); 84 | % View2Index = zeros(1,dim(2)); 85 | % [C,View2Index(1,1)] = max(max(score)); 86 | % temp = find(score==C); 87 | % if length(temp)==1 88 | % View1Index(1,1) = ceil(temp/dim(2)); 89 | % else 90 | % end 91 | % clear temp; 92 | fprintf('MvFS finished\n'); 93 | -------------------------------------------------------------------------------- /Multi-View Feature Selection for Heterogeneous Face Recognition/README.txt: -------------------------------------------------------------------------------- 1 | Description: This package includes the MATLAB code of the Multi-View Feature Selection algorithm. 2 | 3 | References: J. Gui, P. Li, "Multi-View Feature Selection for Heterogeneous Face Recognition", IEEE International Conference on Data Mining (ICDM, 11.08% acceptance rate), x-x, 2018. 4 | 5 | ATTN: This package is free for academic usage. You can run it at your own risk. For other purposes, please contact Jie Gui(guijie@ustc.edu). 6 | 7 | Requirement: The package was developed with MATLAB. 8 | 9 | ATTN2: This package was developed by Dr. Jie Gui. 
For any problem concerning the code, please feel free to contact Jie Gui(guijie@ustc.edu). -------------------------------------------------------------------------------- /README.txt: -------------------------------------------------------------------------------- 1 | If you use the MATLAB code here, please cite our papers below: 2 | References: Jie Gui, Zhenan Sun, Shuiwang Ji, Dacheng Tao, Tieniu Tan, "Feature Selection Based on Structured Sparsity: A Comprehensive Study", IEEE Transactions on Neural Networks and Learning Systems, 2017. 3 | J. Gui, P. Li, "Multi-View Feature Selection for Heterogeneous Face Recognition", IEEE International Conference on Data Mining (ICDM, 11.08% acceptance rate), 2018. 4 | 5 | ATTN: This package is free for academic usage. You can run it at your own risk. For other purposes, please contact Jie Gui(guijie@ustc.edu). 6 | 7 | Requirement: The package was developed with MATLAB. -------------------------------------------------------------------------------- /Student's t-test/fsTtest.m: -------------------------------------------------------------------------------- 1 | function [out] = fsTtest(X,Y) 2 | [m,n] = size(X); 3 | W = zeros(1,n); 4 | c = length(unique(Y)); 5 | 6 | for i=1:n 7 | temp = 0; 8 | for k=1:c 9 | for j = (k+1):c 10 | X1 = X(Y == k,i); 11 | X2 = X(Y == j,i); 12 | 13 | n1 = size(X1,1); 14 | n2 = size(X2,1); 15 | 16 | mean_X1 = sum(X1)/n1; 17 | mean_X2 = sum(X2)/n2 ; 18 | 19 | S_X1 = sum((X1 - mean_X1).^2); 20 | S_X2 = sum((X2 - mean_X2).^2); 21 | Sw = sqrt((S_X1+S_X2)/(n1+n2-2)); 22 | if Sw ==0 23 | Sw = eps; 24 | end 25 | 26 | temp = temp+abs(mean_X1 - mean_X2)/( Sw*sqrt( (1/n1)+ (1/n2) )); 27 | end 28 | end 29 | W(1,i) = temp; 30 | end 31 | 32 | [foo out.fList] = sort(W, 'descend'); 33 | out.W = W; 34 | out.prf = -1; 35 | end 36 | --------------------------------------------------------------------------------
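A minimal usage sketch for fsTtest on synthetic data (note that here the rows of X are samples and the columns are features, and Y holds one integer class label per sample):

X = [randn(30, 200); randn(30, 200) + 1];  % 60 samples, 200 features, 2 classes
Y = [ones(30, 1); 2*ones(30, 1)];
out = fsTtest(X, Y);
top10 = out.fList(1:10);                   % indices of the 10 highest-scoring features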