├── BF ├── func │ └── fastBilateralFilter.m ├── im │ ├── im.png │ ├── im1.png │ ├── im2.png │ ├── im3.png │ └── res.png ├── main.m └── readme.md ├── BFWLS ├── .DS_Store ├── func │ ├── boxfilter.m │ ├── fastBilateralFilter.m │ ├── guidedfilter.m │ ├── guidedfilter_color.m │ ├── myrgb2yuv.m │ ├── myyuv2rgb.m │ ├── saliency.m │ └── wlsFilter.m ├── im │ ├── 0000_nir.tiff │ └── 0000_rgb.tiff ├── main.m └── main2.m ├── GFF ├── .DS_Store ├── func │ ├── boxfilter.m │ ├── guidedfilter.m │ └── saliency.m ├── im │ ├── 1-1.jpg │ ├── 1-2.jpg │ ├── 1.jpg │ ├── 2.jpg │ ├── LWIR.tif │ ├── Vis.tif │ ├── lena1.jpg │ └── lena2.jpg ├── main.m └── readme.md ├── Laplacian-Fusion ├── func │ ├── LPD.m │ └── SSR.m ├── im │ ├── .DS_Store │ ├── Anir.jpg │ ├── Argb.jpg │ ├── inf.jpg │ ├── nir.jpg │ ├── rgb.jpg │ └── vis.jpg ├── main_lpd_fusion.m └── readme.md ├── SE ├── func │ ├── reintegration.m │ └── se.m ├── im │ ├── .DS_Store │ ├── nir.jpg │ └── rgb.jpg ├── main.m ├── readme.md └── test.m └── readme.md /BF/func/fastBilateralFilter.m: -------------------------------------------------------------------------------- 1 | % 2 | % output = bilateralFilter( data, edge, ... 3 | % edgeMin, edgeMax, ... 4 | % sigmaSpatial, sigmaRange, ... 5 | % samplingSpatial, samplingRange ) 6 | % 7 | % Bilateral and Cross-Bilateral Filter using the Bilateral Grid. 8 | % 9 | % Bilaterally filters the image 'data' using the edges in the image 'edge'. 10 | % If 'data' == 'edge', then it the standard bilateral filter. 11 | % Otherwise, it is the 'cross' or 'joint' bilateral filter. 12 | % For convenience, you can also pass in [] for 'edge' for the normal 13 | % bilateral filter. 14 | % 15 | % Note that for the cross bilateral filter, data does not need to be 16 | % defined everywhere. Undefined values can be set to 'NaN'. However, edge 17 | % *does* need to be defined everywhere. 18 | % 19 | % data and edge should be of the greyscale, double-precision floating point 20 | % matrices of the same size (i.e. they should be [ height x width ]) 21 | % 22 | % data is the only required argument 23 | % 24 | % edgeMin and edgeMax specifies the min and max values of 'edge' (or 'data' 25 | % for the normal bilateral filter) and is useful when the input is in a 26 | % range that's not between 0 and 1. For instance, if you are filtering the 27 | % L channel of an image that ranges between 0 and 100, set edgeMin to 0 and 28 | % edgeMax to 100. 29 | % 30 | % edgeMin defaults to min( edge( : ) ) and edgeMax defaults to max( edge( : ) ). 31 | % This is probably *not* what you want, since the input may not span the 32 | % entire range. 33 | % 34 | % sigmaSpatial and sigmaRange specifies the standard deviation of the space 35 | % and range gaussians, respectively. 36 | % sigmaSpatial defaults to min( width, height ) / 16 37 | % sigmaRange defaults to ( edgeMax - edgeMin ) / 10. 38 | % 39 | % samplingSpatial and samplingRange specifies the amount of downsampling 40 | % used for the approximation. Higher values use less memory but are also 41 | % less accurate. The default and recommended values are: 42 | % 43 | % samplingSpatial = sigmaSpatial 44 | % samplingRange = sigmaRange 45 | % 46 | 47 | function output = fastBilateralFilter( data, edge, edgeMin, edgeMax, sigmaSpatial, sigmaRange, ... 
48 | samplingSpatial, samplingRange ) 49 | 50 | if( ndims( data ) > 2 ) 51 | error( 'data must be a greyscale image with size [ height, width ]' ); 52 | end 53 | 54 | if( ~isa( data, 'double' ) ) 55 | error( 'data must be of class "double"' ); 56 | end 57 | 58 | if ~exist( 'edge', 'var' ) 59 | edge = data; 60 | elseif isempty( edge ) 61 | edge = data; 62 | end 63 | 64 | if( ndims( edge ) > 2 ) 65 | error( 'edge must be a greyscale image with size [ height, width ]' ); 66 | end 67 | 68 | if( ~isa( edge, 'double' ) ) 69 | error( 'edge must be of class "double"' ); 70 | end 71 | 72 | inputHeight = size( data, 1 ); 73 | inputWidth = size( data, 2 ); 74 | 75 | if ~exist( 'edgeMin', 'var' ) 76 | edgeMin = min( edge( : ) ); 77 | warning( 'edgeMin not set! Defaulting to: %f\n', edgeMin ); 78 | end 79 | 80 | if ~exist( 'edgeMax', 'var' ) 81 | edgeMax = max( edge( : ) ); 82 | warning( 'edgeMax not set! Defaulting to: %f\n', edgeMax ); 83 | end 84 | 85 | edgeDelta = edgeMax - edgeMin; 86 | 87 | if ~exist( 'sigmaSpatial', 'var' ) 88 | sigmaSpatial = min( inputWidth, inputHeight ) / 16; 89 | fprintf( 'Using default sigmaSpatial of: %f\n', sigmaSpatial ); 90 | end 91 | 92 | if ~exist( 'sigmaRange', 'var' ) 93 | sigmaRange = 0.1 * edgeDelta; 94 | fprintf( 'Using default sigmaRange of: %f\n', sigmaRange ); 95 | end 96 | 97 | if ~exist( 'samplingSpatial', 'var' ) 98 | samplingSpatial = sigmaSpatial; 99 | end 100 | 101 | if ~exist( 'samplingRange', 'var' ) 102 | samplingRange = sigmaRange; 103 | end 104 | 105 | if size( data ) ~= size( edge ) 106 | error( 'data and edge must be of the same size' ); 107 | end 108 | 109 | % parameters 110 | derivedSigmaSpatial = sigmaSpatial / samplingSpatial; 111 | derivedSigmaRange = sigmaRange / samplingRange; 112 | 113 | paddingXY = floor( 2 * derivedSigmaSpatial ) + 1; 114 | paddingZ = floor( 2 * derivedSigmaRange ) + 1; 115 | 116 | % allocate 3D grid 117 | downsampledWidth = floor( ( inputWidth - 1 ) / samplingSpatial ) + 1 + 2 * paddingXY; 118 | downsampledHeight = floor( ( inputHeight - 1 ) / samplingSpatial ) + 1 + 2 * paddingXY; 119 | downsampledDepth = floor( edgeDelta / samplingRange ) + 1 + 2 * paddingZ; 120 | 121 | gridData = zeros( downsampledHeight, downsampledWidth, downsampledDepth ); 122 | gridWeights = zeros( downsampledHeight, downsampledWidth, downsampledDepth ); 123 | 124 | % compute downsampled indices 125 | [ jj, ii ] = meshgrid( 0 : inputWidth - 1, 0 : inputHeight - 1 ); 126 | 127 | % ii = 128 | % 0 0 0 0 0 129 | % 1 1 1 1 1 130 | % 2 2 2 2 2 131 | 132 | % jj = 133 | % 0 1 2 3 4 134 | % 0 1 2 3 4 135 | % 0 1 2 3 4 136 | 137 | % so when iterating over ii( k ), jj( k ) 138 | % get: ( 0, 0 ), ( 1, 0 ), ( 2, 0 ), ... 
(down columns first) 139 | 140 | di = round( ii / samplingSpatial ) + paddingXY + 1; 141 | dj = round( jj / samplingSpatial ) + paddingXY + 1; 142 | dz = round( ( edge - edgeMin ) / samplingRange ) + paddingZ + 1; 143 | 144 | % perform scatter (there's probably a faster way than this) 145 | % normally would do downsampledWeights( di, dj, dk ) = 1, but we have to 146 | % perform a summation to do box downsampling 147 | for k = 1 : numel( dz ) 148 | 149 | dataZ = data( k ); % traverses the image column wise, same as di( k ) 150 | if ~isnan( dataZ ) 151 | 152 | dik = di( k ); 153 | djk = dj( k ); 154 | dzk = dz( k ); 155 | 156 | gridData( dik, djk, dzk ) = gridData( dik, djk, dzk ) + dataZ; 157 | gridWeights( dik, djk, dzk ) = gridWeights( dik, djk, dzk ) + 1; 158 | 159 | end 160 | end 161 | 162 | % make gaussian kernel 163 | kernelWidth = 2 * derivedSigmaSpatial + 1; 164 | kernelHeight = kernelWidth; 165 | kernelDepth = 2 * derivedSigmaRange + 1; 166 | 167 | halfKernelWidth = floor( kernelWidth / 2 ); 168 | halfKernelHeight = floor( kernelHeight / 2 ); 169 | halfKernelDepth = floor( kernelDepth / 2 ); 170 | 171 | [gridX, gridY, gridZ] = meshgrid( 0 : kernelWidth - 1, 0 : kernelHeight - 1, 0 : kernelDepth - 1 ); 172 | gridX = gridX - halfKernelWidth; 173 | gridY = gridY - halfKernelHeight; 174 | gridZ = gridZ - halfKernelDepth; 175 | gridRSquared = ( gridX .* gridX + gridY .* gridY ) / ( derivedSigmaSpatial * derivedSigmaSpatial ) + ( gridZ .* gridZ ) / ( derivedSigmaRange * derivedSigmaRange ); 176 | kernel = exp( -0.5 * gridRSquared ); 177 | 178 | % convolve 179 | blurredGridData = convn( gridData, kernel, 'same' ); 180 | blurredGridWeights = convn( gridWeights, kernel, 'same' ); 181 | 182 | % divide 183 | blurredGridWeights( blurredGridWeights == 0 ) = -2; % avoid divide by 0, won't read there anyway 184 | normalizedBlurredGrid = blurredGridData ./ blurredGridWeights; 185 | normalizedBlurredGrid( blurredGridWeights < -1 ) = 0; % put 0s where it's undefined 186 | 187 | % for debugging 188 | % blurredGridWeights( blurredGridWeights < -1 ) = 0; % put zeros back 189 | 190 | % upsample 191 | [ jj, ii ] = meshgrid( 0 : inputWidth - 1, 0 : inputHeight - 1 ); % meshgrid does x, then y, so output arguments need to be reversed 192 | % no rounding 193 | di = ( ii / samplingSpatial ) + paddingXY + 1; 194 | dj = ( jj / samplingSpatial ) + paddingXY + 1; 195 | dz = ( edge - edgeMin ) / samplingRange + paddingZ + 1; 196 | 197 | % interpn takes rows, then cols, etc 198 | % i.e. size(v,1), then size(v,2), ... 
199 | output = interpn( normalizedBlurredGrid, di, dj, dz );
200 |
--------------------------------------------------------------------------------
/BF/im/im.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/entropyzeroo/ImageFusion/13de65ff6590c25a8356be79c7de12b556ffd9d1/BF/im/im.png
--------------------------------------------------------------------------------
/BF/im/im1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/entropyzeroo/ImageFusion/13de65ff6590c25a8356be79c7de12b556ffd9d1/BF/im/im1.png
--------------------------------------------------------------------------------
/BF/im/im2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/entropyzeroo/ImageFusion/13de65ff6590c25a8356be79c7de12b556ffd9d1/BF/im/im2.png
--------------------------------------------------------------------------------
/BF/im/im3.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/entropyzeroo/ImageFusion/13de65ff6590c25a8356be79c7de12b556ffd9d1/BF/im/im3.png
--------------------------------------------------------------------------------
/BF/im/res.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/entropyzeroo/ImageFusion/13de65ff6590c25a8356be79c7de12b556ffd9d1/BF/im/res.png
--------------------------------------------------------------------------------
/BF/main.m:
--------------------------------------------------------------------------------
1 | clc
2 | clear
3 |
4 |
5 | I = im2double(imread('im.png'));
6 | rgb = I(:,1:500,:);
7 | nir = I(:,501:1000,:);
8 |
9 | % rgb = im2double(imread('s11.png'));
10 | % nir = im2double(imread('s12.png'));
11 | % nir = cat(3, nir,nir,nir);
12 |
13 | w = size(rgb, 2);
14 |
15 | Ycbcr = rgb2ycbcr([rgb nir]);
16 | Y = Ycbcr(:,:,1);
17 |
18 | base = fastBilateralFilter(Y);
19 | detail = Y-base;
20 |
21 | YF = base(:,1:w,:)+detail(:,w+1:2*w,:);
22 | cbcr = Ycbcr(:,1:w,:);
23 | cbcr(:,:,1) = YF;
24 |
25 | out = ycbcr2rgb(cbcr);
26 |
27 | imshow([rgb nir out])
28 | imwrite([rgb zeros(529,10,3) nir zeros(529,10,3) out],'res.png') % 529 = image height of im.png; the zeros are 10-px black separator strips
29 |
--------------------------------------------------------------------------------
/BF/readme.md:
--------------------------------------------------------------------------------
1 | # Skin smoothing via visible/near-infrared image fusion
2 |
3 | > Original paper: [**Combining visible and near-infrared images for realistic skin smoothing** - 2009](https://ivrlwww.epfl.ch/alumni/fredemba/papers/FS_CIC09.pdf)
4 |
5 | ## Introduction
6 |
7 | Nowadays, photo-beautification apps are basically a must-have on everyone's phone. Each vendor's beautification algorithm is presumably built on the existing literature and refined through optimization and parameter tuning. In this field there are three famous edge-preserving filters: the bilateral filter, the weighted least squares (WLS) filter, and the guided filter, along with many others. For beautification, the job of these filters is essentially to preserve the prominent edges in the image while filtering out the less prominent "noise". Noise here means freckles, wrinkles, pimples and the like, so the filtering amounts to skin smoothing. What counts as "not prominent" is relative and is usually decided by the filter parameters.
8 |
9 | All of the above works only with the detail already present in the image. The paper recorded here instead uses information from the near-infrared band and fuses the color image with the infrared image to achieve the beautification effect. Compare how the RGB image and the infrared image render the same scene (all figures are from the paper).
10 |
11 | ![origin](im/im1.png)
12 |
13 | RGB image and infrared image
14 | ![detail](im/im2.png)
15 |
16 | Detail comparison
17 | As you can see, the infrared image captures a much smoother skin surface and ignores many of the "unwanted details". The reason, in short, is that infrared light has a longer wavelength, is less prone to scattering and refraction at the skin surface, and penetrates deeper, so it is not blocked by pigment in the epidermis and therefore reflects the skin itself more faithfully.
18 |
19 | ## Algorithm
20 |
21 | The algorithm uses a bilateral filter to separate each image into a base layer and a detail layer. The whole algorithm is simple: filter the registered color image and infrared image separately, then add the base layer of the color image to the detail layer of the infrared image (this is what main.m does on the Y channel). The flow is shown below.
22 |
23 | ![pipeline](im/im3.png)
24 |
25 | Algorithm pipeline
26 | ## Results
27 |
28 | ![result](im/res.png)
29 |
30 | Left to right: the RGB image, the NIR image, and the fusion result
31 | ## Summary
32 |
33 | This is the typical recipe for image fusion problems: use filtering for multi-scale decomposition, then fuse the layers.
34 |
35 | ## References
36 |
37 | [1] Fredembach C, Barbuscia N,
Süsstrunk S. Combining visible and near-infrared images for realistic skin smoothing[C]//Color and Imaging Conference. Society for Imaging Science and Technology, 2009, 2009(1): 242-247. 38 | 39 | -------------------------------------------------------------------------------- /BFWLS/.DS_Store: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/entropyzeroo/ImageFusion/13de65ff6590c25a8356be79c7de12b556ffd9d1/BFWLS/.DS_Store -------------------------------------------------------------------------------- /BFWLS/func/boxfilter.m: -------------------------------------------------------------------------------- 1 | function imDst = boxfilter(imSrc, r) 2 | 3 | % BOXFILTER O(1) time box filtering using cumulative sum 4 | % 5 | % - Definition imDst(x, y)=sum(sum(imSrc(x-r:x+r,y-r:y+r))); 6 | % - Running time independent of r; 7 | % - Equivalent to the function: colfilt(imSrc, [2*r+1, 2*r+1], 'sliding', @sum); 8 | % - But much faster. 9 | 10 | [hei, wid] = size(imSrc); 11 | imDst = zeros(size(imSrc)); 12 | 13 | %cumulative sum over Y axis 14 | imCum = cumsum(imSrc, 1); 15 | %difference over Y axis 16 | imDst(1:r+1, :) = imCum(1+r:2*r+1, :); 17 | imDst(r+2:hei-r, :) = imCum(2*r+2:hei, :) - imCum(1:hei-2*r-1, :); 18 | imDst(hei-r+1:hei, :) = repmat(imCum(hei, :), [r, 1]) - imCum(hei-2*r:hei-r-1, :); 19 | 20 | %cumulative sum over X axis 21 | imCum = cumsum(imDst, 2); 22 | %difference over Y axis 23 | imDst(:, 1:r+1) = imCum(:, 1+r:2*r+1); 24 | imDst(:, r+2:wid-r) = imCum(:, 2*r+2:wid) - imCum(:, 1:wid-2*r-1); 25 | imDst(:, wid-r+1:wid) = repmat(imCum(:, wid), [1, r]) - imCum(:, wid-2*r:wid-r-1); 26 | end 27 | 28 | -------------------------------------------------------------------------------- /BFWLS/func/fastBilateralFilter.m: -------------------------------------------------------------------------------- 1 | % 2 | % output = bilateralFilter( data, edge, ... 3 | % edgeMin, edgeMax, ... 4 | % sigmaSpatial, sigmaRange, ... 5 | % samplingSpatial, samplingRange ) 6 | % 7 | % Bilateral and Cross-Bilateral Filter using the Bilateral Grid. 8 | % 9 | % Bilaterally filters the image 'data' using the edges in the image 'edge'. 10 | % If 'data' == 'edge', then it the standard bilateral filter. 11 | % Otherwise, it is the 'cross' or 'joint' bilateral filter. 12 | % For convenience, you can also pass in [] for 'edge' for the normal 13 | % bilateral filter. 14 | % 15 | % Note that for the cross bilateral filter, data does not need to be 16 | % defined everywhere. Undefined values can be set to 'NaN'. However, edge 17 | % *does* need to be defined everywhere. 18 | % 19 | % data and edge should be of the greyscale, double-precision floating point 20 | % matrices of the same size (i.e. they should be [ height x width ]) 21 | % 22 | % data is the only required argument 23 | % 24 | % edgeMin and edgeMax specifies the min and max values of 'edge' (or 'data' 25 | % for the normal bilateral filter) and is useful when the input is in a 26 | % range that's not between 0 and 1. For instance, if you are filtering the 27 | % L channel of an image that ranges between 0 and 100, set edgeMin to 0 and 28 | % edgeMax to 100. 29 | % 30 | % edgeMin defaults to min( edge( : ) ) and edgeMax defaults to max( edge( : ) ). 31 | % This is probably *not* what you want, since the input may not span the 32 | % entire range. 33 | % 34 | % sigmaSpatial and sigmaRange specifies the standard deviation of the space 35 | % and range gaussians, respectively. 
36 | % sigmaSpatial defaults to min( width, height ) / 16 37 | % sigmaRange defaults to ( edgeMax - edgeMin ) / 10. 38 | % 39 | % samplingSpatial and samplingRange specifies the amount of downsampling 40 | % used for the approximation. Higher values use less memory but are also 41 | % less accurate. The default and recommended values are: 42 | % 43 | % samplingSpatial = sigmaSpatial 44 | % samplingRange = sigmaRange 45 | % 46 | 47 | function output = fastBilateralFilter( data, edge, edgeMin, edgeMax, sigmaSpatial, sigmaRange, ... 48 | samplingSpatial, samplingRange ) 49 | 50 | if( ndims( data ) > 2 ) 51 | error( 'data must be a greyscale image with size [ height, width ]' ); 52 | end 53 | 54 | if( ~isa( data, 'double' ) ) 55 | error( 'data must be of class "double"' ); 56 | end 57 | 58 | if ~exist( 'edge', 'var' ) 59 | edge = data; 60 | elseif isempty( edge ) 61 | edge = data; 62 | end 63 | 64 | if( ndims( edge ) > 2 ) 65 | error( 'edge must be a greyscale image with size [ height, width ]' ); 66 | end 67 | 68 | if( ~isa( edge, 'double' ) ) 69 | error( 'edge must be of class "double"' ); 70 | end 71 | 72 | inputHeight = size( data, 1 ); 73 | inputWidth = size( data, 2 ); 74 | 75 | if ~exist( 'edgeMin', 'var' ) 76 | edgeMin = min( edge( : ) ); 77 | warning( 'edgeMin not set! Defaulting to: %f\n', edgeMin ); 78 | end 79 | 80 | if ~exist( 'edgeMax', 'var' ) 81 | edgeMax = max( edge( : ) ); 82 | warning( 'edgeMax not set! Defaulting to: %f\n', edgeMax ); 83 | end 84 | 85 | edgeDelta = edgeMax - edgeMin; 86 | 87 | if ~exist( 'sigmaSpatial', 'var' ) 88 | sigmaSpatial = min( inputWidth, inputHeight ) / 16; 89 | fprintf( 'Using default sigmaSpatial of: %f\n', sigmaSpatial ); 90 | end 91 | 92 | if ~exist( 'sigmaRange', 'var' ) 93 | sigmaRange = 0.1 * edgeDelta; 94 | fprintf( 'Using default sigmaRange of: %f\n', sigmaRange ); 95 | end 96 | 97 | if ~exist( 'samplingSpatial', 'var' ) 98 | samplingSpatial = sigmaSpatial; 99 | end 100 | 101 | if ~exist( 'samplingRange', 'var' ) 102 | samplingRange = sigmaRange; 103 | end 104 | 105 | if size( data ) ~= size( edge ) 106 | error( 'data and edge must be of the same size' ); 107 | end 108 | 109 | % parameters 110 | derivedSigmaSpatial = sigmaSpatial / samplingSpatial; 111 | derivedSigmaRange = sigmaRange / samplingRange; 112 | 113 | paddingXY = floor( 2 * derivedSigmaSpatial ) + 1; 114 | paddingZ = floor( 2 * derivedSigmaRange ) + 1; 115 | 116 | % allocate 3D grid 117 | downsampledWidth = floor( ( inputWidth - 1 ) / samplingSpatial ) + 1 + 2 * paddingXY; 118 | downsampledHeight = floor( ( inputHeight - 1 ) / samplingSpatial ) + 1 + 2 * paddingXY; 119 | downsampledDepth = floor( edgeDelta / samplingRange ) + 1 + 2 * paddingZ; 120 | 121 | gridData = zeros( downsampledHeight, downsampledWidth, downsampledDepth ); 122 | gridWeights = zeros( downsampledHeight, downsampledWidth, downsampledDepth ); 123 | 124 | % compute downsampled indices 125 | [ jj, ii ] = meshgrid( 0 : inputWidth - 1, 0 : inputHeight - 1 ); 126 | 127 | % ii = 128 | % 0 0 0 0 0 129 | % 1 1 1 1 1 130 | % 2 2 2 2 2 131 | 132 | % jj = 133 | % 0 1 2 3 4 134 | % 0 1 2 3 4 135 | % 0 1 2 3 4 136 | 137 | % so when iterating over ii( k ), jj( k ) 138 | % get: ( 0, 0 ), ( 1, 0 ), ( 2, 0 ), ... 
(down columns first) 139 | 140 | di = round( ii / samplingSpatial ) + paddingXY + 1; 141 | dj = round( jj / samplingSpatial ) + paddingXY + 1; 142 | dz = round( ( edge - edgeMin ) / samplingRange ) + paddingZ + 1; 143 | 144 | % perform scatter (there's probably a faster way than this) 145 | % normally would do downsampledWeights( di, dj, dk ) = 1, but we have to 146 | % perform a summation to do box downsampling 147 | for k = 1 : numel( dz ) 148 | 149 | dataZ = data( k ); % traverses the image column wise, same as di( k ) 150 | if ~isnan( dataZ ) 151 | 152 | dik = di( k ); 153 | djk = dj( k ); 154 | dzk = dz( k ); 155 | 156 | gridData( dik, djk, dzk ) = gridData( dik, djk, dzk ) + dataZ; 157 | gridWeights( dik, djk, dzk ) = gridWeights( dik, djk, dzk ) + 1; 158 | 159 | end 160 | end 161 | 162 | % make gaussian kernel 163 | kernelWidth = 2 * derivedSigmaSpatial + 1; 164 | kernelHeight = kernelWidth; 165 | kernelDepth = 2 * derivedSigmaRange + 1; 166 | 167 | halfKernelWidth = floor( kernelWidth / 2 ); 168 | halfKernelHeight = floor( kernelHeight / 2 ); 169 | halfKernelDepth = floor( kernelDepth / 2 ); 170 | 171 | [gridX, gridY, gridZ] = meshgrid( 0 : kernelWidth - 1, 0 : kernelHeight - 1, 0 : kernelDepth - 1 ); 172 | gridX = gridX - halfKernelWidth; 173 | gridY = gridY - halfKernelHeight; 174 | gridZ = gridZ - halfKernelDepth; 175 | gridRSquared = ( gridX .* gridX + gridY .* gridY ) / ( derivedSigmaSpatial * derivedSigmaSpatial ) + ( gridZ .* gridZ ) / ( derivedSigmaRange * derivedSigmaRange ); 176 | kernel = exp( -0.5 * gridRSquared ); 177 | 178 | % convolve 179 | blurredGridData = convn( gridData, kernel, 'same' ); 180 | blurredGridWeights = convn( gridWeights, kernel, 'same' ); 181 | 182 | % divide 183 | blurredGridWeights( blurredGridWeights == 0 ) = -2; % avoid divide by 0, won't read there anyway 184 | normalizedBlurredGrid = blurredGridData ./ blurredGridWeights; 185 | normalizedBlurredGrid( blurredGridWeights < -1 ) = 0; % put 0s where it's undefined 186 | 187 | % for debugging 188 | % blurredGridWeights( blurredGridWeights < -1 ) = 0; % put zeros back 189 | 190 | % upsample 191 | [ jj, ii ] = meshgrid( 0 : inputWidth - 1, 0 : inputHeight - 1 ); % meshgrid does x, then y, so output arguments need to be reversed 192 | % no rounding 193 | di = ( ii / samplingSpatial ) + paddingXY + 1; 194 | dj = ( jj / samplingSpatial ) + paddingXY + 1; 195 | dz = ( edge - edgeMin ) / samplingRange + paddingZ + 1; 196 | 197 | % interpn takes rows, then cols, etc 198 | % i.e. size(v,1), then size(v,2), ... 199 | output = interpn( normalizedBlurredGrid, di, dj, dz ); 200 | -------------------------------------------------------------------------------- /BFWLS/func/guidedfilter.m: -------------------------------------------------------------------------------- 1 | function q = guidedfilter(I, p, r, eps) 2 | % GUIDEDFILTER O(1) time implementation of guided filter. 3 | % 4 | % - guidance image: I (should be a gray-scale/single channel image) 5 | % - filtering input image: p (should be a gray-scale/single channel image) 6 | % - local window radius: r 7 | % - regularization parameter: eps 8 | 9 | [hei, wid] = size(I); 10 | N = boxfilter(ones(hei, wid), r); % the size of each local patch; N=(2r+1)^2 except for boundary pixels. 11 | 12 | mean_I = boxfilter(I, r) ./ N; 13 | mean_p = boxfilter(p, r) ./ N; 14 | mean_Ip = boxfilter(I.*p, r) ./ N; 15 | cov_Ip = mean_Ip - mean_I .* mean_p; % this is the covariance of (I, p) in each local patch. 
16 | 17 | mean_II = boxfilter(I.*I, r) ./ N; 18 | var_I = mean_II - mean_I .* mean_I; 19 | 20 | a = cov_Ip ./ (var_I + eps); % Eqn. (5) in the paper; 21 | b = mean_p - a .* mean_I; % Eqn. (6) in the paper; 22 | 23 | mean_a = boxfilter(a, r) ./ N; 24 | mean_b = boxfilter(b, r) ./ N; 25 | 26 | q = mean_a .* I + mean_b; % Eqn. (8) in the paper; 27 | end -------------------------------------------------------------------------------- /BFWLS/func/guidedfilter_color.m: -------------------------------------------------------------------------------- 1 | function q = guidedfilter_color(I, p, r, eps) 2 | % GUIDEDFILTER_COLOR O(1) time implementation of guided filter using a color image as the guidance. 3 | % 4 | % - guidance image: I (should be a color (RGB) image) 5 | % - filtering input image: p (should be a gray-scale/single channel image) 6 | % - local window radius: r 7 | % - regularization parameter: eps 8 | 9 | [hei, wid] = size(p); 10 | N = boxfilter(ones(hei, wid), r); % the size of each local patch; N=(2r+1)^2 except for boundary pixels. 11 | 12 | mean_I_r = boxfilter(I(:, :, 1), r) ./ N; 13 | mean_I_g = boxfilter(I(:, :, 2), r) ./ N; 14 | mean_I_b = boxfilter(I(:, :, 3), r) ./ N; 15 | 16 | mean_p = boxfilter(p, r) ./ N; 17 | 18 | mean_Ip_r = boxfilter(I(:, :, 1).*p, r) ./ N; 19 | mean_Ip_g = boxfilter(I(:, :, 2).*p, r) ./ N; 20 | mean_Ip_b = boxfilter(I(:, :, 3).*p, r) ./ N; 21 | 22 | % covariance of (I, p) in each local patch. 23 | cov_Ip_r = mean_Ip_r - mean_I_r .* mean_p; 24 | cov_Ip_g = mean_Ip_g - mean_I_g .* mean_p; 25 | cov_Ip_b = mean_Ip_b - mean_I_b .* mean_p; 26 | 27 | % variance of I in each local patch: the matrix Sigma in Eqn (14). 28 | % Note the variance in each local patch is a 3x3 symmetric matrix: 29 | % rr, rg, rb 30 | % Sigma = rg, gg, gb 31 | % rb, gb, bb 32 | var_I_rr = boxfilter(I(:, :, 1).*I(:, :, 1), r) ./ N - mean_I_r .* mean_I_r; 33 | var_I_rg = boxfilter(I(:, :, 1).*I(:, :, 2), r) ./ N - mean_I_r .* mean_I_g; 34 | var_I_rb = boxfilter(I(:, :, 1).*I(:, :, 3), r) ./ N - mean_I_r .* mean_I_b; 35 | var_I_gg = boxfilter(I(:, :, 2).*I(:, :, 2), r) ./ N - mean_I_g .* mean_I_g; 36 | var_I_gb = boxfilter(I(:, :, 2).*I(:, :, 3), r) ./ N - mean_I_g .* mean_I_b; 37 | var_I_bb = boxfilter(I(:, :, 3).*I(:, :, 3), r) ./ N - mean_I_b .* mean_I_b; 38 | 39 | a = zeros(hei, wid, 3); 40 | for y=1:hei 41 | for x=1:wid 42 | Sigma = [var_I_rr(y, x), var_I_rg(y, x), var_I_rb(y, x); 43 | var_I_rg(y, x), var_I_gg(y, x), var_I_gb(y, x); 44 | var_I_rb(y, x), var_I_gb(y, x), var_I_bb(y, x)]; 45 | %Sigma = Sigma + eps * eye(3); 46 | 47 | cov_Ip = [cov_Ip_r(y, x), cov_Ip_g(y, x), cov_Ip_b(y, x)]; 48 | 49 | a(y, x, :) = cov_Ip * inv(Sigma + eps * eye(3)); % Eqn. (14) in the paper; 50 | end 51 | end 52 | 53 | b = mean_p - a(:, :, 1) .* mean_I_r - a(:, :, 2) .* mean_I_g - a(:, :, 3) .* mean_I_b; % Eqn. (15) in the paper; 54 | 55 | q = (boxfilter(a(:, :, 1), r).* I(:, :, 1)... 56 | + boxfilter(a(:, :, 2), r).* I(:, :, 2)... 57 | + boxfilter(a(:, :, 3), r).* I(:, :, 3)... 58 | + boxfilter(b, r)) ./ N; % Eqn. (16) in the paper; 59 | end -------------------------------------------------------------------------------- /BFWLS/func/myrgb2yuv.m: -------------------------------------------------------------------------------- 1 | % function yuv = myrgb2yuv(image) 2 | % input params. 
3 | % image: input color image with 3 channels, which value must be [0 255] 4 | % output 5 | % yuv: 3 channels(YUV444, Y plane, U plane, V plane), value [0 255], double 6 | % 7 | 8 | 9 | function yuv = myrgb2yuv(image) 10 | image = double(image); 11 | R = image(:,:,1); 12 | G = image(:,:,2); 13 | B = image(:,:,3); 14 | 15 | yuv(:,:,1) = 0.299.*R + 0.587.*G + 0.114.*B; 16 | yuv(:,:,2) = - 0.1687.*R - 0.3313.*G + 0.5.*B + 128; 17 | yuv(:,:,3) = 0.5.*R - 0.4187.*G - 0.0813.*B + 128; 18 | end 19 | 20 | -------------------------------------------------------------------------------- /BFWLS/func/myyuv2rgb.m: -------------------------------------------------------------------------------- 1 | % function rgb = myyuv2rgb(image) 2 | % input params. 3 | % image: input YUV image with YUV444 format, which value must be [0 255] 4 | % output 5 | % rgb: 3 channels color image, value [0 255], double 6 | % 7 | 8 | 9 | 10 | function rgb = myyuv2rgb(image) 11 | image = double(image); 12 | Y = image(:,:,1); 13 | U = image(:,:,2); 14 | V = image(:,:,3); 15 | 16 | R = Y + 1.402.*(V-128); 17 | 18 | G = Y - 0.34414.*(U-128) - 0.71414.*(V-128); 19 | 20 | B = Y + 1.772.*(U-128); 21 | 22 | rgb(:,:,1) = R; 23 | rgb(:,:,2) = G; 24 | rgb(:,:,3) = B; 25 | end -------------------------------------------------------------------------------- /BFWLS/func/saliency.m: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/entropyzeroo/ImageFusion/13de65ff6590c25a8356be79c7de12b556ffd9d1/BFWLS/func/saliency.m -------------------------------------------------------------------------------- /BFWLS/func/wlsFilter.m: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/entropyzeroo/ImageFusion/13de65ff6590c25a8356be79c7de12b556ffd9d1/BFWLS/func/wlsFilter.m -------------------------------------------------------------------------------- /BFWLS/im/0000_nir.tiff: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/entropyzeroo/ImageFusion/13de65ff6590c25a8356be79c7de12b556ffd9d1/BFWLS/im/0000_nir.tiff -------------------------------------------------------------------------------- /BFWLS/im/0000_rgb.tiff: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/entropyzeroo/ImageFusion/13de65ff6590c25a8356be79c7de12b556ffd9d1/BFWLS/im/0000_rgb.tiff -------------------------------------------------------------------------------- /BFWLS/main.m: -------------------------------------------------------------------------------- 1 | % 2 | 3 | clear 4 | clc 5 | addpath ./im 6 | addpath ./func 7 | t1 = Tiff('0000_rgb.tiff','r'); 8 | t2 = Tiff('0000_nir.tiff','r'); 9 | imgRGB = read(t1); 10 | imgNIR = read(t2); 11 | % trans to YUV color space 12 | imgYUV = myrgb2yuv(imgRGB); 13 | 14 | 15 | imgNIR_Double = double(imgNIR); 16 | 17 | %% step1 18 | Yb_WLS = wlsFilter(imgNIR_Double,0.125,1.2); 19 | Yb_BF = fastBilateralFilter(imgNIR_Double); 20 | 21 | Yd_WLS = imgNIR_Double - Yb_WLS; 22 | Yd_BF = imgNIR_Double - Yb_BF; 23 | 24 | Yb_WLS_C = wlsFilter(imgYUV(:,:,1),0.125,1.2); 25 | % detail fusion 26 | Yd = 0.5*(Yd_WLS + Yd_BF); 27 | %% step2 28 | % base fusion 29 | Y = Yb_WLS_C + Yd; 30 | 31 | 32 | imgYUV(:,:,1) = Y; 33 | 34 | % trans to RGB 35 | OUT = myyuv2rgb(imgYUV); 36 | 37 | figure 38 | imshow([imgRGB uint8(OUT)]); 39 | 40 | -------------------------------------------------------------------------------- 
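A note on BFWLS/main.m above: it averages the WLS and bilateral detail layers of the NIR frame with fixed 0.5 weights and adds the result to the WLS base of the visible Y channel. The sketch below is an illustrative variation only, not a file in this repository; it assumes the workspace of main.m (imgYUV, imgNIR_Double) and reuses wlsFilter and fastBilateralFilter exactly as they are called there, but keeps the visible detail as well and selects the per-pixel stronger detail instead of averaging.

% Sketch (not part of this repository): max-abs detail selection instead of the
% fixed 0.5/0.5 average used in BFWLS/main.m. Assumes imgYUV and imgNIR_Double
% exist as defined in that script (YUV values in [0 255]).
Yvis    = imgYUV(:,:,1);
Yb_vis  = wlsFilter(Yvis, 0.125, 1.2);              % base of the visible luma
Yd_vis  = Yvis - Yb_vis;                            % detail of the visible luma
Yb_nir  = wlsFilter(imgNIR_Double, 0.125, 1.2);
Yd_nir  = imgNIR_Double - Yb_nir;                   % detail of the NIR frame
useNIR  = abs(Yd_nir) > abs(Yd_vis);                % pick the stronger detail per pixel
Yd_fused = Yd_vis;
Yd_fused(useNIR) = Yd_nir(useNIR);
Y_fused = min(max(Yb_vis + Yd_fused, 0), 255);      % clamp before the YUV -> RGB conversion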
/BFWLS/main2.m: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/entropyzeroo/ImageFusion/13de65ff6590c25a8356be79c7de12b556ffd9d1/BFWLS/main2.m -------------------------------------------------------------------------------- /GFF/.DS_Store: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/entropyzeroo/ImageFusion/13de65ff6590c25a8356be79c7de12b556ffd9d1/GFF/.DS_Store -------------------------------------------------------------------------------- /GFF/func/boxfilter.m: -------------------------------------------------------------------------------- 1 | function imDst = boxfilter(imSrc, r) 2 | 3 | % BOXFILTER O(1) time box filtering using cumulative sum 4 | % 5 | % - Definition imDst(x, y)=sum(sum(imSrc(x-r:x+r,y-r:y+r))); 6 | % - Running time independent of r; 7 | % - Equivalent to the function: colfilt(imSrc, [2*r+1, 2*r+1], 'sliding', @sum); 8 | % - But much faster. 9 | 10 | [hei, wid] = size(imSrc); 11 | imDst = zeros(size(imSrc)); 12 | 13 | %cumulative sum over Y axis 14 | imCum = cumsum(imSrc, 1); 15 | %difference over Y axis 16 | imDst(1:r+1, :) = imCum(1+r:2*r+1, :); 17 | imDst(r+2:hei-r, :) = imCum(2*r+2:hei, :) - imCum(1:hei-2*r-1, :); 18 | imDst(hei-r+1:hei, :) = repmat(imCum(hei, :), [r, 1]) - imCum(hei-2*r:hei-r-1, :); 19 | 20 | %cumulative sum over X axis 21 | imCum = cumsum(imDst, 2); 22 | %difference over Y axis 23 | imDst(:, 1:r+1) = imCum(:, 1+r:2*r+1); 24 | imDst(:, r+2:wid-r) = imCum(:, 2*r+2:wid) - imCum(:, 1:wid-2*r-1); 25 | imDst(:, wid-r+1:wid) = repmat(imCum(:, wid), [1, r]) - imCum(:, wid-2*r:wid-r-1); 26 | end 27 | 28 | -------------------------------------------------------------------------------- /GFF/func/guidedfilter.m: -------------------------------------------------------------------------------- 1 | function q = guidedfilter(I, p, r, eps) 2 | 3 | % - guidance image: I (should be a gray-scale/single channel image) 4 | % - filtering input image: p (should be a gray-scale/single channel image) 5 | % - local window radius: r 6 | % - regularization parameter: eps 7 | 8 | [hei, wid] = size(I); 9 | N = boxfilter(ones(hei, wid), r); 10 | 11 | mean_I = boxfilter(I, r) ./ N; 12 | mean_p = boxfilter(p, r) ./ N; 13 | mean_Ip = boxfilter(I.*p, r) ./ N; 14 | % this is the covariance of (I, p) in each local patch. 
15 | cov_Ip = mean_Ip - mean_I .* mean_p; 16 | 17 | mean_II = boxfilter(I.*I, r) ./ N; 18 | var_I = mean_II - mean_I .* mean_I; 19 | 20 | a = cov_Ip ./ (var_I + eps); 21 | b = mean_p - a .* mean_I; 22 | 23 | mean_a = boxfilter(a, r) ./ N; 24 | mean_b = boxfilter(b, r) ./ N; 25 | 26 | q = mean_a .* I + mean_b; 27 | end 28 | 29 | -------------------------------------------------------------------------------- /GFF/func/saliency.m: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/entropyzeroo/ImageFusion/13de65ff6590c25a8356be79c7de12b556ffd9d1/GFF/func/saliency.m -------------------------------------------------------------------------------- /GFF/im/1-1.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/entropyzeroo/ImageFusion/13de65ff6590c25a8356be79c7de12b556ffd9d1/GFF/im/1-1.jpg -------------------------------------------------------------------------------- /GFF/im/1-2.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/entropyzeroo/ImageFusion/13de65ff6590c25a8356be79c7de12b556ffd9d1/GFF/im/1-2.jpg -------------------------------------------------------------------------------- /GFF/im/1.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/entropyzeroo/ImageFusion/13de65ff6590c25a8356be79c7de12b556ffd9d1/GFF/im/1.jpg -------------------------------------------------------------------------------- /GFF/im/2.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/entropyzeroo/ImageFusion/13de65ff6590c25a8356be79c7de12b556ffd9d1/GFF/im/2.jpg -------------------------------------------------------------------------------- /GFF/im/LWIR.tif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/entropyzeroo/ImageFusion/13de65ff6590c25a8356be79c7de12b556ffd9d1/GFF/im/LWIR.tif -------------------------------------------------------------------------------- /GFF/im/Vis.tif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/entropyzeroo/ImageFusion/13de65ff6590c25a8356be79c7de12b556ffd9d1/GFF/im/Vis.tif -------------------------------------------------------------------------------- /GFF/im/lena1.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/entropyzeroo/ImageFusion/13de65ff6590c25a8356be79c7de12b556ffd9d1/GFF/im/lena1.jpg -------------------------------------------------------------------------------- /GFF/im/lena2.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/entropyzeroo/ImageFusion/13de65ff6590c25a8356be79c7de12b556ffd9d1/GFF/im/lena2.jpg -------------------------------------------------------------------------------- /GFF/main.m: -------------------------------------------------------------------------------- 1 | clc 2 | clear 3 | close all 4 | addpath ./func 5 | addpath ./im 6 | 7 | 8 | im1 = im2double(imread('1.jpg')); 9 | im2 = im2double(imread('2.jpg')); 10 | 11 | % imshow([im1 im2]) 12 | %% Get B,D 13 | h1 = fspecial('average',[31,31]); 14 | B1 = imfilter(im1, h1, 'replicate'); 15 | B2 = imfilter(im2, h1, 'replicate'); 16 | 17 | D1 = im1-B1; 18 | D2 = im2-B2; 19 | % imshow([B1 B2]) 20 | 
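% The remainder of this script builds the GFF fusion weights: a Laplacian filter
% plus Gaussian smoothing yields per-pixel saliency maps S1/S2; saliency() (see
% func/saliency.m) compares them to form initial weight maps P1/P2; guided
% filtering with the source images as guides then refines the weights (a larger
% radius/eps for the base layer, a smaller one for the detail layer), and the
% normalized weights blend the base and detail layers into the fused result.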
21 |
22 | %% Saliency map
23 | h2 = fspecial('laplacian');
24 | H1 = imfilter(im1, h2, 'replicate');
25 | H2 = imfilter(im2, h2, 'replicate');
26 |
27 | H1 = abs(H1);
28 | H2 = abs(H2);
29 |
30 | h3 = fspecial('gaussian', [11,11], 5);
31 | S1 = imfilter(H1, h3, 'replicate');
32 | S2 = imfilter(H2, h3, 'replicate');
33 |
34 | % imshow([H1 H2;S1 S2])
35 |
36 | %% Get Weight maps
37 | P1 = saliency(S1, S2);
38 | P2 = saliency(S2, S1);
39 |
40 | % imshow([P1 P2])
41 |
42 | %% Guided filter
43 | eps1 = 0.3^2;
44 | eps2 = 0.05^2;
45 | for i=1:3
46 | Wb1(:,:,i) = guidedfilter(im1(:,:,i) , P1(:,:,i) , 8, eps1);
47 | Wb2(:,:,i) = guidedfilter(im2(:,:,i) , P2(:,:,i) , 8, eps1);
48 |
49 | Wd1(:,:,i) = guidedfilter(im1(:,:,i) , P1(:,:,i) , 4, eps2);
50 | Wd2(:,:,i) = guidedfilter(im2(:,:,i) , P2(:,:,i) , 4, eps2);
51 | end
52 |
53 | % imshow([Wb1 Wb2;Wd1 Wd2])
54 | %% Weighted average
55 | Wbmax = Wb1+Wb2;
56 | Wdmax = Wd1+Wd2;
57 | Wb1 = Wb1./Wbmax;
58 | Wb2 = Wb2./Wbmax;
59 | Wd1 = Wd1./Wdmax;
60 | Wd2 = Wd2./Wdmax;
61 |
62 |
63 | B = B1.*Wb1+B2.*Wb2;
64 | D = D1.*Wd1+D2.*Wd2;
65 | im = B+D;
66 |
67 | imshow([im1 im2 im])
68 |
69 |
70 |
71 |
72 |
--------------------------------------------------------------------------------
/GFF/readme.md:
--------------------------------------------------------------------------------
1 | # Guided Filter Fusion (GFF)
2 |
3 | This paper proposes an image fusion method based on the guided filter. The inputs to be fused can be **multi-spectral images**, **multi-focus images**, **differently exposed images**, and so on. The paper's contribution is a weight-computation scheme for multiple input images that introduces the notions of pixel saliency and spatial consistency, and then uses guided filtering to refine the weight maps, giving an efficient method with good fusion results.
4 |
5 | [Study notes](https://blog.csdn.net/weixin_43194305/article/details/90678312)
6 |
7 | Results: ![](https://img-blog.csdnimg.cn/2019053011355770.jpg)
8 |
9 | ![lena](https://img-blog.csdnimg.cn/20190603171823631.jpg)
--------------------------------------------------------------------------------
/Laplacian-Fusion/func/LPD.m:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/entropyzeroo/ImageFusion/13de65ff6590c25a8356be79c7de12b556ffd9d1/Laplacian-Fusion/func/LPD.m
--------------------------------------------------------------------------------
/Laplacian-Fusion/func/SSR.m:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/entropyzeroo/ImageFusion/13de65ff6590c25a8356be79c7de12b556ffd9d1/Laplacian-Fusion/func/SSR.m
--------------------------------------------------------------------------------
/Laplacian-Fusion/im/.DS_Store:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/entropyzeroo/ImageFusion/13de65ff6590c25a8356be79c7de12b556ffd9d1/Laplacian-Fusion/im/.DS_Store
--------------------------------------------------------------------------------
/Laplacian-Fusion/im/Anir.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/entropyzeroo/ImageFusion/13de65ff6590c25a8356be79c7de12b556ffd9d1/Laplacian-Fusion/im/Anir.jpg
--------------------------------------------------------------------------------
/Laplacian-Fusion/im/Argb.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/entropyzeroo/ImageFusion/13de65ff6590c25a8356be79c7de12b556ffd9d1/Laplacian-Fusion/im/Argb.jpg
--------------------------------------------------------------------------------
/Laplacian-Fusion/im/inf.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/entropyzeroo/ImageFusion/13de65ff6590c25a8356be79c7de12b556ffd9d1/Laplacian-Fusion/im/inf.jpg
--------------------------------------------------------------------------------
/Laplacian-Fusion/im/nir.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/entropyzeroo/ImageFusion/13de65ff6590c25a8356be79c7de12b556ffd9d1/Laplacian-Fusion/im/nir.jpg
--------------------------------------------------------------------------------
/Laplacian-Fusion/im/rgb.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/entropyzeroo/ImageFusion/13de65ff6590c25a8356be79c7de12b556ffd9d1/Laplacian-Fusion/im/rgb.jpg
--------------------------------------------------------------------------------
/Laplacian-Fusion/im/vis.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/entropyzeroo/ImageFusion/13de65ff6590c25a8356be79c7de12b556ffd9d1/Laplacian-Fusion/im/vis.jpg
--------------------------------------------------------------------------------
/Laplacian-Fusion/main_lpd_fusion.m:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/entropyzeroo/ImageFusion/13de65ff6590c25a8356be79c7de12b556ffd9d1/Laplacian-Fusion/main_lpd_fusion.m
--------------------------------------------------------------------------------
/Laplacian-Fusion/readme.md:
--------------------------------------------------------------------------------
1 | # Laplacian Pyramid Fusion
2 |
3 | [Notes](https://blog.csdn.net/weixin_43194305/article/details/93034655)
4 |
5 | Results:
6 |
7 | ![res1](https://img-blog.csdnimg.cn/20190620143112815.jpg)
8 |
9 | ![res2](https://img-blog.csdnimg.cn/20190620143124612.jpg?)
10 |
11 | ![res3](https://img-blog.csdnimg.cn/20190620143551940.jpg?)
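
Since LPD.m, SSR.m and main_lpd_fusion.m are only linked above rather than inlined, the following is a minimal, generic sketch of Laplacian-pyramid fusion for two registered grayscale images. It illustrates the technique named in this readme, not the implementation used in this folder; the function and variable names are invented for the example.

```matlab
function F = lp_fuse_sketch(A, B, nLevels)
% Generic Laplacian-pyramid fusion: max-abs selection on the detail levels,
% averaging on the coarsest level. A, B: registered grayscale doubles in [0,1].
g  = fspecial('gaussian', [5 5], 1);          % pyramid smoothing kernel
GA = {A}; GB = {B};
for k = 1:nLevels-1                           % Gaussian pyramids
    GA{k+1} = imresize(imfilter(GA{k}, g, 'replicate'), 0.5);
    GB{k+1} = imresize(imfilter(GB{k}, g, 'replicate'), 0.5);
end
for k = 1:nLevels-1                           % Laplacian levels + fusion rule
    upA  = imresize(GA{k+1}, [size(GA{k},1) size(GA{k},2)]);
    upB  = imresize(GB{k+1}, [size(GB{k},1) size(GB{k},2)]);
    LA   = GA{k} - upA;
    LB   = GB{k} - upB;
    mask = abs(LA) >= abs(LB);                % keep the stronger detail per pixel
    LF{k} = LA .* mask + LB .* (~mask);
end
F = 0.5 * (GA{nLevels} + GB{nLevels});        % fuse the coarsest level by averaging
for k = nLevels-1:-1:1                        % collapse the fused pyramid
    F = imresize(F, [size(LF{k},1) size(LF{k},2)]) + LF{k};
end
end
```

For RGB inputs one would typically run such a scheme on the luminance channel only, as the BF and BFWLS folders in this repository do.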
--------------------------------------------------------------------------------
/SE/func/reintegration.m:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/entropyzeroo/ImageFusion/13de65ff6590c25a8356be79c7de12b556ffd9d1/SE/func/reintegration.m
--------------------------------------------------------------------------------
/SE/func/se.m:
--------------------------------------------------------------------------------
1 | function [gx,gy] = se(rx, ry, hx, hy)
2 | %SE Per-pixel gradient mapping for Spectral Edge fusion: builds 2x2 gradient structure tensors from the target gradients (rx, ry) and the guide gradients (hx, hy), and applies the resulting 2x2 transform to the target gradients, returning gx, gy.
3 | [m, n, ~] = size(rx);
4 | gx = zeros(size(rx));
5 | gy = gx;
6 |
7 | rx2 = sum(rx.^2, 3);
8 | ry2 = sum(ry.^2, 3);
9 | rxy = sum(rx.*ry, 3);
10 |
11 | hx2 = sum(hx.^2, 3);
12 | hy2 = sum(hy.^2, 3);
13 | hxy = sum(hx.*hy, 3);
14 |
15 | for i=1:m
16 | for j=1:n
17 | st1 = [rx2(i,j) rxy(i, j);rxy(i, j) ry2(i, j)];
18 | st2 = [hx2(i,j) hxy(i, j);hxy(i, j) hy2(i, j)];
19 | sq1 = st1^0.5;
20 | sq2 = st2^0.5;
21 | temp = sq1*sq2';
22 | [u, ~, v] = svd(temp);
23 | o = u*v;
24 | a = real(pinv(sq1)*o*sq2');
25 | x = rx(i, j, :);
26 | y = ry(i, j, :);
27 | xy = [x(:) y(:)];
28 | xy = xy*a;
29 | gx(i,j,:) = xy(:,1);
30 | gy(i,j,:) = xy(:,2);
31 | end
32 | end
33 |
34 | end
35 |
36 |
--------------------------------------------------------------------------------
/SE/im/.DS_Store:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/entropyzeroo/ImageFusion/13de65ff6590c25a8356be79c7de12b556ffd9d1/SE/im/.DS_Store
--------------------------------------------------------------------------------
/SE/im/nir.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/entropyzeroo/ImageFusion/13de65ff6590c25a8356be79c7de12b556ffd9d1/SE/im/nir.jpg
--------------------------------------------------------------------------------
/SE/im/rgb.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/entropyzeroo/ImageFusion/13de65ff6590c25a8356be79c7de12b556ffd9d1/SE/im/rgb.jpg
--------------------------------------------------------------------------------
/SE/main.m:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/entropyzeroo/ImageFusion/13de65ff6590c25a8356be79c7de12b556ffd9d1/SE/main.m
--------------------------------------------------------------------------------
/SE/readme.md:
--------------------------------------------------------------------------------
1 | # Spectral Edge Image Fusion
2 |
3 | This paper proposes an image fusion method based on matrix optimization, applicable to fusing images with an arbitrary number of channels.
4 |
5 | [Study notes](https://blog.csdn.net/weixin_43194305/article/details/88864187)
6 |
7 | Results:
8 |
9 | ![res1](https://img-blog.csdnimg.cn/20190619195446742.jpg)
10 |
11 | Result after improving the linear stretching step:
12 |
13 | ![res2](https://img-blog.csdnimg.cn/20190708121739733.jpg)
14 |
--------------------------------------------------------------------------------
/SE/test.m:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/entropyzeroo/ImageFusion/13de65ff6590c25a8356be79c7de12b556ffd9d1/SE/test.m
--------------------------------------------------------------------------------
/readme.md:
--------------------------------------------------------------------------------
1 | # ImageFusion
2 | Essentially all of the algorithms here are run in MATLAB.
3 |
4 |
--------------------------------------------------------------------------------
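
Because SE/main.m and SE/func/reintegration.m are only linked above, here is a hypothetical driver for SE/func/se.m (shown earlier) for orientation. The forward-difference gradients and the 4-channel RGB+NIR guide stack are assumptions made for illustration, not the repository's actual pipeline, and the returned gx/gy would still need to be reintegrated into an image (the role of reintegration.m).

```matlab
% Hypothetical driver for se() -- see the assumptions stated above.
rgb   = im2double(imread('rgb.jpg'));                 % SE/im/rgb.jpg
nir   = im2double(imread('nir.jpg'));                 % SE/im/nir.jpg
if size(nir,3) > 1, nir = rgb2gray(nir); end          % assume a single NIR channel
guide = cat(3, rgb, nir);                             % assumed 4-channel guide image
fx = [-1 1]; fy = [-1; 1];                            % simple forward differences
rx = imfilter(rgb,   fx, 'replicate');
ry = imfilter(rgb,   fy, 'replicate');
hx = imfilter(guide, fx, 'replicate');
hy = imfilter(guide, fy, 'replicate');
[gx, gy] = se(rx, ry, hx, hy);                        % per-pixel gradient mapping
% gx, gy would then be reintegrated (e.g. via a Poisson-type solve) to obtain
% the fused RGB image; that step is handled by func/reintegration.m in this repo.
```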