├── channel
│   └── 5dB
├── TVT_SourceCode
│   ├── OMP_Algorithm_MMV.m
│   ├── readme.md
│   ├── main.m
│   ├── TrainingSetGen.m
│   └── channel_f.m
├── .gitignore
├── README.md
├── utils.py
├── models.py
└── modelTrain.py

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
1 | 
2 | channel/5dB/trainingChannel5_C6P8_AS7.5.mat
3 | channel/5dB/trueTrainingChannel5_C6P8_AS7.5.mat
--------------------------------------------------------------------------------
/TVT_SourceCode/OMP_Algorithm_MMV.m:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/scliubit/complex-DnCNN/HEAD/TVT_SourceCode/OMP_Algorithm_MMV.m
--------------------------------------------------------------------------------
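The body of `OMP_Algorithm_MMV.m` is only linked above rather than inlined. For orientation, the sketch below is a *generic* simultaneous-OMP (MMV) solver in Python/NumPy that mirrors the call signature used in `main.m` and `TrainingSetGen.m` — `OMP_Algorithm_MMV(y_k_com, Phi, Psi, epsilon, W, maxIter)`, returning the sparse estimate and the number of iterations. It illustrates the greedy recovery step only and is not the repository's implementation; the function name `somp_mmv`, the residual-based stopping rule, and the treatment of the weighting matrix `W` are assumptions.

```python
import numpy as np

def somp_mmv(Y, Phi, Psi, epsilon, W=None, max_iter=10):
    """Generic simultaneous OMP for the MMV model Y ~= Phi @ Psi @ X (X row-sparse)."""
    A = Phi @ Psi                          # effective measurement matrix
    if W is not None:                      # optional weighting (identity in main.m)
        Y, A = W @ Y, W @ A
    n = A.shape[1]
    residual = Y.copy()
    support = []
    X_hat = np.zeros((n, Y.shape[1]), dtype=complex)
    for it in range(1, max_iter + 1):
        # aggregate the correlation with the residual over all measurement vectors
        corr = np.linalg.norm(A.conj().T @ residual, axis=1)
        corr[support] = 0.0                # never pick the same atom twice
        support.append(int(np.argmax(corr)))
        # joint least-squares re-estimation on the current support
        X_s, *_ = np.linalg.lstsq(A[:, support], Y, rcond=None)
        residual = Y - A[:, support] @ X_s
        if np.linalg.norm(residual) ** 2 / Y.size <= epsilon:
            break                          # assumed stopping rule (epsilon = noise variance)
    X_hat[support, :] = X_s
    return X_hat, it
```

In `main.m` and `TrainingSetGen.m` the recovered sparse matrix is mapped back to the antenna domain via `h_k_est_com = Psi*h_v_hat`, which corresponds to `Psi @ X_hat` in this sketch.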
/TVT_SourceCode/readme.md:
--------------------------------------------------------------------------------
1 | # Greedy Estimation Part
2 | 
3 | This is the source code for the greedy (OMP-based) estimation part of the [paper](https://ieeexplore.ieee.org/document/9127834)[^1], and it can be used to plot the first simulation result directly. The other simulation results can be obtained by modifying the parameters in this code.
4 | 
5 | The parameters in `main.m` can be changed freely; the values given here are the parameter set used in the paper. A complete run is time-consuming, and the computation slows down as the run progresses.
6 | 
7 | For further information on our other research, please visit our official website [gaozhen16.github.io](https://gaozhen16.eu.org).
8 | 
9 | ---
10 | 
11 | [^1]: S. Liu, Z. Gao, J. Zhang, M. D. Renzo and M.-S. Alouini, "Deep Denoising Neural Network Assisted Compressive Channel Estimation for mmWave Intelligent Reflecting Surfaces," in IEEE Transactions on Vehicular Technology, vol. 69, no. 8, pp. 9223-9228, Aug. 2020, doi: 10.1109/TVT.2020.3005402.
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | ## complex-valued DnCNN
2 | 
3 | Source code for [Deep Denoising Neural Network Assisted Compressive Channel Estimation for mmWave Intelligent Reflecting Surfaces](https://ieeexplore.ieee.org/document/9127834), modified from [DnCNN](https://github.com/cszn/DnCNN).
4 | 
5 | ```
6 | S. Liu, Z. Gao, J. Zhang, M. D. Renzo and M.-S. Alouini, "Deep Denoising Neural Network Assisted Compressive Channel Estimation for mmWave Intelligent Reflecting Surfaces," in IEEE Transactions on Vehicular Technology, vol. 69, no. 8, pp. 9223-9228, Aug. 2020, doi: 10.1109/TVT.2020.3005402.
7 | ```
8 | 
9 | To run the code successfully, follow the steps below:
10 | 
11 | - Make sure you have installed the required libraries.
12 | 
13 | - Generate the channel dataset. We use the geometric channel model from our paper with varying parameters.
14 | 
15 |   Note that the required shape is $[N,2,N_{IRS},N_{C}]$, where 2 represents the real part and the imaginary part of the channel matrix.
16 | 
17 | - Set the parameters in `modelTrain.py` (they determine the path used when saving models) and create the corresponding folders before running the program.
18 | - *If you want to evaluate the performance, change the dataset configuration in `modelTrain.py` to the test set.*
19 | 
20 | > The original paper uses the OMP algorithm to construct the dataset; that part of the code may be released in a later clean-up.
--------------------------------------------------------------------------------
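Before training, it may help to confirm that the generated `.mat` files expose the layout `modelTrain.py` expects. The following is a minimal sketch, assuming `h5py`/NumPy are installed and that `TrainingSetGen.m` has saved MATLAB v7.3 files under `channel/5dB/` with the names listed in `.gitignore`; the transpose is the same one applied in `modelTrain.py`.

```python
import h5py
import numpy as np

est_path = 'channel/5dB/trainingChannel5_C6P8_AS7.5.mat'       # noisy OMP estimates
true_path = 'channel/5dB/trueTrainingChannel5_C6P8_AS7.5.mat'  # ground-truth channels

with h5py.File(est_path, 'r') as f:
    # MATLAB v7.3 stores dimensions in reverse order; undo it as modelTrain.py does.
    x = np.transpose(f['trainingChannel'], [3, 2, 1, 0])
with h5py.File(true_path, 'r') as f:
    y = np.transpose(f['trueTrainingChannel'], [3, 2, 1, 0])

# Expected layout: [N, 2, N_IRS, N_C], e.g. [N, 2, 256, 256] for a 16x16 IRS and 256 subcarriers.
print(x.shape, y.shape)
assert x.shape[1] == 2 and x.shape == y.shape
```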
/utils.py:
--------------------------------------------------------------------------------
1 | import torch
2 | import torch.nn as nn
3 | import numpy as np
4 | from torch.utils.data import DataLoader
5 | from torch.utils.data import Dataset
6 | import os
7 | import glob
8 | import datetime
9 | import re
10 | import random
11 | 
12 | 
13 | def NMSE_cuda(x, x_hat):
14 |     x = x.contiguous().view(len(x), -1)
15 |     x_hat = x_hat.contiguous().view(len(x_hat), -1)
16 |     power = torch.sum(abs(x) ** 2, dim=1)
17 |     mse = torch.sum(abs(x - x_hat) ** 2, dim=1) / power
18 |     return mse
19 | 
20 | 
21 | class NMSELoss(nn.Module):
22 |     def __init__(self):
23 |         super(NMSELoss, self).__init__()
24 | 
25 |     def forward(self, x, x_hat):
26 |         return torch.mean(NMSE_cuda(x, x_hat))
27 | 
28 | 
29 | def seed_everything(seed=42):
30 |     os.environ['PYTHONHASHSEED'] = str(seed)
31 |     torch.manual_seed(seed)
32 |     torch.cuda.manual_seed_all(seed)
33 |     np.random.seed(seed)
34 |     random.seed(seed)
35 |     torch.backends.cudnn.deterministic = True
36 | 
37 | 
38 | def findLastCheckpoint(save_dir):
39 |     file_list = glob.glob(os.path.join(save_dir, 'model_*.pth'))
40 |     if file_list:
41 |         epochs_exist = []
42 |         for file_ in file_list:
43 |             result = re.findall(".*model_(.*).pth.*", file_)
44 |             epochs_exist.append(int(result[0]))
45 |         initial_epoch = max(epochs_exist)
46 |     else:
47 |         initial_epoch = 0
48 |     return initial_epoch
49 | 
50 | 
51 | def log(*args, **kwargs):
52 |     print(datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S:"), *args,
53 |           **kwargs)
54 | 
55 | 
56 | class MyDenoisingDataset(Dataset):
57 |     """Dataset wrapping tensors.
58 |     Arguments:
59 |         xs (Tensor): ground-truth (clean) channel samples
60 |         ys (Tensor): noisy channel estimates (e.g., the OMP outputs)
61 |     """
62 | 
63 |     def __init__(self, xs, ys):
64 |         super(MyDenoisingDataset, self).__init__()
65 |         self.xs = xs
66 |         self.ys = ys
67 | 
68 |     def __getitem__(self, index):
69 |         batch_x = self.xs[index]  # ground truth
70 |         batch_y = self.ys[index]  # noisy estimate
71 |         return batch_y, batch_x
72 | 
73 |     def __len__(self):
74 |         return self.xs.size(0)
75 | 
76 | 
77 | def cov(m, rowvar=False):
78 |     '''Estimate a covariance matrix given data.
79 | 
80 |     Covariance indicates the level to which two variables vary together.
81 |     If we examine N-dimensional samples, `X = [x_1, x_2, ... x_N]^T`,
82 |     then the covariance matrix element `C_{ij}` is the covariance of
83 |     `x_i` and `x_j`. The element `C_{ii}` is the variance of `x_i`.
84 | 
85 |     Args:
86 |         m: A 1-D or 2-D array containing multiple variables and observations.
87 |             Each row of `m` represents a variable, and each column a single
88 |             observation of all those variables.
89 |         rowvar: If `rowvar` is True, then each row represents a
90 |             variable, with observations in the columns. Otherwise, the
91 |             relationship is transposed: each column represents a variable,
92 |             while the rows contain observations.
93 | 
94 |     Returns:
95 |         The covariance matrix of the variables.
96 |     '''
97 |     if m.dim() > 2:
98 |         raise ValueError('m has more than 2 dimensions')
99 |     if m.dim() < 2:
100 |         m = m.view(1, -1)
101 |     if not rowvar and m.size(0) != 1:
102 |         m = m.t()
103 |     # m = m.type(torch.double)  # uncomment this line if desired
104 |     fact = 1.0 / (m.size(1) - 1)
105 |     m -= torch.mean(m, dim=1, keepdim=True)
106 |     mt = m.t()  # if complex: mt = m.t().conj()
107 |     return fact * m.matmul(mt).squeeze()
108 | 
109 | 
110 | if __name__ == "__main__":
111 |     r = False
112 |     x = np.random.randn(30, 2)
113 |     xt = torch.from_numpy(x).type(torch.double)
114 |     np_c = np.cov(x, rowvar=r)
115 |     our_c = cov(xt, rowvar=r).numpy()
116 |     print(np.allclose(np_c, our_c))
117 |     print(x, '\n\n\n', our_c, '\n\n\n', torch.var(xt))
118 | 
--------------------------------------------------------------------------------
/models.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 | # 2019 1127 Modified by S. Liu
3 | # 2020 0321 ReModified by S. Liu
4 | # 2022 0117 Re S. Liu
5 | # 2022 0308 Re S.
Liu 6 | 7 | import argparse 8 | import re 9 | import os 10 | import glob 11 | import datetime 12 | import time 13 | import numpy as np 14 | import torch 15 | import h5py 16 | import torch.nn as nn 17 | from scipy import io 18 | from utils import cov 19 | 20 | 21 | class ComplexBN(torch.nn.Module): 22 | 23 | def __init__(self, num_features, eps=1e-05, momentum=0.1, affine=True): 24 | super(ComplexBN, self).__init__() 25 | self.device = torch.device( 26 | "cuda" if torch.cuda.is_available() else "cpu") 27 | self.eps = eps 28 | self.num_features = num_features 29 | self.upper = True 30 | # self.batchNorm2dF = torch.nn.BatchNorm2d(num_features, 31 | # affine=affine).to(self.device) 32 | 33 | def forward(self, x): # shpae of x : [batch,2,channel,axis1,axis2] 34 | # divide dim=1 to 2 parts -> real and imag 35 | # real/imag = [batch, channel, axis1, axis2] 36 | 37 | real = x[:, 0] 38 | imag = x[:, 1] 39 | 40 | realVec = torch.flatten(real) 41 | imagVec = torch.flatten(imag) 42 | re_im_stack = torch.stack((realVec, imagVec), dim=1) 43 | covMat = cov(re_im_stack) 44 | # e, v = torch.symeig(covMat, True) 45 | e, v = torch.linalg.eigh(covMat, UPLO='U' if self.upper else 'L') 46 | covMat_sq2 = torch.mm(torch.mm(v, torch.diag(torch.pow(e, -0.5))), 47 | v.t()) 48 | data = torch.stack((realVec - real.mean(), imagVec - imag.mean()), 49 | dim=1).t() 50 | whitenData = torch.mm(covMat_sq2, data) 51 | real_data = whitenData[0, :].reshape(real.shape[0], real.shape[1], 52 | real.shape[2], real.shape[3]) 53 | imag_data = whitenData[1, :].reshape(real.shape[0], real.shape[1], 54 | real.shape[2], real.shape[3]) 55 | output = torch.stack((real_data, imag_data), dim=1) 56 | return output 57 | 58 | 59 | class ComplexConv2D(torch.nn.Module): 60 | 61 | def __init__(self, 62 | in_channels, 63 | out_channels, 64 | kernel_size, 65 | stride=1, 66 | padding=0, 67 | dilation=1, 68 | groups=1, 69 | bias=True): 70 | super(ComplexConv2D, self).__init__() 71 | self.device = torch.device( 72 | "cuda" if torch.cuda.is_available() else "cpu") 73 | self.padding = padding 74 | self.paddingF = nn.ZeroPad2d(1) 75 | # Model components 76 | # define complex conv 77 | self.conv_re = nn.Conv2d(in_channels, 78 | out_channels, 79 | kernel_size, 80 | stride=stride, 81 | padding=padding, 82 | dilation=dilation, 83 | groups=groups, 84 | bias=bias).to(self.device) 85 | self.conv_im = nn.Conv2d(in_channels, 86 | out_channels, 87 | kernel_size, 88 | stride=stride, 89 | padding=padding, 90 | dilation=dilation, 91 | groups=groups, 92 | bias=bias).to(self.device) 93 | self.weight1 = self.conv_re.weight 94 | self.weight2 = self.conv_im.weight 95 | self.bias1 = self.conv_re.bias 96 | self.bias2 = self.conv_im.bias 97 | 98 | def forward(self, x): 99 | 100 | # print(x.shape) 101 | r = self.paddingF(x[:, 0]) # NCHW 102 | # print(r.shape) 103 | i = self.paddingF(x[:, 1]) 104 | # print(r.shape) 105 | # New 20191102 106 | r[:, :, 0, :], i[:, :, 0, :] = r[:, :, -2, :], i[:, :, -2, :] 107 | r[:, :, -1, :], i[:, :, -1, :] = r[:, :, 1, :], i[:, :, 1, :] 108 | r[:, :, :, 0], i[:, :, :, 0] = r[:, :, :, 2], i[:, :, :, 2] 109 | r[:, :, :, -1], i[:, :, :, -1] = r[:, :, :, 1], i[:, :, :, 1] 110 | # NEW END 111 | real = self.conv_re(r) - self.conv_im(i) 112 | # print(real.shape) 113 | imaginary = self.conv_re(i) + self.conv_im(r) 114 | # stack real and imag part together @ dim=1 115 | output = torch.stack((real, imaginary), dim=1) 116 | return output 117 | 118 | 119 | class ComplexReLU(torch.nn.Module): 120 | 121 | def __init__(self, inplace=False): 122 | 
super(ComplexReLU, self).__init__() 123 | self.device = torch.device( 124 | "cuda" if torch.cuda.is_available() else "cpu") 125 | self.relu = nn.ReLU() 126 | # self.relu_re = nn.ReLU(inplace=inplace).to(self.device) 127 | # self.relu_im = nn.ReLU(inplace=inplace).to(self.device) 128 | 129 | def forward(self, x): 130 | # output = torch.stack( 131 | # (self.relu_re(x[:, 0]), self.relu_im(x[:, 1])), dim=1).to(self.device) 132 | return self.relu(x) 133 | 134 | 135 | class ComplexDnCNN(torch.nn.Module): 136 | 137 | def __init__(self, 138 | depth=17, 139 | n_channels=64, 140 | image_channels=1, 141 | use_bnorm=True, 142 | kernel_size=3): 143 | super(ComplexDnCNN, self).__init__() 144 | self.device = torch.device( 145 | "cuda" if torch.cuda.is_available() else "cpu") 146 | # kernel_size = 3 147 | padding = 0 148 | layers = [] 149 | # 1. Conv2d and ReLU 150 | layers.append( 151 | ComplexConv2D(in_channels=image_channels, 152 | out_channels=n_channels, 153 | kernel_size=kernel_size, 154 | padding=padding, 155 | bias=True)) 156 | layers.append(ComplexReLU(inplace=False)) 157 | # 2. 15 * (Conv2d + BN + ReLU) 158 | for _ in range(depth - 2): 159 | layers.append( 160 | ComplexConv2D(in_channels=n_channels, 161 | out_channels=n_channels, 162 | kernel_size=kernel_size, 163 | padding=padding, 164 | bias=False)) 165 | '''layers.append(torch.nn.BatchNorm2d( 166 | n_channels, eps=0.0001, momentum=0.95).to(device=self.device))''' 167 | layers.append(ComplexBN(n_channels, eps=0.0001, momentum=0.95)) 168 | layers.append(ComplexReLU(inplace=False)) 169 | # 3. conv2d 170 | layers.append( 171 | ComplexConv2D(in_channels=n_channels, 172 | out_channels=image_channels, 173 | kernel_size=kernel_size, 174 | padding=padding, 175 | bias=False)) 176 | self.dncnn = torch.nn.Sequential(*layers) 177 | 178 | def forward(self, x): 179 | y = x 180 | out = self.dncnn(x) 181 | return y - out 182 | -------------------------------------------------------------------------------- /modelTrain.py: -------------------------------------------------------------------------------- 1 | import argparse 2 | import re 3 | import os 4 | import glob 5 | import datetime 6 | import time 7 | import numpy as np 8 | import torch 9 | import torch.nn as nn 10 | import h5py 11 | from scipy import io 12 | from utils import * 13 | from models import * 14 | import torch.optim as optim 15 | from torch.optim.lr_scheduler import MultiStepLR 16 | 17 | # check if cuda is available 18 | cuda_available = torch.cuda.is_available() 19 | print('cuda_available:', cuda_available) 20 | if cuda_available: 21 | device = torch.device('cuda') 22 | else: 23 | device = torch.device('cpu') 24 | 25 | 26 | # Params 27 | parser = argparse.ArgumentParser(description='PyTorch Complex DnCNN') 28 | parser.add_argument( 29 | '--model', 30 | default='cDnCNN', 31 | type=str, 32 | help='choose a type of model') 33 | parser.add_argument('--batch_size', default=2, type=int, help='batch size') 34 | parser.add_argument('--train_data', 35 | default='channel', 36 | type=str, 37 | help='path of train data') 38 | parser.add_argument('--snr', default=5, type=int, help='noise level') 39 | parser.add_argument('--channel_clusters', default=6, 40 | type=int, help='clusters of channel') 41 | parser.add_argument('--paths_per_cluster', default=8, 42 | type=int, help='paths per cluster') 43 | parser.add_argument('--angle_spread', default=7.5, 44 | type=int, help='angle spread') 45 | parser.add_argument('--epoch', 46 | default=150, 47 | type=int, 48 | help='number of train epoches') 49 | 
parser.add_argument('--lr', 50 | default=3e-4, 51 | type=float, 52 | help='initial learning rate for Adam') 53 | args = parser.parse_args() 54 | clusters = args.channel_clusters 55 | paths = args.paths_per_cluster 56 | AS = args.angle_spread 57 | batch_size = args.batch_size 58 | print(batch_size) 59 | train_data = args.train_data 60 | n_epoch = args.epoch 61 | snr = args.snr 62 | PRINT_FREQ = 20 63 | # snr = 10 64 | save_dir = os.path.join('./models', args.model + '_' + 'snr' + str(snr)) 65 | if not os.path.exists(save_dir): 66 | os.makedirs(save_dir) 67 | 68 | if __name__ == '__main__': 69 | print('>>> Building Model') 70 | model = ComplexDnCNN().to(device) 71 | # uncomment to use dataparallel (unstable) 72 | # device_ids = [0, 1] 73 | # model = nn.DataParallel(model, device_ids=device_ids).cuda() 74 | initial_epoch = findLastCheckpoint(save_dir=save_dir) 75 | if initial_epoch > 0: 76 | print('resuming by loading epoch %03d' % initial_epoch) 77 | model.load_state_dict( 78 | torch.load(os.path.join(save_dir, 79 | 'model_%03d.pth' % initial_epoch))) 80 | print(">>> Building Model Finished") 81 | # model.train() # Enable BN and Dropout 82 | # criterion = nn.MSELoss(reduction='sum').cuda() 83 | criterion = NMSELoss() 84 | optimizer = optim.Adam(model.parameters(), lr=args.lr) 85 | scheduler = MultiStepLR(optimizer, milestones=[20, 40, 60, 80, 100, 120], 86 | gamma=0.7) # learning rates 87 | print("Loading Data") 88 | # should be generated by yourself 89 | train_est = train_data + '/' + \ 90 | str(snr)+'dB/trainingChannel'+str(snr) + \ 91 | '_C' + str(clusters) + 'P' + str(paths) + '_AS' + str(AS) + '.mat' 92 | 93 | train_true = train_data + '/' + \ 94 | str(snr)+'dB/trueTrainingChannel'+str(snr) + \ 95 | '_C' + str(clusters) + 'P' + str(paths) + '_AS' + str(AS) + '.mat' 96 | 97 | train_est_mat = h5py.File(train_est, mode='r') 98 | # print(train_est_mat.keys()) 99 | x_train = train_est_mat['trainingChannel'] 100 | x_train = np.transpose(x_train, [3, 2, 1, 0]) 101 | print('>>> training Set setup complete') 102 | 103 | # ground truth 104 | train_true_mat = h5py.File(train_true, mode='r') 105 | # y_train = train_true_mat['trueTrainingChannel'] 106 | y_train = train_true_mat['trueTrainingChannel'] 107 | y_train = np.transpose(y_train, [3, 2, 1, 0]) 108 | print('>>> groundTruth Set setup complete') 109 | data_num = x_train.shape[0] 110 | split = int(data_num * 0.9) 111 | 112 | h_hat = torch.from_numpy(x_train).float().reshape( 113 | [x_train.shape[0], x_train.shape[1], 1, x_train.shape[2], x_train.shape[3]]) 114 | # x_train = x_train[0:2000, :] 115 | h_hat_train = h_hat[0:split, :, :, :, :] 116 | h_hat_test = h_hat[split:, :, :, :, :] 117 | print(h_hat_train.shape) 118 | h_true = torch.from_numpy(y_train).float().reshape( 119 | [y_train.shape[0], y_train.shape[1], 1, y_train.shape[2], y_train.shape[3]]) 120 | h_true_train = h_true[0:split, :, :, :, :] 121 | h_true_test = h_true[split:, :, :, :, :] 122 | print(h_true_train.shape) 123 | train_dataset = MyDenoisingDataset(h_true_train, h_hat_train) 124 | test_dataset = MyDenoisingDataset(h_true_test, h_hat_test) 125 | trainLoader = DataLoader(dataset=train_dataset, 126 | num_workers=0, 127 | drop_last=True, 128 | batch_size=batch_size, 129 | shuffle=True) 130 | testloader = DataLoader(dataset=test_dataset, 131 | num_workers=0, 132 | drop_last=False, 133 | batch_size=batch_size, 134 | shuffle=False) 135 | best_loss = 100 136 | for epoch in range(initial_epoch, n_epoch): 137 | epoch_loss = 0 138 | start_time = time.time() 139 | model.train() 140 | for 
n_count, batch_yx in enumerate(trainLoader):
141 |             if cuda_available:
142 |                 batch_x, batch_y = batch_yx[1].cuda(), batch_yx[0].cuda()
143 |             else:
144 |                 batch_x, batch_y = batch_yx[1], batch_yx[0]
145 |             out = model(batch_y)
146 |             loss = criterion(out, batch_x)
147 |             optimizer.zero_grad()
148 |             loss.backward()
149 |             optimizer.step()
150 |             epoch_loss += loss.item()
151 |             del batch_x  # free some memory
152 |             del batch_y
153 |             del batch_yx
154 |             del out
155 |             if n_count % PRINT_FREQ == 0:
156 |                 print('[%4d] - [%4d]/[%4d] loss = %2.4f\ttime: %2.4f' %
157 |                       (epoch + 1, n_count, len(trainLoader), loss.item(), time.time() - start_time))
158 |         scheduler.step()  # MultiStepLR milestones are epoch indices, so step once per epoch
159 |         elapsed_time = time.time() - start_time
160 |         log('epoch = %4d , loss = %4.4f , time = %4.2f s' %
161 |             (epoch + 1, epoch_loss / len(trainLoader), elapsed_time))
162 |         model.eval()
163 |         test_loss = 0
164 |         with torch.no_grad():
165 |             for n_count, batch_yx in enumerate(testloader):
166 |                 if cuda_available:
167 |                     batch_x, batch_y = batch_yx[1].cuda(), batch_yx[0].cuda()
168 |                 else:
169 |                     batch_x, batch_y = batch_yx[1], batch_yx[0]
170 |                 loss = criterion(model(batch_y), batch_x)
171 |                 test_loss += loss.item()
172 |         print('test loss = %4.4f' % (test_loss/len(testloader)))
173 |         if test_loss/len(testloader) <= best_loss:
174 |             torch.save(model.state_dict(),
175 |                        os.path.join(save_dir, 'model_%03d.pth' % (epoch + 1)))
176 |             best_loss = test_loss/len(testloader)
177 |             print('save model\n')
178 | 
179 |     # torch.save(model, os.path.join(save_dir, 'model_%03d.pth' % (epoch+1)))
--------------------------------------------------------------------------------
/TVT_SourceCode/main.m:
--------------------------------------------------------------------------------
1 | clc;
2 | clear all;
3 | close all;
4 | %% Parameters
5 | N_ms = [1,1];
6 | N_MS = N_ms(1)*N_ms(2);
7 | N_irs = [16,16];
8 | N_IRS = N_irs(1)*N_irs(2);
9 | FFT_len = 256;
10 | Nc = FFT_len; % subcarriers
11 | BW = FFT_len*(15e3);
12 | fs = BW;
13 | fc = 28e9; % carrier frequency is 28 GHz
14 | lambda = 3e8/fc;
15 | d_ant = lambda/2; % antenna spacing
16 | sigma_2_alpha = 1; % variance of path gain
17 | awgn_en = 1; % 1: add noise; 0: no noise
18 | N_bits = 3; % number of quantized bits
19 | N_Bits = 2^N_bits;
20 | % Lp = 3;
21 | N_RF = 1;
22 | Ns = N_RF;
23 | activeEleRateList=[0.01 0.05 0.1 0.2 0.25 0.3 0.5 1];
24 | % activeEleRateList = [0.25];
25 | % activeEleRateList=[0.01 0.02 0.2];
26 | activeEleNumList=round(activeEleRateList*N_IRS);
27 | identity_d = 1; % digital precoding/combining matrix is an identity matrix: 0(no); 1(yes); Make a reasonable assumption that Ns = N_RF;
28 | oversampling = 4; % dict
29 | mode=1; % 0: phase shifters, 1: switchers
30 | %% Dict
31 | N_2=N_MS;
32 | m=(0:N_2-1).';
33 | grid_M=N_2*oversampling;
34 | virtual_ang_2=-1 : 2/grid_M : 1-2/grid_M; % quantizing based on virtual angles for MS side
35 | sub_N=N_irs(1);
36 | sub=(0:sub_N-1).';
37 | sub_grid_N=sub_N*oversampling;
38 | sub_virtual_ang_1=-1 : 2/sub_grid_N : 1-2/sub_grid_N;
39 | sub_N_tilde = exp(1i*pi*sub*sub_virtual_ang_1)/sqrt(sub_N); % IRS U
40 | 
41 | sub_M=N_irs(2);
42 | sub=(0:sub_M-1).';
43 | sub_grid_M=sub_M*oversampling;
44 | sub_virtual_ang_2=-1 : 2/sub_grid_M : 1-2/sub_grid_M;
45 | sub_M_tilde = exp(1i*pi*sub*sub_virtual_ang_2)/sqrt(sub_M); % IRS U
46 | 
47 | M_tilde = exp(1i*pi*m*virtual_ang_2)/sqrt(N_2); % MS V
48 | Psi=kron(M_tilde,kron(sub_M_tilde,sub_N_tilde));
49 | 
50 | %% sim Setup
51 | iterMax = 10;
52 | PNR_dBs = -10:5:20;
53 | NMSEvsActiveNum=zeros(length(PNR_dBs),length(activeEleRateList));
54
| %% Start 55 | for kk=1:length(activeEleNumList) 56 | activeEleNum = activeEleNumList(kk); 57 | M = activeEleNum; % number of traning frames ( = number of time-slots ?) 58 | NMSE = zeros(2,length(PNR_dBs)); 59 | for ii = 1:length(PNR_dBs) 60 | snr=PNR_dBs(ii); 61 | sigma2 = 10^(-(PNR_dBs(ii)/10)); % noise variance = epsilon 62 | sigma = sqrt(sigma2); 63 | for iter = 1:iterMax 64 | %% Channel 65 | % H_f = FSF_Channel_Model_uplink(N_ms, N_irs, fc, Lp, sigma_2_alpha, fs, K); 66 | H_f_o = channel_f(N_ms,N_irs,1,FFT_len,BW); 67 | activeEleIndex = randperm(N_IRS); 68 | activeEleIndex = sort(activeEleIndex(1:activeEleNum)); 69 | %% pilot Sig 70 | S_matrix = exp(-1i*2*pi*rand(Ns, activeEleNum))/sqrt(Ns); % Consider M successive traning frames sqrt(Ns) 71 | % S_matrix = ones(Ns, M); % Consider M successive traning frames sqrt(Ns) 72 | %% recv Sig 73 | % Generating combined received signal y_k_com 74 | y_k_com = zeros(activeEleNum*Ns,Nc); 75 | %% sensing Mat 76 | Phi = zeros(activeEleNum*Ns,N_IRS*N_MS); 77 | %% comm Proc 78 | for mm = 1:activeEleNum 79 | % Generating analog precoding/combining matrix, and quantized 80 | %% F(RF and BB) 81 | F_BB_m = eye(N_RF,Ns); 82 | 83 | F_RF_m = exp(1j*2*pi*rand(N_MS,N_RF)); 84 | % F_RF_m = ones(N_MS,N_RF); 85 | F_RF_quan_phase_m = round((angle(F_RF_m)+pi)*(2^N_Bits)/(2*pi)) *2*pi/(2^N_Bits); % - pi 86 | Quantized_F_RF_m = exp(1j*F_RF_quan_phase_m)/sqrt(N_MS); 87 | 88 | F_temp_m = Quantized_F_RF_m*F_BB_m; 89 | norm_factor_m = sqrt(N_RF)/norm(F_temp_m,'fro'); % The normalized factor of power. N_RF 90 | F_m = norm_factor_m*F_temp_m; 91 | 92 | %% W(RF and BB) 93 | W_BB_m = exp(1j*2*pi*rand(N_RF,Ns)); 94 | if mode==0 95 | % 0: phase shifters 96 | W_RF_m = exp(1j*2*pi*rand(N_IRS,N_RF)); 97 | W_RF_quan_phase_m = round((angle(W_RF_m)+pi)*(2^N_Bits)/(2*pi)) *2*pi/(2^N_Bits); % - pi 98 | Quantized_W_RF_m = exp(1j*W_RF_quan_phase_m)/sqrt(N_IRS); 99 | W_m = Quantized_W_RF_m*W_BB_m; 100 | else 101 | % 1: switchers 102 | activeEle=activeEleIndex(mm); 103 | IRSCombine=zeros(N_IRS,N_RF); 104 | IRSCombine(activeEle)=1; 105 | W_m=IRSCombine; 106 | end 107 | 108 | Phi((mm-1)*Ns+1:mm*Ns,:) = kron((F_m*S_matrix(:,mm)).',W_m'); 109 | % signal transmission model 110 | for carrier=1:Nc 111 | y_k_com((mm-1)*Ns+1:mm*Ns,carrier)=W_m'*H_f_o(:,:,carrier)*F_m*S_matrix(:,mm) + awgn_en*sigma*W_m'*(normrnd(0,1,N_IRS,1) + 1i*normrnd(0,1,N_IRS,1))/sqrt(2); 112 | end 113 | 114 | end 115 | %% OMP_Algorithm 116 | epsilon = sigma2; 117 | h_v_hat=zeros(N_IRS*N_MS,Nc); 118 | [ h_v_hat,iter_num ] = OMP_Algorithm_MMV( y_k_com,Phi,Psi,epsilon,eye(M*N_RF),10); 119 | %% Reestablish the high-dimensional channel matrix based on estimated channal support and corresponding channel coefficients 120 | h_k_est_com = Psi*h_v_hat; 121 | % = zeros(N_IRS,N_MS,K); 122 | H_f_est=reshape(h_k_est_com,[N_IRS,N_MS,Nc]); 123 | %% NMSE performance 124 | difference_channel_MSE = zeros(Nc,1); 125 | true_channel_MSE = zeros(Nc,1); 126 | for carrier = 1:Nc 127 | difference_channel_MSE(carrier) = norm(H_f_est(:,:,carrier) - H_f_o(:,:,carrier),'fro')^2; 128 | true_channel_MSE(carrier) = norm(H_f_o(:,:,carrier),'fro')^2; 129 | end 130 | NMSE_temp = sum(difference_channel_MSE)/sum(true_channel_MSE); 131 | % NMSE_temp2=norm(H_f_est-H_f_o,'fro')^2/norm(H_f_o,'fro')^2 132 | NMSE(1,ii) = NMSE(1,ii) + NMSE_temp; 133 | disp(['Active Elements = ' num2str(M) ', SNR = ' num2str(PNR_dBs(ii)) ', iter_max = ' num2str(iterMax) ', iter_now = ' num2str(iter)... 134 | ', OMP_iter_num = ' num2str(iter_num)... 135 | ', NMSE = ' num2str(NMSE_temp) ... 
136 | ' , NMSE_dB = ' num2str(10*log10(NMSE_temp)) ... 137 | 'dB, total_NMSE = ' num2str(10*log10(NMSE(1,ii)/iter)) ... 138 | 'dB']); 139 | end 140 | %% 141 | % NMSE(ii) = NMSE(ii)/iterMax; 142 | % toc 143 | disp(['Finished ',num2str(ii),'/', num2str(length(PNR_dBs)) ' , NMSE = ' num2str(NMSE(1,ii))]); 144 | NMSEvsActiveNum(ii,kk)=NMSE(1,ii); 145 | end 146 | 147 | end 148 | %% 149 | NMSE_dB=10*log10(NMSEvsActiveNum/iterMax) 150 | %% 151 | figure 152 | plot(activeEleRateList,NMSE_dB(1,:),'-.ko','MarkerFaceColor',[1 0 0]); 153 | hold on; 154 | plot(activeEleRateList,NMSE_dB(2,:),'-.ko','MarkerFaceColor',[0 1 0]); 155 | plot(activeEleRateList,NMSE_dB(3,:),'-.ko','MarkerFaceColor',[0 0 1]); 156 | plot(activeEleRateList,NMSE_dB(4,:),'-.ko','MarkerFaceColor',[1 1 0]); 157 | plot(activeEleRateList,NMSE_dB(5,:),'-.ko','MarkerFaceColor',[0 1 1]); 158 | plot(activeEleRateList,NMSE_dB(6,:),'-.ko','MarkerFaceColor',[1 0 1]); 159 | plot(activeEleRateList,NMSE_dB(7,:),'-.ko','MarkerFaceColor',[0 0 0]); 160 | grid on; 161 | legend('-10dB','-5dB','0dB','5dB','10dB','15dB','20dB') 162 | %% 163 | % 576 antennas 164 | % 0dB -15.0217dB 165 | % 5dB -18.3317dB 166 | % 10dB -21.9621dB 167 | % 15dB -24.9924dB 168 | % 20dB -28.4125 169 | 170 | % itermax=100 171 | % NMSEvsActiveNum = 172 | % 137.252630992341 34.6072682796650 17.6232346466307 9.92944180895987 8.87400539546164 7.82531295337135 5.44924678164065 3.14652979807803 173 | % 139.893156420093 17.8317845496042 8.61015826139355 5.32333117893545 4.15587322993563 3.57738284376139 2.55854469880122 1.46833909324709 174 | % 139.956331764781 11.2817176333092 4.39656673537971 2.43857708790001 2.00262226782018 1.80011414222341 1.24402678635564 0.636490244807475 175 | % 138.053182700697 10.0615306897981 2.57087119783495 1.23300176735678 0.943260781566910 0.840019272916517 0.528032746544638 0.316783966210867 176 | % 139.987126173561 8.74972664393701 1.65516742649616 0.601809464535755 0.493383330704330 0.383917837214407 0.262480254921139 0.144129577452127 177 | 178 | 179 | % -1.3335 180 | % -4.5012 181 | % -8.1550 182 | % -11.6633 183 | % -14.8042 184 | % -16.9935 185 | % -18.3701 -------------------------------------------------------------------------------- /TVT_SourceCode/TrainingSetGen.m: -------------------------------------------------------------------------------- 1 | %% 1127 Generation of training data 2 | % snr fix=10 15 20; channel 6 8; active array rate 20%; 3 | % oversampling 2; MMV(32 carriers); 4 | % 5dB 5 | % clear; 6 | %% para 7 | N_ms = [1 1]; 8 | % N_MS = N_ms(1)*N_ms(2); 9 | N_MS = 1; 10 | N_irs = [16 16]; 11 | N_IRS = N_irs(1)*N_irs(2); 12 | % N_IRS = 64; 13 | FFT_len = 256; 14 | Nc = FFT_len; % sub carriers 15 | 16 | BW = 90e6; 17 | fs = BW; 18 | fc = 28e9; % frequency of carrier is 28GHz 19 | lambda = 3e8/fc; 20 | d_ant = lambda/2; % interval of antennas 21 | sigma_2_alpha = 1; % variance of path gain 22 | awgn_en = 1; % 1: add noise; 0: no noise 23 | N_bits = 3; % number of quantized bits 24 | N_Bits = 2^N_bits; 25 | N_RF = 1; 26 | Ns = N_RF; 27 | % activeEleRateList=[0.01 0.02 0.05 0.1 0.2 0.5 1]; 28 | activeEleRateList=[0.25]; 29 | activeEleNumList=round(activeEleRateList*N_IRS); 30 | identity_d = 4; % digital precoding/combining matrix is an identity matrix: 0(no); 1(yes); Make a reasonable assumption that Ns = N_RF; 31 | oversampling = 2; % dict 32 | mode=1; % 0: phase shifters, 1: switchers 33 | 34 | %% Training Set Generation setup 35 | debug = 1; 36 | debugRate = 0.75; 37 | trainingSize = 100;%12000-debug*debugRate*12000; 38 | testSize = 
0;%4000-debug*debugRate*4000; 39 | PNR_dBs = 5; 40 | iterMax = trainingSize+testSize; 41 | NMSEvsActiveNum = zeros(length(PNR_dBs),length(activeEleRateList)); 42 | % randomSelect = 4; % choose 4 per 4*32 channels=>batchsize should be n*4 43 | trainingChannel = zeros([trainingSize 2 N_IRS Nc]); % NCHW 44 | testChannel = zeros([testSize 2 N_IRS Nc]); 45 | trueTrainingChannel = zeros([trainingSize 2 N_IRS Nc]); 46 | trueTestChannel = zeros([testSize 2 N_IRS Nc]); 47 | %% dict over 48 | N_2=N_MS; 49 | m=(0:N_2-1).'; 50 | grid_M=N_2*oversampling; 51 | virtual_ang_2=-1 : 2/grid_M : 1-2/grid_M; % quantizing based on virtual angles for MS side 52 | % virtual_ang_1=-pi/2:pi/grid_N:pi/2-pi/grid_N; % quantizing based on virtual angles for MS side 53 | % virtual_ang_2=-pi/2:pi/grid_M:pi/2-pi/grid_M; % quantizing based on virtual angles for MS side 54 | 55 | sub_N=N_irs(1); 56 | sub=(0:sub_N-1).'; 57 | sub_grid_N=sub_N*oversampling; 58 | sub_virtual_ang_1=-1 : 2/sub_grid_N : 1-2/sub_grid_N; 59 | sub_N_tilde = exp(1i*pi*sub*sub_virtual_ang_1)/sqrt(sub_N); % IRS U 60 | 61 | sub_M=N_irs(2); 62 | sub=(0:sub_M-1).'; 63 | sub_grid_M=sub_M*oversampling; 64 | sub_virtual_ang_2=-1 : 2/sub_grid_M : 1-2/sub_grid_M; 65 | sub_M_tilde = exp(1i*pi*sub*sub_virtual_ang_2)/sqrt(sub_M); % IRS U 66 | 67 | M_tilde = exp(1i*pi*m*virtual_ang_2)/sqrt(N_2); % MS V 68 | Psi=kron(M_tilde,kron(sub_M_tilde,sub_N_tilde)); 69 | Psi=kron(sub_M_tilde,sub_N_tilde); 70 | 71 | 72 | %% sim start 73 | for kk=1:length(activeEleNumList) 74 | activeEleNum = activeEleNumList(kk); 75 | M = activeEleNum; % number of traning frames ( = number of time-slots ?) 76 | NMSE = zeros(2,length(PNR_dBs)); 77 | for ii = 1:length(PNR_dBs) 78 | snr=PNR_dBs(ii); 79 | sigma2 = 10^(-(PNR_dBs(ii)/10)); % noise variance = epsilon 80 | sigma = sqrt(sigma2); 81 | for iter = 1:iterMax 82 | %% Channel 83 | % H_f = FSF_Channel_Model_uplink(N_ms, N_irs, fc, Lp, sigma_2_alpha, fs, K); 84 | H_f_o = channel_f(N_ms,N_irs,1,FFT_len,BW); 85 | activeEleIndex = randperm(N_IRS); 86 | activeEleIndex = sort(activeEleIndex(1:activeEleNum)); 87 | %% pilot Sig 88 | S_matrix = exp(-1i*2*pi*rand(Ns, activeEleNum)); % Consider M successive traning frames sqrt(Ns) 89 | %% recv Sig 90 | % Generating combined received signal y_k_com 91 | y_k_com = zeros(activeEleNum*Ns,Nc); 92 | %% sensing Mat 93 | Phi = zeros(activeEleNum*Ns,N_IRS*N_MS); 94 | %% comm Proc 95 | for mm = 1:activeEleNum 96 | % Generating analog precoding/combining matrix, and quantized 97 | %% F(RF and BB) 98 | F_BB_m = eye(N_RF,Ns); 99 | 100 | F_RF_m = exp(1j*2*pi*rand(N_MS,N_RF)); 101 | % F_RF_m = ones(N_MS,N_RF); 102 | F_RF_quan_phase_m = round((angle(F_RF_m)+pi)*N_Bits/(2*pi)) *2*pi/N_Bits; % - pi 103 | Quantized_F_RF_m = exp(1j*F_RF_quan_phase_m)/sqrt(N_MS); 104 | 105 | F_temp_m = Quantized_F_RF_m*F_BB_m; 106 | norm_factor_m = sqrt(N_RF)/norm(F_temp_m,'fro'); % The normalized factor of power. 
N_RF 107 | F_m = norm_factor_m*F_temp_m; 108 | 109 | %% W(RF and BB) 110 | W_BB_m = exp(1j*2*pi*rand(N_RF,Ns)); 111 | if mode==0 112 | % 0: phase shifters 113 | W_RF_m = exp(1j*2*pi*rand(N_IRS,N_RF)); 114 | W_RF_quan_phase_m = round((angle(W_RF_m)+pi)*N_Bits/(2*pi)) *2*pi/N_Bits; % - pi 115 | Quantized_W_RF_m = exp(1j*W_RF_quan_phase_m)/sqrt(N_IRS); 116 | W_m = Quantized_W_RF_m*W_BB_m; 117 | else 118 | % 1: switchers 119 | activeEle=activeEleIndex(mm); 120 | IRSCombine=zeros(N_IRS,N_RF); 121 | IRSCombine(activeEle)=1; 122 | W_m=IRSCombine; 123 | end 124 | 125 | Phi((mm-1)*Ns+1:mm*Ns,:) = kron((F_m*S_matrix(:,mm)).',W_m'); 126 | % signal transmission model 127 | for carrier=1:Nc 128 | y_k_com((mm-1)*Ns+1:mm*Ns,carrier)=W_m'*H_f_o(:,:,carrier)*F_m*S_matrix(:,mm) + awgn_en*sigma*W_m'*(normrnd(0,1,N_IRS,1) + 1i*normrnd(0,1,N_IRS,1))/sqrt(2); 129 | end 130 | 131 | end 132 | 133 | %% OMP_Algorithm 134 | snreff=snr; 135 | epsilon = 10^(-(snreff/10)); 136 | h_v_hat=zeros(N_IRS*N_MS,Nc); 137 | [ h_v_hat,iter_num ] = OMP_Algorithm_MMV( y_k_com,Phi,Psi,epsilon,eye(M),100); 138 | % return 139 | %% Reestablish the high-dimensional channel matrix based on estimated channal support and corresponding channel coefficients 140 | h_k_est_com = Psi*h_v_hat; 141 | H_f_est=reshape(h_k_est_com,[N_IRS,N_MS,Nc]); 142 | 143 | %% NMSE performance 144 | difference_channel_MSE = zeros(Nc,1); 145 | true_channel_MSE = zeros(Nc,1); 146 | for carrier = 1:Nc 147 | difference_channel_MSE(carrier) = norm(H_f_est(:,:,carrier) - H_f_o(:,:,carrier),'fro')^2; 148 | true_channel_MSE(carrier) = norm(H_f_o(:,:,carrier),'fro')^2; 149 | end 150 | NMSE_temp = sum(difference_channel_MSE)/sum(true_channel_MSE); 151 | NMSE(1,ii) = NMSE(1,ii) + NMSE_temp; 152 | disp(['Active Elements = ' num2str(M) ', SNR = ' num2str(PNR_dBs(ii)) ', iter_max = ' num2str(iterMax) ', iter_now = ' num2str(iter)... 153 | ', OMP_iter_num = ' num2str(iter_num)... 154 | ', NMSE = ' num2str(NMSE_temp) ... 155 | ' , NMSE_dB = ' num2str(10*log10(NMSE_temp)) ... 156 | 'dB, total_NMSE = ' num2str(10*log10(NMSE(1,ii)/iter)) ... 
157 |                 'dB']);
158 |             if iter<=trainingSize
159 |                 temp=reshape(fft(fft(H_f_est,[],1),[],3),[256 256]);
160 |                 trainingChannel(iter,1,:,:,:)=real(temp);
161 |                 trainingChannel(iter,2,:,:,:)=imag(temp);
162 |                 temp=reshape(fft(fft(H_f_o,[],1),[],3),[256 256]);
163 |                 trueTrainingChannel(iter,1,:,:,:)=real(temp);
164 |                 trueTrainingChannel(iter,2,:,:,:)=imag(temp);
165 |             else
166 |                 temp=reshape(fft(fft(H_f_est,[],1),[],3),[256 256]);
167 |                 testChannel(iter,1,:,:,:)=real(temp);
168 |                 testChannel(iter,2,:,:,:)=imag(temp);
169 |                 temp=reshape(fft(fft(H_f_o,[],1),[],3),[256 256]);
170 |                 trueTestChannel(iter,1,:,:,:)=real(temp);
171 |                 trueTestChannel(iter,2,:,:,:)=imag(temp);
172 |             end
173 |         end
174 |         %%
175 |         % NMSE(ii) = NMSE(ii)/iterMax;
176 |         % toc
177 |         disp(['Finished ',num2str(ii),'/', num2str(length(PNR_dBs)) ' , NMSE = ' num2str(NMSE(1,ii)/iterMax)]);
178 |         NMSEvsActiveNum(ii,kk)=NMSE(1,ii);
179 |     end
180 | 
181 | end
182 | %%
183 | save trueTrainingChannel trueTrainingChannel
184 | save trainingChannel trainingChannel
185 | 
186 | % save channelMat channel3
187 | % save trainingChannel15 trainingChannel
188 | % save trueTrainingChannel15 trueTrainingChannel
189 | % save testChannel15 testChannel
190 | % save trueTestChannel15 trueTestChannel
191 | % 4000 iters, = -17.4566dB of 15dB snr
192 | 
193 | 
--------------------------------------------------------------------------------
/TVT_SourceCode/channel_f.m:
--------------------------------------------------------------------------------
1 | function [H, H_At, H_Ar, H_D, An_t, An_r] = channel_f(Nt, Nr, K, N_carrier,BW, ch)
2 | % generate channel model (ULA, UPA) in narrow band: H = H_Ar * H_D * H_At
3 | % channel's basic information: ch{.fc, .Nc, .Np, .max_BS, .max_MS, .sigma, .lambda, .d}
4 | % BS&MS_Path{.elevation, .azimuth, .gain}
5 | % H_At/H_Ar: AoDs/AoAs steering vector
6 | % H_D: path gain
7 | % An_t/An_r: AoDs/AoAs of paths
8 | % N: the path of strongest gain
9 | % H[a b c d]: a = receive-antenna dimension, b = transmit-antenna dimension, c = number of subcarriers, d = number of users
10 | %% Set parameters
11 | if nargin <= 6
12 |     ch.fc = 28e9;            % carrier frequency
13 |     ch.Nc = 6;               % number of clusters (8)
14 |     ch.Np = 1;               % number of paths per cluster (10)
15 |     ch.max_BS = pi;          % BS angle range
16 |     % ch.max_BS1 = pi * 2 / 3;   % BS angle range
17 |     ch.max_BS1 = ch.max_BS;  % BS angle range
18 |     ch.max_MS = pi;          % user angle range
19 |     ch.sigma = 7.5;          % angle spread in degrees (7.5)
20 |     ch.lambda = 3e8 / ch.fc; % carrier wavelength
21 |     ch.d = ch.lambda / 2;    % antenna spacing
22 | 
23 |     ch.fs=BW;
24 |     ch.tau_max = (N_carrier / 2) / ch.fs; % maximum path delay
25 | end
26 | 
27 | angle = ch.sigma * pi / 180; % angle spread converted from degrees to radians
28 | k = 2 * pi * ch.d / ch.lambda;
29 | total_Np = ch.Nc * ch.Np;
30 | %%
31 | switch (length(Nr) + length(Nt))
32 |     case 2 % create mmWave channel (ULA)
33 |         H = zeros(Nr, Nt, N_carrier, K);
34 |         H_At = zeros(Nt, total_Np, K);
35 |         H_Ar = zeros(Nr, total_Np, K);
36 |         H_D = zeros(total_Np, total_Np, K);
37 |         for i = 1 : K
38 |             N_t = (0 : (Nt - 1))'; % transmitter
39 |             phi = (ch.max_BS - angle) * rand(ch.Nc, 1) * ones(1, ch.Np) + ...
40 |                 angle * (rand(ch.Nc, ch.Np) - 0.5 * ones(ch.Nc, ch.Np)) - ch.max_BS / 2 + angle / 2;
41 |             % phi = sort(phi, 2); % comparison experiment 18/08/18
42 | 
43 |             phi = k * sin(reshape(phi',[1, total_Np]));
44 |             a_BS = N_t * phi;
45 |             At = exp(1i * a_BS) / sqrt(Nt);
46 | 
47 |             N_r = (0 : (Nr - 1))'; % receiver
48 |             theta = (ch.max_MS - angle) * rand(ch.Nc, 1) * ones(1, ch.Np) + ...
49 | angle * (rand(ch.Nc, ch.Np) - 0.5 * ones(ch.Nc, ch.Np)) - ch.max_MS / 2 + angle / 2; 50 | % theta = sort(theta, 2); % �����Ķ���ʵ�� 18/08/18 51 | 52 | theta = k * (reshape(theta',[1, total_Np])); 53 | a_MS = N_r * theta; 54 | Ar = exp(1i * a_MS) / sqrt(Nr); 55 | 56 | 57 | %----------------------------�����Ķ���ʵ�� 18/08/18 ------------------------------------ 58 | % D_am = (randn(1, ch.Nc) + 1i * randn(1, ch.Nc)) / sqrt(2); 59 | % D_am = sort(D_am, 'descend'); 60 | % D_am = reshape(ones(ch.Np, 1) * D_am, 1, total_Np); 61 | % 62 | % tau = ch.tau_max .* rand(1, ch.Nc); 63 | % tau = sort(tau); 64 | % tau = reshape(ones(ch.Np, 1) * tau, 1, total_Np); 65 | %-------------------------------------end----------------------------------------------- 66 | D_am = (randn(1, total_Np) + 1i * randn(1, total_Np)) / sqrt(2); % Path gain 67 | tau = ch.tau_max .* rand(1, total_Np); 68 | 69 | % D_am = sort(D_am); 70 | % tau = sort(tau, 'descend'); 71 | for ii = 1 : ch.Nc 72 | D_am(((ii - 1) * ch.Np + 1) : (ii * ch.Np)) = sort(D_am(((ii - 1) * ch.Np + 1) : (ii * ch.Np))); 73 | tau(((ii - 1) * ch.Np + 1) : (ii * ch.Np)) = sort(tau(((ii - 1) * ch.Np + 1) : (ii * ch.Np)), 'descend'); 74 | end 75 | %---------------------------------------------------------------------------------------- 76 | 77 | miu_tau = -2 * pi * ch.fs * tau / N_carrier; 78 | 79 | for ii = 1 : N_carrier 80 | D = diag(D_am .* exp(1i * (ii - 1) * miu_tau)) * sqrt(Nt * Nr / total_Np); 81 | H(:, :, ii, i) = Ar * D * At'; 82 | end 83 | H_At(:, :, i) = At; 84 | H_Ar(:, :, i) = Ar; 85 | end 86 | case 3 % BS(UPA) User(ULA) 87 | total_Nt = Nt(1) * Nt(2); total_Nr = Nr; 88 | H = zeros(total_Nr, total_Nt, N_carrier, K); 89 | H_At = zeros(total_Nt, total_Np, N_carrier, K); 90 | H_Ar = zeros(total_Nr, total_Np, N_carrier, K); 91 | H_D = zeros(total_Np, ch.Nc * ch.Np, N_carrier, K); 92 | An_t = zeros(1, ch.Nc * ch.Np, K); 93 | An_r = zeros(1, ch.Nc * ch.Np, K); 94 | for i = 1 : K 95 | n_t = (0 : Nt(1) - 1)'; % transmitter 96 | m_t = (0 : Nt(2) - 1)'; 97 | phi1 = (ch.max_BS - angle) * rand(ch.Nc, 1) * ones(1, ch.Np) +... 98 | angle * (rand(ch.Nc, ch.Np) - 0.5 * ones(ch.Nc, ch.Np)) - ch.max_BS / 2 + angle / 2; % azimuth 99 | 100 | phi1 = reshape(phi1', [total_Np, 1]); 101 | theta1 = (ch.max_BS1 - angle) * rand(ch.Nc, 1) * ones(1, ch.Np) +... 102 | angle * (rand(ch.Nc, ch.Np) - 0.5 * ones(ch.Nc, ch.Np)) - ch.max_BS1 / 2 + angle / 2; % elevation 103 | 104 | theta1 = reshape(theta1', [total_Np, 1]); 105 | An_t(:, :, i) = theta1.'; 106 | An_r(:, :, i) = phi1.'; %->��ȡ�Ƕ� 107 | A_t = zeros(total_Nt, total_Np); 108 | for path = 1 : total_Np 109 | e_a1 = exp(-1i * k * sin(phi1(path, 1)) * cos(theta1(path, 1)) * n_t); % channel model 2 110 | e_e1 = exp(-1i * k * sin(theta1(path, 1)) * m_t); 111 | A_t(:, path) = kron(e_a1, e_e1) / sqrt(total_Nt); 112 | end 113 | 114 | % N_r = (0 : (total_Nr - 1))'; % receiver 115 | % theta = (ch.max_MS - angle) * rand(ch.Nc, 1) * ones(1,ch. Np) + ... 116 | % angle * (rand(ch.Nc, ch.Np) - 0.5 * ones(ch.Nc, ch.Np)) - ch.max_MS / 2 + angle / 2; 117 | % 118 | % theta = k * (reshape(theta',[1, total_Np])); 119 | % a_MS = N_r * theta; 120 | % A_r = exp(1i * a_MS) / sqrt(total_Nr); 121 | 122 | A_r = zeros(total_Nr, total_Np); 123 | for ii = 1 : total_Nr 124 | theta = (ch.max_MS - angle) * rand(ch.Nc, 1) * ones(1,ch. Np) + ... 
125 | angle * (rand(ch.Nc, ch.Np) - 0.5 * ones(ch.Nc, ch.Np)) - ch.max_MS / 2 + angle / 2; 126 | theta = k * (reshape(theta',[1, total_Np])); 127 | A_r(ii, :) = exp(1i * theta) / sqrt(total_Nr); 128 | end 129 | 130 | 131 | D_am = (randn(1, total_Np) + 1i * randn(1, total_Np)) / sqrt(2); % Path gain 132 | tau = ch.tau_max .* rand(1, total_Np); 133 | 134 | % D_am = sort(D_am); 135 | % tau = sort(tau, 'descend'); 136 | for ii = 1 : ch.Nc 137 | D_am(((ii - 1) * ch.Np + 1) : (ii * ch.Np)) = sort(D_am(((ii - 1) * ch.Np + 1) : (ii * ch.Np))); 138 | tau(((ii - 1) * ch.Np + 1) : (ii * ch.Np)) = sort(tau(((ii - 1) * ch.Np + 1) : (ii * ch.Np)), 'descend'); 139 | end 140 | 141 | miu_tau = -2 * pi * ch.fs * tau / N_carrier; 142 | 143 | for ii = 1 : N_carrier 144 | D = diag(D_am .* exp(1i * (ii - 1) * miu_tau)) * sqrt(total_Nt * total_Nr / total_Np); 145 | H(:, :, ii, i) = A_r * D * A_t'; 146 | H_D(:, :, ii, i) = D; 147 | end 148 | 149 | H_At(:, :, i) = A_t; 150 | H_Ar(:, :, i) = A_r; 151 | end 152 | case 4 % creat mmWave channel(UPA) 153 | total_Nt = Nt(1) * Nt(2); 154 | total_Nr = Nr(1) * Nr(2); 155 | H = zeros(total_Nr, total_Nt, N_carrier, K); 156 | H_At = zeros(total_Nt, total_Np, N_carrier, K); 157 | H_Ar = zeros(total_Nr, total_Np, N_carrier, K); 158 | H_D = zeros(total_Np, ch.Nc * ch.Np, N_carrier, K); 159 | An_t = zeros(1, ch.Nc * ch.Np, K); 160 | An_r = zeros(1, ch.Nc * ch.Np, K); 161 | for i = 1 : K 162 | n_t = (0 : Nt(1) - 1)'; % transmitter 163 | m_t = (0 : Nt(2) - 1)'; 164 | phi1 = (ch.max_BS - angle) * rand(ch.Nc, 1) * ones(1, ch.Np) +... 165 | angle * (rand(ch.Nc, ch.Np) - 0.5 * ones(ch.Nc, ch.Np)) - ch.max_BS / 2 + angle / 2; % azimuth 166 | 167 | phi1 = reshape(phi1', [total_Np, 1]); 168 | theta1 = (ch.max_BS1 - angle) * rand(ch.Nc, 1) * ones(1, ch.Np) +... 169 | angle * (rand(ch.Nc, ch.Np) - 0.5 * ones(ch.Nc, ch.Np)) - ch.max_BS1 / 2 + angle / 2; % elevation 170 | 171 | theta1 = reshape(theta1', [total_Np, 1]); 172 | An_t(:, :, i) = theta1.'; 173 | An_r(:, :, i) = phi1.'; %->��ȡ�Ƕ� 174 | A_t = zeros(total_Nt, total_Np); 175 | for path = 1 : total_Np 176 | e_a1 = exp(-1i * k * sin(phi1(path, 1)) * cos(theta1(path, 1)) * n_t); % channel model 2 177 | e_e1 = exp(-1i * k * sin(theta1(path, 1)) * m_t); 178 | A_t(:, path) = kron(e_a1, e_e1) / sqrt(total_Nt); 179 | end 180 | 181 | n_r = (0:(Nr(1) - 1))'; % receiver 182 | m_r = (0:(Nr(2) - 1))'; 183 | phi2 = (ch.max_MS - angle) * rand(ch.Nc, 1) * ones(1, ch.Np) +... 184 | angle * (rand(ch.Nc, ch.Np) - 0.5 * ones(ch.Nc, ch.Np)) - ch.max_MS / 2 + angle / 2; % azimuth 185 | 186 | phi2 = reshape(phi2', [total_Np, 1]); 187 | theta2 = (ch.max_MS - angle) * rand(ch.Nc, 1) * ones(1, ch.Np) +... 
188 | angle * (rand(ch.Nc, ch.Np) - 0.5 * ones(ch.Nc, ch.Np)) - ch.max_MS / 2 + angle / 2; % elevation 189 | 190 | theta2 = reshape(theta2', [total_Np, 1]); 191 | A_r = zeros(total_Nr, total_Np); 192 | for path = 1 : total_Np 193 | e_a2 = exp(-1i * k * sin(phi2(path, 1)) * cos(theta2(path, 1)) * n_r); % channel model 2 194 | e_e2 = exp(-1i * k * sin(theta2(path, 1)) * m_r); 195 | A_r(:,path) = kron(e_a2,e_e2) / sqrt(total_Nr); 196 | end 197 | 198 | D_am = (randn(1, total_Np) + 1i * randn(1, total_Np)) / sqrt(2); % Path gain 199 | tau = ch.tau_max .* rand(1, total_Np); 200 | 201 | % D_am = sort(D_am); 202 | % tau = sort(tau, 'descend'); 203 | for ii = 1 : ch.Nc 204 | D_am(((ii - 1) * ch.Np + 1) : (ii * ch.Np)) = sort(D_am(((ii - 1) * ch.Np + 1) : (ii * ch.Np))); 205 | tau(((ii - 1) * ch.Np + 1) : (ii * ch.Np)) = sort(tau(((ii - 1) * ch.Np + 1) : (ii * ch.Np)), 'descend'); 206 | end 207 | 208 | miu_tau = -2 * pi * ch.fs * tau / N_carrier; 209 | 210 | for ii = 1 : N_carrier 211 | D = diag(D_am .* exp(1i * (ii - 1) * miu_tau)) * sqrt(total_Nt * total_Nr / total_Np); 212 | H(:, :, ii, i) = A_r * D * A_t'; 213 | H_D(:, :, ii, i) = D; 214 | end 215 | 216 | H_At(:, :, i) = A_t; 217 | H_Ar(:, :, i) = A_r; 218 | end 219 | otherwise, error('error in data structure: Nr or Nt'); 220 | end 221 | --------------------------------------------------------------------------------
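For completeness, here is a minimal inference sketch showing how a checkpoint produced by `modelTrain.py` could be applied to a single estimated channel. The checkpoint path and the random stand-in input are placeholders (in practice the input would be an OMP estimate shaped like the training data, i.e. real/imaginary parts stacked on dimension 1).

```python
# Minimal inference sketch: denoise one OMP channel estimate with a trained model.
# Assumes models.py is importable and that modelTrain.py has already saved a
# checkpoint; the path below is a placeholder, not a file shipped with the repo.
import torch
from models import ComplexDnCNN

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = ComplexDnCNN().to(device)
state = torch.load('./models/cDnCNN_snr5/model_100.pth', map_location=device)
model.load_state_dict(state)
model.eval()

# modelTrain.py feeds tensors shaped [batch, 2, 1, N_IRS, N_C] (real/imag split on dim 1).
h_hat = torch.randn(1, 2, 1, 256, 256, device=device)  # stand-in for an OMP estimate
with torch.no_grad():
    h_denoised = model(h_hat)  # the network predicts the noise and returns input minus noise
print(h_denoised.shape)
```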