├── GEMML_result.mat
├── GMML.pdf
├── GMML_Architecture.png
├── LICENSE
├── README.md
├── TWC_Paper.pdf
├── dataset.mat
├── main.py
├── net.py
├── requirements.txt
└── util.py
/GEMML_result.mat:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/FenghaoZhu/GMML/52a49d90f26a24b5cffc228b396a8a4433a1c0a0/GEMML_result.mat
--------------------------------------------------------------------------------
/GMML.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/FenghaoZhu/GMML/52a49d90f26a24b5cffc228b396a8a4433a1c0a0/GMML.pdf
--------------------------------------------------------------------------------
/GMML_Architecture.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/FenghaoZhu/GMML/52a49d90f26a24b5cffc228b396a8a4433a1c0a0/GMML_Architecture.png
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) 2023 FenghaoZhu
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # GMML
2 | This repository is the Python implementation of the paper _"[Robust Beamforming for RIS-aided Communications: Gradient-based Manifold Meta Learning](https://ieeexplore.ieee.org/document/10623434)"_, which has been accepted by _IEEE Transactions on Wireless Communications_ in 2024.
3 |
4 | A simplified version, titled _"[Energy-efficient Beamforming for RIS-aided Communications: Gradient Based Meta Learning](https://ieeexplore.ieee.org/document/10622978)"_, with the manifold learning technique removed, has been accepted by the _2024 IEEE International Conference on Communications (ICC)_.
5 |
6 | ## Blog
7 | English version: [Click here](https://zhuanlan.zhihu.com/p/695011497).
8 |
9 | Chinese version: [Click here](https://zhuanlan.zhihu.com/p/686734331).
10 |
11 | ## Files in this repo
12 | `main.py`: The main script. Run it directly to reproduce the results (see the snippet after this list for inspecting the output).
13 |
14 | `util.py`: Utility functions, including the initialization functions and the computation of the SINR and the weighted sum rate. It also defines the system parameters.
15 |
16 | `net.py`: Defines the meta learning networks and their parameters.
17 |
18 | `TWC_Paper.pdf`: The PDF of the paper.
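
After installing the dependencies with `pip install -r requirements.txt`, running `python main.py` optimizes every channel sample in `dataset.mat` and saves the per-sample WSR trajectories to `GEMML_result.mat`. A minimal sketch for inspecting that file (the key `WSR_matrix` and its `nr_of_training x External_iteration` shape follow from `main.py`; 100 x 500 with the default settings):

```python
import numpy as np
import scipy.io as sio

wsr = sio.loadmat('GEMML_result.mat')['WSR_matrix']  # shape: (100, 500) with the default settings
best_wsr_per_sample = wsr.max(axis=1)  # best WSR reached by each channel sample
print('average best WSR over samples:', best_wsr_per_sample.mean())
```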
19 |
20 | ## Reference
21 | Should you find this work beneficial, **kindly grant it a star**!
22 |
23 | To follow our research, **please consider citing**:
24 |
25 | F. Zhu et al., "Robust Beamforming for RIS-Aided Communications: Gradient-Based Manifold Meta Learning," in _IEEE Transactions on Wireless Communications_, vol. 23, no. 11, pp. 15945-15956, Nov. 2024.
26 |
27 | X. Wang, F. Zhu, Q. Zhou, Q. Yu, C. Huang, A. Alhammadi, Z. Zhang, C. Yuen, and M. Debbah, "Energy-efficient Beamforming for RISs-aided Communications: Gradient Based Meta Learning," in _Proc. of the 2024 IEEE International Conference on Communications (ICC)_, Jun. 9, 2024, pp. 3464-3469.
28 |
29 |
30 | ```bibtex
31 |
32 | @ARTICLE{Zhu2024GMML,
33 | author={Zhu, Fenghao and Wang, Xinquan and Huang, Chongwen and Yang, Zhaohui and Chen, Xiaoming and Alhammadi, Ahmed and Zhang, Zhaoyang and Yuen, Chau and Debbah, Mérouane},
34 | journal={IEEE Transactions on Wireless Communications},
35 | title={Robust Beamforming for RIS-aided Communications: Gradient-based Manifold Meta Learning},
36 | year={2024},
37 | volume={23},
38 | number={11},
39 | pages={15945-15956},
40 | keywords={Reconfigurable intelligent surfaces;meta learning;manifold learning;gradient;beamforming},
41 | doi={10.1109/TWC.2024.3435023}}
42 |
43 | @inproceedings{Wang2024EnergyEfficient,
44 | author = {X. Wang and F. Zhu and Q. Zhou and Q. Yu and C. Huang and A. Alhammadi and Z. Zhang and C. Yuen and M. Debbah},
45 | title = {{Energy-efficient Beamforming for RISs-aided Communications: Gradient Based Meta Learning}},
46 | booktitle = {Proc. of the 2024 IEEE International Conference on Communications (ICC)},
47 | year = {2024},
48 | date = {Jun. 9},
49 | pages = {3464-3469}
50 | }
51 |
52 | ```
53 | ## More than GMML...
54 | We are excited to announce a novel method that utilizes linear approximations of **ODE-based neural networks** to optimize the sum rate of beamforming in mmWave MIMO systems.
55 |
56 | Compared to the baseline, it uses only **1.6\% of the time** to optimize and achieves **significantly stronger robustness**!
57 |
58 | See [GLNN](https://github.com/tp1000d/GLNN) for more information!
59 |
60 | ## Star History
61 |
62 | (star history chart omitted)
--------------------------------------------------------------------------------
/TWC_Paper.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/FenghaoZhu/GMML/52a49d90f26a24b5cffc228b396a8a4433a1c0a0/TWC_Paper.pdf
--------------------------------------------------------------------------------
/dataset.mat:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/FenghaoZhu/GMML/52a49d90f26a24b5cffc228b396a8a4433a1c0a0/dataset.mat
--------------------------------------------------------------------------------
/main.py:
--------------------------------------------------------------------------------
1 | """
2 | GEMML code
3 | ------------------------------
4 | Implementation of the GEMML algorithm, which is proposed in the paper:
5 | Robust Beamforming for RIS-aided Communications: Gradient Enhanced Manifold Meta Learning
6 |
7 | References and Relevant Links
8 | ------------------------------
9 | GitHub Repository:
10 | https://github.com/FenghaoZhu/GEMML
11 |
12 | Related arXiv Paper:
13 | https://arxiv.org/abs/2402.10626
14 |
15 | File introduction
16 | ------------------------------
17 | This is the main script, which can be run directly.
18 |
19 | @author: F. Zhu and X. Wang
20 | """
21 | #
22 | import random
23 | import scipy.io as sio
24 | import torch
25 |
26 | from net import *
27 | from tqdm import tqdm
28 | import math
29 |
30 | #
31 |
32 |
33 | #
34 | seed = 42 # fix the random seed
35 | torch.manual_seed(seed) # cpu random seed
36 | torch.cuda.manual_seed(seed) # gpu random seed
37 | torch.cuda.manual_seed_all(seed) # multi-gpu random seed
38 | np.random.seed(seed) # numpy random seed
39 | random.seed(seed) # python random seed
40 | torch.backends.cudnn.benchmark = False
41 | torch.backends.cudnn.deterministic = True
42 | #
43 |
44 | #
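# expected contents of dataset.mat (shapes inferred from how the variables are used below):
#   HH:    nr_of_users x nr_of_RIS_elements x nr_of_training  (channel H, from the RIS to the users)
#   GG:    nr_of_RIS_elements x nr_of_BS_antennas x nr_of_training  (channel G, from the BS to the RIS)
#   omega: the per-user weights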
45 | dataset = sio.loadmat('dataset.mat')  # load dataset.mat once
46 | H_t, G_t = dataset['HH'], dataset['GG']  # load the channels H and G, numpy format
47 | user_weights = dataset['omega'].squeeze()  # load the user weights, numpy format
48 | regulated_user_weights = user_weights / np.sum(user_weights) # normalize the user weights
49 | H_t = torch.tensor(H_t) # transforms from numpy to torch format
50 | G_t = torch.tensor(G_t) # transforms from numpy to torch format
51 | #
52 |
53 | #
54 | WSR_list_per_sample = torch.zeros(nr_of_training, External_iteration) # record the WSR of each sample
55 | # Iterate and optimize each sample
56 | for item_index in range(nr_of_training):
57 | # refresh the nn parameters at the beginning of each sample to guarantee the independence
58 | # note that GEMML is pretraining free!
59 |
60 | # initialize the meta learning network for the precoding matrix
61 | optimizer_w = meta_optimizer_w(input_size_w, hidden_size_w, output_size_w)
62 | # initialize the optimizer for the precoding matrix
63 | adam_w = torch.optim.Adam(optimizer_w.parameters(), lr=optimizer_lr_w)
64 |
65 | # initialize the meta learning network for the phase shift matrix
66 | optimizer_theta = meta_optimizer_theta(input_size_theta, hidden_size_theta, output_size_theta)
67 | # initialize the optimizer for the phase shift matrix
68 | adam_theta = torch.optim.Adam(optimizer_theta.parameters(), lr=optimizer_lr_theta)
69 |
70 | maxi = 0 # record the maximum WSR of each sample
71 | # load the channel sample
72 | G = G_t[:, :, item_index].to(torch.complex64) # dimension: nr_of_RIS_elements * nr_of_BS_antennas
73 | H = H_t[:, :, item_index].to(torch.complex64) # dimension: nr_of_users * nr_of_RIS_elements
74 |
75 | # initialize the precoding matrix and the phase shift matrix
76 | theta = torch.randn(nr_of_RIS_elements).to(torch.float32) # initialize the phase shift matrix
77 | theta_init = theta
78 | cascaded_channel = H.conj() @ torch.diag(torch.exp(theta * 1j)) @ G # cascaded channel
79 |
80 | # initialize the precoding matrix and the compressed precoding matrix
81 | X, V = init_X(nr_of_BS_antennas, nr_of_users, cascaded_channel, total_power)
82 | X_init = X
83 | transmitter_precoder_init = V
84 | transmitter_precoder = transmitter_precoder_init
85 |
86 | LossAccumulated_w = 0 # record the accumulated loss in the meta learning network for precoding matrix
87 | LossAccumulated_theta = 0 # record the accumulated loss in the meta learning network for phase shift matrix
88 | for epoch in range(External_iteration):
89 | # update the precoding matrix and the phase shift matrix in outer loop
90 | # one outer loop includes Internal_iteration inner loops
91 | # when updating the phase shift matrix, the compressed precoding matrix is inherited from the last outer loop
92 | loss_theta, sum_loss_theta, theta = meta_learner_theta(optimizer_theta, Internal_iteration,
93 | regulated_user_weights, G, H,
94 | X.clone().detach(), # clone the precoding matrix
95 | theta_init, # update the phase shift matrix from scratch
96 | noise_power)
97 | # when updating the compressed precoding matrix, the phase shift matrix is inherited from the last outer loop
98 | loss_w, sum_loss_w, X = meta_learner_w(optimizer_w, Internal_iteration,
99 | regulated_user_weights, G, H,
100 | X_init, # update the precoding matrix from scratch
101 | theta.clone().detach(), # clone the phase shift matrix
102 | noise_power)
103 |
104 | # handle the normalization of the compressed precoding matrix
105 | cascaded_channel = H.conj() @ torch.diag(torch.exp(theta * 1j)) @ G # cascaded channel
106 | transmitter_precoder = cascaded_channel.conj().T @ X # compute the precoding matrix before normalization
107 | normV = torch.norm(transmitter_precoder) # compute the norm of the precoding matrix before normalization
108 | WW = math.sqrt(total_power) / normV # normalization coefficient
109 | X = X * WW # normalize the compressed precoding matrix
110 | transmitter_precoder = transmitter_precoder * WW # normalize the precoding matrix
111 |
112 | # compute the loss of each sample
113 | loss_total = -compute_weighted_sum_rate(regulated_user_weights, G, H, transmitter_precoder, theta, noise_power)
114 | LossAccumulated_w = LossAccumulated_w + loss_total # accumulate the precoding matrix network loss
115 | LossAccumulated_theta = LossAccumulated_theta + loss_total # accumulate the shift matrix network loss
116 | MSR = compute_weighted_sum_rate(user_weights, G, H, transmitter_precoder, theta.detach(), noise_power)
117 | WSR_list_per_sample[item_index, epoch] = MSR # record the WSR of each sample
118 | if MSR > maxi: # update maxi only when the WSR is larger than the current maximum WSR
119 | maxi = MSR.item() # record the maximum WSR of each sample
120 | print('max', maxi, 'epoch=', epoch, 'item', item_index) # print the maximum WSR of each sample
121 | if (epoch + 1) % Update_steps == 0: # update the meta learning network every Update_steps outer loops
122 | adam_w.zero_grad()
123 | adam_theta.zero_grad()
124 | Average_loss_w = LossAccumulated_w / Update_steps
125 | Average_loss_theta = LossAccumulated_theta / Update_steps
126 | Average_loss_w.backward(retain_graph=True)
127 | Average_loss_theta.backward(retain_graph=True)
128 | adam_w.step()
129 |         if (epoch + 1) % 5 == 0:  # update the theta network less frequently than the w network
130 | adam_theta.step()
131 | MSR = compute_weighted_sum_rate(regulated_user_weights, G, H, transmitter_precoder, theta.detach(),
132 | noise_power)
133 | LossAccumulated_w = 0 # reset the accumulated loss in the meta learning network for precoding matrix
134 | LossAccumulated_theta = 0 # reset the accumulated loss in the meta learning network for phase shift matrix
135 |
136 | # save the WSR of each sample
137 | WSR_matrix = WSR_list_per_sample
138 | sio.savemat('./GEMML_result.mat',
139 | {'WSR_matrix': WSR_matrix.detach().numpy()})
140 |
141 | #
142 |
--------------------------------------------------------------------------------
/net.py:
--------------------------------------------------------------------------------
1 | """
2 | GEMML code
3 | ------------------------------
4 | Implementation of the GEMML algorithm, which is proposed in the paper:
5 | Robust Beamforming for RIS-aided Communications: Gradient Enhanced Manifold Meta Learning
6 |
7 | References and Relevant Links
8 | ------------------------------
9 | GitHub Repository:
10 | https://github.com/FenghaoZhu/GEMML
11 |
12 | Related arXiv Paper:
13 | https://arxiv.org/abs/2402.10626
14 |
15 | File introduction
16 | ------------------------------
17 | This is the net file, which declares the meta learning networks as shown in the paper.
18 | Note that the NNs are declared here and the optimization process is implemented in the main file.
19 |
20 | @author: F. Zhu and X. Wang
21 | """
22 | #
23 | import torch
24 | import torch.nn as nn
25 | import numpy as np
26 | from util import *
27 |
28 |
29 | #
30 |
31 |
32 | #
33 |
34 | # customized layer for optimizing the phase shifting matrix
35 | class LambdaLayer(nn.Module):
36 | def __init__(self, lambda_function):
37 | super(LambdaLayer, self).__init__()
38 | self.lambda_function = lambda_function
39 |
40 | def forward(self, x):
41 | return self.lambda_function(x)
42 |
43 |
44 | class meta_optimizer_theta(nn.Module):
45 | """
46 |     this class is used to define the meta learning network for the phase shift matrix theta
47 | """
48 |
49 | def __init__(self, input_size, hidden_size, output_size):
50 | """
51 | this function is used to initialize the meta learning network for phase shifting matrix
52 | :param input_size: the size of the input, which is nr_of_RIS_elements in this code
53 | :param hidden_size: the size of hidden layers, which is hidden_size_theta in this code
54 | :param output_size: the size of the output, which is nr_of_RIS_elements in this code
55 | """
56 | super(meta_optimizer_theta, self).__init__()
57 |
58 | self.layer = nn.Sequential(
59 | nn.Linear(input_size, hidden_size),
60 | nn.ReLU(),
61 | nn.Linear(hidden_size, output_size),
62 | nn.Sigmoid(),
63 | LambdaLayer(lambda x: 2 * torch.pi * x)
64 | )
65 |
66 | def forward(self, gradient):
67 | """
68 | this function is used to implement the forward propagation of the meta learning network for theta
69 | :param gradient: the gradient of SE with respect to theta, with sum of user weights normalized to 1
70 | :return: regulated delta theta
71 | """
72 | gradient = gradient.unsqueeze(0)
73 | gradient = self.layer(gradient)
74 | gradient = gradient.squeeze(0)
75 | return gradient
76 |
77 |
78 | class meta_optimizer_w(nn.Module):
79 | """
80 | this class is used to define the meta learning network for w
81 | """
82 |
83 | def __init__(self, input_size, hidden_size, output_size):
84 | """
85 | this function is used to initialize the meta learning network for w
86 | :param input_size: the size of the input, which is nr_of_users*2 in this code
87 | :param hidden_size: the size of hidden layers, which is hidden_size_w in this code
88 | :param output_size: the size of the output, which is nr_of_users*2 in this code
89 | """
90 | super(meta_optimizer_w, self).__init__()
91 |
92 | self.layer = nn.Sequential(
93 | nn.Linear(input_size, hidden_size),
94 | nn.ReLU(),
95 | nn.Linear(hidden_size, output_size),
96 | )
97 |
98 | def forward(self, gradient):
99 | """
100 | this function is used to implement the forward propagation of the meta learning network for w
101 | :param gradient: the gradient of SE with respect to w, with sum of user weights normalized to 1
102 | :return: delta w
103 | """
104 | gradient = gradient.unsqueeze(0)
105 | gradient = self.layer(gradient)
106 | gradient = gradient.squeeze(0)
107 | return gradient
108 |
109 |
110 | #
111 |
112 |
113 | #
114 |
115 | def meta_learner_w(optimizee, Internal_iteration, user_weights, channel1, channel2, X,
116 | theta, noise_power, retain_graph_flag=True):
117 | """
118 | Implementation of inner iteration of meta learning for w
119 | :param optimizee: optimizer for w
120 | :param Internal_iteration: number of inner loops in each outer loop
121 | :param user_weights: the weight of each user
122 | :param channel1: channel G
123 | :param channel2: channel H
124 | :param X: the compressed precoding matrix
125 | :param theta: the phase shift matrix
126 | :param noise_power: the noise power
127 | :param retain_graph_flag: whether to retain the graph
128 | :return: the loss, the accumulated loss, and the updated compressed precoding matrix
129 | """
130 | X_internal = X # initialize the compressed precoding matrix
131 | X_internal.requires_grad = True # set the requires_grad flag to true to enable the backward propagation
132 | sum_loss_w = 0 # record the accumulated loss
133 | for internal_index in range(Internal_iteration):
134 | L = -compute_weighted_sum_rate_X(user_weights, channel1, channel2, X_internal, theta, noise_power)
135 | sum_loss_w = L + sum_loss_w # accumulate the loss
136 | L.backward(retain_graph=retain_graph_flag) # compute the gradient
137 | X_grad = X_internal.grad.clone().detach() # clone the gradient
138 |         # the real-valued linear layers cannot take complex tensors, so we split the real and imaginary parts and concatenate them
139 | X_grad1 = torch.cat((X_grad.real, X_grad.imag), dim=1) # concatenate the real and imaginary part
140 | X_update = optimizee(X_grad1) # input the gradient and get the increment
141 | # recover the complex number from the real and imaginary parts
142 | X_update1 = X_update[:, 0: nr_of_users] + 1j * X_update[:, nr_of_users: 2 * nr_of_users]
143 | X_internal = X_internal + X_update1 # update the compressed precoding matrix
144 | X_update.retain_grad()
145 | X_internal.retain_grad()
146 | return L, sum_loss_w, X_internal
147 |
148 |
149 | def meta_learner_theta(optimizee, Internal_iteration, user_weights, channel1, channel2, X,
150 | theta, noise_power, retain_graph_flag=True):
151 | """
152 | Implementation of inner iteration of meta learning for theta
153 | :param optimizee: optimizer for theta
154 | :param Internal_iteration: number of inner loops in each outer loop
155 | :param user_weights: the weight of each user
156 | :param channel1: channel G
157 | :param channel2: channel H
158 | :param X: the compressed precoding matrix
159 | :param theta: the phase shift matrix
160 | :param noise_power: the noise power
161 | :param retain_graph_flag: whether to retain the graph
162 | :return: the loss, the accumulated loss, and the updated phase shift matrix
163 | """
164 | cascaded_channel = channel2.conj() @ torch.diag(torch.exp(theta * 1j)) @ channel1
165 | transmitter_precoder = cascaded_channel.conj().T @ X
166 | theta_internal = theta
167 | theta_internal.requires_grad = True
168 | sum_loss_theta = 0
169 | for internal_index in range(Internal_iteration):
170 | L = -compute_weighted_sum_rate(user_weights, channel1, channel2, transmitter_precoder, theta_internal,
171 | noise_power) # compute the loss
172 | L.backward(retain_graph=retain_graph_flag) # compute the gradient
173 | theta_update = optimizee(theta_internal.grad.clone().detach()) # input the gradient and get the increment
174 | sum_loss_theta = L + sum_loss_theta # accumulate the loss
175 | theta_internal = theta_internal + theta_update # update the phase shift matrix
176 | theta_update.retain_grad()
177 | theta_internal.retain_grad()
178 | return L, sum_loss_theta, theta_internal
179 |
180 |
181 | #
182 |
183 | #
184 | # initialize the meta learning network w parameters
185 | input_size_w = nr_of_users * 2
186 | hidden_size_w = 200
187 | output_size_w = nr_of_users * 2
188 | batch_size_w = nr_of_users
189 |
190 | # initialize the meta learning network theta parameters
191 | input_size_theta = nr_of_RIS_elements
192 | hidden_size_theta = 200
193 | output_size_theta = nr_of_RIS_elements
194 | batch_size_theta = 1
195 |
196 | #
197 |
198 |
199 | # Test function, for testing purposes only
200 | if __name__ == '__main__':
201 | print("input_size_w: ", input_size_w, "\n",
202 | "hidden_size_w: ", hidden_size_w, "\n",
203 | "output_size_w: ", output_size_w, "\n",
204 | "batch_size_w: ", batch_size_w, "\n",
205 | "input_size_theta: ", input_size_theta, "\n",
206 | "hidden_size_theta: ", hidden_size_theta, "\n",
207 | "output_size_theta: ", output_size_theta, "\n",
208 | "batch_size_theta: ", batch_size_theta, "\n",
209 | )
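
    # a minimal smoke test (illustrative only, not part of the paper's pipeline):
    # pass random gradients through both meta learners and check the output shapes
    opt_theta = meta_optimizer_theta(input_size_theta, hidden_size_theta, output_size_theta)
    print("delta theta shape:", opt_theta(torch.randn(nr_of_RIS_elements)).shape)  # (nr_of_RIS_elements,)
    opt_w = meta_optimizer_w(input_size_w, hidden_size_w, output_size_w)
    # the real and imaginary parts of the w gradient are concatenated along the last dimension
    print("delta w shape:", opt_w(torch.randn(nr_of_users, nr_of_users * 2)).shape)  # (nr_of_users, nr_of_users * 2)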
210 |
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
1 | numpy==1.23.5
2 | scipy==1.12.0
3 | torch==2.1.0
4 | tqdm==4.66.1
5 |
--------------------------------------------------------------------------------
/util.py:
--------------------------------------------------------------------------------
1 | """
2 | GEMML code
3 | ------------------------------
4 | Implementation of the GEMML algorithm, which is proposed in the paper:
5 | Robust Beamforming for RIS-aided Communications: Gradient Enhanced Manifold Meta Learning
6 |
7 | References and Relevant Links
8 | ------------------------------
9 | GitHub Repository:
10 | https://github.com/FenghaoZhu/GEMML
11 |
12 | Related arXiv Paper:
13 | https://arxiv.org/abs/2402.10626
14 |
15 | File introduction
16 | ------------------------------
17 | This is the util file, which includes the initialization of the channel, the computation of the SINR and the rate, etc.
18 |
19 | @author: F. Zhu and X. Wang
20 | """
21 | #
22 | import numpy as np
23 | import torch
24 | import torch.nn as nn
25 | import random
26 | #
27 |
28 | #
29 | USE_CUDA = torch.cuda.is_available()
30 | DEVICE = torch.device("cuda" if USE_CUDA else "cpu")
31 | External_iteration = 500
32 |
33 | Internal_iteration = 1
34 | Update_steps = 1
35 | N_i = Internal_iteration
36 | N_o = Update_steps
37 | # optimizer_lr_theta = 10e-4 # changeable
38 | # optimizer_lr_w = 15e-4
39 | optimizer_lr_theta = 1e-3 # changeable
40 | optimizer_lr_w = 1.5e-3
41 | hidden_size_theta = 200
42 | hidden_size_w = 200
43 |
44 | nr_of_users = 4
45 | nr_of_BS_antennas = 64
46 | nr_of_RIS_elements = 100
47 |
48 | epoch = 1
49 | nr_of_training = 100  # number of channel samples to optimize; GEMML is pretraining-free, so each sample is solved directly
50 | scheduled_users = [x for x in range(nr_of_users)]
51 | selected_users = [x for x in range(nr_of_users)] # array of scheduled users. Note that we schedule all the users.
52 | snr = 10
53 | noise_power = 1
54 | total_power = noise_power * 10 ** (snr / 10)
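# e.g. with snr = 10 (dB) and noise_power = 1, total_power = 10 ** (10 / 10) = 10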
55 |
56 | #
57 |
58 | #
59 | def initialize_channel(number_of_BS_antennas, number_of_users):
60 | """
61 | Generate the channel matrix
62 | :param number_of_BS_antennas: the number of BS antennas
63 | :param number_of_users: the number of users
64 | :return: channel matrix
65 | """
66 | channel = torch.randn(number_of_users, number_of_BS_antennas) + 1j * torch.randn(number_of_users,
67 | number_of_BS_antennas)
68 | channel = channel / torch.sqrt(torch.tensor(2))
69 | return channel # size: nr_of_users * nr_of_BS_antennas
70 |
71 |
72 | def compute_sinr(channel1, channel2, precoder, theta, power_noise, user_id):
73 | """
74 |     This version of the SINR computation deals with torch format; the precoder is a complex matrix
75 | :param channel1: nr_of_RIS_elements * nr_of_BS_antennas (G in our paper, the channel from BS to RIS)
76 | :param channel2: nr_of_users * nr_of_RIS_elements (h in our paper, the channel from RIS to users)
77 | :param precoder: nr_of_BS_antennas * nr_of_users (w in our paper, the precoder of BS)
78 |     :param theta: nr_of_RIS_elements * 1 (theta in our paper, the phase shift vector of the RIS)
79 | :param user_id: the index of the user
80 | :param power_noise: the noise power
81 | :return: the SINR of the user
82 | """
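    # SINR_k = |h_k^H diag(e^{j theta}) G w_k|^2 / (sum_{i != k} |h_k^H diag(e^{j theta}) G w_i|^2 + power_noise)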
83 |     # to avoid duplicate computation, we first form the effective channel row h_k^H diag(e^{j theta}) G
84 |     # and then reuse it for both the numerator and the denominator
85 |     htG = torch.conj(channel2[user_id, :]) * torch.exp(theta * 1j) @ channel1  # effective channel row for user k
86 |     received_powers = (torch.absolute(htG @ precoder)) ** 2  # power received at user k from each user's stream
87 |     numerator = received_powers[user_id]  # desired signal power
88 |     inter_user_interference = torch.sum(received_powers) - numerator
89 |     denominator = power_noise + inter_user_interference
90 | result = numerator / denominator
91 | return result
92 |
93 |
94 | def compute_weighted_sum_rate(user_weights, channel1, channel2, precoder_in, theta, power_noise):
95 | """
96 |     This version of the rate function deals with torch format; the transmitter precoder is a complex matrix
97 | :param user_weights: the weights of users
98 | :param channel1: nr_of_RIS_elements * nr_of_BS_antennas (G in our paper, the channel from BS to RIS)
99 | :param channel2: nr_of_users * nr_of_RIS_elements (h in our paper, the channel from RIS to users)
100 | :param precoder_in: nr_of_BS_antennas * nr_of_users (w in our paper, the precoder of BS)
101 | :param theta: nr_of_RIS_elements * 1 (theta in our paper, the phase shift of RIS)
102 | :param power_noise: the noise power
103 | :return: weighted_sum_rate (the weighted sum rate of the users)
104 | """
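    # weighted_sum_rate = sum_k user_weights[k] * log2(1 + SINR_k)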
105 | result = 0
106 | nr_of_user = channel2.shape[0]
107 | transmitter_precoder = precoder_in
108 | for user_index in range(nr_of_user):
109 | user_sinr = compute_sinr(channel1, channel2, transmitter_precoder, theta, power_noise, user_index)
110 | result = result + user_weights[user_index] * torch.log2(1 + user_sinr)
112 | return result
113 |
114 |
115 | def compute_weighted_sum_rate_X(user_weights, channel1, channel2, X, theta, power_noise):
116 | """
117 |     This version of the rate function deals with torch format; the transmitter precoder is a complex matrix
118 | :param user_weights: the weights of users
119 | :param channel1: nr_of_RIS_elements * nr_of_BS_antennas (G in our paper, the channel from BS to RIS)
120 | :param channel2: nr_of_users * nr_of_RIS_elements (h in our paper, the channel from RIS to users)
121 | :param X: nr_of_users * nr_of_users (X in our paper, the collapsed precoder of BS)
122 | :param theta: nr_of_RIS_elements * 1 (theta in our paper, the phase shift of RIS)
123 | :param power_noise: the noise power
124 | :return: weighted_sum_rate (the weighted sum rate of the users)
125 | """
126 | result = 0
127 | cascaded_channel = channel2.conj() @ torch.diag(torch.exp(theta * 1j)) @ channel1
128 | transmitter_precoder = cascaded_channel.conj().T @ X
129 | nr_of_user = channel2.shape[0]
130 | for user_index in range(nr_of_user):
131 | user_sinr = compute_sinr(channel1, channel2, transmitter_precoder, theta, power_noise, user_index)
132 | result = result + user_weights[user_index] * torch.log2(1 + user_sinr)
133 | return result
134 |
135 |
136 | def init_transmitter_precoder(channel_realization):
137 | """
138 | This function is used to initialize the transmitter precoder in torch and numpy format
139 |     :param channel_realization: the channel matrix (nr_of_users * nr_of_BS_antennas) used to initialize the precoder
140 | :return: transmitter_precoder, transmitter_precoder_initialize
141 | """
142 | transmitter_precoder_init = np.zeros((nr_of_BS_antennas, nr_of_users)) + 1j * np.zeros(
143 | (nr_of_BS_antennas, nr_of_users))
144 | for user_index in range(nr_of_users):
145 | if user_index in selected_users:
146 | transmitter_precoder_init[:, user_index] = channel_realization[user_index, :]
147 | transmitter_precoder_initialize = transmitter_precoder_init / np.linalg.norm(transmitter_precoder_init) * np.sqrt(
148 | total_power)
149 |
150 | transmitter_precoder_init = torch.from_numpy(transmitter_precoder_initialize)
151 | transmitter_precoder_complex = transmitter_precoder_init
152 | transmitter_precoder_Re = transmitter_precoder_complex.real
153 | transmitter_precoder_Im = transmitter_precoder_complex.imag
154 | transmitter_precoder = torch.cat((transmitter_precoder_Re, transmitter_precoder_Im), dim=1)
155 | return transmitter_precoder, transmitter_precoder_initialize # torch real format, numpy complex format
156 |
157 |
158 | def init_X(antenna_number, user_number, cascaded_channel, power):
159 | """
160 |     This function is used to initialize the collapsed beamforming matrix in torch format
161 |     :param antenna_number: the number of BS antennas
162 |     :param user_number: the number of users
163 |     :param cascaded_channel: the cascaded channel between the BS and the users
164 |     :param power: the power constraint
165 |     :return: initialized collapsed beamforming matrix X (user_number * user_number) and initialized full beamforming matrix V (antenna_number * user_number)
166 | """
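    # note: X has user_number * user_number entries while V has antenna_number * user_number,
    # so optimizing X instead of V shrinks the number of variables; V = cascaded_channel^H @ X
    # always lies in the span of the cascaded channel's conjugated rows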
167 |     # initialize the collapsed beamforming matrix X; the full beamforming matrix V is derived from it
168 |     X = torch.randn(user_number, user_number) + 1j * torch.randn(user_number, user_number)
169 |     # derive the full beamforming matrix from the collapsed one
170 |     V = cascaded_channel.conj().T @ X
171 | # normalize the collapsed beamforming vector X and the full beamforming vector V
172 | X = X / torch.norm(V) * torch.sqrt(torch.tensor(power))
173 | V = cascaded_channel.conj().T @ X
174 | return X, V
175 |
176 |
177 | #
178 |
179 |
180 | # Test function, for testing purposes only
181 | if __name__ == '__main__':
182 | channel1 = torch.randn(nr_of_RIS_elements, nr_of_BS_antennas) + 1j * torch.randn(
183 | nr_of_RIS_elements, nr_of_BS_antennas)
184 | channel2 = torch.randn(nr_of_users, nr_of_RIS_elements) + 1j * torch.randn(
185 | nr_of_users, nr_of_RIS_elements)
186 | precoder = torch.randn(nr_of_BS_antennas, nr_of_users) + 1j * torch.randn(
187 | nr_of_BS_antennas, nr_of_users)
188 | theta = torch.randn(nr_of_RIS_elements)
189 | user_weights = np.ones(nr_of_users)
190 | noise_power = 1
191 | user_id = 0
192 | selected_users = [x for x in range(nr_of_users)]
193 | print("compute_weighted_sum_rate: ",
194 | compute_weighted_sum_rate(user_weights, channel1, channel2,
195 | precoder, theta, noise_power),
196 | '\n',
197 | "compute_sinr: ",
198 | compute_sinr(channel1, channel2, precoder, theta, noise_power, user_id),
199 | )
200 |
--------------------------------------------------------------------------------