├── model.png
├── Data
│   └── README.md
├── README.md
└── Model
    ├── load_data.py
    └── MGCL.py
/model.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kangliu1225/MGCL/HEAD/model.png
--------------------------------------------------------------------------------
/Data/README.md:
--------------------------------------------------------------------------------
1 | Please download the [datasets](https://www.aliyundrive.com/s/cmEeDMecU88) and copy them here.
2 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | Paper link: https://ieeexplore.ieee.org/document/10075502/
2 |
15 | Many state-of-the-art multimedia recommender systems effectively alleviate sparsity and cold-start issues by modeling multimodal user preference. Their core paradigm is built on graph learning techniques, which propagate high-order multimodal messages over the user-item interaction graph to obtain node representations that capture user preferences in both the interactive and multimodal dimensions. However, we argue that this paradigm is suboptimal because it ignores two problems: (1) items' multimodal content contains a large amount of preference-independent noise, and (2) propagating this noise over the interaction graph contaminates the representations of user preferences in both dimensions.
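To make problem (2) concrete, here is a toy NumPy sketch (our illustration, not from the paper; all names are made up) of how preference-independent noise on a single item's modality feature leaks into a user's aggregated representation after one round of message propagation:

```python
import numpy as np

# Toy interaction matrix: 2 users x 2 items; user 0 interacted with both items.
R = np.array([[1., 1.],
              [0., 1.]])

def propagate(R, item_feat):
    """One round of user <- item message passing with row normalization."""
    deg = R.sum(axis=1, keepdims=True)
    return (R @ item_feat) / deg

clean = np.array([[1.0], [1.0]])   # both items carry a clean 1-D feature
noisy = np.array([[6.0], [1.0]])   # item 0 carries heavy preference-independent noise

print(propagate(R, clean))  # user 0 aggregates to 1.0
print(propagate(R, noisy))  # user 0 aggregates to 3.5: the item noise leaked into the user side
```

With deeper propagation, the contamination reaches items two hops away as well, which is exactly the failure mode MGCL targets.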
16 |
17 | In this work, we aim to reduce the negative effects of multimodal noise and further improve user preference modeling. To this end, we develop a multimodal graph contrastive learning (MGCL) approach that decomposes user preferences into multiple dimensions and performs cross-dimension mutual information maximization, so that preference modeling in the different dimensions can mutually reinforce each other. In particular, we first adopt a graph learning approach to generate representations of users and items in the interaction and multimodal dimensions, respectively. We then construct an additional contrastive learning task to maximize the consistency between the dimensions. Extensive experiments on three public datasets validate the effectiveness and scalability of the proposed MGCL.
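The cross-dimension contrastive task can be sketched as an InfoNCE objective in which the two views of the same user (or item) from different dimensions form the positive pair and the other rows in the batch act as negatives. Below is a minimal NumPy restatement (our illustration; variable names are made up, and the actual model implements this in TensorFlow in `Model/MGCL.py`):

```python
import numpy as np

def infonce(z1, z2, temp=0.04):
    """Cross-dimension InfoNCE: row i of z1 and row i of z2 are a positive
    pair; every other row of z2 acts as an in-batch negative."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temp                    # pairwise cosine similarities
    pos = np.diag(logits)                        # positive-pair scores
    return float(np.sum(np.log(np.sum(np.exp(logits), axis=1)) - pos))

rng = np.random.default_rng(0)
u_cf = rng.normal(size=(4, 8))                   # interactive-dimension embeddings
u_visual = u_cf + 0.1 * rng.normal(size=(4, 8))  # well-aligned visual-dimension view
u_random = rng.normal(size=(4, 8))               # unrelated view

# Aligned views yield a lower loss than unrelated ones.
print(infonce(u_cf, u_visual), infonce(u_cf, u_random))
```

Minimizing this loss maximizes a lower bound on the mutual information between the two dimensions, which is what allows the cleaner interactive signal to regularize the noisier multimodal one.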
18 |
19 | We provide a TensorFlow implementation of MGCL.
20 |
21 | ### Before running the code, please download the [datasets](https://www.aliyundrive.com/s/cmEeDMecU88) and copy them into the `Data` directory.
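For reference, `Model/load_data.py` reads the following files from each dataset's folder (file names and per-line formats are taken from the loader; the archive layout itself may differ slightly):

```
Data/<dataset>/
├── train.txt          # one line per user: "<uid> <iid> <iid> ..."
├── test.txt           # same format as train.txt
├── item2imgfeat.txt   # one line per item: "<iid> <v1> ... <v_d1>" (visual features)
└── itemtitle2vec.txt  # one line per item: "<iid> <t1> ... <t_d2>" (absent for ali/taobao)
```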
22 |
23 | ## Prerequisites
24 |
25 | - TensorFlow 1.10.0
26 | - Python 3.5
27 | - NVIDIA GPU + CUDA + cuDNN
28 |
29 |
30 | ## Citation :satisfied:
31 | If our paper and code are useful to you, please cite:
32 |
33 |     @ARTICLE{MGCL,
34 |       author={Liu, Kang and Xue, Feng and Guo, Dan and Sun, Peijie and Qian, Shengsheng and Hong, Richang},
35 |       journal={IEEE Transactions on Multimedia},
36 |       title={Multimodal Graph Contrastive Learning for Multimedia-Based Recommendation},
37 |       year={2023},
38 |       pages={1-13},
39 |       doi={10.1109/TMM.2023.3251108}
40 |     }
41 |
--------------------------------------------------------------------------------
/Model/load_data.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import random as rd
3 | import scipy.sparse as sp
4 | import time
5 | import collections
6 | import pickle
7 | import os
8 |
9 |
10 | class Data(object):
11 | def __init__(self, path, batch_size):
12 | title_enable = True
13 | if 'ali' in path or 'taobao' in path:
14 | # print('Data loader won\'t provide title feat.')
15 | title_enable = False
16 |
17 | if 'movie' in path:
18 | d1 = 2048
19 | d2 = 100
20 | else:
21 | d1 = 4096
22 | d2 = 300
23 |
24 | self.path = path
25 |
26 | self.batch_size = batch_size
27 |
28 | train_file = path + '/train.txt'
29 | test_file = path + '/test.txt'
30 |
31 | img_feat_file = path + '/item2imgfeat.txt'
32 | text_feat_file = path + '/itemtitle2vec.txt'
33 |
34 | self.exist_items_in_entity = set()
35 | self.exist_items_in_title = set()
36 | self.exist_items_in_review = set()
37 | self.exist_items_in_visual = set()
38 |
39 | self.n_users, self.n_items = 0, 0
40 |
41 | self.n_train, self.n_test = 0, 0
42 | self.neg_pools = {}
43 |
44 | self.exist_users = []
45 |
46 | with open(train_file) as f:
47 | for l in f.readlines():
48 | if len(l) > 0:
49 | l = l.strip('\n').split(' ')
50 | items = [int(i) for i in l[1:]]
51 | uid = int(l[0])
52 | self.exist_users.append(uid)
53 | self.n_items = max(self.n_items, max(items))
54 | self.n_users = max(self.n_users, uid)
55 | self.n_train += len(items)
56 |
57 | with open(test_file) as f:
58 | for l in f.readlines():
59 | if len(l) > 0:
60 | l = l.strip('\n')
61 | try:
62 | items = [int(i) for i in l.split(' ')[1:]]
63 | except Exception:
64 | continue
65 | self.n_items = max(self.n_items, max(items))
66 | self.n_test += len(items)
67 |
68 | self.n_items += 1
69 | self.n_users += 1
70 |
71 | self.exist_items = list(range(self.n_items))
72 |
73 | self.R = sp.dok_matrix((self.n_users, self.n_items), dtype=np.float32)
74 |
75 | self.train_items, self.test_set = {}, {}
76 | self.train_users = {}
77 | self.train_users_f = {}
78 |
79 | with open(train_file) as f:
80 | for l in f.readlines():
81 | if len(l) > 0:
82 | l = l.strip('\n').split(' ')
83 | items = [int(i) for i in l[1:]]
84 | uid = int(l[0])
85 | for i in items:
86 |                         if i not in self.train_users_f:
87 |                             self.train_users_f[i] = []
88 |                         # append unconditionally (the original `else` dropped each item's first uid)
89 |                         self.train_users_f[i].append(uid)
90 |
91 | with open(train_file) as f_train:
92 | with open(test_file) as f_test:
93 | for l in f_train.readlines():
94 | if len(l) == 0: break
95 | l = l.strip('\n')
96 | items = [int(i) for i in l.split(' ')]
97 | uid, train_items = items[0], items[1:]
98 |
99 | for i in train_items:
100 | self.R[uid, i] = 1.
101 |
102 | if i not in self.train_users:
103 | self.train_users[i] = []
104 | self.train_users[i].append(uid)
105 |
106 | self.train_items[uid] = train_items
107 |
108 | for l in f_test.readlines():
109 | if len(l) == 0: break
110 | l = l.strip('\n')
111 | try:
112 | items = [int(i) for i in l.split(' ')]
113 | except Exception:
114 | continue
115 |
116 | uid, test_items = items[0], items[1:]
117 | self.test_set[uid] = test_items
118 |
119 | self.img_features = {}
120 | with open(img_feat_file, 'r') as file:
121 | for line in file.readlines():
122 | l = line.strip().split(' ')
123 | item_id = l[0]
124 | img_feat = list(map(float, l[1:]))
125 | self.img_features[int(item_id)] = img_feat
126 |         self.imageFeaMatrix = [[0.] * d1 for _ in range(self.n_items)]  # fresh row per item, not one shared list
127 | for item in self.img_features:
128 | self.imageFeaMatrix[item] = self.img_features[item]
129 |
130 | if title_enable:
131 | self.text_features = {}
132 | with open(text_feat_file, 'r') as file:
133 | for line in file.readlines():
134 | l = line.strip().split(' ')
135 | item_id = l[0]
136 | text_feat = list(map(float, l[1:]))
137 | self.text_features[int(item_id)] = text_feat
138 |             self.textFeatMatrix = [[0.] * d2 for _ in range(self.n_items)]  # fresh row per item, not one shared list
139 | for item in self.text_features:
140 | self.textFeatMatrix[item] = self.text_features[item]
141 |
142 | self.R = self.R.tocsr()
143 |
144 | self.coo_R = self.R.tocoo()
145 |
146 |
147 | def get_adj_mat(self):
148 | origin_file = self.path + '/origin'
149 |
150 | try:
151 | t1 = time.time()
152 | if not os.path.exists(origin_file):
153 | os.mkdir(origin_file)
154 |
155 | left = sp.load_npz(origin_file + '/adj_mat_left.npz')
156 | norm_adj_mat_3 = sp.load_npz(origin_file + '/adj_mat_3.npz')
157 | norm_adj_mat_4 = sp.load_npz(origin_file + '/adj_mat_4.npz')
158 | norm_adj_mat_5 = sp.load_npz(origin_file + '/adj_mat_5.npz')
159 |
160 |             print('already loaded adjacency matrices', norm_adj_mat_4.shape, time.time() - t1)
161 |
162 | except Exception:
163 | left, norm_adj_mat_3, norm_adj_mat_4, norm_adj_mat_5 = self.create_adj_mat()
164 |
165 | sp.save_npz(origin_file + '/adj_mat_left.npz', left)
166 | sp.save_npz(origin_file + '/adj_mat_3.npz', norm_adj_mat_3)
167 | sp.save_npz(origin_file + '/adj_mat_4.npz', norm_adj_mat_4)
168 | sp.save_npz(origin_file + '/adj_mat_5.npz', norm_adj_mat_5)
169 |
170 | return left, norm_adj_mat_3, norm_adj_mat_4, norm_adj_mat_5
171 |
172 | def create_adj_mat(self):
173 | t1 = time.time()
174 | adj_mat = sp.dok_matrix((self.n_users + self.n_items, self.n_users + self.n_items), dtype=np.float32)
175 | adj_mat = adj_mat.tolil()
176 |
177 | R = self.R.tolil()
178 |
179 | adj_mat[:self.n_users, self.n_users: self.n_users + self.n_items] = R
180 | adj_mat[self.n_users: self.n_users + self.n_items, :self.n_users] = R.T
181 |
182 | adj_mat = adj_mat.todok()
183 |
184 |         print('already created adjacency matrix', adj_mat.shape, time.time() - t1)
185 |
186 | t2 = time.time()
187 |
188 |         def normalized_adj(adj, d1, d2):
189 |             # degree-scaled adjacency D^{d1} . A^T . D^{d2}; asymmetric unless d1 == d2
190 |             adj = sp.coo_matrix(adj)
191 |             rowsum = np.array(adj.sum(1))
192 |             d_inv_sqrt = np.power(rowsum, d1).flatten()
193 |             d_inv_sqrt[np.isinf(d_inv_sqrt)] = 0.
194 |             d_mat_inv_sqrt = sp.diags(d_inv_sqrt)
195 |
196 |             d_inv_sqrt_last = np.power(rowsum, d2).flatten()
197 |             d_inv_sqrt_last[np.isinf(d_inv_sqrt_last)] = 0.
198 |             d_mat_inv_sqrt_last = sp.diags(d_inv_sqrt_last)
199 |
200 |             return adj.dot(d_mat_inv_sqrt).transpose().dot(d_mat_inv_sqrt_last).tocoo()
201 |
202 |         norm_adj_mat_left = normalized_adj(adj_mat + sp.eye(adj_mat.shape[0]), -1.0, -0.0)
203 |         norm_adj_mat_53 = normalized_adj(adj_mat + sp.eye(adj_mat.shape[0]), -0.5, -0.3)
204 |         norm_adj_mat_54 = normalized_adj(adj_mat + sp.eye(adj_mat.shape[0]), -0.5, -0.4)
205 |         norm_adj_mat_55 = normalized_adj(adj_mat + sp.eye(adj_mat.shape[0]), -0.5, -0.5)
206 |
207 |         print('already normalized adjacency matrix', time.time() - t2)
208 |         return norm_adj_mat_left.tocsr(), norm_adj_mat_53.tocsr(), norm_adj_mat_54.tocsr(), norm_adj_mat_55.tocsr()
213 |
214 | def sample_u(self):
215 | total_users = self.exist_users
216 | users = rd.sample(total_users, self.batch_size)
217 |
218 | def sample_pos_items_for_u(u):
219 | pos_items = self.train_items[u]
220 | n_pos_items = len(pos_items)
221 | pos_id = np.random.randint(low=0, high=n_pos_items, size=1)[0]
222 | pos_i_id = pos_items[pos_id]
223 | return pos_i_id
224 |
225 | def sample_neg_items_for_u(u):
226 | pos_items = self.train_items[u]
227 | while True:
228 | neg_id = np.random.randint(low=0, high=self.n_items, size=1)[0]
229 | if neg_id not in pos_items:
230 | return neg_id
231 |
232 |         pos_items, neg_items = [], []
233 | for u in users:
234 | pos_i = sample_pos_items_for_u(u)
235 | neg_i = sample_neg_items_for_u(u)
236 |
237 | pos_items.append(pos_i)
238 | neg_items.append(neg_i)
239 |
240 | return users, pos_items, neg_items
241 |
242 | def print_statistics(self):
243 | print('n_users=%d, n_items=%d' % (self.n_users, self.n_items))
244 | print('n_interactions=%d' % (self.n_train + self.n_test))
245 | print('n_train=%d, n_test=%d, sparsity=%.5f' % (
246 | self.n_train, self.n_test, (self.n_train + self.n_test) / (self.n_users * self.n_items)))
247 |
267 |
--------------------------------------------------------------------------------
/Model/MGCL.py:
--------------------------------------------------------------------------------
1 | import time
2 | import tensorflow as tf
3 | import os
4 | import sys
5 | from load_data import Data
6 | import numpy as np
7 | import math
8 | import multiprocessing
9 | import heapq
10 | import random as rd
11 |
12 | os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
13 |
14 | model_name = 'MGCL'
15 |
16 | # ['amazon-beauty', 'Art', 'ali']
17 | dataset = 'amazon-beauty'
18 |
19 | '''
20 | #######################################################################
21 | Hyper-parameter settings.
22 | '''
23 |
24 | if dataset == 'amazon-beauty':
25 | n_layers = 4
26 | temp = 0.04
27 | decay = 0.001
28 | lambda_m = 0.4
29 | interval = 10
30 |
31 | elif dataset == 'Art':
32 | n_layers = 5
33 | temp = 0.04
34 | decay = 0.001
35 | lambda_m = 0.4
36 | interval = 10
37 | else:
38 | n_layers = 5
39 | temp = 0.04
40 | decay = 0.001
41 | lambda_m = 0.4
42 | interval = 5
43 |
44 |
45 | data_path = '../Data/'
46 | lr = 0.001
47 | batch_size = 2048
48 | embed_size = 64
49 | epoch = 500
50 | data_generator = Data(path=data_path + dataset, batch_size=batch_size)
51 | USR_NUM, ITEM_NUM = data_generator.n_users, data_generator.n_items
52 | N_TRAIN, N_TEST = data_generator.n_train, data_generator.n_test
53 | BATCH_SIZE = batch_size
54 | Ks = np.arange(1, 21)
55 |
56 | if dataset == 'ali':
57 | textual_enable = False
58 | else:
59 | textual_enable = True
60 |
61 |
62 | # model test module
63 | def test_one_user(x):
64 | u, rating = x[1], x[0]
65 |
66 | training_items = data_generator.train_items[u]
67 |
68 | user_pos_test = data_generator.test_set[u]
69 |
70 | all_items = set(range(ITEM_NUM))
71 |
72 | test_items = rd.sample(list(all_items - set(training_items) - set(user_pos_test)), 99) + user_pos_test
73 |
74 | item_score = {}
75 | for i in test_items:
76 | item_score[i] = rating[i]
77 |
78 | K_max = max(Ks)
79 | K_max_item_score = heapq.nlargest(K_max, item_score, key=item_score.get)
80 |
81 | r = []
82 | for i in K_max_item_score:
83 | if i in user_pos_test:
84 | r.append(1)
85 | else:
86 | r.append(0)
87 |
88 | precision, recall, ndcg, hit_ratio = [], [], [], []
89 |
90 | def hit_at_k(r, k):
91 | r = np.array(r)[:k]
92 | if np.sum(r) > 0:
93 | return 1.
94 | else:
95 | return 0.
96 |
97 |     def ndcg_at_k(r, k):
98 |         r = np.array(r)[:k]
99 |
100 |         if np.sum(r) > 0:
101 |             # np.where returns an array; take the rank of the first (single) relevant item
102 |             first_hit = np.where(r == 1)[0][0]
103 |             return math.log(2) / math.log(first_hit + 2)
104 |         else:
105 |             return 0.
104 |
105 | for K in Ks:
106 | ndcg.append(ndcg_at_k(r, K))
107 | hit_ratio.append(hit_at_k(r, K))
108 |
109 | return {'ndcg': np.array(ndcg), 'hit_ratio': np.array(hit_ratio), 'k': (u, K_max_item_score[:20])}
110 |
111 |
112 | def test(sess, model, users, items, batch_size, cores):
113 | result = {'ndcg': np.zeros(len(Ks)),
114 | 'hit_ratio': np.zeros(len(Ks)),
115 | 'k': []}
116 |
117 | pool = multiprocessing.Pool(cores)
118 |
119 | u_batch_size = batch_size * 2
120 |
121 | n_test_users = len(users)
122 | n_user_batchs = n_test_users // u_batch_size + 1
123 |
124 | count = 0
125 |
126 | for u_batch_id in range(n_user_batchs):
127 |
128 | start = u_batch_id * u_batch_size
129 |
130 | end = (u_batch_id + 1) * u_batch_size
131 |
132 | user_batch = users[start: end]
133 |
134 | item_batch = items
135 |
136 | rate_batch = sess.run(model.batch_ratings, {model.users: user_batch,
137 | model.pos_items: item_batch})
138 |
139 | user_batch_rating_uid = zip(rate_batch, user_batch)
140 |
141 | batch_result = pool.map(test_one_user, user_batch_rating_uid)
142 |
143 | count += len(batch_result)
144 |
145 | for re in batch_result:
146 | result['ndcg'] += re['ndcg'] / n_test_users
147 | result['hit_ratio'] += re['hit_ratio'] / n_test_users
148 | result['k'].append(re['k'])
149 |
150 |     assert count == n_test_users
151 |     pool.close()
152 |     pool.join()
152 | return result
153 |
154 | class Model(object):
155 |
156 | def __init__(self, data_config, img_feat, text_feat, d1, d2):
157 | self.n_users = data_config['n_users']
158 | self.n_items = data_config['n_items']
159 | self.d1 = d1
160 | self.d2 = d2
161 | self.n_fold = 10
162 | self.norm_adj = data_config['norm_adj']
163 | self.norm_adj_m = data_config['norm_adj_m']
164 | self.n_nonzero_elems = self.norm_adj.count_nonzero()
165 | self.lr = data_config['lr']
166 | self.emb_dim = data_config['embed_size']
167 | self.batch_size = data_config['batch_size']
168 | self.n_layers = data_config['n_layers']
169 | self.decay = data_config['decay']
170 |
171 | self.users = tf.placeholder(tf.int32, shape=(None,))
172 | self.pos_items = tf.placeholder(tf.int32, shape=(None,))
173 | self.neg_items = tf.placeholder(tf.int32, shape=(None,))
174 |
175 | self.weights = self._init_weights()
176 |
177 | self.im_v = tf.matmul(img_feat, self.weights['w1_v'])
178 | self.um_v = self.weights['user_embedding_v']
179 |
180 | self.im_t = tf.matmul(text_feat, self.weights['w1_t'])
181 | self.um_t = self.weights['user_embedding_t']
182 |
183 | '''
184 | ######################################################################################
185 | generate interactive-dimension embeddings
186 | '''
187 | self.ua_embeddings, self.ia_embeddings = self._create_norm_embed()
188 | self.u_g_embeddings = tf.nn.embedding_lookup(self.ua_embeddings, self.users)
189 | self.pos_i_g_embeddings = tf.nn.embedding_lookup(self.ia_embeddings, self.pos_items)
190 | self.neg_i_g_embeddings = tf.nn.embedding_lookup(self.ia_embeddings, self.neg_items)
191 |
192 | self.u_g_embeddings_pre = tf.nn.embedding_lookup(self.weights['user_embedding'], self.users)
193 | self.pos_i_g_embeddings_pre = tf.nn.embedding_lookup(self.weights['item_embedding'], self.pos_items)
194 | self.neg_i_g_embeddings_pre = tf.nn.embedding_lookup(self.weights['item_embedding'], self.neg_items)
195 |
196 | '''
197 | ######################################################################################
198 | generate multimodal-dimension embeddings
199 | '''
200 | self.ua_embeddings_v, self.ia_embeddings_v = self._create_norm_embed_v()
201 | self.u_g_embeddings_v = tf.nn.embedding_lookup(self.ua_embeddings_v, self.users)
202 | self.pos_i_g_embeddings_v = tf.nn.embedding_lookup(self.ia_embeddings_v, self.pos_items)
203 | self.neg_i_g_embeddings_v = tf.nn.embedding_lookup(self.ia_embeddings_v, self.neg_items)
204 |
205 | self.u_g_embeddings_v_pre = tf.nn.embedding_lookup(self.um_v, self.users)
206 | self.pos_i_g_embeddings_v_pre = tf.nn.embedding_lookup(self.im_v, self.pos_items)
207 | self.neg_i_g_embeddings_v_pre = tf.nn.embedding_lookup(self.im_v, self.neg_items)
208 |
209 | self.ua_embeddings_t, self.ia_embeddings_t = self._create_norm_embed_t()
210 | self.u_g_embeddings_t = tf.nn.embedding_lookup(self.ua_embeddings_t, self.users)
211 | self.pos_i_g_embeddings_t = tf.nn.embedding_lookup(self.ia_embeddings_t, self.pos_items)
212 | self.neg_i_g_embeddings_t = tf.nn.embedding_lookup(self.ia_embeddings_t, self.neg_items)
213 |
214 | self.u_g_embeddings_t_pre = tf.nn.embedding_lookup(self.um_t, self.users)
215 | self.pos_i_g_embeddings_t_pre = tf.nn.embedding_lookup(self.im_t, self.pos_items)
216 | self.neg_i_g_embeddings_t_pre = tf.nn.embedding_lookup(self.im_t, self.neg_items)
217 |
218 | self.batch_ratings = tf.matmul(self.u_g_embeddings, self.pos_i_g_embeddings,
219 | transpose_a=False, transpose_b=True) + \
220 | lambda_m * tf.matmul(self.u_g_embeddings_v, self.pos_i_g_embeddings_v,
221 | transpose_a=False, transpose_b=True) + \
222 | tf.matmul(self.u_g_embeddings_t, self.pos_i_g_embeddings_t,
223 | transpose_a=False, transpose_b=True)
224 |
225 | self.mf_loss, self.mf_loss_m, self.emb_loss = self.create_bpr_loss()
226 | self.cl_loss, self.cl_loss_v, self.cl_loss_t = self.create_cl_loss_cf(), self.create_cl_loss_v(), self.create_cl_loss_t()
227 |
228 | self.opt_1 = tf.train.AdamOptimizer(learning_rate=self.lr).minimize(self.mf_loss + self.mf_loss_m + self.emb_loss)
229 | self.opt_2 = tf.train.AdamOptimizer(learning_rate=self.lr/10).minimize(self.cl_loss + self.cl_loss_v + self.cl_loss_t)
230 |
231 | def _init_weights(self):
232 |
233 | all_weights = dict()
234 |
235 | initializer = tf.contrib.layers.xavier_initializer()
236 |
237 | all_weights['user_embedding'] = tf.Variable(initializer([self.n_users, self.emb_dim]),
238 | name='user_embedding')
239 | all_weights['item_embedding'] = tf.Variable(initializer([self.n_items, self.emb_dim]),
240 | name='item_embedding')
241 | all_weights['user_embedding_v'] = tf.Variable(initializer([self.n_users, self.emb_dim]),
242 | name='user_embedding_v')
243 | all_weights['user_embedding_t'] = tf.Variable(initializer([self.n_users, self.emb_dim]),
244 | name='user_embedding_t')
245 |
246 | all_weights['w1_v'] = tf.Variable(initializer([self.d1, self.emb_dim]), name='w1_v')
247 | all_weights['w1_t'] = tf.Variable(initializer([self.d2, self.emb_dim]), name='w1_t')
248 |
249 | return all_weights
250 |
251 | def _split_A_hat(self, X):
252 |
253 | A_fold_hat = []
254 |
255 | fold_len = (self.n_users + self.n_items) // self.n_fold
256 | for i_fold in range(self.n_fold):
257 | start = i_fold * fold_len
258 | if i_fold == self.n_fold - 1:
259 | end = self.n_users + self.n_items
260 | else:
261 | end = (i_fold + 1) * fold_len
262 |
263 | A_fold_hat.append(self._convert_sp_mat_to_sp_tensor(X[start:end]))
264 | return A_fold_hat
265 |
266 | def _create_norm_embed(self):
267 |
268 | A_fold_hat = self._split_A_hat(self.norm_adj)
269 |
270 | ego_embeddings = tf.concat(
271 | [self.weights['user_embedding'], self.weights['item_embedding']], axis=0)
272 |
273 | for k in range(0, self.n_layers):
274 |
275 | temp_embed = []
276 |
277 | for f in range(self.n_fold):
278 | temp_embed.append(tf.sparse_tensor_dense_matmul(A_fold_hat[f], ego_embeddings))
279 |
280 | side_embeddings = tf.concat(temp_embed, 0)
281 |
282 | ego_embeddings = side_embeddings
283 |
284 | u_g_embeddings, i_g_embeddings = tf.split(ego_embeddings, [self.n_users, self.n_items], 0)
285 |
286 | return u_g_embeddings, i_g_embeddings
287 |
288 | def _create_norm_embed_v(self):
289 |
290 | A_fold_hat = self._split_A_hat(self.norm_adj_m)
291 |
292 | ego_embeddings_v = tf.concat([self.um_v, self.im_v], axis=0)
293 |
294 | for k in range(0, 1):
295 |
296 | temp_embed = []
297 |
298 | for f in range(self.n_fold):
299 | temp_embed.append(tf.sparse_tensor_dense_matmul(A_fold_hat[f], ego_embeddings_v))
300 |
301 | side_embeddings = tf.concat(temp_embed, 0)
302 |
303 | ego_embeddings_v = side_embeddings
304 |
305 | u_embed, i_embed = tf.split(ego_embeddings_v, [self.n_users, self.n_items], 0)
306 |
307 | return u_embed, i_embed
308 |
309 | def _create_norm_embed_t(self):
310 |
311 | A_fold_hat = self._split_A_hat(self.norm_adj_m)
312 |
313 | ego_embeddings_t = tf.concat([self.um_t, self.im_t], axis=0)
314 |
315 | for k in range(0, 1):
316 |
317 | temp_embed = []
318 |
319 | for f in range(self.n_fold):
320 | temp_embed.append(tf.sparse_tensor_dense_matmul(A_fold_hat[f], ego_embeddings_t))
321 |
322 | side_embeddings = tf.concat(temp_embed, 0)
323 |
324 | ego_embeddings_t = side_embeddings
325 |
326 | u_embed, i_embed = tf.split(ego_embeddings_t, [self.n_users, self.n_items], 0)
327 |
328 | return u_embed, i_embed
329 |
330 | def create_bpr_loss(self):
331 | pos_scores_v = tf.reduce_sum(tf.multiply(self.u_g_embeddings_v, self.pos_i_g_embeddings_v), axis=1)
332 | neg_scores_v = tf.reduce_sum(tf.multiply(self.u_g_embeddings_v, self.neg_i_g_embeddings_v), axis=1)
333 |
334 | pos_scores_t = tf.reduce_sum(tf.multiply(self.u_g_embeddings_t, self.pos_i_g_embeddings_t), axis=1)
335 | neg_scores_t = tf.reduce_sum(tf.multiply(self.u_g_embeddings_t, self.neg_i_g_embeddings_t), axis=1)
336 |
337 | pos_scores = tf.reduce_sum(tf.multiply(self.u_g_embeddings, self.pos_i_g_embeddings), axis=1)
338 | neg_scores = tf.reduce_sum(tf.multiply(self.u_g_embeddings, self.neg_i_g_embeddings), axis=1)
339 |
340 | regularizer_mf_v = tf.nn.l2_loss(self.u_g_embeddings_v_pre) + tf.nn.l2_loss(self.pos_i_g_embeddings_v_pre) + \
341 | tf.nn.l2_loss(self.neg_i_g_embeddings_v_pre)
342 | regularizer_mf_t = tf.nn.l2_loss(self.u_g_embeddings_t_pre) + tf.nn.l2_loss(self.pos_i_g_embeddings_t_pre) + \
343 | tf.nn.l2_loss(self.neg_i_g_embeddings_t_pre)
344 | regularizer_mf = tf.nn.l2_loss(self.u_g_embeddings_pre) + tf.nn.l2_loss(self.pos_i_g_embeddings_pre) + \
345 | tf.nn.l2_loss(self.neg_i_g_embeddings_pre)
346 |
347 | mf_loss = tf.reduce_mean(tf.nn.softplus(-(pos_scores - neg_scores)))
348 |
349 | mf_loss_m = tf.reduce_mean(tf.nn.softplus(-(pos_scores_v - neg_scores_v))) \
350 | + tf.reduce_mean(tf.nn.softplus(-(pos_scores_t - neg_scores_t)))
351 |
352 | emb_loss = self.decay * (regularizer_mf + 0.1 * regularizer_mf_t + 0.1 * regularizer_mf_v) / self.batch_size
353 |
354 | self.user_embed, self.user_embed_v, self.user_embed_t = tf.nn.l2_normalize(self.u_g_embeddings_pre, axis=1), \
355 | tf.nn.l2_normalize(self.u_g_embeddings_v_pre, axis=1), \
356 | tf.nn.l2_normalize(self.u_g_embeddings_t_pre, axis=1)
357 | self.item_embed, self.item_embed_v, self.item_embed_t = tf.nn.l2_normalize(self.pos_i_g_embeddings_pre, axis=1), \
358 | tf.nn.l2_normalize(self.pos_i_g_embeddings_v_pre, axis=1), \
359 | tf.nn.l2_normalize(self.pos_i_g_embeddings_t_pre, axis=1)
360 |
361 | return mf_loss, mf_loss_m, emb_loss
362 |
363 | def create_cl_loss_cf(self):
364 | pos_score_v = tf.reduce_sum(tf.multiply(self.user_embed, self.user_embed_v), axis=1)
365 | pos_score_t = tf.reduce_sum(tf.multiply(self.user_embed, self.user_embed_t), axis=1)
366 |
367 | neg_score_v_u = tf.matmul(self.user_embed, self.user_embed_v, transpose_a=False, transpose_b=True)
368 | neg_score_t_u = tf.matmul(self.user_embed, self.user_embed_t, transpose_a=False, transpose_b=True)
369 |
370 | cl_loss_v_u = - tf.reduce_sum(
371 | tf.log(tf.exp(pos_score_v / temp) / tf.reduce_sum(tf.exp(neg_score_v_u / temp), axis=1)))
372 | cl_loss_t_u = - tf.reduce_sum(
373 | tf.log(tf.exp(pos_score_t / temp) / tf.reduce_sum(tf.exp(neg_score_t_u / temp), axis=1)))
374 |
375 | pos_score_v = tf.reduce_sum(tf.multiply(self.item_embed, self.item_embed_v), axis=1)
376 | pos_score_t = tf.reduce_sum(tf.multiply(self.item_embed, self.item_embed_t), axis=1)
377 |
378 | neg_score_v_i = tf.matmul(self.item_embed, self.item_embed_v, transpose_a=False, transpose_b=True)
379 | neg_score_t_i = tf.matmul(self.item_embed, self.item_embed_t, transpose_a=False, transpose_b=True)
380 |
381 | cl_loss_v_i = - tf.reduce_sum(
382 | tf.log(tf.exp(pos_score_v / temp) / tf.reduce_sum(tf.exp(neg_score_v_i / temp), axis=1)))
383 | cl_loss_t_i = - tf.reduce_sum(
384 | tf.log(tf.exp(pos_score_t / temp) / tf.reduce_sum(tf.exp(neg_score_t_i / temp), axis=1)))
385 |
386 | cl_loss = cl_loss_v_u + cl_loss_t_u + cl_loss_v_i + cl_loss_t_i
387 |
388 | return cl_loss
389 |
390 | def create_cl_loss_v(self):
391 | pos_score_1 = tf.reduce_sum(tf.multiply(self.user_embed_v, self.user_embed), axis=1)
392 | pos_score_2 = tf.reduce_sum(tf.multiply(self.user_embed_v, self.user_embed_t), axis=1)
393 |
394 | neg_score_1_u = tf.matmul(self.user_embed_v, self.user_embed, transpose_a=False, transpose_b=True)
395 | neg_score_2_u = tf.matmul(self.user_embed_v, self.user_embed_t, transpose_a=False, transpose_b=True)
396 |
397 | cl_loss_1_u = - tf.reduce_sum(
398 | tf.log(tf.exp(pos_score_1 / temp) / tf.reduce_sum(tf.exp(neg_score_1_u / temp), axis=1)))
399 | cl_loss_2_u = - tf.reduce_sum(
400 | tf.log(tf.exp(pos_score_2 / temp) / tf.reduce_sum(tf.exp(neg_score_2_u / temp), axis=1)))
401 |
402 | pos_score_1 = tf.reduce_sum(tf.multiply(self.item_embed_v, self.item_embed), axis=1)
403 | pos_score_2 = tf.reduce_sum(tf.multiply(self.item_embed_v, self.item_embed_t), axis=1)
404 |
405 | neg_score_1_i = tf.matmul(self.item_embed_v, self.item_embed, transpose_a=False, transpose_b=True)
406 | neg_score_2_i = tf.matmul(self.item_embed_v, self.item_embed_t, transpose_a=False, transpose_b=True)
407 |
408 | cl_loss_1_i = - tf.reduce_sum(
409 | tf.log(tf.exp(pos_score_1 / temp) / tf.reduce_sum(tf.exp(neg_score_1_i / temp), axis=1)))
410 | cl_loss_2_i = - tf.reduce_sum(
411 | tf.log(tf.exp(pos_score_2 / temp) / tf.reduce_sum(tf.exp(neg_score_2_i / temp), axis=1)))
412 |
413 | cl_loss = cl_loss_1_u + cl_loss_2_u + cl_loss_1_i + cl_loss_2_i
414 |
415 | return cl_loss
416 |
417 | def create_cl_loss_t(self):
418 | pos_score_1 = tf.reduce_sum(tf.multiply(self.user_embed_t, self.user_embed), axis=1)
419 | pos_score_2 = tf.reduce_sum(tf.multiply(self.user_embed_t, self.user_embed_v), axis=1)
420 |
421 | neg_score_1_u = tf.matmul(self.user_embed_t, self.user_embed, transpose_a=False, transpose_b=True)
422 | neg_score_2_u = tf.matmul(self.user_embed_t, self.user_embed_v, transpose_a=False, transpose_b=True)
423 |
424 | cl_loss_1_u = - tf.reduce_sum(
425 | tf.log(tf.exp(pos_score_1 / temp) / tf.reduce_sum(tf.exp(neg_score_1_u / temp), axis=1)))
426 | cl_loss_2_u = - tf.reduce_sum(
427 | tf.log(tf.exp(pos_score_2 / temp) / tf.reduce_sum(tf.exp(neg_score_2_u / temp), axis=1)))
428 |
429 | pos_score_1 = tf.reduce_sum(tf.multiply(self.item_embed_t, self.item_embed), axis=1)
430 | pos_score_2 = tf.reduce_sum(tf.multiply(self.item_embed_t, self.item_embed_v), axis=1)
431 |
432 |         # mirror the user-side pairs above: (t, interactive) and (t, visual)
433 |         neg_score_1_i = tf.matmul(self.item_embed_t, self.item_embed, transpose_a=False, transpose_b=True)
434 |         neg_score_2_i = tf.matmul(self.item_embed_t, self.item_embed_v, transpose_a=False, transpose_b=True)
434 |
435 | cl_loss_1_i = - tf.reduce_sum(
436 | tf.log(tf.exp(pos_score_1 / temp) / tf.reduce_sum(tf.exp(neg_score_1_i / temp), axis=1)))
437 | cl_loss_2_i = - tf.reduce_sum(
438 | tf.log(tf.exp(pos_score_2 / temp) / tf.reduce_sum(tf.exp(neg_score_2_i / temp), axis=1)))
439 |
440 | cl_loss = cl_loss_1_u + cl_loss_2_u + cl_loss_1_i + cl_loss_2_i
441 |
442 | return cl_loss
443 |
444 | def _convert_sp_mat_to_sp_tensor(self, X):
445 | coo = X.tocoo().astype(np.float32)
446 | indices = np.mat([coo.row, coo.col]).transpose()
447 | return tf.SparseTensor(indices, coo.data, coo.shape)
448 |
449 |
450 | if __name__ == '__main__':
451 |
452 | os.environ["CUDA_VISIBLE_DEVICES"] = str(0)
453 |
454 |     cores = multiprocessing.cpu_count() // 3
456 |
457 | data_generator.print_statistics()
458 | config = dict()
459 | config['n_users'] = data_generator.n_users
460 | config['n_items'] = data_generator.n_items
461 | config['decay'] = decay
462 | config['n_layers'] = n_layers
463 | config['embed_size'] = embed_size
464 | config['lr'] = lr
465 | config['batch_size'] = batch_size
466 |
467 | """
468 | ################################################################################
469 | Generate the Laplacian matrix.
470 | """
471 | norm_left, norm_3, norm_4, norm_5 = data_generator.get_adj_mat()
472 |
473 | if dataset == 'amazon-beauty' or dataset == 'Art':
474 | config['norm_adj'] = norm_4
475 | config['norm_adj_m'] = norm_4
476 | else:
477 | config['norm_adj'] = norm_3
478 | config['norm_adj_m'] = norm_3
479 |
480 | print('shape of adjacency', norm_left.shape)
481 |
482 | t0 = time.time()
483 |
484 |     if textual_enable:
485 | model = Model(data_config=config,
486 | img_feat=data_generator.imageFeaMatrix,
487 | text_feat=data_generator.textFeatMatrix, d1=4096, d2=300)
488 | else:
489 | model = Model(data_config=config,
490 | img_feat=data_generator.imageFeaMatrix,
491 | text_feat=data_generator.imageFeaMatrix, d1=4096, d2=4096)
492 |
493 |     tf_config = tf.ConfigProto()
494 |     tf_config.gpu_options.allow_growth = True
495 |     sess = tf.Session(config=tf_config)
496 |
497 | saver = tf.train.Saver(tf.global_variables())
498 |
499 | sess.run(tf.global_variables_initializer())
500 | cur_best_pre_0 = 0.
501 |
502 | """
503 | ################################################################################
504 | Train.
505 | """
506 | loss_loger, pre_loger, rec_loger, ndcg_loger, hit_loger = [], [], [], [], []
507 | stopping_step = 0
508 | should_stop = False
509 | max_recall, max_precision, max_ndcg, max_hr = 0., 0., 0., 0.
510 | max_epoch = 0
511 | early_stopping = 0
512 |
513 | best_score = 0
514 | best_result = {}
515 | all_result = {}
516 |
517 | for epoch in range(500):
518 | t1 = time.time()
519 | loss, mf_loss, emb_loss, cl_loss = 0., 0., 0., 0.
520 | n_batch = data_generator.n_train // batch_size + 1
521 |
522 | for idx in range(n_batch):
523 | users, pos_items, neg_items = data_generator.sample_u()
524 |
525 | _, batch_mf_loss, batch_emb_loss = sess.run(
526 | [model.opt_1, model.mf_loss, model.emb_loss],
527 | feed_dict={model.users: users,
528 | model.pos_items: pos_items,
529 | model.neg_items: neg_items
530 | })
531 | mf_loss += batch_mf_loss
532 | emb_loss += batch_emb_loss
533 |
534 | _, batch_cl_loss = sess.run(
535 | [model.opt_2, model.cl_loss],
536 | feed_dict={model.users: users,
537 | model.pos_items: pos_items,
538 | model.neg_items: neg_items
539 | })
540 | cl_loss += batch_cl_loss
541 |
542 |         if np.isnan(mf_loss):
543 | print('ERROR: loss is nan.')
544 | sys.exit()
545 |
546 | if (epoch + 1) % interval != 0:
547 | perf_str = 'Epoch {} [{:.1f}s]: train==[{:.5f} + {:.5f}]'.format(
548 | epoch, time.time() - t1,
549 | mf_loss + emb_loss, cl_loss / data_generator.n_train)
550 | print(perf_str)
551 | continue
552 |
553 | t2 = time.time()
554 | users_to_test = list(data_generator.test_set.keys())
555 |
556 | result = test(sess, model, users_to_test, data_generator.exist_items, batch_size, cores)
557 | hr = result['hit_ratio']
558 | ndcg = result['ndcg']
559 |
560 | score = hr[4] + ndcg[4]
561 | if score > best_score:
562 | best_score = score
563 | best_result['hr'] = [str(i) for i in hr]
564 | best_result['ndcg'] = [str(i) for i in ndcg]
565 | print('best result until now: hr@5,10,20={:.4f},{:.4f},{:.4f},ndcg@5,10,20={:.4f},{:.4f},{:.4f}'.format(
566 | hr[4], hr[9], hr[19], ndcg[4], ndcg[9], ndcg[19]))
567 | early_stopping = 0
568 | else:
569 | early_stopping += 1
570 |
571 | t3 = time.time()
572 |
573 |         perf_str = 'Epoch {} [{:.1f}s + {:.1f}s]: hit@5=[{:.5f}], hit@10=[{:.5f}], hit@20=[{:.5f}], ndcg@5=[{:.5f}], ndcg@10=[{:.5f}], ndcg@20=[{:.5f}]'.format(
574 |             epoch, t2 - t1, t3 - t2, hr[4], hr[9], hr[19], ndcg[4], ndcg[9], ndcg[19])
575 | print(perf_str)
576 |
577 | all_result[epoch + 1] = result
578 | if early_stopping == 10:
579 | break
580 | print('###########################################################################################################################')
581 | print('[{}], best result: hr@5,10,20={},{},{},ndcg@5,10,20={},{},{}'.format(dataset,
582 | best_result['hr'][4], best_result['hr'][9], best_result['hr'][19], best_result['ndcg'][4],
583 | best_result['ndcg'][9], best_result['ndcg'][19]))
--------------------------------------------------------------------------------