├── README.md
└── TRN-PD.py

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------

# TRL: Tensor rank learning in CP decomposition via convolutional neural network

Tensor factorization is a useful technique for capturing high-order interactions in data analysis. Most tensor decompositions assume that the rank is known in advance; however, determining the tensor rank is an NP-hard problem. The CANDECOMP/PARAFAC (CP) decomposition is a typical example. In this paper, we propose two methods based on a convolutional neural network (CNN) to estimate the CP tensor rank from noisy measurements. One applies the CNN to CP rank estimation directly. The other adds a pre-decomposition step for feature acquisition, feeding the resulting rank-one components to the CNN. Experimental results on synthetic and real-world datasets show that the proposed methods outperform state-of-the-art methods in terms of rank estimation accuracy.

# Notes:
1. This demo uses a 50 × 50 × 50 synthetic dataset with a 20 dB noise level. The ranks range from 50 to 100.
2. The predefined rank bound of the pre-decomposition in the TRN-PD method is 110 (see the sketch below).
3. The dataset for this demo can be downloaded from https://drive.google.com/drive/folders/1Q-yjBoRHQ4N0Um1IJcIf_xMQINLfGxUg?usp=sharing
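# Pre-decomposition sketch:
The repository does not include the pre-decomposition code itself. As a minimal illustrative sketch (an assumption, not the authors' implementation; the paper may use a different CP solver), the rank-one components fed to the CNN can be obtained by fitting a CP model with the rank bound of 110 to each noisy 50 × 50 × 50 tensor, e.g. with TensorLy, and stacking the three 50 × 110 factor matrices into the 50 × 110 × 3 input expected by TRN-PD.py:

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

RANK_BOUND = 110  # predefined rank bound from Note 2 above


def cp_features(tensor):
    """Stack the CP factor matrices of one 50x50x50 tensor into (50, 110, 3)."""
    # parafac returns (weights, factors); each factor matrix has shape (50, 110).
    _, factors = parafac(tl.tensor(tensor), rank=RANK_BOUND)
    return np.stack([tl.to_numpy(f) for f in factors], axis=-1)
```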
# Reference:
M. Zhou, Y. Liu, Z. Long, L. Chen, C. Zhu, "Tensor rank learning in CP decomposition via convolutional neural network," Signal Processing: Image Communication, 2018. DOI: 10.1016/j.image.2018.03.017. Available online: https://www.sciencedirect.com/science/article/pii/S0923596518302741

--------------------------------------------------------------------------------
/TRN-PD.py:
--------------------------------------------------------------------------------
import keras
from keras.models import Sequential, load_model
from keras.layers import Conv3D, AveragePooling3D
from keras.layers import Activation, Dropout, Flatten, Dense
from keras import optimizers
import scipy.io as scio
import numpy as np
from keras.callbacks import ModelCheckpoint, LearningRateScheduler
from keras.callbacks import ReduceLROnPlateau
import os

path = 'TRN-PD'
if not os.path.exists(path):
    os.makedirs(path)

log_filepath = path
filepath_checkpoint = 'TRN-PD_data/model.h5'

epochs = 60
batch_size = 32

# Load the pre-decomposed training and validation data: for each tensor, the
# three 50x110 CP factor matrices are stacked into a 50x110x3 input.
CPD_data = scio.loadmat('TRN-PD_data/CPD_noise_110.mat')
x_train = CPD_data['CPD_data'].reshape(2000, 50, 110, 3, 1)
rank_data = scio.loadmat('TRN-PD_data/rank_data.mat')
y_train = rank_data['rank_data']
test_CPD_data = scio.loadmat('TRN-PD_data/val_CPD_noise_110.mat')
x_test = test_CPD_data['CPD_data'].reshape(100, 50, 110, 3, 1)
test_rank_data = scio.loadmat('TRN-PD_data/val_rank_data.mat')
y_test = test_rank_data['rank_data']


def lr_schedule(epoch):
    """Learning rate schedule.

    The learning rate is reduced after 35, 45 and 55 epochs.
    Called automatically every epoch as part of the training callbacks.

    # Arguments
        epoch (int): the current epoch number
    # Returns
        lr (float32): learning rate
    """
    lr = 1e-3
    if epoch > 55:
        lr *= 1e-3
    elif epoch > 45:
        lr *= 1e-2
    elif epoch > 35:
        lr *= 1e-1
    print('Learning rate: ', lr)
    return lr


model = Sequential()

model.add(Conv3D(32, (3, 3, 2), padding='same',
                 input_shape=x_train.shape[1:]))
model.add(Activation('relu'))
model.add(Conv3D(32, (3, 3, 2), padding='same'))
model.add(Activation('relu'))
model.add(AveragePooling3D(pool_size=(2, 2, 1)))

model.add(Conv3D(64, (3, 3, 2), padding='same'))
model.add(Activation('relu'))
model.add(Conv3D(64, (3, 3, 2), padding='same'))
model.add(Activation('relu'))
model.add(AveragePooling3D(pool_size=(2, 2, 1)))

model.add(Conv3D(64, (3, 3, 2), padding='same'))
model.add(Activation('relu'))
model.add(Conv3D(64, (3, 3, 2), padding='same'))
model.add(Activation('relu'))
model.add(AveragePooling3D(pool_size=(2, 2, 3)))

# The model so far outputs 4D feature maps per sample (depth, height, width,
# channels); Flatten converts them to 1D vectors for the dense regression head.
model.add(Flatten())
model.add(Dense(1024))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(1024))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(1))  # single output: the normalized rank

adam = optimizers.Adam(lr=lr_schedule(0))
model.compile(loss='mae',
              optimizer=adam,
              metrics=['mae'])

# Normalize the rank labels (50..100) to [0, 1].
y_train = (y_train - 50) / 50
y_test = (y_test - 50) / 50

lr_scheduler = LearningRateScheduler(lr_schedule)

lr_reducer = ReduceLROnPlateau(factor=np.sqrt(0.1),
                               cooldown=0,
                               patience=5,
                               min_lr=0.5e-6)
tb_cb = keras.callbacks.TensorBoard(log_dir=log_filepath, write_images=True,
                                    histogram_freq=0)
# Save the best model so it can be restored with load_model below.
checkpoint = ModelCheckpoint(filepath=filepath_checkpoint, save_best_only=True)

callbacks = [checkpoint, lr_reducer, lr_scheduler, tb_cb]

model.fit(x_train, y_train,
          batch_size=batch_size,
          epochs=epochs,
          validation_data=(x_test, y_test),
          callbacks=callbacks,
          shuffle=True)


# Load a trained model
# model = load_model(filepath_checkpoint)

# Round the scaled predictions to integers; the +50 label offset cancels when
# the absolute error is taken, so the MAE below is that of the rank itself.
y_test_pre = model.predict(x_test)
y_test_pre = np.int8(np.round(y_test_pre * 50))
print('The MAE of rank prediction is',
      np.sum(np.absolute(y_test * 50 - y_test_pre)) / 100)
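
# --- Recovering absolute ranks: an illustrative addition, not part of the
# original script. Since the labels were normalized as y = (rank - 50) / 50,
# the absolute rank is recovered with rank = 50 * y + 50. The exact-match
# accuracy below is a hedged sketch of the "rank estimation accuracy" the
# README refers to; the paper may define it differently. ---
pred_rank = np.round(50 * model.predict(x_test).ravel() + 50).astype(int)
true_rank = np.round(50 * y_test.ravel() + 50).astype(int)
print('Rank estimation accuracy:', np.mean(pred_rank == true_rank))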