├── CMP_ECPTRN_Code ├── Readme.md ├── TRL.py ├── TRN_Data_Generation │ ├── ComputeResponseXOR.m │ ├── TRN_Data_Generation.m │ ├── Transform.m │ ├── XORPUFgeneration.m │ ├── getPCAVector.m │ └── sampling.m └── ecp_trn_xor_7.py ├── FPGA ├── Readme.md └── User Manual for BRPUF Experiments on PFGA Board.pdf ├── Matlab_Code ├── ComputeResponseXOR.m ├── GenTestSet.m ├── Generation_Data.m ├── LR_XAPUF.m ├── LR_XAPUF_PCA_Experiment.m ├── LR_XAPUF_PCA_GetTestSet.m ├── LR_XAPUF_PCA_Rd_Challenge.m ├── Main.m ├── Readme.md ├── Transform.m ├── XORPUF_3_32_Test_Challenge.mat ├── XORPUF_3_32_Test_Response.mat ├── XORPUF_3_32_Train_Challenge.mat ├── XORPUF_3_32_Train_Response.mat ├── XORPUF_3_64_Test_Challenge.mat ├── XORPUF_3_64_Test_Response.mat ├── XORPUF_3_64_Train_Challenge.mat ├── XORPUF_3_64_Train_Response.mat ├── XORPUFgeneration.m ├── accuracy.m ├── classify.m ├── getGrad_XORPUF_model.m ├── getModelRPROP_XORPUF.m ├── getResponse_XORPUF_model.m ├── sampling.m ├── sigmiod_fn.m └── unifiedRamdonSplit.m ├── README.md └── data_BRPUF ├── BRPUF64_Test_1000_Challenge.mat ├── BRPUF64_Test_1000_Response.mat ├── BRPUF64_Train_100_Challenge.mat ├── BRPUF64_Train_100_Response.mat └── readme.md /CMP_ECPTRN_Code/Readme.md: -------------------------------------------------------------------------------- 1 | 1. This directory holds the source code for the comparison experiments. We embed PC-enhanced LAD Model I into the ECP-TRN framework to improve its accuracy. ECP-TRN is the state-of-the-art method proposed in the paper, 2 | 3 | P. Santikellur and R. S. Chakraborty, "A Computationally Efficient Tensor Regression Network-Based Modeling Attack on XOR Arbiter PUF and Its Variants," in *IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems*, vol. 40, no. 6, pp. 1197-1206, June 2021. 4 | 5 | 2. ecp_trn_xor_7.py and TRL.py are Python code adapted from ECP-TRN. 6 | 7 | Most of the code is kept unchanged. 8 | 9 | The parts we changed are marked by "####################". For example, in ecp_trn_xor_7.py, 10 | 11 | 12 | ################################################################################# 13 | 14 | y1 = tf.placeholder(tf.float32, shape = [None, 129,129,129,129, 129,129]) 15 | x = tf.placeholder(tf.float32, shape = [None, 129]) 16 | ################################################################################# 17 | 18 | is where we made changes; it differs from the original ECP-TRN code. 19 | 20 | 3. The following hyperparameters: 21 | 22 | - Rank 23 | 24 | - Batch size 25 | - Learning rate 26 | 27 | can be set in ecp_trn_xor_7.py. 28 | 29 | 4. The directory TRN_Data_Generation holds the source code to generate the training set and test set with PC-enhanced LAD Model I. 30 | 31 | - We wrote the functions TRN_Data_Generation and getPCAVector. The other functions are taken from the paper, 32 | 33 | P. H. Nguyen, D. P. Sahoo, C. Jin, K. Mahmood, U. Rührmair, and M. van Dijk, “The interpose PUF: Secure PUF design against state-of-the-art machine learning attacks,” IACR Transactions on CHES, vol. 2019, no. 4, pp. 243–290, Aug. 2019.
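For orientation, a minimal sketch of how the functions in this directory fit together, following the call order in TRN_Data_Generation.m (the sizes and parameter values below are illustrative only; see the script for the settings actually used):

    chalSize = 128; x = 5; n = 100000;                       % illustrative sizes
    XPw  = XORPUFgeneration(x, chalSize, 0.1, 1);            % weights of the x-XOR PUF instance
    chal = sampling(0, 1, n, chalSize);                      % non-repetitive random challenges
    pcs  = getPCAVector(chal);                               % principal components of the challenges
    phi  = Transform(chal, n, chalSize);                     % parity feature vectors, (0,1)->(-1,1)
    resp = ComputeResponseXOR(XPw, x, fliplr(phi), n, chalSize+1);   % per-APUF outputs
    R = resp(:,1); for k = 2:x, R = double(xor(R, resp(:,k))); end   % XOR PUF response
    train_pca = [phi pcs];                                   % PC-enhanced LAD Model I features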
34 | 35 | 36 | 37 | - Function description 38 | 39 | - TRN_Data_Generation.m: Generate XOR PUF data with PC-enhanced LAD Model I , including the weights of XOR PUF instance, training set, and test set 40 | 41 | - getPCAVector.m: Get the PCA data by dealing with challenge data 42 | 43 | - XORPUFgeneration.m: Use a Gaussian distribution with a mean of 0 and a standard deviation of 0.05 to generate the weight vector of the XOR PUF instance 44 | 45 | - sampling.m: Randomly generate non-repetitive CRP data set 46 | 47 | - Transform.m: Conversion of challenge data by phi function (0,1)->(-1,1) 48 | 49 | - ComputeResponseXOR.m: Calculate the output of each APUF of XOR PUF 50 | 51 | 52 | 53 | -------------------------------------------------------------------------------- /CMP_ECPTRN_Code/TRL.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | import sys, os 3 | 4 | import tensorflow as tf 5 | import numpy as np 6 | 7 | #Tensor Neural Network Architecture Code 8 | #Tensor regression layer (TRL). 9 | 10 | def ecp_trn(x,y1, rank, n_outputs): 11 | weight_initializer = tf.contrib.layers.xavier_initializer(0) 12 | input_shape = y1.get_shape().as_list()[1:] 13 | 14 | bias = tf.get_variable("bias_{}".format(np.prod(n_outputs)), shape=(1, np.prod(n_outputs))) 15 | 16 | rank1_tnsrs = [] 17 | 18 | for i in range(rank): 19 | rank1_tnsr = [] 20 | 21 | for j in range(len(input_shape)): 22 | rank1_tnsr.append(tf.get_variable("rank1_tnsr_{0}_{1}_{2}".format(i,j,np.prod(n_outputs)), 23 | shape = (input_shape[j]), 24 | initializer = weight_initializer)) 25 | 26 | rank1_tnsr.append(tf.get_variable("rank1_tnsr_{0}_output_{1}".format(i,np.prod(n_outputs)), 27 | shape = (n_outputs), 28 | initializer = weight_initializer)) 29 | 30 | rank1_tnsrs.append(rank1_tnsr) 31 | #The tensor data dimension use the PC-enhanced LAD Model I 32 | ########################################################### 33 | x= tf.reshape(x, [-1, 129]) 34 | ######################################################### 35 | cout=tf.zeros([n_outputs],tf.float32) 36 | for j in range(0,len(rank1_tnsrs)): 37 | tout=tf.multiply(tf.scalar_mul(1,tf.matmul(x,tf.reshape(rank1_tnsrs[j][0], [-1,1]))),tf.scalar_mul(1,tf.matmul(x,tf.reshape(rank1_tnsrs[j][1], [-1,1])))) 38 | for k in range(2,len(rank1_tnsrs[j])-1): 39 | tout=tf.multiply(tout,tf.scalar_mul(1,tf.matmul(x,tf.reshape(rank1_tnsrs[j][k], [-1,1])))) 40 | tout=tf.multiply(tout,tf.reshape(rank1_tnsrs[j][k+1], [-1,1])) 41 | cout = tf.add(cout,tout) 42 | #cout=tf.multiply(tf.scalar_mul(10,wscale),cout) 43 | return tf.add(cout,bias) 44 | 45 | -------------------------------------------------------------------------------- /CMP_ECPTRN_Code/TRN_Data_Generation/ComputeResponseXOR.m: -------------------------------------------------------------------------------- 1 | function [BResponse] = ComputeResponseXOR(XORw,nXOR,APhi,nRows,Size) 2 | % The function computes the array of responses for a given array of feature 3 | % vectors APhi and weight vector w 4 | % The response = 0 if w*APhi >0, otherwise = 1; 5 | % Detailed explanation goes here 6 | 7 | BResponse = ones(nRows,nXOR); 8 | 9 | for i=1:nRows 10 | %Outputs of each APUF instances 11 | Sum = zeros(1,nXOR); 12 | 13 | %Compute the outputs of nXOR APUFs 14 | for j=1:Size 15 | for k = 1:nXOR 16 | Sum(k) = Sum(k) + XORw(k,j)*APhi(i,j); 17 | if(Sum(k)>0) 18 | BResponse(i,k) = 0; 19 | else 20 | BResponse(i,k) = 1; 21 | end 22 | end 23 | end 24 | 25 | end 26 | 27 | end 28 | 29 | 
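% Example usage (illustrative sizes; this mirrors the call made in
% TRN_Data_Generation.m): XORw must be nXOR x (chalSize+1) and APhi must be
% nRows x (chalSize+1), i.e. the flipped output of Transform.
%
%   w   = XORPUFgeneration(3, 64, 0.1, 1);                     % 3 x 65 weight matrix
%   phi = fliplr(Transform(sampling(0, 1, 10, 64), 10, 64));   % 10 x 65 feature vectors
%   r   = ComputeResponseXOR(w, 3, phi, 10, 65);               % 10 x 3 per-APUF responses in {0,1}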
-------------------------------------------------------------------------------- /CMP_ECPTRN_Code/TRN_Data_Generation/TRN_Data_Generation.m: -------------------------------------------------------------------------------- 1 | clear all; 2 | clc; 3 | 4 | Xpw_number=10; %The number of times to regenerate the weight value 5 | Iscover=true; %whether to regenerate weights 6 | 7 | Isrepeat=true; %whether to regenerate challenge 8 | repeatnum=1; 9 | 10 | chalSize1 = 128; % Bit length of challenge 11 | mu = 0.1; % Mean of variation in delay parameters 12 | sigma = 1; % Standard deviation of variation in delay parameters 13 | x =5; % x - number of APUFs in x-XOR PUF 14 | 15 | %generate challenge and response matricies 16 | nGeneration = 100000; %size of train set 17 | 18 | number=nGeneration/1000; 19 | nTest=2000; 20 | nTotal=nGeneration+200; 21 | allac=[]; 22 | for epoch=Xpw_number:Xpw_number 23 | count=0; 24 | if Iscover==true 25 | %generate x-XOR PUF 26 | x_XPw = XORPUFgeneration(x,chalSize1,mu,sigma); 27 | filename="x_XPw_chal"+chalSize1+"_"+x+"APUF"+"_epoch"+epoch+".csv"; 28 | csvwrite(filename,x_XPw); 29 | % x_XPw=csvread(filename); 30 | end 31 | for chalgen=1:repeatnum 32 | if Isrepeat==true 33 | 34 | challenge= sampling(0,1,nGeneration,chalSize1); 35 | 36 | CurPCAVector=getPCAVector(challenge); 37 | filenamechal="chal"+chalSize1+"_trainsize"+nGeneration+"_repeatnum"+chalgen+"_epoch"+epoch+"_"+x+"APUF"+".csv"; 38 | csvwrite(filenamechal,challenge); 39 | challengePhi = Transform(challenge, nGeneration, chalSize1); 40 | 41 | challengeParity=challengePhi; 42 | csvwrite("APUF_"+x+"_XOR_Challenge_Parity_"+chalSize1+"_"+number+"k_"+Xpw_number+".csv",challengeParity); 43 | challengePhi = fliplr(challengePhi); 44 | filenamexpw="x_XPw_chal"+chalSize1+"_"+x+"APUF"+"_epoch"+epoch+".csv"; 45 | response = ComputeResponseXOR(x_XPw,x,challengePhi,nGeneration,chalSize1+1); 46 | % Number of APUFs are being XORed 47 | nAPUF = x; 48 | 49 | 50 | nChal = nGeneration; % Number of challenges 51 | C = challenge(1:nChal,:); 52 | Resp = response(1:nChal,:); 53 | clear challenge response; 54 | chalSize = size(C,2); % Bit-length of challenge 55 | 56 | % Compute the XORAPUF output 57 | R = zeros(nChal,1); % Response of XORAPUF 58 | for k=1:nAPUF 59 | Rk = Resp(:,k); 60 | R = double(xor(Rk,R)); 61 | end 62 | csvwrite(x+"-xorpuf_"+chalSize1+"_"+number+"k_"+Xpw_number+".csv",R); 63 | end 64 | 65 | train_pca=[challengeParity CurPCAVector]; 66 | csvwrite("PCA_"+x+"APUF_XOR_Challenge_Parity_"+chalSize1+"_"+number+"k_"+Xpw_number+".csv",train_pca); 67 | 68 | 69 | end 70 | 71 | end -------------------------------------------------------------------------------- /CMP_ECPTRN_Code/TRN_Data_Generation/Transform.m: -------------------------------------------------------------------------------- 1 | function [APhi] = Transform(AChallenge, nRows, ChalSize ) 2 | % The function transform the array of challenges of ChalSize-bit APUF to the 3 | % corresponding array of feature vectors APhi of (ChalSize+1)-bit 4 | % Detailed explanation goes here 5 | 6 | APhi = ones(nRows,ChalSize+1); 7 | 8 | for i=1:nRows 9 | for j=1:ChalSize 10 | APhi(i,j) = 1; 11 | for k=j:ChalSize 12 | APhi(i,j) = APhi(i,j)*(1-2*AChallenge(i,k)); 13 | end 14 | end 15 | end 16 | 17 | 18 | 19 | % er = 0 ; 20 | % m = size(AChallenge,1); 21 | % for i=1:m 22 | % R = apuf.getResponse(AChallenge(i,:)); 23 | % if(R ~= AResponse(i)) 24 | % er = er+1; 25 | % end 26 | % end 27 | % 28 | % error = er/m; 29 | end 30 | 31 | 
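% Worked example (illustrative): for the single 3-bit challenge C = [1 0 1],
% each feature is the product of (1 - 2*C(k)) over the trailing bits k, and the
% last feature is always 1, so
%   Transform([1 0 1], 1, 3)   returns   [1 -1 -1 1]
% This is the (0,1)->(-1,1) parity (phi) mapping referred to in the Readme.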
-------------------------------------------------------------------------------- /CMP_ECPTRN_Code/TRN_Data_Generation/XORPUFgeneration.m: -------------------------------------------------------------------------------- 1 | function [XORw] = XORPUFgeneration(nXOR,chalSize,mu,sigma) 2 | % The function transform the array of challenges of ChalSize-bit APUF to the 3 | % corresponding array of feature vectors APhi of (ChalSize+1)-bit 4 | % Detailed explanation goes here 5 | 6 | XORw = normrnd(mu,sigma,nXOR,chalSize+1); 7 | 8 | end 9 | 10 | -------------------------------------------------------------------------------- /CMP_ECPTRN_Code/TRN_Data_Generation/getPCAVector.m: -------------------------------------------------------------------------------- 1 | function [train_pca] =getPCAVector(trainChallenge) 2 | d=size(trainChallenge,2); 3 | challengeModel_0_1=Transform(trainChallenge, size(trainChallenge,1), size(trainChallenge,2)); 4 | test_before_backTrans=challengeModel_0_1; 5 | 6 | for j=1:size(challengeModel_0_1,1) 7 | for k=1:size(challengeModel_0_1,2) 8 | if(challengeModel_0_1(j,k)==-1) 9 | challengeModel_0_1(j,k)=0; 10 | end 11 | end 12 | end 13 | 14 | [coeff, score, LATENT, TSQUARED,explained,mu]=pca(challengeModel_0_1); 15 | 16 | train_pca = score(:,1:d); 17 | end -------------------------------------------------------------------------------- /CMP_ECPTRN_Code/TRN_Data_Generation/sampling.m: -------------------------------------------------------------------------------- 1 | function s=sampling(low,up,m,n) 2 | nGeneration = m; %size of test set 3 | challenge= randi([low up], nGeneration, n); 4 | classNo = unique(challenge,'rows'); 5 | count=size(classNo,1); 6 | while size(classNo,1) 0.97) : 115 | print("\n\n\n yipee found it \n\n\n") 116 | file.write("\n\n\n yipee found it \n\n\n") 117 | 118 | 119 | 120 | def run(outfilepath, rank, iter): 121 | with open(outfilepath,"w+") as f, open("log22_error_graph.txt","w+") as f_error, open("log22_acc_graph.txt","w+") as f_acc: 122 | for i in range(iter): 123 | main(rank = rank, file = f, f_error = f_error, f_acc = f_acc) 124 | tf.reset_default_graph() 125 | f.write("\n") 126 | f_acc.write("\n") 127 | f_error.write("\n") 128 | end_time = (time.time() - start_time) 129 | print("\n\n--- time for learning : %s seconds ---\n" % end_time) 130 | f.write("\n\n--- time for learning : %s seconds ---\n" % end_time) 131 | 132 | if __name__ == '__main__': 133 | 134 | run("log22_3.txt", 10 , 1) 135 | -------------------------------------------------------------------------------- /FPGA/Readme.md: -------------------------------------------------------------------------------- 1 | ![](C:\Users\24556\Desktop\图片\图片备份\信号灯修改图2\信号灯FPGA板.jpg) 2 | 3 | - The hardware platform hosts a Xilinx Zynq-7000 FPGA (28 nm process technology). 4 | 5 | - LED A:Reset signal. LED A on means the Reset is at high level; LED A off means the Reset is at low level. 6 | 7 | - LED B: Determine whether the BRPUF circuit is in a stable state of 0101010101···or 1010101010··· When it is in a stable state, the LED B will be on, and the output of the BRPUF circuit can be recorded at this time. Otherwise light B will not light up. 8 | 9 | - LED C: Output signal. When LED C is on, it means the output is 0. When LED C is off, it means the output is 1. 10 | 11 | - Button 1: Reset. Press Button 1 to change the reset signal of the BRPUF circuit to high or low. The default is high when power on. 12 | 13 | - Button 2: Run. Press Button 2 to apply the input challenge to the BRPUF circuit, one at a time. 
14 | 15 | 16 | 17 | We can actually use a third button (Button 3) from the platform, to pair up with Button 2. While Button 2 is used for apply the challenges to the BRPUF circuit incrementally, as C1, C2, … Ci, Ci+1, …, Button 3 can be used to apply the challenges in a decreasing order, as … Ci+1, Ci, …, C2, C1. This functionality is reserved for now. 18 | 19 | 20 | 21 | - Step-by-step instruction flow after power-on is as follows: 22 | 1. Set the Reset signal to high level (LED A is on), so that the entire circuit is in a state of all zeros. 23 | 2. Press the Button 2 to apply the input challenge data. 24 | 3. Press the Button 1 to set the reset signal line at low level (LED A is off). After the circuit is stable (LED B is on), the current output can be read as the response to the applied challenge, by reading the on/off signal given by LED C. 25 | 4. Go back to step 1, repeat this process to get another CRP. -------------------------------------------------------------------------------- /FPGA/User Manual for BRPUF Experiments on PFGA Board.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/TechnologyAiGroup/pufC2D2/a1f07df475f7ad8c1e107e1909b2bbbad34d555d/FPGA/User Manual for BRPUF Experiments on PFGA Board.pdf -------------------------------------------------------------------------------- /Matlab_Code/ComputeResponseXOR.m: -------------------------------------------------------------------------------- 1 | 2 | 3 | function [BResponse] = ComputeResponseXOR(XORw,nXOR,APhi,nRows,Size) 4 | % The function computes the array of responses for a given array of feature 5 | % vectors APhi and weight vector w 6 | % The response = 0 if w*APhi >0, otherwise = 1; 7 | % Detailed explanation goes here 8 | 9 | BResponse = ones(nRows,nXOR); 10 | 11 | for i=1:nRows 12 | %Outputs of each APUF instances 13 | Sum = zeros(1,nXOR); 14 | 15 | %Compute the outputs of nXOR APUFs 16 | for j=1:Size 17 | for k = 1:nXOR 18 | Sum(k) = Sum(k) + XORw(k,j)*APhi(i,j); 19 | if(Sum(k)>0) 20 | BResponse(i,k) = 0; 21 | else 22 | BResponse(i,k) = 1; 23 | end 24 | end 25 | end 26 | 27 | end 28 | 29 | end 30 | 31 | -------------------------------------------------------------------------------- /Matlab_Code/GenTestSet.m: -------------------------------------------------------------------------------- 1 | %Generate the Test Set for XORPUF 2 | %filenameXpw: the instance weight of the XORPUF 3 | %nGeneration:the size of the test set 4 | %chalSize1:the bits of XORPPUF 5 | %epoch:the Serial number of the generated XORPUF 6 | %x:Number of APUFs are being XORed 7 | 8 | function [filenamechal,filenameres]=GenTestSet(filenameXpw,nGeneration,chalSize1,epoch,x) 9 | challenge= sampling(0,1,nGeneration,chalSize1); 10 | filenamechal="chal"+chalSize1+"_Testsize"+nGeneration+"_epoch"+epoch+"_"+x+"APUF"+".csv"; 11 | csvwrite(filenamechal,challenge); 12 | challengePhi = Transform(challenge, nGeneration, chalSize1); 13 | challengePhi = fliplr(challengePhi); 14 | filenamexpw=filenameXpw; 15 | 16 | % C[0], C[1], ...., C[n-1] 17 | x_XPw = csvread(filenamexpw); 18 | response = ComputeResponseXOR(x_XPw,x,challengePhi,nGeneration,chalSize1+1); 19 | % Number of APUFs are being XORed 20 | nAPUF = x; 21 | 22 | 23 | nChal = nGeneration; % Number of challenges 24 | C = challenge(1:nChal,:); 25 | Resp = response(1:nChal,:); 26 | clear challenge response; 27 | chalSize = size(C,2); % Bit-length of challenge 28 | 29 | % Compute the XORAPUF output 30 | R = zeros(nChal,1); % Response of 
XORAPUF 31 | for k=1:nAPUF 32 | Rk = Resp(:,k); 33 | R = double(xor(Rk,R)); 34 | end 35 | filenameres="response"+"_Testsize"+nGeneration+"_epoch"+epoch+"_"+x+"APUF"+".csv"; 36 | csvwrite(filenameres,R); 37 | 38 | 39 | end -------------------------------------------------------------------------------- /Matlab_Code/Generation_Data.m: -------------------------------------------------------------------------------- 1 | 2 | clear all; 3 | clc; 4 | 5 | Xpw_number=1; %number of regenerated XOR PUF models 6 | 7 | Iscover=true; %whether to regenerate XOR PUF weights 8 | 9 | Isrepeat=true; %whether to regenerate the test set and training set 10 | 11 | 12 | repeatnum=1; 13 | compareAccuracyresult=[]; 14 | chalSize1 = 64; % Bit length of challenge 15 | mu = 0; % Mean of variation in delay parameters 16 | sigma = 0.05; % Standard deviation of variation in delay parameters 17 | x =2; % x - number of APUFs in x-XOR PUF 18 | 19 | %generate challenge and response matricies 20 | nGeneration = 3000; %size of train set 21 | nTest=1000; 22 | for epoch=1:Xpw_number 23 | count=0; 24 | if Iscover==true 25 | %generate x-XOR PUF 26 | x_XPw = XORPUFgeneration(x,chalSize1,mu,sigma); 27 | filename="x_XPw_chal"+chalSize1+"_"+x+"APUF"+"_epoch"+epoch+".csv"; 28 | csvwrite(filename,x_XPw); 29 | filenameXpw=filename; 30 | end 31 | for chalgen=1:repeatnum 32 | if Isrepeat==true 33 | filename="x_XPw_chal"+chalSize1+"_"+x+"APUF"+"_epoch"+epoch+".csv"; 34 | filenameXpw=filename; 35 | 36 | [Testchal,Testres]=GenTestSet(filenameXpw,nTest,chalSize1,epoch,x); 37 | 38 | challenge= sampling(0,1,nGeneration,chalSize1); 39 | filenamechal="chal"+chalSize1+"_trainsize"+nGeneration+"_repeatnum"+chalgen+"_epoch"+epoch+"_"+x+"APUF"+".csv"; 40 | csvwrite(filenamechal,challenge); 41 | challengePhi = Transform(challenge, nGeneration, chalSize1); 42 | challengePhi = fliplr(challengePhi); 43 | filenamexpw="x_XPw_chal"+chalSize1+"_"+x+"APUF"+"_epoch"+epoch+".csv"; 44 | x_XPw = csvread(filenamexpw); 45 | response = ComputeResponseXOR(x_XPw,x,challengePhi,nGeneration,chalSize1+1); 46 | nAPUF = x; 47 | 48 | nChal = nGeneration; % Number of challenges 49 | C = challenge(1:nChal,:); 50 | Resp = response(1:nChal,:); 51 | clear challenge response; 52 | chalSize = size(C,2); % Bit-length of challenge 53 | 54 | % Compute the XORAPUF output 55 | R = zeros(nChal,1); % Response of XORAPUF 56 | for k=1:nAPUF 57 | Rk = Resp(:,k); 58 | R = double(xor(Rk,R)); 59 | end 60 | filenameres="response"+"_trainsize"+nGeneration+"_repeatnum"+chalgen+"_epoch"+epoch+"_"+x+"APUF"+".csv"; 61 | csvwrite(filenameres,R); 62 | end 63 | end 64 | end 65 | 66 | 67 | 68 | 69 | -------------------------------------------------------------------------------- /Matlab_Code/LR_XAPUF.m: -------------------------------------------------------------------------------- 1 | 2 | function [allac, precision, recall, fscore]=LR_XAPUF(train_data,train_label,test_data,test_label,chalSize,xAPUFsize) 3 | %train_data:the origin train challenge data 4 | %train_label:the origin train response data 5 | %test_data:the origin test challenge data 6 | %test_label:the origin test response data 7 | %chalSize:the bits number of the XORPUF 8 | %xAPUFsize: Number of APUFs are being XORed 9 | 10 | %this function:generate a training model based on the training set through logistic regression (LR), 11 | %and test it on the test set 12 | 13 | count=0; 14 | zongac=0; 15 | while(count<7) 16 | ac=0; 17 | while ac<0.9 18 | 19 | fid = fopen('logFile.txt', 'w'); 20 | delete('accuracy.csv'); 21 | chalSize1 = chalSize; % 
Bit length of challenge 22 | 23 | x =xAPUFsize; % x - number of APUFs in x-XOR PUF 24 | 25 | nGeneration = size(train_data,1); 26 | challenge= train_data; 27 | challengePhi = Transform(challenge, nGeneration, chalSize1); 28 | challengePhi = fliplr(challengePhi); 29 | 30 | response = train_label; 31 | nAPUF = x; 32 | R = response; 33 | R(R==0)=-1; 34 | 35 | P = challengePhi; 36 | nFeatures = size(P,2)*nAPUF; 37 | nObservation = size(P,1); 38 | 39 | % Details of optimization technique 40 | % Default Parameters 41 | param.method = 'Rprop+'; 42 | param.MaxIter = 1000; 43 | param.mu_neg = 0.01; 44 | param.mu_pos = 1.1; 45 | param.delta0 = 0.0123; 46 | param.delta_min = 0; 47 | param.delta_max = 50; 48 | param.errorTol = 10e-15; 49 | 50 | delta = repmat(param.delta0,1,nFeatures); 51 | 52 | trPercent = [100]; 53 | repeat = 1; 54 | nChal = nGeneration; 55 | acMat = zeros(length(trPercent),repeat); 56 | precisionMat = zeros(length(trPercent),repeat); 57 | recallMat = zeros(length(trPercent),repeat); 58 | fscoreMat = zeros(length(trPercent),repeat); 59 | 60 | % Modeling with various amount of Traing data 61 | for i=1:length(trPercent) 62 | 63 | nTrainSample = int64(((nChal+1)*trPercent(i))/100); 64 | for j=1:repeat 65 | 66 | [trainX,trainY,~,~] = unifiedRamdonSplit(P,R,trPercent(i)); 67 | challengetest= test_data; 68 | testX = Transform(challengetest, size(challengetest,1), chalSize1); 69 | testX = fliplr(testX); 70 | 71 | testY=test_label; 72 | 73 | W0 = rand(1,nFeatures); 74 | 75 | % Training 76 | [W, grad] = getModelRPROP_XORPUF(trainX,trainY,W0,delta,nAPUF,param); 77 | fprintf(fid,'\n[%d %d] Max Grad = %g',i,j,max(abs(grad))); 78 | % Testing 79 | [Yp, ~] = classify(testX,W,nAPUF); 80 | testY(testY==-1)=0; 81 | [ac, precision, recall, fscore] = accuracy(testY,Yp); 82 | 83 | acMat(i,j) = ac; 84 | precisionMat(i,j) = precision; 85 | recallMat(i,j) = recall; 86 | fscoreMat(i,j) = fscore; 87 | 88 | end 89 | dlmwrite('accuracy.csv',acMat(i,:),'-append'); 90 | end 91 | end 92 | 93 | zongac=zongac+ac; 94 | count=count+1; 95 | end 96 | 97 | 98 | allac=zongac/7; 99 | 100 | save(['modelingResults_' num2str(nAPUF) '_XORPUF.mat'],'acMat','precisionMat','recallMat','fscoreMat'); 101 | fprintf(fid,'DONE!!!'); 102 | end 103 | %exit; -------------------------------------------------------------------------------- /Matlab_Code/LR_XAPUF_PCA_Experiment.m: -------------------------------------------------------------------------------- 1 | 2 | function allac=LR_XAPUF_PCA_Experiment(train_data,train_label,test_data,test_label,chalSize,xAPUFsize) 3 | %train_data:the origin train challenge data 4 | %train_label:the origin train response data 5 | %test_data:the origin test challenge data 6 | %test_label:the origin test response data 7 | %chalSize:the bits number of the XORPUF 8 | %xAPUFsize: Number of APUFs are being XORed 9 | allac=[]; 10 | bit_nums=32; 11 | d_set=[bit_nums+1]; 12 | d2=bit_nums+1; 13 | tm=1; 14 | challengeModel_before=train_data; 15 | challengeModel=train_data; 16 | 17 | %transform the challenge from (0,1) to (-1,1) 18 | challengeModel = Transform(challengeModel, size(challengeModel,1), size(challengeModel,2)); 19 | %transform the challenge -1->0 20 | for i=1:size(challengeModel,1) 21 | for k=1:size(challengeModel,2) 22 | if(challengeModel(i,k)==-1) 23 | challengeModel(i,k)=0; 24 | end 25 | end 26 | end 27 | 28 | %get the transform challenge data matrix 29 | challengeModel_before01_trans=Transform(challengeModel_before, size(challengeModel_before,1), size(challengeModel_before,2)); 30 | 31 | 
res_real_test=test_label; 32 | test_chal_data=test_data; 33 | 34 | backTrans=Transform(test_chal_data, size(test_chal_data,1), size(test_chal_data,2)); 35 | test_before_backTrans=backTrans; 36 | for j=1:size(backTrans,1) 37 | for k=1:size(backTrans,2) 38 | if(backTrans(j,k)==-1) 39 | backTrans(j,k)=0; 40 | end 41 | end 42 | end 43 | 44 | d=d_set(tm); 45 | %get the pca matrix by using the pca function 46 | [coeff, score, LATENT, TSQUARED,explained,mu]=pca(challengeModel); 47 | 48 | train_mean=mean(challengeModel,1); 49 | train_pca = score(:,1:d); 50 | 51 | %get the random selected the challenge matrix 52 | p = randperm(bit_nums+1); 53 | Top_N=d2; 54 | IndexSample=p(1:Top_N); 55 | IndexSample=sort(IndexSample); 56 | 57 | challenge1=zeros(size(train_pca,1),Top_N); 58 | 59 | for i=1:size(IndexSample,2) 60 | challenge1(:,i)=challengeModel_before01_trans(:,IndexSample(i)); 61 | end 62 | %get the matrix of the combination of train pca and train challenge 63 | train_pca =[challenge1 train_pca]; 64 | 65 | %deal with the test data 66 | test_mean=mean(backTrans,1); 67 | test_pca = (backTrans - train_mean)*coeff(:,1:d); 68 | 69 | 70 | challenge2=zeros(size(test_pca,1),Top_N); 71 | for i=1:size(IndexSample,2) 72 | challenge2(:,i)=test_before_backTrans(:,IndexSample(i)); 73 | end 74 | append_matrix=challenge2; 75 | %get the matrix of the combination of test pca and test challenge 76 | test_pca =[append_matrix test_pca]; 77 | 78 | 79 | %running the LR method 80 | nTrainSize=size(train_pca,1); 81 | count=0; 82 | zongac=0; 83 | while(count<7) 84 | ac=0; 85 | while(ac<0.9) 86 | [ac, precision, recall,~,Yp]=LR_XAPUF_PCA_GetTestSet(train_pca,train_label,chalSize,xAPUFsize,nTrainSize,test_pca,res_real_test); 87 | end 88 | zongac=zongac+ac; 89 | count=count+1; 90 | end 91 | allac=[allac,zongac/7]; 92 | end 93 | -------------------------------------------------------------------------------- /Matlab_Code/LR_XAPUF_PCA_GetTestSet.m: -------------------------------------------------------------------------------- 1 | 2 | function [ac, precision, recall, fscore,Yp]=LR_XAPUF_PCA_GetTestSet(train_pca,train_label,chalSize,xAPUFsize,nTrainSize,test_pca,test_label) 3 | %train_data:the origin train challenge data 4 | %train_label:the origin train response data 5 | %test_data:the origin test challenge data 6 | %test_label:the origin test response data 7 | %chalSize:the bits number of the XORPUF 8 | %xAPUFsize: Number of APUFs are being XORed 9 | fid = fopen('logFile.txt', 'w'); 10 | chalSize1 = chalSize; % Bit length of challenge 11 | 12 | x =xAPUFsize; % x - number of APUFs in x-XOR PUF 13 | 14 | %generate challenge and response matricies 15 | nGeneration = nTrainSize; %size of test set 16 | challenge= train_pca; 17 | 18 | challengePhi = fliplr(challenge); 19 | 20 | response = train_label; 21 | % Number of APUFs are being XORed 22 | nAPUF = x; 23 | % Compute the XORAPUF output 24 | R = response; % Response of XORAPUF 25 | R(R==0)=-1; 26 | 27 | % Compute features from challenge (Parity vector of challenges) 28 | P = challengePhi; 29 | nFeatures = size(P,2)*nAPUF; % Number of features for XORAPUF 30 | nObservation = size(P,1); % Total number of samples 31 | 32 | param.method = 'Rprop+'; 33 | param.mu_neg = 0.01; 34 | param.mu_pos = 1.1; 35 | param.MaxIter = 1000; 36 | param.delta0 = 0.0123; 37 | param.delta_min = 0; 38 | param.delta_max = 50; 39 | param.errorTol = 10e-10; 40 | delta = repmat(param.delta0,1,nFeatures); 41 | 42 | trPercent = [100]; 43 | repeat = 1; 44 | nChal = nGeneration; 45 | acMat = 
zeros(length(trPercent),repeat); 46 | precisionMat = zeros(length(trPercent),repeat); 47 | recallMat = zeros(length(trPercent),repeat); 48 | fscoreMat = zeros(length(trPercent),repeat); 49 | 50 | % Modeling with various amount of Traing data 51 | for i=1:length(trPercent) 52 | 53 | nTrainSample = int64(((nChal+1)*trPercent(i))/100); 54 | 55 | % Traing with ramdomly chosen set of samples 56 | for j=1:repeat 57 | 58 | [trainX,trainY,~,~] = unifiedRamdonSplit(P,R,trPercent(i)); 59 | challengetest= test_pca;param.MaxIter=param.MaxIter+500; 60 | 61 | testX = fliplr(challengetest); 62 | testY=test_label; 63 | 64 | W0 = rand(1,nFeatures); % Initial parameters value 65 | 66 | % Training 67 | [W, ~] = getModelRPROP_XORPUF(trainX,trainY,W0,delta,nAPUF,param); 68 | 69 | % Testing 70 | [Yp, ~] = classify(testX,W,nAPUF); 71 | testY(testY==-1)=0; 72 | [ac, precision, recall, fscore] = accuracy(testY,Yp); 73 | 74 | acMat(i,j) = ac; 75 | precisionMat(i,j) = precision; 76 | recallMat(i,j) = recall; 77 | fscoreMat(i,j) = fscore; 78 | 79 | end 80 | end 81 | % ac=allac_result; 82 | save(['modelingResults_' num2str(nAPUF) '_XORPUF.mat'],'acMat','precisionMat','recallMat','fscoreMat' ); 83 | 84 | fprintf(fid,'DONE!!!'); 85 | end 86 | %exit; -------------------------------------------------------------------------------- /Matlab_Code/LR_XAPUF_PCA_Rd_Challenge.m: -------------------------------------------------------------------------------- 1 | 2 | 3 | function LR_PCA_Rd_Challenge_Ac=LR_XAPUF_PCA_Rd_Challenge(train_data,train_label,test_data,test_label,chalSize,xAPUFsize) 4 | %train_data:the origin train challenge data 5 | %train_label:the origin train response data 6 | %test_data:the origin test challenge data 7 | %test_label:the origin test response data 8 | %chalSize:the bits number of the XORPUF 9 | %xAPUFsize: Number of APUFs are being XORed 10 | final_result=[]; 11 | 12 | allac=[]; %record intermediate results 13 | bitnums=32; %the bits of the XORPUF 14 | d_set=[33]; %the size of the pca matrix 15 | d2_random=[5,10,15,20];%random select the number of the challenge 16 | 17 | 18 | tm=1; 19 | Ypset=zeros(1000,50); %used for majority voting 20 | 21 | for d2_num=1:size(d2_random,2) 22 | d2=d2_random(d2_num); 23 | for randnum=1:50 24 | 25 | 26 | challengeModel_before=train_data; 27 | challengeModel=train_data; 28 | %transform the train set challenge from (0,1) to (-1,1) 29 | challengeModel = Transform(challengeModel, size(challengeModel,1), size(challengeModel,2)); 30 | 31 | %transform the train set challenge -1->0 32 | for i=1:size(challengeModel,1) 33 | for k=1:size(challengeModel,2) 34 | if(challengeModel(i,k)==-1) 35 | challengeModel(i,k)=0; 36 | end 37 | end 38 | end 39 | 40 | challengeModel_before01_trans=Transform(challengeModel_before, size(challengeModel_before,1), size(challengeModel_before,2)); 41 | res_real_test=test_label; 42 | 43 | test_chal_data=test_data; 44 | %transform the test set challenge from (0,1) to (-1,1) 45 | backTrans=Transform(test_chal_data, size(test_chal_data,1), size(test_chal_data,2)); 46 | test_before_backTrans=backTrans; 47 | 48 | 49 | %transform the test set challenge -1->0 50 | for j=1:size(backTrans,1) 51 | for k=1:size(backTrans,2) 52 | if(backTrans(j,k)==-1) 53 | backTrans(j,k)=0; 54 | end 55 | end 56 | end 57 | 58 | %get the pca matrix by using the pca function 59 | d=d_set(tm); 60 | [coeff, score, LATENT, TSQUARED,explained,mu]=pca(challengeModel); 61 | train_mean=mean(challengeModel); 62 | train_pca = score(:,1:d); 63 | train_pca(:,size(train_pca,2))=1; 64 | 65 | %get 
the random selected the challenge matrix 66 | p = randperm(bitnums); 67 | Top_N=d2; 68 | IndexSample=p(1:Top_N); 69 | IndexSample=sort(IndexSample); 70 | challenge1=zeros(size(train_pca,1),Top_N); 71 | 72 | for i=1:size(IndexSample,2) 73 | challenge1(:,i)=challengeModel_before01_trans(:,IndexSample(i)); 74 | end 75 | challenge1(:,Top_N+1:Top_N)=challengeModel_before01_trans(:,bitnums+2:bitnums+1); 76 | 77 | %get the train pca data 78 | train_pca =[challenge1 train_pca]; 79 | 80 | 81 | %deal with the test challenge set 82 | test_mean=mean(backTrans,1); 83 | 84 | % 85 | test_pca = (backTrans - train_mean)*coeff(:,1:d); 86 | test_pca(:,size(test_pca,2))=1; 87 | 88 | %get the test pca data 89 | challenge2=zeros(size(test_pca,1),Top_N); 90 | for i=1:size(IndexSample,2) 91 | challenge2(:,i)=test_before_backTrans(:,IndexSample(i)); 92 | end 93 | challenge2(:,Top_N+1:Top_N)=test_before_backTrans(:,bitnums+2:bitnums+1); 94 | append_matrix=challenge2; 95 | test_pca =[append_matrix test_pca]; 96 | 97 | 98 | nTrainSize=size(train_pca,1); 99 | 100 | %running the LR method 101 | ac=0; 102 | while ac<0.9 103 | [ac, precision, recall,~,Yp]=LR_XAPUF_PCA_GetTestSet(train_pca,train_label,chalSize,xAPUFsize,nTrainSize,test_pca,res_real_test); 104 | end 105 | Ypset(:,randnum)=Yp; 106 | allac=[allac,ac]; 107 | 108 | %A majority vote is used every ten times to get the result 109 | if(mod(randnum,10)==0) 110 | current_Ypset=Ypset(:,1:randnum); 111 | test_set_pre= mode(current_Ypset,2); 112 | [ac_real, precision_real, recall_real, fscore] = accuracy(res_real_test,test_set_pre); 113 | final_result=[final_result,ac_real]; 114 | end 115 | 116 | fclose all; 117 | end 118 | end 119 | 120 | LR_PCA_Rd_Challenge_Ac=max(max(final_result)); 121 | 122 | end -------------------------------------------------------------------------------- /Matlab_Code/Main.m: -------------------------------------------------------------------------------- 1 | warning('off'); 2 | 3 | 4 | chalSize=32; % Bit length of challenge 5 | xAPUFsize=3; % x - number of APUFs in x-XOR PUF 6 | 7 | %load data 8 | test_data=load('XORPUF_3_32_Test_Challenge.mat').XORPUF_3_32_Test_Challenge; 9 | 10 | 11 | test_label=load('XORPUF_3_32_Test_Response.mat').XORPUF_3_32_Test_Response; 12 | 13 | train_data=load('XORPUF_3_32_Train_Challenge.mat').XORPUF_3_32_Train_Challenge; 14 | 15 | 16 | train_label=load('XORPUF_3_32_Train_Response.mat').XORPUF_3_32_Train_Response; 17 | 18 | %comparison of the accuracies of the three methods 19 | 20 | % Logisitc Regression 21 | LR_Ac=LR_XAPUF(train_data,train_label,test_data,test_label,chalSize,xAPUFsize); 22 | 23 | % PC-enhanced LAD Model I 24 | LR_PCA_Challenge_Ac=LR_XAPUF_PCA_Experiment(train_data,train_label,test_data,test_label,chalSize,xAPUFsize); 25 | 26 | % PC-enhanced LAD Model II 27 | LR_PCA_Rd_Challenge_Ac=LR_XAPUF_PCA_Rd_Challenge(train_data,train_label,test_data,test_label,chalSize,xAPUFsize); 28 | 29 | -------------------------------------------------------------------------------- /Matlab_Code/Readme.md: -------------------------------------------------------------------------------- 1 | 1. Matlab codes for PC-enhanced LAD Model I and Model II for attacking XOR Arbiter PUFs 2 | 3 | 4 | 5 | 2. Function description 6 | 7 | - Main.m: Accuracy comparison of three methods: LR, PC-enhanced LAD Model I, PC-enhanced LAD Model II. 
8 | 9 | - Generation_Data.m: Generate XOR PUF data, including the weights of the XOR PUF instance, the training set, and the test set 10 | 11 | - GenTestSet.m: Generate a non-repetitive CRP test set 12 | - XORPUFgeneration.m: Use a Gaussian distribution with a mean of 0 and a standard deviation of 0.05 to generate the weight vector of the XOR PUF instance 13 | - sampling.m: Randomly generate a non-repetitive CRP data set 14 | - sigmiod_fn.m: Implement the sigmoid function 15 | - Transform.m: Convert challenge data with the phi function, (0,1)->(-1,1) 16 | - ComputeResponseXOR.m: Calculate the output of each APUF in the XOR PUF 17 | - LR_XAPUF.m: Train a model on the training set with logistic regression (LR) and test it on the test set 18 | - unifiedRamdonSplit.m: Split the data into a training set and a test set 19 | - getModelRPROP_XORPUF.m: Train the weight model with the Rprop gradient-update method 20 | - classify.m: Use the trained model weights to obtain the response, 0 or 1, for each challenge vector 21 | - accuracy.m: Calculate the accuracy of the model generated by the logistic regression method 22 | - getGrad_XORPUF_model.m: Compute the gradient of the model weights 23 | - LR_XAPUF_PCA_Experiment.m: PC-enhanced LAD Model I, using all PCs together with all challenge bits 24 | - LR_XAPUF_PCA_Rd_Challenge.m: PC-enhanced LAD Model II, using all PCs together with randomly selected challenge bits. Fifty models are trained and aggregated via majority voting; the accuracy is recorded after every ten votes. 25 | - LR_XAPUF_PCA_GetTestSet.m: Train a LAD model with logistic regression; subroutine called by PC-enhanced LAD Models I and II 26 | - getResponse_XORPUF_model.m: Get the XOR PUF response through the sigmoid function 27 | 28 | 29 | 30 | 3. Some of the files and functions are adapted from the iPUF package. 31 | 32 | The iPUF paper is, 33 | 34 | P. H. Nguyen, D. P. Sahoo, C. Jin, K. Mahmood, U. Rührmair, and M. van Dijk, “The interpose PUF: Secure PUF design against state-of-the-art machine learning attacks,” IACR Transactions on CHES, vol. 2019, no. 4, pp. 243–290, Aug. 2019.
35 | 36 | 37 | 38 | The URL of the package, 39 | 40 | https://github.com/scluconn/DA_PUF_Library/tree/master/MatLab_simulation/LR_XORPUF -------------------------------------------------------------------------------- /Matlab_Code/Transform.m: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | function [APhi] = Transform(AChallenge, nRows, ChalSize ) 5 | % The function transform the array of challenges of ChalSize-bit APUF to the 6 | % corresponding array of feature vectors APhi of (ChalSize+1)-bit 7 | % Detailed explanation goes here 8 | 9 | APhi = ones(nRows,ChalSize+1); 10 | 11 | for i=1:nRows 12 | for j=1:ChalSize 13 | APhi(i,j) = 1; 14 | for k=j:ChalSize 15 | APhi(i,j) = APhi(i,j)*(1-2*AChallenge(i,k)); 16 | end 17 | end 18 | end 19 | 20 | end 21 | 22 | -------------------------------------------------------------------------------- /Matlab_Code/XORPUF_3_32_Test_Challenge.mat: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/TechnologyAiGroup/pufC2D2/a1f07df475f7ad8c1e107e1909b2bbbad34d555d/Matlab_Code/XORPUF_3_32_Test_Challenge.mat -------------------------------------------------------------------------------- /Matlab_Code/XORPUF_3_32_Test_Response.mat: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/TechnologyAiGroup/pufC2D2/a1f07df475f7ad8c1e107e1909b2bbbad34d555d/Matlab_Code/XORPUF_3_32_Test_Response.mat -------------------------------------------------------------------------------- /Matlab_Code/XORPUF_3_32_Train_Challenge.mat: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/TechnologyAiGroup/pufC2D2/a1f07df475f7ad8c1e107e1909b2bbbad34d555d/Matlab_Code/XORPUF_3_32_Train_Challenge.mat -------------------------------------------------------------------------------- /Matlab_Code/XORPUF_3_32_Train_Response.mat: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/TechnologyAiGroup/pufC2D2/a1f07df475f7ad8c1e107e1909b2bbbad34d555d/Matlab_Code/XORPUF_3_32_Train_Response.mat -------------------------------------------------------------------------------- /Matlab_Code/XORPUF_3_64_Test_Challenge.mat: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/TechnologyAiGroup/pufC2D2/a1f07df475f7ad8c1e107e1909b2bbbad34d555d/Matlab_Code/XORPUF_3_64_Test_Challenge.mat -------------------------------------------------------------------------------- /Matlab_Code/XORPUF_3_64_Test_Response.mat: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/TechnologyAiGroup/pufC2D2/a1f07df475f7ad8c1e107e1909b2bbbad34d555d/Matlab_Code/XORPUF_3_64_Test_Response.mat -------------------------------------------------------------------------------- /Matlab_Code/XORPUF_3_64_Train_Challenge.mat: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/TechnologyAiGroup/pufC2D2/a1f07df475f7ad8c1e107e1909b2bbbad34d555d/Matlab_Code/XORPUF_3_64_Train_Challenge.mat -------------------------------------------------------------------------------- /Matlab_Code/XORPUF_3_64_Train_Response.mat: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/TechnologyAiGroup/pufC2D2/a1f07df475f7ad8c1e107e1909b2bbbad34d555d/Matlab_Code/XORPUF_3_64_Train_Response.mat -------------------------------------------------------------------------------- /Matlab_Code/XORPUFgeneration.m: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | function [XORw] = XORPUFgeneration(nXOR,chalSize,mu,sigma) 5 | % The function transform the array of challenges of ChalSize-bit APUF to the 6 | % corresponding array of feature vectors APhi of (ChalSize+1)-bit 7 | % Detailed explanation goes here 8 | 9 | XORw = normrnd(mu,sigma,nXOR,chalSize+1); 10 | 11 | end 12 | 13 | -------------------------------------------------------------------------------- /Matlab_Code/accuracy.m: -------------------------------------------------------------------------------- 1 | 2 | 3 | % Confusion Matrix 4 | % ------------------------------------------------------------------------- 5 | % ACTUAL CLASS TRUE FALSE 6 | % ------------------------------------------------------------------------- 7 | % TEST says TRUE True Positive(TP) False positive(FP) 8 | % TEST says FLASE False negative(FN) True Negative(TN) 9 | % ------------------------------------------------------------------------- 10 | % TP rate = TP/(TP+FN) [called as Sensitivity] 11 | % FN rate = FN/(TP+FN) 12 | % FP rate = FP/(FP+TP) 13 | % TN rate = TN/(FP+TN) [called as Specificity] 14 | % Accuracy = (TP + TN)/(TP+FN+FP+TN) 15 | % Recall = [Retrived U Relevent]/Relevant 16 | % = TP/(TP+FN) 17 | % Precision = [Retrived U Relevent]/Retrived 18 | % = TP/(TP+FP) 19 | 20 | 21 | function [accuracy precision recall fscore] = accuracy(Y,Yp) 22 | 23 | % True positive [Correctly Accepted] 24 | TP = sum(Yp == 1 & Y == 1)/sum(Y); 25 | 26 | % False positive [Incorrectly Accepted] 27 | FP = sum(Yp == 1 & Y == 0)/sum(Y==0); 28 | 29 | % True negative [Correctly Rejected] 30 | TN = sum(Yp == 0 & Y == 0)/sum(Y==0); 31 | 32 | % False negative [Incorrectly Rejected] 33 | FN = sum(Yp == 0 & Y == 1)/sum(Y); 34 | 35 | accuracy = (TP + TN)/(TP + FN + FP + TN); 36 | recall = TP/(TP + FN); 37 | precision = TP/(TP + FP); 38 | fscore = 2 *((recall*precision)/(recall+precision)); 39 | 40 | end -------------------------------------------------------------------------------- /Matlab_Code/classify.m: -------------------------------------------------------------------------------- 1 | 2 | function [Y P] = classify(X,theta_xor,nXOR) 3 | 4 | nChal = size(X,1); 5 | nFeaturesPerAPUF = size(X,2); 6 | indivAPUF_response = zeros(nChal,nXOR); 7 | theta_xor = vec2mat(theta_xor, nFeaturesPerAPUF); 8 | for i=1:nXOR 9 | theta = theta_xor(i,:); 10 | indivAPUF_response(:,i) = X * theta'; % compute WX' of each APUF 11 | end 12 | 13 | gx = prod(indivAPUF_response,2); % compute the product of WX' of each APUF 14 | P = sigmiod_fn(gx); % response in range [-1,1] 15 | Y = P > 0.5; 16 | 17 | end -------------------------------------------------------------------------------- /Matlab_Code/getGrad_XORPUF_model.m: -------------------------------------------------------------------------------- 1 | 2 | function grad = getGrad_XORPUF_model(theta,P,Y,nXOR) 3 | 4 | % theta: a matrix of model parameters. each row represents the 5 | % parameters for each APUF. 6 | % P: feature matrix for individual APUF. 
7 | % All arbiter PUFs have same features values 8 | % Y: is target response [-1,1] 9 | % nXOR: # of APUF to XORed 10 | 11 | nChal = size(P,1); 12 | nFeaturesPerAPUF = size(P,2); 13 | theta = vec2mat(theta, nFeaturesPerAPUF); 14 | 15 | Ypred = getResponse_XORPUF_model(P,theta,nXOR); 16 | 17 | yy = Ypred - (Y+1)/2; 18 | wx = zeros(nChal,nXOR); 19 | for i=1:nXOR 20 | wx(:,i) = P * theta(i,:)' ; 21 | end 22 | prod_wx = prod(wx,2); 23 | 24 | grad = zeros(nXOR,nFeaturesPerAPUF); 25 | for i=1:nXOR 26 | wx_i = wx(:,i); 27 | for j=1:nFeaturesPerAPUF 28 | grad(i,j) = sum((yy.*(P(:,j)./wx_i)).*prod_wx); 29 | end 30 | end 31 | 32 | grad = reshape(grad',1,nFeaturesPerAPUF*nXOR); 33 | end -------------------------------------------------------------------------------- /Matlab_Code/getModelRPROP_XORPUF.m: -------------------------------------------------------------------------------- 1 | 2 | %-------------------------------------------------------------------------- 3 | % param.errorTol = ; % Error tolerance 4 | % param.optType = ; % Optimization method 5 | % param.mu_pos = ; % Positive learning rate 6 | % param.mu_neg = ; % Negative learning rate 7 | % param.delta_max = ; % Maximum update value 8 | % param.delta_min = ; % Minimum update value 9 | % param.method 10 | %param.errorTol 11 | %-------------------------------------------------------------------------- 12 | function [W grad] = getModelRPROP_XORPUF(X,Y,W0,delta,nXOR,param) 13 | 14 | W = W0; % Weight vector to be discovered 15 | old_deltaW = zeros(1,length(W)); 16 | 17 | for itr = 1:param.MaxIter 18 | 19 | grad = getGrad_XORPUF_model(W,X,Y,nXOR); 20 | if itr > 1 21 | gg = grad .* old_grad; 22 | else 23 | gg = grad; 24 | end 25 | deltaPos = min(delta*param.mu_pos,param.delta_max).*(gg > 0); 26 | deltaNeg = max(delta*param.mu_neg,param.delta_min).*(gg < 0); 27 | deltaEq = delta.*(gg == 0); 28 | delta = deltaPos + deltaNeg + deltaEq; 29 | 30 | switch param.method 31 | 32 | case 'Rprop-' 33 | deltaW = -sign(grad).*delta; 34 | 35 | case 'Rprop+' 36 | deltaW = -sign(grad).*delta.*(gg>=0) -... 37 | old_deltaW.*(gg<0); 38 | grad = grad.*(gg>=0); 39 | old_deltaW = deltaW; 40 | 41 | case 'IRprop-' 42 | grad = grad.*(gg>=0); 43 | deltaW = -sign(grad).*delta; 44 | 45 | otherwise 46 | error('Unknown method') 47 | 48 | end %End Switch 49 | 50 | W = W + deltaW; 51 | old_grad = grad; 52 | 53 | if mod(itr,1000)==0 54 | fprintf('\nMax Grad = %g\n',max(abs(grad))); 55 | end 56 | 57 | if max(abs(grad)) < param.errorTol 58 | fprintf('\nRequire amount of Error is achieved\n'); 59 | break; 60 | else 61 | 62 | end 63 | 64 | end 65 | end 66 | -------------------------------------------------------------------------------- /Matlab_Code/getResponse_XORPUF_model.m: -------------------------------------------------------------------------------- 1 | 2 | 3 | % This function will compute the response for given challenges of 4 | % a XOR-APUF model. 5 | 6 | function response = getResponse_XORPUF_model(challenge,theta_xor,nXOR) 7 | 8 | % challenge: Challenge for XOR-APUF 9 | % theta_xor: matrix of model parameter for XOR-APUF. Each row 10 | % is the model parmaters for individual PUF. 
11 | % nXOR: # of APUF to be XORed 12 | 13 | nChal = size(challenge,1); 14 | nFeaturesPerAPUF = size(challenge,2); 15 | indivAPUF_response = zeros(nChal,nXOR); 16 | theta_xor = vec2mat(theta_xor, nFeaturesPerAPUF); 17 | for i=1:nXOR 18 | theta = theta_xor(i,:); 19 | indivAPUF_response(:,i) = challenge * theta'; % compute WX' of each APUF 20 | end 21 | 22 | gx = prod(indivAPUF_response,2); % compute the product of WX' of each APUF 23 | response = sigmiod_fn(gx); % response in range [-1,1] 24 | 25 | end 26 | -------------------------------------------------------------------------------- /Matlab_Code/sampling.m: -------------------------------------------------------------------------------- 1 | % 2 | %Generate a non-repetitive challenge set 3 | % 4 | 5 | function s=sampling(low,up,m,n) 6 | % m: The column size of the challenge set 7 | % n: The row size of each challenge 8 | nGeneration = m; %size of test set 9 | challenge= randi([low up], nGeneration, n); 10 | 11 | %unique the array challenge 12 | classNo = unique(challenge,'rows'); 13 | count=size(classNo,1); 14 | 15 | while size(classNo,1)