├── README.md
├── pairwise.py
├── testfile.tsv
└── trainfile.tsv

/README.md:
--------------------------------------------------------------------------------
# Pairwise preference ranking

```
python pairwise.py trainfile.tsv testfile.tsv
```

Performs pairwise preference ranking for a given train file and test file with binary class labels (1 = positive, anything else = negative).

Other implementations use the same pairwise transformation function for the test set and the train set. Note, however, that we need two separate functions for the pairwise transform: one for the train set that takes the labels into account (creating only pairs of one positive and one negative example), and one for the test set that does not take the labels into account (creating pairs of all items).

The binary classification on the pairwise test data gives a prediction for each pair of test items: which of the two should be ranked higher. From these pairwise preferences a ranking can be created using a greedy sort algorithm.

Pairwise preference ranking is commonly performed on grouped data: two items from the same group are comparable to each other (in an ordinal version: one is better than the other), while two items from different groups are not. A typical example is rank-learning in the context of an information retrieval system. Here the groups are the queries. Two documents retrieved for the same query have a pairwise preference; two documents retrieved for two different queries do not have a pairwise preference and should not be compared to each other.
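The train-set transform described above can be sketched on a toy group. The data and names here are illustrative, not taken from the provided example files; the sketch only assumes the row layout `[group_id, item_id, features..., label]`:

```python
import numpy as np

# Toy group, one row per item: [group_id, item_id, feat1, feat2, label]
# (illustrative data; the real script reads such rows from a TSV file)
group = np.array([
    [1, 10, 0.9, 0.2, 1],  # positive item
    [1, 11, 0.1, 0.8, 0],  # negative item
    [1, 12, 0.4, 0.5, 0],  # negative item
], dtype=float)

positives = group[group[:, -1] == 1]
negatives = group[group[:, -1] != 1]

pairs = []
for pos in positives:
    for neg in negatives:
        # subtract only the feature columns (not the ids, not the label)
        pairs.append((pos[2:-1] - neg[2:-1], 1))  # positive ranked above negative
        pairs.append((neg[2:-1] - pos[2:-1], 0))  # reversed pair, opposite label

# 1 positive x 2 negatives, both orderings -> 4 training pairs
print(len(pairs))
```

Each pair becomes one training example for a binary classifier: the feature difference vector, labeled 1 if the first item of the pair should be ranked higher and 0 otherwise.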
## Running the script

The script requires two tab-separated files (train and test) as arguments, in which:
- the first column is the group id (in IR context: the query id)
- the second column is the item id
- the last column is the label (1 = positive, not 1 = negative)
- all the columns in between are numeric feature values

The train-test split should be made at the group level: all items belonging to one group should be in the same partition.

Two example files are provided.

## Steps in the algorithm

1. Get data from the feature files
2. Do the pairwise transform for the training set
3. Do the pairwise transform for the test set
4. Train a classifier on the pairwise data
5. Make predictions on the test set
6. Greedy-sort the pairwise preferences
7. Evaluate: create a table for a Precision-Recall graph

--------------------------------------------------------------------------------
/pairwise.py:
--------------------------------------------------------------------------------
# python pairwise.py trainfile.tsv testfile.tsv

"""
Performs pairwise preference ranking for a given train file and test file with binary class labels (1 and not 1).

Pairwise preference ranking is commonly performed on grouped data: two items from the same group are comparable to each other (in an ordinal version: one is better than the other), while two items from different groups are not. A typical example is rank-learning in the context of an information retrieval system. Here the groups are the queries. Two documents retrieved for the same query have a pairwise preference; two documents retrieved for two different queries do not have a pairwise preference and should not be compared to each other.
The script requires two tab-separated files in which:
- the first column is the group id (in IR context: the query id)
- the second column is the item id
- the last column is the label (1 = positive, not 1 = negative)
- all the columns in between are numeric feature values

The train-test split should be made at the group level: all items belonging to one group should be in the same partition.

Two example files are provided.
"""

import sys
import csv
import numpy as np
from sklearn.linear_model import SGDClassifier


def get_vectors_per_groupid(filename):
    """Read a tab-separated feature file into a dict of group id -> list of row vectors."""
    vectors_per_groupid = dict()
    with open(filename, 'r') as tsv:
        for line in csv.reader(tsv, delimiter="\t"):
            if not line:  # skip empty lines
                continue
            vector = np.array(line).astype(float)  # np.float is deprecated; use the builtin
            group_id = vector[0]
            vectors_per_groupid.setdefault(group_id, []).append(vector)
    return vectors_per_groupid


'''
Note that we need 2 separate functions for the pairwise transform: one for the trainset
that takes the labels into account (only creating pairs of one positive and one negative
example), and one for the testset that does not take the labels into account (creates
pairs of all items).
'''


def pairwise_transform_trainset(vectors_for_one_group):
    # assumes that the first column is the group id, the second column is the item id
    # and the last column is the label
    positive_examples = []
    negative_examples = []
    group_id = str(int(vectors_for_one_group[0][0]))
    for vector in vectors_for_one_group:
        if vector[-1] == 1:
            positive_examples.append(vector)
        else:
            negative_examples.append(vector)
    pairwise_data = []

    for pos in positive_examples:
        for neg in negative_examples:
            if pos[1] != neg[1]:
                # the feature columns are everything between the item id and the label;
                # neither the ids nor the label should be part of the subtracted vectors
                pair_id = str(int(pos[1])) + "-" + str(int(neg[1]))
                paired1 = [group_id, pair_id]
                diff1 = np.array(pos[2:-1], dtype='float') - np.array(neg[2:-1], dtype='float')
                paired1.extend(diff1)
                paired1.append(1)  # positive item ranked above negative item

                pair_id = str(int(neg[1])) + "-" + str(int(pos[1]))
                paired2 = [group_id, pair_id]
                diff2 = np.array(neg[2:-1], dtype='float') - np.array(pos[2:-1], dtype='float')
                paired2.extend(diff2)
                paired2.append(0)  # reversed pair gets the opposite label

                pairwise_data.append(paired1)
                pairwise_data.append(paired2)
    return pairwise_data


def pairwise_transform_testset(vectors_for_one_group):
    group_id = str(int(vectors_for_one_group[0][0]))
    pairwise_data = []
    # iterate over each unordered pair once and create both orderings,
    # so that no ordered pair is generated twice
    for i, vector1 in enumerate(vectors_for_one_group):
        for vector2 in vectors_for_one_group[i + 1:]:
            if vector1[1] != vector2[1]:
                pair_id = str(int(vector1[1])) + "-" + str(int(vector2[1]))
                paired1 = [group_id, pair_id]
                diff1 = np.array(vector1[2:-1], dtype='float') - np.array(vector2[2:-1], dtype='float')
                paired1.extend(diff1)
                paired1.append(-1)  # -1 is the unknown label

                pair_id = str(int(vector2[1])) + "-" + str(int(vector1[1]))
                paired2 = [group_id, pair_id]
                diff2 = np.array(vector2[2:-1], dtype='float') - np.array(vector1[2:-1], dtype='float')
                paired2.extend(diff2)
                paired2.append(-1)  # -1 is the unknown label

                pairwise_data.append(paired1)
                pairwise_data.append(paired2)
    return pairwise_data


def get_sum_prefs(PREF, V, v):
    # preferences won minus preferences lost by v against the remaining items
    sum_prefs = 0
    for u in V:
        if (v, u) in PREF:
            sum_prefs += PREF[(v, u)]
        if (u, v) in PREF:
            sum_prefs -= PREF[(u, v)]
    return sum_prefs


def greedy_sort(X, PREF):
    V = set(X)  # work on a copy so the caller's set is not emptied
    maxv = ""
    max_sum_prefs = 0
    for v in V:
        sum_prefs = get_sum_prefs(PREF, V, v)
        if sum_prefs > max_sum_prefs:
            maxv = v
            max_sum_prefs = sum_prefs
    sorted_X = list()

    while len(V) > 1 and maxv != "":  # identity ('is') should not be used for string comparison
        sorted_X.append(maxv)
        V.remove(maxv)
        maxv = ""
        max_sum_prefs = 0
        for v in V:
            sum_prefs = get_sum_prefs(PREF, V, v)
            if sum_prefs > max_sum_prefs:
                maxv = v
                max_sum_prefs = sum_prefs

    # add the remaining v's with sum_prefs = 0
    for v in V:
        sorted_X.append(v)
    return sorted_X


def compute_precision(model, reference):
    if len(model) + len(reference) > 0:
        tp = len(model.intersection(reference))
        fp = len(model - reference)
        if tp > 0:
            return float(tp) / (float(fp) + float(tp))
        else:
            return 0
    else:
        return 1


def compute_recall(model, reference):
    if len(model) + len(reference) > 0:
        tp = len(model.intersection(reference))
        fn = len(reference - model)
        if tp > 0:
            return float(tp) / (float(fn) + float(tp))
        else:
            return 0
    else:
        return 1


if __name__ == "__main__":
    trainfile = sys.argv[1]
    testfile = sys.argv[2]

    print("Get data from feature files")
    vectors_per_groupid_trainset = get_vectors_per_groupid(trainfile)
    vectors_per_groupid_testset = get_vectors_per_groupid(testfile)

    print("Do the pairwise transform for the training set")
    pairwise_data_train = []
    for group_id in vectors_per_groupid_trainset:
        vectors_for_this_groupid = vectors_per_groupid_trainset[group_id]
        pairwise_data = pairwise_transform_trainset(np.array(vectors_for_this_groupid))
        pairwise_data_train.extend(pairwise_data)

    print("Do the pairwise transform for the test set")
    selecteditems_human = set()
    pairwise_data_test = []
    for group_id in vectors_per_groupid_testset:
        vectors_for_this_groupid = vectors_per_groupid_testset[group_id]
        for vector in vectors_for_this_groupid:
            if vector[-1] == 1:
                selecteditems_human.add(str(int(vector[1])))
        pairwise_data = pairwise_transform_testset(np.array(vectors_for_this_groupid))
        pairwise_data_test.extend(pairwise_data)

    print("Convert to numpy arrays")
    # each pairwise row is [group_id, pair_id, feature differences..., label]
    x_train = np.array([i[2:-1] for i in pairwise_data_train])
    y_train = np.array([i[-1] for i in pairwise_data_train])
    print("Train X dimensions:", x_train.shape)
    print("Train y dimensions:", y_train.shape)
    x_test = np.array([i[2:-1] for i in pairwise_data_test])
    group_id_array_test = np.array([i[0] for i in pairwise_data_test])
    item_pair_id_array_test = np.array([i[1] for i in pairwise_data_test])

    print("Test X dimensions:", x_test.shape)

    '''
    PAIRWISE PREFERENCE LEARNING
    '''

    print("Train classifier on pairwise data")
    clf = SGDClassifier(loss="hinge", penalty="l2")
    clf.fit(x_train, y_train)

    print("Make predictions on testset")
    predicted = clf.predict(x_test)

    print("Greedy sort pairwise preferences")
    '''
    The binary classification on the pairwise test data gives a prediction for each pair of test items:
    which of the two should be ranked higher. From these pairwise preferences a ranking can be created
    using a greedy sort algorithm.
    '''
    pairwise_preferences = dict()
    set_of_items_in_testset_per_group_id = dict()
    # item ids are assumed to be unique across groups
    for k, pred in enumerate(predicted):
        group_id = group_id_array_test[k]
        item_pair_id = item_pair_id_array_test[k]

        (item_id1, item_id2) = str(item_pair_id).split(sep="-")
        pairwise_preferences[(item_id1, item_id2)] = pred
        set_of_items = set_of_items_in_testset_per_group_id.setdefault(group_id, set())
        set_of_items.add(item_id1)
        set_of_items.add(item_id2)

    ranked_itemids_per_group = dict()
    for group_id in set_of_items_in_testset_per_group_id:
        set_of_items = set_of_items_in_testset_per_group_id[group_id]
        sorted_items = greedy_sort(set_of_items, pairwise_preferences)
        ranked_itemids_per_group[group_id] = sorted_items

    print("Evaluate: create table for Precision-Recall graph")

    for cutoff in range(1, 10):
        selecteditems_model = set()
        for groupid in ranked_itemids_per_group:
            ranked_itemids = ranked_itemids_per_group[groupid]
            for k, itemid in enumerate(ranked_itemids, start=1):
                if k <= cutoff:
                    selecteditems_model.add(itemid)
        recall = compute_recall(selecteditems_model, selecteditems_human)
        precision = compute_precision(selecteditems_model, selecteditems_human)
        # guard against division by zero when both precision and recall are 0
        f1 = 2 * (precision * recall) / (precision + recall) if precision + recall > 0 else 0.0
        print("pairwise_SGD", "\t", cutoff, "\t", recall, "\t", precision, "\t", f1)
--------------------------------------------------------------------------------
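As a standalone illustration of the greedy-sort step in pairwise.py, the sketch below ranks items by repeatedly picking the one that wins the most remaining pairwise preferences. It is a simplified variant that always ranks all items (the script's `greedy_sort` stops early when no item has a positive score); the data and names are illustrative:

```python
def greedy_rank(items, prefs):
    # prefs[(a, b)] = 1 means "a should be ranked above b"
    remaining = set(items)
    ranking = []
    while remaining:
        # score each item: preferences won minus preferences lost
        def score(v):
            return sum(prefs.get((v, u), 0) - prefs.get((u, v), 0) for u in remaining)
        best = max(remaining, key=score)
        ranking.append(best)
        remaining.remove(best)
    return ranking

# toy preferences: a beats b and c, b beats c
prefs = {("a", "b"): 1, ("a", "c"): 1, ("b", "c"): 1}
print(greedy_rank({"a", "b", "c"}, prefs))  # ['a', 'b', 'c']
```

With consistent preferences the result is a total order; with cyclic or missing preferences the greedy choice still produces some ranking, which is why the method is robust to a noisy pairwise classifier.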