├── .gitignore
├── vgg-face-2
│   ├── mean_rgb.yaml
│   ├── class_counts.sh
│   ├── calculate_image_mean.py
│   ├── crop_face.sh
│   ├── train_resnet50_vggface_scratch.py
│   └── train_resnet101_vggface.py
├── lfw
│   ├── lfw_roc_10fold.png
│   ├── README.md
│   ├── eval_lfw.py
│   └── data
│       └── pairsDevTest.txt
├── samples
│   ├── demo_verif.png
│   ├── lfw_roc_10fold.png
│   ├── lfw_roc_devTest.png
│   ├── stage1_log_plots.png
│   ├── stage2_log_plots.png
│   ├── stage3_log_plots.png
│   ├── verif
│   │   ├── Quincy_Jones_0001.jpg
│   │   ├── Recep_Tayyip_Erdogan_0012.jpg
│   │   └── Recep_Tayyip_Erdogan_0015.jpg
│   └── tiny_dataset
│       ├── male
│       │   ├── Jim_OBrien_0003.jpg
│       │   ├── Bulent_Ecevit_0001.jpg
│       │   └── Recep_Tayyip_Erdogan_0004.jpg
│       └── female
│           ├── Brooke_Adams_0001.jpg
│           └── Emmy_Rossum_0001.jpg
├── environment.yml
├── tests
│   ├── test_fcn32s.py
│   ├── vis_prediction.py
│   ├── test_colorizer.py
│   └── test_data_reader.py
├── ijba
│   ├── README.md
│   └── eval_ijba_1_1.py
├── umd-face
│   ├── run_crop_face.py
│   └── train_resnet_umdface.py
├── data_loader.py
├── run_resnet_demo.py
├── config.py
├── README.md
├── finetune.py
├── train.py
├── utils.py
└── models.py

/.gitignore:
--------------------------------------------------------------------------------
1 | logs
2 | *.pyc
3 | lfw/*.png
--------------------------------------------------------------------------------
/vgg-face-2/mean_rgb.yaml:
--------------------------------------------------------------------------------
1 | B: 0.36426129937171936
2 | G: 0.406549334526062
3 | R: 0.49698397517204285
--------------------------------------------------------------------------------
/lfw/lfw_roc_10fold.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/AruniRC/resnet-face-pytorch/HEAD/lfw/lfw_roc_10fold.png
--------------------------------------------------------------------------------
/samples/demo_verif.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/AruniRC/resnet-face-pytorch/HEAD/samples/demo_verif.png
--------------------------------------------------------------------------------
/samples/lfw_roc_10fold.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/AruniRC/resnet-face-pytorch/HEAD/samples/lfw_roc_10fold.png
--------------------------------------------------------------------------------
/samples/lfw_roc_devTest.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/AruniRC/resnet-face-pytorch/HEAD/samples/lfw_roc_devTest.png
--------------------------------------------------------------------------------
/samples/stage1_log_plots.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/AruniRC/resnet-face-pytorch/HEAD/samples/stage1_log_plots.png
--------------------------------------------------------------------------------
/samples/stage2_log_plots.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/AruniRC/resnet-face-pytorch/HEAD/samples/stage2_log_plots.png
--------------------------------------------------------------------------------
/samples/stage3_log_plots.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/AruniRC/resnet-face-pytorch/HEAD/samples/stage3_log_plots.png
--------------------------------------------------------------------------------
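The per-channel means in `mean_rgb.yaml` above are written out by `vgg-face-2/calculate_image_mean.py` (shown later in this listing). A minimal sketch of consuming them in a torchvision pipeline -- the unit standard deviation (mean-subtraction only) is an assumption for illustration, not something the repo pins down:

```python
import yaml
import torchvision.transforms as transforms

with open('vgg-face-2/mean_rgb.yaml') as f:
    m = yaml.safe_load(f)  # {'B': ..., 'G': ..., 'R': ...}, values in [0, 1]

# torchvision image tensors are (R, G, B) ordered
normalize = transforms.Normalize(mean=[m['R'], m['G'], m['B']],
                                 std=[1.0, 1.0, 1.0])
```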
/samples/verif/Quincy_Jones_0001.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/AruniRC/resnet-face-pytorch/HEAD/samples/verif/Quincy_Jones_0001.jpg -------------------------------------------------------------------------------- /samples/tiny_dataset/male/Jim_OBrien_0003.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/AruniRC/resnet-face-pytorch/HEAD/samples/tiny_dataset/male/Jim_OBrien_0003.jpg -------------------------------------------------------------------------------- /samples/verif/Recep_Tayyip_Erdogan_0012.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/AruniRC/resnet-face-pytorch/HEAD/samples/verif/Recep_Tayyip_Erdogan_0012.jpg -------------------------------------------------------------------------------- /samples/verif/Recep_Tayyip_Erdogan_0015.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/AruniRC/resnet-face-pytorch/HEAD/samples/verif/Recep_Tayyip_Erdogan_0015.jpg -------------------------------------------------------------------------------- /samples/tiny_dataset/female/Brooke_Adams_0001.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/AruniRC/resnet-face-pytorch/HEAD/samples/tiny_dataset/female/Brooke_Adams_0001.jpg -------------------------------------------------------------------------------- /samples/tiny_dataset/female/Emmy_Rossum_0001.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/AruniRC/resnet-face-pytorch/HEAD/samples/tiny_dataset/female/Emmy_Rossum_0001.jpg -------------------------------------------------------------------------------- /samples/tiny_dataset/male/Bulent_Ecevit_0001.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/AruniRC/resnet-face-pytorch/HEAD/samples/tiny_dataset/male/Bulent_Ecevit_0001.jpg -------------------------------------------------------------------------------- /samples/tiny_dataset/male/Recep_Tayyip_Erdogan_0004.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/AruniRC/resnet-face-pytorch/HEAD/samples/tiny_dataset/male/Recep_Tayyip_Erdogan_0004.jpg -------------------------------------------------------------------------------- /environment.yml: -------------------------------------------------------------------------------- 1 | name: resnet-demo 2 | dependencies: 3 | - python=2.7 4 | - scipy 5 | - Pillow 6 | - tqdm 7 | - scikit-learn 8 | - scikit-image 9 | - numpy 10 | - matplotlib 11 | - ipython 12 | - pyyaml 13 | 14 | -------------------------------------------------------------------------------- /vgg-face-2/class_counts.sh: -------------------------------------------------------------------------------- 1 | 2 | 3 | # Utility script to get class frequencies of VGGFace2 based on #images in class-folder 4 | 5 | # >>>>>> Set paths here <<<<<< 6 | DATA_PATH=/home/renyi/arunirc/data1/datasets/vggface2/train 7 | DATA_PATH=/home/renyi/arunirc/data1/datasets/vggface2/train-crop 8 | touch vgg-face-2/vggface_class_counts.txt 9 | 10 | for folder in `ls ${DATA_PATH}`; do 11 | img_count=`ls ${DATA_PATH}/${folder} | wc -l` 12 | echo ${folder}' '${img_count} 13 | echo ${folder}' 
'${img_count} >> vgg-face-2/vggface_class_counts.txt 14 | done 15 | -------------------------------------------------------------------------------- /lfw/README.md: -------------------------------------------------------------------------------- 1 | ## LFW setup 2 | 3 | This README shows how to set up the downloaded face images from the LFW dataset for face verification evaluation. 4 | 5 | Download the deep-funneled (roughly-aligned) images from LFW: `http://vis-www.cs.umass.edu/lfw/lfw-deepfunneled.tgz`. Extract these at some local folder, referred to henceforth as `LFW_DIR`. 6 | 7 | NOTE: all scripts are to be executed from the project root directory, *not* the sub-folder where this README file is located. 8 | 9 | ### Validation results 10 | 11 | The LFW database provides DevTrain and DevTest splits as validation sets for developing code without overfitting to the 10 folds used in the final evaluation. The DevTest pairs are saved at `./lfw/data/pairsDevTest.txt` 12 | 13 | Run `python ./lfw/eval_lfw.py -d LFW_DIR -m MODEL_PATH --fold 0` to get AUC score for verification on the dev test split. The number should be around 0.9989. Setting the `--fold` flag to 10 will evaluate on the full 10 folds of LFW test set. Note, ROC curves and AUC metrics are reported (not EER). 14 | 15 | -------------------------------------------------------------------------------- /tests/test_fcn32s.py: -------------------------------------------------------------------------------- 1 | # FIXME: Import order causes error: 2 | # ImportError: dlopen: cannot load any more object with static TL 3 | # https://github.com/pytorch/pytorch/issues/2083 4 | import torch 5 | 6 | import matplotlib.pyplot as plt 7 | import numpy as np 8 | import skimage.data 9 | 10 | from torchfcn.models.fcn32s import get_upsampling_weight 11 | 12 | 13 | def test_get_upsampling_weight(): 14 | src = skimage.data.coffee() 15 | x = src.transpose(2, 0, 1) 16 | x = x[np.newaxis, :, :, :] 17 | x = torch.from_numpy(x).float() 18 | x = torch.autograd.Variable(x) 19 | 20 | in_channels = 3 21 | out_channels = 3 22 | kernel_size = 4 23 | 24 | m = torch.nn.ConvTranspose2d( 25 | in_channels, out_channels, kernel_size, stride=2, bias=False) 26 | m.weight.data = get_upsampling_weight( 27 | in_channels, out_channels, kernel_size) 28 | 29 | y = m(x) 30 | 31 | y = y.data.numpy() 32 | y = y[0] 33 | y = y.transpose(1, 2, 0) 34 | dst = y.astype(np.uint8) 35 | 36 | assert abs(src.shape[0] * 2 - dst.shape[0]) <= 2 37 | assert abs(src.shape[1] * 2 - dst.shape[1]) <= 2 38 | 39 | return src, dst 40 | 41 | 42 | if __name__ == '__main__': 43 | src, dst = test_get_upsampling_weight() 44 | plt.subplot(121) 45 | plt.imshow(src) 46 | plt.title('x1: {}'.format(src.shape)) 47 | plt.subplot(122) 48 | plt.imshow(dst) 49 | plt.title('x2: {}'.format(dst.shape)) 50 | plt.show() 51 | -------------------------------------------------------------------------------- /tests/vis_prediction.py: -------------------------------------------------------------------------------- 1 | import argparse 2 | import os 3 | import os.path as osp 4 | import numpy as np 5 | import PIL.Image 6 | import skimage.io 7 | import skimage.color as color 8 | import matplotlib 9 | matplotlib.use('Agg') 10 | import matplotlib.pyplot as plt 11 | import torch 12 | from torch.autograd import Variable 13 | 14 | 15 | import sys 16 | sys.path.append('/vis/home/arunirc/data1/Research/colorize-fcn/colorizer-fcn') 17 | import utils 18 | import data_loader 19 | 20 | 21 | root = 
'/vis/home/arunirc/data1/datasets/ImageNet/images/'
22 | out_path = '/data2/arunirc/Research/colorize-fcn/pytorch-fcn/tests/data_tests/'
23 | GMM_PATH = '/srv/data1/arunirc/Research/colorize-fcn/colorizer-fcn/logs/MODEL-fcn32s_color_CFG-014_VCS-db517d6_TIME-20171230-212406/gmm.pkl'
24 | MEAN_L_PATH = '/srv/data1/arunirc/Research/colorize-fcn/colorizer-fcn/logs/MODEL-fcn32s_color_CFG-014_VCS-db517d6_TIME-20171230-212406/mean_l.npy'
25 | cuda = torch.cuda.is_available()
26 |
27 |
28 |
29 | def main():
30 |     dataset = data_loader.ColorizeImageNet(
31 |         root, split='val', set='small',
32 |         bins='soft', num_hc_bins=16,
33 |         gmm_path=GMM_PATH, mean_l_path=MEAN_L_PATH)
34 |     img, labels = dataset.__getitem__(0)
35 |     gmm = dataset.gmm
36 |     mean_l = dataset.mean_l
37 |
38 |     img_file = dataset.files['val'][0]  # same index as the item read above
39 |     im_orig = skimage.io.imread(img_file)
40 |
41 |     # ... predicted labels and input image (mean subtracted)
42 |     labels = labels.numpy()
43 |     img = img.squeeze().numpy()
44 |     im_rgb = utils.colorize_image_hc(labels, img, gmm, mean_l)
45 |
46 |     plt.imshow(im_rgb)
47 |     plt.show()
48 |
49 |     # FIXME: unfinished forward-pass check -- `model` is never defined in
50 |     # this script, so the block below stays commented out until one is loaded:
51 |     # inputs = Variable(torch.from_numpy(img))
52 |     # if cuda:
53 |     #     inputs = inputs.cuda()
54 |     # outputs = model(inputs)  # TODO: assertions
55 |     # del inputs, outputs
56 |
57 |
58 |
59 |
60 |
61 |
62 |
63 | if __name__ == '__main__':
64 |     main()
65 |
--------------------------------------------------------------------------------
/ijba/README.md:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 | ## Resnet-101, scale-224x224 [best scaling strategy]
5 | TAR @ FAR=0.0001 : 0.5276
6 | TAR @ FAR=0.0010 : 0.7609
7 | TAR @ FAR=0.0100 : 0.9148
8 | TAR @ FAR=0.1000 : 0.9845
9 |
10 | ## Resnet-101, scale-256, center-crop-224
11 | TAR @ FAR=0.0001 : 0.3485
12 | TAR @ FAR=0.0010 : 0.6033
13 | TAR @ FAR=0.0100 : 0.8752
14 | TAR @ FAR=0.1000 : 0.9767
15 |
16 | ## Resnet-101, scale-224, center-crop-224
17 | TAR @ FAR=0.0001 : 0.4173
18 | TAR @ FAR=0.0010 : 0.6968
19 | TAR @ FAR=0.0100 : 0.9129
20 | TAR @ FAR=0.1000 : 0.9850
21 |
22 | ---
23 |
24 | ## Resnet-101-512d-norm, scale-224x224 [cfg-23]
25 | TAR @ FAR=0.0001 : 0.5790
26 | TAR @ FAR=0.0010 : 0.7704
27 | TAR @ FAR=0.0100 : 0.9240
28 | TAR @ FAR=0.1000 : 0.9884
29 |
30 | (epoch3)
31 | TAR @ FAR=0.0001 : *0.6100*
32 | TAR @ FAR=0.0010 : *0.7848*
33 | TAR @ FAR=0.0100 : 0.9262
34 | TAR @ FAR=0.1000 : 0.9878
35 |
36 | ( + sqrt)
37 | TAR @ FAR=0.0001 : 0.6112
38 | TAR @ FAR=0.0010 : 0.7984
39 | TAR @ FAR=0.0100 : 0.9251
40 | TAR @ FAR=0.1000 : 0.9889
41 |
42 | ( + cosine)
43 | TAR @ FAR=0.0001 : 0.6100
44 | TAR @ FAR=0.0010 : 0.7914
45 | TAR @ FAR=0.0100 : 0.9262
46 | TAR @ FAR=0.1000 : 0.9878
47 |
48 | ( + cosine + sqrt)
49 | TAR @ FAR=0.0001 : 0.6067
50 | TAR @ FAR=0.0010 : 0.7987
51 | TAR @ FAR=0.0100 : 0.9248
52 | TAR @ FAR=0.1000 : 0.9885
53 |
54 | [cfg-24]
55 |
56 |
57 | ---
58 |
59 |
60 | ## Resnet-101-512d-norm, scale-224x224 [cfg-22, ft stage-2]
61 | TAR @ FAR=0.0001 : 0.5814
62 | TAR @ FAR=0.0010 : 0.7651
63 | TAR @ FAR=0.0100 : 0.9242
64 | TAR @ FAR=0.1000 : 0.9867
65 |
66 | ---
67 |
68 | ## Resnet-101-512d, scale-224x224 [cfg-23, ft stage-2]
69 | TAR @ FAR=0.0001 : 0.5926
70 | TAR @ FAR=0.0010 : **0.7919**
71 | TAR @ FAR=0.0100 : 0.9262
72 | TAR @ FAR=0.1000 : 0.9872
73 |
74 | ## Resnet-101-512d, scale-224x224 [cfg-21, ft stage-1]
75 | TAR @ FAR=0.0001 : 0.5723
76 | TAR @ FAR=0.0010 : 0.7834
77 | TAR @ FAR=0.0100 : 0.9240
78 | TAR @ FAR=0.1000 : 0.9845
79 |
80 |
81 |
82 |
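The tables above report TAR (true accept rate) at fixed FAR (false accept rate) operating points on the IJB-A 1:1 verification protocol; the protocol handling itself lives in `eval_ijba_1_1.py` alongside this README. As a rough sketch -- assuming scikit-learn, a vector of pair similarity scores, and 1/0 same-subject labels, none of which are shown here -- such numbers can be read off the ROC curve like this:

```python
import numpy as np
from sklearn import metrics

def print_tar_at_far(labels, scores, far_points=(1e-4, 1e-3, 1e-2, 1e-1)):
    # labels: 1 = same subject, 0 = different; scores: higher = more similar
    far, tar, _ = metrics.roc_curve(labels, scores)
    for f in far_points:
        # interpolate the ROC curve at the requested FAR operating point
        print 'TAR @ FAR=%.4f : %.4f' % (f, np.interp(f, far, tar))
```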
-------------------------------------------------------------------------------- /tests/test_colorizer.py: -------------------------------------------------------------------------------- 1 | import argparse 2 | import os 3 | import os.path as osp 4 | import numpy as np 5 | import PIL.Image 6 | import skimage.io 7 | import skimage.color as color 8 | from skimage import img_as_ubyte 9 | import torch 10 | from torch.autograd import Variable 11 | import sys 12 | sys.path.append('/vis/home/arunirc/data1/Research/colorize-fcn/colorizer-fcn') 13 | # import matplotlib 14 | # matplotlib.use('agg') 15 | # import matplotlib.pyplot as plt 16 | 17 | # import models 18 | # import utils 19 | import data_loader 20 | 21 | root = '/srv/data1/arunirc/datasets/ImageNet/images/' 22 | cuda = torch.cuda.is_available() 23 | GMM_PATH = '/srv/data1/arunirc/Research/colorize-fcn/colorizer-fcn/logs/MODEL-fcn32s_color_CFG-014_VCS-db517d6_TIME-20171230-212406/gmm.pkl' 24 | MEAN_L_PATH = '/srv/data1/arunirc/Research/colorize-fcn/colorizer-fcn/logs/MODEL-fcn32s_color_CFG-014_VCS-db517d6_TIME-20171230-212406/mean_l.npy' 25 | 26 | 27 | def test_color_gmm(): 28 | print 'Entering: test_color' 29 | dataset = data_loader.ColorizeImageNet( 30 | root, split='val', set='tiny', 31 | bins='soft', num_hc_bins=16, 32 | gmm_path=GMM_PATH, mean_l_path=MEAN_L_PATH) 33 | 34 | img, labels = dataset.__getitem__(0) 35 | gmm = dataset.gmm 36 | 37 | labels = labels.numpy() 38 | img = img.squeeze().numpy() 39 | labels = labels.astype(gmm.means_.dtype) 40 | img = img.astype(gmm.means_.dtype) 41 | 42 | # expectation over GMM centroids 43 | hc_means = gmm.means_.astype(labels.dtype) 44 | im_hc = np.tensordot(labels, hc_means, (2,0)) 45 | im_l = img + dataset.mean_l.astype(img.dtype) 46 | im_rgb = dataset.hue_chroma_to_rgb(im_hc, im_l) 47 | low, high = np.min(im_rgb), np.max(im_rgb) 48 | im_rgb = (im_rgb - low) / (high - low) 49 | im_out = img_as_ubyte(im_rgb) 50 | skimage.io.imsave("tests/output.png", im_out) 51 | 52 | img_file = dataset.files['val'][0] 53 | im_orig = skimage.io.imread(img_file) 54 | skimage.io.imsave("tests/orig.png", im_orig) 55 | 56 | 57 | 58 | def main(): 59 | test_color_gmm() 60 | 61 | 62 | 63 | 64 | 65 | if __name__ == '__main__': 66 | main() 67 | -------------------------------------------------------------------------------- /vgg-face-2/calculate_image_mean.py: -------------------------------------------------------------------------------- 1 | import argparse 2 | import os 3 | import os.path as osp 4 | 5 | import torch 6 | import torchvision 7 | import torch.utils.data 8 | import torchvision.transforms as transforms 9 | import torchvision.datasets as datasets 10 | 11 | import yaml 12 | import tqdm 13 | import numpy as np 14 | import matplotlib.pyplot as plt 15 | 16 | here = osp.dirname(osp.abspath(__file__)) # output folder is located here 17 | root_dir,_ = osp.split(here) 18 | import sys 19 | sys.path.append(root_dir) 20 | 21 | 22 | 23 | ''' 24 | Calculate mean R, G, B values for VGGFace2 dataset 25 | -------------------------------------------------- 26 | Following implementation: [VGGFace2](https://arxiv.org/pdf/1710.08092.pdf) 27 | ''' 28 | 29 | def main(): 30 | parser = argparse.ArgumentParser() 31 | parser.add_argument('-d', '--dataset_path', 32 | default='/srv/data1/arunirc/datasets/vggface2') 33 | args = parser.parse_args() 34 | 35 | 36 | # os.environ['CUDA_VISIBLE_DEVICES'] = str(gpu) 37 | cuda = torch.cuda.is_available() 38 | 39 | torch.manual_seed(1337) 40 | if cuda: 41 | torch.cuda.manual_seed(1337) 42 | 
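# cuDNN benchmark mode auto-tunes convolution algorithms; it is fastest when all input batches share one size: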
torch.backends.cudnn.enabled = True
43 |     torch.backends.cudnn.benchmark = True  # enable if all images are same size
44 |
45 |     # -----------------------------------------------------------------------------
46 |     # 1. Dataset
47 |     # -----------------------------------------------------------------------------
48 |     data_root = args.dataset_path
49 |     kwargs = {'num_workers': 4, 'pin_memory': True} if cuda else {}
50 |
51 |     # Data transforms
52 |     train_transform = transforms.Compose([
53 |         transforms.Scale(256),  # smaller side resized
54 |         transforms.RandomCrop(224),
55 |         transforms.RandomHorizontalFlip(),
56 |         # transforms.RandomGrayscale(p=0.2),
57 |         transforms.ToTensor()
58 |     ])
59 |
60 |     # Data loaders
61 |     traindir = osp.join(data_root, 'train')
62 |     dataset = datasets.ImageFolder(traindir, train_transform)
63 |     train_loader = torch.utils.data.DataLoader(
64 |                     dataset, shuffle=True, batch_size=128, **kwargs)
65 |
66 |     rgb_mean = []
67 |     for batch_idx, (images, lbl) in tqdm.tqdm( enumerate(train_loader),
68 |                                         total=len(train_loader),
69 |                                         desc='Sampling images' ):
70 |         rgb_mean.append( ((images.mean(dim=0)).mean(dim=-1)).mean(dim=-1) )
71 |         if batch_idx == 100:
72 |             break
73 |
74 |     print len(rgb_mean)
75 |     rgb_mean = torch.mean( torch.stack(rgb_mean, dim=1), dim=1)
76 |     print rgb_mean
77 |
78 |     res = {}
79 |     res['R'] = rgb_mean[0]
80 |     res['G'] = rgb_mean[1]
81 |     res['B'] = rgb_mean[2]
82 |
83 |     with open(osp.join(here, 'mean_rgb.yaml'), 'w') as f:
84 |         yaml.dump(res, f, default_flow_style=False)
85 |
86 |
87 |
88 |
89 | if __name__ == '__main__':
90 |     main()
91 |
92 |
--------------------------------------------------------------------------------
/vgg-face-2/crop_face.sh:
--------------------------------------------------------------------------------
1 |
2 |
3 | # Utility script to crop large image datasets from the console
4 |
5 | # INFO: kill all spawned sub-processes: kill $(ps -s $$ -o pid=)
6 |
7 | crop_image_multi () {
8 |     DATA_PATH=$1
9 |     OUT_PATH=$2
10 |     ANNOT_FILE=$3
11 |     SUBJECT_FOLDER=$4
12 |
13 |     OLDIFS=$IFS
14 |     IFS=,
15 |     [ ! -f $ANNOT_FILE ] && { echo "$ANNOT_FILE file not found"; exit 99; }
16 |
17 |     COUNT=0
18 |     while read flname sid xmin ymin width height
19 |     do
20 |         IFS='/' read -r -a array <<< "$flname"
21 |         SUBJECT_DIR="${array[-2]}"
22 |
23 |         if [ "${SUBJECT_DIR}" == "${SUBJECT_FOLDER}" ]; then
24 |             mkdir -p ${OUT_PATH}/${SUBJECT_DIR}
25 |             IMG_PATH=${DATA_PATH}/${array[-2]}/${array[-1]}
26 |             IMG_OUT=${OUT_PATH}/${array[-2]}/${array[-1]}
27 |             convert -crop ${width}x${height}+${xmin}+${ymin} ${IMG_PATH} ${IMG_OUT}
28 |         fi
29 |
30 |     done < $ANNOT_FILE
31 |     IFS=$OLDIFS
32 | }
33 |
34 |
35 | create_val_dir () {
36 |     # For each subject's folder in train_data, move 2 images into val_folder
37 |     TRAIN_DATA=$1
38 |     VAL_DATA=$2
39 |     mkdir -p ${VAL_DATA}
40 |     for folder in `ls ${TRAIN_DATA}`; do
41 |         echo $folder
42 |         mkdir -p ${VAL_DATA}/${folder}
43 |         for filename in `ls ${TRAIN_DATA}/${folder} | tail -n 2`; do
44 |             mv ${TRAIN_DATA}/${folder}/${filename} ${VAL_DATA}/${folder}
45 |         done
46 |     done
47 | }
48 |
49 |
50 | crop_image () {
51 |     DATA_PATH=$1
52 |     OUT_PATH=$2
53 |     ANNOT_FILE=$3
54 |
55 |     OLDIFS=$IFS
56 |     IFS=,
57 |     [ !
-f $ANNOT_FILE ] && { echo "$ANNOT_FILE file not found"; exit 99; } 58 | 59 | COUNT=0 60 | while read flname sid xmin ymin width height 61 | do 62 | IFS='/' read -r -a array <<< "$flname" 63 | SUBJECT_DIR="${array[-2]}" 64 | mkdir -p ${OUT_PATH}/${SUBJECT_DIR} 65 | IMG_PATH=${DATA_PATH}/${array[-2]}/${array[-1]} 66 | IMG_OUT=${OUT_PATH}/${array[-2]}/${array[-1]} 67 | convert -crop ${width}x${height}+${xmin}+${ymin} ${IMG_PATH} ${IMG_OUT} 68 | 69 | done < $ANNOT_FILE 70 | IFS=$OLDIFS 71 | } 72 | 73 | 74 | 75 | 76 | 77 | # >>>>>> Set paths here <<<<<< 78 | DATA_PATH=/home/renyi/arunirc/data1/datasets/vggface2/train 79 | OUT_PATH=/home/renyi/arunirc/data1/datasets/vggface2/train-crop 80 | ANNOT_FILE=/home/renyi/arunirc/data1/datasets/vggface2/vggface2_disk1.csv 81 | # Annotations format: filename subject_id xmin ymin width height 82 | VAL_PATH=/home/renyi/arunirc/data1/datasets/vggface2/val-crop 83 | 84 | 85 | # crop faces out of images and save into "train-crop" output folder 86 | mkdir -p ${OUT_PATH} 87 | date 88 | # crop_image ${DATA_PATH} ${OUT_PATH} ${ANNOT_FILE} 89 | date 90 | echo "Done cropping" 91 | 92 | # take 2 face images per subject and save into "val-crop" folder 93 | create_val_dir ${OUT_PATH} ${VAL_PATH} 94 | echo "Done creating validation set" 95 | 96 | 97 | 98 | 99 | # COUNT=0 100 | # WAIT_COUNT=0 101 | # MAXPROG=`ls ${DATA_PATH} | wc -l` 102 | 103 | # for folder in `ls ${DATA_PATH}`; do 104 | # # show progress 105 | # ((WAIT_COUNT += 1)) 106 | # ((COUNT += 1)) 107 | # echo ${COUNT} 108 | # PROGRESS=`echo ${COUNT}*100/${MAXPROG}|bc -l` 109 | # echo -n "${PROGRESS} % " 110 | # # echo -n "$((${COUNT}*100/${MAXPROG})) % " 111 | # echo -n R | tr 'R' '\r' 112 | # # if [ ${WAIT_COUNT} -eq 20 ]; then 113 | # # # don't spawn more than 20 processes at a time 114 | # # echo 'waiting for 20 processes to finish' 115 | # # echo "Progress: $((${COUNT}*100/${MAXPROG})) % " 116 | # # wait 117 | # # ((WAIT_COUNT=0)) 118 | # # fi 119 | 120 | # # if [ ${COUNT} -ge 20 ]; then 121 | # # break 122 | # # fi 123 | # done 124 | 125 | # wait 126 | # echo "Done cropping" 127 | 128 | # mkdir -p ${OUT_PATH} 129 | # date 130 | # crop_image ${DATA_PATH} ${OUT_PATH} ${ANNOT_FILE} 131 | # date -------------------------------------------------------------------------------- /umd-face/run_crop_face.py: -------------------------------------------------------------------------------- 1 | import argparse 2 | import os 3 | import os.path as osp 4 | 5 | # import torch 6 | # import torchvision 7 | # import torch.utils.data 8 | # import torchvision.datasets as datasets 9 | 10 | import yaml 11 | import numpy as np 12 | import matplotlib 13 | matplotlib.use('Agg') 14 | import matplotlib.pyplot as plt 15 | import PIL 16 | from tqdm import tqdm 17 | # from multiprocessing.pool import ThreadPool as Pool 18 | # pool_size = 5 # your "parallelness" 19 | # pool = Pool(pool_size) 20 | 21 | ''' 22 | Crops out the faces from UMDFace images using the annotations in 23 | umdfaces_batch*_ultraface.csv. 24 | Automatically creates "train" and "val" folders. 
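Annotations are read from each batch's umdfaces_batch*_ultraface.csv, using its
FILE, SUBJECT_ID, FACE_X, FACE_Y, FACE_WIDTH and FACE_HEIGHT columns.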
25 |
26 | '''
27 |
28 | def main():
29 |     parser = argparse.ArgumentParser()
30 |     parser.add_argument('-d', '--dataset_path',
31 |                         default='/srv/data1/arunirc/datasets/UMDFaces/',
32 |                         help='Location of the folders containing 3 batches of UMDFaces stills.')
33 |     parser.add_argument('-o', '--output_path',
34 |                         default='/srv/data1/arunirc/datasets/UMDFaces/face_crops')
35 |     parser.add_argument('-n', '--num_val', type=int, default=2)
36 |     parser.add_argument('-b', '--batch', type=int, default=-1,
37 |                         help='crop faces of specified UMDFaces batch')
38 |     args = parser.parse_args()
39 |
40 |     # torch.manual_seed(1337)
41 |
42 |     if not osp.exists(args.output_path):
43 |         os.makedirs(args.output_path)
44 |
45 |     if not osp.exists(osp.join(args.output_path, 'train')):
46 |         os.makedirs(osp.join(args.output_path, 'train'))
47 |
48 |     if not osp.exists(osp.join(args.output_path, 'val')):
49 |         os.makedirs(osp.join(args.output_path, 'val'))
50 |
51 |
52 |     # -----------------------------------------------------------------------------
53 |     # 1. Dataset
54 |     # -----------------------------------------------------------------------------
55 |     dir_batch = (
56 |         osp.join(args.dataset_path, 'umdfaces_batch1'),
57 |         osp.join(args.dataset_path, 'umdfaces_batch2'),
58 |         osp.join(args.dataset_path, 'umdfaces_batch3'))
59 |
60 |     # dataset_batch = [datasets.ImageFolder(b) for b in dir_batch]
61 |
62 |     annot_files = (
63 |         osp.join(dir_batch[0], 'umdfaces_batch1_ultraface.csv'),
64 |         osp.join(dir_batch[1], 'umdfaces_batch2_ultraface.csv'),
65 |         osp.join(dir_batch[2], 'umdfaces_batch3_ultraface.csv'))
66 |
67 |     for fn in annot_files:
68 |         assert osp.exists(fn)
69 |
70 |     if args.batch < 0:
71 |         # by default loop over batches in order
72 |         for i in range(len(dir_batch)):
73 |             crop_batch(dir_batch[i], annot_files[i],
74 |                        args.output_path, args.num_val)
75 |     else:
76 |         i = args.batch
77 |         crop_batch(dir_batch[i], annot_files[i], args.output_path, args.num_val)
78 |
79 |
80 |
81 |     # dataset_all = torch.utils.data.ConcatDataset(
82 |     #    (dataset_batch1, dataset_batch2, dataset_batch3))
83 |     # for i in range(100):
84 |     #     pool.apply_async(f, (item,))
85 |
86 |
87 | def crop_batch(data_dir, annot_fn, out_dir, nval):
88 |
89 |     dat = np.genfromtxt(annot_fn, names=True, delimiter=',',
90 |                         autostrip=True, dtype=None)
91 |     im_fn = dat['FILE']
92 |     (face_x, face_y, face_w, face_h) = (
93 |         dat['FACE_X'],
94 |         dat['FACE_Y'],
95 |         dat['FACE_WIDTH'],
96 |         dat['FACE_HEIGHT'])
97 |
98 |     class_ids = dat['SUBJECT_ID']
99 |
100 |     for c in tqdm(np.unique(class_ids)):  # one pass per unique subject
101 |         sel = (class_ids == c)
102 |
103 |         class_image_fn = im_fn[sel]
104 |
105 |         for i in xrange(len(class_image_fn)):
106 |             # print class_image_fn[i]
107 |             im = PIL.Image.open(osp.join(data_dir, class_image_fn[i]))
108 |             rect = (face_x[sel][i], face_y[sel][i],
109 |                     face_x[sel][i] + face_w[sel][i],
110 |                     face_y[sel][i] + face_h[sel][i])
111 |             imc = im.crop(rect)
112 |
113 |             class_name, _ = osp.split(class_image_fn[i])
114 |
115 |             if not osp.exists(osp.join(out_dir, 'train', class_name)):
116 |                 os.makedirs(osp.join(out_dir, 'train', class_name))
117 |
118 |             if i < len(class_image_fn)-nval:
119 |                 imc.save(osp.join(out_dir, 'train', class_image_fn[i]))
120 |             else:
121 |                 if not osp.exists(osp.join(out_dir, 'val', class_name)):
122 |                     os.makedirs(osp.join(out_dir, 'val', class_name))
123 |                 imc.save(osp.join(out_dir, 'val', class_image_fn[i]))
124 |
125 |
126 |
127 |
128 | if __name__ == '__main__':
129 |     main()
130 |
131 |
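# The top-level README parallelizes cropping by launching this script once per
# batch from the shell. A minimal in-process sketch of the same idea, assuming
# the `crop_batch` helper above and the standard-library `multiprocessing`
# module (an illustration only, not part of the original script):

from multiprocessing import Process

def crop_all_parallel(dir_batch, annot_files, out_dir, nval=2):
    # one worker per UMDFaces batch, mirroring `run_crop_face.py -b {0,1,2}`
    workers = [Process(target=crop_batch, args=(d, a, out_dir, nval))
               for d, a in zip(dir_batch, annot_files)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()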
--------------------------------------------------------------------------------
/data_loader.py:
--------------------------------------------------------------------------------
1 | import collections
2 | import os.path as osp
3 | # from __future__ import division
4 |
5 | import numpy as np
6 | import PIL.Image
7 | import scipy.io
8 | import skimage
9 | import skimage.color as color
10 | from skimage.transform import rescale
11 | from skimage.transform import resize
12 | import torch
13 | from torch.utils import data
14 |
15 |
16 |
17 | DEBUG = False
18 |
19 |
20 |
21 | class DemoFaceDataset(data.Dataset):
22 |     '''
23 |     Dataset subclass for demonstrating how to load images in PyTorch.
24 |
25 |     '''
26 |
27 |     # -----------------------------------------------------------------------------
28 |     def __init__(self, root, split='train', set='tiny', im_size=250):
29 |     # -----------------------------------------------------------------------------
30 |         '''
31 |         Parameters
32 |         ----------
33 |         root - Path to root of ImageNet dataset
34 |         split - Either 'train' or 'val'
35 |         set - Can be 'full', 'small' or 'tiny' (5 images)
36 |         '''
37 |         self.root = root  # E.g. '.../ImageNet/images' or '.../vgg-face/images'
38 |         self.split = split
39 |         self.files = collections.defaultdict(list)
40 |         self.im_size = im_size  # scale image to im_size x im_size
41 |         self.set = set
42 |
43 |         if set == 'small':
44 |             raise NotImplementedError()
45 |
46 |         elif set == 'tiny':
47 |             # DEBUG: 5 images
48 |             files_list = osp.join(root, 'tiny_face_' + self.split + '.txt')
49 |
50 |         elif set == 'full':
51 |             raise NotImplementedError()
52 |
53 |         else:
54 |             raise ValueError('Valid sets: `full`, `small`, `tiny`.')
55 |
56 |         assert osp.exists(files_list), 'File does not exist: %s' % files_list
57 |
58 |         imfn = []
59 |         with open(files_list, 'r') as ftrain:
60 |             for line in ftrain:
61 |                 imfn.append(osp.join(root, line.strip()))
62 |         self.files[split] = imfn
63 |
64 |
65 |     # -----------------------------------------------------------------------------
66 |     def __len__(self):
67 |     # -----------------------------------------------------------------------------
68 |         return len(self.files[self.split])
69 |
70 |
71 |     # -----------------------------------------------------------------------------
72 |     def __getitem__(self, index):
73 |     # -----------------------------------------------------------------------------
74 |         img_file = self.files[self.split][index]
75 |         img = PIL.Image.open(img_file)
76 |
77 |         # HACK: for non-RGB images - 4-channel CMYK or 1-channel grayscale
78 |         if len(img.getbands()) != 3:
79 |             while len(img.getbands()) != 3:
80 |                 index -= 1
81 |                 img_file = self.files[self.split][index]  # if -1, wrap-around
82 |                 img = PIL.Image.open(img_file)
83 |
84 |         if self.im_size > 0:
85 |             # Scales image to a square of default size 250x250
86 |             scaled_dim = (int(self.im_size),
87 |                           int(self.im_size))
88 |             img = img.resize(scaled_dim, PIL.Image.BILINEAR)
89 |
90 |         label = 1  # TODO: read in a class label for each image
91 |
92 |         img = np.array(img, dtype=np.uint8)
93 |         im_out = torch.from_numpy(img).float()
94 |         im_out = im_out.permute(2,0,1)  # C x H x W
95 |
96 |         return im_out, label
97 |
98 |
99 |
100 | class LFWDataset(data.Dataset):
101 |     '''
102 |     Dataset subclass for loading LFW images in PyTorch.
103 |     This returns multiple images in a batch.
104 | ''' 105 | 106 | def __init__(self, path_list, issame_list, transforms, split = 'test'): 107 | ''' 108 | Parameters 109 | ---------- 110 | path_list - List of full path-names to LFW images 111 | ''' 112 | self.files = collections.defaultdict(list) 113 | self.split = split 114 | self.files[split] = path_list 115 | self.pair_label = issame_list 116 | self.transforms = transforms 117 | 118 | def __len__(self): 119 | return len(self.files[self.split]) 120 | 121 | def __getitem__(self, index): 122 | img_file = self.files[self.split][index] 123 | img = PIL.Image.open(img_file) 124 | if DEBUG: 125 | print img_file 126 | im_out = self.transforms(img) 127 | return im_out 128 | 129 | 130 | 131 | class IJBADataset(data.Dataset): 132 | ''' 133 | Dataset subclass for loading IJB-A images in PyTorch. 134 | This returns multiple images in a batch. 135 | Path_list -- full paths to cropped images saved as .jpg 136 | ''' 137 | def __init__(self, path_list, transforms, split=1): 138 | ''' 139 | Parameters 140 | ---------- 141 | path_list - List of full path-names to IJB-A images of one split 142 | ''' 143 | self.files = collections.defaultdict(list) 144 | self.split = split 145 | self.files[split] = path_list 146 | self.transforms = transforms 147 | 148 | def __len__(self): 149 | return len(self.files[self.split]) 150 | 151 | def __getitem__(self, index): 152 | img_file = self.files[self.split][index] 153 | img = PIL.Image.open(img_file) 154 | if not img.mode == 'RGB': 155 | img = img.convert('RGB') 156 | if DEBUG: 157 | print img_file 158 | im_out = self.transforms(img) 159 | return im_out 160 | 161 | 162 | -------------------------------------------------------------------------------- /tests/test_data_reader.py: -------------------------------------------------------------------------------- 1 | import argparse 2 | import os 3 | import os.path as osp 4 | 5 | import numpy as np 6 | import PIL.Image 7 | import skimage.io 8 | import skimage.color as color 9 | import torch 10 | from torch.autograd import Variable 11 | 12 | import sys 13 | sys.path.append('/vis/home/arunirc/data1/Research/colorize-fcn/colorizer-fcn') 14 | import models 15 | import train 16 | import utils 17 | import data_loader 18 | 19 | root = '/vis/home/arunirc/data1/datasets/ImageNet/images/' 20 | 21 | 22 | def test_single_read(): 23 | print 'Entering: test_single_read' 24 | dataset = data_loader.ColorizeImageNet(root, split='train', set='small') 25 | img, lbl = dataset.__getitem__(0) 26 | assert len(lbl)==2 27 | assert np.min(lbl[0].numpy())==0 28 | assert np.max(lbl[0].numpy())==30 29 | print 'Test passed: test_single_read' 30 | 31 | 32 | def test_single_read_dimcheck(): 33 | print 'Entering: test_single_read_dimcheck' 34 | dataset = data_loader.ColorizeImageNet(root, split='train', set='small') 35 | img, lbl = dataset.__getitem__(0) 36 | assert len(lbl)==2 37 | im_hue = lbl[0].numpy() 38 | im_chroma = lbl[1].numpy() 39 | assert im_chroma.shape==im_hue.shape, \ 40 | 'Labels (Hue and Chroma maps) should have same dimensions.' 41 | print 'Test passed: test_single_read_dimcheck' 42 | 43 | 44 | def test_train_loader(): 45 | print 'Entering: test_train_loader' 46 | train_loader = torch.utils.data.DataLoader( 47 | data_loader.ColorizeImageNet(root, split='train', set='small'), 48 | batch_size=1, shuffle=False) 49 | dataiter = iter(train_loader) 50 | img, label = dataiter.next() 51 | assert len(label)==2, \ 52 | 'Network should predict a 2-tuple: hue-map and chroma-map.' 
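# (the DataLoader with batch_size=1 yields the same (hue, chroma) label tuple as indexing the dataset directly)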
53 | im_hue = label[0].numpy() 54 | im_chroma = label[1].numpy() 55 | assert im_chroma.shape==im_hue.shape, \ 56 | 'Labels (Hue and Chroma maps) should have same dimensions.' 57 | print 'Test passed: test_train_loader' 58 | 59 | 60 | def test_dataset_read(): 61 | ''' 62 | Read through the entire dataset. 63 | ''' 64 | dataset = data_loader.ColorizeImageNet(\ 65 | root, split='train', set='small') 66 | 67 | for i in xrange(len(dataset)): 68 | # if i > 44890: # HACK: skipping over some stuff 69 | img_file = dataset.files['train'][i] 70 | img, lbl = dataset.__getitem__(i) 71 | assert type(lbl) == torch.FloatTensor 72 | assert type(img) == torch.FloatTensor 73 | print 'iter: %d,\t file: %s,\t imsize: %s' % (i, img_file, img.size()) 74 | 75 | 76 | def test_cmyk_read(): 77 | ''' 78 | Handle CMYK images -- skip to previous image. 79 | ''' 80 | print 'Entering: test_cmyk_read' 81 | dataset = data_loader.ColorizeImageNet(\ 82 | root, split='train', set='small') 83 | idx = 44896 84 | img_file = dataset.files['train'][idx] 85 | im1 = PIL.Image.open(img_file) 86 | im1 = np.asarray(im1, dtype=np.uint8) 87 | assert im1.shape[2]==4, 'Check that selected image is indeed CMYK.' 88 | img, lbl = dataset.__getitem__(idx) 89 | print 'Test passed: test_cmyk_read' 90 | 91 | 92 | def test_grayscale_read(): 93 | ''' 94 | Handle single-channel images -- skip to previous image. 95 | ''' 96 | print 'Entering: test_grayscale_read' 97 | dataset = data_loader.ColorizeImageNet(root, split='train', set='small') 98 | idx = 4606 99 | img_file = dataset.files['train'][idx] 100 | im1 = PIL.Image.open(img_file) 101 | im1 = np.asarray(im1, dtype=np.uint8) 102 | assert len(im1.shape)==2, 'Check that selected image is indeed grayscale.' 103 | img, lbl = dataset.__getitem__(idx) 104 | print 'Test passed: test_grayscale_read' 105 | 106 | 107 | def test_rgb_hsv(): 108 | # DEFER 109 | dataset = data_loader.ColorizeImageNet(\ 110 | root, split='train', set='small') 111 | img_file = dataset.files['train'][100] 112 | img = PIL.Image.open(img_file) 113 | img = np.array(img, dtype=np.uint8) 114 | assert np.max(img.shape) == 400 115 | 116 | 117 | def test_soft_bins(): 118 | dataset = \ 119 | data_loader.ColorizeImageNet(root, split='train', set='small', 120 | bins='soft') 121 | img, lbl = dataset.__getitem__(0) 122 | assert type(lbl) == torch.FloatTensor 123 | assert type(img) == torch.FloatTensor 124 | print 'Test passed: test_soft_bins' 125 | 126 | 127 | def test_lowpass_image(): 128 | dataset = \ 129 | data_loader.ColorizeImageNet(root, split='train', set='small', 130 | bins='soft', img_lowpass=8) 131 | img, lbl = dataset.__getitem__(0) 132 | assert type(lbl) == torch.FloatTensor 133 | assert type(img) == torch.FloatTensor 134 | print 'Test passed: test_soft_bins' 135 | 136 | 137 | def test_init_gmm(): 138 | # Pass paths to cached GMM and mean Lightness 139 | GMM_PATH = '/srv/data1/arunirc/Research/colorize-fcn/colorizer-fcn/logs/MODEL-fcn32s_color_CFG-014_VCS-db517d6_TIME-20171230-212406/gmm.pkl' 140 | MEAN_L_PATH = '/srv/data1/arunirc/Research/colorize-fcn/colorizer-fcn/logs/MODEL-fcn32s_color_CFG-014_VCS-db517d6_TIME-20171230-212406/mean_l.npy' 141 | dataset = \ 142 | data_loader.ColorizeImageNet( 143 | root, split='train', set='tiny', bins='soft', 144 | gmm_path=GMM_PATH, mean_l_path=MEAN_L_PATH) 145 | print 'Test passed: test_init_gmm' 146 | 147 | 148 | 149 | def main(): 150 | test_single_read() 151 | test_single_read_dimcheck() 152 | test_train_loader() 153 | test_cmyk_read() 154 | test_grayscale_read() 155 | test_soft_bins() 156 
| test_lowpass_image() 157 | test_init_gmm() 158 | 159 | # 160 | # dataset.get_color_samples() 161 | # test_dataset_read() 162 | # TODO - test_labels 163 | # TODO - test colorspace conversions 164 | 165 | 166 | 167 | 168 | 169 | if __name__ == '__main__': 170 | main() 171 | -------------------------------------------------------------------------------- /run_resnet_demo.py: -------------------------------------------------------------------------------- 1 | 2 | import torch 3 | import torchvision 4 | from torchvision import models 5 | import torch.nn as nn 6 | import torch.utils.data 7 | import torchvision.transforms as transforms 8 | from torch.autograd import Variable 9 | import torch.nn as nn 10 | import torch.nn.functional as F 11 | 12 | import os 13 | import os.path as osp 14 | import yaml 15 | import numpy as np 16 | import PIL.Image 17 | import matplotlib 18 | matplotlib.use('Agg') 19 | import matplotlib.pyplot as plt 20 | 21 | import utils 22 | 23 | 24 | 25 | 26 | here = osp.dirname(osp.abspath(__file__)) # output folder is located here 27 | 28 | 29 | # ----------------------------------------------------------------------------- 30 | # 0. User-defined settings 31 | # ----------------------------------------------------------------------------- 32 | gpu = 0 # use gpu:0 by default 33 | # specify model path of trained ResNet-50 network: 34 | model_path = './umd-face/logs/MODEL-resnet_umdfaces_CFG-006_TIME-20180114-141943/model_best.pth.tar' 35 | num_class = 8277 # UMD-Faces had this many classes 36 | 37 | 38 | 39 | # ----------------------------------------------------------------------------- 40 | # 1. GPU setup 41 | # ----------------------------------------------------------------------------- 42 | os.environ['CUDA_VISIBLE_DEVICES'] = str(gpu) 43 | cuda = torch.cuda.is_available() 44 | torch.manual_seed(1337) 45 | if cuda: 46 | torch.cuda.manual_seed(1337) 47 | torch.backends.cudnn.enabled = True 48 | torch.backends.cudnn.benchmark = True 49 | 50 | 51 | 52 | # ----------------------------------------------------------------------------- 53 | # 2. Data preparation 54 | # ----------------------------------------------------------------------------- 55 | 56 | # Samples images taken for demo purpose from LFW: 57 | # http://vis-www.cs.umass.edu/lfw/ 58 | data_root = './samples/verif' 59 | file_path = [osp.join(data_root, 'Recep_Tayyip_Erdogan_0012.jpg'), 60 | osp.join(data_root, 'Recep_Tayyip_Erdogan_0015.jpg'), 61 | osp.join(data_root, 'Quincy_Jones_0001.jpg')] 62 | image = [PIL.Image.open(f).convert('RGB') for f in file_path] 63 | 64 | # Data transforms 65 | # http://pytorch.org/docs/master/torchvision/transforms.html 66 | # NOTE: these should be consistent with the training script val_loader 67 | # Since LFW images (250x250) are not close-crops, we modify the cropping a bit. 68 | RGB_MEAN = [ 0.485, 0.456, 0.406 ] 69 | RGB_STD = [ 0.229, 0.224, 0.225 ] 70 | test_transform = transforms.Compose([ 71 | transforms.CenterCrop(150), # 150x150 center crop 72 | transforms.Scale((224,224)), # resized to the network's required input size 73 | transforms.ToTensor(), 74 | transforms.Normalize(mean = RGB_MEAN, 75 | std = RGB_STD), 76 | ]) 77 | 78 | # apply the transform 79 | inputs = [test_transform(im) for im in image] 80 | 81 | 82 | 83 | # ----------------------------------------------------------------------------- 84 | # 3. 
Model 85 | # ----------------------------------------------------------------------------- 86 | # PyTorch ResNet model definition: 87 | # https://github.com/pytorch/vision/blob/master/torchvision/models/resnet.py 88 | # ResNet docs: 89 | # http://pytorch.org/docs/master/torchvision/models.html#id3 90 | model = torchvision.models.resnet50(pretrained=True) 91 | 92 | # Replace last layer (by default, resnet has 1000 output categories) 93 | model.fc = torch.nn.Linear(2048, num_class) # change to current dataset's classes 94 | 95 | # Pre-trained PyTorch model loaded from a file 96 | checkpoint = torch.load(model_path) 97 | 98 | if checkpoint['arch'] == 'DataParallel': 99 | # if we trained and saved our model using DataParallel 100 | model = torch.nn.DataParallel(model, device_ids=[0, 1, 2, 3, 4]) 101 | model.load_state_dict(checkpoint['model_state_dict']) 102 | model = model.module # get network module from inside its DataParallel wrapper 103 | else: 104 | model.load_state_dict(checkpoint['model_state_dict']) 105 | 106 | if cuda: 107 | model = model.cuda() 108 | 109 | 110 | # Convert the trained network into a "feature extractor" 111 | # From https://github.com/meliketoy/fine-tuning.pytorch/blob/master/extract_features.py#L85 112 | feature_map = list(model.children()) 113 | feature_map.pop() # remove the final "class prediction" layer 114 | extractor = nn.Sequential(*feature_map) # create feature extractor 115 | 116 | # Inspect the structure - it is a nested list of various modules 117 | print extractor[-1] # last layer of the model - avg-pool 118 | print extractor[-2][-1] # second-last layer's last module - output is 2048-dim 119 | 120 | 121 | 122 | # ----------------------------------------------------------------------------- 123 | # 4. Feature extraction 124 | # ----------------------------------------------------------------------------- 125 | # - simple, one input sample at a time 126 | features = [] 127 | for x in inputs: 128 | x = Variable(x, volatile=True) 129 | if cuda: 130 | x = x.cuda() 131 | x = x.view(1, x.size(0), x.size(1), x.size(2)) # add batch_dim=1 in the front 132 | feat = extractor(x).view(-1) # extract features of input `x`, reshape to 1-D vector 133 | features.append(feat) 134 | features = torch.stack(features) # N x 2048 for N inputs 135 | 136 | # get Tensors on CPU from autograd.Variables on GPU 137 | if cuda: 138 | features = features.data.cpu() 139 | else: 140 | features = features.data 141 | 142 | features = F.normalize(features, p=2, dim=1) # L2-normalize 143 | 144 | 145 | # ----------------------------------------------------------------------------- 146 | # 5. 
Face verification 147 | # ----------------------------------------------------------------------------- 148 | 149 | # L2-distance between features (Tensors) of same and different pairs 150 | d1 = (features[0] - features[1]).norm(p=2) # same pair 151 | d2 = (features[0] - features[2]).norm(p=2) # different pair 152 | 153 | print 'matched pair: %.2f' % d1 154 | print 'mismatched pair: %.2f' % d2 155 | assert d1 < d2 156 | 157 | # visualizations 158 | fig, ax = plt.subplots(nrows=2, ncols=2) 159 | plt.subplot(2, 2, 1) 160 | plt.title('matched pair') 161 | plt.imshow(image[0]) 162 | plt.tight_layout() 163 | 164 | plt.subplot(2, 2, 2) 165 | plt.imshow(image[1]) 166 | plt.title('d = %.3f' % d1) 167 | plt.tight_layout() 168 | 169 | plt.subplot(2, 2, 3) 170 | plt.imshow(image[0]) 171 | plt.title('mismatched pair') 172 | plt.tight_layout() 173 | 174 | plt.subplot(2, 2, 4) 175 | plt.imshow(image[2]) 176 | plt.title('d = %.3f' % d2) 177 | plt.tight_layout() 178 | 179 | plt.savefig(osp.join(here, 'demo_verif.png'), bbox_inches='tight') 180 | 181 | print 'Visualization saved in ' + osp.join(here, 'demo_verif.png') 182 | 183 | -------------------------------------------------------------------------------- /config.py: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | configurations = { 5 | # same configuration as original work 6 | # https://github.com/shelhamer/fcn.berkeleyvision.org 7 | 0: dict( 8 | max_iteration=100000, 9 | lr=1.0e-10, 10 | momentum=0.99, 11 | weight_decay=0.0005, 12 | interval_validate=4000, 13 | ), 14 | 15 | # python train_*.py -g 0 -c 1 16 | 1: dict( 17 | max_iteration=100, 18 | lr=0.01, # changed learning rate 19 | lr_decay_epoch=None, # disable automatic lr decay 20 | momentum=0.9, 21 | weight_decay=0.0005, 22 | interval_validate=10, 23 | optim='Adam', 24 | batch_size=100, 25 | ), 26 | 27 | 2: dict( 28 | max_iteration=8670, # num_iter_per_epoch = ceil(num_images/batch_size) 29 | lr=0.01, # high learning rate 30 | lr_decay_epoch=None, # disable automatic lr decay 31 | momentum=0.9, 32 | weight_decay=0.0005, 33 | interval_validate=10, 34 | optim='Adam', 35 | batch_size=250, 36 | ), 37 | 38 | 3: dict( 39 | # num_iter_per_epoch = ceil(num_images/batch_size) 40 | max_iteration=8670, # 10 epochs on subset of images 41 | lr=0.001, # lowered learning rate 42 | lr_decay_epoch=None, # disable automatic lr decay 43 | momentum=0.9, 44 | weight_decay=0.0005, 45 | interval_validate=10, 46 | optim='Adam', 47 | batch_size=250, 48 | ), 49 | # --------------------------------------------------------------------------------- 50 | 51 | # ResNet-50 on UMDFaces: stage 1 52 | 4: dict( 53 | # num_iter_per_epoch = ceil(num_images/batch_size) 54 | max_iteration=42180, # 30 epochs on full dataset (about 10 hours) 55 | lr=0.001, # learning rate 56 | lr_decay_epoch=None, # disable automatic lr decay 57 | momentum=0.9, 58 | weight_decay=0.0005, 59 | interval_validate=50, 60 | optim='Adam', 61 | batch_size=250, # DataParallel over 5 gpus 62 | ), 63 | 64 | # ResNet-50 on UMDFaces: stage 2 65 | 5: dict( 66 | # num_iter_per_epoch = ceil(num_images/batch_size) 67 | max_iteration=42180, # 30 epochs on full dataset 68 | lr=0.0001, # lowered learning rate 69 | lr_decay_epoch=None, # disable automatic lr decay 70 | momentum=0.9, 71 | weight_decay=0.0005, 72 | interval_validate=50, 73 | optim='Adam', 74 | batch_size=250, 75 | ), 76 | 77 | # ResNet-50 on UMDFaces: stage 3 78 | 6: dict( 79 | # num_iter_per_epoch = ceil(num_images/batch_size) 80 | max_iteration=42180, # 30 
epochs on full dataset 81 | lr=0.00001, # lowered learning rate 82 | lr_decay_epoch=None, # disable automatic lr decay 83 | momentum=0.9, 84 | weight_decay=0.0005, 85 | interval_validate=50, 86 | optim='Adam', 87 | batch_size=250, 88 | ), 89 | # --------------------------------------------------------------------------------- 90 | 91 | # ResNet-101 on VGGFace2: stage 1 92 | 7: dict( 93 | # num_iter_per_epoch = ceil(num_images/batch_size) 94 | max_iteration=267630, # 30 epochs on full dataset 95 | lr=0.001, # learning rate 96 | lr_decay_epoch=None, # disable automatic lr decay 97 | momentum=0.9, 98 | weight_decay=0.0005, 99 | interval_validate=500, 100 | optim='Adam', 101 | batch_size=350, # DataParallel over 7 gpus 102 | ), 103 | 104 | # python vgg-face-2/train_resnet_vggface.py -c 8 -m PATH-TO-CFG-7-SAVED-MODEL 105 | 8: dict( 106 | # num_iter_per_epoch = ceil(num_images/batch_size) 107 | max_iteration=267630, # 30 epochs on full dataset 108 | lr=0.0001, # lowered learning rate 109 | lr_decay_epoch=None, # disable automatic lr decay 110 | momentum=0.9, 111 | weight_decay=0.0005, 112 | interval_validate=500, 113 | optim='Adam', 114 | batch_size=350, # DataParallel over 7 gpus 115 | ), 116 | 117 | #### 118 | # ResNet-101 on VGGFace2: stage 1 119 | 11: dict( 120 | # num_iter_per_epoch = ceil(num_images/batch_size) 121 | max_iteration=267630, # 30 epochs on full dataset 122 | lr=0.1, # learning rate 123 | lr_decay_epoch=None, # disable automatic lr decay 124 | momentum=0.9, 125 | weight_decay=0.0005, 126 | interval_validate=200, 127 | optim='Adam', 128 | batch_size=350, # DataParallel over 7 gpus 129 | ), 130 | 131 | # ResNet-101 on VGGFace2: stage 2 132 | # python vgg-face-2/train_resnet_vggface_scratch.py -c 12 -m ./vgg-face-2/logs/MODEL..-CFG-11... 133 | 12: dict( 134 | # num_iter_per_epoch = ceil(num_images/batch_size) 135 | max_iteration=267630, # 30 epochs on full dataset 136 | lr=0.01, # reduced learning rate by factor 10 137 | lr_decay_epoch=None, # disable automatic lr decay 138 | momentum=0.9, 139 | weight_decay=0.0005, 140 | interval_validate=200, 141 | optim='Adam', 142 | batch_size=350, # DataParallel over 7 gpus 143 | ), 144 | 145 | 13: dict( 146 | # num_iter_per_epoch = ceil(num_images/batch_size) 147 | max_iteration=267630, # 30 epochs on full dataset 148 | lr=0.001, # reduced learning rate by factor 10 149 | lr_decay_epoch=None, # disable automatic lr decay 150 | momentum=0.9, 151 | weight_decay=0.0005, 152 | interval_validate=200, 153 | optim='Adam', 154 | batch_size=350, # DataParallel over 7 gpus 155 | ), 156 | 157 | 158 | # ResNet from scratch with SGD 159 | 20: dict( 160 | # num_iter_per_epoch = ceil(num_images/batch_size) 161 | max_iteration=267630, # 22 epochs on full dataset 162 | lr=0.1, 163 | lr_decay_epoch=None, # disable automatic lr decay 164 | momentum=0.9, 165 | weight_decay=0.0005, 166 | interval_validate=200, 167 | optim='SGD', 168 | batch_size=256, # DataParallel over 7 gpus 169 | ), 170 | 171 | 21: dict( 172 | # num_iter_per_epoch = ceil(num_images/batch_size) 173 | max_iteration=267630, # 22 epochs on full dataset 174 | lr=0.01, 175 | lr_decay_epoch=None, # disable automatic lr decay 176 | momentum=0.9, 177 | weight_decay=0.0005, 178 | interval_validate=200, 179 | optim='SGD', 180 | batch_size=256, # DataParallel over 7 gpus 181 | ), 182 | 183 | 22: dict( 184 | # num_iter_per_epoch = ceil(num_images/batch_size) 185 | max_iteration=267630, # 22 epochs on full dataset 186 | lr=0.001, 187 | lr_decay_epoch=None, # disable automatic lr decay 188 | momentum=0.9, 
189 | weight_decay=0.0005, 190 | interval_validate=200, 191 | optim='SGD', 192 | batch_size=256, # DataParallel over 7 gpus 193 | ), 194 | 195 | # Used to fine-tune Resnet101-512d in the second stage 196 | # (after the new layers are converged, entire net is fine-tuned) 197 | 23: dict( 198 | # num_iter_per_epoch = ceil(num_images/batch_size) 199 | max_iteration=267630, # 22 epochs on full dataset 200 | lr=0.0001, 201 | lr_decay_epoch=None, # disable automatic lr decay 202 | momentum=0.9, 203 | weight_decay=0.0005, 204 | interval_validate=200, 205 | optim='SGD', 206 | batch_size=256, # DataParallel over 7 gpus 207 | ), 208 | 209 | # lower learning rates on fine-tuning bottleneck (Resnet101-512d) 210 | 24: dict( 211 | # num_iter_per_epoch = ceil(num_images/batch_size) 212 | max_iteration=267630, # 22 epochs on full dataset 213 | lr=0.00001, # lowered learning rate 214 | lr_decay_epoch=None, # disable automatic lr decay 215 | momentum=0.9, 216 | weight_decay=0.0005, 217 | interval_validate=200, 218 | optim='SGD', 219 | batch_size=256, # DataParallel over 7 gpus 220 | ), 221 | 222 | 223 | } 224 | 225 | 226 | 227 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | 2 | This repository shows how to train ResNet models in PyTorch on publicly available face recognition datasets. 3 | 4 | ### Setup 5 | 6 | * Install [Anaconda](https://conda.io/docs/user-guide/install/linux.html) if not already installed in the system. 7 | * Create an Anaconda environment: `conda create -n resnet-face python=2.7` and activate it: `source activate resnet-face`. 8 | * Install PyTorch and TorchVision inside the Anaconda environment. First add a channel to conda: `conda config --add channels soumith`. Then install: `conda install pytorch torchvision cuda80 -c soumith`. 9 | * Install the dependencies using conda: `conda install scipy Pillow tqdm scikit-learn scikit-image numpy matplotlib ipython pyyaml`. 10 | * *Notes*: 11 | * Multiple GPUs (we used 5 GeForce GTX 1080Ti in parallel) recommended for the training to finish in reasonable time. 12 | * Tested on server running CentOS 13 | * Using [PyTorch](http://pytorch.org/tutorials/beginner/deep_learning_60min_blitz.html) 14 | * Optional - an explanatory [blogpost](https://blog.waya.ai/deep-residual-learning-9610bb62c355) on Deep Residual Networks (ResNets) structure. 15 | 16 | ### Contents 17 | - [ResNet-50 on UMD-Faces](https://github.com/AruniRC/resnet-face-pytorch#pytorch-resnet-on-umd-face) 18 | - [Dataset preparation](https://github.com/AruniRC/resnet-face-pytorch#dataset-preparation) 19 | - [Training](https://github.com/AruniRC/resnet-face-pytorch#training) 20 | - [Evaluation demo](https://github.com/AruniRC/resnet-face-pytorch#evaluation) 21 | - [ResNet-50 on VGGFace2](https://github.com/AruniRC/resnet-face-pytorch#pytorch-resnet-on-vggface2) 22 | - [Dataset preparation](https://github.com/AruniRC/resnet-face-pytorch#dataset-preparation-1) 23 | - [Training](https://github.com/AruniRC/resnet-face-pytorch#training-1) 24 | - [Evaluation LFW](https://github.com/AruniRC/resnet-face-pytorch#evaluation-1) 25 | 26 | 27 | 28 | ## PyTorch ResNet on UMD-Face 29 | 30 | Demo to train a ResNet-50 model on the [UMDFaces](http://www.umdfaces.io/) dataset. 
31 |
32 | ### Dataset preparation
33 |
34 | * Download the [UMDFaces](http://www.umdfaces.io/) dataset (the 3 batches of _still_ images), which contains 367,888 face annotations for 8,277 subjects.
35 | * The images need to be cropped into 'train' and 'val' folders. Since the UMDFaces dataset does not specify training and validation sets, by default we select two images from every subject for validation.
36 | * The cropping code is in the Python script [umd-face/run_crop_face.py](./umd-face/run_crop_face.py). It takes the following command-line arguments:
37 |     * --dataset_path (-d)
38 |     * --output_path (-o)
39 |     * --batch (-b)
40 | * The following shell command crops each batch in parallel, using the default `dataset_path` and `output_path`:
41 | `for i in {0..2}; do python umd-face/run_crop_face.py -b $i & done`
42 |
43 | :small_red_triangle: **TODO** - takes very long, convert into shell+ImageMagick script.
44 |
45 |
46 | ### Usage
47 |
48 | #### Training:
49 | * The training script is [umd-face/train_resnet_umdface.py](./umd-face/train_resnet_umdface.py).
50 | * Multiple GPUs:
51 |     * Under section 3 ("Model") of the training script, we specify which GPUs to use in parallel: `model = torch.nn.DataParallel(model, device_ids=[0, 1, 2, 3, 4]).cuda()`. Change these numbers depending on the number of available GPUs.
52 |     * Use `watch -d nvidia-smi` to constantly monitor the multi-GPU usage from the terminal.
53 |     * :small_red_triangle: **TODO** - make this into command-line args.
54 | * At the terminal, specify where the cropped face images are saved using an environment variable: `DATASET_PATH=local/path/to/cropped/umd/faces`
55 | * [config.py](./config.py) lists all the training configurations (i.e. model hyper-parameters) in a numbered Python dict.
56 | * The training of ResNet-50 was done in 3 stages (*configs 4, 5 and 6*), each of *30 epochs*. For the first stage, we started with the ImageNet-pre-trained model from PyTorch. Each later stage started from the saved model of the previous stage (using the `--model_path` or `-m` command-line argument) with the learning rate divided by a factor of 10.
57 | * Stage 1 (config-4): train on the *full UMDFaces dataset for 30 epochs* (42180 iterations with batchsize 250) with a learning rate of 0.001, starting from an ImageNet pre-trained model. These settings are defined in *config-4* of [config.py](./config.py), which is selected using the `-c 4` flag in the command. Example: to train a ResNet-50 on the UMDFaces dataset using config-4, run `python umd-face/train_resnet_umdface.py -c 4 -d $DATASET_PATH`.
58 | * Stage 2 (config-5): use the best model checkpointed from config-4 to initialize the network and train it using config-5: `python umd-face/train_resnet_umdface.py -c 5 -m ./umd-face/logs/MODEL-resnet_umdfaces_CFG-004_TIMESTAMP/model_best.pth.tar -d $DATASET_PATH`, and so on for the subsequent stages.
59 | * *Training logs:* Each time the training script is run, a new output folder with a timestamp is created by default under `./umd-face/logs`, i.e. `./umd-face/logs/MODEL-CFG-TIMESTAMP/`. Under an experiment's log folder, the settings for that experiment can be viewed in `config.yaml`; metrics such as the training and validation losses are updated in `log.csv`.
60 | Most of the usual settings (data augmentations, learning rates, number of epochs to train, etc.) can be customized by editing `config.py` and `umd-face/train_resnet_umdface.py`.
61 | * *Plotting CSV logs:* The log-file plotting utility can be called from the command line as shown in the snippet below. The figure is saved under that experiment's log folder in the output location. 62 | * `LOG_FILE=umd-face/logs/MODEL-resnet_umdfaces_CFG-004_TIMESTAMP/log.csv` 63 | * `python -c "from utils import plot_log_csv; plot_log_csv('$LOG_FILE')"` 64 | * If that gives parsing errors: `python -c "from utils import plot_log; plot_log('$LOG_FILE')"` 65 | 66 | stage 1 | stage 2 | stage 3 67 | :------:|:----------:|:--------: 68 | ![](samples/stage1_log_plots.png)| ![](samples/stage2_log_plots.png) | ![](samples/stage3_log_plots.png) 69 | 70 | 71 | #### Pre-trained model: 72 | 73 | :red_circle: TODO - release pre-trained ResNet-50 on UMD-Faces :construction: 74 | 75 | 76 | #### Evaluation: 77 | 78 | *Verification demo:* We have a short script, [run_resnet_demo.py](./run_resnet_demo.py), that demonstrates the usage of the model on a toy face verification example. The visualized output of the demo is saved in the root directory of the project. The 3 sample images are taken from the [LFW](http://vis-www.cs.umass.edu/lfw/) dataset. 79 | 80 | ![](samples/demo_verif.png) 81 | 82 | 83 | --- 84 | 85 | 86 | ## PyTorch ResNet on VGGFace2 87 | 88 | Training a *ResNet-50* model in PyTorch on the [VGGFace2](https://www.robots.ox.ac.uk/~vgg/data/vgg_face2/) dataset. 89 | 90 | 91 | ### Dataset preparation 92 | 93 | * Register on the [VGGFace2](https://www.robots.ox.ac.uk/~vgg/data/vgg_face2/) website and download their dataset. 94 | * VGGFace2 provides loosely-cropped images. We use crops from the [Faster R-CNN face detector](https://github.com/playerkk/face-py-faster-rcnn), saved as a CSV in `[filename, subject_id, xmin, ymin, width, height]` format (the CSV with pre-computed face crops has not yet been made available). 95 | * The `vgg-face-2/crop_face.sh` script crops the face images into a separate output folder. Set the paths in the settings section of the script to match where the VGGFace2 data was downloaded on your machine. Cropping takes about a day. **TODO** - multi-process. 96 | * Training images are under `OUTPUT_PATH/train-crop` 97 | * Validation images (2 images per subject) under `OUTPUT_PATH/val-crop` 98 | 99 | ### Training 100 | 101 | * We used 7 GeForce GTX 1080Ti GPUs in parallel (PyTorch DataParallel) to train the network with Batch Normalization, following the training procedure described in the [VGGFace2](https://www.robots.ox.ac.uk/~vgg/data/vgg_face2/) paper. 102 | * Training settings are defined under *configs-20, 21, 22* in [config.py](./config.py). 103 | * Briefly, training uses the SGD optimizer, starting with a learning rate of 0.1, which is divided by 10 at each subsequent stage. A new stage begins whenever the validation curve flattens. 104 | * First-stage training command, starting with a ResNet-50 from scratch: `python vgg-face-2/train_resnet50_vggface_scratch.py -c 20` 105 | * Subsequent training stages (with lowered learning rates) would then be: 106 | - `python vgg-face-2/train_resnet50_vggface_scratch.py -c 21 -m PATH_TO_BEST_MODEL_CFG-20` 107 | - `python vgg-face-2/train_resnet50_vggface_scratch.py -c 22 -m PATH_TO_BEST_MODEL_CFG-21` 108 | 109 | ### Evaluation 110 | 111 | Instructions on how to set up and run the LFW evaluation are at [lfw/README.md](./lfw/README.md).
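For reference, a typical invocation of the evaluation script (the flags are defined in `lfw/eval_lfw.py`, included later in this repository; `BEST_MODEL_PATH` is a placeholder for a trained checkpoint):

```bash
# ROC over the 10 standard LFW folds, using a ResNet-101 checkpoint
python lfw/eval_lfw.py -e lfw_eval_resnet101 --model_type resnet101 --fold 10 -m BEST_MODEL_PATH

# ROC on the smaller pairsDevTest split (--fold 0, the default)
python lfw/eval_lfw.py -e lfw_eval_resnet50 --model_type resnet50 --fold 0 -m BEST_MODEL_PATH
```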
112 | 113 | DevTest | 10 fold 114 | :------:|:----------: 115 | ![](samples/lfw_roc_devTest.png)| ![](samples/lfw_roc_10fold.png) 116 | 117 | 118 | 119 | 120 | -------------------------------------------------------------------------------- /lfw/eval_lfw.py: -------------------------------------------------------------------------------- 1 | import argparse 2 | import os 3 | import os.path as osp 4 | 5 | import torch 6 | import torchvision 7 | from torchvision import models 8 | import torch.nn as nn 9 | import torch.utils.data 10 | import torchvision.transforms as transforms 11 | import torchvision.datasets as datasets 12 | from torch.autograd import Variable 13 | import torch.nn.functional as F 14 | 15 | import tqdm 16 | import numpy as np 17 | import sklearn.metrics 18 | from sklearn import metrics 19 | from scipy.optimize import brentq 20 | from scipy import interpolate 21 | import matplotlib 22 | matplotlib.use('Agg') 23 | import matplotlib.pyplot as plt 24 | 25 | here = osp.dirname(osp.abspath(__file__)) # output folder is located here 26 | root_dir,_ = osp.split(here) 27 | import sys 28 | sys.path.append(root_dir) 29 | 30 | import models 31 | import utils 32 | import data_loader 33 | 34 | 35 | ''' 36 | Evaluate a network on the LFW verification task 37 | =============================================== 38 | Example usage: 39 | # Resnet 101 on 10 folds of LFW 40 | python lfw/eval_lfw.py -e lfw_eval_resnet101 --model_type resnet101 --fold 10 -m BEST_MODEL_PATH 41 | 42 | ''' 43 | 44 | 45 | 46 | def main(): 47 | parser = argparse.ArgumentParser() 48 | parser.add_argument('-e', '--exp_name', default='lfw_eval') 49 | parser.add_argument('-g', '--gpu', type=int, default=0) 50 | parser.add_argument('-d', '--dataset_path', 51 | default='/srv/data1/arunirc/datasets/lfw-deepfunneled') 52 | parser.add_argument('--fold', type=int, default=0, choices=[0,10]) 53 | parser.add_argument('--batch_size', type=int, default=100) 54 | parser.add_argument('-m', '--model_path', default=None, required=True, 55 | help='Path to pre-trained model') 56 | parser.add_argument('--model_type', default='resnet50', 57 | choices=['resnet50', 'resnet101', 'resnet101-512d']) 58 | 59 | args = parser.parse_args() 60 | 61 | 62 | # CUDA setup 63 | os.environ['CUDA_VISIBLE_DEVICES'] = str(args.gpu) 64 | cuda = torch.cuda.is_available() 65 | torch.manual_seed(1337) 66 | if cuda: 67 | torch.cuda.manual_seed(1337) 68 | torch.backends.cudnn.enabled = True 69 | torch.backends.cudnn.benchmark = True # enable if all images are same size 70 | 71 | if args.fold == 0: 72 | pairs_path = './lfw/data/pairsDevTest.txt' 73 | else: 74 | pairs_path = './lfw/data/pairs.txt' 75 | 76 | # ----------------------------------------------------------------------------- 77 | # 1. Dataset 78 | # ----------------------------------------------------------------------------- 79 | file_ext = 'jpg' # observe, no '.' 
before jpg 80 | num_class = 8631 81 | 82 | pairs = utils.read_pairs(pairs_path) 83 | path_list, issame_list = utils.get_paths(args.dataset_path, pairs, file_ext) 84 | 85 | # Define data transforms 86 | RGB_MEAN = [ 0.485, 0.456, 0.406 ] 87 | RGB_STD = [ 0.229, 0.224, 0.225 ] 88 | test_transform = transforms.Compose([ 89 | transforms.Scale((250,250)), # make 250x250 90 | transforms.CenterCrop(150), # then take 150x150 center crop 91 | transforms.Scale((224,224)), # resized to the network's required input size 92 | transforms.ToTensor(), 93 | transforms.Normalize(mean = RGB_MEAN, 94 | std = RGB_STD), 95 | ]) 96 | 97 | # Create data loader 98 | test_loader = torch.utils.data.DataLoader( 99 | data_loader.LFWDataset( 100 | path_list, issame_list, test_transform), 101 | batch_size=args.batch_size, shuffle=False ) 102 | 103 | 104 | # ----------------------------------------------------------------------------- 105 | # 2. Model 106 | # ----------------------------------------------------------------------------- 107 | if args.model_type == 'resnet50': 108 | model = torchvision.models.resnet50(pretrained=False) 109 | model.fc = torch.nn.Linear(2048, num_class) 110 | elif args.model_type == 'resnet101': 111 | model = torchvision.models.resnet101(pretrained=False) 112 | model.fc = torch.nn.Linear(2048, num_class) 113 | elif args.model_type == 'resnet101-512d': 114 | model = torchvision.models.resnet101(pretrained=False) 115 | layers = [] 116 | layers.append(torch.nn.Linear(2048, 512)) 117 | layers.append(torch.nn.Linear(512, num_class)) 118 | model.fc = torch.nn.Sequential(*layers) 119 | else: 120 | raise NotImplementedError 121 | 122 | checkpoint = torch.load(args.model_path) 123 | 124 | if checkpoint['arch'] == 'DataParallel': 125 | # if we trained and saved our model using DataParallel 126 | model = torch.nn.DataParallel(model, device_ids=[0, 1, 2, 3, 4]) 127 | model.load_state_dict(checkpoint['model_state_dict']) 128 | model = model.module # get network module from inside its DataParallel wrapper 129 | else: 130 | model.load_state_dict(checkpoint['model_state_dict']) 131 | 132 | if cuda: 133 | model = model.cuda() 134 | 135 | # Convert the trained network into a "feature extractor" 136 | feature_map = list(model.children()) 137 | if args.model_type == 'resnet101-512d': 138 | model.eval() 139 | extractor = model 140 | extractor.fc = nn.Sequential(extractor.fc[0]) 141 | else: 142 | feature_map.pop() 143 | extractor = nn.Sequential(*feature_map) 144 | 145 | extractor.eval() # set to evaluation mode (fixes BatchNorm, dropout, etc.) 146 | 147 | 148 | # ----------------------------------------------------------------------------- 149 | # 3. Feature extraction 150 | # ----------------------------------------------------------------------------- 151 | features = [] 152 | 153 | for batch_idx, images in tqdm.tqdm(enumerate(test_loader), 154 | total=len(test_loader), 155 | desc='Extracting features'): 156 | x = Variable(images, volatile=True) # test-time memory conservation 157 | if cuda: 158 | x = x.cuda() 159 | feat = extractor(x) 160 | if cuda: 161 | feat = feat.data.cpu() 162 | else: 163 | feat = feat.data 164 | features.append(feat) 165 | 166 | features = torch.stack(features) 167 | sz = features.size() 168 | features = features.view(sz[0]*sz[1], sz[2]) 169 | features = F.normalize(features, p=2, dim=1) # L2-normalize 170 | # TODO - cache features 171 | 172 | 173 | # ----------------------------------------------------------------------------- 174 | # 4. 
Verification 175 | # ----------------------------------------------------------------------------- 176 | num_feat = features.size()[0] 177 | feat_pair1 = features[np.arange(0,num_feat,2),:] 178 | feat_pair2 = features[np.arange(1,num_feat,2),:] 179 | feat_dist = (feat_pair1 - feat_pair2).norm(p=2, dim=1) 180 | feat_dist = feat_dist.numpy() 181 | 182 | # Eval metrics 183 | scores = -feat_dist 184 | gt = np.asarray(issame_list) 185 | 186 | if args.fold == 0: 187 | fig_path = osp.join(here, 188 | args.exp_name + '_' + args.model_type + '_lfw_roc_devTest.png') 189 | roc_auc = sklearn.metrics.roc_auc_score(gt, scores) 190 | fpr, tpr, thresholds = sklearn.metrics.roc_curve(gt, scores) 191 | print 'ROC-AUC: %.04f' % roc_auc 192 | # Plot and save ROC curve 193 | fig = plt.figure() 194 | plt.title('ROC - lfw dev-test') 195 | plt.plot(fpr, tpr, lw=2, label='ROC (auc = %0.4f)' % roc_auc) 196 | plt.xlim([0.0, 1.0]) 197 | plt.ylim([0.0, 1.05]) 198 | plt.grid() 199 | plt.xlabel('False Positive Rate') 200 | plt.ylabel('True Positive Rate') 201 | plt.legend(loc='lower right') 202 | plt.tight_layout() 203 | else: 204 | # 10 fold 205 | fold_size = 600 # 600 pairs in each fold 206 | roc_auc = np.zeros(10) 207 | roc_eer = np.zeros(10) 208 | 209 | fig = plt.figure() 210 | plt.xlim([0.0, 1.0]) 211 | plt.ylim([0.0, 1.05]) 212 | plt.grid() 213 | plt.xlabel('False Positive Rate') 214 | plt.ylabel('True Positive Rate') 215 | 216 | for i in tqdm.tqdm(range(10)): 217 | start = i * fold_size 218 | end = (i+1) * fold_size 219 | scores_fold = scores[start:end] 220 | gt_fold = gt[start:end] 221 | roc_auc[i] = sklearn.metrics.roc_auc_score(gt_fold, scores_fold) 222 | fpr, tpr, _ = sklearn.metrics.roc_curve(gt_fold, scores_fold) 223 | # EER calc: https://yangcha.github.io/EER-ROC/ 224 | roc_eer[i] = brentq( 225 | lambda x: 1. - x - interpolate.interp1d(fpr, tpr)(x), 0., 1.) 
226 | plt.plot(fpr, tpr, alpha=0.4, 227 | lw=2, color='darkgreen', 228 | label='ROC(auc=%0.4f, eer=%0.4f)' % (roc_auc[i], roc_eer[i]) ) 229 | 230 | plt.title( 'AUC: %0.4f +/- %0.4f, EER: %0.4f +/- %0.4f' % 231 | (np.mean(roc_auc), np.std(roc_auc), 232 | np.mean(roc_eer), np.std(roc_eer)) ) 233 | plt.tight_layout() 234 | 235 | fig_path = osp.join(here, 236 | args.exp_name + '_' + args.model_type + '_lfw_roc_10fold.png') 237 | 238 | 239 | plt.savefig(fig_path, bbox_inches='tight') 240 | print 'ROC curve saved at: ' + fig_path 241 | 242 | 243 | 244 | 245 | 246 | 247 | if __name__ == '__main__': 248 | main() 249 | 250 | -------------------------------------------------------------------------------- /finetune.py: -------------------------------------------------------------------------------- 1 | import datetime 2 | import math 3 | import os 4 | import os.path as osp 5 | import shutil 6 | 7 | import numpy as np 8 | import PIL.Image 9 | import pytz 10 | import scipy.misc 11 | import torch 12 | from torch.autograd import Variable 13 | import torch.nn.functional as F 14 | import tqdm 15 | 16 | import utils 17 | import gc 18 | 19 | 20 | def lr_scheduler(optimizer, epoch, init_lr=0.001, lr_decay_epoch=7): 21 | """Decay learning rate by a factor of 0.1 every lr_decay_epoch epochs.""" 22 | lr = init_lr * (0.1**(epoch // lr_decay_epoch)) 23 | 24 | if epoch % lr_decay_epoch == 0: 25 | print('LR is set to {}'.format(lr)) 26 | 27 | for param_group in optimizer.param_groups: 28 | param_group['lr'] = lr 29 | 30 | return optimizer 31 | 32 | 33 | 34 | class Trainer(object): 35 | 36 | # ----------------------------------------------------------------------------- 37 | def __init__(self, cuda, model, criterion, optimizer, init_lr, 38 | train_loader, val_loader, out, max_iter, 39 | lr_decay_epoch=None, interval_validate=None): 40 | # ----------------------------------------------------------------------------- 41 | self.cuda = cuda 42 | 43 | self.model = model 44 | self.criterion = criterion 45 | self.optim = optimizer 46 | self.train_loader = train_loader 47 | self.val_loader = val_loader 48 | self.best_acc = 0 49 | self.init_lr = init_lr 50 | self.lr_decay_epoch = lr_decay_epoch 51 | self.epoch = 0 52 | self.iteration = 0 53 | self.max_iter = max_iter 54 | self.best_loss = 0 55 | 56 | self.timestamp_start = \ 57 | datetime.datetime.now(pytz.timezone('US/Eastern')) 58 | 59 | if interval_validate is None: 60 | self.interval_validate = len(self.train_loader) 61 | else: 62 | self.interval_validate = interval_validate 63 | 64 | self.out = out 65 | if not osp.exists(self.out): 66 | os.makedirs(self.out) 67 | 68 | self.log_headers = [ 69 | 'epoch', 70 | 'iteration', 71 | 'train/loss', 72 | 'train/acc', 73 | 'valid/loss', 74 | 'valid/acc', 75 | 'elapsed_time', 76 | ] 77 | if not osp.exists(osp.join(self.out, 'log.csv')): 78 | with open(osp.join(self.out, 'log.csv'), 'w') as f: 79 | f.write(','.join(self.log_headers) + '\n') 80 | 81 | 82 | 83 | 84 | # ----------------------------------------------------------------------------- 85 | def validate(self, max_num=500): 86 | # ----------------------------------------------------------------------------- 87 | training = self.model.module.fc.training 88 | self.model.eval() 89 | 90 | MAX_NUM = max_num # HACK: stop after 500 images 91 | 92 | n_class = len(self.val_loader.dataset.classes) 93 | val_loss = 0 94 | label_trues, label_preds = [], [] 95 | 96 | for batch_idx, (data, (target)) in tqdm.tqdm( 97 | enumerate(self.val_loader), total=len(self.val_loader), 98 | desc='Val=%d' 
% self.iteration, ncols=80, 99 | leave=False): 100 | 101 | # Computing val losses 102 | if self.cuda: 103 | data, target = data.cuda(), target.cuda() 104 | 105 | data, target = Variable(data), Variable(target) 106 | score = self.model(data) 107 | loss = self.criterion(score, target) 108 | 109 | if np.isnan(float(loss.data[0])): 110 | raise ValueError('loss is NaN while validating') 111 | 112 | val_loss += float(loss.data[0]) / len(data) 113 | 114 | lbl_pred = score.data.max(1)[1].cpu().numpy() 115 | lbl_true = target.data.cpu() 116 | lbl_pred = lbl_pred.squeeze() 117 | lbl_true = np.squeeze(lbl_true.numpy()) 118 | 119 | del target, score 120 | 121 | label_trues.append(lbl_true) 122 | label_preds.append(lbl_pred) 123 | 124 | del lbl_true, lbl_pred, data, loss 125 | 126 | if batch_idx > MAX_NUM: 127 | break 128 | 129 | # Computing metrics 130 | val_loss /= len(self.val_loader) 131 | val_acc = self.eval_metric(label_trues, label_preds) 132 | 133 | # Logging 134 | with open(osp.join(self.out, 'log.csv'), 'a') as f: 135 | elapsed_time = ( 136 | datetime.datetime.now(pytz.timezone('US/Eastern')) - 137 | self.timestamp_start).total_seconds() 138 | log = [self.epoch, self.iteration] + [''] * 2 + \ 139 | [val_loss, val_acc] + [elapsed_time] 140 | log = map(str, log) 141 | f.write(','.join(log) + '\n') 142 | 143 | del label_trues, label_preds 144 | 145 | # Saving the best performing model 146 | is_best = val_acc > self.best_acc 147 | if is_best: 148 | self.best_acc = val_acc 149 | 150 | torch.save({ 151 | 'epoch': self.epoch, 152 | 'iteration': self.iteration, 153 | 'arch': self.model.__class__.__name__, 154 | 'optim_state_dict': self.optim.state_dict(), 155 | 'model_state_dict': self.model.state_dict(), 156 | 'best_acc': self.best_acc, 157 | }, osp.join(self.out, 'checkpoint.pth.tar')) 158 | 159 | if is_best: 160 | shutil.copy(osp.join(self.out, 'checkpoint.pth.tar'), 161 | osp.join(self.out, 'model_best.pth.tar')) 162 | 163 | if training: 164 | self.model.module.fc.train() 165 | 166 | 167 | 168 | # ----------------------------------------------------------------------------- 169 | def train_epoch(self): 170 | # ----------------------------------------------------------------------------- 171 | self.model.module.fc.train() # for dataParallel finetuning (TODO - make general) 172 | n_class = len(self.train_loader.dataset.classes) 173 | 174 | for batch_idx, (data, target) in tqdm.tqdm( 175 | enumerate(self.train_loader), total=len(self.train_loader), 176 | desc='Train epoch=%d' % self.epoch, ncols=80, leave=False): 177 | 178 | iteration = batch_idx + self.epoch * len(self.train_loader) 179 | 180 | if self.iteration != 0 and (iteration - 1) != self.iteration: 181 | continue # for resuming 182 | self.iteration = iteration 183 | 184 | if self.iteration % self.interval_validate == 0: 185 | self.validate() 186 | 187 | assert self.model.module.fc.training 188 | 189 | # Computing Losses 190 | if self.cuda: 191 | data, target = data.cuda(), target.cuda() 192 | data, target = Variable(data), Variable(target) 193 | score = self.model(data) # batch_size x num_class 194 | 195 | loss = self.criterion(score, target) 196 | 197 | if np.isnan(float(loss.data[0])): 198 | raise ValueError('loss is NaN while training') 199 | # print list(self.model.parameters())[0].grad 200 | 201 | # Gradient descent 202 | self.optim.zero_grad() 203 | loss.backward() 204 | self.optim.step() 205 | 206 | # Computing metrics 207 | lbl_pred = score.data.max(1)[1].cpu().numpy() 208 | lbl_pred = lbl_pred.squeeze() 209 | lbl_true = 
target.data.cpu() 210 | lbl_true = np.squeeze(lbl_true.numpy()) 211 | train_accu = self.eval_metric([lbl_pred], [lbl_true]) 212 | 213 | # Logging 214 | with open(osp.join(self.out, 'log.csv'), 'a') as f: 215 | elapsed_time = ( 216 | datetime.datetime.now(pytz.timezone('US/Eastern')) - 217 | self.timestamp_start).total_seconds() 218 | log = [self.epoch, self.iteration] + [loss.data[0]] + \ 219 | [train_accu] + [''] * 2 + [elapsed_time] 220 | log = map(str, log) 221 | f.write(','.join(log) + '\n') 222 | # print '\nEpoch: ' + str(self.epoch) + ' Iter: ' + str(self.iteration) + \ 223 | # ' Loss: ' + str(loss.data[0]) 224 | 225 | if self.iteration >= self.max_iter: 226 | break 227 | 228 | 229 | # ----------------------------------------------------------------------------- 230 | def eval_metric(self, lbl_pred, lbl_true): 231 | # ----------------------------------------------------------------------------- 232 | # Over-all accuracy 233 | # TODO: per-class accuracy 234 | accu = [] 235 | for lt, lp in zip(lbl_true, lbl_pred): 236 | accu.append(np.mean(lt == lp)) 237 | return np.mean(accu) 238 | 239 | 240 | # ----------------------------------------------------------------------------- 241 | def train(self): 242 | # ----------------------------------------------------------------------------- 243 | max_epoch = int(math.ceil(1. * self.max_iter / len(self.train_loader))) 244 | print 'Number of iters in an epoch: %d' % len(self.train_loader) 245 | print 'Total epochs: %d' % max_epoch 246 | 247 | for epoch in tqdm.trange(self.epoch, max_epoch, 248 | desc='Train epochs', ncols=80, leave=True): 249 | self.epoch = epoch 250 | 251 | if self.lr_decay_epoch is None: 252 | pass 253 | else: 254 | assert self.lr_decay_epoch < max_epoch 255 | lr_scheduler(self.optim, self.epoch, 256 | init_lr=self.init_lr, 257 | lr_decay_epoch=self.lr_decay_epoch) 258 | 259 | self.train_epoch() 260 | if self.iteration >= self.max_iter: 261 | break 262 | 263 | -------------------------------------------------------------------------------- /train.py: -------------------------------------------------------------------------------- 1 | import datetime 2 | import math 3 | import os 4 | import os.path as osp 5 | import shutil 6 | 7 | import numpy as np 8 | import PIL.Image 9 | import pytz 10 | import scipy.misc 11 | import torch 12 | from torch.autograd import Variable 13 | import torch.nn.functional as F 14 | import tqdm 15 | 16 | import utils 17 | import gc 18 | 19 | 20 | def lr_scheduler(optimizer, epoch, init_lr=0.001, lr_decay_epoch=7): 21 | """Decay learning rate by a factor of 0.1 every lr_decay_epoch epochs.""" 22 | lr = init_lr * (0.1**(epoch // lr_decay_epoch)) 23 | 24 | if epoch % lr_decay_epoch == 0: 25 | print('LR is set to {}'.format(lr)) 26 | 27 | for param_group in optimizer.param_groups: 28 | param_group['lr'] = lr 29 | 30 | return optimizer 31 | 32 | 33 | 34 | class Trainer(object): 35 | 36 | # ----------------------------------------------------------------------------- 37 | def __init__(self, cuda, model, criterion, optimizer, init_lr, 38 | train_loader, val_loader, out, max_iter, 39 | lr_decay_epoch=None, interval_validate=None): 40 | # ----------------------------------------------------------------------------- 41 | self.cuda = cuda 42 | 43 | self.model = model 44 | self.criterion = criterion 45 | self.optim = optimizer 46 | self.train_loader = train_loader 47 | self.val_loader = val_loader 48 | self.best_acc = 0 49 | self.init_lr = init_lr 50 | self.lr_decay_epoch = lr_decay_epoch 51 | self.epoch = 0 52 | 
self.iteration = 0 53 | self.max_iter = max_iter 54 | self.best_loss = 0 55 | 56 | self.timestamp_start = \ 57 | datetime.datetime.now(pytz.timezone('US/Eastern')) 58 | 59 | if interval_validate is None: 60 | self.interval_validate = len(self.train_loader) 61 | else: 62 | self.interval_validate = interval_validate 63 | 64 | self.out = out 65 | if not osp.exists(self.out): 66 | os.makedirs(self.out) 67 | 68 | self.log_headers = [ 69 | 'epoch', 70 | 'iteration', 71 | 'train/loss', 72 | 'train/acc', 73 | 'valid/loss', 74 | 'valid/acc', 75 | 'elapsed_time', 76 | ] 77 | if not osp.exists(osp.join(self.out, 'log.csv')): 78 | with open(osp.join(self.out, 'log.csv'), 'w') as f: 79 | f.write(','.join(self.log_headers) + '\n') 80 | 81 | 82 | 83 | 84 | # ----------------------------------------------------------------------------- 85 | def validate(self, max_num=500): 86 | # ----------------------------------------------------------------------------- 87 | training = self.model.training 88 | self.model.eval() 89 | MAX_NUM = max_num # HACK: stop after 500 images 90 | 91 | n_class = len(self.val_loader.dataset.classes) 92 | val_loss = 0 93 | label_trues, label_preds = [], [] 94 | 95 | for batch_idx, (data, (target)) in tqdm.tqdm( 96 | enumerate(self.val_loader), total=len(self.val_loader), 97 | desc='Val=%d' % self.iteration, ncols=80, 98 | leave=False): 99 | 100 | # Computing val losses 101 | if self.cuda: 102 | data, target = data.cuda(), target.cuda() 103 | 104 | data, target = Variable(data), Variable(target) 105 | score = self.model(data) 106 | loss = self.criterion(score, target) 107 | 108 | if np.isnan(float(loss.data[0])): 109 | raise ValueError('loss is NaN while validating') 110 | 111 | val_loss += float(loss.data[0]) / len(data) 112 | 113 | lbl_pred = score.data.max(1)[1].cpu().numpy() 114 | lbl_true = target.data.cpu() 115 | lbl_pred = lbl_pred.squeeze() 116 | lbl_true = np.squeeze(lbl_true.numpy()) 117 | 118 | del target, score 119 | 120 | label_trues.append(lbl_true) 121 | label_preds.append(lbl_pred) 122 | 123 | del lbl_true, lbl_pred, data, loss 124 | 125 | if batch_idx > MAX_NUM: 126 | break 127 | 128 | # Computing metrics 129 | val_loss /= len(self.val_loader) 130 | val_acc = self.eval_metric(label_trues, label_preds) 131 | 132 | # Logging 133 | with open(osp.join(self.out, 'log.csv'), 'a') as f: 134 | elapsed_time = ( 135 | datetime.datetime.now(pytz.timezone('US/Eastern')) - 136 | self.timestamp_start).total_seconds() 137 | log = [self.epoch, self.iteration] + [''] * 2 + \ 138 | [val_loss, val_acc] + [elapsed_time] 139 | log = map(str, log) 140 | f.write(','.join(log) + '\n') 141 | 142 | del label_trues, label_preds 143 | 144 | # Saving the best performing model 145 | is_best = val_acc > self.best_acc 146 | if is_best: 147 | self.best_acc = val_acc 148 | 149 | torch.save({ 150 | 'epoch': self.epoch, 151 | 'iteration': self.iteration, 152 | 'arch': self.model.__class__.__name__, 153 | 'optim_state_dict': self.optim.state_dict(), 154 | 'model_state_dict': self.model.state_dict(), 155 | 'best_acc': self.best_acc, 156 | }, osp.join(self.out, 'checkpoint.pth.tar')) 157 | 158 | if is_best: 159 | shutil.copy(osp.join(self.out, 'checkpoint.pth.tar'), 160 | osp.join(self.out, 'model_best.pth.tar')) 161 | 162 | if training: 163 | self.model.train() 164 | 165 | 166 | 167 | # ----------------------------------------------------------------------------- 168 | def train_epoch(self): 169 | # ----------------------------------------------------------------------------- 170 | self.model.train() 171 
| n_class = len(self.train_loader.dataset.classes) 172 | 173 | for batch_idx, (data, target) in tqdm.tqdm( 174 | enumerate(self.train_loader), total=len(self.train_loader), 175 | desc='Train epoch=%d' % self.epoch, ncols=80, leave=False): 176 | 177 | if batch_idx == len(self.train_loader)-1: 178 | break # discard last batch in epoch (unequal batch-sizes mess up BatchNorm) 179 | 180 | iteration = batch_idx + self.epoch * len(self.train_loader) 181 | if self.iteration != 0 and (iteration - 1) != self.iteration: 182 | continue # for resuming 183 | self.iteration = iteration 184 | 185 | if self.iteration % self.interval_validate == 0: 186 | self.validate() 187 | 188 | assert self.model.training 189 | 190 | # Computing Losses 191 | if self.cuda: 192 | data, target = data.cuda(), target.cuda() 193 | data, target = Variable(data), Variable(target) 194 | score = self.model(data) # batch_size x num_class 195 | 196 | loss = self.criterion(score, target) 197 | 198 | if np.isnan(float(loss.data[0])): 199 | raise ValueError('loss is NaN while training') 200 | # print list(self.model.parameters())[0].grad 201 | 202 | # Gradient descent 203 | self.optim.zero_grad() 204 | loss.backward() 205 | self.optim.step() 206 | 207 | # Computing metrics 208 | lbl_pred = score.data.max(1)[1].cpu().numpy() 209 | lbl_pred = lbl_pred.squeeze() 210 | lbl_true = target.data.cpu() 211 | lbl_true = np.squeeze(lbl_true.numpy()) 212 | train_accu = self.eval_metric([lbl_pred], [lbl_true]) 213 | 214 | # Logging 215 | with open(osp.join(self.out, 'log.csv'), 'a') as f: 216 | elapsed_time = ( 217 | datetime.datetime.now(pytz.timezone('US/Eastern')) - 218 | self.timestamp_start).total_seconds() 219 | log = [self.epoch, self.iteration] + [loss.data[0]] + \ 220 | [train_accu] + [''] * 2 + [elapsed_time] 221 | log = map(str, log) 222 | f.write(','.join(log) + '\n') 223 | # print '\nEpoch: ' + str(self.epoch) + ' Iter: ' + str(self.iteration) + \ 224 | # ' Loss: ' + str(loss.data[0]) 225 | 226 | if self.iteration >= self.max_iter: 227 | break 228 | 229 | 230 | # ----------------------------------------------------------------------------- 231 | def eval_metric(self, lbl_pred, lbl_true): 232 | # ----------------------------------------------------------------------------- 233 | # Over-all accuracy 234 | # TODO: per-class accuracy 235 | accu = [] 236 | for lt, lp in zip(lbl_true, lbl_pred): 237 | accu.append(np.mean(lt == lp)) 238 | return np.mean(accu) 239 | 240 | 241 | # ----------------------------------------------------------------------------- 242 | def train(self): 243 | # ----------------------------------------------------------------------------- 244 | max_epoch = int(math.ceil(1. 
* self.max_iter / len(self.train_loader))) 245 | print 'Number of iters in an epoch: %d' % len(self.train_loader) 246 | print 'Total epochs: %d' % max_epoch 247 | 248 | for epoch in tqdm.trange(self.epoch, max_epoch, 249 | desc='Train epochs', ncols=80, leave=True): 250 | self.epoch = epoch 251 | 252 | if self.lr_decay_epoch is None: 253 | pass 254 | else: 255 | assert self.lr_decay_epoch < max_epoch 256 | lr_scheduler(self.optim, self.epoch, 257 | init_lr=self.init_lr, 258 | lr_decay_epoch=self.lr_decay_epoch) 259 | 260 | self.train_epoch() 261 | if self.iteration >= self.max_iter: 262 | break 263 | 264 | -------------------------------------------------------------------------------- /umd-face/train_resnet_umdface.py: -------------------------------------------------------------------------------- 1 | import argparse 2 | import datetime 3 | import os 4 | import os.path as osp 5 | import pytz 6 | 7 | import torch 8 | import torchvision 9 | from torchvision import models 10 | import torch.nn as nn 11 | import torch.optim 12 | import torch.utils.data 13 | import torchvision.transforms as transforms 14 | import torchvision.datasets as datasets 15 | from torch.autograd import Variable 16 | 17 | import yaml 18 | import numpy as np 19 | import matplotlib 20 | matplotlib.use('Agg') 21 | import matplotlib.pyplot as plt 22 | 23 | here = osp.dirname(osp.abspath(__file__)) # output folder is located here 24 | root_dir,_ = osp.split(here) 25 | import sys 26 | sys.path.append(root_dir) 27 | 28 | import train 29 | from config import configurations 30 | import utils 31 | 32 | 33 | 34 | 35 | 36 | def main(): 37 | parser = argparse.ArgumentParser() 38 | parser.add_argument('-e', '--exp_name', default='resnet_umdfaces') 39 | parser.add_argument('-c', '--config', type=int, default=1, 40 | choices=configurations.keys()) 41 | parser.add_argument('-d', '--dataset_path', 42 | default='/srv/data1/arunirc/datasets/UMDFaces/face_crops') 43 | parser.add_argument('-m', '--model_path', default=None, 44 | help='Initialize from pre-trained model') 45 | parser.add_argument('--resume', help='Checkpoint path') 46 | args = parser.parse_args() 47 | 48 | # gpu = args.gpu 49 | cfg = configurations[args.config] 50 | out = get_log_dir(args.exp_name, args.config, cfg, verbose=False) 51 | resume = args.resume 52 | 53 | # os.environ['CUDA_VISIBLE_DEVICES'] = str(gpu) 54 | cuda = torch.cuda.is_available() 55 | 56 | torch.manual_seed(1337) 57 | if cuda: 58 | torch.cuda.manual_seed(1337) 59 | torch.backends.cudnn.enabled = True 60 | torch.backends.cudnn.benchmark = True # enable if all images are same size 61 | 62 | 63 | 64 | # ----------------------------------------------------------------------------- 65 | # 1. Dataset 66 | # ----------------------------------------------------------------------------- 67 | # Images should be arranged like this: 68 | # data_root/ 69 | # class_1/....jpg.. 70 | # class_2/....jpg.. 71 | # ......./....jpg.. 
72 | data_root = args.dataset_path 73 | kwargs = {'num_workers': 4, 'pin_memory': True} if cuda else {} 74 | RGB_MEAN = [ 0.485, 0.456, 0.406 ] 75 | RGB_STD = [ 0.229, 0.224, 0.225 ] 76 | 77 | # Data transforms 78 | # http://pytorch.org/docs/master/torchvision/transforms.html 79 | train_transform = transforms.Compose([ 80 | transforms.Scale(256), # smaller side resized 81 | transforms.RandomCrop(224), 82 | transforms.RandomHorizontalFlip(), 83 | transforms.ToTensor(), 84 | transforms.Normalize(mean = RGB_MEAN, 85 | std = RGB_STD), 86 | ]) 87 | val_transform = transforms.Compose([ 88 | transforms.Scale((224,224)), 89 | # transforms.CenterCrop(224), 90 | transforms.ToTensor(), 91 | transforms.Normalize(mean = RGB_MEAN, 92 | std = RGB_STD), 93 | ]) 94 | 95 | # Data loaders - using PyTorch built-in objects 96 | # loader = DataLoaderClass(DatasetClass) 97 | # * `DataLoaderClass` is PyTorch provided torch.utils.data.DataLoader 98 | # * `DatasetClass` loads samples from a dataset; can be a standard class 99 | # provided by PyTorch (datasets.ImageFolder) or a custom-made class. 100 | # - More info: http://pytorch.org/docs/master/torchvision/datasets.html#imagefolder 101 | # * Balanced class sampling: https://discuss.pytorch.org/t/balanced-sampling-between-classes-with-torchvision-dataloader/2703/3 102 | traindir = osp.join(data_root, 'train') 103 | dataset_train = datasets.ImageFolder(traindir, train_transform) 104 | # For unbalanced dataset we create a weighted sampler 105 | weights = utils.make_weights_for_balanced_classes( 106 | dataset_train.imgs, len(dataset_train.classes)) 107 | weights = torch.DoubleTensor(weights) 108 | sampler = torch.utils.data.sampler.WeightedRandomSampler(weights, len(weights)) 109 | train_loader = torch.utils.data.DataLoader( 110 | dataset_train, batch_size=cfg['batch_size'], 111 | sampler = sampler, **kwargs) 112 | 113 | valdir = osp.join(data_root, 'val') 114 | val_loader = torch.utils.data.DataLoader( 115 | datasets.ImageFolder(valdir, val_transform), 116 | batch_size=cfg['batch_size'], shuffle=False, **kwargs) 117 | 118 | # print 'dataset classes:' + str(train_loader.dataset.classes) 119 | num_class = len(train_loader.dataset.classes) 120 | print 'Number of classes: %d' % num_class 121 | 122 | 123 | 124 | # ----------------------------------------------------------------------------- 125 | # 2. 
Model 126 | # ----------------------------------------------------------------------------- 127 | model = torchvision.models.resnet50(pretrained=True) # ImageNet pre-trained for quicker convergence 128 | 129 | # Check if final fc layer sizes match num_class 130 | if not model.fc.weight.size()[0] == num_class: 131 | # Replace last layer 132 | print model.fc 133 | model.fc = torch.nn.Linear(2048, num_class) 134 | print model.fc 135 | else: 136 | pass 137 | 138 | # TODO - config options for DataParallel and device_ids 139 | model = torch.nn.DataParallel(model, device_ids=[0, 1, 2, 3, 4]) 140 | 141 | if cuda: 142 | model.cuda() 143 | 144 | if args.model_path: 145 | # If existing model is to be loaded from a file 146 | checkpoint = torch.load(args.model_path) 147 | model.load_state_dict(checkpoint['model_state_dict']) 148 | 149 | start_epoch = 0 150 | start_iteration = 0 151 | 152 | # Loss - cross entropy between predicted scores (unnormalized) and class labels (integers) 153 | criterion = nn.CrossEntropyLoss() 154 | if cuda: 155 | criterion = criterion.cuda() 156 | 157 | if resume: 158 | # Resume training from last saved checkpoint 159 | checkpoint = torch.load(resume) 160 | model.load_state_dict(checkpoint['model_state_dict']) 161 | start_epoch = checkpoint['epoch'] 162 | start_iteration = checkpoint['iteration'] 163 | else: 164 | pass 165 | 166 | 167 | # ----------------------------------------------------------------------------- 168 | # 3. Optimizer 169 | # ----------------------------------------------------------------------------- 170 | params = filter(lambda p: p.requires_grad, model.parameters()) 171 | # Parameters with p.requires_grad=False are not updated during training. 172 | # This can be specified when defining the nn.Modules during model creation 173 | 174 | if 'optim' in cfg.keys(): 175 | if cfg['optim'].lower()=='sgd': 176 | optim = torch.optim.SGD(params, 177 | lr=cfg['lr'], 178 | momentum=cfg['momentum'], 179 | weight_decay=cfg['weight_decay']) 180 | 181 | elif cfg['optim'].lower()=='adam': 182 | optim = torch.optim.Adam(params, 183 | lr=cfg['lr'], weight_decay=cfg['weight_decay']) 184 | 185 | else: 186 | raise NotImplementedError('Optimizers: SGD or Adam') 187 | else: 188 | optim = torch.optim.SGD(params, 189 | lr=cfg['lr'], 190 | momentum=cfg['momentum'], 191 | weight_decay=cfg['weight_decay']) 192 | 193 | if resume: 194 | optim.load_state_dict(checkpoint['optim_state_dict']) 195 | 196 | 197 | # ----------------------------------------------------------------------------- 198 | # [optional] Sanity-check: forward pass with a single batch 199 | # ----------------------------------------------------------------------------- 200 | DEBUG = False 201 | if DEBUG: 202 | # model = model.cpu() 203 | dataiter = iter(val_loader) 204 | img, label = dataiter.next() 205 | 206 | print 'Labels: ' + str(label.size()) # batchSize x num_class 207 | print 'Input: ' + str(img.size()) # batchSize x 3 x 224 x 224 208 | 209 | im = img.squeeze().numpy() 210 | im = im[0,:,:,:] # get first image in the batch 211 | im = im.transpose((1,2,0)) # permute to 224x224x3 212 | im = im * [ 0.229, 0.224, 0.225 ] # unnormalize 213 | im = im + [ 0.485, 0.456, 0.406 ] 214 | im[im<0] = 0 215 | 216 | f = plt.figure() 217 | plt.imshow(im) 218 | plt.savefig('sanity-check-im.jpg') # save transformed image in current folder 219 | 220 | inputs = Variable(img) 221 | if cuda: 222 | inputs = inputs.cuda() 223 | 224 | model.eval() 225 | outputs = model(inputs) 226 | print 'Network output: ' + str(outputs.size()) 227 | 
model.train() 228 | import pdb; pdb.set_trace() # breakpoint c5e7c878 // 229 | 230 | 231 | else: 232 | pass 233 | 234 | 235 | # ----------------------------------------------------------------------------- 236 | # 4. Training 237 | # ----------------------------------------------------------------------------- 238 | trainer = train.Trainer( 239 | cuda=cuda, 240 | model=model, 241 | criterion=criterion, 242 | optimizer=optim, 243 | init_lr=cfg['lr'], 244 | lr_decay_epoch = cfg['lr_decay_epoch'], 245 | train_loader=train_loader, 246 | val_loader=val_loader, 247 | out=out, 248 | max_iter=cfg['max_iteration'], 249 | interval_validate=cfg.get('interval_validate', len(train_loader)), 250 | ) 251 | 252 | trainer.epoch = start_epoch 253 | trainer.iteration = start_iteration 254 | trainer.train() 255 | 256 | 257 | 258 | def get_log_dir(model_name, config_id, cfg, verbose=True): 259 | # Creates an output directory for each experiment, timestamped 260 | name = 'MODEL-%s_CFG-%03d' % (model_name, config_id) 261 | if verbose: 262 | for k, v in cfg.items(): 263 | v = str(v) 264 | if '/' in v: 265 | continue 266 | name += '_%s-%s' % (k.upper(), v) 267 | now = datetime.datetime.now(pytz.timezone('US/Eastern')) 268 | name += '_TIME-%s' % now.strftime('%Y%m%d-%H%M%S') 269 | log_dir = osp.join(here, 'logs', name) 270 | if not osp.exists(log_dir): 271 | os.makedirs(log_dir) 272 | with open(osp.join(log_dir, 'config.yaml'), 'w') as f: 273 | yaml.safe_dump(cfg, f, default_flow_style=False) 274 | return log_dir 275 | 276 | 277 | if __name__ == '__main__': 278 | main() 279 | 280 | -------------------------------------------------------------------------------- /utils.py: -------------------------------------------------------------------------------- 1 | from __future__ import division 2 | 3 | import math 4 | import warnings 5 | 6 | try: 7 | import cv2 8 | except ImportError: 9 | cv2 = None 10 | 11 | import numpy as np 12 | import scipy.ndimage 13 | import six 14 | import skimage 15 | import skimage.color 16 | from skimage import img_as_ubyte 17 | import os 18 | import os.path as osp 19 | import matplotlib 20 | matplotlib.use('agg') 21 | import matplotlib.pyplot as plt 22 | import csv 23 | import scipy.signal 24 | 25 | 26 | 27 | 28 | def make_weights_for_balanced_classes(images, nclasses): 29 | ''' 30 | Make a vector of weights for each image in the dataset, based 31 | on class frequency. The returned vector of weights can be used 32 | to create a WeightedRandomSampler for a DataLoader to have 33 | class balancing when sampling for a training batch. 34 | images - torchvisionDataset.imgs 35 | nclasses - len(torchvisionDataset.classes) 36 | https://discuss.pytorch.org/t/balanced-sampling-between-classes-with-torchvision-dataloader/2703/3 37 | ''' 38 | count = [0] * nclasses 39 | for item in images: 40 | count[item[1]] += 1 # item is (img-data, label-id) 41 | weight_per_class = [0.] 
* nclasses 42 | N = float(sum(count)) # total number of images 43 | for i in range(nclasses): 44 | weight_per_class[i] = N/float(count[i]) 45 | weight = [0] * len(images) 46 | for idx, val in enumerate(images): 47 | weight[idx] = weight_per_class[val[1]] 48 | return weight 49 | 50 | 51 | def get_vgg_class_counts(log_path): 52 | ''' Dict of class frequencies from pre-computed text file ''' 53 | data_1 = np.genfromtxt(log_path, dtype=None) 54 | class_names = [x[0] for x in data_1] 55 | class_counts = [x[1] for x in data_1] 56 | class_count_dict = dict(zip(class_names, class_counts)) 57 | return class_count_dict 58 | 59 | 60 | 61 | def plot_log_csv(log_path): 62 | log_dir, _ = osp.split(log_path) 63 | dat = np.genfromtxt(log_path, names=True, 64 | delimiter=',', autostrip=True) 65 | 66 | train_loss = dat['trainloss'] 67 | train_loss_sel = ~np.isnan(train_loss) 68 | train_loss = train_loss[train_loss_sel] 69 | iter_train_loss = dat['iteration'][train_loss_sel] 70 | 71 | train_acc = dat['trainacc'] 72 | train_acc_sel = ~np.isnan(train_acc) 73 | train_acc = train_acc[train_acc_sel] 74 | iter_train_acc = dat['iteration'][train_acc_sel] 75 | 76 | val_loss = dat['validloss'] 77 | val_loss_sel = ~np.isnan(val_loss) 78 | val_loss = val_loss[val_loss_sel] 79 | iter_val_loss = dat['iteration'][val_loss_sel] 80 | 81 | mean_iu = dat['validacc'] 82 | mean_iu_sel = ~np.isnan(mean_iu) 83 | mean_iu = mean_iu[mean_iu_sel] 84 | iter_mean_iu = dat['iteration'][mean_iu_sel] 85 | 86 | fig, ax = plt.subplots(nrows=2, ncols=2) 87 | 88 | plt.subplot(2, 2, 1) 89 | plt.plot(iter_train_acc, train_acc, label='train') 90 | plt.ylabel('accuracy') 91 | plt.grid() 92 | plt.legend() 93 | plt.tight_layout() 94 | 95 | plt.subplot(2, 2, 2) 96 | plt.plot(iter_mean_iu, mean_iu, label='val') 97 | plt.grid() 98 | plt.legend() 99 | plt.tight_layout() 100 | 101 | plt.subplot(2, 2, 3) 102 | plt.plot(iter_train_loss, train_loss, label='train') 103 | plt.xlabel('iteration') 104 | plt.ylabel('loss') 105 | plt.grid() 106 | plt.legend() 107 | plt.tight_layout() 108 | 109 | plt.subplot(2, 2, 4) 110 | plt.plot(iter_val_loss, val_loss, label='val') 111 | plt.xlabel('iteration') 112 | plt.grid() 113 | plt.legend() 114 | plt.tight_layout() 115 | 116 | plt.savefig(osp.join(log_dir, 'log_plots.png'), bbox_inches='tight') 117 | 118 | 119 | 120 | 121 | 122 | def plot_log(log_path): 123 | log_dir, _ = osp.split(log_path) 124 | epoch = [] 125 | iteration = [] 126 | train_loss = [] 127 | train_acc = [] 128 | val_loss = [] 129 | val_acc = [] 130 | g = lambda x: x if x!='' else float('nan') 131 | reader = csv.reader( open(log_path, 'rb')) 132 | next(reader) # Skip header row. 
133 | for line in reader: 134 | line_fields = [g(x) for x in line] 135 | epoch.append(float(line_fields[0])) 136 | iteration.append(float(line_fields[1])) 137 | train_loss.append(float(line_fields[2])) 138 | train_acc.append(float(line_fields[3])) 139 | val_loss.append(float(line_fields[4])) 140 | val_acc.append(float(line_fields[5])) 141 | 142 | epoch = np.array(epoch) 143 | iteration = np.array(iteration) 144 | train_loss = np.array(train_loss) 145 | train_acc = np.array(train_acc) 146 | val_loss = np.array(val_loss) 147 | val_acc = np.array(val_acc) 148 | 149 | train_loss_sel = ~np.isnan(train_loss) 150 | train_loss = train_loss[train_loss_sel] 151 | iter_train_loss = iteration[train_loss_sel] 152 | 153 | train_acc_sel = ~np.isnan(train_acc) 154 | train_acc = train_acc[train_acc_sel] 155 | iter_train_acc = iteration[train_acc_sel] 156 | 157 | val_loss_sel = ~np.isnan(val_loss) 158 | val_loss = val_loss[val_loss_sel] 159 | iter_val_loss = iteration[val_loss_sel] 160 | 161 | val_acc_sel = ~np.isnan(val_acc) 162 | val_acc = val_acc[val_acc_sel] 163 | iter_val_acc = iteration[val_acc_sel] 164 | 165 | fig, ax = plt.subplots(nrows=2, ncols=2) 166 | 167 | plt.subplot(2, 2, 1) 168 | plt.plot(iter_train_acc, train_acc, label='train', alpha=0.5, color='C0') 169 | box_pts = np.rint(np.sqrt(len(train_acc))).astype(np.int) 170 | plt.plot(iter_train_acc, savgol_smooth(train_acc, box_pts), color='C0') 171 | plt.ylabel('accuracy') 172 | plt.grid() 173 | plt.legend() 174 | plt.title('Training') 175 | plt.tight_layout() 176 | 177 | plt.subplot(2, 2, 2) 178 | plt.plot(iter_val_acc, val_acc, label='val', alpha=0.5, color='C1') 179 | box_pts = np.rint(np.sqrt(len(val_acc))).astype(np.int) 180 | plt.plot(iter_val_acc, savgol_smooth(val_acc, box_pts), color='C1') 181 | plt.grid() 182 | plt.legend() 183 | plt.title('Validation') 184 | plt.tight_layout() 185 | 186 | plt.subplot(2, 2, 3) 187 | plt.plot(iter_train_loss, train_loss, label='train', alpha=0.5, color='C0') 188 | box_pts = np.rint(np.sqrt(len(train_loss))).astype(np.int) 189 | plt.plot(iter_train_loss, savgol_smooth(train_loss, box_pts), color='C0') 190 | plt.xlabel('iteration') 191 | plt.ylabel('loss') 192 | plt.grid() 193 | plt.legend() 194 | plt.tight_layout() 195 | 196 | plt.subplot(2, 2, 4) 197 | plt.plot(iter_val_loss, val_loss, label='val', alpha=0.5, color='C1') 198 | box_pts = np.rint(np.sqrt(len(val_loss))).astype(np.int) 199 | plt.plot(iter_val_loss, savgol_smooth(val_loss, box_pts), color='C1') 200 | plt.xlabel('iteration') 201 | plt.grid() 202 | plt.legend() 203 | plt.tight_layout() 204 | 205 | plt.savefig(osp.join(log_dir, 'log_plots.png'), bbox_inches='tight') 206 | 207 | 208 | def savgol_smooth(y, box_pts): 209 | # use the Savitzky-Golay filter for 1-D smoothing 210 | if box_pts % 2 == 0: 211 | box_pts += 1 212 | y_smooth = scipy.signal.savgol_filter(y, box_pts, 2) 213 | return y_smooth 214 | 215 | # ----------------------------------------------------------------------------- 216 | # LFW helper code from FaceNet: https://github.com/davidsandberg/facenet 217 | # ----------------------------------------------------------------------------- 218 | 219 | # MIT License 220 | # 221 | # Copyright (c) 2016 David Sandberg 222 | # 223 | # Permission is hereby granted, free of charge, to any person obtaining a copy 224 | # of this software and associated documentation files (the "Software"), to deal 225 | # in the Software without restriction, including without limitation the rights 226 | # to use, copy, modify, merge, publish, distribute, 
sublicense, and/or sell 227 | # copies of the Software, and to permit persons to whom the Software is 228 | # furnished to do so, subject to the following conditions: 229 | # 230 | # The above copyright notice and this permission notice shall be included in all 231 | # copies or substantial portions of the Software. 232 | # 233 | # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 234 | # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 235 | # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 236 | # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 237 | # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 238 | # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 239 | # SOFTWARE. 240 | def get_paths(lfw_dir, pairs, file_ext): 241 | nrof_skipped_pairs = 0 242 | path_list = [] 243 | issame_list = [] 244 | for pair in pairs: 245 | if len(pair) == 3: 246 | path0 = os.path.join(lfw_dir, pair[0], pair[0] + '_' + '%04d' % int(pair[1])+'.'+file_ext) 247 | path1 = os.path.join(lfw_dir, pair[0], pair[0] + '_' + '%04d' % int(pair[2])+'.'+file_ext) 248 | issame = True 249 | elif len(pair) == 4: 250 | path0 = os.path.join(lfw_dir, pair[0], pair[0] + '_' + '%04d' % int(pair[1])+'.'+file_ext) 251 | path1 = os.path.join(lfw_dir, pair[2], pair[2] + '_' + '%04d' % int(pair[3])+'.'+file_ext) 252 | issame = False 253 | if os.path.exists(path0) and os.path.exists(path1): # Only add the pair if both paths exist 254 | path_list += (path0,path1) 255 | issame_list.append(issame) 256 | else: 257 | nrof_skipped_pairs += 1 258 | if nrof_skipped_pairs>0: 259 | print('Skipped %d image pairs' % nrof_skipped_pairs) 260 | 261 | return path_list, issame_list 262 | 263 | 264 | def read_pairs(pairs_filename, lfw_flag=True): 265 | pairs = [] 266 | with open(pairs_filename, 'r') as f: 267 | if lfw_flag: 268 | for line in f.readlines()[1:]: 269 | pair = line.strip().split() 270 | pairs.append(pair) 271 | else: 272 | for line in f.readlines(): 273 | pair = line.strip().split() 274 | pairs.append(pair) 275 | return np.array(pairs) 276 | 277 | 278 | # ----------------------------------------------------------------------------- 279 | # IJB-A helper code 280 | # ----------------------------------------------------------------------------- 281 | def get_ijba_1_1_metadata(protocol_file): 282 | metadata = {} 283 | template_id = [] 284 | subject_id = [] 285 | img_filename = [] 286 | media_id = [] 287 | sighting_id = [] 288 | 289 | with open(protocol_file, 'r') as f: 290 | for line in f.readlines()[1:]: 291 | line_fields = line.strip().split(',') 292 | template_id.append(int(line_fields[0])) 293 | subject_id.append(int(line_fields[1])) 294 | img_filename.append(line_fields[2]) 295 | media_id.append(int(line_fields[3])) 296 | sighting_id.append(int(line_fields[4])) 297 | 298 | metadata['template_id'] = np.array(template_id) 299 | metadata['subject_id'] = np.array(subject_id) 300 | metadata['img_filename'] = np.array(img_filename) 301 | metadata['media_id'] = np.array(media_id) 302 | metadata['sighting_id'] = np.array(sighting_id) 303 | return metadata 304 | 305 | 306 | def read_ijba_pairs(pairs_filename): 307 | pairs = [] 308 | with open(pairs_filename, 'r') as f: 309 | for line in f.readlines(): 310 | pair = line.strip().split(',') 311 | pairs.append(pair) 312 | return np.array(pairs).astype(np.int) 313 | 314 | 
-------------------------------------------------------------------------------- /vgg-face-2/train_resnet50_vggface_scratch.py: -------------------------------------------------------------------------------- 1 | import argparse 2 | import datetime 3 | import os 4 | import os.path as osp 5 | import pytz 6 | 7 | import torch 8 | import torchvision 9 | from torchvision import models 10 | import torch.nn as nn 11 | import torch.optim 12 | import torch.utils.data 13 | import torchvision.transforms as transforms 14 | import torchvision.datasets as datasets 15 | from torch.autograd import Variable 16 | 17 | import yaml 18 | import numpy as np 19 | import matplotlib 20 | matplotlib.use('Agg') 21 | import matplotlib.pyplot as plt 22 | 23 | here = osp.dirname(osp.abspath(__file__)) # output folder is located here 24 | root_dir,_ = osp.split(here) 25 | import sys 26 | sys.path.append(root_dir) 27 | 28 | import train 29 | import models 30 | import utils 31 | from config import configurations 32 | 33 | 34 | 35 | 36 | def main(): 37 | parser = argparse.ArgumentParser() 38 | parser.add_argument('-e', '--exp_name', default='resnet50_vggface') 39 | parser.add_argument('-c', '--config', type=int, default=1, 40 | choices=configurations.keys()) 41 | parser.add_argument('-d', '--dataset_path', 42 | default='/srv/data1/arunirc/datasets/vggface2') 43 | parser.add_argument('-m', '--model_path', default=None, 44 | help='Initialize from pre-trained model') 45 | parser.add_argument('--resume', help='Checkpoint path') 46 | parser.add_argument('--bottleneck', action='store_true', default=False, 47 | help='Add a 512-dim bottleneck layer with L2 normalization') 48 | args = parser.parse_args() 49 | 50 | # gpu = args.gpu 51 | cfg = configurations[args.config] 52 | out = get_log_dir(args.exp_name, args.config, cfg, verbose=False) 53 | resume = args.resume 54 | 55 | # os.environ['CUDA_VISIBLE_DEVICES'] = str(gpu) 56 | cuda = torch.cuda.is_available() 57 | 58 | torch.manual_seed(1337) 59 | if cuda: 60 | torch.cuda.manual_seed(1337) 61 | torch.backends.cudnn.enabled = True 62 | torch.backends.cudnn.benchmark = True # enable if all images are same size 63 | 64 | 65 | 66 | # ----------------------------------------------------------------------------- 67 | # 1. Dataset 68 | # ----------------------------------------------------------------------------- 69 | # Images should be arranged like this: 70 | # data_root/ 71 | # class_1/....jpg.. 72 | # class_2/....jpg.. 73 | # ......./....jpg.. 
74 | data_root = args.dataset_path 75 | kwargs = {'num_workers': 4, 'pin_memory': True} if cuda else {} 76 | RGB_MEAN = [ 0.485, 0.456, 0.406 ] 77 | RGB_STD = [ 0.229, 0.224, 0.225 ] 78 | 79 | # Data transforms 80 | # http://pytorch.org/docs/master/torchvision/transforms.html 81 | train_transform = transforms.Compose([ 82 | transforms.Scale(256), # smaller side resized 83 | transforms.RandomCrop(224), 84 | transforms.RandomHorizontalFlip(), 85 | transforms.ToTensor(), 86 | transforms.Normalize(mean = RGB_MEAN, 87 | std = RGB_STD), 88 | ]) 89 | val_transform = transforms.Compose([ 90 | transforms.Scale(256), 91 | transforms.CenterCrop(224), 92 | transforms.ToTensor(), 93 | transforms.Normalize(mean = RGB_MEAN, 94 | std = RGB_STD), 95 | ]) 96 | 97 | # Data loaders - using PyTorch built-in objects 98 | # loader = DataLoaderClass(DatasetClass) 99 | # * `DataLoaderClass` is PyTorch provided torch.utils.data.DataLoader 100 | # * `DatasetClass` loads samples from a dataset; can be a standard class 101 | # provided by PyTorch (datasets.ImageFolder) or a custom-made class. 102 | # - More info: http://pytorch.org/docs/master/torchvision/datasets.html#imagefolder 103 | traindir = osp.join(data_root, 'train') 104 | dataset_train = datasets.ImageFolder(traindir, train_transform) 105 | 106 | # For unbalanced dataset we create a weighted sampler 107 | # * Balanced class sampling: https://discuss.pytorch.org/t/balanced-sampling-between-classes-with-torchvision-dataloader/2703/3 108 | weights = utils.make_weights_for_balanced_classes( 109 | dataset_train.imgs, len(dataset_train.classes)) 110 | weights = torch.DoubleTensor(weights) 111 | sampler = torch.utils.data.sampler.WeightedRandomSampler(weights, len(weights)) 112 | 113 | train_loader = torch.utils.data.DataLoader( 114 | dataset_train, batch_size=cfg['batch_size'], 115 | sampler = sampler, **kwargs) 116 | 117 | valdir = osp.join(data_root, 'val-crop') 118 | val_loader = torch.utils.data.DataLoader( 119 | datasets.ImageFolder(valdir, val_transform), 120 | batch_size=cfg['batch_size'], shuffle=False, **kwargs) 121 | 122 | # print 'dataset classes:' + str(train_loader.dataset.classes) 123 | num_class = len(train_loader.dataset.classes) 124 | print 'Number of classes: %d' % num_class 125 | 126 | 127 | 128 | # ----------------------------------------------------------------------------- 129 | # 2. Model 130 | # ----------------------------------------------------------------------------- 131 | model = torchvision.models.resnet50(pretrained=False) 132 | 133 | if type(model.fc) == torch.nn.modules.linear.Linear: 134 | # Check if final fc layer sizes match num_class 135 | if not model.fc.weight.size()[0] == num_class: 136 | # Replace last layer 137 | print model.fc 138 | model.fc = torch.nn.Linear(2048, num_class) 139 | print model.fc 140 | else: 141 | pass 142 | else: 143 | pass 144 | 145 | 146 | if args.model_path: 147 | # If existing model is to be loaded from a file 148 | checkpoint = torch.load(args.model_path) 149 | 150 | if checkpoint['arch'] == 'DataParallel': 151 | # if we trained and saved our model using DataParallel 152 | model = torch.nn.DataParallel(model, device_ids=[0, 1, 2, 3, 4, 5, 6, 7]) 153 | model.load_state_dict(checkpoint['model_state_dict']) 154 | model = model.module # get network module from inside its DataParallel wrapper 155 | else: 156 | model.load_state_dict(checkpoint['model_state_dict']) 157 | 158 | # Optionally add a "bottleneck + L2-norm" layer after GAP-layer 159 | # TODO -- loading a bottleneck model might be a problem .... 
do some unit-tests 160 | if args.bottleneck: 161 | layers = [] 162 | layers.append(torch.nn.Linear(2048, 512)) 163 | layers.append(nn.BatchNorm2d(512)) 164 | layers.append(torch.nn.ReLU(inplace=True)) 165 | layers.append(models.NormFeat()) # L2-normalization layer 166 | layers.append(torch.nn.Linear(512, num_class)) 167 | model.fc = torch.nn.Sequential(*layers) 168 | 169 | # TODO - config options for DataParallel and device_ids 170 | model = torch.nn.DataParallel(model, device_ids=[0, 1, 2, 3, 4, 5, 6, 7]) 171 | 172 | if cuda: 173 | model.cuda() 174 | 175 | start_epoch = 0 176 | start_iteration = 0 177 | 178 | # Loss - cross entropy between predicted scores (unnormalized) and class labels (integers) 179 | criterion = nn.CrossEntropyLoss() 180 | if cuda: 181 | criterion = criterion.cuda() 182 | 183 | if resume: 184 | # Resume training from last saved checkpoint 185 | checkpoint = torch.load(resume) 186 | model.load_state_dict(checkpoint['model_state_dict']) 187 | start_epoch = checkpoint['epoch'] 188 | start_iteration = checkpoint['iteration'] 189 | else: 190 | pass 191 | 192 | 193 | # ----------------------------------------------------------------------------- 194 | # 3. Optimizer 195 | # ----------------------------------------------------------------------------- 196 | params = filter(lambda p: p.requires_grad, model.parameters()) 197 | # Parameters with p.requires_grad=False are not updated during training. 198 | # This can be specified when defining the nn.Modules during model creation 199 | 200 | if 'optim' in cfg.keys(): 201 | if cfg['optim'].lower()=='sgd': 202 | optim = torch.optim.SGD(params, 203 | lr=cfg['lr'], 204 | momentum=cfg['momentum'], 205 | weight_decay=cfg['weight_decay']) 206 | 207 | elif cfg['optim'].lower()=='adam': 208 | optim = torch.optim.Adam(params, 209 | lr=cfg['lr'], weight_decay=cfg['weight_decay']) 210 | 211 | else: 212 | raise NotImplementedError('Optimizers: SGD or Adam') 213 | else: 214 | optim = torch.optim.SGD(params, 215 | lr=cfg['lr'], 216 | momentum=cfg['momentum'], 217 | weight_decay=cfg['weight_decay']) 218 | 219 | if resume: 220 | optim.load_state_dict(checkpoint['optim_state_dict']) 221 | 222 | 223 | # ----------------------------------------------------------------------------- 224 | # [optional] Sanity-check: forward pass with a single batch 225 | # ----------------------------------------------------------------------------- 226 | DEBUG = False 227 | if DEBUG: 228 | # model = model.cpu() 229 | dataiter = iter(val_loader) 230 | img, label = dataiter.next() 231 | 232 | print 'Labels: ' + str(label.size()) # batchSize x num_class 233 | print 'Input: ' + str(img.size()) # batchSize x 3 x 224 x 224 234 | 235 | im = img.squeeze().numpy() 236 | im = im[0,:,:,:] # get first image in the batch 237 | im = im.transpose((1,2,0)) # permute to 224x224x3 238 | im = im * [ 0.229, 0.224, 0.225 ] # unnormalize 239 | im = im + [ 0.485, 0.456, 0.406 ] 240 | im[im<0] = 0 241 | 242 | f = plt.figure() 243 | plt.imshow(im) 244 | plt.savefig('sanity-check-im.jpg') # save transformed image in current folder 245 | inputs = Variable(img) 246 | if cuda: 247 | inputs = inputs.cuda() 248 | 249 | model.eval() 250 | outputs = model(inputs) 251 | print 'Network output: ' + str(outputs.size()) 252 | model.train() 253 | 254 | else: 255 | pass 256 | 257 | 258 | # ----------------------------------------------------------------------------- 259 | # 4. 
Training 260 | # ----------------------------------------------------------------------------- 261 | trainer = train.Trainer( 262 | cuda=cuda, 263 | model=model, 264 | criterion=criterion, 265 | optimizer=optim, 266 | init_lr=cfg['lr'], 267 | lr_decay_epoch = cfg['lr_decay_epoch'], 268 | train_loader=train_loader, 269 | val_loader=val_loader, 270 | out=out, 271 | max_iter=cfg['max_iteration'], 272 | interval_validate=cfg.get('interval_validate', len(train_loader)), 273 | ) 274 | 275 | trainer.epoch = start_epoch 276 | trainer.iteration = start_iteration 277 | trainer.train() 278 | 279 | 280 | 281 | def get_log_dir(model_name, config_id, cfg, verbose=True): 282 | # Creates an output directory for each experiment, timestamped 283 | name = 'MODEL-%s_CFG-%03d' % (model_name, config_id) 284 | if verbose: 285 | for k, v in cfg.items(): 286 | v = str(v) 287 | if '/' in v: 288 | continue 289 | name += '_%s-%s' % (k.upper(), v) 290 | now = datetime.datetime.now(pytz.timezone('US/Eastern')) 291 | name += '_TIME-%s' % now.strftime('%Y%m%d-%H%M%S') 292 | log_dir = osp.join(here, 'logs', name) 293 | if not osp.exists(log_dir): 294 | os.makedirs(log_dir) 295 | with open(osp.join(log_dir, 'config.yaml'), 'w') as f: 296 | yaml.safe_dump(cfg, f, default_flow_style=False) 297 | return log_dir 298 | 299 | 300 | 301 | 302 | if __name__ == '__main__': 303 | main() 304 | 305 | -------------------------------------------------------------------------------- /vgg-face-2/train_resnet101_vggface.py: -------------------------------------------------------------------------------- 1 | import argparse 2 | import datetime 3 | import os 4 | import os.path as osp 5 | import pytz 6 | 7 | import torch 8 | import torchvision 9 | from torchvision import models 10 | import torch.nn as nn 11 | import torch.optim 12 | import torch.utils.data 13 | import torchvision.transforms as transforms 14 | import torchvision.datasets as datasets 15 | from torch.autograd import Variable 16 | 17 | import yaml 18 | import numpy as np 19 | import matplotlib 20 | matplotlib.use('Agg') 21 | import matplotlib.pyplot as plt 22 | 23 | here = osp.dirname(osp.abspath(__file__)) # output folder is located here 24 | root_dir,_ = osp.split(here) 25 | import sys 26 | sys.path.append(root_dir) 27 | 28 | import train 29 | import models 30 | import utils 31 | from config import configurations 32 | 33 | 34 | 35 | 36 | def main(): 37 | parser = argparse.ArgumentParser() 38 | parser.add_argument('-e', '--exp_name', default='resnet101_vggface_scratch') 39 | parser.add_argument('-c', '--config', type=int, default=1, 40 | choices=configurations.keys()) 41 | parser.add_argument('-d', '--dataset_path', 42 | default='/srv/data1/arunirc/datasets/vggface2') 43 | parser.add_argument('-m', '--model_path', default=None, 44 | help='Initialize from pre-trained model') 45 | parser.add_argument('--resume', help='Checkpoint path') 46 | parser.add_argument('--bottleneck', action='store_true', default=False, 47 | help='Add a 512-dim bottleneck layer with L2 normalization') 48 | args = parser.parse_args() 49 | 50 | # gpu = args.gpu 51 | cfg = configurations[args.config] 52 | out = get_log_dir(args.exp_name, args.config, cfg, verbose=False) 53 | resume = args.resume 54 | 55 | # os.environ['CUDA_VISIBLE_DEVICES'] = str(gpu) 56 | cuda = torch.cuda.is_available() 57 | 58 | torch.manual_seed(1337) 59 | if cuda: 60 | torch.cuda.manual_seed(1337) 61 | torch.backends.cudnn.enabled = True 62 | torch.backends.cudnn.benchmark = True # enable if all images are same size 63 | 64 | 65 | 66 | 
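    # [Illustrative aside] The Dataset section below calls
    # utils.make_weights_for_balanced_classes() to build per-image sampling
    # weights. A minimal sketch of the usual inverse-frequency recipe from the
    # PyTorch forum thread linked there -- the actual helper lives in utils.py,
    # so this is an assumption about its implementation, not a copy of it:
    #
    #     def make_weights_for_balanced_classes(images, n_classes):
    #         # images: list of (filepath, class_index) from ImageFolder.imgs
    #         count = [0] * n_classes
    #         for _, label in images:
    #             count[label] += 1                            # per-class counts
    #         per_class = [float(sum(count)) / c for c in count]  # N / count[c]
    #         return [per_class[label] for _, label in images]    # one weight per image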
# ----------------------------------------------------------------------------- 67 | # 1. Dataset 68 | # ----------------------------------------------------------------------------- 69 | # Images should be arranged like this: 70 | # data_root/ 71 | # class_1/....jpg.. 72 | # class_2/....jpg.. 73 | # ......./....jpg.. 74 | data_root = args.dataset_path 75 | kwargs = {'num_workers': 4, 'pin_memory': True} if cuda else {} 76 | RGB_MEAN = [ 0.485, 0.456, 0.406 ] 77 | RGB_STD = [ 0.229, 0.224, 0.225 ] 78 | 79 | # Data transforms 80 | # http://pytorch.org/docs/master/torchvision/transforms.html 81 | train_transform = transforms.Compose([ 82 | transforms.Scale(256), # smaller side resized 83 | transforms.RandomCrop(224), 84 | transforms.RandomHorizontalFlip(), 85 | transforms.ToTensor(), 86 | transforms.Normalize(mean = RGB_MEAN, 87 | std = RGB_STD), 88 | ]) 89 | val_transform = transforms.Compose([ 90 | transforms.Scale(256), 91 | transforms.CenterCrop(224), 92 | transforms.ToTensor(), 93 | transforms.Normalize(mean = RGB_MEAN, 94 | std = RGB_STD), 95 | ]) 96 | 97 | # Data loaders - using PyTorch built-in objects 98 | # loader = DataLoaderClass(DatasetClass) 99 | # * `DataLoaderClass` is PyTorch provided torch.utils.data.DataLoader 100 | # * `DatasetClass` loads samples from a dataset; can be a standard class 101 | # provided by PyTorch (datasets.ImageFolder) or a custom-made class. 102 | # - More info: http://pytorch.org/docs/master/torchvision/datasets.html#imagefolder 103 | traindir = osp.join(data_root, 'train') 104 | dataset_train = datasets.ImageFolder(traindir, train_transform) 105 | 106 | # For unbalanced dataset we create a weighted sampler 107 | # * Balanced class sampling: https://discuss.pytorch.org/t/balanced-sampling-between-classes-with-torchvision-dataloader/2703/3 108 | weights = utils.make_weights_for_balanced_classes( 109 | dataset_train.imgs, len(dataset_train.classes)) 110 | weights = torch.DoubleTensor(weights) 111 | sampler = torch.utils.data.sampler.WeightedRandomSampler(weights, len(weights)) 112 | 113 | train_loader = torch.utils.data.DataLoader( 114 | dataset_train, batch_size=cfg['batch_size'], 115 | sampler = sampler, **kwargs) 116 | 117 | valdir = osp.join(data_root, 'val-crop') 118 | val_loader = torch.utils.data.DataLoader( 119 | datasets.ImageFolder(valdir, val_transform), 120 | batch_size=cfg['batch_size'], shuffle=False, **kwargs) 121 | 122 | # print 'dataset classes:' + str(train_loader.dataset.classes) 123 | num_class = len(train_loader.dataset.classes) 124 | print 'Number of classes: %d' % num_class 125 | 126 | 127 | 128 | # ----------------------------------------------------------------------------- 129 | # 2. 
Model
130 |     # -----------------------------------------------------------------------------
131 |     model = torchvision.models.resnet101(pretrained=False)
132 | 
133 |     if type(model.fc) == torch.nn.modules.linear.Linear:
134 |         # Check if final fc layer sizes match num_class
135 |         if not model.fc.weight.size()[0] == num_class:
136 |             # Replace last layer
137 |             print model.fc
138 |             model.fc = torch.nn.Linear(2048, num_class)
139 |             print model.fc
140 |         else:
141 |             pass
142 |     else:
143 |         pass
144 | 
145 | 
146 |     if args.model_path:
147 |         # If existing model is to be loaded from a file
148 |         checkpoint = torch.load(args.model_path)
149 | 
150 |         if checkpoint['arch'] == 'DataParallel':
151 |             # if we trained and saved our model using DataParallel
152 |             model = torch.nn.DataParallel(model, device_ids=[0, 1, 2, 3, 4, 5, 6, 7])
153 |             model.load_state_dict(checkpoint['model_state_dict'])
154 |             model = model.module  # get network module from inside its DataParallel wrapper
155 |         else:
156 |             model.load_state_dict(checkpoint['model_state_dict'])
157 | 
158 |     # Optionally add a "bottleneck + L2-norm" layer after GAP-layer
159 |     # TODO -- loading a bottleneck model might be a problem .... do some unit-tests
160 |     if args.bottleneck:
161 |         layers = []
162 |         layers.append(torch.nn.Linear(2048, 512))
163 |         layers.append(nn.BatchNorm1d(512))  # 1-D batch-norm: the input here is N x 512 features, not a 4-D conv map
164 |         layers.append(torch.nn.ReLU(inplace=True))
165 |         layers.append(models.NormFeat())  # L2-normalization layer
166 |         layers.append(torch.nn.Linear(512, num_class))
167 |         model.fc = torch.nn.Sequential(*layers)
168 | 
169 |     # TODO - config options for DataParallel and device_ids
170 |     model = torch.nn.DataParallel(model, device_ids=[0, 1, 2, 3, 4, 5, 6, 7])
171 | 
172 |     if cuda:
173 |         model.cuda()
174 | 
175 |     start_epoch = 0
176 |     start_iteration = 0
177 | 
178 |     # Loss - cross entropy between predicted scores (unnormalized) and class labels (integers)
179 |     criterion = nn.CrossEntropyLoss()
180 |     if cuda:
181 |         criterion = criterion.cuda()
182 | 
183 |     if resume:
184 |         # Resume training from last saved checkpoint
185 |         checkpoint = torch.load(resume)
186 |         model.load_state_dict(checkpoint['model_state_dict'])
187 |         start_epoch = checkpoint['epoch']
188 |         start_iteration = checkpoint['iteration']
189 |     else:
190 |         pass
191 | 
192 | 
193 |     # -----------------------------------------------------------------------------
194 |     # 3. Optimizer
195 |     # -----------------------------------------------------------------------------
196 |     params = filter(lambda p: p.requires_grad, model.parameters())
197 |     # Parameters with p.requires_grad=False are not updated during training.
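    # For example -- an illustrative sketch, not something this script does --
    # the backbone could be frozen so that only the classifier head is updated:
    #
    #     for p in model.parameters():
    #         p.requires_grad = False      # freeze the whole network
    #     for p in model.fc.parameters():
    #         p.requires_grad = True       # leave only the final fc layer trainable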
198 |     # This can be specified when defining the nn.Modules during model creation
199 | 
200 |     if 'optim' in cfg.keys():
201 |         if cfg['optim'].lower()=='sgd':
202 |             optim = torch.optim.SGD(params,
203 |                         lr=cfg['lr'],
204 |                         momentum=cfg['momentum'],
205 |                         weight_decay=cfg['weight_decay'])
206 | 
207 |         elif cfg['optim'].lower()=='adam':
208 |             optim = torch.optim.Adam(params,
209 |                         lr=cfg['lr'], weight_decay=cfg['weight_decay'])
210 | 
211 |         else:
212 |             raise NotImplementedError('Optimizers: SGD or Adam')
213 |     else:
214 |         optim = torch.optim.SGD(params,
215 |                     lr=cfg['lr'],
216 |                     momentum=cfg['momentum'],
217 |                     weight_decay=cfg['weight_decay'])
218 | 
219 |     if resume:
220 |         optim.load_state_dict(checkpoint['optim_state_dict'])
221 | 
222 | 
223 |     # -----------------------------------------------------------------------------
224 |     # [optional] Sanity-check: forward pass with a single batch
225 |     # -----------------------------------------------------------------------------
226 |     DEBUG = False
227 |     if DEBUG:
228 |         # model = model.cpu()
229 |         dataiter = iter(val_loader)
230 |         img, label = dataiter.next()
231 | 
232 |         print 'Labels: ' + str(label.size())  # batchSize (integer class labels)
233 |         print 'Input: ' + str(img.size())  # batchSize x 3 x 224 x 224
234 | 
235 |         im = img.numpy()
236 |         im = im[0,:,:,:]  # get first image in the batch
237 |         im = im.transpose((1,2,0))  # permute to 224x224x3
238 |         im = im * [ 0.229, 0.224, 0.225 ]  # unnormalize
239 |         im = im + [ 0.485, 0.456, 0.406 ]
240 |         im = im.clip(0, 1)  # clamp to [0, 1] for imshow
241 | 
242 |         f = plt.figure()
243 |         plt.imshow(im)
244 |         plt.savefig('sanity-check-im.jpg')  # save transformed image in current folder
245 |         inputs = Variable(img)
246 |         if cuda:
247 |             inputs = inputs.cuda()
248 | 
249 |         model.eval()
250 |         outputs = model(inputs)
251 |         print 'Network output: ' + str(outputs.size())
252 |         model.train()
253 | 
254 |     else:
255 |         pass
256 | 
257 | 
258 |     # -----------------------------------------------------------------------------
259 |     # 4.
Training 260 | # ----------------------------------------------------------------------------- 261 | trainer = train.Trainer( 262 | cuda=cuda, 263 | model=model, 264 | criterion=criterion, 265 | optimizer=optim, 266 | init_lr=cfg['lr'], 267 | lr_decay_epoch = cfg['lr_decay_epoch'], 268 | train_loader=train_loader, 269 | val_loader=val_loader, 270 | out=out, 271 | max_iter=cfg['max_iteration'], 272 | interval_validate=cfg.get('interval_validate', len(train_loader)), 273 | ) 274 | 275 | trainer.epoch = start_epoch 276 | trainer.iteration = start_iteration 277 | trainer.train() 278 | 279 | 280 | 281 | def get_log_dir(model_name, config_id, cfg, verbose=True): 282 | # Creates an output directory for each experiment, timestamped 283 | name = 'MODEL-%s_CFG-%03d' % (model_name, config_id) 284 | if verbose: 285 | for k, v in cfg.items(): 286 | v = str(v) 287 | if '/' in v: 288 | continue 289 | name += '_%s-%s' % (k.upper(), v) 290 | now = datetime.datetime.now(pytz.timezone('US/Eastern')) 291 | name += '_TIME-%s' % now.strftime('%Y%m%d-%H%M%S') 292 | log_dir = osp.join(here, 'logs', name) 293 | if not osp.exists(log_dir): 294 | os.makedirs(log_dir) 295 | with open(osp.join(log_dir, 'config.yaml'), 'w') as f: 296 | yaml.safe_dump(cfg, f, default_flow_style=False) 297 | return log_dir 298 | 299 | 300 | 301 | 302 | if __name__ == '__main__': 303 | main() 304 | 305 | -------------------------------------------------------------------------------- /ijba/eval_ijba_1_1.py: -------------------------------------------------------------------------------- 1 | import argparse 2 | import os 3 | import os.path as osp 4 | 5 | import torch 6 | import torchvision 7 | from torchvision import models 8 | import torch.nn as nn 9 | import torch.utils.data 10 | import torchvision.transforms as transforms 11 | import torchvision.datasets as datasets 12 | from torch.autograd import Variable 13 | import torch.nn.functional as F 14 | 15 | import yaml 16 | import tqdm 17 | import numpy as np 18 | import sklearn.metrics 19 | from sklearn import metrics 20 | from scipy import interpolate 21 | import scipy.io as sio 22 | import matplotlib 23 | matplotlib.use('Agg') 24 | import matplotlib.pyplot as plt 25 | 26 | here = osp.dirname(osp.abspath(__file__)) # output folder is located here 27 | root_dir,_ = osp.split(here) 28 | import sys 29 | sys.path.append(root_dir) 30 | 31 | import models 32 | import utils 33 | import data_loader 34 | 35 | 36 | ''' 37 | Evaluate a network on the IJB-A 1:1 verification task 38 | ===================================================== 39 | Example usage: TODO *** 40 | # Resnet 101 on 10 folds of IJB-A 1:1 41 | ''' 42 | # MODEL_PATH = '/srv/data1/arunirc/Research/resnet-face-pytorch/vgg-face-2/logs/MODEL-resnet101_vggface_scratch_CFG-022_TIME-20180210-201442/model_best.pth.tar' 43 | 44 | 45 | # Resnet101-512d-norm 46 | # MODEL_PATH = '/home/renyi/arunirc/data1/Research/resnet-face-pytorch/vgg-face-2/logs/MODEL-resnet101_512d_L2norm_ft2_CFG-023_TIME-20180214-020054/model_best.pth.tar' 47 | MODEL_TYPE = 'resnet101-512d-norm' 48 | # MODEL_PATH = '/home/renyi/arunirc/data1/Research/resnet-face-pytorch/vgg-face-2/logs/MODEL-resnet101_512d_L2norm_ft2_CFG-022_TIME-20180214-015313/model_best.pth.tar' 49 | MODEL_PATH = '/home/renyi/arunirc/data1/Research/resnet-face-pytorch/vgg-face-2/logs/MODEL-resnet101_512d_L2norm_ft2_CFG-024_TIME-20180214-160410/model_best.pth.tar' 50 | 51 | 52 | # Resnet101-512d 53 | # MODEL_PATH = 
'/home/renyi/arunirc/data1/Research/resnet-face-pytorch/vgg-face-2/logs/MODEL-resnet101_bottleneck_ft2_CFG-023_TIME-20180213-091016/model_best.pth.tar' 54 | # MODEL_TYPE = 'resnet101-512d' 55 | # MODEL_PATH = '/home/renyi/arunirc/data1/Research/resnet-face-pytorch/vgg-face-2/logs/MODEL-resnet101_bottleneck_ft1_CFG-021_TIME-20180212-192332/model_best.pth.tar' 56 | 57 | 58 | 59 | def main(): 60 | parser = argparse.ArgumentParser() 61 | parser.add_argument('-e', '--exp_name', default='ijba_eval') 62 | parser.add_argument('-g', '--gpu', type=int, default=0) 63 | 64 | parser.add_argument('-d', '--data_dir', 65 | default='/home/renyi/arunirc/data1/datasets/CS2') 66 | parser.add_argument('-p', '--protocol_dir', 67 | default='/home/renyi/arunirc/data1/datasets/IJB-A/IJB-A_11_sets/') 68 | parser.add_argument('--fold', type=int, default=1, choices=[1,10]) 69 | parser.add_argument('--sqrt', action='store_true', default=False, 70 | help='Add signed sqrt normalization') 71 | parser.add_argument('--cosine', action='store_true', default=False, 72 | help='Use cosine similarity instead of L2 distance') 73 | parser.add_argument('--batch_size', type=int, default=100) 74 | parser.add_argument('-m', '--model_path', 75 | default=MODEL_PATH, 76 | help='Path to pre-trained model') 77 | parser.add_argument('--model_type', default=MODEL_TYPE, 78 | choices=['resnet50', 'resnet101', 'resnet101-512d', 'resnet101-512d-norm']) 79 | 80 | args = parser.parse_args() 81 | 82 | 83 | # CUDA setup 84 | os.environ['CUDA_VISIBLE_DEVICES'] = str(args.gpu) 85 | cuda = torch.cuda.is_available() 86 | torch.manual_seed(1337) 87 | if cuda: 88 | torch.cuda.manual_seed(1337) 89 | torch.backends.cudnn.enabled = True 90 | torch.backends.cudnn.benchmark = True # enable if all images are same size 91 | 92 | 93 | # ----------------------------------------------------------------------------- 94 | # 1. 
Model 95 | # ----------------------------------------------------------------------------- 96 | num_class = 8631 97 | if args.model_type == 'resnet50': 98 | model = torchvision.models.resnet50(pretrained=False) 99 | model.fc = torch.nn.Linear(2048, num_class) 100 | elif args.model_type == 'resnet101': 101 | model = torchvision.models.resnet101(pretrained=False) 102 | model.fc = torch.nn.Linear(2048, num_class) 103 | elif args.model_type == 'resnet101-512d': 104 | model = torchvision.models.resnet101(pretrained=False) 105 | layers = [] 106 | layers.append(torch.nn.Linear(2048, 512)) 107 | layers.append(torch.nn.Linear(512, num_class)) 108 | model.fc = torch.nn.Sequential(*layers) 109 | elif args.model_type == 'resnet101-512d-norm': 110 | model = torchvision.models.resnet101(pretrained=False) 111 | layers = [] 112 | layers.append(torch.nn.Linear(2048, 512)) 113 | layers.append(models.NormFeat(scale_factor=50.0)) 114 | layers.append(torch.nn.Linear(512, num_class)) 115 | model.fc = torch.nn.Sequential(*layers) 116 | else: 117 | raise NotImplementedError 118 | 119 | checkpoint = torch.load(args.model_path) 120 | if checkpoint['arch'] == 'DataParallel': 121 | model = torch.nn.DataParallel(model, device_ids=[0, 1, 2, 3, 4]) 122 | model.load_state_dict(checkpoint['model_state_dict']) 123 | model = model.module # get network module from inside its DataParallel wrapper 124 | else: 125 | model.load_state_dict(checkpoint['model_state_dict']) 126 | 127 | if cuda: 128 | model = model.cuda() 129 | 130 | # Convert the trained network into a "feature extractor" 131 | feature_map = list(model.children()) 132 | if args.model_type == 'resnet101-512d' or args.model_type == 'resnet101-512d-norm': 133 | model.eval() 134 | extractor = model 135 | extractor.fc = nn.Sequential(extractor.fc[0]) 136 | else: 137 | feature_map.pop() 138 | extractor = nn.Sequential(*feature_map) 139 | extractor.eval() # ALWAYS set to evaluation mode (fixes BatchNorm, dropout, etc.) 140 | 141 | 142 | 143 | # ----------------------------------------------------------------------------- 144 | # 2. 
Dataset 145 | # ----------------------------------------------------------------------------- 146 | fold_id = 1 147 | file_ext = '.jpg' 148 | RGB_MEAN = [ 0.485, 0.456, 0.406 ] 149 | RGB_STD = [ 0.229, 0.224, 0.225 ] 150 | 151 | test_transform = transforms.Compose([ 152 | # transforms.Scale(224), 153 | # transforms.CenterCrop(224), 154 | transforms.Scale((224,224)), 155 | transforms.ToTensor(), 156 | transforms.Normalize(mean = RGB_MEAN, 157 | std = RGB_STD), 158 | ]) 159 | 160 | 161 | pairs_path = osp.join(args.protocol_dir, 'split%d' % fold_id, 162 | 'verify_comparisons_%d.csv' % fold_id) 163 | pairs = utils.read_ijba_pairs(pairs_path) 164 | protocol_file = osp.join(args.protocol_dir, 'split%d' % fold_id, 165 | 'verify_metadata_%d.csv' % fold_id) 166 | metadata = utils.get_ijba_1_1_metadata(protocol_file) # dict 167 | assert np.all(np.unique(pairs) == np.unique(metadata['template_id'])) # sanity-check 168 | path_list = np.array([osp.join(args.data_dir, str(x)+file_ext) 169 | for x in metadata['sighting_id'] ]) # face crops saved as 170 | 171 | # Create data loader 172 | test_loader = torch.utils.data.DataLoader( 173 | data_loader.IJBADataset( 174 | path_list, test_transform, split=fold_id), 175 | batch_size=args.batch_size, shuffle=False ) 176 | 177 | # testing 178 | # for i in range(len(test_loader.dataset)): 179 | # img = test_loader.dataset.__getitem__(i) 180 | # sz = img.shape 181 | # if sz[0] != 3: 182 | # print sz 183 | 184 | 185 | 186 | 187 | # ----------------------------------------------------------------------------- 188 | # 3. Feature extraction 189 | # ----------------------------------------------------------------------------- 190 | print 'Feature extraction...' 191 | cache_dir = osp.join(here, 'cache-' + args.model_type) 192 | if not osp.exists(cache_dir): 193 | os.makedirs(cache_dir) 194 | 195 | feat_path = osp.join(cache_dir, 'feat-fold-%d.mat' % fold_id) 196 | 197 | if not osp.exists(feat_path): 198 | features = [] 199 | for batch_idx, images in tqdm.tqdm(enumerate(test_loader), 200 | total=len(test_loader), 201 | desc='Extracting features'): 202 | x = Variable(images, volatile=True) # test-time memory conservation 203 | if cuda: 204 | x = x.cuda() 205 | feat = extractor(x) 206 | if cuda: 207 | feat = feat.data.cpu() # free up GPU 208 | else: 209 | feat = feat.data 210 | features.append(feat) 211 | 212 | features = torch.cat(features, dim=0) # (n_batch*batch_sz) x 512 213 | sio.savemat(feat_path, {'feat': features.cpu().numpy() }) 214 | else: 215 | dat = sio.loadmat(feat_path) 216 | features = torch.FloatTensor(dat['feat']) 217 | del dat 218 | print 'Loaded.' 219 | 220 | 221 | # ----------------------------------------------------------------------------- 222 | # 4. Verification 223 | # ----------------------------------------------------------------------------- 224 | scores = [] 225 | labels = [] 226 | 227 | # labels: is_same_subject 228 | print 'Computing pair labels . . . ' 229 | for pair in tqdm.tqdm(pairs): # TODO - check tqdm 230 | sel_t0 = np.where(metadata['template_id'] == pair[0]) 231 | sel_t1 = np.where(metadata['template_id'] == pair[1]) 232 | subject0 = np.unique(metadata['subject_id'][sel_t0]) 233 | subject1 = np.unique(metadata['subject_id'][sel_t1]) 234 | labels.append(int(subject0 == subject1)) 235 | labels = np.array(labels) 236 | print 'done' 237 | 238 | # templates: average pool, then L2-normalize 239 | print 'Pooling templates . . . 
'
240 |     pooled_features = []
241 |     template_set = np.unique(metadata['template_id'])
242 |     for tid in tqdm.tqdm(template_set):
243 |         sel = np.where(metadata['template_id'] == tid)
244 |         # pool template: 1 x n x 512 -> 1 x 512
245 |         feat = features[sel,:].mean(1)
246 |         if args.sqrt:  # signed-square-root normalization
247 |             feat = torch.mul(torch.sign(feat), torch.sqrt(torch.abs(feat)+1e-12))
248 |         pooled_features.append(F.normalize(feat, p=2, dim=1))
249 |     pooled_features = torch.cat(pooled_features, dim=0)  # n_templates x 512
250 |     print 'done'
251 | 
252 |     print 'Computing pair distances . . . '
253 |     for pair in tqdm.tqdm(pairs):
254 |         sel_t0 = np.where(template_set == pair[0])
255 |         sel_t1 = np.where(template_set == pair[1])
256 |         if args.cosine:
257 |             feat_dist = torch.dot(torch.squeeze(pooled_features[sel_t0]),
258 |                                   torch.squeeze(pooled_features[sel_t1]))
259 |         else:
260 |             feat_dist = (pooled_features[sel_t0] - pooled_features[sel_t1]).norm(p=2, dim=1)
261 |             feat_dist = -torch.squeeze(feat_dist)
262 |         feat_dist = feat_dist.numpy()
263 |         scores.append(feat_dist)  # score: negative of L2-distance (or cosine similarity)
264 |     scores = np.array(scores)
265 | 
266 |     # Metrics: TAR (tpr) at FAR (fpr)
267 |     fpr, tpr, thresholds = sklearn.metrics.roc_curve(labels, scores)
268 |     fpr_levels = [0.0001, 0.001, 0.01, 0.1]
269 |     f_interp = interpolate.interp1d(fpr, tpr)
270 |     tpr_at_fpr = [ f_interp(x) for x in fpr_levels ]
271 | 
272 |     for (far, tar) in zip(fpr_levels, tpr_at_fpr):
273 |         print 'TAR @ FAR=%.4f : %.4f' % (far, tar)
274 | 
275 |     res = {}
276 |     res['TAR'] = tpr_at_fpr
277 |     res['FAR'] = fpr_levels
278 |     with open( osp.join(cache_dir, 'result-1-1-fold-%d.yaml' % fold_id),
279 |                'w') as f:
280 |         yaml.dump(res, f, default_flow_style=False)
281 | 
282 |     sio.savemat(osp.join(cache_dir, 'roc-1-1-fold-%d.mat' % fold_id),
283 |                 {'fpr': fpr, 'tpr': tpr, 'thresholds': thresholds,
284 |                  'tpr_at_fpr': tpr_at_fpr})
285 | 
286 | 
287 | if __name__ == '__main__':
288 |     main()
289 | 
290 | 
--------------------------------------------------------------------------------
/models.py:
--------------------------------------------------------------------------------
1 | 
2 | # import fcn
3 | import os.path as osp
4 | import numpy as np
5 | import torch
6 | import torch.nn as nn
7 | import torch.nn.functional as F
8 | 
9 | 
10 | 
11 | class NormFeat(nn.Module):
12 |     ''' L2 normalization of features '''
13 |     def __init__(self, scale_factor=1.0):
14 |         super(NormFeat, self).__init__()
15 |         self.scale_factor = scale_factor
16 | 
17 |     def forward(self, input):
18 |         return self.scale_factor * F.normalize(input, p=2, dim=1)
19 | 
20 | 
21 | class ScaleFeat(nn.Module):
22 |     # https://discuss.pytorch.org/t/is-scale-layer-available-in-pytorch/7954/6?u=arunirc
23 |     def __init__(self, scale_factor=50.0):
24 |         super(ScaleFeat, self).__init__()  # bug fix: zero-argument super() is Python-3 only
25 |         self.scale = scale_factor
26 | 
27 |     def forward(self, input):
28 |         return input * self.scale
29 | 
30 | 
31 | # https://github.com/shelhamer/fcn.berkeleyvision.org/blob/master/surgery.py
32 | def get_upsampling_weight(in_channels, out_channels, kernel_size):
33 |     """Make a 2D bilinear kernel suitable for upsampling"""
34 |     factor = (kernel_size + 1) // 2
35 |     if kernel_size % 2 == 1:
36 |         center = factor - 1
37 |     else:
38 |         center = factor - 0.5
39 |     og = np.ogrid[:kernel_size, :kernel_size]
40 |     filt = (1 - abs(og[0] - center) / factor) * \
41 |            (1 - abs(og[1] - center) / factor)
42 |     weight = np.zeros((in_channels, out_channels, kernel_size, kernel_size),
43 |                       dtype=np.float64)
44 |     weight[range(in_channels),
range(out_channels), :, :] = filt 45 | return torch.from_numpy(weight).float() 46 | 47 | 48 | 49 | class FCN32sColor(nn.Module): 50 | 51 | def __init__(self, n_class=32, bin_type='one-hot', batch_norm=True): 52 | super(FCN32sColor, self).__init__() 53 | self.n_class = n_class 54 | self.bin_type = bin_type 55 | self.batch_norm = batch_norm 56 | 57 | # conv1 58 | self.conv1_1 = nn.Conv2d(1, 64, 3, padding=100) 59 | self.relu1_1 = nn.ReLU(inplace=True) 60 | if batch_norm: 61 | self.conv1_1_bn = nn.BatchNorm2d(64) 62 | self.conv1_2 = nn.Conv2d(64, 64, 3, padding=1) 63 | self.relu1_2 = nn.ReLU(inplace=True) 64 | if batch_norm: 65 | self.conv1_2_bn = nn.BatchNorm2d(64) 66 | self.pool1 = nn.MaxPool2d(2, stride=2, ceil_mode=True) # 1/2 67 | 68 | # conv2 69 | self.conv2_1 = nn.Conv2d(64, 128, 3, padding=1) 70 | self.relu2_1 = nn.ReLU(inplace=True) 71 | if batch_norm: 72 | self.conv2_1_bn = nn.BatchNorm2d(128) 73 | self.conv2_2 = nn.Conv2d(128, 128, 3, padding=1) 74 | self.relu2_2 = nn.ReLU(inplace=True) 75 | if batch_norm: 76 | self.conv2_2_bn = nn.BatchNorm2d(128) 77 | self.pool2 = nn.MaxPool2d(2, stride=2, ceil_mode=True) # 1/4 78 | 79 | # conv3 80 | self.conv3_1 = nn.Conv2d(128, 256, 3, padding=1) 81 | self.relu3_1 = nn.ReLU(inplace=True) 82 | if batch_norm: 83 | self.conv3_1_bn = nn.BatchNorm2d(256) 84 | self.conv3_2 = nn.Conv2d(256, 256, 3, padding=1) 85 | self.relu3_2 = nn.ReLU(inplace=True) 86 | if batch_norm: 87 | self.conv3_2_bn = nn.BatchNorm2d(256) 88 | self.conv3_3 = nn.Conv2d(256, 256, 3, padding=1) 89 | self.relu3_3 = nn.ReLU(inplace=True) 90 | if batch_norm: 91 | self.conv3_3_bn = nn.BatchNorm2d(256) 92 | self.pool3 = nn.MaxPool2d(2, stride=2, ceil_mode=True) # 1/8 93 | 94 | # conv4 95 | self.conv4_1 = nn.Conv2d(256, 512, 3, padding=1) 96 | self.relu4_1 = nn.ReLU(inplace=True) 97 | if batch_norm: 98 | self.conv4_1_bn = nn.BatchNorm2d(512) 99 | self.conv4_2 = nn.Conv2d(512, 512, 3, padding=1) 100 | self.relu4_2 = nn.ReLU(inplace=True) 101 | if batch_norm: 102 | self.conv4_2_bn = nn.BatchNorm2d(512) 103 | self.conv4_3 = nn.Conv2d(512, 512, 3, padding=1) 104 | self.relu4_3 = nn.ReLU(inplace=True) 105 | if batch_norm: 106 | self.conv4_3_bn = nn.BatchNorm2d(512) 107 | self.pool4 = nn.MaxPool2d(2, stride=2, ceil_mode=True) # 1/16 108 | 109 | # conv5 110 | self.conv5_1 = nn.Conv2d(512, 512, 3, padding=1) 111 | self.relu5_1 = nn.ReLU(inplace=True) 112 | if batch_norm: 113 | self.conv5_1_bn = nn.BatchNorm2d(512) 114 | self.conv5_2 = nn.Conv2d(512, 512, 3, padding=1) 115 | self.relu5_2 = nn.ReLU(inplace=True) 116 | if batch_norm: 117 | self.conv5_2_bn = nn.BatchNorm2d(512) 118 | self.conv5_3 = nn.Conv2d(512, 512, 3, padding=1) 119 | self.relu5_3 = nn.ReLU(inplace=True) 120 | if batch_norm: 121 | self.conv5_3_bn = nn.BatchNorm2d(512) 122 | self.pool5 = nn.MaxPool2d(2, stride=2, ceil_mode=True) # 1/32 123 | 124 | # fc6 125 | self.fc6 = nn.Conv2d(512, 4096, 7) 126 | self.relu6 = nn.ReLU(inplace=True) 127 | if batch_norm: 128 | self.fc6_bn = nn.BatchNorm2d(4096) 129 | self.drop6 = nn.Dropout2d() 130 | 131 | # fc7 132 | self.fc7 = nn.Conv2d(4096, 4096, 1) 133 | self.relu7 = nn.ReLU(inplace=True) 134 | self.fc7_bn = nn.BatchNorm2d(4096) 135 | self.drop7 = nn.Dropout2d() 136 | 137 | if bin_type == 'one-hot': 138 | # NOTE: *two* output prediction maps for hue and chroma 139 | # TODO - not implemented error should be raised for this! 
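            # [Illustrative aside] The ConvTranspose2d layers created just below are
            # fixed bilinear upsamplers: _initialize_weights() fills them with the
            # kernel from get_upsampling_weight() above. As a small worked check of
            # that function, kernel_size=4 gives factor=2 and center=1.5, so the 1-D
            # profile (1 - |x - 1.5| / 2) over x = 0..3 is [0.25, 0.75, 0.75, 0.25],
            # and each 4x4 filter is its outer product:
            #
            #     [[0.0625, 0.1875, 0.1875, 0.0625],
            #      [0.1875, 0.5625, 0.5625, 0.1875],
            #      [0.1875, 0.5625, 0.5625, 0.1875],
            #      [0.0625, 0.1875, 0.1875, 0.0625]]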
140 |             self.score_fr_hue = nn.Conv2d(4096, n_class, 1)
141 |             self.upscore_hue = nn.ConvTranspose2d(n_class, n_class, 64, stride=32,
142 |                                                   bias=False)
143 |             self.score_fr_chroma = nn.Conv2d(4096, n_class, 1)
144 |             self.upscore_chroma = nn.ConvTranspose2d(n_class, n_class, 64, stride=32,
145 |                                                      bias=False)
146 |             self.upscore_hue.weight.requires_grad = False
147 |             self.upscore_chroma.weight.requires_grad = False
148 |         elif bin_type == 'soft':
149 |             self.score_fr = nn.Conv2d(4096, n_class, 1)
150 |             self.upscore = nn.ConvTranspose2d(n_class, n_class, 64, stride=32,
151 |                                               bias=False)
152 |             self.upscore.weight.requires_grad = False  # fix bilinear upsampler
153 | 
154 |         self._initialize_weights()
155 |         # TODO - init from pre-trained network
156 | 
157 | 
158 | 
159 |     def _initialize_weights(self):
160 |         for m in self.modules():
161 |             if isinstance(m, nn.Conv2d):
162 |                 pass  # leave the default PyTorch init
163 |             if isinstance(m, nn.ConvTranspose2d):
164 |                 assert m.kernel_size[0] == m.kernel_size[1]
165 |                 initial_weight = get_upsampling_weight(
166 |                     m.in_channels, m.out_channels, m.kernel_size[0])
167 |                 m.weight.data.copy_(initial_weight)
168 | 
169 | 
170 |     def forward(self, x):
171 |         h = x
172 |         h = self.conv1_1(h)
173 |         if self.batch_norm:
174 |             h = self.conv1_1_bn(h)
175 |         h = self.relu1_1(h)
176 |         h = self.conv1_2(h)
177 |         if self.batch_norm:
178 |             h = self.conv1_2_bn(h)
179 |         h = self.relu1_2(h)
180 |         h = self.pool1(h)
181 | 
182 |         if self.batch_norm:
183 |             h = self.relu2_1(self.conv2_1_bn(self.conv2_1(h)))
184 |         else:
185 |             h = self.relu2_1(self.conv2_1(h))
186 |         if self.batch_norm:
187 |             h = self.relu2_2(self.conv2_2_bn(self.conv2_2(h)))
188 |         else:
189 |             h = self.relu2_2(self.conv2_2(h))  # bug fix: no batch-norm on this path
190 |         h = self.pool2(h)
191 | 
192 |         if self.batch_norm:
193 |             h = self.relu3_1(self.conv3_1_bn(self.conv3_1(h)))
194 |         else:
195 |             h = self.relu3_1(self.conv3_1(h))
196 |         if self.batch_norm:
197 |             h = self.relu3_2(self.conv3_2_bn(self.conv3_2(h)))
198 |         else:
199 |             h = self.relu3_2(self.conv3_2(h))
200 |         if self.batch_norm:
201 |             h = self.relu3_3(self.conv3_3_bn(self.conv3_3(h)))
202 |         else:
203 |             h = self.relu3_3(self.conv3_3(h))
204 |         h = self.pool3(h)
205 | 
206 |         if self.batch_norm:
207 |             h = self.relu4_1(self.conv4_1_bn(self.conv4_1(h)))
208 |         else:
209 |             h = self.relu4_1(self.conv4_1(h))
210 |         if self.batch_norm:
211 |             h = self.relu4_2(self.conv4_2_bn(self.conv4_2(h)))
212 |         else:
213 |             h = self.relu4_2(self.conv4_2(h))
214 |         if self.batch_norm:
215 |             h = self.relu4_3(self.conv4_3_bn(self.conv4_3(h)))
216 |         else:
217 |             h = self.relu4_3(self.conv4_3(h))
218 |         h = self.pool4(h)
219 | 
220 |         if self.batch_norm:
221 |             h = self.relu5_1(self.conv5_1_bn(self.conv5_1(h)))
222 |         else:
223 |             h = self.relu5_1(self.conv5_1(h))
224 |         if self.batch_norm:
225 |             h = self.relu5_2(self.conv5_2_bn(self.conv5_2(h)))
226 |         else:
227 |             h = self.relu5_2(self.conv5_2(h))
228 |         if self.batch_norm:
229 |             h = self.relu5_3(self.conv5_3_bn(self.conv5_3(h)))
230 |         else:
231 |             h = self.relu5_3(self.conv5_3(h))
232 |         h = self.pool5(h)
233 | 
234 |         if self.batch_norm:
235 |             h = self.relu6(self.fc6_bn(self.fc6(h)))
236 |         else:
237 |             h = self.relu6(self.fc6(h))
238 |         h = self.drop6(h)
239 | 
240 |         if self.batch_norm:
241 |             h = self.relu7(self.fc7_bn(self.fc7(h)))
242 |         else:
243 |             h = self.relu7(self.fc7(h))
244 |         h = self.drop7(h)
245 | 
246 |         if self.bin_type == 'one-hot':
247 |             # hue prediction map
248 |             h_hue = self.score_fr_hue(h)
249 |             h_hue = self.upscore_hue(h_hue)
250 |             h_hue = h_hue[:, :, 19:19 + x.size()[2], 19:19 +
x.size()[3]].contiguous() 251 | 252 | # chroma prediction map 253 | h_chroma = self.score_fr_chroma(h) 254 | h_chroma = self.upscore_chroma(h_chroma) 255 | h_chroma = h_chroma[:, :, 19:19 + x.size()[2], 19:19 + x.size()[3]].contiguous() 256 | h = (h_hue, h_chroma) 257 | 258 | elif self.bin_type == 'soft': 259 | h = self.score_fr(h) 260 | h = self.upscore(h) 261 | h = h[:, :, 19:19 + x.size()[2], 19:19 + x.size()[3]].contiguous() 262 | 263 | return h 264 | 265 | 266 | 267 | 268 | class FCN16sColor(nn.Module): 269 | 270 | def __init__(self, n_class=32, bin_type='one-hot', batch_norm=True): 271 | super(FCN16sColor, self).__init__() 272 | self.n_class = n_class 273 | self.bin_type = bin_type 274 | self.batch_norm = batch_norm 275 | 276 | # conv1 277 | self.conv1_1 = nn.Conv2d(1, 64, 3, padding=100) 278 | self.relu1_1 = nn.ReLU(inplace=True) 279 | if batch_norm: 280 | self.conv1_1_bn = nn.BatchNorm2d(64) 281 | self.conv1_2 = nn.Conv2d(64, 64, 3, padding=1) 282 | self.relu1_2 = nn.ReLU(inplace=True) 283 | if batch_norm: 284 | self.conv1_2_bn = nn.BatchNorm2d(64) 285 | self.pool1 = nn.MaxPool2d(2, stride=2, ceil_mode=True) # 1/2 286 | 287 | # conv2 288 | self.conv2_1 = nn.Conv2d(64, 128, 3, padding=1) 289 | self.relu2_1 = nn.ReLU(inplace=True) 290 | if batch_norm: 291 | self.conv2_1_bn = nn.BatchNorm2d(128) 292 | self.conv2_2 = nn.Conv2d(128, 128, 3, padding=1) 293 | self.relu2_2 = nn.ReLU(inplace=True) 294 | if batch_norm: 295 | self.conv2_2_bn = nn.BatchNorm2d(128) 296 | self.pool2 = nn.MaxPool2d(2, stride=2, ceil_mode=True) # 1/4 297 | 298 | # conv3 299 | self.conv3_1 = nn.Conv2d(128, 256, 3, padding=1) 300 | self.relu3_1 = nn.ReLU(inplace=True) 301 | if batch_norm: 302 | self.conv3_1_bn = nn.BatchNorm2d(256) 303 | self.conv3_2 = nn.Conv2d(256, 256, 3, padding=1) 304 | self.relu3_2 = nn.ReLU(inplace=True) 305 | if batch_norm: 306 | self.conv3_2_bn = nn.BatchNorm2d(256) 307 | self.conv3_3 = nn.Conv2d(256, 256, 3, padding=1) 308 | self.relu3_3 = nn.ReLU(inplace=True) 309 | if batch_norm: 310 | self.conv3_3_bn = nn.BatchNorm2d(256) 311 | self.pool3 = nn.MaxPool2d(2, stride=2, ceil_mode=True) # 1/8 312 | 313 | # conv4 314 | self.conv4_1 = nn.Conv2d(256, 512, 3, padding=1) 315 | self.relu4_1 = nn.ReLU(inplace=True) 316 | if batch_norm: 317 | self.conv4_1_bn = nn.BatchNorm2d(512) 318 | self.conv4_2 = nn.Conv2d(512, 512, 3, padding=1) 319 | self.relu4_2 = nn.ReLU(inplace=True) 320 | if batch_norm: 321 | self.conv4_2_bn = nn.BatchNorm2d(512) 322 | self.conv4_3 = nn.Conv2d(512, 512, 3, padding=1) 323 | self.relu4_3 = nn.ReLU(inplace=True) 324 | if batch_norm: 325 | self.conv4_3_bn = nn.BatchNorm2d(512) 326 | self.pool4 = nn.MaxPool2d(2, stride=2, ceil_mode=True) # 1/16 327 | 328 | # conv5 329 | self.conv5_1 = nn.Conv2d(512, 512, 3, padding=1) 330 | self.relu5_1 = nn.ReLU(inplace=True) 331 | if batch_norm: 332 | self.conv5_1_bn = nn.BatchNorm2d(512) 333 | self.conv5_2 = nn.Conv2d(512, 512, 3, padding=1) 334 | self.relu5_2 = nn.ReLU(inplace=True) 335 | if batch_norm: 336 | self.conv5_2_bn = nn.BatchNorm2d(512) 337 | self.conv5_3 = nn.Conv2d(512, 512, 3, padding=1) 338 | self.relu5_3 = nn.ReLU(inplace=True) 339 | if batch_norm: 340 | self.conv5_3_bn = nn.BatchNorm2d(512) 341 | self.pool5 = nn.MaxPool2d(2, stride=2, ceil_mode=True) # 1/32 342 | 343 | # fc6 344 | self.fc6 = nn.Conv2d(512, 4096, 7) 345 | self.relu6 = nn.ReLU(inplace=True) 346 | if batch_norm: 347 | self.fc6_bn = nn.BatchNorm2d(4096) 348 | self.drop6 = nn.Dropout2d() 349 | 350 | # fc7 351 | self.fc7 = nn.Conv2d(4096, 4096, 1) 352 | self.relu7 
= nn.ReLU(inplace=True)
353 |         self.fc7_bn = nn.BatchNorm2d(4096)
354 |         self.drop7 = nn.Dropout2d()
355 | 
356 |         if bin_type == 'one-hot':
357 |             # NOTE: *two* output prediction maps for hue and chroma
358 |             raise NotImplementedError('TODO - FCN 16s for separate hue-chroma')
359 |         elif bin_type == 'soft':
360 |             self.score_fr = nn.Conv2d(4096, n_class, 1)
361 |             self.score_pool4 = nn.Conv2d(512, n_class, 1)
362 | 
363 |             self.upscore2 = nn.ConvTranspose2d(n_class, n_class, 4, stride=2,
364 |                                                bias=False)
365 |             self.upscore16 = nn.ConvTranspose2d(n_class, n_class, 32, stride=16,
366 |                                                 bias=False)
367 |             self.upscore2.weight.requires_grad = False  # fix bilinear upsamplers
368 |             self.upscore16.weight.requires_grad = False
369 | 
370 |         self._initialize_weights()
371 | 
372 | 
373 |     def _initialize_weights(self):
374 |         for m in self.modules():
375 |             if isinstance(m, nn.Conv2d):
376 |                 pass  # leave the default PyTorch init
377 |             if isinstance(m, nn.ConvTranspose2d):
378 |                 assert m.kernel_size[0] == m.kernel_size[1]
379 |                 initial_weight = get_upsampling_weight(
380 |                     m.in_channels, m.out_channels, m.kernel_size[0])
381 |                 m.weight.data.copy_(initial_weight)
382 | 
383 | 
384 |     def forward(self, x):
385 |         h = x
386 |         h = self.conv1_1(h)
387 |         if self.batch_norm:
388 |             h = self.conv1_1_bn(h)
389 |         h = self.relu1_1(h)
390 |         h = self.conv1_2(h)
391 |         if self.batch_norm:
392 |             h = self.conv1_2_bn(h)
393 |         h = self.relu1_2(h)
394 |         h = self.pool1(h)
395 | 
396 |         if self.batch_norm:
397 |             h = self.relu2_1(self.conv2_1_bn(self.conv2_1(h)))
398 |         else:
399 |             h = self.relu2_1(self.conv2_1(h))
400 |         if self.batch_norm:
401 |             h = self.relu2_2(self.conv2_2_bn(self.conv2_2(h)))
402 |         else:
403 |             h = self.relu2_2(self.conv2_2(h))  # bug fix: no batch-norm on this path
404 |         h = self.pool2(h)
405 | 
406 |         if self.batch_norm:
407 |             h = self.relu3_1(self.conv3_1_bn(self.conv3_1(h)))
408 |         else:
409 |             h = self.relu3_1(self.conv3_1(h))
410 |         if self.batch_norm:
411 |             h = self.relu3_2(self.conv3_2_bn(self.conv3_2(h)))
412 |         else:
413 |             h = self.relu3_2(self.conv3_2(h))
414 |         if self.batch_norm:
415 |             h = self.relu3_3(self.conv3_3_bn(self.conv3_3(h)))
416 |         else:
417 |             h = self.relu3_3(self.conv3_3(h))
418 |         h = self.pool3(h)
419 | 
420 |         if self.batch_norm:
421 |             h = self.relu4_1(self.conv4_1_bn(self.conv4_1(h)))
422 |         else:
423 |             h = self.relu4_1(self.conv4_1(h))
424 |         if self.batch_norm:
425 |             h = self.relu4_2(self.conv4_2_bn(self.conv4_2(h)))
426 |         else:
427 |             h = self.relu4_2(self.conv4_2(h))
428 |         if self.batch_norm:
429 |             h = self.relu4_3(self.conv4_3_bn(self.conv4_3(h)))
430 |         else:
431 |             h = self.relu4_3(self.conv4_3(h))
432 |         h = self.pool4(h)
433 |         pool4 = h  # 1/16
434 | 
435 |         if self.batch_norm:
436 |             h = self.relu5_1(self.conv5_1_bn(self.conv5_1(h)))
437 |         else:
438 |             h = self.relu5_1(self.conv5_1(h))
439 |         if self.batch_norm:
440 |             h = self.relu5_2(self.conv5_2_bn(self.conv5_2(h)))
441 |         else:
442 |             h = self.relu5_2(self.conv5_2(h))
443 |         if self.batch_norm:
444 |             h = self.relu5_3(self.conv5_3_bn(self.conv5_3(h)))
445 |         else:
446 |             h = self.relu5_3(self.conv5_3(h))
447 |         h = self.pool5(h)
448 | 
449 |         if self.batch_norm:
450 |             h = self.relu6(self.fc6_bn(self.fc6(h)))
451 |         else:
452 |             h = self.relu6(self.fc6(h))
453 |         h = self.drop6(h)
454 | 
455 |         if self.batch_norm:
456 |             h = self.relu7(self.fc7_bn(self.fc7(h)))
457 |         else:
458 |             h = self.relu7(self.fc7(h))
459 |         h = self.drop7(h)
460 | 
461 |         if self.bin_type == 'one-hot':
462 |             raise NotImplementedError('TODO - FCN 16s for separate hue-chroma')
463 |         elif self.bin_type == 'soft':
464 |             h =
self.score_fr(h) 465 | h = self.upscore2(h) 466 | upscore2 = h # 1/16 467 | 468 | h = self.score_pool4(pool4) 469 | h = h[:, :, 5:5 + upscore2.size()[2], 5:5 + upscore2.size()[3]] 470 | score_pool4c = h # 1/16 471 | 472 | h = upscore2 + score_pool4c 473 | 474 | h = self.upscore16(h) 475 | h = h[:, :, 27:27 + x.size()[2], 27:27 + x.size()[3]].contiguous() 476 | 477 | return h 478 | 479 | 480 | def copy_params_from_fcn32s(self, fcn32s): 481 | for name, l1 in fcn32s.named_children(): 482 | try: 483 | l2 = getattr(self, name) 484 | l2.weight # skip ReLU / Dropout 485 | except Exception: 486 | continue 487 | assert l1.weight.size() == l2.weight.size() 488 | assert l1.bias.size() == l2.bias.size() 489 | l2.weight.data.copy_(l1.weight.data) 490 | l2.bias.data.copy_(l1.bias.data) 491 | 492 | 493 | 494 | class FCN8sColor(nn.Module): 495 | 496 | def __init__(self, n_class=32, bin_type='one-hot', batch_norm=True): 497 | super(FCN8sColor, self).__init__() 498 | self.n_class = n_class 499 | self.bin_type = bin_type 500 | self.batch_norm = batch_norm 501 | 502 | # conv1 503 | self.conv1_1 = nn.Conv2d(1, 64, 3, padding=100) 504 | self.relu1_1 = nn.ReLU(inplace=True) 505 | if batch_norm: 506 | self.conv1_1_bn = nn.BatchNorm2d(64) 507 | self.conv1_2 = nn.Conv2d(64, 64, 3, padding=1) 508 | self.relu1_2 = nn.ReLU(inplace=True) 509 | if batch_norm: 510 | self.conv1_2_bn = nn.BatchNorm2d(64) 511 | self.pool1 = nn.MaxPool2d(2, stride=2, ceil_mode=True) # 1/2 512 | 513 | # conv2 514 | self.conv2_1 = nn.Conv2d(64, 128, 3, padding=1) 515 | self.relu2_1 = nn.ReLU(inplace=True) 516 | if batch_norm: 517 | self.conv2_1_bn = nn.BatchNorm2d(128) 518 | self.conv2_2 = nn.Conv2d(128, 128, 3, padding=1) 519 | self.relu2_2 = nn.ReLU(inplace=True) 520 | if batch_norm: 521 | self.conv2_2_bn = nn.BatchNorm2d(128) 522 | self.pool2 = nn.MaxPool2d(2, stride=2, ceil_mode=True) # 1/4 523 | 524 | # conv3 525 | self.conv3_1 = nn.Conv2d(128, 256, 3, padding=1) 526 | self.relu3_1 = nn.ReLU(inplace=True) 527 | if batch_norm: 528 | self.conv3_1_bn = nn.BatchNorm2d(256) 529 | self.conv3_2 = nn.Conv2d(256, 256, 3, padding=1) 530 | self.relu3_2 = nn.ReLU(inplace=True) 531 | if batch_norm: 532 | self.conv3_2_bn = nn.BatchNorm2d(256) 533 | self.conv3_3 = nn.Conv2d(256, 256, 3, padding=1) 534 | self.relu3_3 = nn.ReLU(inplace=True) 535 | if batch_norm: 536 | self.conv3_3_bn = nn.BatchNorm2d(256) 537 | self.pool3 = nn.MaxPool2d(2, stride=2, ceil_mode=True) # 1/8 538 | 539 | # conv4 540 | self.conv4_1 = nn.Conv2d(256, 512, 3, padding=1) 541 | self.relu4_1 = nn.ReLU(inplace=True) 542 | if batch_norm: 543 | self.conv4_1_bn = nn.BatchNorm2d(512) 544 | self.conv4_2 = nn.Conv2d(512, 512, 3, padding=1) 545 | self.relu4_2 = nn.ReLU(inplace=True) 546 | if batch_norm: 547 | self.conv4_2_bn = nn.BatchNorm2d(512) 548 | self.conv4_3 = nn.Conv2d(512, 512, 3, padding=1) 549 | self.relu4_3 = nn.ReLU(inplace=True) 550 | if batch_norm: 551 | self.conv4_3_bn = nn.BatchNorm2d(512) 552 | self.pool4 = nn.MaxPool2d(2, stride=2, ceil_mode=True) # 1/16 553 | 554 | # conv5 555 | self.conv5_1 = nn.Conv2d(512, 512, 3, padding=1) 556 | self.relu5_1 = nn.ReLU(inplace=True) 557 | if batch_norm: 558 | self.conv5_1_bn = nn.BatchNorm2d(512) 559 | self.conv5_2 = nn.Conv2d(512, 512, 3, padding=1) 560 | self.relu5_2 = nn.ReLU(inplace=True) 561 | if batch_norm: 562 | self.conv5_2_bn = nn.BatchNorm2d(512) 563 | self.conv5_3 = nn.Conv2d(512, 512, 3, padding=1) 564 | self.relu5_3 = nn.ReLU(inplace=True) 565 | if batch_norm: 566 | self.conv5_3_bn = nn.BatchNorm2d(512) 567 | self.pool5 = 
nn.MaxPool2d(2, stride=2, ceil_mode=True)  # 1/32
568 | 
569 |         # fc6
570 |         self.fc6 = nn.Conv2d(512, 4096, 7)
571 |         self.relu6 = nn.ReLU(inplace=True)
572 |         if batch_norm:
573 |             self.fc6_bn = nn.BatchNorm2d(4096)
574 |         self.drop6 = nn.Dropout2d()
575 | 
576 |         # fc7
577 |         self.fc7 = nn.Conv2d(4096, 4096, 1)
578 |         self.relu7 = nn.ReLU(inplace=True)
579 |         self.fc7_bn = nn.BatchNorm2d(4096)
580 |         self.drop7 = nn.Dropout2d()
581 | 
582 |         if bin_type == 'one-hot':
583 |             # NOTE: *two* output prediction maps for hue and chroma
584 |             raise NotImplementedError('TODO - FCN 8s for separate hue-chroma')
585 |         elif bin_type == 'soft':
586 |             self.score_fr = nn.Conv2d(4096, n_class, 1)
587 |             self.score_pool3 = nn.Conv2d(256, n_class, 1)
588 |             self.score_pool4 = nn.Conv2d(512, n_class, 1)
589 | 
590 |             self.upscore2 = nn.ConvTranspose2d(n_class, n_class, 4, stride=2,
591 |                                                bias=False)
592 |             self.upscore8 = nn.ConvTranspose2d(n_class, n_class, 16, stride=8,
593 |                                                bias=False)
594 |             self.upscore_pool4 = nn.ConvTranspose2d(n_class, n_class, 4, stride=2,
595 |                                                     bias=False)
596 | 
597 |             self.upscore2.weight.requires_grad = False  # fix bilinear upsamplers
598 |             self.upscore8.weight.requires_grad = False
599 |             self.upscore_pool4.weight.requires_grad = False
600 | 
601 |         self._initialize_weights()
602 | 
603 | 
604 |     def _initialize_weights(self):
605 |         for m in self.modules():
606 |             if isinstance(m, nn.Conv2d):
607 |                 pass  # leave the default PyTorch init
608 |             if isinstance(m, nn.ConvTranspose2d):
609 |                 assert m.kernel_size[0] == m.kernel_size[1]
610 |                 initial_weight = get_upsampling_weight(
611 |                     m.in_channels, m.out_channels, m.kernel_size[0])
612 |                 m.weight.data.copy_(initial_weight)
613 | 
614 | 
615 |     def forward(self, x):
616 |         h = x
617 |         h = self.conv1_1(h)
618 |         if self.batch_norm:
619 |             h = self.conv1_1_bn(h)
620 |         h = self.relu1_1(h)
621 |         h = self.conv1_2(h)
622 |         if self.batch_norm:
623 |             h = self.conv1_2_bn(h)
624 |         h = self.relu1_2(h)
625 |         h = self.pool1(h)
626 | 
627 |         if self.batch_norm:
628 |             h = self.relu2_1(self.conv2_1_bn(self.conv2_1(h)))
629 |         else:
630 |             h = self.relu2_1(self.conv2_1(h))
631 |         if self.batch_norm:
632 |             h = self.relu2_2(self.conv2_2_bn(self.conv2_2(h)))
633 |         else:
634 |             h = self.relu2_2(self.conv2_2(h))  # bug fix: no batch-norm on this path
635 |         h = self.pool2(h)
636 | 
637 |         if self.batch_norm:
638 |             h = self.relu3_1(self.conv3_1_bn(self.conv3_1(h)))
639 |         else:
640 |             h = self.relu3_1(self.conv3_1(h))
641 |         if self.batch_norm:
642 |             h = self.relu3_2(self.conv3_2_bn(self.conv3_2(h)))
643 |         else:
644 |             h = self.relu3_2(self.conv3_2(h))
645 |         if self.batch_norm:
646 |             h = self.relu3_3(self.conv3_3_bn(self.conv3_3(h)))
647 |         else:
648 |             h = self.relu3_3(self.conv3_3(h))
649 |         h = self.pool3(h)
650 |         pool3 = h  # 1/8
651 | 
652 |         if self.batch_norm:
653 |             h = self.relu4_1(self.conv4_1_bn(self.conv4_1(h)))
654 |         else:
655 |             h = self.relu4_1(self.conv4_1(h))
656 |         if self.batch_norm:
657 |             h = self.relu4_2(self.conv4_2_bn(self.conv4_2(h)))
658 |         else:
659 |             h = self.relu4_2(self.conv4_2(h))
660 |         if self.batch_norm:
661 |             h = self.relu4_3(self.conv4_3_bn(self.conv4_3(h)))
662 |         else:
663 |             h = self.relu4_3(self.conv4_3(h))
664 |         h = self.pool4(h)
665 |         pool4 = h  # 1/16
666 | 
667 |         if self.batch_norm:
668 |             h = self.relu5_1(self.conv5_1_bn(self.conv5_1(h)))
669 |         else:
670 |             h = self.relu5_1(self.conv5_1(h))
671 |         if self.batch_norm:
672 |             h = self.relu5_2(self.conv5_2_bn(self.conv5_2(h)))
673 |         else:
674 |             h = self.relu5_2(self.conv5_2(h))
675 |         if self.batch_norm:
676 |             h =
self.relu5_3(self.conv5_3_bn(self.conv5_3(h))) 677 | else: 678 | h = self.relu5_3(self.conv5_3(h)) 679 | h = self.pool5(h) 680 | 681 | if self.batch_norm: 682 | h = self.relu6(self.fc6_bn(self.fc6(h))) 683 | else: 684 | h = self.relu6(self.fc6(h)) 685 | h = self.drop6(h) 686 | 687 | if self.batch_norm: 688 | h = self.relu7(self.fc7_bn(self.fc7(h))) 689 | else: 690 | h = self.relu7(self.fc7(h)) 691 | h = self.drop7(h) 692 | 693 | if self.bin_type == 'one-hot': 694 | raise NotImplementedError('TODO - FCN 16s for separate hue-chroma') 695 | elif self.bin_type == 'soft': 696 | h = self.score_fr(h) 697 | h = self.upscore2(h) 698 | upscore2 = h # 1/16 699 | 700 | h = self.score_pool4(pool4) 701 | h = h[:, :, 5:5 + upscore2.size()[2], 5:5 + upscore2.size()[3]] 702 | score_pool4c = h # 1/16 703 | 704 | h = upscore2 + score_pool4c # 1/16 705 | h = self.upscore_pool4(h) 706 | upscore_pool4 = h # 1/8 707 | 708 | h = self.score_pool3(pool3) 709 | h = h[:, :, 710 | 9:9 + upscore_pool4.size()[2], 711 | 9:9 + upscore_pool4.size()[3]] 712 | score_pool3c = h # 1/8 713 | 714 | h = upscore_pool4 + score_pool3c # 1/8 715 | 716 | h = self.upscore8(h) 717 | h = h[:, :, 31:31 + x.size()[2], 31:31 + x.size()[3]].contiguous() 718 | 719 | return h 720 | 721 | 722 | def copy_params_from_fcn16s(self, fcn16s): 723 | for name, l1 in fcn16s.named_children(): 724 | try: 725 | l2 = getattr(self, name) 726 | l2.weight # skip ReLU / Dropout 727 | except Exception: 728 | continue 729 | assert l1.weight.size() == l2.weight.size() 730 | l2.weight.data.copy_(l1.weight.data) 731 | if l1.bias is not None: 732 | assert l1.bias.size() == l2.bias.size() 733 | l2.bias.data.copy_(l1.bias.data) 734 | 735 | 736 | 737 | -------------------------------------------------------------------------------- /lfw/data/pairsDevTest.txt: -------------------------------------------------------------------------------- 1 | 500 2 | Abdullah_Gul 13 14 3 | Abdullah_Gul 13 16 4 | Abdullatif_Sener 1 2 5 | Adel_Al-Jubeir 1 3 6 | Al_Pacino 1 2 7 | Alan_Greenspan 1 5 8 | Albert_Costa 2 6 9 | Albert_Costa 4 6 10 | Albert_Costa 5 6 11 | Alejandro_Atchugarry 1 2 12 | Alex_Penelas 1 2 13 | Ali_Naimi 1 3 14 | Ali_Naimi 3 4 15 | Allison_Janney 1 2 16 | Allyson_Felix 2 5 17 | Alvaro_Uribe 7 10 18 | Alvaro_Uribe 17 26 19 | Alvaro_Uribe 28 29 20 | Amanda_Bynes 1 2 21 | Amanda_Bynes 1 3 22 | Amanda_Bynes 1 4 23 | Amanda_Bynes 3 4 24 | Amelie_Mauresmo 1 21 25 | Amelie_Mauresmo 4 11 26 | Amelie_Mauresmo 16 21 27 | Ana_Palacio 1 6 28 | Andrei_Mikhnevich 1 2 29 | Andy_Hebb 1 2 30 | Angelo_Reyes 1 2 31 | Angelo_Reyes 2 3 32 | Ann_Veneman 1 2 33 | Ann_Veneman 5 11 34 | Ann_Veneman 8 9 35 | Anna_Nicole_Smith 1 2 36 | Anne_Krueger 1 2 37 | Anne_Krueger 2 3 38 | Annette_Lu 1 2 39 | Annette_Lu 1 3 40 | Annette_Lu 2 3 41 | Antonio_Palocci 3 8 42 | Antonio_Trillanes 1 2 43 | Antonio_Trillanes 2 3 44 | Arlen_Specter 1 3 45 | Augustin_Calleri 1 3 46 | Augustin_Calleri 1 4 47 | Barbara_Brezigar 1 2 48 | Begum_Khaleda_Zia 1 2 49 | Ben_Affleck 1 3 50 | Ben_Affleck 2 6 51 | Ben_Affleck 3 6 52 | Bertie_Ahern 2 3 53 | Bill_Frist 1 3 54 | Bill_Frist 2 7 55 | Bill_Frist 3 8 56 | Bill_McBride 2 6 57 | Bill_McBride 2 10 58 | Bill_McBride 3 9 59 | Bill_Sizemore 1 2 60 | Bono 1 2 61 | Bono 1 3 62 | Bono 2 3 63 | Brian_Cowen 1 2 64 | Bridget_Fonda 1 3 65 | Bridgette_Wilson-Sampras 1 2 66 | Butch_Davis 1 2 67 | Calista_Flockhart 2 6 68 | Candice_Bergen 1 3 69 | Carla_Del_Ponte 2 4 70 | Carla_Del_Ponte 4 5 71 | Carlos_Manuel_Pruneda 1 2 72 | Carlos_Manuel_Pruneda 1 3 73 | Carlos_Manuel_Pruneda 2 
3 74 | Carlos_Moya 7 11 75 | Carlos_Ortega 1 3 76 | Carolyn_Dawn_Johnson 2 3 77 | Carrie-Anne_Moss 2 5 78 | Carson_Daly 1 2 79 | Cathy_Freeman 1 2 80 | Chanda_Rubin 2 4 81 | Chang_Dae-whan 1 2 82 | Charles_Taylor 3 9 83 | Chen_Shui-bian 3 5 84 | Chita_Rivera 1 2 85 | Chok_Tong_Goh 1 2 86 | Chris_Bell 1 2 87 | Chris_Byrd 1 2 88 | Christian_Longo 1 3 89 | Christian_Longo 2 3 90 | Christian_Wulff 1 2 91 | Christine_Ebersole 1 2 92 | Christine_Todd_Whitman 5 6 93 | Christopher_Reeve 1 4 94 | Claire_Leger 1 2 95 | Condoleezza_Rice 2 5 96 | Condoleezza_Rice 7 10 97 | Cristina_Fernandez 1 2 98 | Dai_Bachtiar 1 2 99 | Daisy_Fuentes 3 4 100 | Dave_Campo 1 2 101 | Dave_Campo 2 3 102 | David_Stern 1 4 103 | David_Stern 3 4 104 | Denise_Johnson 1 2 105 | Dennis_Powell 1 2 106 | Denzel_Washington 1 3 107 | Denzel_Washington 2 4 108 | Denzel_Washington 2 5 109 | Derek_Jeter 2 3 110 | Diane_Green 1 2 111 | Dick_Clark 1 2 112 | Dick_Clark 1 3 113 | Dick_Clark 2 3 114 | Dick_Vermeil 1 2 115 | Dominique_de_Villepin 3 5 116 | Dominique_de_Villepin 7 11 117 | Dominique_de_Villepin 14 15 118 | Donald_Pettit 1 2 119 | Donald_Pettit 2 3 120 | Donald_Rumsfeld 22 36 121 | Donald_Rumsfeld 28 108 122 | Donald_Rumsfeld 41 48 123 | Doug_Collins 1 2 124 | Edward_Lu 2 4 125 | Edward_Norton 1 2 126 | Edwin_Edwards 1 2 127 | Edwin_Edwards 2 3 128 | Eliane_Karp 1 4 129 | Eliane_Karp 2 3 130 | Elin_Nordegren 1 2 131 | Elinor_Caplan 1 2 132 | Elisabeth_Schumacher 1 2 133 | Elizabeth_Smart 1 5 134 | Elizabeth_Smart 3 4 135 | Elsa_Zylberstein 1 2 136 | Elsa_Zylberstein 1 5 137 | Elsa_Zylberstein 3 6 138 | Emma_Watson 1 4 139 | Emma_Watson 2 3 140 | Emma_Watson 3 5 141 | Erik_Morales 1 3 142 | Erika_Christensen 1 2 143 | Erin_Runnion 3 4 144 | Ernie_Fletcher 1 2 145 | Fabiola_Zuluaga 1 2 146 | Felipe_Perez_Roque 1 4 147 | Felipe_Perez_Roque 2 4 148 | Fernando_Vargas 2 4 149 | Flor_Montulo 1 2 150 | Frank_Cassell 1 2 151 | Frank_Dunham_Jr 1 2 152 | Gao_Qiang 1 2 153 | Garry_Trudeau 1 2 154 | Gary_Doer 1 2 155 | Gary_Doer 1 3 156 | Gary_Doer 2 3 157 | Gary_Winnick 1 2 158 | Gene_Robinson 2 4 159 | George_Foreman 1 2 160 | George_Karl 1 2 161 | George_Robertson 5 20 162 | George_Robertson 12 19 163 | George_Robertson 15 22 164 | Georgi_Parvanov 1 2 165 | Geraldine_Chaplin 1 3 166 | Gerhard_Schroeder 3 26 167 | Gerry_Adams 3 7 168 | Gerry_Adams 5 8 169 | Gilberto_Rodriguez_Orejuela 1 3 170 | Gilberto_Rodriguez_Orejuela 2 4 171 | Gilberto_Rodriguez_Orejuela 3 4 172 | Giuseppe_Gibilisco 2 4 173 | Giuseppe_Gibilisco 3 4 174 | Glafcos_Clerides 1 4 175 | Gordon_Brown 4 11 176 | Gordon_Campbell 1 2 177 | Gray_Davis 6 23 178 | Gray_Davis 13 23 179 | Greg_Gilbert 1 2 180 | Greg_Ostertag 1 2 181 | Gregory_Hines 1 2 182 | Gus_Van_Sant 1 2 183 | Gwendal_Peizerat 1 2 184 | Gwyneth_Paltrow 4 5 185 | Harry_Kalas 1 2 186 | Hassan_Wirajuda 1 2 187 | Heather_Mills 3 4 188 | Heidi_Fleiss 1 3 189 | Heidi_Fleiss 1 4 190 | Hermann_Maier 1 2 191 | Horst_Koehler 1 3 192 | Ian_Thorpe 2 7 193 | Ian_Thorpe 3 10 194 | Ian_Thorpe 6 8 195 | Isabella_Rossellini 1 2 196 | Isabelle_Huppert 1 2 197 | Ismail_Merchant 1 2 198 | Jack_Grubman 1 2 199 | Jack_Nicholson 2 3 200 | Jacques_Chirac 22 44 201 | Jacques_Chirac 30 33 202 | Jacques_Chirac 31 52 203 | Jacques_Rogge 4 6 204 | Jake_Gyllenhaal 2 4 205 | Jake_Gyllenhaal 3 4 206 | James_Butts 1 2 207 | James_Franco 1 2 208 | James_Kelly 4 11 209 | James_Kelly 9 11 210 | James_Kopp 1 3 211 | James_Kopp 2 3 212 | James_Maguire 1 2 213 | James_Smith 1 2 214 | Jan_Ullrich 2 3 215 | Jan_Ullrich 2 5 216 | Janica_Kostelic 1 
2 217 | Javier_Weber 1 2 218 | Javier_Weber 1 3 219 | Jean-Claude_Juncker 1 2 220 | Jean-Claude_Trichet 1 2 221 | Jean-Marc_de_La_Sabliere 1 2 222 | Jean-Pierre_Raffarin 2 6 223 | Jean_Carnahan 1 2 224 | Jefferson_Perez 1 2 225 | Jennifer_Capriati 3 13 226 | Jennifer_Capriati 24 29 227 | Jennifer_Connelly 1 4 228 | Jennifer_Connelly 3 4 229 | Jennifer_Keller 2 4 230 | Jennifer_Keller 3 4 231 | Jeong_Se-hyun 4 5 232 | Jeremy_Shockey 1 2 233 | Jesse_Ventura 1 3 234 | Jessica_Lange 1 2 235 | Jiang_Zemin 2 17 236 | Jiang_Zemin 12 15 237 | Jim_Harrick 1 2 238 | Jimmy_Carter 1 9 239 | Jimmy_Carter 4 8 240 | Jimmy_Carter 8 9 241 | Joe_Mantello 1 2 242 | Joe_Nichols 1 3 243 | Joe_Nichols 3 4 244 | John_Bolton 3 10 245 | John_Bolton 4 14 246 | John_Bolton 5 13 247 | John_Bolton 11 14 248 | John_Garamendi 1 2 249 | John_McCallum 1 2 250 | John_Negroponte 4 17 251 | John_Rowland 1 2 252 | John_Stallworth 1 2 253 | John_Travolta 4 5 254 | John_Walsh 1 2 255 | Johnny_Carson 1 2 256 | Jorge_Castaneda 1 2 257 | Jose_Dirceu 1 2 258 | Joseph_Deiss 1 2 259 | Judi_Dench 1 2 260 | Julie_Gerberding 7 12 261 | Julie_Gerberding 8 12 262 | Justin_Leonard 1 3 263 | Justine_Henin 1 3 264 | Justine_Henin 2 3 265 | Kamal_Kharrazi 3 5 266 | Keith_Bogans 2 3 267 | Kenneth_Evans 1 2 268 | Kim_Yong-il 1 2 269 | King_Abdullah_II 1 2 270 | King_Abdullah_II 1 3 271 | King_Abdullah_II 1 4 272 | King_Abdullah_II 2 3 273 | Kjell_Magne_Bondevik 1 2 274 | Kosuke_Kitajima 1 2 275 | Kristen_Breitweiser 1 2 276 | Kristin_Davis 1 2 277 | Kristin_Davis 2 3 278 | Kurt_Warner 1 5 279 | LK_Advani 1 2 280 | LK_Advani 1 3 281 | LK_Advani 2 3 282 | Larry_Coker 2 4 283 | Latrell_Sprewell 1 2 284 | Laura_Linney 1 3 285 | Lauren_Killian 1 2 286 | Laurent_Jalabert 1 2 287 | Leonardo_DiCaprio 1 8 288 | Leonardo_DiCaprio 3 7 289 | Leonid_Kuchma 1 6 290 | Leonid_Kuchma 2 3 291 | Leonid_Kuchma 2 4 292 | Leonid_Kuchma 4 6 293 | Leslie_Moonves 1 2 294 | Leszek_Miller 2 3 295 | Lino_Oviedo 1 2 296 | Lisa_Raymond 1 2 297 | Liza_Minnelli 1 3 298 | Lloyd_Ward 1 2 299 | Luciano_Pavarotti 1 3 300 | Lucy_Liu 2 4 301 | Luis_Figo 1 2 302 | Luis_Horna 1 3 303 | Luis_Horna 2 5 304 | Lynn_Abraham 1 2 305 | Mack_Brown 1 2 306 | Mahmoud_Abbas 19 26 307 | Marcelo_Rios 1 4 308 | Marcelo_Rios 2 4 309 | Marcelo_Rios 4 5 310 | Marieta_Chrousala 1 2 311 | Marieta_Chrousala 1 3 312 | Mario_Cipollini 1 2 313 | Mark_Dacey 1 2 314 | Mark_Richt 1 3 315 | Mark_Richt 2 3 316 | Martha_Burk 2 4 317 | Martha_Stewart 1 2 318 | Martha_Stewart 2 5 319 | Martha_Stewart 3 5 320 | Martin_Cauchon 1 2 321 | Martin_Sheen 1 2 322 | Mary_Landrieu 1 3 323 | Masum_Turker 1 2 324 | Masum_Turker 1 3 325 | Matt_Doherty 1 2 326 | Matt_Doherty 1 3 327 | Matthew_Broderick 1 4 328 | Matthew_Broderick 2 3 329 | Matthew_Broderick 3 4 330 | Melissa_Etheridge 1 2 331 | Michael_Jackson 2 8 332 | Michael_Phelps 1 3 333 | Michael_Sullivan 1 2 334 | Michel_Duclos 1 2 335 | Michelle_Yeoh 3 4 336 | Miguel_Contreras 1 2 337 | Mike_Price 1 2 338 | Mikhail_Youzhny 1 2 339 | Mikhail_Youzhny 1 3 340 | Mikhail_Youzhny 2 3 341 | Mikulas_Dzurinda 1 2 342 | Mireya_Moscoso 1 3 343 | Mireya_Moscoso 2 3 344 | Mohamed_ElBaradei 3 6 345 | Mohamed_ElBaradei 4 5 346 | Mohamed_ElBaradei 5 8 347 | Monica_Lewinsky 2 3 348 | Muhammad_Ali 4 7 349 | Muhammad_Ali 6 9 350 | Nancy_Pelosi 2 5 351 | Nancy_Pelosi 6 13 352 | Nancy_Pelosi 13 14 353 | Naomi_Campbell 1 2 354 | Naoto_Kan 2 3 355 | Nasser_al-Kidwa 1 2 356 | Nastassia_Kinski 1 2 357 | Natalie_Coughlin 1 2 358 | Natalie_Coughlin 3 5 359 | Natalie_Maines 2 4 360 | 
Nicanor_Duarte_Frutos 8 10 361 | Noah_Wyle 1 2 362 | Norm_Coleman 2 7 363 | Orrin_Hatch 1 2 364 | Osama_bin_Laden 2 4 365 | Owen_Wilson 1 2 366 | Ozzy_Osbourne 1 2 367 | Ozzy_Osbourne 2 3 368 | Padraig_Harrington 1 3 369 | Paris_Hilton 1 2 370 | Patrick_Leahy 1 2 371 | Patti_Labelle 1 2 372 | Patti_Labelle 2 3 373 | Patty_Schnyder 1 3 374 | Patty_Schnyder 1 4 375 | Paul-Henri_Mathieu 1 2 376 | Paul_Burrell 4 10 377 | Paul_Burrell 5 11 378 | Paul_Martin 1 8 379 | Paul_Martin 2 4 380 | Paul_Martin 3 6 381 | Paul_Patton 1 2 382 | Paul_Sarbanes 2 3 383 | Pedro_Malan 1 2 384 | Pedro_Malan 1 4 385 | Pedro_Malan 2 3 386 | Penelope_Cruz 1 2 387 | Peter_Greenaway 1 2 388 | Pierce_Brosnan 4 12 389 | Pierce_Brosnan 7 8 390 | Pierre_Boulanger 1 2 391 | Priscilla_Presley 1 2 392 | Rachel_Griffiths 1 3 393 | Rachel_Hunter 2 3 394 | Ralf_Schumacher 1 6 395 | Rebecca_Romijn-Stamos 2 4 396 | Rebecca_Romijn-Stamos 3 4 397 | Reese_Witherspoon 1 2 398 | Reese_Witherspoon 1 4 399 | Ricardo_Maduro 1 2 400 | Rich_Gannon 1 2 401 | Richard_Gere 1 10 402 | Richard_Gere 6 10 403 | Richard_Norton-Taylor 1 2 404 | Richard_Shelby 1 2 405 | Richie_Adubato 1 2 406 | Rick_Dinse 1 3 407 | Rick_Perry 1 2 408 | Rick_Perry 2 6 409 | Rick_Pitino 1 2 410 | Rick_Romley 1 2 411 | Rick_Wagoner 1 2 412 | Rob_Schneider 1 2 413 | Robbie_Fowler 1 2 414 | Robby_Ginepri 1 2 415 | Robert_Horan 1 2 416 | Robert_Mueller 1 3 417 | Robert_Redford 2 3 418 | Robert_Redford 2 5 419 | Robert_Redford 4 7 420 | Robert_Redford 7 8 421 | Rod_Blagojevich 1 2 422 | Rod_Stewart 1 3 423 | Ronaldo_Luis_Nazario_de_Lima 1 4 424 | Roy_Moore 2 3 425 | Rupert_Grint 2 3 426 | Russell_Coutts 1 2 427 | Russell_Crowe 1 2 428 | Salma_Hayek 1 7 429 | Salma_Hayek 8 10 430 | Sarah_Jessica_Parker 5 6 431 | Scott_McNealy 1 2 432 | Sean_Astin 1 2 433 | Sean_OKeefe 4 5 434 | Sean_Patrick_OMalley 1 2 435 | Sean_Patrick_OMalley 1 3 436 | Sean_Patrick_OMalley 2 3 437 | Sean_Penn 2 3 438 | Sergey_Lavrov 1 10 439 | Shane_Mosley 1 2 440 | Shannon_OBrien 1 2 441 | Sheila_Wellstone 1 2 442 | Silvio_Berlusconi 14 23 443 | Spencer_Abraham 12 13 444 | Stan_Heath 1 2 445 | Stefano_Accorsi 1 2 446 | Steve_Lavin 1 4 447 | Steven_Hatfill 1 2 448 | Surakait_Sathirathai 1 2 449 | Susan_Collins 1 2 450 | Susie_Castillo 1 2 451 | Svetlana_Koroleva 1 2 452 | Tammy_Lynn_Michaels 1 2 453 | Tang_Jiaxuan 1 6 454 | Tang_Jiaxuan 1 9 455 | Tang_Jiaxuan 1 10 456 | Tang_Jiaxuan 1 11 457 | Tassos_Papadopoulos 1 2 458 | Terry_McAuliffe 1 2 459 | Terry_McAuliffe 1 3 460 | Thabo_Mbeki 4 5 461 | Thaksin_Shinawatra 1 6 462 | Thaksin_Shinawatra 3 6 463 | Theodore_Tweed_Roosevelt 2 3 464 | Thomas_OBrien 8 9 465 | Tim_Allen 1 2 466 | Tim_Allen 2 4 467 | Tim_Curry 1 2 468 | Tippi_Hedren 1 2 469 | Todd_Haynes 1 3 470 | Todd_Haynes 1 4 471 | Tom_Coverdale 1 2 472 | Tom_Cruise 1 7 473 | Tom_Cruise 2 4 474 | Tom_Cruise 4 9 475 | Tommy_Haas 2 3 476 | Tomoko_Hagiwara 1 2 477 | Tony_Shalhoub 1 3 478 | Tony_Shalhoub 1 4 479 | Tracee_Ellis_Ross 1 2 480 | Tyler_Hamilton 1 2 481 | Vaclav_Havel 1 6 482 | Vaclav_Havel 1 9 483 | Vicente_Fernandez 2 5 484 | Vicente_Fernandez 4 5 485 | Victoria_Beckham 2 3 486 | Vincent_Brooks 1 2 487 | Vincent_Brooks 2 6 488 | Warren_Beatty 1 2 489 | Warren_Buffett 1 3 490 | Wesley_Clark 1 2 491 | Will_Smith 1 2 492 | William_Burns 1 2 493 | William_Macy 1 4 494 | William_Macy 2 3 495 | William_Macy 3 5 496 | William_Rehnquist 1 2 497 | Winona_Ryder 6 15 498 | Winona_Ryder 19 21 499 | Yevgeny_Kafelnikov 3 4 500 | Yoriko_Kawaguchi 3 10 501 | Zoran_Djindjic 3 4 502 | AJ_Lamas 1 Zach_Safrin 1 
503 | Aaron_Guiel 1 Reese_Witherspoon 3 504 | Aaron_Tippin 1 Jose_Luis_Rodriguez_Zapatero 1 505 | Abdul_Majeed_Shobokshi 1 Charles_Cope 1 506 | Abdullah_Gul 16 Steve_Cox 1 507 | Abid_Hamid_Mahmud_Al-Tikriti 1 Eli_Broad 1 508 | Adam_Kennedy 1 Amelie_Mauresmo 19 509 | Adel_Al-Jubeir 1 Elisabeth_Welch 1 510 | Adrian_Murrell 1 Tommy_Franks 15 511 | Adriana_Lima 1 Terrence_Trammell 1 512 | Adriana_Perez_Navarro 1 Jennifer_Capriati 8 513 | Agbani_Darego 1 Malik_Mahmud 1 514 | Ahmed_Ghazi 1 Henry_Suazo 1 515 | Aileen_Riggin_Soule 1 Damarius_Bilbo 1 516 | Ain_Seppik 1 Donald_Regan 1 517 | Aitor_Gonzalez 1 Lily_Tomlin 2 518 | Al_Cardenas 1 Mary_Landrieu 3 519 | Al_Davis 2 Dennis_Powell 1 520 | Al_Pacino 1 Izzat_Ibrahim 1 521 | Aleksander_Voloshin 1 Ashlea_Talbot 1 522 | Alex_Cabrera 1 Richard_Carl 1 523 | Alex_Corretja 1 Newt_Gingrich 1 524 | Alex_Corretja 1 Tippi_Hedren 2 525 | Alexandra_Rozovskaya 1 Don_King 1 526 | Ali_Fallahian 1 Mohammed_Abulhasan 1 527 | Ali_Naimi 8 Vanessa_Laine 1 528 | Alicia_Keys 1 Giannina_Facio 1 529 | Alicia_Molik 1 Lena_Katina 1 530 | Aline_Chretien 1 Carlos_Iturgaitz 1 531 | Alisha_Richman 1 Spencer_Abraham 9 532 | Allison_Searing 1 Amy_Gale 1 533 | Allison_Searing 1 Mark_Martin 1 534 | Allison_Searing 1 Phillip_Fulmer 1 535 | Allison_Searing 1 Warren_Beatty 1 536 | Amanda_Marsh 1 Howard_Smith 2 537 | Amanda_Marsh 1 Svetlana_Koroleva 2 538 | Amelie_Mauresmo 6 Terri_Clark 1 539 | Amporn_Falise 1 Christiane_Wulff 1 540 | Amporn_Falise 1 Liza_Minnelli 2 541 | Amporn_Falise 1 Zhang_Yimou 1 542 | Amy_Gale 1 Frank_Murkowski 1 543 | Ana_Palacio 5 Erika_Christensen 2 544 | Ana_Palacio 6 Robert_Gordon_Card 1 545 | Andrew_Caldecott 1 Joe_Strummer 1 546 | Andrew_Shutley 1 Charles_Taylor 7 547 | Andrew_Wetzler 1 Cristina_Torrens_Valero 1 548 | Angela_Mascia-Frye 1 Derrick_Taylor 1 549 | Ann_Godbehere 1 Paul_Newman 1 550 | Ann_Godbehere 1 Tatiana_Kennedy_Schlossberg 1 551 | Anna_Chicherova 1 Bob_Iger 1 552 | Anna_Nicole_Smith 1 Tony_Shalhoub 4 553 | Anne_Krueger 3 Lucrecia_Orozco 1 554 | Anne_McLellan 2 Richard_Penniman 1 555 | Anne_McLellan 2 Roman_Tam 1 556 | Annette_Lu 1 Scott_Fawell 1 557 | Annette_Lu 1 Svetlana_Belousova 1 558 | Annie_Chaplin 1 Gloria_Gaynor 1 559 | Annie_Chaplin 1 Ion_Tiriac 1 560 | Annie_Chaplin 1 Michael_Keaton 1 561 | Antonio_Palocci 4 Donald_Regan 1 562 | Aretha_Franklin 1 Marc_Racicot 1 563 | Armando_Avila_Panchame 1 Hamza_Atiya_Muhsen 1 564 | Armando_Avila_Panchame 1 Hana_Urushima 1 565 | Arthur_Johnson 1 Steve_Pagliuca 1 566 | Ascencion_Barajas 1 Marissa_Jaret_Winokur 2 567 | Ashlea_Talbot 1 Sean_Patrick_Thomas 1 568 | Ashley_Judd 1 Pat_Burns 1 569 | Asif_Ali_Zardari 1 Cristina_Torrens_Valero 1 570 | Assad_Ahmadi 1 Jose_Cevallos 1 571 | Astou_Ndiaye-Diatta 1 Scott_Yates 1 572 | Augustin_Calleri 4 Brad_Miller 1 573 | Augustin_Calleri 4 John_Jones 1 574 | Babe_Ruth 1 Joshua_Perper 1 575 | Barbara_Bach 1 Eli_Rosenbaum 1 576 | Barbara_Becker 1 Gerhard_Boekel 1 577 | Barbara_Roberts 1 Kirsten_Clark 1 578 | Barry_Williams 1 Richard_Langille 1 579 | Barzan_al-Tikriti 1 Jim_Haslett 1 580 | Beecher_Ray_Kirby 1 Hashan_Tillakaratne 1 581 | Begum_Khaleda_Zia 1 Jesse_Jackson 5 582 | Ben_Cohen 1 Roger_Etchegaray 1 583 | Ben_Cohen 1 Zach_Parise 1 584 | Benjamin_McKenzie 1 Lisa_Leslie 1 585 | Betsy_Coffin 1 Gerard_Tronche 1 586 | Bill_Carmody 1 Michael_Sheehan 1 587 | Bill_Carmody 1 Tippi_Hedren 1 588 | Bill_Curry 1 Candice_Bergen 1 589 | Bill_Frist 2 Malcolm_Wild 1 590 | Bill_Frist 4 Donald_Pettit 2 591 | Bill_Kollar 1 Phillip_Seymor_Hoffmann 1 592 | Bill_McBride 8 
John_Rowland 1 593 | Bill_Sizemore 1 Calista_Flockhart 3 594 | Billy_Boyd 1 Liza_Minnelli 3 595 | Billy_Boyd 1 Robin_Williams 1 596 | Bing_Crosby 1 Eric_Bana 1 597 | Bing_Crosby 1 John_Moxley 1 598 | Bing_Crosby 1 Leonid_Kuchma 5 599 | Bob_Iger 1 Brian_Cowen 1 600 | Bob_Iger 1 Steve_Lavin 6 601 | Bob_Newhart 1 Marina_Canetti 1 602 | Bob_Riley 1 Jim_Harrick 1 603 | Bono 1 Chen_Shui-bian 2 604 | Bono 2 Se_Hyuk_Joo 1 605 | Brad_Miller 1 Chris_Noth 1 606 | Brandon_Larson 1 James_Williams 1 607 | Brandon_Lloyd 1 Zach_Parise 1 608 | Brandon_Robinson 1 Mitchell_Garabedian 1 609 | Brandon_Webb 1 Teddy_Kollek 1 610 | Brian_Cowen 2 Dave_Johnson 1 611 | Brian_Griese 1 Jennie_Garth 1 612 | Brock_Berlin 1 Wendy_Selig 1 613 | Brooke_Adams 1 Gilberto_Rodriguez_Orejuela 2 614 | Brooke_Adams 1 Hussam_Mohammed_Amin 1 615 | Bruce_Willis 1 Heather_Mills 2 616 | Buddy_Ryan 1 Laurel_Clark 1 617 | Buddy_Ryan 1 Roy_Romanow 1 618 | Calvin_Joseph_Coleman 1 Ekaterina_Dmitriev 1 619 | Calvin_Joseph_Coleman 1 Victor_Garber 1 620 | Camilla_Parker_Bowles 2 Mary_Matalin 1 621 | Camille_Lewis 1 Keith_Lockhart 1 622 | Camille_Lewis 1 Melissa_Etheridge 1 623 | Candice_Bergen 1 Masum_Turker 2 624 | Candice_Bergen 2 Jake_Gyllenhaal 2 625 | Candice_Bergen 2 Prospero_Pichay 1 626 | Carey_Lowell 1 Helio_Castroneves 1 627 | Carey_Lowell 1 Joshua_Harapko 1 628 | Carlo_Ancelotti 1 Hugh_Miller 1 629 | Carlos_Beltran 1 Shoshannah_Stern 1 630 | Carlos_Ghosn 1 Renee_Zellweger 11 631 | Carlos_Iturgaitz 1 Francisco_Maturana 1 632 | Carlos_Manuel_Pruneda 1 Claudia_Coslovich 1 633 | Carly_Gullickson 1 Mike_Helton 2 634 | Caroline_Dhavernas 1 James_Smith 1 635 | Carrie-Anne_Moss 4 Mike_Richter 1 636 | Carson_Daly 1 Matthew_During 1 637 | Casey_Crowder 1 Joe_Nichols 1 638 | Casey_Crowder 1 Laszlo_Kovacs 1 639 | Cecilia_Chang 1 Jeffery_Hendren 1 640 | Cecilia_Cheung 1 Kirsten_Clark 1 641 | Cecilia_Cheung 1 Simon_Chalk 1 642 | Cedric_Benson 1 Greg_Hodge 1 643 | Cesar_Maia 2 Karin_Viard 1 644 | Chang_Jae_On 1 Oracene_Williams 1 645 | Chante_Jawan_Mallard 1 Muhammad_Ali 5 646 | Charla_Moye 1 Joe_Mantegna 1 647 | Charla_Moye 1 Robert_Nillson 1 648 | Charles_Cope 1 John_Franco 1 649 | Charley_Armey 1 Jose_Cevallos 1 650 | Charlie_Sheen 1 Deece_Eckstein 1 651 | Charlie_Sheen 1 Janice_Goldfinger 1 652 | Charlie_Zaa 2 William_Harrison 1 653 | Charmaine_Crooks 1 Esad_Landzo 1 654 | Charmaine_Crooks 1 Shigeru_Ishiba 1 655 | Chawki_Armali 1 Hilmi_Akin_Zorlu 1 656 | Chea_Sophara 1 Muhammad_Ali 6 657 | Chen_Shui-bian 2 Simon_Cowell 1 658 | Chen_Shui-bian 2 William_Nessen 1 659 | Chen_Shui-bian 3 Elizabeth_Smart 1 660 | Chita_Rivera 2 Jose_Luis_Chilavert 1 661 | Chris_Andrews 1 Claire_De_Gryse 1 662 | Chris_Andrews 1 Elinor_Caplan 2 663 | Chris_Byrd 1 David_Alpay 1 664 | Chris_Byrd 1 Lawrence_Di_Rita 1 665 | Chris_Noth 1 Frank_Sinatra 1 666 | Christian_Longo 2 Dustan_Mohr 1 667 | Christiane_Wulff 1 Paul_Schrader 1 668 | Christine_Ebersole 2 Vincent_Cianci_Jr 1 669 | Christine_Todd_Whitman 2 Peter_Rasch 1 670 | Christine_Todd_Whitman 5 Neil_Goldman 1 671 | Christopher_Russell 1 Rosario_Dawson 1 672 | Cindy_Taylor 1 Melissa_Joan_Hart 1 673 | Claire_Danes 1 Mitchell_Potter 1 674 | Claire_De_Gryse 1 Jim_Parque 1 675 | Claire_De_Gryse 1 Soenarno 1 676 | Claire_Leger 2 Elin_Nordegren 2 677 | Coleen_Rowley 1 Courtney_Love 1 678 | Colin_Campbell 1 Matt_Walters 1 679 | Colleen_OClair 1 Kim_Hong-gul 1 680 | Colleen_Ryan 1 Noah_Wyle 2 681 | Conchita_Martinez 1 Moby 1 682 | Cora_Cambell 1 Todd_Petit 1 683 | Courtney_Love 1 Tsutomu_Takebe 1 684 | 
Cristina_Torrens_Valero 1 Damarius_Bilbo 1 685 | Cristina_Torrens_Valero 1 Etta_James 1 686 | Damarius_Bilbo 1 Janis_Ruth_Coulter 1 687 | Damarius_Bilbo 1 Sheila_Taormina 1 688 | Dan_Ackroyd 1 Nikki_Teasley 1 689 | Dan_Bylsma 1 Jim_Parque 1 690 | Dan_Bylsma 1 Scott_Yates 1 691 | Daniel_Comisso_Urdaneta 1 Jean-Marc_de_La_Sabliere 1 692 | Daniel_Montgomery 1 Hassan_Wirajuda 1 693 | Daniel_Zelman 1 David_Surrett 1 694 | Daniele_Nardello 1 Peter_Hartz 1 695 | Dariusz_Michalczewski 1 Julien_Boutter 1 696 | Darvis_Patton 1 Estelle_Morris 1 697 | Darvis_Patton 1 Gordon_Lightfoot 1 698 | Darvis_Patton 1 Tammy_Lynn_Michaels 1 699 | Dave_Johnson 1 Hank_Aaron 1 700 | David_Braley 1 Gerard_Tronche 1 701 | David_Kelley 1 Gerald_Calabrese 1 702 | David_Shayler 1 Peter_Mullan 1 703 | David_Surrett 1 Oracene_Williams 1 704 | Dean_Sheremet 1 Marc_Racicot 1 705 | Denise_van_Outen 1 Kevin_Satterfield 1 706 | Dennis_Kozlowski 2 James_Ivory 1 707 | Dennis_Kozlowski 2 Regina_Ip 1 708 | Dennis_Kozlowski 2 Wendy_Selig 1 709 | Denzel_Washington 4 Sung_Hong_Choi 1 710 | Derrick_Battie 1 Lisa_Leslie 1 711 | Diane_Green 1 Ruth_Christofferson 1 712 | Diane_Green 1 Tim_Salmon 1 713 | Dick_Armey 1 Tom_Glavine 2 714 | Dirk_Kempthorne 1 George_Tenet 2 715 | Dominique_de_Villepin 13 Hanns_Schumacher 1 716 | Don_Carcieri 1 Jane_Rooney 1 717 | Don_King 1 Philip_Zalewski 1 718 | Don_King 1 Tab_Baldwin 1 719 | Don_Lake 1 Elena_Dementieva 1 720 | Donald_Keck 1 Julia_Ormond 1 721 | Donald_Pettit 3 Michel_Kratochvil 1 722 | Donald_Rumsfeld 40 Tamika_Catchings 1 723 | Donna_Walker 1 Phil_Bredesen 1 724 | Eddie_Sutton 1 George_Maxwell_Richards 1 725 | Edith_Masai 1 Idi_Amin 1 726 | Edward_Belvin 1 William_Harrison 1 727 | Edward_Egan 1 Jim_Abbott 1 728 | Edward_James_Olmos 1 Jesse_Harris 3 729 | Edward_Seaga 1 Fatmir_Limaj 1 730 | Elaine_Chao 1 Wilbert_Elki_Meza_Majino 1 731 | Elena_Dementieva 1 Wilton_Gregory 1 732 | Elin_Nordegren 1 Fredric_Seaman 1 733 | Elin_Nordegren 2 Ghassan_Elashi 1 734 | Elinor_Caplan 1 Henning_Scherf 1 735 | Eliott_Spitzer 1 Jerry_Bruckheimer 1 736 | Elisabeth_Welch 1 Francisco_Maturana 1 737 | Elizabeth_Regan 1 Pierre_Boulanger 2 738 | Elliott_Mincberg 1 Gene_Orza 1 739 | Enrica_Fico 1 Erwin_Abdullah 1 740 | Eric_Dubin 1 Yang_Pao-yu 1 741 | Eric_Lindros 1 Melvin_Talbert 1 742 | Eric_Lindros 1 Sterling_Hitchcock 1 743 | Eric_Snow 1 Robert_McKee 1 744 | Fabiola_Zuluaga 1 Rose_Linkins 1 745 | Farouk_Kaddoumi 1 Gina_Gershon 1 746 | Farouk_Kaddoumi 1 Will_Smith 2 747 | Felipe_Perez_Roque 2 Lloyd_Ward 2 748 | Festus_Mogae 1 Neil_Moritz 1 749 | Filip_De_Winter 1 Stuart_Townsend 1 750 | Filip_De_Winter 1 Sue_Grafton 1 751 | Flavia_Pennetta 1 Mikulas_Dzurinda 2 752 | Flor_Montulo 1 Steve_Blankenship 1 753 | Floyd_Mayweather 1 Lou_Lang 1 754 | Fran_Drescher 2 William_Martin 1 755 | Francis_Ricciardone 1 Gene_Sauers 1 756 | Franco_Frattini 1 Gholamreza_Aghazadeh 1 757 | Frank_Sinatra 1 Vicente_Fox 23 758 | Frank_Van_Ecke 1 Gary_Gero 1 759 | Fred_Durst 1 Patricia_Phillips 1 760 | Fred_Durst 1 Rose_Linkins 1 761 | Fred_Durst 1 Tara_Reid 1 762 | Frederick_Madden 1 Luis_Ernesto_Derbez_Bautista 3 763 | Gary_Leon_Ridgway 1 Lisa_Raymond 1 764 | Gary_Leon_Ridgway 1 Phillip_Seymor_Hoffmann 1 765 | Gary_Sayler 1 Nila_Ferran 1 766 | George_Foreman 1 George_Lucas 1 767 | Georgia_Giddings 1 Johnny_Carson 1 768 | Gerard_Tronche 1 Jana_Pittman 1 769 | Gerhard_Schroeder 3 Izzat_Ibrahim 1 770 | Ghassan_Elashi 1 Jesse_Ventura 2 771 | Ghassan_Elashi 1 Peter_Rasch 1 772 | Gideon_Yago 1 Robert_Gordon_Card 1 773 | Gina_Gershon 1 
Stacey_Dales-Schuman 1 774 | Glenn_Tilton 1 Victor_Hanescu 1 775 | Gordon_Brown 12 Monica_Gabrielle 1 776 | Gordon_Campbell 1 Pierre_Boulanger 2 777 | Gordon_Lightfoot 1 Janez_Drnovsek 1 778 | Grace_Kelly 1 Leslie_Moonves 1 779 | Greg_Kinnear 1 Tim_Salmon 1 780 | Gregory_Geoffroy 2 Paul_Burrell 11 781 | Gregory_Hines 1 Mary_Frances_Seiter 1 782 | Gregory_Peck 1 John_Lawrence 1 783 | Gregory_Peck 1 John_Lynch 1 784 | Guillaume_Cannet 1 Mark_Mulder 1 785 | Guillaume_Cannet 1 Mohammad_Khatami 5 786 | Gunilla_Backman 1 Jennifer_Granholm 1 787 | Gunilla_Backman 1 Ray_Sherman 1 788 | Gustavo_Franco 1 Melissa_Joan_Hart 1 789 | Gwendal_Peizerat 2 Joe_Nichols 2 790 | Gwyneth_Paltrow 5 Mike_Richter 1 791 | Halbert_Fillinger 1 Rachel_Hunter 1 792 | Hank_Aaron 1 Ron_Howard 2 793 | Harland_Braun 1 Lisa_Leslie 1 794 | Harvey_Fierstein 1 Mike_Slive 1 795 | Helene_Eksterowicz 1 Mario_Lemieux 1 796 | Helio_Castroneves 1 Joxel_Garcia 1 797 | Hermann_Maier 1 Robin_Williams 1 798 | Howard_Smith 2 Tim_Pawlenty 1 799 | Hubie_Brown 1 Yoon_Jeong_Cho 1 800 | Hunter_Bates 1 James_Williams 1 801 | Hussam_Mohammed_Amin 1 Mike_Smith 1 802 | Hussam_Mohammed_Amin 1 Princess_Hisako 1 803 | Hutomo_Mandala_Putra 1 Thomas_Mesereau_Jr 1 804 | Iban_Mayo 2 Yves_Brodeur 1 805 | Imad_Moustapha 1 Will_Ofenheusle 1 806 | Ion_Tiriac 1 Lawrence_Roberts 1 807 | Irina_Yatchenko 1 Pedro_Solbes 4 808 | Isabelle_Huppert 1 Pedro_Solbes 1 809 | Isaiah_Washington 2 Lee_Baca 1 810 | Islam_Karimov 1 Shireen_Amir_Begum 1 811 | Ismail_Cem 1 Seth_Gorney 1 812 | Ismail_Merchant 1 Joseph_Deiss 1 813 | Ivan_Lee 1 Paul-Henri_Mathieu 3 814 | Izzat_Ibrahim 1 Tina_Sinatra 1 815 | Jack_Welch 1 Rick_Dinse 2 816 | Jack_Welch 1 William_Umbach 1 817 | Jalal_Talabani 1 Jean-Marc_Olive 1 818 | James_Franco 2 Lokendra_Bahadur_Chand 1 819 | James_Ivory 1 Paul_Martin 7 820 | James_Lockhart 1 Lisa_Girman 1 821 | Jan_Paul_Miller 1 Lynne_Slepian 1 822 | Jan_Ullrich 1 William_Rehnquist 1 823 | Janet_Chandler 1 Zoe_Ball 1 824 | Janis_Ruth_Coulter 1 Laurent_Woulzy 1 825 | Jason_Clermont 1 Vladimir_Ustinov 1 826 | Jason_Kapono 1 Patricia_Medina 1 827 | Jason_Mewes 1 Will_Smith 1 828 | Jason_White 1 Wolfgang_Clement 1 829 | Jean-Pierre_Raffarin 2 Robert_Horan 2 830 | Jean-Pierre_Raffarin 6 Paul_Kariya 1 831 | Jeane_Kirkpatrick 1 Richard_Carl 1 832 | Jeane_Kirkpatrick 1 Tamika_Catchings 1 833 | Jeanne_Anne_Schroeder 1 Yoriko_Kawaguchi 13 834 | Jeannette_Biedermann 1 Roy_Moore 6 835 | Jeffrey_Pfeffer 1 Terry_Lynn_Barton 1 836 | Jen_Bice 1 Lily_Tomlin 2 837 | Jen_Bice 1 Martha_Burk 2 838 | Jenna_Elfman 1 Mack_Brown 1 839 | Jenna_Elfman 1 Steffi_Graf 5 840 | Jenna_Elfman 1 Willie_Wilson 1 841 | Jennifer_Keller 4 Paula_Abdul 1 842 | Jennifer_Thompson 1 Serge_Melac 1 843 | Jeremy_Greenstock 7 Jim_Flaherty 1 844 | Jerry_Bruckheimer 1 John_Blaney 1 845 | Jerry_Jones 1 Robert_Woody_Johnson 1 846 | Jesse_Ventura 1 Reggie_Lewis 1 847 | Jesse_Ventura 3 Roy_Romanow 1 848 | Jessica_Biel 1 Jim_Freudenberg 1 849 | Jim_Calhoun 1 Perry_Farrell 1 850 | Jim_Cantalupo 1 Joe_Pantoliano 1 851 | Jim_Hahn 4 Vaclav_Havel 3 852 | Jim_Hahn 4 Vicente_Fernandez 2 853 | Jim_Letten 1 Victor_Hanescu 1 854 | Joaquin_Phoenix 1 Kajsa_Bergqvist 1 855 | Joe_Pantoliano 1 Robin_Tunney 1 856 | John_Belushi 1 Mahima_Chaudhari 1 857 | John_Blaney 2 Sam_Brownback 1 858 | John_Franco 1 Teri_Files 1 859 | John_Garamendi 2 Peter_Goldmark 1 860 | John_Kerr 1 Susan_Whelan 1 861 | John_Lynch 1 Stefano_Gabbana 1 862 | John_Mayer 3 Miguel_Jimenez 1 863 | John_McCallum 1 Mark_Andrew 1 864 | John_Rowland 2 
Robert_Lee_Yates_Jr 1 865 | John_Walsh 2 Paul_Hogan 2 866 | Johnny_Depp 2 Scott_Dickson 1 867 | Jorge_Arce 1 Samuel_Waksal 3 868 | Jorge_Quiroga 1 Ramona_Rispton 1 869 | Jorge_Quiroga 1 Surakait_Sathirathai 1 870 | Jose_Bove 1 Michalis_Chrisohoides 1 871 | Jose_Bove 1 Princess_Diana 1 872 | Jose_Luis_Chilavert 1 Sofyan_Dawood 1 873 | Juan_Carlos 1 Patricia_Phillips 1 874 | Judy_Dean 1 Kristanna_Loken 1 875 | Julia_Ormond 1 Randy_Dryer 1 876 | Julio_Cesar_Chavez 1 Marc_Racicot 1 877 | Kaoru_Hasuike 1 Tom_Foy 1 878 | Karen_Allen 1 LeRoy_Millette_Jr 1 879 | Karen_Allen 1 Rick_Bragg 1 880 | Karl-Heinz_Rummenigge 1 Wayne_Newton 1 881 | Katie_Boone 1 Terri_Clark 1 882 | Keith_Osik 1 Robert_Nardelli 1 883 | Kent_Robinson 1 Marquier_Montano_Contreras 1 884 | Kevin_Gil 1 Mary_Bono 1 885 | Kevin_Hearn 1 Todd_Haynes 3 886 | Khader_Rashid_Rahim 1 Martha_Burk 1 887 | Khader_Rashid_Rahim 1 Michael_Arif 1 888 | Kim_Yong-il 2 Terri_Clark 1 889 | Koichi_Tanaka 1 Sterling_Hitchcock 1 890 | Kosuke_Kitajima 2 Qazi_Hussain_Ahmed 1 891 | Kristin_Davis 3 Steven_Van_Zandt 1 892 | Kurt_Hellstrom 1 Manuel_Pellegrini 1 893 | Laszlo_Kovacs 1 Michael_Arif 1 894 | Laura_Elena_Harring 1 Rowan_Williams 1 895 | Laurel_Clark 1 Rahul_Dravid 1 896 | Lawrence_Di_Rita 1 Tim_Allen 2 897 | Lawrence_Roberts 1 Malik_Mahmud 1 898 | Lea_Fastow 1 Todd_Parrott 1 899 | Lee_Baca 1 Pat_Summerall 1 900 | Leisel_Jones 1 Steven_Hatfill 1 901 | Lena_Katina 1 Queen_Noor 1 902 | Leonard_Glick 1 Terrence_Trammell 1 903 | Leonard_Schrank 1 Ozzy_Osbourne 1 904 | Leslie_Wiser_Jr 1 Mireya_Moscoso 1 905 | Lisa_Murkowski 1 Svetlana_Belousova 1 906 | Lisa_Raymond 2 Robert_Redford 7 907 | Ludwig_Ovalle 1 Mauricio_Macri 1 908 | Luis_Pujols 1 Rohman_al-Ghozi 1 909 | Makhdoom_Amin_Fahim 1 Tang_Jiaxuan 5 910 | Malcolm_Wild 1 Owen_Wilson 2 911 | Mamdouh_Habib 1 Polona_Bas 1 912 | Manijeh_Hekmat 1 Richard_Carl 1 913 | Manijeh_Hekmat 1 Se_Hyuk_Joo 1 914 | Marc_Anthony 1 Vince_Vaughan 1 915 | Margie_Puente 1 Rafiq_Hariri 1 916 | Maria_Callas 1 Melchor_Cob_Castro 1 917 | Mario_Lemieux 1 Pedro_Solbes 4 918 | Marissa_Jaret_Winokur 1 Vincent_Sombrotto 1 919 | Mark_Dacey 2 Russ_Ortiz 1 920 | Mark_Martin 1 Vladimir_Spidla 2 921 | Mark_Mishkin 1 Nick_Reilly 1 922 | Mark_Mishkin 1 Shawn_Marion 1 923 | Marricia_Tate 1 Rita_Moreno 2 924 | Mary_Landrieu 1 Robert_Gordon_Card 1 925 | Maryn_McKenna 1 Shawn_Marion 1 926 | Maryn_McKenna 1 Tim_Pawlenty 1 927 | Masum_Turker 1 Norman_Mineta 1 928 | Matt_Dillon 1 Regina_Ip 1 929 | Matthias_Sammer 1 Robbie_Naish 1 930 | Melissa_Etheridge 2 Teri_Files 1 931 | Michael_J_Sheehan 1 Michael_Smith_Foster 1 932 | Michael_Linscott 1 Robin_Tunney 1 933 | Michael_Phelps 5 Pat_Burns 2 934 | Michael_Sheehan 1 Vincent_Cianci_Jr 1 935 | Michael_Smith_Foster 1 Rupert_Murdoch 1 936 | Michelle_Yeoh 5 Shoshannah_Stern 1 937 | Miguel_Aldana_Ibarra 1 Mikhail_Gorbachev 1 938 | Miguel_Jimenez 1 Paris_Hilton 1 939 | Mike_Helton 2 Reggie_Lewis 1 940 | Mike_Leach 1 Thomas_Haeggstroem 1 941 | Mike_Maroth 1 Stefan_Holm 1 942 | Mike_Slive 1 Paul_Sarbanes 3 943 | Mikhail_Youzhny 2 Vince_Dooley 1 944 | Mikulas_Dzurinda 2 Shirley_Jones 1 945 | Miles_Stewart 1 Yasushi_Akashi 1 946 | Mirela_Manjani 1 Robert_Woody_Johnson 1 947 | Mitchell_McLaughlin 1 Timothy_Goebel 1 948 | Mohammed_Baqir_al-Hakim 3 Neil_Goldman 1 949 | Mona_Ayoub 1 Todd_Parrott 1 950 | Mufti_Mohammad_Syed 1 Ralf_Schumacher 4 951 | Nathalie_Gagnon 1 Rupert_Grint 2 952 | Neil_Goldman 1 Niall_Connolly 1 953 | Nick_Reilly 1 Tassos_Papadopoulos 1 954 | Nicolas_Sarkozy 1 Shanna_Zolman 1 955 | 
Norm_Coleman 2 Osmond_Smith 1 956 | Norman_Mailer 1 Steven_Curtis_Chapman 1 957 | Oliver_Neuville 1 Shawn_Marion 1 958 | Park_Jie-won 1 Steve_Patterson 1 959 | Patty_Sheehan 1 William_Nessen 1 960 | Paul_Hogan 1 Thomas_Watjen 1 961 | Paul_Hogan 1 William_Martin 1 962 | Paul_Kariya 1 Shannon_OBrien 2 963 | Paul_Martin 1 Stephanie_Cohen_Aloro 1 964 | Paul_Michael_Daniels 1 Simon_Chalk 1 965 | Paul_Schrader 1 Teri_Files 1 966 | Paula_Prentiss 1 Thomas_Mesereau_Jr 1 967 | Pedro_Martinez 1 Stephen_Glassroth 1 968 | Peter_Fitzgerald 1 Zelma_Novelo 1 969 | Phil_Jackson 1 Richard_Penniman 1 970 | Phillipe_Comtois 1 Richard_Carl 1 971 | Pierre_Boulanger 1 Shania_Twain 1 972 | Princess_Diana 1 Steffi_Graf 2 973 | Priscilla_Presley 1 Sergey_Lavrov 5 974 | Priscilla_Presley 1 Victoria_Beckham 3 975 | Richard_Butler 1 Victoria_Beckham 1 976 | Richard_Pennington 1 Stacy_Nelson 1 977 | Rick_Caruso 1 Steve_Wariner 1 978 | Rick_Perry 6 Svetlana_Koroleva 1 979 | Ricky_Cottrill 1 Sananda_Maitreya 1 980 | Robert_Flodquist 1 Winona_Ryder 11 981 | Rod_Stewart 1 Tom_Scully 1 982 | Rod_Stewart 3 Se_Hyuk_Joo 1 983 | Rod_Thorn 1 Yves_Brodeur 1 984 | Ron_Howard 1 Tim_Pawlenty 1 985 | Ronaldo_Luis_Nazario_de_Lima 1 Samuel_Waksal 2 986 | Sami_Al-Arian 1 Tommy_Lasorda 1 987 | Sarah_Canale 1 Steven_Curtis_Chapman 1 988 | Sasha_Cohen 1 Valery_Giscard_dEstaing 5 989 | Scott_Dalton 1 Tamara_Mowry 1 990 | Sergio_Castellitto 1 Steve_Lavin 5 991 | Seth_Gorney 1 Wilton_Gregory 1 992 | Shane_Mosley 1 Stacey_Dales-Schuman 1 993 | Sheila_Taormina 1 Stephan_Eberharter 1 994 | Stefano_Gabbana 1 Tang_Jiaxuan 3 995 | Steve_Wariner 1 Toshi_Izawa 1 996 | Steve_Zahn 1 Tab_Baldwin 1 997 | Susan_Whelan 1 Wolfgang_Schneiderhan 1 998 | Takeo_Fukui 1 Will_Ofenheusle 1 999 | Tamara_Mowry 1 Zach_Parise 1 1000 | Tatiana_Kennedy_Schlossberg 1 Thomas_Watjen 1 1001 | Todd_Petit 1 Vicente_Fernandez 3 1002 | --------------------------------------------------------------------------------
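
Note on the pairs file above: it is an LFW-style pairs list, where three-token records ("Name idx1 idx2") are matched pairs of the same identity and four-token records ("Name1 idx1 Name2 idx2") are mismatched pairs of two identities. As a minimal illustration (this is a sketch, not the repo's eval_lfw.py; the helper names read_pairs and lfw_img_path are hypothetical, and the path layout <root>/<Name>/<Name>_<idx zero-padded to 4>.jpg assumes the standard LFW directory structure), such a file can be parsed as follows:

    # Sketch: parse an LFW-style pairs file into (path_a, path_b, is_same) tuples.
    # Assumptions: 3-token lines are matched pairs, 4-token lines are mismatched
    # pairs, images live at <lfw_root>/<Name>/<Name>_<idx:04d>.jpg, and any 1- or
    # 2-token header lines (fold/pair counts) can be skipped.
    import os

    def lfw_img_path(lfw_root, name, idx):
        # Standard LFW file naming, e.g. Quincy_Jones/Quincy_Jones_0001.jpg
        return os.path.join(lfw_root, name, '%s_%04d.jpg' % (name, int(idx)))

    def read_pairs(pairs_path, lfw_root):
        pairs = []
        with open(pairs_path) as f:
            for line in f:
                tokens = line.split()
                if len(tokens) == 3:      # matched pair: same identity
                    name, i, j = tokens
                    pairs.append((lfw_img_path(lfw_root, name, i),
                                  lfw_img_path(lfw_root, name, j), True))
                elif len(tokens) == 4:    # mismatched pair: two identities
                    n1, i, n2, j = tokens
                    pairs.append((lfw_img_path(lfw_root, n1, i),
                                  lfw_img_path(lfw_root, n2, j), False))
                # header lines with other token counts are ignored
        return pairs

For example, read_pairs('lfw/data/pairsDevTest.txt', '/path/to/lfw') would yield one (path_a, path_b, is_same) tuple per record, which a verification script can feed to a feature extractor and threshold to score pair accuracy.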