├── .gitignore ├── README.md ├── configs ├── cifar10_cnn.yaml ├── cifar10_resnet.yaml ├── hello.yaml ├── hpo_cifar10_cnn.yaml └── imagenet_resnet.yaml ├── data ├── __init__.py ├── cifar10.py ├── dummy.py └── imagenet.py ├── hpo ├── mnist-lenet5 │ ├── README.md │ ├── genetic.py │ └── source │ │ ├── .gitignore │ │ └── mnist.py └── sin │ ├── README.md │ ├── genetic_example.py │ ├── grid_example.py │ ├── random_example.py │ └── source │ └── sin.py ├── hpo_train.py ├── logs ├── README.md └── reference │ ├── cifar-cnn-25785469.out │ ├── cifar-resnet-25792238.out │ ├── hpo-cifar-cnn.out │ ├── hpo-mnist-lenet5.out │ ├── hpo-sin-genetic.out │ ├── hpo-sin-grid.out │ ├── hpo-sin-random.out │ └── mnist-lenet5.out ├── models ├── __init__.py ├── cnn.py └── resnet.py ├── notebooks └── Analysis.ipynb ├── scripts ├── cifar_cnn.sh ├── cifar_resnet.sh ├── hpo_cifar_cnn.sh ├── hpo_mnist_lenet5.sh ├── hpo_sin_genetic.sh ├── hpo_sin_grid.sh ├── hpo_sin_random.sh └── mnist_lenet5.sh ├── train.py └── utils ├── __init__.py ├── callbacks.py ├── device.py └── optimizers.py /.gitignore: -------------------------------------------------------------------------------- 1 | *.ipynb_checkpoints 2 | *.log 3 | logs/*.out 4 | *.gz 5 | run 6 | *__pycache__* 7 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # SC19 Tutorial: Deep Learning At Scale 2 | 3 | This repository contains the material for the SC19 tutorial: 4 | *Deep Learning at Scale*. 5 | 6 | Here you will find links to slides and resources as well as all the code 7 | for the hands-on sessions. 8 | It contains specifications for a few datasets, a couple of CNN models, and 9 | all the training code to enable training the models in a distributed fashion 10 | using Horovod. 11 | 12 | As part of the tutorial, you will 13 | 1. Train a simple CNN to classify images from the CIFAR10 dataset on a single node 14 | 2. 
Train a ResNet model to classify the same images on multiple nodes 15 | 16 | **Contents** 17 | * [Links](https://github.com/NERSC/sc19-dl-tutorial#links) 18 | * [Installation](https://github.com/NERSC/sc19-dl-tutorial#installation) 19 | * [Navigating the repository](https://github.com/NERSC/sc19-dl-tutorial#navigating-the-repository) 20 | * [Hands-on walk-through](https://github.com/NERSC/sc19-dl-tutorial#hands-on-walk-through) 21 | * [Single node training example](https://github.com/NERSC/sc19-dl-tutorial#single-node-training-example) 22 | * [Multi-node training example](https://github.com/NERSC/sc19-dl-tutorial#multi-node-training-example) 23 | * [Advanced example: multi-node ResNet50 on ImageNet-100](https://github.com/NERSC/sc19-dl-tutorial#advanced-example-multi-node-resnet50-on-imagenet-100) 24 | * [Code references](https://github.com/NERSC/sc19-dl-tutorial#code-references) 25 | 26 | ## Links 27 | 28 | Presentation slides: https://drive.google.com/drive/folders/1KJm08Ry4qJXOl19MAu2Ao1t_fRNaMwZn?usp=sharing 29 | 30 | NERSC JupyterHub: https://jupyter.nersc.gov 31 | 32 | Join Slack: https://join.slack.com/t/nersc-dl-tutorial/shared_invite/enQtODMzMzQ1MTI5OTUyLWNlNzg2MjBkODIwODRlNTBkM2M4MjI0ZDk2ZDU4N2M3NjU5MDk1NTRmMTFhNWRkMTk0NGNhMzQ3YjU2NzU5NTk 33 | 34 | ## Installation 35 | 36 | 1. Start a terminal on Cori, either via ssh or from the Jupyter interface. 37 | 2. Clone the repository using git:\ 38 | `git clone https://github.com/NERSC/sc19-dl-tutorial.git` 39 | 40 | That's it! The rest of the software (Keras, TensorFlow) is pre-installed on Cori 41 | and loaded via the scripts used below. 42 | 43 | ## Navigating the repository 44 | 45 | **`train.py`** - the main training script which can be steered with YAML 46 | configuration files. 47 | 48 | **`data/`** - folder containing the specifications of the datasets. 
Each dataset 49 | has a corresponding name which is mapped to the specification in `data/__init__.py`. 50 | 51 | **`models/`** - folder containing the Keras model definitions. Again, each model 52 | has a name which is interpreted in `models/__init__.py`. 53 | 54 | **`configs/`** - folder containing the configuration files. Each 55 | configuration specifies a dataset, a model, and all relevant configuration 56 | options (with some exceptions, like the number of nodes, which is instead 57 | specified to SLURM on the command line). 58 | 59 | **`scripts/`** - contains an environment setup script and some SLURM scripts 60 | for easily submitting the example jobs to the Cori batch system. 61 | 62 | **`utils/`** - contains additional useful code for the training script, e.g. 63 | custom callbacks, device configuration, and optimizer logic. 64 | 65 | **`hpo/`** - contains READMEs and examples for the HPO hands-on sessions. 66 | 67 | ## Hands-on walk-through 68 | 69 | Go through the following steps as directed by the tutorial presenters. 70 | Discuss the questions with your neighbors. 71 | 72 | ### Single node training example 73 | 74 | We will start with single-node training of a simple CNN to classify images 75 | from the CIFAR10 dataset. 76 | 77 | 1. Take a look at the simple CNN model defined here: [models/cnn.py](models/cnn.py). 78 | Consider the following things: 79 | * Note how the model is constructed as a sequence of layers 80 | * Note the structure of alternating convolutions, pooling, and dropout 81 | * Identify the _classifier head_ of the model; the part which computes the 82 | class probabilities. 83 | * *Can you figure out what the `Flatten()` layer does here, 84 | and why it is needed?* 85 | 86 | 2. Now take a look at the dataset code for CIFAR10: [data/cifar10.py](data/cifar10.py) 87 | * Keras has a convenient API for CIFAR10 which will automatically download 88 | the dataset for you. 
89 | * Ask yourself: *why do we scale the dataset by 1/255?* 90 | * Note where we convert the labels (integers) to categorical class vectors. 91 | Ask yourself: *why do we have to do this?* 92 | * *What kinds of data augmentation are we applying?* 93 | 94 | 3. Next, take a look at the training script: [train.py](train.py). 95 | * Identify the part where we retrieve the dataset. 96 | * Identify the section where we retrieve the CNN model, the optimizer, and 97 | compile the model. 98 | * Now identify the part where we do the actual training. 99 | 100 | 4. Finally, look at the configuration file: [configs/cifar10_cnn.yaml](configs/cifar10_cnn.yaml). 101 | * YAML allows you to express configurations in a rich, human-readable, hierarchical structure. 102 | * Identify what you would edit to modify the optimizer, learning rate, batch size, etc. 103 | 104 | 5. Now we are ready to submit our training job to the Cori batch system. 105 | We have provided SLURM scripts to make this as simple as possible. 106 | To run the simple CNN training on CIFAR10 on a single KNL node, simply do:\ 107 | `sbatch scripts/cifar_cnn.sh` 108 | * **Important:** the first time you run a CIFAR10 example, it will 109 | automatically download the dataset. If you have more than one job attempting 110 | this download simultaneously, it will likely fail. 111 | 112 | 6. Check on the status of your job by running `sqs`. 113 | Once the job starts running, you should see the output start to appear in the 114 | SLURM log file `logs/cifar-cnn-*.out`. 115 | 116 | 7. When the job is finished, check the log to identify how well your model learned 117 | to solve the CIFAR10 classification task. For every epoch you should see the 118 | loss and accuracy reported for both the training set and the validation set. 119 | Take note of the best validation accuracy achieved. 
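As a minimal sketch of the preprocessing that step 2 asks about (plain NumPy with illustrative shapes, not the tutorial's actual code), scaling by 1/255 and one-hot encoding look like this:

```python
import numpy as np

# Stand-in batch of 8-bit images with integer labels, shaped like CIFAR10.
x = np.random.randint(0, 256, size=(4, 32, 32, 3)).astype('float32')
y = np.array([3, 0, 9, 3])

# Scaling by 1/255 maps pixel values into [0, 1]: inputs on a small, consistent
# scale play well with standard weight initializations and learning rates.
x /= 255.0

# categorical_crossentropy compares predictions against one probability vector
# per sample, so integer labels are expanded to one-hot vectors (this is what
# keras.utils.to_categorical does in data/cifar10.py).
n_classes = 10
y_onehot = np.eye(n_classes, dtype='float32')[y]

print(y_onehot.shape)  # (4, 10)
```

Each one-hot row has a single 1 in the column of the true class, which is exactly the shape the categorical loss expects.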
120 | 121 | ### Multi-node training example 122 | 123 | To demonstrate scaling to multiple nodes, we will switch to a larger, more complex 124 | ResNet model. This model can achieve higher accuracy than our simple CNN, but 125 | it is quite a bit slower to train. By parallelizing the training across nodes 126 | we should be able to achieve a better result than our simple CNN in a practical 127 | amount of time. 128 | 129 | 1. Check out the ResNet model code in [models/resnet.py](models/resnet.py). 130 | Note this is quite a bit more complex than the simple CNN! In fact the model 131 | code is broken into multiple functions for easy reuse. We provide here two 132 | versions of ResNet models: a standard ResNet50 (with 50 layers) and a smaller 133 | ResNet consisting of 26 layers. 134 | * Identify the identity block and conv block functions. *How many convolutional 135 | layers does each of these have?* 136 | * Identify the functions that build the ResNet50 and the ResNetSmall. Given how 137 | many layers are in each block, *see if you can confirm how many layers (conv 138 | and dense) are in the models*. **Hint:** we don't normally count the 139 | convolution applied to the shortcuts. 140 | 141 | 2. Inspect the optimizer setup in [utils/optimizers.py](utils/optimizers.py). 142 | * Note how we scale the learning rate (`lr`) according to the number of 143 | processes (ranks). 144 | * Note how we construct our optimizer and then wrap it in the Horovod 145 | DistributedOptimizer. 146 | 147 | 3. Inspect [train.py](train.py) once again. 148 | * Identify the `init_workers` function where we initialize Horovod. 149 | Note where this is invoked in the `main()` function (right away). 150 | * Identify where we set up our training callbacks. 151 | * *Which callback ensures we have consistent model weights at the start of training?* 152 | * Identify the callbacks responsible for the learning rate schedule (warmup and decay). 153 | 154 | That's mostly it for the code. 
Note that, in general, when training in a distributed fashion 155 | you might want to use more sophisticated data handling, e.g. to ensure different 156 | workers are always processing different samples of your data within a training 157 | epoch. In this case we aren't worrying about that and are, for simplicity, 158 | relying on the independent random shuffling of the data by each worker as well 159 | as the random data augmentation. 160 | 161 | 4. (**optional**) To gain an appreciation for the speedup of training on 162 | multiple nodes, you can first try to train the ResNet model on a single node. 163 | Adjust the configuration in [configs/cifar10_resnet.yaml](configs/cifar10_resnet.yaml) 164 | to train for just 1 epoch and then submit the job with\ 165 | `sbatch -N 1 scripts/cifar_resnet.sh` 166 | 167 | 5. Now we are ready to train our ResNet model on multiple nodes using Horovod 168 | and MPI! If you changed the config to 1 epoch above, be sure to change it back 169 | to 32 epochs for this step. To launch the ResNet training on 4 nodes, do:\ 170 | `sbatch -N 4 scripts/cifar_resnet.sh` 171 | 172 | 6. As before, watch the log file (`logs/cifar-resnet-*.out`) when the job starts. 173 | You'll see some printouts from every worker; others are printed only from rank 0. 174 | 175 | 7. When the job is finished, look at the log and compare to the simple CNN case 176 | above. If you ran step 4, compare the time to train one epoch between single-node 177 | and multi-node. *Did your model manage to converge to a better validation accuracy 178 | than the simple CNN?* 179 | 180 | Now that you've finished the main tutorial material, try playing with the code 181 | and/or configuration to see the effect on the training results. You can try 182 | things like: 183 | * Changing the optimizer (search for Keras optimizers on Google). 184 | * Changing the nominal learning rate, number of warmup epochs, or decay schedule. 185 | * Changing the learning rate scaling (e.g. 
try "sqrt" scaling instead of linear) 186 | 187 | Most of these things can be changed entirely within the configuration. 188 | See [configs/imagenet_resnet.yaml](configs/imagenet_resnet.yaml) for examples. 189 | 190 | ### Hyperparameter Optimization 191 | 192 | The following examples will walk you through how to use distributed 193 | hyperparameter optimization with the Cray HPO package. 194 | 195 | For documentation reference, see the 196 | [Cray HPO documentation](https://cray.github.io/crayai/hpo/hpo.html). 197 | 198 | #### Example: Hello World HPO 199 | 200 | This is a set of quick-running examples that you may view and run to get 201 | acquainted with the Cray HPO interface. 202 | 203 | Further instructions are in: `hpo/sin/README.md` 204 | 205 | #### Example: Applying distributed HPO to the CNN CIFAR10 example 206 | 207 | In this example, we will be optimizing the hyperparameters of the single-node CNN 208 | training example from before. 209 | 210 | 1. Take a look at `train.py` again and follow the `--hpo` argument. Note that the 211 | loss value is emitted when this argument is set, which is necessary to 212 | communicate the figure of merit back to the Cray HPO optimizer. Additionally, 213 | inspect the `configs/hpo_cifar10_cnn.yaml` configuration, and note that the 214 | number of epochs has been scaled down significantly so that this example can 215 | run to completion in a reasonable amount of time. 216 | 217 | 2. Take a look at the `hpo_train.py` HPO driver. This script sets up the 218 | evaluator, hyperparameter search space, and optimizer. Make sure you 219 | understand the HPO code by trying to answer these questions: 220 | 221 | * What hyperparameters are being optimized? 222 | * Which optimizer is being used for this example? 223 | * How many evaluations of `train.py` will this optimization run? 224 | 225 | 3. Now we are ready to run our hyperparameter optimization. 
As before, 226 | a SLURM script is provided to run the HPO driver on 8 KNL nodes: 227 | 228 | ``` 229 | sbatch scripts/hpo_cifar_cnn.sh 230 | ``` 231 | 232 | This HPO run should take a while. Feel free to move on to the next example 233 | while this runs. 234 | 235 | 4. Take a look at your job output file (`*.out`) in the `logs/` directory. Try 236 | to identify the following: 237 | 238 | * By how much did your figure of merit improve? 239 | * What improved hyperparameter values were found? 240 | 241 | 242 | #### Example: Optimizing the topology of LeNet-5 243 | 244 | This is an example of using Cray HPO to optimize hyperparameters for LeNet-5 245 | trained on the MNIST handwritten digits dataset. 246 | 247 | This example is unique because the figure of merit is the elapsed training time 248 | until a threshold accuracy is reached, to minimize time-to-accuracy. 249 | Additionally, this example shows how one can optimize network topology along with 250 | other, more traditional hyperparameters. 
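The time-to-accuracy figure of merit can be sketched in a few lines; `train_step` below is a hypothetical stand-in for a real training step, not code from this repository:

```python
import random
import time

def train_step(step):
    """Hypothetical stand-in for one training step: returns a validation
    accuracy that improves with the step count (a real kernel would train
    a network here)."""
    return min(0.99, 0.5 + 0.01 * step + random.uniform(0.0, 0.005))

target_accuracy = 0.95  # the clock stops once validation accuracy reaches this
start = time.perf_counter()
steps = 0
for step in range(1, 1001):
    steps = step
    if train_step(step) >= target_accuracy:
        break
elapsed = time.perf_counter() - start

# The HPO driver minimizes whatever value follows 'FoM:' on stdout, so a
# time-to-accuracy kernel simply reports elapsed time instead of a loss.
print('FoM: %e' % elapsed)
```

Because the optimizer only sees the printed figure of merit, switching from minimizing loss to minimizing time-to-accuracy requires no change on the driver side.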
251 | 252 | Further instructions are in: `hpo/mnist-lenet5/README.md` 253 | 254 | ## Code references 255 | 256 | Keras ResNet50 official model: 257 | https://github.com/keras-team/keras-applications/blob/master/keras_applications/resnet50.py 258 | 259 | Horovod ResNet + ImageNet example: 260 | https://github.com/uber/horovod/blob/master/examples/keras_imagenet_resnet50.py 261 | 262 | CIFAR10 CNN and ResNet examples: 263 | https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py 264 | https://github.com/keras-team/keras/blob/master/examples/cifar10_resnet.py 265 | -------------------------------------------------------------------------------- /configs/cifar10_cnn.yaml: -------------------------------------------------------------------------------- 1 | output_dir: $SCRATCH/sc19-dl-tutorial/cifar-cnn-N${SLURM_JOB_NUM_NODES}-${SLURM_JOB_ID} 2 | 3 | data: 4 | name: cifar10 5 | 6 | model: 7 | name: cnn 8 | input_shape: [32, 32, 3] 9 | n_classes: 10 10 | dropout: 0.1 11 | 12 | optimizer: 13 | name: Adam 14 | lr: 0.001 15 | 16 | training: 17 | batch_size: 512 18 | n_epochs: 16 19 | lr_warmup_epochs: 5 20 | loss: categorical_crossentropy 21 | metrics: [accuracy] 22 | 23 | device: 24 | intra_threads: 33 25 | inter_threads: 1 26 | blocktime: 0 27 | -------------------------------------------------------------------------------- /configs/cifar10_resnet.yaml: -------------------------------------------------------------------------------- 1 | output_dir: $SCRATCH/sc19-dl-tutorial/cifar10-resnet-N${SLURM_JOB_NUM_NODES}-${SLURM_JOB_ID} 2 | 3 | data: 4 | name: cifar10 5 | 6 | model: 7 | name: resnet_small 8 | input_shape: [32, 32, 3] 9 | n_classes: 10 10 | 11 | optimizer: 12 | name: Adam 13 | lr: 0.0001 14 | lr_scaling: linear 15 | 16 | training: 17 | batch_size: 64 18 | n_epochs: 16 19 | lr_warmup_epochs: 5 20 | loss: categorical_crossentropy 21 | metrics: [accuracy] 22 | 23 | device: 24 | intra_threads: 32 25 | blocktime: 0 26 | 
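The `lr_scaling: linear` option in the `configs/cifar10_resnet.yaml` above reflects a common heuristic: scale the nominal learning rate with the number of workers. A rough sketch of the idea (the function and names are illustrative, not the repository's `utils/optimizers.py`):

```python
import math

def scale_lr(base_lr, n_ranks, scaling='linear'):
    """Illustrative learning-rate scaling by worker count."""
    if scaling == 'linear':
        # More ranks means a larger effective batch, so take
        # proportionally larger steps.
        return base_lr * n_ranks
    if scaling == 'sqrt':
        # A gentler alternative sometimes used in practice.
        return base_lr * math.sqrt(n_ranks)
    return base_lr  # no scaling

print(scale_lr(1e-4, 4, 'linear'))  # 4x the base rate
print(scale_lr(1e-4, 4, 'sqrt'))    # 2x the base rate
```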
-------------------------------------------------------------------------------- /configs/hello.yaml: -------------------------------------------------------------------------------- 1 | output_dir: $SCRATCH/sc19-dl-tutorial/hello-world 2 | 3 | data: 4 | name: dummy 5 | input_shape: [64, 64, 3] 6 | n_train: 1024 7 | n_valid: 1024 8 | 9 | model: 10 | name: cnn 11 | input_shape: [64, 64, 3] 12 | n_classes: 1 13 | 14 | optimizer: 15 | name: Adam 16 | lr: 0.001 17 | 18 | training: 19 | batch_size: 32 20 | n_epochs: 1 21 | loss: binary_crossentropy 22 | metrics: [accuracy] 23 | -------------------------------------------------------------------------------- /configs/hpo_cifar10_cnn.yaml: -------------------------------------------------------------------------------- 1 | # Scaled down version of CNN CIFAR10 for quicker evaluations 2 | description: 'SHORT CNN CIFAR10' 3 | output_dir: $SCRATCH/sc19-dl-tutorial/cifar-cnn-N${SLURM_JOB_NUM_NODES}-${SLURM_JOB_ID} 4 | 5 | data: 6 | name: cifar10 7 | 8 | model: 9 | name: cnn 10 | input_shape: [32, 32, 3] 11 | n_classes: 10 12 | dropout: 0.1 13 | 14 | optimizer: 15 | name: Adam 16 | lr: 0.001 17 | 18 | # Scaled down n_epochs 19 | training: 20 | batch_size: 64 21 | n_epochs: 4 22 | lr_warmup_epochs: 0 23 | loss: categorical_crossentropy 24 | metrics: [accuracy] 25 | -------------------------------------------------------------------------------- /configs/imagenet_resnet.yaml: -------------------------------------------------------------------------------- 1 | # This configuration should match what is implemented in the horovod example: 2 | # https://github.com/uber/horovod/blob/master/examples/keras_imagenet_resnet50.py 3 | 4 | output_dir: $SCRATCH/sc19-dl-tutorial/imagenet-resnet-N${SLURM_JOB_NUM_NODES}-${SLURM_JOB_ID} 5 | 6 | data: 7 | name: imagenet 8 | train_dir: /global/cscratch1/sd/sfarrell/ImageNet-100/train 9 | valid_dir: /global/cscratch1/sd/sfarrell/ImageNet-100/validation 10 | 11 | model: 12 | name: resnet50 13 | 
input_shape: [224, 224, 3] 14 | n_classes: 100 15 | 16 | optimizer: 17 | name: SGD 18 | lr: 0.0125 19 | momentum: 0.9 20 | 21 | training: 22 | batch_size: 32 23 | n_epochs: 100 24 | lr_warmup_epochs: 5 25 | loss: categorical_crossentropy 26 | metrics: [accuracy, top_k_categorical_accuracy] 27 | lr_schedule: 28 | - {start_epoch: 5, end_epoch: 30, multiplier: 1.} 29 | - {start_epoch: 30, end_epoch: 60, multiplier: 1.e-1} 30 | - {start_epoch: 60, end_epoch: 80, multiplier: 1.e-2} 31 | - {start_epoch: 80, multiplier: 1.e-3} 32 | -------------------------------------------------------------------------------- /data/__init__.py: -------------------------------------------------------------------------------- 1 | """ 2 | Keras dataset specifications. 3 | TODO: add MNIST. 4 | """ 5 | 6 | def get_datasets(name, **data_args): 7 | if name == 'dummy': 8 | from .dummy import get_datasets 9 | return get_datasets(**data_args) 10 | elif name == 'cifar10': 11 | from .cifar10 import get_datasets 12 | return get_datasets(**data_args) 13 | elif name == 'imagenet': 14 | from .imagenet import get_datasets 15 | return get_datasets(**data_args) 16 | else: 17 | raise ValueError('Dataset %s unknown' % name) 18 | -------------------------------------------------------------------------------- /data/cifar10.py: -------------------------------------------------------------------------------- 1 | """ 2 | CIFAR10 dataset specification. 3 | 4 | https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py 5 | https://github.com/keras-team/keras/blob/master/examples/cifar10_resnet.py 6 | """ 7 | 8 | # Externals 9 | import keras 10 | from keras.datasets import cifar10 11 | from keras.preprocessing.image import ImageDataGenerator 12 | 13 | def get_datasets(batch_size, n_train=None, n_valid=None): 14 | """ 15 | Load the CIFAR10 data and construct pipeline. 
16 | """ 17 | (x_train, y_train), (x_valid, y_valid) = cifar10.load_data() 18 | 19 | # Normalize pixel values 20 | x_train = x_train.astype('float32') / 255 21 | x_valid = x_valid.astype('float32') / 255 22 | 23 | # Select subset of data if specified 24 | if n_train is not None: 25 | x_train, y_train = x_train[:n_train], y_train[:n_train] 26 | if n_valid is not None: 27 | x_valid, y_valid = x_valid[:n_valid], y_valid[:n_valid] 28 | 29 | # Convert labels to class vectors 30 | n_classes = 10 31 | y_train = keras.utils.to_categorical(y_train, n_classes) 32 | y_valid = keras.utils.to_categorical(y_valid, n_classes) 33 | 34 | # Prepare the generators with data augmentation 35 | train_gen = ImageDataGenerator(width_shift_range=0.1, 36 | height_shift_range=0.1, 37 | horizontal_flip=True) 38 | valid_gen = ImageDataGenerator() 39 | train_iter = train_gen.flow(x_train, y_train, batch_size=batch_size, shuffle=True) 40 | valid_iter = valid_gen.flow(x_valid, y_valid, batch_size=batch_size, shuffle=True) 41 | return train_iter, valid_iter 42 | -------------------------------------------------------------------------------- /data/dummy.py: -------------------------------------------------------------------------------- 1 | """ 2 | Random dummy dataset specification. 
3 | """ 4 | 5 | # System 6 | import math 7 | 8 | # Externals 9 | import numpy as np 10 | from keras.utils import Sequence 11 | 12 | class DummyDataset(Sequence): 13 | 14 | def __init__(self, n_samples, batch_size, input_shape, target_shape): 15 | self.x = np.random.normal(size=(n_samples,) + tuple(input_shape)) 16 | self.y = np.random.normal(size=(n_samples,) + tuple(target_shape)) 17 | self.batch_size = batch_size 18 | 19 | def __len__(self): 20 | return math.ceil(len(self.x) / self.batch_size) 21 | 22 | def __getitem__(self, idx): 23 | start = idx * self.batch_size 24 | end = (idx + 1) * self.batch_size 25 | return self.x[start:end], self.y[start:end] 26 | 27 | def get_datasets(batch_size, n_train=1024, n_valid=1024, 28 | input_shape=(32, 32, 3), target_shape=()): 29 | train_data = DummyDataset(n_train, batch_size, input_shape, target_shape) 30 | valid_data = DummyDataset(n_valid, batch_size, input_shape, target_shape) 31 | return train_data, valid_data 32 | -------------------------------------------------------------------------------- /data/imagenet.py: -------------------------------------------------------------------------------- 1 | """ 2 | ImageNet dataset specification. 
3 | 4 | Adapted from 5 | https://github.com/uber/horovod/blob/master/examples/keras_imagenet_resnet50.py 6 | """ 7 | 8 | # Externals 9 | import keras 10 | from keras.preprocessing.image import ImageDataGenerator 11 | 12 | def get_datasets(batch_size, train_dir, valid_dir): 13 | train_gen = ImageDataGenerator( 14 | preprocessing_function=keras.applications.resnet50.preprocess_input, 15 | width_shift_range=0.33, height_shift_range=0.33, zoom_range=0.5, 16 | horizontal_flip=True) 17 | test_gen = ImageDataGenerator( 18 | preprocessing_function=keras.applications.resnet50.preprocess_input, 19 | zoom_range=(0.875, 0.875)) 20 | train_iter = train_gen.flow_from_directory(train_dir, batch_size=batch_size, 21 | target_size=(224, 224), shuffle=True) 22 | test_iter = test_gen.flow_from_directory(valid_dir, batch_size=batch_size, 23 | target_size=(224, 224), shuffle=True) 24 | return train_iter, test_iter 25 | -------------------------------------------------------------------------------- /hpo/mnist-lenet5/README.md: -------------------------------------------------------------------------------- 1 | # LeNet-5 trained on MNIST digits dataset 2 | 3 | This directory contains an HPO example of doing a topology optimization. 4 | This is an interesting example for two reasons: 5 | 6 | 1. The kernel script has been modified to minimize elapsed time instead of loss. 7 | 2. The hyperparameters and network topology are being optimized simultaneously. 8 | 9 | There is a reasonable chance you will not have time to run this example to 10 | completion within the time frame of the tutorial, and that is OK. Running this 11 | example is not required to understand what this section aims to demonstrate. 12 | 13 | ## LeNet and MNIST digits dataset 14 | 15 | The LeNet architecture is a CNN designed by Yann LeCun for the task of 16 | classifying handwritten digit images. 
17 | 18 | It consists of two convolutional layers, each of which is followed by a 19 | subsampling layer, and then a pair of fully-connected layers with 20 | a final output layer. 21 | 22 | The MNIST dataset contains 70,000 labeled images of handwritten numerals, each 23 | a single-channel image at 28×28 pixel resolution. Ten thousand images are held out to form 24 | a testing set. 25 | 26 | ## Kernel Script 27 | 28 | The kernel script (`source/mnist.py`) trains the LeNet-5 CNN to predict the 29 | values of handwritten digits from 0 to 9 from the MNIST digits dataset. 30 | This script originates from the TensorFlow repository and was modified to 31 | expose the hyperparameters as command-line arguments and print out the figure 32 | of merit. 33 | 34 | The exposed hyperparameters include: 35 | 36 | - `dropout`: Dropout rate used for generalization 37 | - `momentum`: Momentum factor (passed to the AdamOptimizer as its learning rate in `source/mnist.py`) 38 | - `c1_sz`: Filter size of the first convolutional layer 39 | - `c1_ft`: Filter count of the first convolutional layer 40 | - `c2_sz`: Filter size of the second convolutional layer 41 | - `c2_ft`: Filter count of the second convolutional layer 42 | - `fullyc_sz`: Width of the fully connected layer 43 | 44 | To run the kernel script directly, use the batch script: 45 | 46 | # From top-level directory of repository 47 | sbatch scripts/mnist_lenet5.sh 48 | 49 | A single run should not take more than a few minutes to complete. 50 | 51 | ## HPO Driver 52 | 53 | To run the HPO driver, use the batch script: 54 | 55 | # From top-level directory of repository 56 | sbatch scripts/hpo_mnist_lenet5.sh 57 | 58 | This example will take a while to run. Feel free to look over previous examples 59 | that you may not yet have completed while this runs. 
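As a quick sanity check on the topology described above: two rounds of 2×2 pooling shrink the 28×28 input to 7×7, which is why `source/mnist.py` flattens `7 * 7 * c2_ft` features before the fully connected layer. A small sketch (the helper name is illustrative):

```python
def flattened_size(input_side=28, n_pools=2, c2_ft=64):
    """Feature count reaching the fully connected layer after repeated
    2x2, stride-2 pooling (helper name is illustrative)."""
    side = input_side
    for _ in range(n_pools):
        side //= 2  # each pooling layer halves height and width
    return side * side * c2_ft

# 28 -> 14 -> 7, so with the default c2_ft=64 the flatten step
# sees 7 * 7 * 64 features, matching the reshape in mnist.py.
print(flattened_size())  # 3136
```

Note that when HPO varies `c2_ft`, this flattened size, and hence the width of the first fully connected weight matrix, changes with it.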
60 | -------------------------------------------------------------------------------- /hpo/mnist-lenet5/genetic.py: -------------------------------------------------------------------------------- 1 | import argparse 2 | 3 | from crayai import hpo 4 | 5 | from os import path 6 | from os.path import abspath, dirname 7 | 8 | # Give abs path to src directory so that this script can be run from anywhere: 9 | pwd = path.dirname(path.abspath(__file__)) 10 | src_path = path.join(pwd, 'source') 11 | run_path = path.join(pwd, 'run') 12 | 13 | argparser = argparse.ArgumentParser() 14 | argparser.add_argument('-N', '--num_nodes', type=int, default=1) 15 | argparser.add_argument('--generations', type=int, default=3) 16 | argparser.add_argument('--num_demes', type=int, default=1) 17 | argparser.add_argument('--pop_size', type=int, default=8) 18 | argparser.add_argument('--mutation_rate', type=float, default=0.5) 19 | argparser.add_argument('--crossover_rate', type=float, default=0.33) 20 | argparser.add_argument('--verbose', action='store_true') 21 | args = argparser.parse_args() 22 | 23 | print("------------------------------------------------------------") 24 | print("Genetic HPO Example: LeNet-5 (MNIST) TensorFlow -- Cray Inc.") 25 | print("------------------------------------------------------------") 26 | 27 | evaluator = hpo.Evaluator('python3 {0}/mnist.py'.format(src_path), 28 | run_path=run_path, 29 | src_path=src_path, 30 | nodes=args.num_nodes, 31 | verbose=args.verbose) 32 | 33 | optimizer = hpo.GeneticOptimizer(evaluator, 34 | generations=args.generations, 35 | num_demes=args.num_demes, 36 | pop_size=args.pop_size, 37 | mutation_rate=args.mutation_rate, 38 | crossover_rate=args.crossover_rate, 39 | verbose=args.verbose, 40 | log_fn='mnist-topology.log') 41 | 42 | params = hpo.Params([["--dropout", 0.5, (0.005, 0.9)], 43 | ["--momentum", 1.0e-4, (1.0e-6, 1.0e-2)], 44 | ["--c1_sz", 5, (2, 8)], 45 | ["--c1_ft", 32, (8, 128)], 46 | ["--c2_sz", 5, (2, 8)], 47 | 
["--c2_ft", 64, (16, 256)], 48 | ["--fullyc_sz", 1024, (64, 4096)]]) 49 | 50 | optimizer.optimize(params) 51 | 52 | # Print best FoM value 53 | print('Best FoM: ', optimizer.best_fom) 54 | 55 | # Print best hyperparameters 56 | print('Best HPs:') 57 | for param, value in optimizer.best_params.items(): 58 | print(param, ' = ', value) 59 | 60 | print("------------------------------------------------------------") 61 | print("Done.") 62 | print("------------------------------------------------------------") 63 | -------------------------------------------------------------------------------- /hpo/mnist-lenet5/source/.gitignore: -------------------------------------------------------------------------------- 1 | data 2 | -------------------------------------------------------------------------------- /hpo/mnist-lenet5/source/mnist.py: -------------------------------------------------------------------------------- 1 | ############################################################ 2 | # Original from: # 3 | # http://tensorflow.org/tutorials/mnist/beginners/index.md # 4 | # Copyright 2015 TensorFlow Authors. 
Apache Version 2.0 # 5 | ############################################################ 6 | 7 | 8 | from __future__ import absolute_import 9 | from __future__ import division 10 | from __future__ import print_function 11 | 12 | import argparse 13 | import sys 14 | import numpy 15 | import timeit 16 | 17 | from tensorflow.examples.tutorials.mnist import input_data 18 | 19 | import tensorflow as tf 20 | 21 | 22 | ############################################################ 23 | 24 | 25 | target_error = 0.95 26 | mbatchsz = 100 27 | maxiters = 2000 28 | FLAGS = None 29 | 30 | 31 | ############################################################ 32 | 33 | 34 | def main(_): 35 | ############################## 36 | 37 | # Import data 38 | mnist = input_data.read_data_sets(FLAGS.data_dir, one_hot=True) 39 | # Set NN topology 40 | topology = [FLAGS.c1_sz, FLAGS.c1_ft, FLAGS.c2_sz, FLAGS.c2_ft, FLAGS.fullyc_sz] 41 | print("NN Topology: ", topology) 42 | 43 | ############################## 44 | 45 | x = tf.placeholder(tf.float32, [None, 784]) 46 | W = tf.Variable(tf.zeros([784, 10])) 47 | b = tf.Variable(tf.zeros([10])) 48 | y = tf.matmul(x, W) + b 49 | y_ = tf.placeholder(tf.float32, [None, 10]) 50 | 51 | sess = tf.InteractiveSession() 52 | tf.global_variables_initializer().run() 53 | 54 | def weight_variable(shape): 55 | initial = tf.truncated_normal(shape, stddev=0.1) 56 | return tf.Variable(initial) 57 | 58 | def bias_variable(shape): 59 | initial = tf.constant(0.1, shape=shape) 60 | return tf.Variable(initial) 61 | 62 | def conv2d(x, W): 63 | return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME') 64 | 65 | def max_pool_2x2(x): 66 | return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], 67 | strides=[1, 2, 2, 1], padding='SAME') 68 | 69 | W_conv1 = weight_variable([topology[0], topology[0], 1, topology[1]]) 70 | b_conv1 = bias_variable([topology[1]]) 71 | 72 | x_image = tf.reshape(x, [-1,28,28,1]) 73 | 74 | h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1) 75 | h_pool1 = 
max_pool_2x2(h_conv1) 76 | 77 | W_conv2 = weight_variable([topology[2], topology[2], topology[1], topology[3]]) 78 | b_conv2 = bias_variable([topology[3]]) 79 | 80 | h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2) 81 | h_pool2 = max_pool_2x2(h_conv2) 82 | 83 | W_fc1 = weight_variable([7 * 7 * topology[3], topology[4]]) 84 | b_fc1 = bias_variable([topology[4]]) 85 | 86 | h_pool2_flat = tf.reshape(h_pool2, [-1, 7*7*topology[3]]) 87 | h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1) 88 | 89 | keep_prob = tf.placeholder(tf.float32) 90 | h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob) 91 | 92 | W_fc2 = weight_variable([topology[4], 10]) 93 | b_fc2 = bias_variable([10]) 94 | 95 | y_conv = tf.matmul(h_fc1_drop, W_fc2) + b_fc2 96 | 97 | cross_entropy = tf.reduce_mean( 98 | tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y_conv)) 99 | train_step = tf.train.AdamOptimizer(FLAGS.momentum).minimize(cross_entropy) 100 | correct_prediction = tf.equal(tf.argmax(y_conv,1), tf.argmax(y_,1)) 101 | accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) 102 | sess.run(tf.global_variables_initializer()) 103 | 104 | elapsed = 0.0 105 | iters = maxiters 106 | for i in range(iters): 107 | batch = mnist.train.next_batch(mbatchsz) 108 | if i%100 == 0: 109 | train_accuracy = accuracy.eval(feed_dict={x:batch[0], y_: batch[1], keep_prob: 1.0}) 110 | val_acc = accuracy.eval(feed_dict={x: mnist.validation.images, y_: mnist.validation.labels, keep_prob: 1.0}) 111 | if i == 0: 112 | print("#iter traintime train-err val-err") 113 | print("%d %e %e %e"%(i,elapsed,train_accuracy,val_acc)) 114 | if val_acc >= target_error: 115 | if iters == i-100: 116 | iters=i 117 | break 118 | iters=i 119 | start_time = timeit.default_timer() 120 | train_step.run(feed_dict={x: batch[0], y_: batch[1], keep_prob: FLAGS.dropout}) 121 | if i >= 100: 122 | elapsed += timeit.default_timer() - start_time 123 | 124 | ############################## 125 | 126 | test_acc = 
accuracy.eval(feed_dict={x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0}) 127 | print("\nTime to error (%f): %e s"%(1.0-target_error,elapsed)) 128 | print("Train iters: %d"%(iters)) 129 | print("Test accuracy: %e\n"%test_acc) 130 | print("FoM: %e"%(elapsed)) 131 | 132 | ############################## 133 | 134 | 135 | ############################################################ 136 | 137 | 138 | if __name__ == '__main__': 139 | parser = argparse.ArgumentParser() 140 | parser.add_argument('--data_dir', type=str, default='data', 141 | help='Directory for storing input data') 142 | # Topology 143 | parser.add_argument('--c1_sz', type=int, default=5) 144 | parser.add_argument('--c1_ft', type=int, default=32) 145 | parser.add_argument('--c2_sz', type=int, default=5) 146 | parser.add_argument('--c2_ft', type=int, default=64) 147 | parser.add_argument('--fullyc_sz', type=int, default=1024) 148 | 149 | parser.add_argument('--dropout', type=float, default=0.5, 150 | help='Keep probability for dropout (fed to keep_prob).') 151 | parser.add_argument('--momentum', type=float, default=1e-4, 152 | help='Learning rate passed to AdamOptimizer.') 153 | FLAGS, unparsed = parser.parse_known_args() 154 | tf.app.run(main=main, argv=[sys.argv[0]] + unparsed) 155 | 156 | 157 | ############################################################ 158 | -------------------------------------------------------------------------------- /hpo/sin/README.md: -------------------------------------------------------------------------------- 1 | # Hello World Examples 2 | 3 | This directory contains a "hello world" of hyperparameter optimization: 4 | Find the coefficients (our hyperparameters) of a 6th order polynomial that 5 | best fits a sine wave between -π and π. 6 | 7 | The sin example is a nice way to introduce the HPO interface due to the quick 8 | computation time. 9 | 10 | ## Kernel Script 11 | 12 | Take a look at the kernel script. 
See how the hyperparameters (polynomial 13 | coefficients) are exposed as command line options through argparse and how the 14 | figure of merit (cost function) is exposed through a print statement. 15 | 16 | If you want to run this script on the login node, be sure to load python3 17 | through the tensorflow module if you have not already: 18 | 19 | module load tensorflow 20 | 21 | The kernel script can be run via command line on the login node: 22 | 23 | # From this directory 24 | python3 source/sin.py 25 | 26 | Try running it directly with the default arguments. You are welcome to try 27 | hand-tuning the hyperparameters to lower the FoM value before trying HPO. 28 | The script supports a `--help` flag which lists the options: 29 | 30 | # From this directory 31 | python3 source/sin.py --help 32 | 33 | 34 | ## HPO Driver 35 | 36 | Take a look at each of the hpo examples, and see how the hyperparameter 37 | optimization is set up. Each example includes some comments describing the 38 | interface. You can also refer to the API documentation for reference: 39 | 40 | https://cray.github.io/crayai/hpo/hpo.html 41 | 42 | The HPO scripts should be launched onto an allocation by submitting their 43 | respective batch script: 44 | 45 | # From top-level directory of repository 46 | sbatch scripts/hpo_sin_grid.sh 47 | sbatch scripts/hpo_sin_random.sh 48 | sbatch scripts/hpo_sin_genetic.sh 49 | 50 | Each of these examples takes 30-60 seconds to run to completion on 1 node. 
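Conceptually, each of these drivers just searches the seven-coefficient space and keeps the best figure of merit. As a hedged illustration (not part of the repository, and omitting sin.py's run-time scaling of the FoM), the random-search strategy can be sketched in plain Python:

```python
import math
import random

def polynomial_error(coeffs):
    """Sum of squared errors between sin(x) and a degree-6 polynomial
    with the given 7 coefficients, sampled at 100 points on [-pi, pi),
    mirroring the loop in source/sin.py."""
    err = 0.0
    for i in range(100):
        x = ((i - 50.0) / 50.0) * math.pi
        # coeffs[p] multiplies x**p, matching sin.py's A..G ordering
        val = sum(c * x ** p for p, c in enumerate(coeffs))
        err += (math.sin(x) - val) ** 2
    return err

def random_search(num_iters=200, seed=0):
    """Draw coefficient sets uniformly from the same (-1, 1) bounds the
    examples use and keep the best one -- what hpo.RandomOptimizer
    automates (with parallel evaluation on an allocation)."""
    rng = random.Random(seed)
    best_coeffs, best_fom = None, float("inf")
    for _ in range(num_iters):
        coeffs = [rng.uniform(-1.0, 1.0) for _ in range(7)]
        fom = polynomial_error(coeffs)
        if fom < best_fom:
            best_coeffs, best_fom = coeffs, fom
    return best_coeffs, best_fom
```

The crayai optimizers add the pieces this sketch lacks: launching each evaluation as a separate process, parsing the FoM from its output, and (for the genetic optimizer) mutating good candidates rather than sampling blindly.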
51 | -------------------------------------------------------------------------------- /hpo/sin/genetic_example.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | # encoding: utf-8 3 | """Genetic optimizer example""" 4 | from crayai import hpo 5 | 6 | evaluator = hpo.Evaluator('python source/sin.py', verbose=True) 7 | 8 | params = hpo.Params([["-a", 1.0, (-1.0, 1.0)], 9 | ["-b",-1.0, (-1.0, 1.0)], 10 | ["-c", 1.0, (-1.0, 1.0)], 11 | ["-d",-1.0, (-1.0, 1.0)], 12 | ["-e", 1.0, (-1.0, 1.0)], 13 | ["-f",-1.0, (-1.0, 1.0)], 14 | ["-g", 1.0, (-1.0, 1.0)]]) 15 | 16 | optimizer = hpo.GeneticOptimizer(evaluator, 17 | generations=13, 18 | pop_size=10, 19 | num_demes=1, 20 | mutation_rate=0.9, 21 | log_fn='genetic.log', 22 | verbose=True) 23 | 24 | 25 | optimizer.optimize(params) 26 | 27 | print(optimizer.best_fom) 28 | print(optimizer.best_params) 29 | -------------------------------------------------------------------------------- /hpo/sin/grid_example.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | # encoding: utf-8 3 | """Grid optimizer example""" 4 | from crayai import hpo 5 | 6 | evaluator = hpo.Evaluator('python source/sin.py', verbose=True) 7 | 8 | params = hpo.Params([["-a", 1.0, (-1.0, 1.0)], 9 | ["-b",-1.0, (-1.0, 1.0)], 10 | ["-c", 1.0, (-1.0, 1.0)], 11 | ["-d",-1.0, (-1.0, 1.0)], 12 | ["-e", 1.0, (-1.0, 1.0)], 13 | ["-f",-1.0, (-1.0, 1.0)], 14 | ["-g", 1.0, (-1.0, 1.0)]]) 15 | 16 | # Choose 2 grid points for each HP in addition to the initial point 17 | optimizer = hpo.GridOptimizer(evaluator, 18 | grid_size=2, 19 | chunk_size=50, 20 | verbose=True) 21 | 22 | optimizer.optimize(params) 23 | 24 | print(optimizer.best_fom) 25 | print(optimizer.best_params) 26 | -------------------------------------------------------------------------------- /hpo/sin/random_example.py: 
-------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | # encoding: utf-8 3 | """Random optimizer example""" 4 | from crayai import hpo 5 | 6 | evaluator = hpo.Evaluator('python source/sin.py', verbose=True) 7 | 8 | params = hpo.Params([["-a", 1.0, (-1.0, 1.0)], 9 | ["-b",-1.0, (-1.0, 1.0)], 10 | ["-c", 1.0, (-1.0, 1.0)], 11 | ["-d",-1.0, (-1.0, 1.0)], 12 | ["-e", 1.0, (-1.0, 1.0)], 13 | ["-f",-1.0, (-1.0, 1.0)], 14 | ["-g", 1.0, (-1.0, 1.0)]]) 15 | 16 | optimizer = hpo.RandomOptimizer(evaluator, 17 | num_iters=129, 18 | verbose=True) 19 | 20 | optimizer.optimize(params) 21 | 22 | print(optimizer.best_fom) 23 | print(optimizer.best_params) 24 | -------------------------------------------------------------------------------- /hpo/sin/source/sin.py: -------------------------------------------------------------------------------- 1 | import math 2 | import argparse 3 | import time 4 | 5 | def main(): 6 | # Parse command line args 7 | argparser = argparse.ArgumentParser() 8 | argparser.add_argument("-a", "--A", type=float, default=1.0) 9 | argparser.add_argument("-b", "--B", type=float, default=-1.0) 10 | argparser.add_argument("-c", "--C", type=float, default=1.0) 11 | argparser.add_argument("-d", "--D", type=float, default=-1.0) 12 | argparser.add_argument("-e", "--E", type=float, default=1.0) 13 | argparser.add_argument("-f", "--F", type=float, default=-1.0) 14 | argparser.add_argument("-g", "--G", type=float, default=1.0) 15 | args = argparser.parse_args() 16 | # Compute error 17 | start = time.time() 18 | err = 0.0 19 | for i in range(100): 20 | x = ((float(i)-50.0) / 50.0) * 3.1415926535 21 | val = args.A + args.B*pow(x,1) + args.C*pow(x,2) + args.D*pow(x,3) 22 | val = val + args.E*pow(x,4) + args.F*pow(x,5) + args.G*pow(x,6) 23 | val = abs( math.sin(x) - val ) 24 | val = val * val 25 | err = err + val 26 | stop = time.time() 27 | # Figure of merit scaled by run time 28 | fom = err*(stop-start) 29 | # Print 
the time-scaled error as our FoM 30 | print("FoM: %e"%fom) 31 | 32 | main() 33 | 34 | -------------------------------------------------------------------------------- /hpo_train.py: -------------------------------------------------------------------------------- 1 | """Hyperparameter optimization of cifar10_cnn example (train.py)""" 2 | 3 | # System 4 | import argparse 5 | 6 | # Externals 7 | from crayai import hpo 8 | 9 | def parse_args(): 10 | parser = argparse.ArgumentParser('hpo_train.py') 11 | parser.add_argument('-N', '--num_nodes', type=int, default=1, 12 | help='number of nodes to evaluate over') 13 | parser.add_argument('-v', '--verbose', action='store_true', 14 | help='enable verbose HPO output') 15 | return parser.parse_args() 16 | 17 | 18 | def main(): 19 | args = parse_args() 20 | 21 | # Set up evaluator 22 | # configs/hpo_cifar10_cnn.yaml is a scaled-down version of cifar10 23 | # --hpo is required for train.py to print the FoM 24 | # --no-output is required to avoid checkpointing 25 | eval_cmd = 'python ./train.py configs/hpo_cifar10_cnn.yaml --hpo --no-output' 26 | evaluator = hpo.Evaluator(eval_cmd, 27 | nodes=args.num_nodes, 28 | verbose=args.verbose) 29 | 30 | # Set up search space for HPs: learning rate and dropout 31 | params = hpo.Params([['--optimizer lr', 0.001, (1e-6, 1)], 32 | ['--dropout', 0.1, (0.0, 0.5)]]) 33 | 34 | 35 | # Set up genetic optimizer with 8 evaluations/generation and 4 generations 36 | optimizer = hpo.GeneticOptimizer(evaluator, generations=4, pop_size=8, 37 | num_demes=1, mutation_rate=0.6, 38 | verbose=args.verbose) 39 | 40 | # Optimize the hyperparameters 41 | optimizer.optimize(params) 42 | 43 | # Print the figure of merit value for the best set of hyperparameters 44 | print(optimizer.best_fom) 45 | # Print the best set of hyperparameters found 46 | print(optimizer.best_params) 47 | 48 | 49 | if __name__ == '__main__': 50 | main() 51 | -------------------------------------------------------------------------------- 
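The kernel-script contract that sin.py, mnist.py, and `hpo_train.py` all follow can be sketched in a few lines. This is a hedged illustration, not repository code: the flag name `--knob` and the quadratic objective are placeholders, standing in for real hyperparameters like `--dropout` and a real training run. Hyperparameters arrive as command-line options; the figure of merit leaves as a single printed `FoM:` line that the crayai evaluator reads from the script's output.

```python
import argparse

def evaluate(argv=None):
    """Minimal sketch of an HPO kernel script: parse hyperparameters
    from the command line, compute a scalar figure of merit, and
    report it with a print statement for the evaluator to pick up."""
    parser = argparse.ArgumentParser()
    # "--knob" is a placeholder hyperparameter for illustration only.
    parser.add_argument("--knob", type=float, default=1.0)
    args = parser.parse_args(argv)
    # Stand-in objective: any scalar the optimizer should minimize.
    fom = (args.knob - 0.5) ** 2
    print("FoM: %e" % fom)
    return fom
```

A driver then wraps the script with `hpo.Evaluator('python kernel.py')` and declares `--knob` in `hpo.Params`, exactly as the sin examples above do for their coefficient flags.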
/logs/README.md: -------------------------------------------------------------------------------- 1 | Slurm logs go in this directory 2 | -------------------------------------------------------------------------------- /logs/reference/cifar-cnn-25785469.out: -------------------------------------------------------------------------------- 1 | Using TensorFlow backend. 2 | Using TensorFlow backend. 3 | 2019-11-13 12:46:18,804 INFO Initialized rank 0 out of 1 4 | 2019-11-13 12:46:18,805 INFO Job configuration: {'output_dir': '$SCRATCH/sc19-dl-tutorial/cifar-cnn-N${SLURM_JOB_NUM_NODES}-${SLURM_JOB_ID}', 'data': {'name': 'cifar10'}, 'model': {'name': 'cnn', 'input_shape': [32, 32, 3], 'n_classes': 10, 'dropout': 0.1}, 'optimizer': {'name': 'Adam', 'lr': 0.001}, 'training': {'batch_size': 512, 'n_epochs': 16, 'lr_warmup_epochs': 5, 'loss': 'categorical_crossentropy', 'metrics': ['accuracy']}, 'device': {'intra_threads': 33, 'inter_threads': 1, 'blocktime': 0}} 5 | 2019-11-13 12:46:18,805 INFO Saving job outputs to /global/cscratch1/sd/sfarrell/sc19-dl-tutorial/cifar-cnn-N1-25785469 6 | 2019-11-13 12:46:18.810798: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 AVX512F FMA 7 | 2019-11-13 12:46:18.877635: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 1399985000 Hz 8 | 2019-11-13 12:46:18.947426: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x5555565f9d00 executing computations on platform Host. Devices: 9 | 2019-11-13 12:46:18.947573: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0): , 10 | WARNING:tensorflow:From /usr/common/software/tensorflow/intel-tensorflow/1.13.1-py36/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version. 
11 | Instructions for updating: 12 | Colocations handled automatically by placer. 13 | 2019-11-13 12:46:23,705 WARNING From /usr/common/software/tensorflow/intel-tensorflow/1.13.1-py36/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version. 14 | Instructions for updating: 15 | Colocations handled automatically by placer. 16 | WARNING:tensorflow:From /usr/common/software/tensorflow/intel-tensorflow/1.13.1-py36/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py:3445: calling dropout (from tensorflow.python.ops.nn_ops) with keep_prob is deprecated and will be removed in a future version. 17 | Instructions for updating: 18 | Please use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`. 19 | 2019-11-13 12:46:24,108 WARNING From /usr/common/software/tensorflow/intel-tensorflow/1.13.1-py36/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py:3445: calling dropout (from tensorflow.python.ops.nn_ops) with keep_prob is deprecated and will be removed in a future version. 20 | Instructions for updating: 21 | Please use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`. 22 | WARNING:tensorflow:From /usr/common/software/tensorflow/intel-tensorflow/1.13.1-py36/lib/python3.6/site-packages/tensorflow/python/ops/math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version. 23 | Instructions for updating: 24 | Use tf.cast instead. 25 | 2019-11-13 12:46:24,692 WARNING From /usr/common/software/tensorflow/intel-tensorflow/1.13.1-py36/lib/python3.6/site-packages/tensorflow/python/ops/math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version. 26 | Instructions for updating: 27 | Use tf.cast instead. 
28 | _________________________________________________________________ 29 | Layer (type) Output Shape Param # 30 | ================================================================= 31 | conv2d_1 (Conv2D) (None, 32, 32, 16) 448 32 | _________________________________________________________________ 33 | max_pooling2d_1 (MaxPooling2 (None, 16, 16, 16) 0 34 | _________________________________________________________________ 35 | conv2d_2 (Conv2D) (None, 16, 16, 32) 4640 36 | _________________________________________________________________ 37 | max_pooling2d_2 (MaxPooling2 (None, 8, 8, 32) 0 38 | _________________________________________________________________ 39 | conv2d_3 (Conv2D) (None, 8, 8, 64) 18496 40 | _________________________________________________________________ 41 | max_pooling2d_3 (MaxPooling2 (None, 4, 4, 64) 0 42 | _________________________________________________________________ 43 | flatten_1 (Flatten) (None, 1024) 0 44 | _________________________________________________________________ 45 | dense_1 (Dense) (None, 128) 131200 46 | _________________________________________________________________ 47 | dropout_1 (Dropout) (None, 128) 0 48 | _________________________________________________________________ 49 | dense_2 (Dense) (None, 10) 1290 50 | ================================================================= 51 | Total params: 156,074 52 | Trainable params: 156,074 53 | Non-trainable params: 0 54 | _________________________________________________________________ 55 | Epoch 1/16 56 | - 89s - loss: 1.9173 - acc: 0.2980 - val_loss: 1.6662 - val_acc: 0.4036 57 | WARNING:tensorflow:From /usr/common/software/tensorflow/intel-tensorflow/1.13.1-py36/lib/python3.6/site-packages/horovod/tensorflow/__init__.py:86: div (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version. 58 | Instructions for updating: 59 | Deprecated in favor of operator or tf.math.divide. 
60 | 2019-11-13 12:48:01,166 WARNING From /usr/common/software/tensorflow/intel-tensorflow/1.13.1-py36/lib/python3.6/site-packages/horovod/tensorflow/__init__.py:86: div (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version. 61 | Instructions for updating: 62 | Deprecated in favor of operator or tf.math.divide. 63 | Epoch 2/16 64 | - 78s - loss: 1.6127 - acc: 0.4172 - val_loss: 1.5186 - val_acc: 0.4595 65 | Epoch 3/16 66 | - 82s - loss: 1.5170 - acc: 0.4517 - val_loss: 1.4431 - val_acc: 0.4820 67 | Epoch 4/16 68 | - 83s - loss: 1.4371 - acc: 0.4872 - val_loss: 1.3008 - val_acc: 0.5334 69 | Epoch 5/16 70 | - 83s - loss: 1.3775 - acc: 0.5063 - val_loss: 1.2889 - val_acc: 0.5383 71 | 72 | Epoch 5: finished gradual learning rate warmup to 0.001. 73 | Epoch 6/16 74 | - 83s - loss: 1.3300 - acc: 0.5255 - val_loss: 1.2231 - val_acc: 0.5638 75 | Epoch 7/16 76 | - 83s - loss: 1.2928 - acc: 0.5407 - val_loss: 1.2037 - val_acc: 0.5776 77 | Epoch 8/16 78 | - 83s - loss: 1.2677 - acc: 0.5489 - val_loss: 1.1607 - val_acc: 0.5962 79 | Epoch 9/16 80 | - 83s - loss: 1.2263 - acc: 0.5665 - val_loss: 1.1232 - val_acc: 0.6049 81 | Epoch 10/16 82 | - 83s - loss: 1.1957 - acc: 0.5750 - val_loss: 1.1827 - val_acc: 0.5814 83 | Epoch 11/16 84 | - 84s - loss: 1.1714 - acc: 0.5848 - val_loss: 1.1436 - val_acc: 0.5947 85 | Epoch 12/16 86 | - 82s - loss: 1.1568 - acc: 0.5921 - val_loss: 1.0506 - val_acc: 0.6270 87 | Epoch 13/16 88 | - 83s - loss: 1.1283 - acc: 0.5995 - val_loss: 1.0474 - val_acc: 0.6295 89 | Epoch 14/16 90 | - 84s - loss: 1.1109 - acc: 0.6075 - val_loss: 1.0292 - val_acc: 0.6366 91 | Epoch 15/16 92 | - 83s - loss: 1.0944 - acc: 0.6135 - val_loss: 1.0033 - val_acc: 0.6444 93 | Epoch 16/16 94 | - 83s - loss: 1.0786 - acc: 0.6174 - val_loss: 0.9982 - val_acc: 0.6495 95 | 2019-11-13 13:08:54,284 INFO Best validation accuracy: 0.650 96 | 2019-11-13 13:08:54,290 INFO Average time per epoch: 83.917 s 97 | 2019-11-13 13:08:54,386 INFO All done! 
98 | -------------------------------------------------------------------------------- /logs/reference/cifar-resnet-25792238.out: -------------------------------------------------------------------------------- 1 | Using TensorFlow backend. 2 | Using TensorFlow backend. 3 | 2019-11-13 16:48:44,995 INFO Initialized rank 7 out of 8 4 | Using TensorFlow backend. 5 | 2019-11-13 16:48:44,997 INFO Initialized rank 1 out of 8 6 | Using TensorFlow backend. 7 | 2019-11-13 16:48:44,998 INFO Initialized rank 6 out of 8 8 | Using TensorFlow backend. 9 | 2019-11-13 16:48:44,996 INFO Initialized rank 5 out of 8 10 | Using TensorFlow backend. 11 | 2019-11-13 16:48:44,996 INFO Initialized rank 3 out of 8 12 | Using TensorFlow backend. 13 | 2019-11-13 16:48:44,997 INFO Initialized rank 4 out of 8 14 | Using TensorFlow backend. 15 | 2019-11-13 16:48:44,996 INFO Initialized rank 2 out of 8 16 | 2019-11-13 16:48:44.999173: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 AVX512F FMA 17 | 2019-11-13 16:48:44.999133: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 AVX512F FMA 18 | 2019-11-13 16:48:44.999074: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 AVX512F FMA 19 | 2019-11-13 16:48:45.000200: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 AVX512F FMA 20 | 2019-11-13 16:48:45.000819: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 AVX512F FMA 21 | 2019-11-13 16:48:44.999290: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your 
CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 AVX512F FMA 22 | Using TensorFlow backend. 23 | 2019-11-13 16:48:44,999 INFO Initialized rank 0 out of 8 24 | 2019-11-13 16:48:45.001530: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 AVX512F FMA 25 | 2019-11-13 16:48:45,000 INFO Job configuration: {'output_dir': '$SCRATCH/sc19-dl-tutorial/cifar10-resnet-N${SLURM_JOB_NUM_NODES}-${SLURM_JOB_ID}', 'data': {'name': 'cifar10'}, 'model': {'name': 'resnet_small', 'input_shape': [32, 32, 3], 'n_classes': 10}, 'optimizer': {'name': 'Adam', 'lr': 0.0001, 'lr_scaling': 'linear'}, 'training': {'batch_size': 64, 'n_epochs': 32, 'lr_warmup_epochs': 5, 'loss': 'categorical_crossentropy', 'metrics': ['accuracy']}, 'device': {'intra_threads': 32, 'blocktime': 0}} 26 | 2019-11-13 16:48:45,001 INFO Saving job outputs to /global/cscratch1/sd/sfarrell/sc19-dl-tutorial/cifar10-resnet-N8-25792238 27 | 2019-11-13 16:48:45.007836: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 AVX512F FMA 28 | 2019-11-13 16:48:45.074556: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 1399990000 Hz 29 | 2019-11-13 16:48:45.082744: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 1400150000 Hz 30 | 2019-11-13 16:48:45.084018: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 1400085000 Hz 31 | 2019-11-13 16:48:45.083821: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 1400145000 Hz 32 | 2019-11-13 16:48:45.084377: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 1399975000 Hz 33 | 2019-11-13 16:48:45.084743: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 1399980000 Hz 34 | 2019-11-13 
16:48:45.085307: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 1400145000 Hz 35 | 2019-11-13 16:48:45.085074: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 1399915000 Hz 36 | 2019-11-13 16:48:45.147722: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x5555565fd060 executing computations on platform Host. Devices: 37 | 2019-11-13 16:48:45.147853: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0): , 38 | 2019-11-13 16:48:45.152643: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x5555565fd060 executing computations on platform Host. Devices: 39 | 2019-11-13 16:48:45.153416: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x5555565fd060 executing computations on platform Host. Devices: 40 | 2019-11-13 16:48:45.153657: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x5555565fd060 executing computations on platform Host. Devices: 41 | 2019-11-13 16:48:45.153688: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x5555565fd060 executing computations on platform Host. Devices: 42 | 2019-11-13 16:48:45.153534: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0): , 43 | 2019-11-13 16:48:45.153857: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x5555565fd060 executing computations on platform Host. 
Devices: 44 | 2019-11-13 16:48:45.153793: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0): , 45 | 2019-11-13 16:48:45.153822: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0): , 46 | 2019-11-13 16:48:45.153979: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0): , 47 | 2019-11-13 16:48:45.152777: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0): , 48 | 2019-11-13 16:48:45.155094: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x5555565fd060 executing computations on platform Host. Devices: 49 | 2019-11-13 16:48:45.156188: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x5555565fd060 executing computations on platform Host. Devices: 50 | 2019-11-13 16:48:45.155226: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0): , 51 | 2019-11-13 16:48:45.156330: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0): , 52 | WARNING:tensorflow:From /usr/common/software/tensorflow/intel-tensorflow/1.13.1-py36/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version. 53 | Instructions for updating: 54 | Colocations handled automatically by placer. 55 | 2019-11-13 16:48:49,734 WARNING From /usr/common/software/tensorflow/intel-tensorflow/1.13.1-py36/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version. 56 | Instructions for updating: 57 | Colocations handled automatically by placer. 
58 | WARNING:tensorflow:From /usr/common/software/tensorflow/intel-tensorflow/1.13.1-py36/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version. 59 | Instructions for updating: 60 | Colocations handled automatically by placer. 61 | 2019-11-13 16:48:49,743 WARNING From /usr/common/software/tensorflow/intel-tensorflow/1.13.1-py36/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version. 62 | Instructions for updating: 63 | Colocations handled automatically by placer. 64 | WARNING:tensorflow:From /usr/common/software/tensorflow/intel-tensorflow/1.13.1-py36/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version. 65 | Instructions for updating: 66 | Colocations handled automatically by placer. 67 | 2019-11-13 16:48:49,759 WARNING From /usr/common/software/tensorflow/intel-tensorflow/1.13.1-py36/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version. 68 | Instructions for updating: 69 | Colocations handled automatically by placer. 70 | WARNING:tensorflow:From /usr/common/software/tensorflow/intel-tensorflow/1.13.1-py36/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version. 71 | Instructions for updating: 72 | Colocations handled automatically by placer. 
73 | 2019-11-13 16:48:49,761 WARNING From /usr/common/software/tensorflow/intel-tensorflow/1.13.1-py36/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version. 74 | Instructions for updating: 75 | Colocations handled automatically by placer. 76 | WARNING:tensorflow:From /usr/common/software/tensorflow/intel-tensorflow/1.13.1-py36/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version. 77 | Instructions for updating: 78 | Colocations handled automatically by placer. 79 | WARNING:tensorflow:From /usr/common/software/tensorflow/intel-tensorflow/1.13.1-py36/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version. 80 | Instructions for updating: 81 | Colocations handled automatically by placer. 82 | 2019-11-13 16:48:49,772 WARNING From /usr/common/software/tensorflow/intel-tensorflow/1.13.1-py36/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version. 83 | Instructions for updating: 84 | Colocations handled automatically by placer. 85 | 2019-11-13 16:48:49,772 WARNING From /usr/common/software/tensorflow/intel-tensorflow/1.13.1-py36/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version. 86 | Instructions for updating: 87 | Colocations handled automatically by placer. 
88 | WARNING:tensorflow:From /usr/common/software/tensorflow/intel-tensorflow/1.13.1-py36/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version. 89 | Instructions for updating: 90 | Colocations handled automatically by placer. 91 | 2019-11-13 16:48:49,779 WARNING From /usr/common/software/tensorflow/intel-tensorflow/1.13.1-py36/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version. 92 | Instructions for updating: 93 | Colocations handled automatically by placer. 94 | WARNING:tensorflow:From /usr/common/software/tensorflow/intel-tensorflow/1.13.1-py36/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version. 95 | Instructions for updating: 96 | Colocations handled automatically by placer. 97 | 2019-11-13 16:48:49,782 WARNING From /usr/common/software/tensorflow/intel-tensorflow/1.13.1-py36/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version. 98 | Instructions for updating: 99 | Colocations handled automatically by placer. 
100 | __________________________________________________________________________________________________ 101 | Layer (type) Output Shape Param # Connected to 102 | ================================================================================================== 103 | input_1 (InputLayer) (None, 32, 32, 3) 0 104 | __________________________________________________________________________________________________ 105 | conv1 (Conv2D) (None, 32, 32, 64) 1792 input_1[0][0] 106 | __________________________________________________________________________________________________ 107 | bn_conv1 (BatchNormalization) (None, 32, 32, 64) 256 conv1[0][0] 108 | __________________________________________________________________________________________________ 109 | activation_1 (Activation) (None, 32, 32, 64) 0 bn_conv1[0][0] 110 | __________________________________________________________________________________________________ 111 | res2a_branch2a (Conv2D) (None, 16, 16, 64) 4160 activation_1[0][0] 112 | __________________________________________________________________________________________________ 113 | bn2a_branch2a (BatchNormalizati (None, 16, 16, 64) 256 res2a_branch2a[0][0] 114 | __________________________________________________________________________________________________ 115 | activation_2 (Activation) (None, 16, 16, 64) 0 bn2a_branch2a[0][0] 116 | __________________________________________________________________________________________________ 117 | res2a_branch2b (Conv2D) (None, 16, 16, 64) 36928 activation_2[0][0] 118 | __________________________________________________________________________________________________ 119 | bn2a_branch2b (BatchNormalizati (None, 16, 16, 64) 256 res2a_branch2b[0][0] 120 | __________________________________________________________________________________________________ 121 | activation_3 (Activation) (None, 16, 16, 64) 0 bn2a_branch2b[0][0] 122 | 
__________________________________________________________________________________________________ 123 | res2a_branch2c (Conv2D) (None, 16, 16, 64) 4160 activation_3[0][0] 124 | __________________________________________________________________________________________________ 125 | res2a_branch1 (Conv2D) (None, 16, 16, 64) 4160 activation_1[0][0] 126 | __________________________________________________________________________________________________ 127 | bn2a_branch2c (BatchNormalizati (None, 16, 16, 64) 256 res2a_branch2c[0][0] 128 | __________________________________________________________________________________________________ 129 | bn2a_branch1 (BatchNormalizatio (None, 16, 16, 64) 256 res2a_branch1[0][0] 130 | __________________________________________________________________________________________________ 131 | add_1 (Add) (None, 16, 16, 64) 0 bn2a_branch2c[0][0] 132 | bn2a_branch1[0][0] 133 | __________________________________________________________________________________________________ 134 | activation_4 (Activation) (None, 16, 16, 64) 0 add_1[0][0] 135 | __________________________________________________________________________________________________ 136 | res2b_branch2a (Conv2D) (None, 16, 16, 64) 4160 activation_4[0][0] 137 | __________________________________________________________________________________________________ 138 | bn2b_branch2a (BatchNormalizati (None, 16, 16, 64) 256 res2b_branch2a[0][0] 139 | __________________________________________________________________________________________________ 140 | activation_5 (Activation) (None, 16, 16, 64) 0 bn2b_branch2a[0][0] 141 | __________________________________________________________________________________________________ 142 | res2b_branch2b (Conv2D) (None, 16, 16, 64) 36928 activation_5[0][0] 143 | __________________________________________________________________________________________________ 144 | bn2b_branch2b (BatchNormalizati (None, 16, 16, 64) 256 res2b_branch2b[0][0] 145 | 
__________________________________________________________________________________________________ 146 | activation_6 (Activation) (None, 16, 16, 64) 0 bn2b_branch2b[0][0] 147 | __________________________________________________________________________________________________ 148 | res2b_branch2c (Conv2D) (None, 16, 16, 64) 4160 activation_6[0][0] 149 | __________________________________________________________________________________________________ 150 | bn2b_branch2c (BatchNormalizati (None, 16, 16, 64) 256 res2b_branch2c[0][0] 151 | __________________________________________________________________________________________________ 152 | add_2 (Add) (None, 16, 16, 64) 0 bn2b_branch2c[0][0] 153 | activation_4[0][0] 154 | __________________________________________________________________________________________________ 155 | activation_7 (Activation) (None, 16, 16, 64) 0 add_2[0][0] 156 | __________________________________________________________________________________________________ 157 | res3a_branch2a (Conv2D) (None, 8, 8, 128) 8320 activation_7[0][0] 158 | __________________________________________________________________________________________________ 159 | bn3a_branch2a (BatchNormalizati (None, 8, 8, 128) 512 res3a_branch2a[0][0] 160 | __________________________________________________________________________________________________ 161 | activation_8 (Activation) (None, 8, 8, 128) 0 bn3a_branch2a[0][0] 162 | __________________________________________________________________________________________________ 163 | res3a_branch2b (Conv2D) (None, 8, 8, 128) 147584 activation_8[0][0] 164 | __________________________________________________________________________________________________ 165 | bn3a_branch2b (BatchNormalizati (None, 8, 8, 128) 512 res3a_branch2b[0][0] 166 | __________________________________________________________________________________________________ 167 | activation_9 (Activation) (None, 8, 8, 128) 0 bn3a_branch2b[0][0] 168 | 
__________________________________________________________________________________________________ 169 | res3a_branch2c (Conv2D) (None, 8, 8, 128) 16512 activation_9[0][0] 170 | __________________________________________________________________________________________________ 171 | res3a_branch1 (Conv2D) (None, 8, 8, 128) 8320 activation_7[0][0] 172 | __________________________________________________________________________________________________ 173 | bn3a_branch2c (BatchNormalizati (None, 8, 8, 128) 512 res3a_branch2c[0][0] 174 | __________________________________________________________________________________________________ 175 | bn3a_branch1 (BatchNormalizatio (None, 8, 8, 128) 512 res3a_branch1[0][0] 176 | __________________________________________________________________________________________________ 177 | add_3 (Add) (None, 8, 8, 128) 0 bn3a_branch2c[0][0] 178 | bn3a_branch1[0][0] 179 | __________________________________________________________________________________________________ 180 | activation_10 (Activation) (None, 8, 8, 128) 0 add_3[0][0] 181 | __________________________________________________________________________________________________ 182 | res3b_branch2a (Conv2D) (None, 8, 8, 128) 16512 activation_10[0][0] 183 | __________________________________________________________________________________________________ 184 | bn3b_branch2a (BatchNormalizati (None, 8, 8, 128) 512 res3b_branch2a[0][0] 185 | __________________________________________________________________________________________________ 186 | activation_11 (Activation) (None, 8, 8, 128) 0 bn3b_branch2a[0][0] 187 | __________________________________________________________________________________________________ 188 | res3b_branch2b (Conv2D) (None, 8, 8, 128) 147584 activation_11[0][0] 189 | __________________________________________________________________________________________________ 190 | bn3b_branch2b (BatchNormalizati (None, 8, 8, 128) 512 res3b_branch2b[0][0] 191 | 
__________________________________________________________________________________________________ 192 | activation_12 (Activation) (None, 8, 8, 128) 0 bn3b_branch2b[0][0] 193 | __________________________________________________________________________________________________ 194 | res3b_branch2c (Conv2D) (None, 8, 8, 128) 16512 activation_12[0][0] 195 | __________________________________________________________________________________________________ 196 | bn3b_branch2c (BatchNormalizati (None, 8, 8, 128) 512 res3b_branch2c[0][0] 197 | __________________________________________________________________________________________________ 198 | add_4 (Add) (None, 8, 8, 128) 0 bn3b_branch2c[0][0] 199 | activation_10[0][0] 200 | __________________________________________________________________________________________________ 201 | activation_13 (Activation) (None, 8, 8, 128) 0 add_4[0][0] 202 | __________________________________________________________________________________________________ 203 | res4a_branch2a (Conv2D) (None, 4, 4, 256) 33024 activation_13[0][0] 204 | __________________________________________________________________________________________________ 205 | bn4a_branch2a (BatchNormalizati (None, 4, 4, 256) 1024 res4a_branch2a[0][0] 206 | __________________________________________________________________________________________________ 207 | activation_14 (Activation) (None, 4, 4, 256) 0 bn4a_branch2a[0][0] 208 | __________________________________________________________________________________________________ 209 | res4a_branch2b (Conv2D) (None, 4, 4, 256) 590080 activation_14[0][0] 210 | __________________________________________________________________________________________________ 211 | bn4a_branch2b (BatchNormalizati (None, 4, 4, 256) 1024 res4a_branch2b[0][0] 212 | __________________________________________________________________________________________________ 213 | activation_15 (Activation) (None, 4, 4, 256) 0 bn4a_branch2b[0][0] 214 | 
__________________________________________________________________________________________________ 215 | res4a_branch2c (Conv2D) (None, 4, 4, 256) 65792 activation_15[0][0] 216 | __________________________________________________________________________________________________ 217 | res4a_branch1 (Conv2D) (None, 4, 4, 256) 33024 activation_13[0][0] 218 | __________________________________________________________________________________________________ 219 | bn4a_branch2c (BatchNormalizati (None, 4, 4, 256) 1024 res4a_branch2c[0][0] 220 | __________________________________________________________________________________________________ 221 | bn4a_branch1 (BatchNormalizatio (None, 4, 4, 256) 1024 res4a_branch1[0][0] 222 | __________________________________________________________________________________________________ 223 | add_5 (Add) (None, 4, 4, 256) 0 bn4a_branch2c[0][0] 224 | bn4a_branch1[0][0] 225 | __________________________________________________________________________________________________ 226 | activation_16 (Activation) (None, 4, 4, 256) 0 add_5[0][0] 227 | __________________________________________________________________________________________________ 228 | res4b_branch2a (Conv2D) (None, 4, 4, 256) 65792 activation_16[0][0] 229 | __________________________________________________________________________________________________ 230 | bn4b_branch2a (BatchNormalizati (None, 4, 4, 256) 1024 res4b_branch2a[0][0] 231 | __________________________________________________________________________________________________ 232 | activation_17 (Activation) (None, 4, 4, 256) 0 bn4b_branch2a[0][0] 233 | __________________________________________________________________________________________________ 234 | res4b_branch2b (Conv2D) (None, 4, 4, 256) 590080 activation_17[0][0] 235 | __________________________________________________________________________________________________ 236 | bn4b_branch2b (BatchNormalizati (None, 4, 4, 256) 1024 res4b_branch2b[0][0] 237 | 
__________________________________________________________________________________________________ 238 | activation_18 (Activation) (None, 4, 4, 256) 0 bn4b_branch2b[0][0] 239 | __________________________________________________________________________________________________ 240 | res4b_branch2c (Conv2D) (None, 4, 4, 256) 65792 activation_18[0][0] 241 | __________________________________________________________________________________________________ 242 | bn4b_branch2c (BatchNormalizati (None, 4, 4, 256) 1024 res4b_branch2c[0][0] 243 | __________________________________________________________________________________________________ 244 | add_6 (Add) (None, 4, 4, 256) 0 bn4b_branch2c[0][0] 245 | activation_16[0][0] 246 | __________________________________________________________________________________________________ 247 | activation_19 (Activation) (None, 4, 4, 256) 0 add_6[0][0] 248 | __________________________________________________________________________________________________ 249 | res5a_branch2a (Conv2D) (None, 2, 2, 512) 131584 activation_19[0][0] 250 | __________________________________________________________________________________________________ 251 | bn5a_branch2a (BatchNormalizati (None, 2, 2, 512) 2048 res5a_branch2a[0][0] 252 | __________________________________________________________________________________________________ 253 | activation_20 (Activation) (None, 2, 2, 512) 0 bn5a_branch2a[0][0] 254 | __________________________________________________________________________________________________ 255 | res5a_branch2b (Conv2D) (None, 2, 2, 512) 2359808 activation_20[0][0] 256 | __________________________________________________________________________________________________ 257 | bn5a_branch2b (BatchNormalizati (None, 2, 2, 512) 2048 res5a_branch2b[0][0] 258 | __________________________________________________________________________________________________ 259 | activation_21 (Activation) (None, 2, 2, 512) 0 bn5a_branch2b[0][0] 260 | 
__________________________________________________________________________________________________ 261 | res5a_branch2c (Conv2D)          (None, 2, 2, 512)    262656      activation_21[0][0]              262 | __________________________________________________________________________________________________ 263 | res5a_branch1 (Conv2D)           (None, 2, 2, 512)    131584      activation_19[0][0]              264 | __________________________________________________________________________________________________ 265 | WARNING:tensorflow:From /usr/common/software/tensorflow/intel-tensorflow/1.13.1-py36/lib/python3.6/site-packages/tensorflow/python/ops/math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version. 266 | Instructions for updating: 267 | Use tf.cast instead.
313 | WARNING:tensorflow:From /usr/common/software/tensorflow/intel-tensorflow/1.13.1-py36/lib/python3.6/site-packages/horovod/tensorflow/__init__.py:86: div (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version. 314 | Instructions for updating: 315 | Deprecated in favor of operator or tf.math.divide.
361 | bn5a_branch2c (BatchNormalizati (None, 2, 2, 512) 2048 res5a_branch2c[0][0] 362 | __________________________________________________________________________________________________ 363 | bn5a_branch1 (BatchNormalizatio (None, 2, 2, 512) 2048 res5a_branch1[0][0] 364 | __________________________________________________________________________________________________ 365 | add_7 (Add) (None, 2, 2, 512) 0 bn5a_branch2c[0][0] 366 | bn5a_branch1[0][0] 367 | __________________________________________________________________________________________________ 368 | activation_22 (Activation) (None, 2, 2, 512) 0 add_7[0][0] 369 | __________________________________________________________________________________________________ 370 | res5b_branch2a (Conv2D) (None, 2, 2, 512) 262656 activation_22[0][0] 371 | __________________________________________________________________________________________________ 372 | bn5b_branch2a (BatchNormalizati (None, 2, 2, 512) 2048 res5b_branch2a[0][0] 373 | __________________________________________________________________________________________________ 374 | activation_23 (Activation) (None, 2, 2, 512) 0 bn5b_branch2a[0][0] 375 | __________________________________________________________________________________________________ 376 | res5b_branch2b (Conv2D) (None, 2, 2, 512) 2359808 activation_23[0][0] 377 | __________________________________________________________________________________________________ 378 | bn5b_branch2b (BatchNormalizati (None, 2, 2, 512) 2048 res5b_branch2b[0][0] 379 | __________________________________________________________________________________________________ 380 | activation_24 (Activation) (None, 2, 2, 512) 0 bn5b_branch2b[0][0] 381 | __________________________________________________________________________________________________ 382 | res5b_branch2c (Conv2D) (None, 2, 2, 512) 262656 activation_24[0][0] 383 | 
__________________________________________________________________________________________________ 384 | bn5b_branch2c (BatchNormalizati (None, 2, 2, 512) 2048 res5b_branch2c[0][0] 385 | __________________________________________________________________________________________________ 386 | add_8 (Add) (None, 2, 2, 512) 0 bn5b_branch2c[0][0] 387 | activation_22[0][0] 388 | __________________________________________________________________________________________________ 389 | activation_25 (Activation) (None, 2, 2, 512) 0 add_8[0][0] 390 | __________________________________________________________________________________________________ 391 | avg_pool (GlobalAveragePooling2 (None, 512) 0 activation_25[0][0] 392 | __________________________________________________________________________________________________ 393 | fc1000 (Dense) (None, 10) 5130 avg_pool[0][0] 394 | ================================================================================================== 395 | Total params: 7,704,394 396 | Trainable params: 7,690,826 397 | Non-trainable params: 13,568 398 | __________________________________________________________________________________________________ 399 | Epoch 1/32 400 | - 174s - loss: 2.6844 - acc: 0.2759 - val_loss: 2.5136 - val_acc: 0.3586 401 | Epoch 2/32 402 | - 38s - loss: 2.3228 - acc: 0.4109 - val_loss: 2.2712 - val_acc: 0.4367 403 | Epoch 3/32 404 | - 37s - loss: 2.1374 - acc: 0.4612 - val_loss: 2.0731 - val_acc: 0.5008 405 | Epoch 4/32 406 | - 35s - loss: 2.0078 - acc: 0.5106 - val_loss: 2.0221 - val_acc: 0.5329 407 | Epoch 5/32 408 | - 35s - loss: 1.9103 - acc: 0.5507 - val_loss: 2.0130 - val_acc: 0.5337 409 | 410 | Epoch 5: finished gradual learning rate warmup to 0.0008. 
411 | Epoch 6/32 412 | - 34s - loss: 1.8197 - acc: 0.5833 - val_loss: 1.8510 - val_acc: 0.5863 413 | Epoch 7/32 414 | - 35s - loss: 1.6915 - acc: 0.6224 - val_loss: 1.8517 - val_acc: 0.5641 415 | Epoch 8/32 416 | - 35s - loss: 1.6378 - acc: 0.6340 - val_loss: 1.6502 - val_acc: 0.6332 417 | Epoch 9/32 418 | - 34s - loss: 1.5585 - acc: 0.6601 - val_loss: 1.7221 - val_acc: 0.6139 419 | Epoch 10/32 420 | - 34s - loss: 1.5230 - acc: 0.6695 - val_loss: 1.7451 - val_acc: 0.6151 421 | Epoch 11/32 422 | - 34s - loss: 1.4444 - acc: 0.6993 - val_loss: 1.6942 - val_acc: 0.6225 423 | Epoch 12/32 424 | - 34s - loss: 1.3990 - acc: 0.7033 - val_loss: 1.5230 - val_acc: 0.6776 425 | Epoch 13/32 426 | - 35s - loss: 1.3188 - acc: 0.7303 - val_loss: 1.4569 - val_acc: 0.6817 427 | Epoch 14/32 428 | - 34s - loss: 1.2867 - acc: 0.7407 - val_loss: 1.5201 - val_acc: 0.6875 429 | Epoch 15/32 430 | - 34s - loss: 1.2525 - acc: 0.7468 - val_loss: 1.4897 - val_acc: 0.6760 431 | Epoch 16/32 432 | - 34s - loss: 1.2075 - acc: 0.7627 - val_loss: 1.2780 - val_acc: 0.7401 433 | Epoch 17/32 434 | - 35s - loss: 1.1533 - acc: 0.7743 - val_loss: 1.3750 - val_acc: 0.6978 435 | Epoch 18/32 436 | - 35s - loss: 1.1343 - acc: 0.7777 - val_loss: 1.2586 - val_acc: 0.7418 437 | Epoch 19/32 438 | - 34s - loss: 1.0797 - acc: 0.7969 - val_loss: 1.3009 - val_acc: 0.7434 439 | Epoch 20/32 440 | - 34s - loss: 1.0625 - acc: 0.8006 - val_loss: 1.2645 - val_acc: 0.7368 441 | Epoch 21/32 442 | - 34s - loss: 1.0053 - acc: 0.8183 - val_loss: 1.1778 - val_acc: 0.7656 443 | Epoch 22/32 444 | - 34s - loss: 0.9736 - acc: 0.8236 - val_loss: 1.1832 - val_acc: 0.7640 445 | Epoch 23/32 446 | - 35s - loss: 0.9682 - acc: 0.8246 - val_loss: 1.1552 - val_acc: 0.7656 447 | Epoch 24/32 448 | - 34s - loss: 0.9663 - acc: 0.8209 - val_loss: 1.1829 - val_acc: 0.7623 449 | Epoch 25/32 450 | - 34s - loss: 0.9081 - acc: 0.8420 - val_loss: 1.1996 - val_acc: 0.7483 451 | Epoch 26/32 452 | - 34s - loss: 0.9099 - acc: 0.8346 - val_loss: 1.1574 - 
val_acc: 0.7697 453 | Epoch 27/32 454 |  - 34s - loss: 0.8759 - acc: 0.8491 - val_loss: 1.2273 - val_acc: 0.7418 455 | Epoch 28/32 456 |  - 34s - loss: 0.8688 - acc: 0.8478 - val_loss: 1.0259 - val_acc: 0.8141 457 | Epoch 29/32 458 |  - 34s - loss: 0.8445 - acc: 0.8484 - val_loss: 1.2405 - val_acc: 0.7401 459 | Epoch 30/32 460 |  - 34s - loss: 0.8175 - acc: 0.8600 - val_loss: 1.1150 - val_acc: 0.7706 461 | Epoch 31/32 462 |  - 34s - loss: 0.8187 - acc: 0.8637 - val_loss: 1.0831 - val_acc: 0.7845 463 | Epoch 32/32 464 |  - 34s - loss: 0.7993 - acc: 0.8642 - val_loss: 1.0813 - val_acc: 0.7952 465 | 479 | 2019-11-13 17:14:43,598 INFO Best validation accuracy: 0.795 480 | 2019-11-13 17:14:43,601 INFO Average time per epoch [s]: 44.987 481 | 2019-11-13 17:14:43,657 INFO All done! 482 | Epoch time: 44.987020403146744 483 | -------------------------------------------------------------------------------- /logs/reference/hpo-cifar-cnn.out: -------------------------------------------------------------------------------- 1 | import keras; keras.datasets.cifar10.load_data() 2 | Using TensorFlow backend.
3 | python hpo_train.py -N 8 --verbose 4 | 1.2522231845855714 5 | {'--optimizer lr': 0.0011536366, '--dropout': 0.0051919226} 6 | Detected Slurm as the workload manager 7 | ------------------------------------------------------------ 8 | Optimizer Settings: 9 | ------------------------------------------------------------ 10 | generations: 4 11 | num_demes: 1 12 | pop_size: 8 13 | verbose: true 14 | mutation_rate: 0.6 15 | crossover_rate: 0.33 16 | migration_interval: 5 17 | ------------------------------------------------------------ 18 | Evaluator Settings: 19 | ------------------------------------------------------------ 20 | run_path: "run" 21 | fom: "FoM: " 22 | nodes: 8 23 | launcher: wlm 24 | ------------------------------------------------------------ 25 | Adding 8 individuals to each deme with genotype: 26 | --optimizer lr: 0.001, 27 | --dropout: 0.1, 28 | Adding mutants to first generation. 29 | ------------------------------------------------------------ 30 | Generation: 0 31 | ------------------------------------------------------------ 32 | Evaluating 8 genotypes. 
33 | Generation 0: 1/8 evaluations completed 34 | Generation 0: 2/8 evaluations completed 35 | Generation 0: 3/8 evaluations completed 36 | Generation 0: 4/8 evaluations completed 37 | Generation 0: 5/8 evaluations completed 38 | Generation 0: 7/8 evaluations completed 39 | Generation 0: 8/8 evaluations completed 40 | Generation 0: 7/8 evaluations completed 41 | ------------------------------------------------------------ 42 | Global Best: deme1_ind5 1.258409e+00 (1.1x) [1.468e+00 avg] 43 | Best hyperparameters: 44 | --optimizer lr: 0.00099883403 45 | --dropout: 0.088423011 46 | ------------------------------------------------------------ 47 | deme1 size: 8 fom: 1.468e+00 (avg) 48 | deme1_ind5 fom: 1.258e+00 (local best) 49 | --optimizer lr=0.00099883403 --dropout=0.088423011 50 | ------------------------------------------------------------ 51 | Timings: 52 | Setup: 2.515e-04 s 53 | Reading checkpoint: 6.250e-07 s 54 | Evaluation: 4.158e+02 s 55 | Writing checkpoint: 1.375e-06 s 56 | Cleanup: 0.000e+00 s 57 | ------------------------------------------------------------ 58 | Migrating individuals across demes. 59 | Migrating deme1_ind5 to deme1 60 | ------------------------------------------------------------ 61 | Generation: 1 62 | ------------------------------------------------------------ 63 | Evaluating 8 genotypes. 
64 | Generation 1: 1/8 evaluations completed 65 | Generation 1: 2/8 evaluations completed 66 | Generation 1: 3/8 evaluations completed 67 | Generation 1: 5/8 evaluations completed 68 | Generation 1: 5/8 evaluations completed 69 | Generation 1: 6/8 evaluations completed 70 | Generation 1: 7/8 evaluations completed 71 | Generation 1: 8/8 evaluations completed 72 | ------------------------------------------------------------ 73 | Global Best: deme1_ind5 1.258409e+00 (1.1x) [3.234e+00 avg] 74 | Best hyperparameters: 75 | --optimizer lr: 0.00099883403 76 | --dropout: 0.088423011 77 | ------------------------------------------------------------ 78 | deme1 size: 8 fom: 3.234e+00 (avg) 79 | deme1_ind13 fom: 1.272e+00 (local best) 80 | --optimizer lr=0.00099883403 --dropout=0.077457299 81 | ------------------------------------------------------------ 82 | Timings: 83 | Setup: 2.440e-04 s 84 | Reading checkpoint: 6.250e-07 s 85 | Evaluation: 4.110e+02 s 86 | Writing checkpoint: 1.375e-06 s 87 | Cleanup: 0.000e+00 s 88 | ------------------------------------------------------------ 89 | ------------------------------------------------------------ 90 | Generation: 2 91 | ------------------------------------------------------------ 92 | Evaluating 8 genotypes. 
93 | Generation 2: 1/8 evaluations completed 94 | Generation 2: 2/8 evaluations completed 95 | Generation 2: 3/8 evaluations completed 96 | Generation 2: 4/8 evaluations completed 97 | Generation 2: 6/8 evaluations completed 98 | Generation 2: 7/8 evaluations completed 99 | Generation 2: 5/8 evaluations completed 100 | Generation 2: 8/8 evaluations completed 101 | ------------------------------------------------------------ 102 | Global Best: deme1_ind5 1.258409e+00 (1.1x) [1.938e+00 avg] 103 | Best hyperparameters: 104 | --optimizer lr: 0.00099883403 105 | --dropout: 0.088423011 106 | ------------------------------------------------------------ 107 | deme1 size: 8 fom: 1.938e+00 (avg) 108 | deme1_ind24 fom: 1.286e+00 (local best) 109 | --optimizer lr=0.00098792034 --dropout=0.086596908 110 | ------------------------------------------------------------ 111 | Timings: 112 | Setup: 1.826e-04 s 113 | Reading checkpoint: 7.500e-07 s 114 | Evaluation: 3.981e+02 s 115 | Writing checkpoint: 2.125e-06 s 116 | Cleanup: 2.500e-07 s 117 | ------------------------------------------------------------ 118 | ------------------------------------------------------------ 119 | Generation: 3 120 | ------------------------------------------------------------ 121 | Evaluating 8 genotypes. 
122 | Generation 3: 1/8 evaluations completed 123 | Generation 3: 2/8 evaluations completed 124 | Generation 3: 3/8 evaluations completed 125 | Generation 3: 5/8 evaluations completed 126 | Generation 3: 6/8 evaluations completed 127 | Generation 3: 7/8 evaluations completed 128 | Generation 3: 4/8 evaluations completed 129 | Generation 3: 8/8 evaluations completed 130 | ------------------------------------------------------------ 131 | Global Best: deme1_ind32 1.252223e+00 (1.1x) [1.436e+00 avg] 132 | Best hyperparameters: 133 | --optimizer lr: 0.0011536366 134 | --dropout: 0.0051919226 135 | ------------------------------------------------------------ 136 | deme1 size: 8 fom: 1.436e+00 (avg) 137 | deme1_ind32 fom: 1.252e+00 (local best) 138 | --optimizer lr=0.0011536366 --dropout=0.0051919226 139 | ------------------------------------------------------------ 140 | Timings: 141 | Setup: 3.499e-03 s 142 | Reading checkpoint: 6.250e-07 s 143 | Evaluation: 3.889e+02 s 144 | Writing checkpoint: 1.875e-06 s 145 | Cleanup: 0.000e+00 s 146 | ------------------------------------------------------------ 147 | ------------------------------------------------------------ 148 | Best: deme1_ind32 fom: 1.252223e+00 (1.07988x) 149 | --optimizer lr=0.0011536366 --dropout=0.0051919226 150 | ------------------------------------------------------------ 151 | Nodes deallocated 152 | -------------------------------------------------------------------------------- /logs/reference/hpo-mnist-lenet5.out: -------------------------------------------------------------------------------- 1 | ------------------------------------------------------------ 2 | Optimizer Settings: 3 | ------------------------------------------------------------ 4 | generations: 3 5 | num_demes: 1 6 | pop_size: 8 7 | verbose: true 8 | mutation_rate: 0.5 9 | crossover_rate: 0.33 10 | migration_interval: 5 11 | Warning: overwriting existing log file: mnist-topology.log 12 | 
------------------------------------------------------------ 13 | Adding 8 individuals to each deme with genotype: 14 | --dropout: 0.5, 15 | --momentum: 0.0001, 16 | --c1_sz: 5, 17 | --c1_ft: 32, 18 | --c2_sz: 5, 19 | --c2_ft: 64, 20 | --fullyc_sz: 1024, 21 | Adding mutants to first generation. 22 | ------------------------------------------------------------ 23 | Generation: 0 24 | ------------------------------------------------------------ 25 | Evaluating 8 genotypes. 26 | Generation 0: [---------------------------------------------] Generation 0: [#####----------------------------------------] Generation 0: [###########----------------------------------] Generation 0: [#################----------------------------] Generation 0: [######################-----------------------] Generation 0: [############################-----------------] Generation 0: [#################################------------] Generation 0: [#######################################------] Generation 0: [#############################################] 27 | ------------------------------------------------------------ 28 | Global Best: deme1_ind0 4.869086e+01 (1x) [1.406e+02 avg] 29 | Best hyperparameters: 30 | --dropout: 0.5 31 | --momentum: 0.0001 32 | --c1_sz: 5 33 | --c1_ft: 32 34 | --c2_sz: 5 35 | --c2_ft: 64 36 | --fullyc_sz: 1024 37 | ------------------------------------------------------------ 38 | deme1 size: 8 fom: 1.406e+02 (avg) 39 | deme1_ind0 fom: 4.869e+01 (local best) 40 | --dropout=0.5 --momentum=0.0001 --c1_sz=5 --c1_ft=32 --c2_sz=5 --c2_ft=64 --fullyc_sz=1024 41 | ------------------------------------------------------------ 42 | Timings: 43 | Setup: 7.287e-01 s 44 | Reading checkpoint: 1.250e-06 s 45 | Evaluation: 3.137e+02 s 46 | Writing checkpoint: 1.875e-06 s 47 | Cleanup: 1.250e-07 s 48 | ------------------------------------------------------------ 49 | Logging deme1 results: "Deme1_mnist-topology.log" 50 | Logging global results: "mnist-topology.log" 51 | Migrating 
individuals across demes. 52 | Migrating deme1_ind0 to deme1 53 | ------------------------------------------------------------ 54 | Generation: 1 55 | ------------------------------------------------------------ 56 | Evaluating 8 genotypes. 57 | Generation 1: [---------------------------------------------] Generation 1: [###########----------------------------------] Generation 1: [###########----------------------------------] Generation 1: [############################-----------------] Generation 1: [######################-----------------------] Generation 1: [#################################------------] Generation 1: [#################----------------------------] Generation 1: [#######################################------] Generation 1: [#############################################] 58 | ------------------------------------------------------------ 59 | Global Best: deme1_ind10 2.191395e+01 (2.2x) [7.695e+01 avg] 60 | Best hyperparameters: 61 | --dropout: 0.54346338 62 | --momentum: 0.0003550819 63 | --c1_sz: 4 64 | --c1_ft: 27 65 | --c2_sz: 4 66 | --c2_ft: 58 67 | --fullyc_sz: 1100 68 | ------------------------------------------------------------ 69 | deme1 size: 8 fom: 7.695e+01 (avg) 70 | deme1_ind10 fom: 2.191e+01 (local best) 71 | --dropout=0.54346338 --momentum=0.0003550819 --c1_sz=4 --c1_ft=27 --c2_sz=4 --c2_ft=58 --fullyc_sz=1100 72 | ------------------------------------------------------------ 73 | Timings: 74 | Setup: 7.553e-01 s 75 | Reading checkpoint: 1.750e-06 s 76 | Evaluation: 2.031e+02 s 77 | Writing checkpoint: 1.750e-06 s 78 | Cleanup: 0.000e+00 s 79 | ------------------------------------------------------------ 80 | Logging deme1 results: "Deme1_mnist-topology.log" 81 | Logging global results: "mnist-topology.log" 82 | ------------------------------------------------------------ 83 | Generation: 2 84 | ------------------------------------------------------------ 85 | Evaluating 8 genotypes. 
86 | Generation 2: [---------------------------------------------] Generation 2: [###########----------------------------------] Generation 2: [#################################------------] Generation 2: [############################-----------------] Generation 2: [#####----------------------------------------] Generation 2: [######################-----------------------] Generation 2: [#################----------------------------] Generation 2: [#######################################------] Generation 2: [#############################################] 87 | ------------------------------------------------------------ 88 | Genetic HPO Example: LeNet-5 (MNIST) TensorFlow -- Cray Inc. 89 | ------------------------------------------------------------ 90 | Best FoM: 21.91395 91 | Best HPs: 92 | --dropout = 0.54346338 93 | --momentum = 0.0003550819 94 | --c1_sz = 4 95 | --c1_ft = 27 96 | --c2_sz = 4 97 | --c2_ft = 58 98 | --fullyc_sz = 1100 99 | ------------------------------------------------------------ 100 | Done. 
101 | ------------------------------------------------------------ 102 | ------------------------------------------------------------ 103 | Global Best: deme1_ind10 2.191395e+01 (2.2x) [5.464e+01 avg] 104 | Best hyperparameters: 105 | --dropout: 0.54346338 106 | --momentum: 0.0003550819 107 | --c1_sz: 4 108 | --c1_ft: 27 109 | --c2_sz: 4 110 | --c2_ft: 58 111 | --fullyc_sz: 1100 112 | ------------------------------------------------------------ 113 | deme1 size: 8 fom: 5.464e+01 (avg) 114 | deme1_ind22 fom: 2.207e+01 (local best) 115 | --dropout=0.5 --momentum=0.00035288446 --c1_sz=6 --c1_ft=27 --c2_sz=4 --c2_ft=58 --fullyc_sz=1210 116 | ------------------------------------------------------------ 117 | Timings: 118 | Setup: 1.175e+00 s 119 | Reading checkpoint: 1.125e-06 s 120 | Evaluation: 2.844e+02 s 121 | Writing checkpoint: 1.625e-06 s 122 | Cleanup: 0.000e+00 s 123 | ------------------------------------------------------------ 124 | Logging deme1 results: "Deme1_mnist-topology.log" 125 | Logging global results: "mnist-topology.log" 126 | ------------------------------------------------------------ 127 | Best: deme1_ind10 fom: 2.191395e+01 (2.22191x) 128 | --dropout=0.54346338 --momentum=0.0003550819 --c1_sz=4 --c1_ft=27 --c2_sz=4 --c2_ft=58 --fullyc_sz=1100 129 | ------------------------------------------------------------ 130 | Nodes deallocated 131 | -------------------------------------------------------------------------------- /logs/reference/hpo-sin-genetic.out: -------------------------------------------------------------------------------- 1 | ------------------------------------------------------------ 2 | Optimizer Settings: 3 | ------------------------------------------------------------ 4 | generations: 13 5 | num_demes: 1 6 | pop_size: 10 7 | verbose: true 8 | mutation_rate: 0.9 9 | crossover_rate: 0.33 10 | migration_interval: 5 11 | ------------------------------------------------------------ 12 | Evaluator Settings: 13 | 
------------------------------------------------------------ 14 | run_path: "run" 15 | fom: "FoM: " 16 | launcher: local 17 | Warning: overwriting existing log file: genetic.log 18 | ------------------------------------------------------------ 19 | Adding 10 individuals to each deme with genotype: 20 | -a: 1, 21 | -b: -1, 22 | -c: 1, 23 | -d: -1, 24 | -e: 1, 25 | -f: -1, 26 | -g: 1, 27 | Adding mutants to first generation. 28 | ------------------------------------------------------------ 29 | Generation: 0 30 | ------------------------------------------------------------ 31 | Evaluating 10 genotypes. 32 | Generation 0: 1/10 evaluations completed 33 | Generation 0: 2/10 evaluations completed 34 | Generation 0: 3/10 evaluations completed 35 | Generation 0: 4/10 evaluations completed 36 | Generation 0: 5/10 evaluations completed 37 | Generation 0: 6/10 evaluations completed 38 | Generation 0: 7/10 evaluations completed 39 | Generation 0: 8/10 evaluations completed 40 | Generation 0: 9/10 evaluations completed 41 | Generation 0: 10/10 evaluations completed 42 | ------------------------------------------------------------ 43 | Global Best: deme1_ind9 1.341319e+04 (1.1x) [1.414e+04 avg] 44 | Best hyperparameters: 45 | -a: 0.99776378 46 | -b: -1 47 | -c: 1 48 | -d: -0.90766364 49 | -e: 0.9939633 50 | -f: -0.88644436 51 | -g: 1 52 | ------------------------------------------------------------ 53 | deme1 size: 10 fom: 1.414e+04 (avg) 54 | deme1_ind9 fom: 1.341e+04 (local best) 55 | -a=0.99776378 -b=-1 -c=1 -d=-0.90766364 -e=0.9939633 -f=-0.88644436 -g=1 56 | ------------------------------------------------------------ 57 | Timings: 58 | Setup: 2.740e-03 s 59 | Reading checkpoint: 7.000e-07 s 60 | Evaluation: 4.559e-01 s 61 | Writing checkpoint: 2.200e-06 s 62 | Cleanup: 1.000e-07 s 63 | ------------------------------------------------------------ 64 | Logging deme1 results: "Deme1_genetic.log" 65 | Logging global results: "genetic.log" 66 | Migrating individuals across 
demes. 67 | Migrating deme1_ind9 to deme1 68 | ------------------------------------------------------------ 69 | Generation: 1 70 | ------------------------------------------------------------ 71 | Evaluating 10 genotypes. 72 | Generation 1: 1/10 evaluations completed 73 | Generation 1: 2/10 evaluations completed 74 | Generation 1: 3/10 evaluations completed 75 | Generation 1: 4/10 evaluations completed 76 | Generation 1: 5/10 evaluations completed 77 | Generation 1: 6/10 evaluations completed 78 | Generation 1: 7/10 evaluations completed 79 | Generation 1: 8/10 evaluations completed 80 | Generation 1: 9/10 evaluations completed 81 | Generation 1: 10/10 evaluations completed 82 | ------------------------------------------------------------ 83 | Global Best: deme1_ind16 1.082925e+04 (1.3x) [1.314e+04 avg] 84 | Best hyperparameters: 85 | -a: 0.83799557 86 | -b: -0.91119517 87 | -c: 1 88 | -d: -0.92303407 89 | -e: 1 90 | -f: -1 91 | -g: 0.84205527 92 | ------------------------------------------------------------ 93 | deme1 size: 10 fom: 1.314e+04 (avg) 94 | deme1_ind16 fom: 1.083e+04 (local best) 95 | -a=0.83799557 -b=-0.91119517 -c=1 -d=-0.92303407 -e=1 -f=-1 -g=0.84205527 96 | ------------------------------------------------------------ 97 | Timings: 98 | Setup: 2.755e-03 s 99 | Reading checkpoint: 9.000e-07 s 100 | Evaluation: 4.827e-01 s 101 | Writing checkpoint: 1.900e-06 s 102 | Cleanup: 2.000e-07 s 103 | ------------------------------------------------------------ 104 | Logging deme1 results: "Deme1_genetic.log" 105 | Logging global results: "genetic.log" 106 | ------------------------------------------------------------ 107 | Generation: 2 108 | ------------------------------------------------------------ 109 | Evaluating 10 genotypes. 
110 | Generation 2: 1/10 evaluations completed 111 | Generation 2: 2/10 evaluations completed 112 | Generation 2: 3/10 evaluations completed 113 | Generation 2: 4/10 evaluations completed 114 | Generation 2: 5/10 evaluations completed 115 | Generation 2: 6/10 evaluations completed 116 | Generation 2: 7/10 evaluations completed 117 | Generation 2: 8/10 evaluations completed 118 | Generation 2: 9/10 evaluations completed 119 | Generation 2: 10/10 evaluations completed 120 | ------------------------------------------------------------ 121 | Global Best: deme1_ind23 1.025801e+04 (1.4x) [1.196e+04 avg] 122 | Best hyperparameters: 123 | -a: 1 124 | -b: -0.88453838 125 | -c: 0.99165825 126 | -d: -0.92536761 127 | -e: 1 128 | -f: -0.86116272 129 | -g: 0.84205527 130 | ------------------------------------------------------------ 131 | deme1 size: 10 fom: 1.196e+04 (avg) 132 | deme1_ind23 fom: 1.026e+04 (local best) 133 | -a=1 -b=-0.88453838 -c=0.99165825 -d=-0.92536761 -e=1 -f=-0.86116272 -g=0.84205527 134 | ------------------------------------------------------------ 135 | Timings: 136 | Setup: 2.746e-03 s 137 | Reading checkpoint: 9.000e-07 s 138 | Evaluation: 7.253e-01 s 139 | Writing checkpoint: 2.000e-06 s 140 | Cleanup: 2.000e-07 s 141 | ------------------------------------------------------------ 142 | Logging deme1 results: "Deme1_genetic.log" 143 | Logging global results: "genetic.log" 144 | ------------------------------------------------------------ 145 | Generation: 3 146 | ------------------------------------------------------------ 147 | Evaluating 10 genotypes. 
148 | Generation 3: 1/10 evaluations completed 149 | Generation 3: 2/10 evaluations completed 150 | Generation 3: 3/10 evaluations completed 151 | Generation 3: 4/10 evaluations completed 152 | Generation 3: 5/10 evaluations completed 153 | Generation 3: 6/10 evaluations completed 154 | Generation 3: 7/10 evaluations completed 155 | Generation 3: 8/10 evaluations completed 156 | Generation 3: 9/10 evaluations completed 157 | Generation 3: 10/10 evaluations completed 158 | ------------------------------------------------------------ 159 | Global Best: deme1_ind32 8.989368e+03 (1.6x) [1.103e+04 avg] 160 | Best hyperparameters: 161 | -a: 0.82964022 162 | -b: -0.79553607 163 | -c: 1 164 | -d: -1 165 | -e: 0.96117368 166 | -f: -1 167 | -g: 0.74313058 168 | ------------------------------------------------------------ 169 | deme1 size: 10 fom: 1.103e+04 (avg) 170 | deme1_ind32 fom: 8.989e+03 (local best) 171 | -a=0.82964022 -b=-0.79553607 -c=1 -d=-1 -e=0.96117368 -f=-1 -g=0.74313058 172 | ------------------------------------------------------------ 173 | Timings: 174 | Setup: 2.746e-03 s 175 | Reading checkpoint: 1.400e-06 s 176 | Evaluation: 5.752e-01 s 177 | Writing checkpoint: 2.200e-06 s 178 | Cleanup: 0.000e+00 s 179 | ------------------------------------------------------------ 180 | Logging deme1 results: "Deme1_genetic.log" 181 | Logging global results: "genetic.log" 182 | ------------------------------------------------------------ 183 | Generation: 4 184 | ------------------------------------------------------------ 185 | Evaluating 10 genotypes. 
186 | Generation 4: 1/10 evaluations completed 187 | Generation 4: 2/10 evaluations completed 188 | Generation 4: 3/10 evaluations completed 189 | Generation 4: 4/10 evaluations completed 190 | Generation 4: 5/10 evaluations completed 191 | Generation 4: 6/10 evaluations completed 192 | Generation 4: 7/10 evaluations completed 193 | Generation 4: 8/10 evaluations completed 194 | Generation 4: 9/10 evaluations completed 195 | Generation 4: 10/10 evaluations completed 196 | ------------------------------------------------------------ 197 | Global Best: deme1_ind50 7.228740e+03 (2x) [9.407e+03 avg] 198 | Best hyperparameters: 199 | -a: 0.85388454 200 | -b: -0.85113908 201 | -c: 0.86363183 202 | -d: -0.97305262 203 | -e: 0.88777286 204 | -f: -0.95899739 205 | -g: 0.65033172 206 | ------------------------------------------------------------ 207 | deme1 size: 10 fom: 9.407e+03 (avg) 208 | deme1_ind50 fom: 7.229e+03 (local best) 209 | -a=0.85388454 -b=-0.85113908 -c=0.86363183 -d=-0.97305262 -e=0.88777286 -f=-0.95899739 -g=0.65033172 210 | ------------------------------------------------------------ 211 | Timings: 212 | Setup: 2.731e-03 s 213 | Reading checkpoint: 1.000e-06 s 214 | Evaluation: 4.457e-01 s 215 | Writing checkpoint: 2.000e-06 s 216 | Cleanup: 1.000e-07 s 217 | ------------------------------------------------------------ 218 | Logging deme1 results: "Deme1_genetic.log" 219 | Logging global results: "genetic.log" 220 | ------------------------------------------------------------ 221 | Generation: 5 222 | ------------------------------------------------------------ 223 | Evaluating 10 genotypes. 
224 | Generation 5: 1/10 evaluations completed 225 | Generation 5: 2/10 evaluations completed 226 | Generation 5: 3/10 evaluations completed 227 | Generation 5: 4/10 evaluations completed 228 | Generation 5: 5/10 evaluations completed 229 | Generation 5: 6/10 evaluations completed 230 | Generation 5: 7/10 evaluations completed 231 | Generation 5: 8/10 evaluations completed 232 | Generation 5: 9/10 evaluations completed 233 | Generation 5: 10/10 evaluations completed 234 | ------------------------------------------------------------ 235 | Global Best: deme1_ind51 5.464852e+03 (2.7x) [7.328e+03 avg] 236 | Best hyperparameters: 237 | -a: 0.79469586 238 | -b: -1 239 | -c: 0.90107568 240 | -d: -0.91551729 241 | -e: 0.83364471 242 | -f: -1 243 | -g: 0.51096221 244 | ------------------------------------------------------------ 245 | deme1 size: 10 fom: 7.328e+03 (avg) 246 | deme1_ind51 fom: 5.465e+03 (local best) 247 | -a=0.79469586 -b=-1 -c=0.90107568 -d=-0.91551729 -e=0.83364471 -f=-1 -g=0.51096221 248 | ------------------------------------------------------------ 249 | Timings: 250 | Setup: 2.771e-03 s 251 | Reading checkpoint: 1.000e-06 s 252 | Evaluation: 4.449e-01 s 253 | Writing checkpoint: 1.900e-06 s 254 | Cleanup: 1.000e-07 s 255 | ------------------------------------------------------------ 256 | Logging deme1 results: "Deme1_genetic.log" 257 | Logging global results: "genetic.log" 258 | Migrating individuals across demes. 259 | Migrating deme1_ind51 to deme1 260 | ------------------------------------------------------------ 261 | Generation: 6 262 | ------------------------------------------------------------ 263 | Evaluating 10 genotypes. 
264 | Generation 6: 1/10 evaluations completed 265 | Generation 6: 2/10 evaluations completed 266 | Generation 6: 3/10 evaluations completed 267 | Generation 6: 4/10 evaluations completed 268 | Generation 6: 5/10 evaluations completed 269 | Generation 6: 6/10 evaluations completed 270 | Generation 6: 7/10 evaluations completed 271 | Generation 6: 8/10 evaluations completed 272 | Generation 6: 9/10 evaluations completed 273 | Generation 6: 10/10 evaluations completed 274 | ------------------------------------------------------------ 275 | Global Best: deme1_ind63 4.373028e+03 (3.3x) [6.487e+03 avg] 276 | Best hyperparameters: 277 | -a: 0.92824278 278 | -b: -1 279 | -c: 0.76946916 280 | -d: -0.739007 281 | -e: 0.83177148 282 | -f: -0.74999875 283 | -g: 0.48263319 284 | ------------------------------------------------------------ 285 | deme1 size: 10 fom: 6.487e+03 (avg) 286 | deme1_ind63 fom: 4.373e+03 (local best) 287 | -a=0.92824278 -b=-1 -c=0.76946916 -d=-0.739007 -e=0.83177148 -f=-0.74999875 -g=0.48263319 288 | ------------------------------------------------------------ 289 | Timings: 290 | Setup: 2.765e-03 s 291 | Reading checkpoint: 1.000e-06 s 292 | Evaluation: 4.417e-01 s 293 | Writing checkpoint: 2.100e-06 s 294 | Cleanup: 1.000e-07 s 295 | ------------------------------------------------------------ 296 | Logging deme1 results: "Deme1_genetic.log" 297 | Logging global results: "genetic.log" 298 | ------------------------------------------------------------ 299 | Generation: 7 300 | ------------------------------------------------------------ 301 | Evaluating 10 genotypes. 
302 | Generation 7: 1/10 evaluations completed 303 | Generation 7: 2/10 evaluations completed 304 | Generation 7: 3/10 evaluations completed 305 | Generation 7: 4/10 evaluations completed 306 | Generation 7: 5/10 evaluations completed 307 | Generation 7: 6/10 evaluations completed 308 | Generation 7: 7/10 evaluations completed 309 | Generation 7: 8/10 evaluations completed 310 | Generation 7: 9/10 evaluations completed 311 | Generation 7: 10/10 evaluations completed 312 | ------------------------------------------------------------ 313 | Global Best: deme1_ind73 3.339487e+03 (4.4x) [4.777e+03 avg] 314 | Best hyperparameters: 315 | -a: 1 316 | -b: -0.95882309 317 | -c: 0.92539025 318 | -d: -0.739007 319 | -e: 0.72598804 320 | -f: -0.75289471 321 | -g: 0.39622662 322 | ------------------------------------------------------------ 323 | deme1 size: 10 fom: 4.777e+03 (avg) 324 | deme1_ind73 fom: 3.339e+03 (local best) 325 | -a=1 -b=-0.95882309 -c=0.92539025 -d=-0.739007 -e=0.72598804 -f=-0.75289471 -g=0.39622662 326 | ------------------------------------------------------------ 327 | Timings: 328 | Setup: 2.710e-03 s 329 | Reading checkpoint: 8.000e-07 s 330 | Evaluation: 4.420e-01 s 331 | Writing checkpoint: 2.200e-06 s 332 | Cleanup: 0.000e+00 s 333 | ------------------------------------------------------------ 334 | Logging deme1 results: "Deme1_genetic.log" 335 | Logging global results: "genetic.log" 336 | ------------------------------------------------------------ 337 | Generation: 8 338 | ------------------------------------------------------------ 339 | Evaluating 10 genotypes. 
340 | Generation 8: 1/10 evaluations completed 341 | Generation 8: 2/10 evaluations completed 342 | Generation 8: 3/10 evaluations completed 343 | Generation 8: 4/10 evaluations completed 344 | Generation 8: 5/10 evaluations completed 345 | Generation 8: 6/10 evaluations completed 346 | Generation 8: 7/10 evaluations completed 347 | Generation 8: 8/10 evaluations completed 348 | Generation 8: 9/10 evaluations completed 349 | Generation 8: 10/10 evaluations completed 350 | ------------------------------------------------------------ 351 | Global Best: deme1_ind84 3.028281e+03 (4.8x) [4.208e+03 avg] 352 | Best hyperparameters: 353 | -a: 0.66129895 354 | -b: -0.82847478 355 | -c: 0.84150127 356 | -d: -0.91438633 357 | -e: 0.73735573 358 | -f: -0.67031382 359 | -g: 0.37153286 360 | ------------------------------------------------------------ 361 | deme1 size: 10 fom: 4.208e+03 (avg) 362 | deme1_ind84 fom: 3.028e+03 (local best) 363 | -a=0.66129895 -b=-0.82847478 -c=0.84150127 -d=-0.91438633 -e=0.73735573 -f=-0.67031382 -g=0.37153286 364 | ------------------------------------------------------------ 365 | Timings: 366 | Setup: 2.700e-03 s 367 | Reading checkpoint: 1.000e-06 s 368 | Evaluation: 4.427e-01 s 369 | Writing checkpoint: 2.000e-06 s 370 | Cleanup: 1.000e-07 s 371 | ------------------------------------------------------------ 372 | Logging deme1 results: "Deme1_genetic.log" 373 | Logging global results: "genetic.log" 374 | ------------------------------------------------------------ 375 | Generation: 9 376 | ------------------------------------------------------------ 377 | Evaluating 10 genotypes. 
378 | Generation 9: 1/10 evaluations completed 379 | Generation 9: 2/10 evaluations completed 380 | Generation 9: 3/10 evaluations completed 381 | Generation 9: 4/10 evaluations completed 382 | Generation 9: 5/10 evaluations completed 383 | Generation 9: 6/10 evaluations completed 384 | Generation 9: 7/10 evaluations completed 385 | Generation 9: 8/10 evaluations completed 386 | Generation 9: 9/10 evaluations completed 387 | Generation 9: 10/10 evaluations completed 388 | ------------------------------------------------------------ 389 | Global Best: deme1_ind92 2.447685e+03 (6x) [3.390e+03 avg] 390 | Best hyperparameters: 391 | -a: 0.74588199 392 | -b: -0.99146308 393 | -c: 0.84746841 394 | -d: -0.92216931 395 | -e: 0.84171456 396 | -f: -0.67031382 397 | -g: 0.29526207 398 | ------------------------------------------------------------ 399 | deme1 size: 10 fom: 3.390e+03 (avg) 400 | deme1_ind92 fom: 2.448e+03 (local best) 401 | -a=0.74588199 -b=-0.99146308 -c=0.84746841 -d=-0.92216931 -e=0.84171456 -f=-0.67031382 -g=0.29526207 402 | ------------------------------------------------------------ 403 | Timings: 404 | Setup: 2.738e-03 s 405 | Reading checkpoint: 1.000e-06 s 406 | Evaluation: 4.449e-01 s 407 | Writing checkpoint: 2.200e-06 s 408 | Cleanup: 0.000e+00 s 409 | ------------------------------------------------------------ 410 | Logging deme1 results: "Deme1_genetic.log" 411 | Logging global results: "genetic.log" 412 | ------------------------------------------------------------ 413 | Generation: 10 414 | ------------------------------------------------------------ 415 | Evaluating 10 genotypes. 
416 | Generation 10: 1/10 evaluations completed 417 | Generation 10: 2/10 evaluations completed 418 | Generation 10: 3/10 evaluations completed 419 | Generation 10: 4/10 evaluations completed 420 | Generation 10: 5/10 evaluations completed 421 | Generation 10: 6/10 evaluations completed 422 | Generation 10: 7/10 evaluations completed 423 | Generation 10: 8/10 evaluations completed 424 | Generation 10: 9/10 evaluations completed 425 | Generation 10: 10/10 evaluations completed 426 | ------------------------------------------------------------ 427 | Global Best: deme1_ind108 2.380012e+03 (6.1x) [3.037e+03 avg] 428 | Best hyperparameters: 429 | -a: 1 430 | -b: -0.9422307 431 | -c: 0.61173754 432 | -d: -0.92256054 433 | -e: 0.82376069 434 | -f: -0.59848258 435 | -g: 0.30472243 436 | ------------------------------------------------------------ 437 | deme1 size: 10 fom: 3.037e+03 (avg) 438 | deme1_ind108 fom: 2.380e+03 (local best) 439 | -a=1 -b=-0.9422307 -c=0.61173754 -d=-0.92256054 -e=0.82376069 -f=-0.59848258 -g=0.30472243 440 | ------------------------------------------------------------ 441 | Timings: 442 | Setup: 2.711e-03 s 443 | Reading checkpoin/global/cscratch1/sd/bja/sc19-dl-tutorial/hpo/sin/source 444 | 597.7443 445 | {'-a': 0.69540796, '-b': -0.8145451, '-c': 0.71544262, '-d': -0.94221679, '-e': 0.54342216, '-f': -0.50670459, '-g': 0.023590586} 446 | t: 1.000e-06 s 447 | Evaluation: 4.437e-01 s 448 | Writing checkpoint: 1.800e-06 s 449 | Cleanup: 0.000e+00 s 450 | ------------------------------------------------------------ 451 | Logging deme1 results: "Deme1_genetic.log" 452 | Logging global results: "genetic.log" 453 | Migrating individuals across demes. 454 | Migrating deme1_ind108 to deme1 455 | ------------------------------------------------------------ 456 | Generation: 11 457 | ------------------------------------------------------------ 458 | Evaluating 10 genotypes. 
459 | Generation 11: 1/10 evaluations completed 460 | Generation 11: 2/10 evaluations completed 461 | Generation 11: 3/10 evaluations completed 462 | Generation 11: 4/10 evaluations completed 463 | Generation 11: 5/10 evaluations completed 464 | Generation 11: 6/10 evaluations completed 465 | Generation 11: 7/10 evaluations completed 466 | Generation 11: 8/10 evaluations completed 467 | Generation 11: 9/10 evaluations completed 468 | Generation 11: 10/10 evaluations completed 469 | ------------------------------------------------------------ 470 | Global Best: deme1_ind117 1.148292e+03 (13x) [2.598e+03 avg] 471 | Best hyperparameters: 472 | -a: 0.8025413 473 | -b: -0.8145451 474 | -c: 0.86724208 475 | -d: -0.91358901 476 | -e: 0.5640884 477 | -f: -0.51810975 478 | -g: 0.15993373 479 | ------------------------------------------------------------ 480 | deme1 size: 10 fom: 2.598e+03 (avg) 481 | deme1_ind117 fom: 1.148e+03 (local best) 482 | -a=0.8025413 -b=-0.8145451 -c=0.86724208 -d=-0.91358901 -e=0.5640884 -f=-0.51810975 -g=0.15993373 483 | ------------------------------------------------------------ 484 | Timings: 485 | Setup: 2.758e-03 s 486 | Reading checkpoint: 8.000e-07 s 487 | Evaluation: 4.510e-01 s 488 | Writing checkpoint: 2.200e-06 s 489 | Cleanup: 1.000e-07 s 490 | ------------------------------------------------------------ 491 | Logging deme1 results: "Deme1_genetic.log" 492 | Logging global results: "genetic.log" 493 | ------------------------------------------------------------ 494 | Generation: 12 495 | ------------------------------------------------------------ 496 | Evaluating 10 genotypes. 
497 | Generation 12: 1/10 evaluations completed 498 | Generation 12: 2/10 evaluations completed 499 | Generation 12: 3/10 evaluations completed 500 | Generation 12: 4/10 evaluations completed 501 | Generation 12: 5/10 evaluations completed 502 | Generation 12: 6/10 evaluations completed 503 | Generation 12: 7/10 evaluations completed 504 | Generation 12: 8/10 evaluations completed 505 | Generation 12: 9/10 evaluations completed 506 | Generation 12: 10/10 evaluations completed 507 | ------------------------------------------------------------ 508 | Global Best: deme1_ind124 5.977443e+02 (24x) [1.277e+03 avg] 509 | Best hyperparameters: 510 | -a: 0.69540796 511 | -b: -0.8145451 512 | -c: 0.71544262 513 | -d: -0.94221679 514 | -e: 0.54342216 515 | -f: -0.50670459 516 | -g: 0.023590586 517 | ------------------------------------------------------------ 518 | deme1 size: 10 fom: 1.277e+03 (avg) 519 | deme1_ind124 fom: 5.977e+02 (local best) 520 | -a=0.69540796 -b=-0.8145451 -c=0.71544262 -d=-0.94221679 -e=0.54342216 -f=-0.50670459 -g=0.023590586 521 | ------------------------------------------------------------ 522 | Timings: 523 | Setup: 2.759e-03 s 524 | Reading checkpoint: 1.000e-06 s 525 | Evaluation: 4.480e-01 s 526 | Writing checkpoint: 2.100e-06 s 527 | Cleanup: 0.000e+00 s 528 | ------------------------------------------------------------ 529 | Logging deme1 results: "Deme1_genetic.log" 530 | Logging global results: "genetic.log" 531 | ------------------------------------------------------------ 532 | Best: deme1_ind124 fom: 5.977443e+02 (24.4225x) 533 | -a=0.69540796 -b=-0.8145451 -c=0.71544262 -d=-0.94221679 -e=0.54342216 -f=-0.50670459 -g=0.023590586 534 | ------------------------------------------------------------ 535 | -------------------------------------------------------------------------------- /logs/reference/hpo-sin-grid.out: -------------------------------------------------------------------------------- 1 | 6948.889 2 | {'-a': -1, '-b': -1, '-c': 
-1, '-d': -1, '-e': -1, '-f': 1, '-g': 1} 3 | ------------------------------------------------------------ 4 | Baseline Hyperparameters: 5 | ------------------------------------------------------------ 6 | -a: "1.0", 7 | -b: "-1.0", 8 | -c: "1.0", 9 | -d: "-1.0", 10 | -e: "1.0", 11 | -f: "-1.0", 12 | -g: "1.0", 13 | ------------------------------------------------------------ 14 | Grid Search with 128 Points: 15 | ------------------------------------------------------------ 16 | -a:-1.0 1.0 17 | -b:-1.0 1.0 18 | -c:-1.0 1.0 19 | -d:-1.0 1.0 20 | -e:-1.0 1.0 21 | -f:-1.0 1.0 22 | -g:-1.0 1.0 23 | ------------------------------------------------------------ 24 | Starting Search: 25 | ------------------------------------------------------------ 26 | Computing 129 grid points in batch sizes of 50 27 | Completed 50/129 (38.76%) grid points 28 | ------------------------------------------------------------ 29 | Best: point5 fom: 6.9489e+03 (2.1x) 30 | -a: "-1.0" 31 | -b: "-1.0" 32 | -c: "-1.0" 33 | -d: "-1.0" 34 | -e: "-1.0" 35 | -f: "1.0" 36 | -g: "1.0" 37 | ------------------------------------------------------------ 38 | Completed 100/129 (77.52%) grid points 39 | ------------------------------------------------------------ 40 | Best: point5 fom: 6.9489e+03 (2.1x) 41 | -a: "-1.0" 42 | -b: "-1.0" 43 | -c: "-1.0" 44 | -d: "-1.0" 45 | -e: "-1.0" 46 | -f: "1.0" 47 | -g: "1.0" 48 | ------------------------------------------------------------ 49 | Completed 129/129 (100.00%) grid points 50 | ------------------------------------------------------------ 51 | Best: point5 fom: 6.9489e+03 (2.1x) 52 | -a: "-1.0" 53 | -b: "-1.0" 54 | -c: "-1.0" 55 | -d: "-1.0" 56 | -e: "-1.0" 57 | -f: "1.0" 58 | -g: "1.0" 59 | ------------------------------------------------------------ 60 | fomFirst: 14656.5 61 | ------------------------------------------------------------ 62 | Best: point5 fom: 6.9489e+03 (2.1x) 63 | -a: "-1.0" 64 | -b: "-1.0" 65 | -c: "-1.0" 66 | -d: "-1.0" 67 | -e: "-1.0" 68 | 
-f: "1.0" 69 | -g: "1.0" 70 | ------------------------------------------------------------ 71 | -------------------------------------------------------------------------------- /logs/reference/hpo-sin-random.out: -------------------------------------------------------------------------------- 1 | 47.61292 2 | {'-a': -0.0074535096, '-b': -0.78004822, '-c': -0.5844797, '-d': -0.00028196877, '-e': 0.014913584, '-f': 0.0041539672, '-g': 0.07685726} 3 | ------------------------------------------------------------ 4 | Baseline Hyperparameters: 5 | ------------------------------------------------------------ 6 | -a: "1.0", 7 | -b: "-1.0", 8 | -c: "1.0", 9 | -d: "-1.0", 10 | -e: "1.0", 11 | -f: "-1.0", 12 | -g: "1.0", 13 | ------------------------------------------------------------ 14 | Random Search with 129 Iterations: 15 | ------------------------------------------------------------ 16 | -a:(-1, 1) 17 | -b:(-1, 1) 18 | -c:(-1, 1) 19 | -d:(-1, 1) 20 | -e:(-1, 1) 21 | -f:(-1, 1) 22 | -g:(-1, 1) 23 | ------------------------------------------------------------ 24 | Starting Search: 25 | ------------------------------------------------------------ 26 | fomFirst: 14186.4 27 | ------------------------------------------------------------ 28 | Best: point85 fom: 4.7613e+01 (3e+02x) 29 | -a: -0.0074535096, 30 | -b: -0.78004822, 31 | -c: -0.5844797, 32 | -d: -0.00028196877, 33 | -e: 0.014913584, 34 | -f: 0.0041539672, 35 | -g: 0.07685726, 36 | ------------------------------------------------------------ 37 | -------------------------------------------------------------------------------- /models/__init__.py: -------------------------------------------------------------------------------- 1 | """ 2 | Keras example model factory functions. 
3 | """ 4 | 5 | def get_model(name, **model_args): 6 | if name == 'cnn': 7 | from .cnn import build_model 8 | return build_model(**model_args) 9 | elif name == 'resnet_small': 10 | from .resnet import ResNetSmall 11 | return ResNetSmall(**model_args) 12 | elif name == 'resnet50': 13 | from .resnet import ResNet50 14 | return ResNet50(**model_args) 15 | else: 16 | raise ValueError('Model %s unknown' % name) 17 | -------------------------------------------------------------------------------- /models/cnn.py: -------------------------------------------------------------------------------- 1 | """ 2 | Simple CNN classifier model. 3 | """ 4 | 5 | import keras 6 | from keras.models import Sequential 7 | from keras.layers import Dense, Dropout, Activation, Flatten 8 | from keras.layers import Conv2D, MaxPooling2D 9 | 10 | def build_model(input_shape=(32, 32, 3), n_classes=10, dropout=0): 11 | """Construct the simple CNN model""" 12 | conv_args = dict(kernel_size=3, padding='same', activation='relu') 13 | model = Sequential() 14 | model.add(Conv2D(16, input_shape=input_shape, **conv_args)) 15 | model.add(MaxPooling2D(pool_size=2)) 16 | model.add(Conv2D(32, **conv_args)) 17 | model.add(MaxPooling2D(pool_size=2)) 18 | model.add(Conv2D(64, **conv_args)) 19 | model.add(MaxPooling2D(pool_size=2)) 20 | model.add(Flatten()) 21 | model.add(Dense(128, activation='relu')) 22 | model.add(Dropout(dropout)) 23 | model.add(Dense(n_classes, activation='softmax')) 24 | return model 25 | -------------------------------------------------------------------------------- /models/resnet.py: -------------------------------------------------------------------------------- 1 | """ 2 | ResNet models for Keras. 
3 | Implementations have been adapted from keras_applications/resnet50.py 4 | """ 5 | 6 | # Externals 7 | import keras 8 | from keras import backend, layers, models, regularizers 9 | 10 | def identity_block(input_tensor, kernel_size, filters, stage, block, 11 | l2_reg=5e-5, bn_mom=0.9): 12 | """The identity block is the block that has no conv layer at shortcut. 13 | 14 | # Arguments 15 | input_tensor: input tensor 16 | kernel_size: default 3, the kernel size of 17 | middle conv layer at main path 18 | filters: list of integers, the filters of 3 conv layer at main path 19 | stage: integer, current stage label, used for generating layer names 20 | block: 'a','b'..., current block label, used for generating layer names 21 | l2_reg: L2 weight regularization (weight decay) 22 | bn_mom: batch-norm momentum 23 | 24 | # Returns 25 | Output tensor for the block. 26 | """ 27 | filters1, filters2, filters3 = filters 28 | if backend.image_data_format() == 'channels_last': 29 | bn_axis = 3 30 | else: 31 | bn_axis = 1 32 | conv_name_base = 'res' + str(stage) + block + '_branch' 33 | bn_name_base = 'bn' + str(stage) + block + '_branch' 34 | 35 | x = layers.Conv2D(filters1, (1, 1), 36 | kernel_initializer='he_normal', 37 | kernel_regularizer=regularizers.l2(l2_reg), 38 | name=conv_name_base + '2a')(input_tensor) 39 | x = layers.BatchNormalization(axis=bn_axis, name=bn_name_base + '2a', 40 | momentum=bn_mom, epsilon=1e-5)(x) 41 | x = layers.Activation('relu')(x) 42 | 43 | x = layers.Conv2D(filters2, kernel_size, 44 | padding='same', 45 | kernel_initializer='he_normal', 46 | kernel_regularizer=regularizers.l2(l2_reg), 47 | name=conv_name_base + '2b')(x) 48 | x = layers.BatchNormalization(axis=bn_axis, name=bn_name_base + '2b', 49 | momentum=bn_mom, epsilon=1e-5)(x) 50 | x = layers.Activation('relu')(x) 51 | 52 | x = layers.Conv2D(filters3, (1, 1), 53 | kernel_initializer='he_normal', 54 | kernel_regularizer=regularizers.l2(l2_reg), 55 | name=conv_name_base + '2c')(x) 56 | x = 
layers.BatchNormalization(axis=bn_axis, name=bn_name_base + '2c', 57 | momentum=bn_mom, epsilon=1e-5)(x) 58 | 59 | x = layers.add([x, input_tensor]) 60 | x = layers.Activation('relu')(x) 61 | return x 62 | 63 | def conv_block(input_tensor, kernel_size, filters, stage, block, 64 | strides=(2, 2), l2_reg=5e-5, bn_mom=0.9): 65 | """A block that has a conv layer at shortcut. 66 | 67 | # Arguments 68 | input_tensor: input tensor 69 | kernel_size: default 3, the kernel size of 70 | middle conv layer at main path 71 | filters: list of integers, the filters of 3 conv layer at main path 72 | stage: integer, current stage label, used for generating layer names 73 | block: 'a','b'..., current block label, used for generating layer names 74 | strides: Strides for the first conv layer in the block. 75 | l2_reg: L2 weight regularization (weight decay) 76 | bn_mom: batch-norm momentum 77 | 78 | # Returns 79 | Output tensor for the block. 80 | """ 81 | filters1, filters2, filters3 = filters 82 | if backend.image_data_format() == 'channels_last': 83 | bn_axis = 3 84 | else: 85 | bn_axis = 1 86 | conv_name_base = 'res' + str(stage) + block + '_branch' 87 | bn_name_base = 'bn' + str(stage) + block + '_branch' 88 | 89 | x = layers.Conv2D(filters1, (1, 1), strides=strides, 90 | kernel_initializer='he_normal', 91 | kernel_regularizer=regularizers.l2(l2_reg), 92 | name=conv_name_base + '2a')(input_tensor) 93 | x = layers.BatchNormalization(axis=bn_axis, name=bn_name_base + '2a', momentum=bn_mom, epsilon=1e-5)(x) 94 | x = layers.Activation('relu')(x) 95 | 96 | x = layers.Conv2D(filters2, kernel_size, padding='same', 97 | kernel_initializer='he_normal', 98 | kernel_regularizer=regularizers.l2(l2_reg), 99 | name=conv_name_base + '2b')(x) 100 | x = layers.BatchNormalization(axis=bn_axis, name=bn_name_base + '2b', momentum=bn_mom, epsilon=1e-5)(x) 101 | x = layers.Activation('relu')(x) 102 | 103 | x = layers.Conv2D(filters3, (1, 1), 104 | kernel_initializer='he_normal', 105 | kernel_regularizer=regularizers.l2(l2_reg), 106 | name=conv_name_base + 
'2c')(x) 107 | x = layers.BatchNormalization(axis=bn_axis, name=bn_name_base + '2c', momentum=bn_mom, epsilon=1e-5)(x) 108 | 109 | shortcut = layers.Conv2D(filters3, (1, 1), strides=strides, 110 | kernel_initializer='he_normal', 111 | kernel_regularizer=regularizers.l2(l2_reg), 112 | name=conv_name_base + '1')(input_tensor) 113 | shortcut = layers.BatchNormalization( 114 | axis=bn_axis, name=bn_name_base + '1', momentum=bn_mom, epsilon=1e-5)(shortcut) 115 | 116 | x = layers.add([x, shortcut]) 117 | x = layers.Activation('relu')(x) 118 | return x 119 | 120 | def ResNet50(input_shape=(224, 224, 3), n_classes=1000, 121 | l2_reg=5e-5, bn_mom=0.9): 122 | """Instantiates the ResNet50 architecture. 123 | 124 | # Arguments 125 | input_shape: input shape tuple. It should have 3 input channels. 126 | n_classes: number of classes to classify images. 127 | l2_reg: L2 weight regularization (weight decay) 128 | bn_mom: batch-norm momentum 129 | 130 | # Returns 131 | A Keras model instance. 132 | """ 133 | img_input = layers.Input(shape=input_shape) 134 | 135 | if backend.image_data_format() == 'channels_last': 136 | bn_axis = 3 137 | else: 138 | bn_axis = 1 139 | 140 | x = layers.ZeroPadding2D(padding=(3, 3), name='conv1_pad')(img_input) 141 | x = layers.Conv2D(64, (7, 7), strides=(2, 2), padding='valid', 142 | kernel_initializer='he_normal', 143 | kernel_regularizer=regularizers.l2(l2_reg), 144 | name='conv1')(x) 145 | x = layers.BatchNormalization(axis=bn_axis, name='bn_conv1', momentum=bn_mom, epsilon=1e-5)(x) 146 | x = layers.Activation('relu')(x) 147 | x = layers.ZeroPadding2D(padding=(1, 1), name='pool1_pad')(x) 148 | x = layers.MaxPooling2D((3, 3), strides=(2, 2))(x) 149 | 150 | x = conv_block(x, 3, [64, 64, 256], stage=2, block='a', strides=(1, 1), 151 | l2_reg=l2_reg, bn_mom=bn_mom) 152 | x = identity_block(x, 3, [64, 64, 256], stage=2, block='b', 153 | l2_reg=l2_reg, bn_mom=bn_mom) 154 | x = identity_block(x, 3, [64, 64, 256], stage=2, block='c', 155 | l2_reg=l2_reg, bn_mom=bn_mom) 156 | 157 | x = conv_block(x, 3, [128, 128, 512], stage=3, block='a', 158 | 
l2_reg=l2_reg, bn_mom=bn_mom) 159 | x = identity_block(x, 3, [128, 128, 512], stage=3, block='b', 160 | l2_reg=l2_reg, bn_mom=bn_mom) 161 | x = identity_block(x, 3, [128, 128, 512], stage=3, block='c', 162 | l2_reg=l2_reg, bn_mom=bn_mom) 163 | x = identity_block(x, 3, [128, 128, 512], stage=3, block='d', 164 | l2_reg=l2_reg, bn_mom=bn_mom) 165 | 166 | x = conv_block(x, 3, [256, 256, 1024], stage=4, block='a', 167 | l2_reg=l2_reg, bn_mom=bn_mom) 168 | x = identity_block(x, 3, [256, 256, 1024], stage=4, block='b', 169 | l2_reg=l2_reg, bn_mom=bn_mom) 170 | x = identity_block(x, 3, [256, 256, 1024], stage=4, block='c', 171 | l2_reg=l2_reg, bn_mom=bn_mom) 172 | x = identity_block(x, 3, [256, 256, 1024], stage=4, block='d', 173 | l2_reg=l2_reg, bn_mom=bn_mom) 174 | x = identity_block(x, 3, [256, 256, 1024], stage=4, block='e', 175 | l2_reg=l2_reg, bn_mom=bn_mom) 176 | x = identity_block(x, 3, [256, 256, 1024], stage=4, block='f', 177 | l2_reg=l2_reg, bn_mom=bn_mom) 178 | 179 | x = conv_block(x, 3, [512, 512, 2048], stage=5, block='a', 180 | l2_reg=l2_reg, bn_mom=bn_mom) 181 | x = identity_block(x, 3, [512, 512, 2048], stage=5, block='b', 182 | l2_reg=l2_reg, bn_mom=bn_mom) 183 | x = identity_block(x, 3, [512, 512, 2048], stage=5, block='c', 184 | l2_reg=l2_reg, bn_mom=bn_mom) 185 | 186 | x = layers.GlobalAveragePooling2D(name='avg_pool')(x) 187 | x = layers.Dense(n_classes, activation='softmax', 188 | kernel_regularizer=regularizers.l2(l2_reg), 189 | name='fc1000')(x) 190 | 191 | return models.Model(img_input, x, name='resnet50') 192 | 193 | 194 | def ResNetSmall(input_shape=(32, 32, 3), n_classes=10, 195 | l2_reg=5e-5, bn_mom=0.9): 196 | """Instantiates the small ResNet architecture. 197 | 198 | # Arguments 199 | input_shape: input shape tuple. It should have 3 input channels. 200 | n_classes: number of classes to classify images. 201 | l2_reg: L2 weight regularization (weight decay) 202 | bn_mom: batch-norm momentum 203 | 204 | # Returns 205 | A Keras model instance. 
206 | """ 207 | img_input = layers.Input(shape=input_shape) 208 | 209 | if backend.image_data_format() == 'channels_last': 210 | bn_axis = 3 211 | else: 212 | bn_axis = 1 213 | 214 | x = img_input 215 | x = layers.Conv2D(64, (3, 3), strides=(1, 1), padding='same', 216 | kernel_initializer='he_normal', 217 | kernel_regularizer=regularizers.l2(l2_reg), 218 | name='conv1')(x) 219 | x = layers.BatchNormalization(axis=bn_axis, name='bn_conv1', momentum=bn_mom, epsilon=1e-5)(x) 220 | x = layers.Activation('relu')(x) 221 | 222 | x = conv_block(x, 3, [64, 64, 64], stage=2, block='a', 223 | l2_reg=l2_reg, bn_mom=bn_mom) 224 | x = identity_block(x, 3, [64, 64, 64], stage=2, block='b', 225 | l2_reg=l2_reg, bn_mom=bn_mom) 226 | 227 | x = conv_block(x, 3, [128, 128, 128], stage=3, block='a', 228 | l2_reg=l2_reg, bn_mom=bn_mom) 229 | x = identity_block(x, 3, [128, 128, 128], stage=3, block='b', 230 | l2_reg=l2_reg, bn_mom=bn_mom) 231 | 232 | x = conv_block(x, 3, [256, 256, 256], stage=4, block='a', 233 | l2_reg=l2_reg, bn_mom=bn_mom) 234 | x = identity_block(x, 3, [256, 256, 256], stage=4, block='b', 235 | l2_reg=l2_reg, bn_mom=bn_mom) 236 | 237 | x = conv_block(x, 3, [512, 512, 512], stage=5, block='a', 238 | l2_reg=l2_reg, bn_mom=bn_mom) 239 | x = identity_block(x, 3, [512, 512, 512], stage=5, block='b', 240 | l2_reg=l2_reg, bn_mom=bn_mom) 241 | 242 | x = layers.GlobalAveragePooling2D(name='avg_pool')(x) 243 | x = layers.Dense(n_classes, activation='softmax', 244 | kernel_regularizer=regularizers.l2(l2_reg), 245 | name='fc1000')(x) 246 | 247 | return models.Model(img_input, x, name='resnet_small') 248 | -------------------------------------------------------------------------------- /notebooks/Analysis.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "# Results analysis\n", 8 | "\n", 9 | "You can use this notebook to analyze the results of your training runs on Cori.\n", 
10 | "\n", 11 | "More documentation to come." 12 | ] 13 | }, 14 | { 15 | "cell_type": "code", 16 | "execution_count": 1, 17 | "metadata": {}, 18 | "outputs": [], 19 | "source": [ 20 | "# System\n", 21 | "import os\n", 22 | "\n", 23 | "# Externals\n", 24 | "import numpy as np\n", 25 | "import matplotlib.pyplot as plt" 26 | ] 27 | }, 28 | { 29 | "cell_type": "code", 30 | "execution_count": 2, 31 | "metadata": {}, 32 | "outputs": [], 33 | "source": [ 34 | "# Magic\n", 35 | "%matplotlib inline" 36 | ] 37 | }, 38 | { 39 | "cell_type": "markdown", 40 | "metadata": {}, 41 | "source": [ 42 | "## Specify the results to load\n", 43 | "\n", 44 | "I'll start by plotting just one experiment. Later we may want to allow plotting multiple runs, like in TensorBoard." 45 | ] 46 | }, 47 | { 48 | "cell_type": "code", 49 | "execution_count": 22, 50 | "metadata": {}, 51 | "outputs": [], 52 | "source": [ 53 | "results_dir = '/global/cscratch1/sd/sfarrell/sc18-dl-tutorial/cifar10-cnn-N1-16150683'" 54 | ] 55 | }, 56 | { 57 | "cell_type": "code", 58 | "execution_count": 23, 59 | "metadata": {}, 60 | "outputs": [ 61 | { 62 | "name": "stdout", 63 | "output_type": "stream", 64 | "text": [ 65 | "\u001b[0m\u001b[01;34mcheckpoints\u001b[0m/ history.npz out.log\n" 66 | ] 67 | } 68 | ], 69 | "source": [ 70 | "ls $results_dir" 71 | ] 72 | }, 73 | { 74 | "cell_type": "markdown", 75 | "metadata": {}, 76 | "source": [ 77 | "## Load the summary results and inspect them" 78 | ] 79 | }, 80 | { 81 | "cell_type": "code", 82 | "execution_count": 24, 83 | "metadata": {}, 84 | "outputs": [], 85 | "source": [ 86 | "history = np.load(os.path.join(results_dir, 'history.npz'))" 87 | ] 88 | }, 89 | { 90 | "cell_type": "code", 91 | "execution_count": 25, 92 | "metadata": {}, 93 | "outputs": [], 94 | "source": [ 95 | "n_ranks = int(history['n_ranks'])" 96 | ] 97 | }, 98 | { 99 | "cell_type": "code", 100 | "execution_count": 26, 101 | "metadata": {}, 102 | "outputs": [ 103 | { 104 | "data": { 105 | "text/plain": [ 
106 | "['n_ranks', 'val_loss', 'val_acc', 'loss', 'acc', 'lr']" 107 | ] 108 | }, 109 | "execution_count": 26, 110 | "metadata": {}, 111 | "output_type": "execute_result" 112 | } 113 | ], 114 | "source": [ 115 | "history.files" 116 | ] 117 | }, 118 | { 119 | "cell_type": "markdown", 120 | "metadata": {}, 121 | "source": [ 122 | "## Plot loss and accuracy" 123 | ] 124 | }, 125 | { 126 | "cell_type": "code", 127 | "execution_count": 27, 128 | "metadata": {}, 129 | "outputs": [ 130 | { 131 | "data": { 132 | "image/png": "iVBORw0KGgoAAAANSUhEUgAAA1gAAAFgCAYAAACmKdhBAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDMuMC4wLCBodHRwOi8vbWF0cGxvdGxpYi5vcmcvqOYd8AAAIABJREFUeJzs3Xd8VfX9x/HXN3tCJishg41siICCgFWWKCIi4qraWqq1dVRb7dTa8VPbWmvdA7XWzXQg4mCIgAKCbAibsBJICNnrfn9/nKARw8y9OTfJ+/l4nMfNPfec7/kkHrn3c7/f7+drrLWIiIiIiIhI3QW4HYCIiIiIiEhjoQRLRERERETES5RgiYiIiIiIeIkSLBERERERES9RgiUiIiIiIuIlSrBERERERES8RAmWiIiIiIiIlyjBEhERERER8RIlWCIiIiIiIl4S5HYApyshIcGmpaW5HYaIiNTRihUrDlprE92Ow9f0viUi0jic6vtWg0uw0tLSWL58udthiIhIHRljdrodQ33Q+5aISONwqu9bGiIoIiIiIiLiJUqwREREREREvEQJloiIiIiIiJc0uDlYIiK+VlFRQVZWFqWlpW6H0iiEhYWRnJxMcHCw26H4Dd1j3qV7TET8iRIsEZFjZGVlER0dTVpaGsYYt8Np0Ky1HDp0iKysLNLT090Ox2/oHvMe3WMi4m80RFBE5BilpaXEx8frg68XGGOIj49XT80xdI95j+4xEfE3SrBERGqhD77eo79l7fR38R79LUXEnyjBEhERERER8RIlWCIifubw4cM8+eSTp33eRRddxOHDh30QkTQ2usdERHxHCZaIiJ853offqqqqE543e/ZsYmJifBVWo2GMGWWM2WSM2WKMubeW1/9ljFlVvW02xhyu8VpVjdfeqd/IvUf3mIiI7zS5KoLWWo6UVtI8XKVcReTk/vTuOtbvPeLVNs9q04z7Lul23Nfvvfdetm7dSu/evQkODiYqKorWrVuzatUq1q9fz7hx49i9ezelpaXcfvvtTJ48GYC0tDSWL19OYWEho0ePZvDgwSxevJikpCRmzZpFeHi4V3+PhsgYEwg8AQwHsoBlxph3rLXrjx5jrb2zxvG/APrUaKLEWtvbmzHpHhMROQPWOluA//UXNbkE67Y3VrFubz6f3jXM7VBERGr14IMPsnbtWlatWsX8+fMZM2YMa9eu/aYE9ZQpU4iLi6OkpISzzz6byy+/nPj4+O+0kZmZyeuvv85zzz3HxIkTmTZtGtdee60bv46/6Q9ssdZuAzDGvAFcCqw/zvFXAffVU2z1RveYiDQIVRWQvxtyt0Pejuqt+ufcHRAaBVe/Ba17uhvnMZpcgtUmJowP1+6nymMJDFDVIRE5sRP1AtSX/v37f2d9n8cee4wZM2YAsHv3bjIzM7/34Tc9PZ3evZ2Oln79+rFjx456i9fPJQG7azzPA
gbUdqAxJhVIBz6tsTvMGLMcqAQetNbOPM65k4HJACkpKScMSPeYiEi1A+th2XNwaKuTROVnga0xdDkwFGJTITYNUs6BjbPhv5fC9e9Cq+5uRf09TS7BSo2LpLzKw/4jpSTFaCiDiPi/yMjIb36eP38+H3/8MUuWLCEiIoJhw4bVuv5PaGjoNz8HBgZSUlJSL7E2ALV9s2aPc+wkYKq1Nd/dSbHW7jXGtAM+NcassdZu/V6D1j4LPAuQkZFxvPb9hu4xEXGVxwNfPAUf/wkCgyGxCySfDT0nOslUbBrEpkN06+8OCRxwM7x0Mfx3LFz/HrQ8y63f4DuaXoIVHwHAzkNFSrBExC9FR0dTUFBQ62v5+fnExsYSERHBxo0bWbp0aT1H1+BlAW1rPE8G9h7n2EnArTV3WGv3Vj9uM8bMx5mf9b0Ey9/pHhMRv5GfBTNvge0LofNFcMljEJV4aufGt4cb3oMXL/o2yWrRxbfxnoIml2ClxDkJ1u7cYmjvcjAiIrWIj49n0KBBdO/enfDwcFq2bPnNa6NGjeLpp5+mZ8+edO7cmYEDB7oYaYO0DOhojEkH9uAkUVcfe5AxpjMQCyypsS8WKLbWlhljEoBBwMP1ErWX6R4TEb+wZiq8/0uoqnQSq74/hNNdOLxmkvXyJXDD+5DYyTfxniJjrd+PXPiOjIwMu3z58jM+v7LKQ5c/zGHykHb8epT7Ga6I+J8NGzbQtWtXt8NoVGr7mxpjVlhrM+o7FmPMRcCjQCAwxVr7V2PMA8Bya+071cfcD4RZa++tcd65wDOAB2eZk0ettS+c7Hq1vW/pHvM+/U1FGpCSPHj/blg7FZL7w/hnIK5d3drM2QQvjQETCDfOdhIvLzvV960m14MVFBhAcmw4O3OL3Q5FRERcYK2dDcw+Zt8fj3l+fy3nLQZ6+DQ4EZHGbtsCZ0hg4QE4//cw+E4I9EJKktjZKXbx0hhnXtYN7/kkyToV/lc4vh6kxEey65ASLBERERGRelFRCnN+68yVCo6AH38EQ3/lneTqqBZdnSSrstQZLpi73Xttn4YmmWClxkWw81CR22GIiIiIiDR++9fAc+fD0ifg7J/ATxdCUl/fXKtlN7j+HagodpKsvJ2+uc4JNM0EKz6CI6WVHC4udzsUEREREZHGqSQP5v4envsBFB+Ca6bBmH9ASIRvr9uqB1w3E8qOwMsXw+HdJz/Hi5pkgnW0kuBODRMUEREREfGuilL4/N/w716w+HHocQXcsgQ6Xlh/MbTp7SRZJflOkpW/p94u3SQTrNR4Z0FFFboQEREREcHpbXplPMz5DexZAWdSadxTBateh8cz4KM/QtsBcMvnMO5JiIz3fswnk9QXrpsBxblOklWaXy+XbZIJ1tEerF2ahyUijUBUVBQAe/fuZcKECbUeM2zYME62xMWjjz5KcfG3XzxddNFFHD582HuBSoOle0ykCfjiGdj6CSx73hnS95++8OlfnfLnJ2MtZH4MzwyBmTdDZIJTbOKat505UW5K7gfXToOekyC0Wb1cskkmWOEhgbSIDtUQQRFpVNq0acPUqVPP+PxjP/zOnj2bmJgYb4QmjYTuMZFGqqwAlj4FnS+CuzNh7OPQvC189g94oj88NRgWPVr7XKa9K+G/l8Krl0N5IUyYAjd9CulD6v/3OJ62/WHYPae/iPEZanLrYB2VEhfBLg0RFJGT+eBep/qRN7XqAaMfPO7L99xzD6mpqfzsZz8D4P7778cYw8KFC8nLy6OiooK//OUvXHrppd85b8eOHVx88cWsXbuWkpISbrzxRtavX0/Xrl0pKSn55rhbbrmFZcuWUVJSwoQJE/jTn/7EY489xt69ezn//PNJSEhg3rx5pKWlsXz5chISEnjkkUeYMmUKADfddBN33HEHO3bsYPTo0QwePJjFixeTlJTErFmzCA8P9+7fq7HTPaZ7TBoOa2HGT2HHImjZ3fl/rXVP5zEmDQIaaN/Fipeg9DAM/iWEx0Df6
5ytYD+smwFrpsLH9zlb24HQYwIkZzjzq9ZOhYh4GPUQZPwIgkLc/m1c13QTrPgIlmw95HYYIiLfM2nSJO64445vPvy+9dZbzJkzhzvvvJNmzZpx8OBBBg4cyNixYzHH+TbuqaeeIiIigtWrV7N69Wr69v22HO5f//pX4uLiqKqq4oILLmD16tXcdtttPPLII8ybN4+EhITvtLVixQpefPFFvvjiC6y1DBgwgKFDhxIbG0tmZiavv/46zz33HBMnTmTatGlce+21vvvjiFfoHhM5Q0seh9VvQrthkL8btnwMtsp5LSQaWlUnXa2qk64WXSEo1M2IT66i1EmU0odA27O/+1p0Kxh4i7PlboO105xka/bdzutB4XDeXTDodghrXv+x+6kmm2ClxkUyY+UeSiuqCAsOdDscEfFXJ+gF8JU+ffqQnZ3N3r17ycnJITY2ltatW3PnnXeycOFCAgIC2LNnDwcOHKBVq1a1trFw4UJuu+02AHr27EnPnj2/ee2tt97i2WefpbKykn379rF+/frvvH6sRYsWcdlllxEZ6RQIGj9+PJ999hljx44lPT2d3r17A9CvXz927Njhpb9CE6J7TPeYNAw7l8BH90HXsTDxv85ws4pSyNkA+1Y7PdH718Cq16D8WeecgCDoMBxG/Q3i2rkb//F8/RoU7ofxz5z4uLh2MORXcN7dcGAd7FoCXcZAszb1E2cD0nQTrPgIrIWsvGI6tIh2OxwRke+YMGECU6dOZf/+/UyaNIlXX32VnJwcVqxYQXBwMGlpaZSWlp6wjdp6HrZv384//vEPli1bRmxsLDfccMNJ27EnqCQVGvrtN7OBgYHfGSYm/k33mMhpKMyBqTdCbCpc+vi3c3mCw6BNH2c7yuOBvO2wf7VTjW/5i/DEQBh8Jwy+A4L9aIhrVaUztyqpH6QPPbVzjKnuqevu29gasAY6ULTuUuK1FpaI+K9JkybxxhtvMHXqVCZMmEB+fj4tWrQgODiYefPmsXPniVemHzJkCK+++ioAa9euZfXq1QAcOXKEyMhImjdvzoEDB/jggw++OSc6OpqCgoJa25o5cybFxcUUFRUxY8YMzjvvPC/+tuIG3WMip8hTBdN+7JQxn/jfkw+FCwiA+PbQ7TIY8Rf4+XLoejEseBCeHAib59ZP3Kdi7TQ4vNMZ5ldPBSCagqbbg6XFhkXEj3Xr1o2CggKSkpJo3bo111xzDZdccgkZGRn07t2bLl26nPD8W265hRtvvJGePXvSu3dv+vfvD0CvXr3o06cP3bp1o127dgwaNOibcyZPnszo0aNp3bo18+bN+2Z/3759ueGGG75p46abbqJPnz4aqtXA6R4TOUULHoLtC2Dsf5x5VaerWWunsl7fH8LsX8FrV0DnMc7w4JgU78d7qjweWPQItDgLOo12L45GyJyoW94fZWRk2JOts3EqrLX0uH8uE/olc/9Yl+vzi4hf2bBhA127dnU7jEaltr+pMWaFtTbDpZDqTW3vW7rHvE9/U/mefV/D9oXQf/KZF5rY8jH8bwL0vhoufaLuvTyV5bD0CVjwsFORcMjdcO4v3CmEseE9ePMaGP889Lyi/q/fAJ3q+1aTHSJojCElLoKdWmxYREREpHE5ss9JjOb+Hp6/EA5uOf028rNg2k+cHp6L/uGdIXRBIc5crFu/hI7D4dM/w1PnwtZP69726bAWPvsnxKY5QxnFq5psggVOoYudWgtLREREGhOPBz76Iyx9Gopz3Y6m/lWWw9vXQ3mRszZT/m54ZgisfNVJLE65jRuhqsKZdxUS4d0YY9rCla/ANdPAeuCVy+Ct6511p+rDtvmw9ysYdAcENtkZQz7TpBOslLgIsnJLqPI0rGGSIuJ7DW34tD/T37J2+rt4j/6Wx9j4Lnz+b5hzD/yzC0yfDDs+P/XkoqGb+zvY/QWMewIG3gy3LIakvjDrZ06xitL8k7fx8X2Q9SVc+h9I6OC7WDteCLcsgfN/B5vnwMuXQMlh3
13vqM/+CVGtnKGP4nVNO8GKj6C8ysOBIycuHysiTUtYWBiHDh3ShzYvsNZy6NAhwsLC3A7Fr+ge8x7dY8fweJz5PfEdYPIC6HsdbPoAXroIHj8bFv8Hig55/7rlRZC1HFa8BO/fDVNGw5PnwK4vvH+tE/n6TfjyWTjn598OfWvWBn44C37wB1g3E54eDLuXHb+N9bNg6ZMw4Ob6GT4XHAZDfw3XTofc7fD2DU75dF/Z9QXs+My9uV9NQJPuE0yNcxY03HmomDYxfrQmgYi4Kjk5maysLHJyctwOpVEICwsjOTnZ7TD8iu4x79I9VsOm9+HAWrjsGWjT29mGP+AkFl+97MxJ+uQB6HIx9Lse0oY4ZcVPlbVOWe8D65xt/xrnMXcbUP2FQUgUtOwGZQXw2kT40RxoUQ8FSPavgXdvh9TBcOGfvvtaQKBTUCJ9iNOLNWUk/OB3zhC5gMBvjzu0FWbeCkkZMPzPvo+5prRBcMmjMOtWp/dxzD99c51Fj0B4HPS7wTftSxNPsKrXwtqVW8Q57eNdjkZE/EVwcDDp6eluhyGNmO4x8QlrnZLice2g+4Rv94dEQp9rnC17A6x4Gb5+HdZNh9h0Z39EvJMQlRVWPxZA2ZFvfy6v3l+SBxVH568biEt3kqmeVzqPLbtBTKqTtOXtgBdGwCvj4ccf+rYkeUkevHkthMfAFS8ef15R2/5w8yJ49w4n0dw2Hy571imlXlECb/3QOfeKl5yCFPWtz7WQswkWPwYJnWHAZO+2v3+NMxTx/N9BaJR325ZvNOkEq3XzMIICjNbCEhERkYZv0wfOB+hLnzx+gtGiq7P+0oX3w4Z3nGTr07/UOMBAaDMIjXY+gIdGQ1gzaJ4EIdHOIrsJHaFld6etE31Ij01zhr29eJFTxOFHH0Jkgvd+36M8Hpj+U8jfAzfOhqgWJz4+rLmzLlWHC5x1qZ46F8Y9CRvfc3r/rpnqFKFwy4X3Oz1pc+5xkuWOF3qv7UX/cnoY+//Ee23K9/gswTLGTAEuBrKttd2Pc8ww4FEgGDhorR3qq3hqExQYQHJsuCoJioiISMN2tPcqNs3pTTqZ4DDoOdHZCg4A1vngHRLpnXLkR7XqDle/4SRYr06A6991kjZvWvh3yPzQKaXetv+pnWOM01vUdgBM/RG8PsnZf97dTvl0NwUEwvhnYcoomHoj/PgjaHHihb9PyaGtsG6GM/cqPLbu7clx+bLIxUvAqOO9aIyJAZ4ExlpruwGurHCWEh/JLvVgiYiISEOWORf2rXIShNMtux3dEqJbOb1R3kyujko91xlyt2+1M4yvssx7bWd+BPP/D3pdBWffdPrnJ3SEmz525mL1vgbO/633YquL0CgnMQ0Od+axFR2se5uL/gUBwTDw1rq3JSfkswTLWrsQONHiC1cD0621u6qPz/ZVLCeSqsWGRUREpCGzFuY/6Mxx6jXJ7Whq13k0jP2PM+dpxs3gqap7m7nbYdpNznDFMY+ceXIYFArD/+QME6xZ8MJtzZNh0utQeKDuiWl+Fnz9hlNVMrql92KUWrlZpr0TEGuMmW+MWWGM+aEbQaTERXCktJLDxeVuXF5ERESkbrZ84iwae95dEBjsdjTH1+ea6oqG0+GDe+q2LldFCbx1HWCdBXu9vRCwv0juB+Oegl1LnAqJZ/o3W/w4YGHQ7V4NT2rnZpGLIKAfcAEQDiwxxiy11m4+9kBjzGRgMkBKincr0KRUVxLceaiYmAgXqsWIiIiInClrYcGD0Lwt9GoAi8YOuh0Ks2HJ4xCZCMPuOf02rIX3fgn718LVbzmVDBuz7uPh0BaY91dI6ATn/fL0zi866KxP1mOibys5yjfc7MHKAuZYa4ustQeBhUCv2g601j5rrc2w1mYkJiZ6NYhvS7VrHpaIiIg0MFs/haxlMPhOd8qKn4nhf3bmTM3/Gyx74fTPX/4CfP0aDLsXOo3wfnz+aMivoMcV8MmfYP07p3fu0qegshQG3+Gb2OR73
OzBmgU8bowJAkKAAcC/6juIlDglWCIiIuKS4lwICjuzIW5HKwc2S3Iq4jUUAQHOfKziXHj/LmcNrm7jaj/WWig+BAcz4eBmZ/viGeg4Aob8un7jdpMxMPZxZ22xGT91eqLa9P7uMWWFkLvV6e06VONx/2roegkkdnYl9KbIl2XaXweGAQnGmCzgPpxy7Fhrn7bWbjDGzAFWAx7geWvtWl/FczwRIUEkRoeq0IWISBNhjBkF/BsIxHnvefCY1/8FnF/9NAJoYa2NqX7teuD31a/9xVr7cv1ELY1ScS48ORCCI+DaaRDf/vTO374Adn/hlCcPCvVNjL4SGOxUFnxlHEz/iVM1r3kKHDqaSG1xHg9lOosIf3NeKKQMdMqYB7g5EMsFwWEw6TV47gdOWfkBN0Putm+TqcL93z2+WbJzT/W9/vSHFUqdGFuXCYYuyMjIsMuXL/dqmxOeWkxggOHNn57j1XZFROT4jDErrLUZ9XzNQGAzMBxnqPoy4Cpr7frjHP8LoI+19kfGmDhgOZABWGAF0M9am1fbuUf54n1LGomZt8LXrzsL+ZoAuOpNaHv2qZ//4kXOB+zbVjkfvhuikjzn98g+5n/BqJbOfKP4Ds5jQkdna97Wvyr9ueHAOpgyGsrynd6/+A7VW3vnMa69s0BxYy384aJTfd9yc4ig30iJj2DJ1kNuhyEiIr7XH9hird0GYIx5A7gUqDXBAq7CGYEBMBL4yFqbW33uRzjrPb7u04ilcdo6D1b9z5k71ec6+N/l8PIlMOEF6DLm5Odv/wx2fg6jH264yRU4C95eNxNWv+EkVfEdIaEDhDV3OzL/1bIb/HIdVFVARJzb0Ugtmljfau1S4yLZf6SU0govrMkgIiL+LAnYXeN5VvW+7zHGpALpwKdncO5kY8xyY8zynJycOgctjUx5Mbx3h9PTMPQep+fhxx9By7PgjWvgy+dO3saChyCqlTP8q6GLbulUF+w1ySlLruTq5EKjlVz5MSVYOJUErYWsPBW6EBFp5GpbifR4Y+UnAVOttUe/fTvlc31Z/VYagfl/c4oVjH0MgsOdfVGJcP17zoK8s++GuX8Aj6f283d8Djs+c5KShtx7JdJIKcEC2sZ9uxaWiIg0allA2xrPk4G9xzl2Et8d/nc654rUbu9KWPKE0/OUNvi7r4VEwJX/g7NvgsWPwfSboLLs+20seAgiW0DGjfUTs4icFiVYfLsWlhIsEZFGbxnQ0RiTbowJwUmivreojDGmMxALLKmx+0NghDEm1hgTC4yo3idyaqoq4J1fOMnR8AdqPyYg0KkKeOGfYO00eOWy71bR27XUqR446PZve79ExK8owQLiI0OIDAnUWlgiIo2ctbYS+DlOYrQBeMtau84Y84AxZmyNQ68C3rA1Su1WF7f4M06Stgx44GjBC5FTsvg/sH8NjPkHhMcc/zhjnEVhxz8Pu7+EF0bC4V3OawsegogE9V6J+DFVEQSMMaTERyrBEhFpAqy1s4HZx+z74zHP7z/OuVOAKT4LThqvQ1th/oPOgq9dLzm1c3peAdGtnMIXz1/oFMTY+qnT+xUS6dt4RRogay1F5VXkFpaTW1xOblEZhwrLyS0qp9JjufX8DvUShxKsaqlxEWRmF7gdhoiIiDQ2Hg+8cxsEhcHov5/euennwY8/hP9NgPd/6ax7lPFj38Qp4gc8HktReSWFZZUUlFZSUFrBkdJvfy4oraSw+uf8kgoOFTkJVG5ROYeKyimvrL04TExEsBKs+pYaH8Gnm7LxeCwBAbUVihIRERE5AytfgZ2L4JJ/Q7PWp39+i65w08cw61boMQFCo7wfo4gLPB7LtoOFfLXrMCt3HWblrjwyswup8hyvuKsjwEBUaBDNwoOJjwyhRXQoXVo1Iz4qhLhIZ4v/5jGUuChnOlB9UYJVLSU+gvJKD/uPlNImRpNGRURExAuO7HNKrqedV7c1q5q1huumey8uERccLi5n1e7D1QlVHqt2H6agtBKAZ
mFB9E6J5fwuLYiLCCEqLIjosCCiw4KdZKr65+iwICJCAjHGfztElGBVS41zxjLvPFSsBEtERES844NfQWWp03vlxx8IRbyhvNJDTmEZ2UdKyS4oI7ugjJwjpWTllbAq6zDbcooApweqU8toLu7Zhr4pMfRJiaVdQmSjGUWmBKva0VLtu3KLOKd9vMvRiIiISIO3/h3Y8C5ccB/Et3c7GpETKq/0sGl/AQWlFZRVeSivrLFVP6+o8lBWva+0soqcgjJyCsrIPlJGdkEpecUV32s3wEBidCg9kmK4vG8yfVJi6JkcQ1Ro401DGu9vdppaNw8jKMBoLSwRERGpu5LDMPtX0KoHnPsLt6MR+Z6yyipWZ+WzdOshvtiey/KduZRW1F4gojYhgQHER4XQolkYKfERZKTF0iI6jBbNQmkRHUqL6DBaNgslLjKEoMCmtTKUEqxqQYEBJMWGs1Ol2kVERKSuPvojFGXD1W9AYLDb0UgjU+WxGDitIXVllVWs2nWYL7bnsnTbIb7alfdNQtWlVTSTzk7h7LQ44iJDCAkKIDQogJCgAEICqx9rPg8MaDTD+XxBCVYNKXER7FIPloiIiNTF9s/gq5ednqs2fdyORho4a60zh2n3YVbtdopDrN17hPJKDxEhgUSEBBEZWv0YEkhEaPVj9f6QwADW7T3CV7vyKKv0YAx0adWMq/qnMCA9ngHpccRGhrj9azYqSrBqSI2PYHXWPrfDEBERkYYqZzNMuwli02DYb92ORhqggtIKVmflVydTh1m1O4+DheUAhAYF0DO5OT8cmEpkaBDF5ZUUlVdRXFb9WF7JkZIK9ueXUFRWRVF5JSXlVXRoEcU1A1IZ2C6O/ulxxEQoofIlJVg1pMZFkl9SQX5xBc0j1J0vIiIipyF7A7w81vn5qjcgJMLdeMR1BaUV7D1cSlF5JUVllU7SU1ZJcXklhWVOQnR0X2FZJZnZBWRmF2Krl4FqlxjJkE6J9EmJpU/bGDq3iia4ic1naoiUYNWQUl1JcGduET0jYlyORkRERBqM/Wvhv5dCQBBc/y4kdnI7InHJjoNFfLzhAJ9syGbZjlwqT7BorjEQFRJERGggkSFBpMRHMKZHG3qnxNA7OUZf+DdQSrBqOFqqfeehYnomK8ESERFxXWk+fP0mHNwMI/4CwWFuR/R9+1Y7yVVQGNzwnkqyNzFVHstXu/K+Saq2ZBcC0KllFD8Z0o7ubZoTGRpIZGgQkTXmS0WFBhEWHODXC+bKmVGCVUNK3NG1sFToQkRExDXWwp6vYMUUWDMNKkuc/UGhMPKv7sZ2rL0r4b/jIDQarn8H4tq5HZHUg4LSChZuPsgnGw4wb1M2ecUVBAcaBqTHc82AFC7s2pK2cRoi2lQpwaohIiSIxOhQdh4qcjsUERGRpqesAFa/BStehP1rIDgSek6EjBth5f9gyRPQaSSkD3E7UkfWcnhlPIQ3h+vfg9hUtyOSOiitqCKvuJzconLyiirILS4nr6icQ0XO49HnuUXlbM0ppKLKEhsRzPmdW3BB15YM6ZRAdJiG9IkSrO9JiYvQYsMiIiL1ae9KWP4irJkKFUXQsgeMeQR6XAFhzZxjEjrBtvkw4xb42WIIa+5qyOz6Av53OUQmOHOuYtq6G4+9thbmAAAgAElEQVSckewjpcxatZdpX2WxcX/BcY+LiQgmLjKEuIgQ2sZFMLRzIhd2bUnflFgCtR6UHEMJ1jFS4yJYsu2Q22GIiIg0bpVlsPpNWD7FSbCCwqH75U5vVVI/Z/Z/TSGRcNmz8MJwmP1rGP+MO3ED7FwMr14BUS2dOVfN2rgXi5y2kvIq5q7fz7Sv9rAoMwePhV5tY7jzwk4kRocSFxlMbEQI8VEhxEaE0Dw8mCBV7pPToATrGCnxEcxYtYfSiirCggPdDkdERKRxsRY2fQAf/hbytkNiVxj9d2coYPhJCkwl94Ohv4b5/wedR0G3y+on5pq2L4TXroTmyU7PVXSr+o9BTpvHY
/liey7Tv8rig7X7KSyrJCkmnJ8N68BlfZNonxjldojSiCjBOkZqfATWQlZeMR1aRLsdjoiISOORvQHm/Aa2zYPELnDtNGh/wfd7q07kvLsgcy68dye0HQjNWvsu3mNtnQevX+UsInz9OxDVov6uLWdka04hM77aw4yVe9hzuITIkEAu6tGa8X2TGZAeR4CG94kPKME6RkpcJOBUElSCJSIi4gXFuU6v07IXIDQKRj8MGT+CwDMoCBAY7AwVfHowzLrVSdJ8Wea6vBiy18OupfDJA5DQEX44y5l7JX6ntKKKL7fnsnBzDgszc9h8oJAAA+d1TOTXozoz4qxWhIdohJL4lhKsY9RcC0tERETqoKrSqQg476/OelYZP4Jhv4XI+Lq1m9ABRv4F3r8Llj0P/X/inXhLjzjVC/d9/e12cBNYj/N6UgZc8zZExHnnelJn1lq2ZBeyYHMOCzMP8sW2Q5RVeggJDODs9FgmZrRlbK82tGjmh+unSaOlBOsY8ZEhRIYEKsESERGpi23z4YN7IWeDU1Z91IPQspv32s/4sTOXa+4foN0wp2fpdB3aChve+TaZyt327WtRraB1L+h6ifPYupcz70qLwrrucHE5i7YcZOHmHD7LPMi+/FIA2idGcvWAFIZ0SmRgerx6qsQ1SrCOYYwhJT5Siw2LiIicidxtTtKz8T1nrtKVr0KXMd5PTIyBS5+AJwfC9J/Ajz869SGHZQWw8O+w5EnwVEBMipNA9b4aWveGVj0huqV345UTstaSX1LBwcIysgvKyKneDhaWOz8XlnHw6GNhGdZCdFgQgzskcNsFiZzXMYHkWC3sK/5BCVYtUuLC2ZJd6HYYIiIiDUvWCnhxNAQEwQX3wcCfQbAPh2ZFt4JL/g1v/dBJmM7/7YmP93ic0vAf3weFB6D3NfCD36vMej2rqPKwaX8Ba/bkO1tWPpsOFFBe6fnescGBhsSoUBKiQ2ndPIyeyc1pExPOoA7x9EqOUfl08UtKsGqRGh/JvE05eDxW1WVEREROhbUw5x4Ij4XJ8+uvut9Zl0Kvq2DhP6DDcGh7du3H7VkBH9wDWcucdbYmvQbJGfUTYxNWWeVhS04hq7OcRGr1nnw27DvyTTIVHRZEz+TmXH9OKi2bhZEYHepsUc5j8/BgjIZlSgOjBKsWKXERlFd62H+klDYx4W6HIyIi4v/WTXeSl7GP12/pdIDRD8GORTBjMty8yFmU+KjCbPjkT7DyfxDZAi590knIAtTz4QvWWjYfKGTuuv0s2JzD2r35lFY4yVRUaBDdk5pxw7lpdE9qTs+k5qTGRyiBkkZHCVYtalYSVIIlIiJyEhWl8PH90LK7M4+pvoU1h8uehpcuhrm/h4v/BZXl8OWzsOAhqCiBc38BQ34NYc3qP75GrspjWb4jl4/WH2Du+gPfzGPv1TaGq/un0jO5OT2Sm5MeH6mRQdIkKMGqReo3a2EVcU77OpaSFRERaey+fAYO74LrZkKAS5Xb0gbDuT+Hxf9xKgCunQoHN0OHC50KhmdSZVCOq6S8ioWZOXy0/gCfbswmt6ickMAABnWI5+ah7bmwawuVRpcmSwlWLdrEhBEUYFSqXUSkETLGjAL+DQQCz1trH6zlmInA/YAFvrbWXl29vwpYU33YLmvt2HoJ2p8VHXTmP3UcCe3PdzeWH/wBtnwK8/8Gce3gqjeh00iVVveCkvIqtuYUsnZPPh9vyGbRlhxKKzxEhwVxQZcWDD+rFUM7JxIVqo+WIvq/oBZBgQEkxYarVLuISCNjjAkEngCGA1nAMmPMO9ba9TWO6Qj8Bhhkrc0zxrSo0USJtbZ3vQbt7+Y/COVFMOLPbkcCQaFw1euwfSH0nOg8l9NSWuEkUpkHCtl8oIDNBwrJzC5gV24x1jrHtGkexpUZbRnRrRX90+MIViU/ke9QgnUcKXERSrBERBqf/sAWa+02AGPMG8ClwPoax/wEeMJamwdgrc2u9ygbipzNsHwK9LsBEju7H
[base64-encoded PNG image data elided — notebook output: two-panel plot of training/validation loss and accuracy vs. epoch]\n", 133 | "text/plain": [ 134 | "
" 135 | ] 136 | }, 137 | "metadata": { 138 | "needs_background": "light" 139 | }, 140 | "output_type": "display_data" 141 | } 142 | ], 143 | "source": [ 144 | "fig, (ax0, ax1) = plt.subplots(nrows=1, ncols=2, figsize=(12, 5))\n", 145 | "\n", 146 | "# Plot losses\n", 147 | "ax0.plot(history['loss'], label='train')\n", 148 | "ax0.plot(history['val_loss'], label='validation')\n", 149 | "ax0.set_xlabel('Epoch')\n", 150 | "ax0.set_ylabel('Loss')\n", 151 | "ax0.legend(loc=0)\n", 152 | "\n", 153 | "ax1.plot(history['acc'], label='train')\n", 154 | "ax1.plot(history['val_acc'], label='validation')\n", 155 | "ax1.set_xlabel('Epoch')\n", 156 | "ax1.set_ylabel('Accuracy')\n", 157 | "ax1.legend(loc=0)\n", 158 | "\n", 159 | "plt.tight_layout()" 160 | ] 161 | } 162 | ], 163 | "metadata": { 164 | "kernelspec": { 165 | "display_name": "tf-1.11.0-py36", 166 | "language": "python", 167 | "name": "tf-1.11.0-py36" 168 | }, 169 | "language_info": { 170 | "codemirror_mode": { 171 | "name": "ipython", 172 | "version": 3 173 | }, 174 | "file_extension": ".py", 175 | "mimetype": "text/x-python", 176 | "name": "python", 177 | "nbconvert_exporter": "python", 178 | "pygments_lexer": "ipython3", 179 | "version": "3.6.5" 180 | }, 181 | "toc-autonumbering": false, 182 | "toc-showmarkdowntxt": true, 183 | "toc-showtags": false 184 | }, 185 | "nbformat": 4, 186 | "nbformat_minor": 2 187 | } 188 | -------------------------------------------------------------------------------- /scripts/cifar_cnn.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | #SBATCH -J cifar-cnn 3 | #SBATCH --reservation dl4sci_sc19 4 | #SBATCH -C knl 5 | #SBATCH -N 1 6 | #SBATCH -q regular 7 | #SBATCH -t 1:00:00 8 | #SBATCH -o logs/%x-%j.out 9 | 10 | # Load the software 11 | module load tensorflow/intel-1.13.1-py36 12 | export KMP_BLOCKTIME=0 13 | export KMP_AFFINITY="granularity=fine,compact,1,0" 14 | config=configs/cifar10_cnn.yaml 15 | 16 | # Ensure dataset is downloaded 
by single process 17 | python -c "import keras; keras.datasets.cifar10.load_data()" 18 | 19 | # Run the training 20 | srun python train.py $config -d 21 | -------------------------------------------------------------------------------- /scripts/cifar_resnet.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | #SBATCH -J cifar-resnet 3 | #SBATCH --reservation dl4sci_sc19 4 | #SBATCH -C knl 5 | #SBATCH -N 1 6 | #SBATCH -q regular 7 | #SBATCH -t 1:00:00 8 | #SBATCH -o logs/%x-%j.out 9 | 10 | # Load the software 11 | module load tensorflow/intel-1.13.1-py36 12 | export KMP_BLOCKTIME=0 13 | export KMP_AFFINITY="granularity=fine,compact,1,0" 14 | config=configs/cifar10_resnet.yaml 15 | 16 | # Ensure dataset is downloaded by single process 17 | python -c "import keras; keras.datasets.cifar10.load_data()" 18 | 19 | # Run the training 20 | srun python train.py $config -d 21 | -------------------------------------------------------------------------------- /scripts/hpo_cifar_cnn.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | #SBATCH -J hpo-cifar-cnn 3 | #SBATCH --reservation dl4sci_sc19 4 | #SBATCH -C knl 5 | #SBATCH -N 8 6 | #SBATCH -q regular 7 | #SBATCH -t 1:00:00 8 | #SBATCH -o logs/%x-%j.out 9 | 10 | module load tensorflow/intel-1.13.1-py36 11 | module load cray-hpo 12 | 13 | # OpenMP settings 14 | export KMP_BLOCKTIME=0 15 | export KMP_AFFINITY="granularity=fine,compact,1,0" 16 | 17 | script=hpo_train.py 18 | args="-N ${SLURM_JOB_NUM_NODES} --verbose" 19 | 20 | # Ensure dataset is downloaded by single process 21 | 22 | echo "python -c \"import keras; keras.datasets.cifar10.load_data()\"" 23 | python -c "import keras; keras.datasets.cifar10.load_data()" 24 | 25 | echo "python -u $script $args" 26 | python -u $script $args 27 | -------------------------------------------------------------------------------- /scripts/hpo_mnist_lenet5.sh: 
-------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | #SBATCH -J hpo-mnist-lenet5 3 | #SBATCH --reservation dl4sci_sc19 4 | #SBATCH -C knl 5 | #SBATCH -N 8 6 | #SBATCH -q regular 7 | #SBATCH -t 1:00:00 8 | #SBATCH -o logs/%x-%j.out 9 | 10 | module load tensorflow/intel-1.13.1-py36 11 | module load cray-hpo 12 | 13 | # OpenMP settings 14 | export KMP_BLOCKTIME=0 15 | export KMP_AFFINITY="granularity=fine,compact,1,0" 16 | 17 | script=genetic.py 18 | args="-N ${SLURM_JOB_NUM_NODES} --verbose" 19 | path=hpo/mnist-lenet5 20 | 21 | echo "cd $path && python -u $script $args" 22 | cd $path && python -u $script $args 23 | -------------------------------------------------------------------------------- /scripts/hpo_sin_genetic.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | #SBATCH -J hpo-sin-genetic 3 | #SBATCH --reservation dl4sci_sc19 4 | #SBATCH -C knl 5 | #SBATCH -N 1 6 | #SBATCH -q regular 7 | #SBATCH -t 5 8 | #SBATCH -o logs/%x-%j.out 9 | 10 | module load tensorflow/intel-1.13.1-py36 11 | module load cray-hpo 12 | 13 | script=genetic_example.py 14 | path=hpo/sin 15 | 16 | cd $path && python -u $script 17 | -------------------------------------------------------------------------------- /scripts/hpo_sin_grid.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | #SBATCH -J hpo-sin-grid 3 | #SBATCH --reservation dl4sci_sc19 4 | #SBATCH -C knl 5 | #SBATCH -N 1 6 | #SBATCH -q regular 7 | #SBATCH -t 5 8 | #SBATCH -o logs/%x-%j.out 9 | 10 | module load tensorflow/intel-1.13.1-py36 11 | module load cray-hpo 12 | 13 | script=grid_example.py 14 | path=hpo/sin 15 | 16 | cd $path && python -u $script 17 | -------------------------------------------------------------------------------- /scripts/hpo_sin_random.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | #SBATCH -J 
hpo-sin-random 3 | #SBATCH --reservation dl4sci_sc19 4 | #SBATCH -C knl 5 | #SBATCH -N 1 6 | #SBATCH -q regular 7 | #SBATCH -t 5 8 | #SBATCH -o logs/%x-%j.out 9 | 10 | module load tensorflow/intel-1.13.1-py36 11 | module load cray-hpo 12 | 13 | script=random_example.py 14 | path=hpo/sin 15 | 16 | cd $path && python -u $script 17 | -------------------------------------------------------------------------------- /scripts/mnist_lenet5.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | #SBATCH -J mnist-lenet5 3 | #SBATCH --reservation dl4sci_sc19 4 | #SBATCH -C knl 5 | #SBATCH -N 1 6 | #SBATCH -q regular 7 | #SBATCH -t 30 8 | #SBATCH -o logs/%x-%j.out 9 | 10 | module load tensorflow/intel-1.13.1-py36 11 | 12 | script=mnist.py 13 | path=hpo/mnist-lenet5/source 14 | 15 | cd $path && python $script 16 | -------------------------------------------------------------------------------- /train.py: -------------------------------------------------------------------------------- 1 | """ 2 | Main training script for the Deep Learning at Scale Keras examples. 
3 | """ 4 | 5 | # System 6 | import os 7 | import sys 8 | import argparse 9 | import logging 10 | 11 | # Externals 12 | import keras 13 | import horovod.keras as hvd 14 | import yaml 15 | import numpy as np 16 | 17 | # Locals 18 | from data import get_datasets 19 | from models import get_model 20 | from utils.device import configure_session 21 | from utils.optimizers import get_optimizer 22 | from utils.callbacks import TimingCallback 23 | 24 | # Parse comma-separated key=value pairs from the command line into a dict 25 | class StoreDictKeyPair(argparse.Action): 26 | def __call__(self, parser, namespace, values, option_string=None): 27 | my_dict = {} 28 | for kv in values.split(','): 29 | k, v = kv.split('=') 30 | my_dict[k] = v 31 | setattr(namespace, self.dest, my_dict) 32 | 33 | def parse_args(): 34 | """Parse command line arguments.""" 35 | parser = argparse.ArgumentParser('train.py') 36 | add_arg = parser.add_argument 37 | add_arg('config', nargs='?', default='configs/hello.yaml') 38 | add_arg('-d', '--distributed', action='store_true') 39 | add_arg('-v', '--verbose', action='store_true') 40 | add_arg('--show-config', action='store_true') 41 | add_arg('--interactive', action='store_true') 42 | # Parameters which override the YAML file 43 | add_arg('--dropout', type=float, help='keep rate for dropout layers') 44 | add_arg('--optimizer', action=StoreDictKeyPair, help='optimizer parameters') 45 | add_arg('--batch-size', type=int, help='batch size for training') 46 | add_arg('--n-epochs', type=int, help='number of epochs to train') 47 | add_arg('--no-output', action='store_true', 48 | help='disable checkpointing and summary saving') 49 | add_arg('--hpo', action='store_true', help='Enable HPO FoM output') 50 | return parser.parse_args() 51 | 52 | def config_logging(verbose): 53 | log_format = '%(asctime)s %(levelname)s %(message)s' 54 | log_level = logging.DEBUG if verbose else logging.INFO 55 | logging.basicConfig(level=log_level, format=log_format) 56 | 57 | def init_workers(distributed=False): 58 | rank, 
n_ranks = 0, 1 59 | if distributed: 60 | hvd.init() 61 | rank, n_ranks = hvd.rank(), hvd.size() 62 | return rank, n_ranks 63 | 64 | def load_config(args): 65 | # Read base config from yaml file 66 | config_file = args.config 67 | with open(config_file) as f: 68 | config = yaml.load(f, Loader=yaml.FullLoader) 69 | 70 | # Override with command line arguments 71 | if args.dropout is not None: 72 | config['model']['dropout'] = args.dropout 73 | if args.batch_size is not None: 74 | config['training']['batch_size'] = args.batch_size 75 | if args.n_epochs is not None: 76 | config['training']['n_epochs'] = args.n_epochs 77 | if args.optimizer is not None: 78 | if 'name' in args.optimizer: 79 | config['optimizer']['name'] = args.optimizer['name'] 80 | if 'lr' in args.optimizer: 81 | config['optimizer']['lr'] = float(args.optimizer['lr'] ) 82 | if 'lr_scaling' in args.optimizer: 83 | config['optimizer']['lr_scaling'] = args.optimizer['lr_scaling'] 84 | if 'lr_warmup_epochs' in args.optimizer: 85 | config['training']['lr_warmup_epochs'] = int(args.optimizer['lr_warmup_epochs']) 86 | 87 | return config 88 | 89 | def get_basic_callbacks(distributed=False): 90 | cb = [] 91 | 92 | if distributed: 93 | #this is for broadcasting the initial model to all nodes 94 | cb.append(hvd.callbacks.BroadcastGlobalVariablesCallback(0)) 95 | 96 | #this is for averaging the reported metrics across all nodes 97 | cb.append(hvd.callbacks.MetricAverageCallback()) 98 | 99 | return cb 100 | 101 | def main(): 102 | """Main function""" 103 | 104 | # Initialization 105 | args = parse_args() 106 | rank, n_ranks = init_workers(args.distributed) 107 | 108 | # Load configuration 109 | config = load_config(args) 110 | train_config = config['training'] 111 | output_dir = os.path.expandvars(config['output_dir']) 112 | checkpoint_format = os.path.join(output_dir, 'checkpoints', 113 | 'checkpoint-{epoch}.h5') 114 | if rank==0 and not args.no_output: 115 | os.makedirs(output_dir, exist_ok=True) 116 | 117 | # 
Logging 118 | config_logging(verbose=args.verbose) 119 | logging.info('Initialized rank %i out of %i', rank, n_ranks) 120 | if args.show_config: 121 | logging.info('Command line config: %s', args) 122 | if rank == 0: 123 | logging.info('Job configuration: %s', config) 124 | if args.no_output: 125 | logging.info('Disabling job outputs') 126 | else: 127 | logging.info('Saving job outputs to %s', output_dir) 128 | 129 | # Configure session 130 | device_config = config.get('device', {}) 131 | configure_session(**device_config) 132 | 133 | # Load the data 134 | train_gen, valid_gen = get_datasets(batch_size=train_config['batch_size'], 135 | **config['data']) 136 | 137 | # Build the model 138 | model = get_model(**config['model']) 139 | # Configure optimizer 140 | opt = get_optimizer(n_ranks=n_ranks, **config['optimizer']) 141 | # Compile the model 142 | model.compile(loss=train_config['loss'], optimizer=opt, 143 | metrics=train_config['metrics']) 144 | if rank == 0: 145 | model.summary() 146 | 147 | # Prepare the training callbacks 148 | callbacks = get_basic_callbacks(args.distributed) 149 | 150 | # Learning rate warmup 151 | warmup_epochs = train_config.get('lr_warmup_epochs', 0) 152 | callbacks.append(hvd.callbacks.LearningRateWarmupCallback( 153 | warmup_epochs=warmup_epochs, verbose=1)) 154 | 155 | # Learning rate decay schedule 156 | for lr_schedule in train_config.get('lr_schedule', []): 157 | if rank == 0: 158 | logging.info('Adding LR schedule: %s', lr_schedule) 159 | callbacks.append(hvd.callbacks.LearningRateScheduleCallback(**lr_schedule)) 160 | 161 | # Checkpoint only from rank 0 162 | if rank == 0 and not args.no_output: 163 | os.makedirs(os.path.dirname(checkpoint_format), exist_ok=True) 164 | callbacks.append(keras.callbacks.ModelCheckpoint(checkpoint_format)) 165 | 166 | # Timing callback 167 | timing_callback = TimingCallback() 168 | callbacks.append(timing_callback) 169 | 170 | # Train the model 171 | train_steps_per_epoch = max([len(train_gen) // 
n_ranks, 1]) 172 | valid_steps_per_epoch = max([len(valid_gen) // n_ranks, 1]) 173 | history = model.fit_generator(train_gen, 174 | epochs=train_config['n_epochs'], 175 | steps_per_epoch=train_steps_per_epoch, 176 | validation_data=valid_gen, 177 | validation_steps=valid_steps_per_epoch, 178 | callbacks=callbacks, 179 | workers=4, verbose=2 if rank==0 else 0) 180 | 181 | # Logging and saving 182 | if rank == 0: 183 | # Print some best-found metrics 184 | if 'val_acc' in history.history.keys(): 185 | logging.info('Best validation accuracy: %.3f', 186 | max(history.history['val_acc'])) 187 | if 'val_top_k_categorical_accuracy' in history.history.keys(): 188 | logging.info('Best top-5 validation accuracy: %.3f', 189 | max(history.history['val_top_k_categorical_accuracy'])) 190 | logging.info('Average time per epoch: %.3f s', 191 | np.mean(timing_callback.times)) 192 | # Save training history 193 | if not args.no_output: 194 | np.savez(os.path.join(output_dir, 'history'), 195 | n_ranks=n_ranks, **history.history) 196 | 197 | # Drop to IPython interactive shell 198 | if args.interactive and (rank == 0): 199 | logging.info('Starting IPython interactive session') 200 | import IPython 201 | IPython.embed() 202 | 203 | if rank == 0: 204 | if args.hpo: 205 | print('FoM: ' + str(history.history['val_loss'][0])) 206 | logging.info('All done!') 207 | 208 | if __name__ == '__main__': 209 | main() 210 | -------------------------------------------------------------------------------- /utils/__init__.py: -------------------------------------------------------------------------------- 1 | """ 2 | """ 3 | -------------------------------------------------------------------------------- /utils/callbacks.py: -------------------------------------------------------------------------------- 1 | """ 2 | This module contains some utility callbacks for Keras training. 
3 | """ 4 | 5 | # System 6 | from time import time 7 | 8 | # Externals 9 | import keras 10 | 11 | class TimingCallback(keras.callbacks.Callback): 12 | """A Keras Callback which records the time of each epoch""" 13 | def __init__(self): 14 | self.times = [] 15 | 16 | def on_epoch_begin(self, epoch, logs={}): 17 | self.starttime = time() 18 | 19 | def on_epoch_end(self, epoch, logs={}): 20 | epoch_time = time() - self.starttime 21 | self.times.append(epoch_time) 22 | -------------------------------------------------------------------------------- /utils/device.py: -------------------------------------------------------------------------------- 1 | """ 2 | Hardware/device configuration 3 | """ 4 | 5 | # System 6 | import os 7 | 8 | # Externals 9 | import keras 10 | import tensorflow as tf 11 | 12 | def configure_session(intra_threads=32, inter_threads=2, 13 | blocktime=0, affinity='granularity=fine,compact,1,0', 14 | gpu=None): 15 | """Sets the thread knobs in the TF backend""" 16 | os.environ['KMP_BLOCKTIME'] = str(blocktime) 17 | os.environ['KMP_AFFINITY'] = affinity 18 | os.environ['OMP_NUM_THREADS'] = str(intra_threads) 19 | config = tf.ConfigProto( 20 | inter_op_parallelism_threads=inter_threads, 21 | intra_op_parallelism_threads=intra_threads 22 | ) 23 | if gpu is not None: 24 | config.gpu_options.visible_device_list = str(gpu) 25 | keras.backend.set_session(tf.Session(config=config)) 26 | -------------------------------------------------------------------------------- /utils/optimizers.py: -------------------------------------------------------------------------------- 1 | """ 2 | Utility code for constructing optimizers and scheduling learning rates. 3 | """ 4 | 5 | # System 6 | import math 7 | 8 | # Externals 9 | import keras 10 | import horovod.keras as hvd 11 | 12 | def get_optimizer(name, lr, lr_scaling='linear', n_ranks=1, **opt_args): 13 | """ 14 | Configure the optimizer and scale the learning rate by n_ranks. 
15 | """ 16 | # Scale the learning rate 17 | if lr_scaling == 'linear': 18 | lr = lr * n_ranks 19 | elif lr_scaling == 'sqrt': 20 | lr = lr * math.sqrt(n_ranks) 21 | 22 | # Construct the optimizer 23 | OptType = getattr(keras.optimizers, name) 24 | opt = OptType(lr=lr, **opt_args) 25 | 26 | # Distributed optimizer wrapper 27 | if n_ranks > 1: 28 | opt = hvd.DistributedOptimizer(opt) 29 | 30 | return opt 31 | --------------------------------------------------------------------------------
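For reference, the learning-rate scaling rule inside `get_optimizer` can be tried standalone. The sketch below restates only the scaling logic from `utils/optimizers.py` without the Keras/Horovod dependencies; the function name `scale_lr` is introduced here for illustration and does not exist in the repository.

```python
import math

def scale_lr(lr, n_ranks, lr_scaling='linear'):
    """Restates the LR scaling logic of utils/optimizers.py:get_optimizer."""
    if lr_scaling == 'linear':
        # Linear scaling: base rate grows proportionally with worker count
        return lr * n_ranks
    if lr_scaling == 'sqrt':
        # Square-root scaling: more conservative growth for large n_ranks
        return lr * math.sqrt(n_ranks)
    # Any other value leaves the base learning rate untouched
    return lr

# A base rate of 0.001 with linear scaling on 8 ranks becomes 0.008
print(scale_lr(0.001, 8, 'linear'))
```

Linear scaling matches the common recipe of growing the learning rate with the effective global batch size; the warmup callback added in `train.py` exists to ease the model into that larger rate.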