├── .gitignore
├── mlengine
│   ├── trainer
│   │   ├── __init__.py
│   │   └── task.py
│   ├── config.yaml
│   ├── README.md
│   ├── digits.py
│   └── digits.json
├── README.md
├── CONTRIBUTING.md
├── INSTALL.txt
├── LICENSE
├── tensorflowvisu.mplstyle
├── mnist_TF_layers.py
├── mnist_1.0_softmax.py
├── mnist_2.0_five_layers_sigmoid.py
├── mnist_2.1_five_layers_relu_lrdecay.py
├── mnist_2.2_five_layers_relu_lrdecay_dropout.py
├── mnist_3.0_convolutional.py
├── mnist_3.1_convolutional_bigger_dropout.py
├── mnist_4.0_batchnorm_five_layers_sigmoid.py
├── mnist_4.1_batchnorm_five_layers_relu.py
├── mnist_4.2_batchnorm_convolutional.py
├── tensorflowvisu.py
└── tensorflowvisu_digits.py
/.gitignore:
--------------------------------------------------------------------------------
1 | __pycache__
2 | data
3 | MNIST_data
4 | logs
5 | MNIST-data
6 | log
7 | checkpoints
8 |
--------------------------------------------------------------------------------
/mlengine/trainer/__init__.py:
--------------------------------------------------------------------------------
1 | # Licensed under the Apache License, Version 2.0 (the "License");
2 | # you may not use this file except in compliance with the License.
3 | # You may obtain a copy of the License at
4 | #
5 | # http://www.apache.org/licenses/LICENSE-2.0
6 | #
7 | # Unless required by applicable law or agreed to in writing, software
8 | # distributed under the License is distributed on an "AS IS" BASIS,
9 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
10 | # See the License for the specific language governing permissions and
11 | # limitations under the License.
12 | # ==============================================================================
13 |
--------------------------------------------------------------------------------
/mlengine/config.yaml:
--------------------------------------------------------------------------------
1 | # Licensed under the Apache License, Version 2.0 (the "License");
2 | # you may not use this file except in compliance with the License.
3 | # You may obtain a copy of the License at
4 | #
5 | # http://www.apache.org/licenses/LICENSE-2.0
6 | #
7 | # Unless required by applicable law or agreed to in writing, software
8 | # distributed under the License is distributed on an "AS IS" BASIS,
9 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
10 | # See the License for the specific language governing permissions and
11 | # limitations under the License.
12 |
13 | trainingInput:
14 | # Use a cluster with many workers and a few parameter servers.
15 | scaleTier: STANDARD_1
16 | #scaleTier: BASIC_GPU
17 | #scaleTier: PREMIUM_1
18 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | 
2 |
3 | This is support code for the codelab "[Tensorflow and deep learning - without a PhD](https://codelabs.developers.google.com/codelabs/cloud-tensorflow-mnist)"
4 |
5 | The presentation explaining the underlying concepts is [here](https://goo.gl/pHeXe7) and you will find codelab instructions to follow on its last slide. Do not forget to open the speaker notes in the presentation, a lot of the explanations are there.
6 |
7 | The lab takes 2.5 hours and walks you through the design and optimisation of a neural network for recognising handwritten digits, from the simplest possible solution all the way to a recognition accuracy above 99%. It covers dense and convolutional networks, as well as techniques such as learning rate decay and dropout.
8 |
9 | Installation instructions [here](INSTALL.txt). The short version is: install Python 3, then pip3 install tensorflow and matplotlib.
10 |
11 | ---
12 |
13 | *Disclaimer: This is not an official Google product but sample code provided for an educational purpose*
--------------------------------------------------------------------------------
/CONTRIBUTING.md:
--------------------------------------------------------------------------------
1 | # How to contribute
2 |
3 | We'd love to accept your patches and contributions to this project. There are
4 | just a few small guidelines you need to follow.
5 |
6 | ## Contributor License Agreement
7 |
8 | Contributions to any Google project must be accompanied by a Contributor License
9 | Agreement. This is necessary because you own the copyright to your changes, even
10 | after your contribution becomes part of this project. So this agreement simply
11 | gives us permission to use and redistribute your contributions as part of the
12 | project. Head over to <https://cla.developers.google.com/> to see your current
13 | agreements on file or to sign a new one.
14 |
15 | You generally only need to submit a CLA once, so if you've already submitted one
16 | (even if it was for a different project), you probably don't need to do it
17 | again.
18 |
19 | ## Code reviews
20 |
21 | All submissions, including submissions by project members, require review. We
22 | use GitHub pull requests for this purpose. Consult [GitHub Help] for more
23 | information on using pull requests.
24 |
25 | [GitHub Help]: https://help.github.com/articles/about-pull-requests/
--------------------------------------------------------------------------------
/INSTALL.txt:
--------------------------------------------------------------------------------
1 | Python 3 is recommended for this lab. Python 2 works as well if you adapt the installation instructions.
2 |
3 | Below are installation instructions for a straightforward pip install.
4 |
5 | If you are a power user with a specific Python environment (virtualenv, anaconda,
6 | docker), please visit tensorflow.org and follow the Python 3 instructions.
7 |
8 | MacOS:
9 | If you do not have it already, install git from https://git-scm.com/download/mac
10 | Install the latest version of python 3 from https://www.python.org/downloads/
11 | pip3 install --upgrade tensorflow
12 | pip3 install --upgrade matplotlib
13 |
14 | Ubuntu/Linux:
15 | sudo -H apt-get install git
16 | sudo -H apt-get install python3
17 | sudo -H apt-get install python3-matplotlib
18 | sudo -H apt-get install python3-pip
19 | sudo -H pip3 install --upgrade tensorflow
20 | # you might also need to upgrade matplotlib, the version pulled by
21 | # apt-get is sometimes stale (but comes with the gfx backend)
22 | sudo -H pip3 install --upgrade matplotlib
23 |
24 | Windows:
25 | Install Anaconda, Python 3 version: https://www.continuum.io/downloads#windows
26 | Anaconda comes with matplotlib built in.
27 | In the Anaconda shell type: pip install --upgrade tensorflow
28 | If you get the error "Could not find a version that satisfies the requirement (...)" try the following alternative:
29 | conda config --add channels conda-forge
30 | conda install tensorflow
31 |
32 | TEST YOUR INSTALLATION:
33 | git clone https://github.com/martin-gorner/tensorflow-mnist-tutorial.git
34 | cd tensorflow-mnist-tutorial
35 | python3 mnist_1.0_softmax.py
36 | => A window should appear displaying a graphical visualisation and you should also see training data in the terminal.
37 |
--------------------------------------------------------------------------------
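A quicker sanity check than running the full visualisation, not part of the original INSTALL.txt: confirm that both packages import and print their versions.

```python
# Minimal install check: confirms tensorflow and matplotlib import
# correctly and prints the installed versions.
import tensorflow as tf
import matplotlib

print("tensorflow:", tf.__version__)
print("matplotlib:", matplotlib.__version__)
```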
/tensorflowvisu.mplstyle:
--------------------------------------------------------------------------------
1 | # encoding: UTF-8
2 | # Copyright 2016 Google.com
3 | #
4 | # Licensed under the Apache License, Version 2.0 (the "License");
5 | # you may not use this file except in compliance with the License.
6 | # You may obtain a copy of the License at
7 | #
8 | # http://www.apache.org/licenses/LICENSE-2.0
9 | #
10 | # Unless required by applicable law or agreed to in writing, software
11 | # distributed under the License is distributed on an "AS IS" BASIS,
12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 | # See the License for the specific language governing permissions and
14 | # limitations under the License.
15 |
16 | text.color: 555555
17 |
18 | legend.fancybox: true
19 | legend.framealpha: 0.4
20 | legend.facecolor: white
21 | legend.edgecolor: white
22 |
23 | patch.facecolor: FF3333
24 |
25 | # good for 1080p output
26 | figure.subplot.left: 0.11
27 | figure.subplot.right: 0.89
28 | figure.subplot.bottom: 0.07
29 | figure.subplot.top: 0.93
30 | figure.subplot.wspace: 0.3
31 | figure.subplot.hspace: 0.2
32 | axes.titlesize: xx-large
33 | xtick.labelsize: x-large
34 | ytick.labelsize: x-large
35 | legend.fontsize: x-large
36 | axes.prop_cycle: cycler('color', ['4444FF', 'FF3333', '22AA22']) + cycler('linewidth',[1, 3, 1])
37 |
38 | xtick.major.size: 0
39 | ytick.major.size: 0
40 | xtick.major.pad: 5
41 | ytick.major.pad: 5
42 |
43 | # alt settings for for 1300x800 screen output
44 | #figure.subplot.left: 0.07
45 | #figure.subplot.right: 0.93
46 | #figure.subplot.bottom: 0.07
47 | #figure.subplot.top: 0.93
48 | #figure.subplot.wspace: 0.2
49 | #figure.subplot.hspace: 0.2
50 | #axes.titlesize: large
51 | #xtick.labelsize: medium
52 | #ytick.labelsize: medium
53 | #legend.fontsize: medium
54 | #axes.prop_cycle: cycler('color', ['4444FF', 'FF3333', '22AA22']) + cycler('linewidth',[0.7, 2, 0.7])
55 |
56 |
--------------------------------------------------------------------------------
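The style sheet above is consumed by the visualisation code in tensorflowvisu.py, which is not included in this excerpt. As a sketch of how such a file is typically loaded, assuming nothing beyond matplotlib's standard style API:

```python
# Sketch: loading a .mplstyle file by path with matplotlib's style API.
# tensorflowvisu.py is not shown here, so this is an assumed usage pattern.
import matplotlib.pyplot as plt

plt.style.use("tensorflowvisu.mplstyle")  # path to the style file above
plt.plot([0, 1, 2], [0, 1, 4])            # picks up the colour cycle and line widths
plt.title("style check")
plt.show()
```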
/mlengine/README.md:
--------------------------------------------------------------------------------
1 | **All scripts assume you are in the mlengine folder.**
2 | ## Train locally
3 | ```bash
4 | python trainer/task.py
5 | ```
6 | or
7 | ```bash
8 | gcloud ml-engine local train --module-name trainer.task --package-path trainer
9 | ```
10 | ## Train in the cloud
11 | (replace jobXXX with a unique job name and <bucket> with a GCS bucket you own)
12 | ```bash
13 | gcloud ml-engine jobs submit training jobXXX --job-dir gs://<bucket>/jobs/jobXXX --project <project> --config=config.yaml --module-name trainer.task --package-path trainer
14 | ```
15 | ## Predictions from the cloud
16 | Use the Cloud ML Engine UI to create a model and a version from
17 | the saved data from your training run.
18 | You will find it in folder:
19 |
20 | gs://<bucket>/jobs/jobXXX/export/Servo/XXXXXXXXXX
21 |
22 | Set your version of the model as the default version, then
23 | create the JSON payload. You can use the script:
24 | ```bash
25 | python digits.py > digits.json
26 | ```
27 | Then call the online predictions service, replacing <model-name> with the name you have assigned:
28 | ```bash
29 | gcloud ml-engine predict --model <model-name> --json-instances digits.json
30 | ```
31 | It should return a perfect scorecard:
32 |
33 | | CLASSES | PREDICTIONS |
34 | | ------------- | ------------- |
35 | | 8 | [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0] |
36 | | 7 | [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0] |
37 | | 7 | [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0] |
38 | | 5 | [0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0] |
39 | | 5 | [0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0] |
40 | ## Local predictions
41 | You can also simulate the prediction service locally; replace XXXXX with the number of your saved model:
42 | ```bash
43 | gcloud ml-engine local predict --model-dir checkpoints/export/Servo/XXXXX --json-instances digits.json
44 | ```
45 |
46 | ---
47 | ### Misc.
48 | If you want to experiment with TF Records, the standard TensorFlow
49 | data format, you can run this script (available in the TensorFlow distribution)
50 | to reformat the MNIST dataset into TF Records. It is not necessary for this sample though.
51 |
52 | ```bash
53 | python <path-to-tensorflow-sources>/tensorflow/examples/how_tos/reading_data/convert_to_records.py --directory=data --validation_size=0
54 | ```
55 |
56 |
--------------------------------------------------------------------------------
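For reference, the payload in digits.json is one JSON instance per line, keyed by the 'image' input that serving_input_fn in trainer/task.py declares (a uint8 tensor of shape [None, 28, 28]). digits.py itself is not included in this excerpt, so the sketch below is an assumption about its output format, with an all-zero image standing in for real digit pixels:

```python
# Hypothetical sketch of what digits.py emits: one JSON instance per line,
# matching the 'image' input of serving_input_fn in trainer/task.py.
import json

image = [[0] * 28 for _ in range(28)]  # placeholder 28x28 image; the real digits.py embeds actual digit pixels
print(json.dumps({"image": image}))    # redirect to digits.json as shown above
```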
/mnist_TF_layers.py:
--------------------------------------------------------------------------------
1 | # Licensed under the Apache License, Version 2.0 (the "License");
2 | # you may not use this file except in compliance with the License.
3 | # You may obtain a copy of the License at
4 | #
5 | # http://www.apache.org/licenses/LICENSE-2.0
6 | #
7 | # Unless required by applicable law or agreed to in writing, software
8 | # distributed under the License is distributed on an "AS IS" BASIS,
9 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
10 | # See the License for the specific language governing permissions and
11 | # limitations under the License.
12 |
13 | import tensorflow as tf
14 | from tensorflow.contrib import learn
15 | from tensorflow.contrib import layers
16 | from tensorflow.contrib import metrics
17 | from tensorflow.contrib import framework
18 | from tensorflow.contrib.learn import MetricSpec
19 | from tensorflow.python.platform import tf_logging as logging
20 | from tensorflow.contrib.learn.python.learn.datasets.mnist import read_data_sets
21 | import math
22 | from mlengine.digits import test_digits
23 | logging.set_verbosity(logging.INFO)
24 |
25 | # This sample shows how to write Tensorflow models using the high-level layers API
26 | # in Tensorflow. Using high-level APIs, you do not have to define placeholders and
27 | # variables yourself. Also, if you use the Estimator interface, you will not need
28 | # to write your own training loop.
29 | #
30 | # WARNING: tensorflow.contrib.learn.* APIs are still experimental and can change in breaking ways
31 | # as they mature. API stability will be ensured when tensorflow.contrib.learn becomes tensorflow.learn
32 |
33 | # Download images and labels into mnist.test (10K images+labels) and mnist.train (60K images+labels)
34 | mnist = read_data_sets("data", one_hot=False, reshape=True, validation_size=0)
35 |
36 | # In memory training data for this simple case.
37 | # When data is too large to fit in memory, use Tensorflow queues.
38 | def train_data_input_fn():
39 | return tf.train.shuffle_batch([tf.constant(mnist.train.images), tf.constant(mnist.train.labels)],
40 | batch_size=100, capacity=1100, min_after_dequeue=1000, enqueue_many=True)
41 |
42 | # Eval data is an in-memory constant here.
43 | def eval_data_input_fn():
44 | return tf.constant(mnist.test.images), tf.constant(mnist.test.labels)
45 |
46 |
47 | # Test data for a predictions run
48 | def predict_input_fn():
49 | return tf.cast(tf.constant(test_digits), tf.float32)
50 |
51 |
52 | # Model loss (not needed in INFER mode)
53 | def conv_model_loss(Ylogits, Y_, mode):
54 | return tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=Ylogits, labels=tf.one_hot(Y_,10))) * 100 \
55 | if mode == learn.ModeKeys.TRAIN or mode == learn.ModeKeys.EVAL else None
56 |
57 |
58 | # Model optimiser (only needed in TRAIN mode)
59 | def conv_model_train_op(loss, mode):
60 | return layers.optimize_loss(loss, framework.get_global_step(), learning_rate=0.003, optimizer="Adam",
61 | # to remove learning rate decay, comment the next line
62 | learning_rate_decay_fn=lambda lr, step: 0.0001 + tf.train.exponential_decay(lr, step, -2000, math.e)
63 | ) if mode == learn.ModeKeys.TRAIN else None
64 |
65 |
66 | # Model evaluation metric (not needed in INFER mode)
67 | def conv_model_eval_metrics(classes, Y_, mode):
68 | # You can name the fields of your metrics dictionary as you like.
69 | return {'accuracy': metrics.accuracy(classes, Y_)} \
70 | if mode == learn.ModeKeys.TRAIN or mode == learn.ModeKeys.EVAL else None
71 |
72 | # Model
73 | def conv_model(X, Y_, mode):
74 | XX = tf.reshape(X, [-1, 28, 28, 1])
75 | biasInit = tf.constant_initializer(0.1, dtype=tf.float32)
76 | Y1 = layers.conv2d(XX, num_outputs=6, kernel_size=[6, 6], biases_initializer=biasInit)
77 | Y2 = layers.conv2d(Y1, num_outputs=12, kernel_size=[5, 5], stride=2, biases_initializer=biasInit)
78 | Y3 = layers.conv2d(Y2, num_outputs=24, kernel_size=[4, 4], stride=2, biases_initializer=biasInit)
79 | Y4 = layers.flatten(Y3)
80 | Y5 = layers.relu(Y4, 200, biases_initializer=biasInit)
81 | # to deactivate dropout on the dense layer, set keep_prob=1
82 | Y5d = layers.dropout(Y5, keep_prob=0.75, noise_shape=None, is_training=mode==learn.ModeKeys.TRAIN)
83 | Ylogits = layers.linear(Y5d, 10)
84 | predict = tf.nn.softmax(Ylogits)
85 | classes = tf.cast(tf.argmax(predict, 1), tf.uint8)
86 |
87 | loss = conv_model_loss(Ylogits, Y_, mode)
88 | train_op = conv_model_train_op(loss, mode)
89 | eval_metrics = conv_model_eval_metrics(classes, Y_, mode)
90 |
91 | return learn.ModelFnOps(
92 | mode=mode,
93 | # You can name the fields of your predictions dictionary as you like.
94 | predictions={"predictions": predict, "classes": classes},
95 | loss=loss,
96 | train_op=train_op,
97 | eval_metric_ops=eval_metrics
98 | )
99 |
100 | # Configuration to save a checkpoint every 1000 steps.
101 | training_config = tf.contrib.learn.RunConfig(save_checkpoints_secs=None, save_checkpoints_steps=1000, gpu_memory_fraction=0.9)
102 |
103 | estimator=learn.Estimator(model_fn=conv_model, model_dir="checkpoints", config=training_config)
104 |
105 | # Trains for 10000 additional steps saving checkpoints on a regular basis. The next
106 | # training will resume from the checkpoint unless you delete the "checkpoints" folder.
107 | estimator.fit(input_fn=train_data_input_fn, steps=10000)
108 | estimator.evaluate(input_fn=eval_data_input_fn, steps=1)
109 | digits = estimator.predict(input_fn=predict_input_fn)
110 | for digit in digits:
111 | print(str(digit['classes']), str(digit['predictions']))
--------------------------------------------------------------------------------
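The learning_rate_decay_fn above relies on a small trick: tf.train.exponential_decay computes lr * decay_rate^(step/decay_steps), so passing decay_steps=-2000 and decay_rate=e yields 0.003 * e^(-step/2000), floored at 0.0001. A plain-Python sketch of the resulting schedule, essentially the same decay as in mnist_2.1_five_layers_relu_lrdecay.py (which uses max-min rather than max as the coefficient):

```python
import math

# Plain-Python view of the schedule produced by
# 0.0001 + tf.train.exponential_decay(0.003, step, -2000, math.e)
# = 0.0001 + 0.003 * e**(-step/2000)
def decayed_lr(step, max_lr=0.003, min_lr=0.0001, decay_speed=2000.0):
    return min_lr + max_lr * math.exp(-step / decay_speed)

for step in (0, 1000, 2000, 10000):
    print(step, round(decayed_lr(step), 6))  # decays from ~0.0031 towards 0.0001
```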
/mnist_1.0_softmax.py:
--------------------------------------------------------------------------------
1 | # encoding: UTF-8
2 | # Copyright 2016 Google.com
3 | #
4 | # Licensed under the Apache License, Version 2.0 (the "License");
5 | # you may not use this file except in compliance with the License.
6 | # You may obtain a copy of the License at
7 | #
8 | # http://www.apache.org/licenses/LICENSE-2.0
9 | #
10 | # Unless required by applicable law or agreed to in writing, software
11 | # distributed under the License is distributed on an "AS IS" BASIS,
12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 | # See the License for the specific language governing permissions and
14 | # limitations under the License.
15 |
16 | import tensorflow as tf
17 | import tensorflowvisu
18 | from tensorflow.contrib.learn.python.learn.datasets.mnist import read_data_sets
19 | tf.set_random_seed(0)
20 |
21 | # neural network with 1 layer of 10 softmax neurons
22 | #
23 | # · · · · · · · · · · (input data, flattened pixels) X [batch, 784] # 784 = 28 * 28
24 | # \x/x\x/x\x/x\x/x\x/ -- fully connected layer (softmax) W [784, 10] b[10]
25 | # · · · · · · · · Y [batch, 10]
26 |
27 | # The model is:
28 | #
29 | # Y = softmax( X * W + b)
30 | # X: matrix for 100 grayscale images of 28x28 pixels, flattened (there are 100 images in a mini-batch)
31 | # W: weight matrix with 784 lines and 10 columns
32 | # b: bias vector with 10 dimensions
33 | # +: add with broadcasting: adds the vector to each line of the matrix (numpy)
34 | # softmax(matrix) applies softmax on each line
35 | # softmax(line) applies an exp to each value then divides by the norm of the resulting line
36 | # Y: output matrix with 100 lines and 10 columns
37 |
38 | # Download images and labels into mnist.test (10K images+labels) and mnist.train (60K images+labels)
39 | mnist = read_data_sets("data", one_hot=True, reshape=False, validation_size=0)
40 |
41 | # input X: 28x28 grayscale images, the first dimension (None) will index the images in the mini-batch
42 | X = tf.placeholder(tf.float32, [None, 28, 28, 1])
43 | # correct answers will go here
44 | Y_ = tf.placeholder(tf.float32, [None, 10])
45 | # weights W[784, 10] 784=28*28
46 | W = tf.Variable(tf.zeros([784, 10]))
47 | # biases b[10]
48 | b = tf.Variable(tf.zeros([10]))
49 |
50 | # flatten the images into a single line of pixels
51 | # -1 in the shape definition means "the only possible dimension that will preserve the number of elements"
52 | XX = tf.reshape(X, [-1, 784])
53 |
54 | # The model
55 | Y = tf.nn.softmax(tf.matmul(XX, W) + b)
56 |
57 | # loss function: cross-entropy = - sum( Y_i * log(Yi) )
58 | # Y: the computed output vector
59 | # Y_: the desired output vector
60 |
61 | # cross-entropy
62 | # log takes the log of each element, * multiplies the tensors element by element
63 | # reduce_mean will add all the components in the tensor
64 | # so here we end up with the total cross-entropy for all images in the batch
65 | cross_entropy = -tf.reduce_mean(Y_ * tf.log(Y)) * 1000.0  # normalised for batches of 100 images:
66 |                                                           # reduce_mean divides by 100*10=1000, so *1000 recovers the batch total
67 |                                                           # (the extra *10 undoes the unwanted division by the 10 vector components)
67 |
68 | # accuracy of the trained model, between 0 (worst) and 1 (best)
69 | correct_prediction = tf.equal(tf.argmax(Y, 1), tf.argmax(Y_, 1))
70 | accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
71 |
72 | # training, learning rate = 0.005
73 | train_step = tf.train.GradientDescentOptimizer(0.005).minimize(cross_entropy)
74 |
75 | # matplotlib visualisation
76 | allweights = tf.reshape(W, [-1])
77 | allbiases = tf.reshape(b, [-1])
78 | I = tensorflowvisu.tf_format_mnist_images(X, Y, Y_) # assembles 10x10 images by default
79 | It = tensorflowvisu.tf_format_mnist_images(X, Y, Y_, 1000, lines=25) # 1000 images on 25 lines
80 | datavis = tensorflowvisu.MnistDataVis()
81 |
82 | # init
83 | init = tf.global_variables_initializer()
84 | sess = tf.Session()
85 | sess.run(init)
86 |
87 |
88 | # You can call this function in a loop to train the model, 100 images at a time
89 | def training_step(i, update_test_data, update_train_data):
90 |
91 | # training on batches of 100 images with 100 labels
92 | batch_X, batch_Y = mnist.train.next_batch(100)
93 |
94 | # compute training values for visualisation
95 | if update_train_data:
96 | a, c, im, w, b = sess.run([accuracy, cross_entropy, I, allweights, allbiases], feed_dict={X: batch_X, Y_: batch_Y})
97 | datavis.append_training_curves_data(i, a, c)
98 | datavis.append_data_histograms(i, w, b)
99 | datavis.update_image1(im)
100 | print(str(i) + ": accuracy:" + str(a) + " loss: " + str(c))
101 |
102 | # compute test values for visualisation
103 | if update_test_data:
104 | a, c, im = sess.run([accuracy, cross_entropy, It], feed_dict={X: mnist.test.images, Y_: mnist.test.labels})
105 | datavis.append_test_curves_data(i, a, c)
106 | datavis.update_image2(im)
107 | print(str(i) + ": ********* epoch " + str(i*100//mnist.train.images.shape[0]+1) + " ********* test accuracy:" + str(a) + " test loss: " + str(c))
108 |
109 | # the backpropagation training step
110 | sess.run(train_step, feed_dict={X: batch_X, Y_: batch_Y})
111 |
112 |
113 | datavis.animate(training_step, iterations=2000+1, train_data_update_freq=10, test_data_update_freq=50, more_tests_at_start=True)
114 |
115 | # to save the animation as a movie, add save_movie=True as an argument to datavis.animate
116 | # to disable the visualisation use the following line instead of the datavis.animate line
117 | # for i in range(2000+1): training_step(i, i % 50 == 0, i % 10 == 0)
118 |
119 | print("max test accuracy: " + str(datavis.get_max_test_accuracy()))
120 |
121 | # final max test accuracy = 0.9268 (10K iterations). Accuracy should peak above 0.92 in the first 2000 iterations.
122 |
--------------------------------------------------------------------------------
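The softmax and cross-entropy formulas described in the comments above, written out in NumPy as an illustration only (with a max-shift added for numerical stability, which the comments do not mention):

```python
import numpy as np

# softmax(line): exp of each value, divided by the sum of the resulting line
def softmax(logits):
    e = np.exp(logits - logits.max(axis=1, keepdims=True))  # shift for stability
    return e / e.sum(axis=1, keepdims=True)

Y = softmax(np.array([[2.0, 1.0, 0.1]]))   # computed probabilities
Y_ = np.array([[1.0, 0.0, 0.0]])           # one-hot "correct answer"
print(Y, -np.sum(Y_ * np.log(Y)))          # cross-entropy = -sum(Y_i * log(Yi))
```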
/mlengine/trainer/task.py:
--------------------------------------------------------------------------------
1 | # Licensed under the Apache License, Version 2.0 (the "License");
2 | # you may not use this file except in compliance with the License.
3 | # You may obtain a copy of the License at
4 | #
5 | # http://www.apache.org/licenses/LICENSE-2.0
6 | #
7 | # Unless required by applicable law or agreed to in writing, software
8 | # distributed under the License is distributed on an "AS IS" BASIS,
9 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
10 | # See the License for the specific language governing permissions and
11 | # limitations under the License.
12 |
13 | import tensorflow as tf
14 | from tensorflow.contrib import learn
15 | from tensorflow.contrib import layers
16 | from tensorflow.contrib import metrics
17 | from tensorflow.contrib import framework
18 | from tensorflow.contrib.learn.python.learn import learn_runner
19 | from tensorflow.contrib.learn.python.learn.utils import saved_model_export_utils
20 | from tensorflow.contrib.learn.python.learn.utils import input_fn_utils
21 | from tensorflow.python.platform import tf_logging as logging
22 | from tensorflow.examples.tutorials.mnist import input_data
23 | import argparse
24 | import math
25 | import sys
26 | logging.set_verbosity(logging.INFO)
27 |
28 | # WARNING: tensorflow.contrib.learn.* APIs are still experimental and can change in breaking ways
29 | # as they mature. API stability will be ensured when tensorflow.contrib.learn becomes tensorflow.learn
30 |
31 | #
32 | # To run this: see README.md
33 | #
34 |
35 |
36 | # Called when the model is deployed for online predictions on Cloud ML Engine.
37 | def serving_input_fn():
38 | inputs = {'image': tf.placeholder(tf.uint8, [None, 28, 28])}
39 | # Here, you can transform the data received from the API call
40 | features = [tf.cast(inputs['image'], tf.float32)]
41 | return input_fn_utils.InputFnOps(features, None, inputs)
42 |
43 |
44 | # In memory training data for this simple case.
45 | # When data is too large to fit in memory, use Tensorflow queues.
46 | def train_data_input_fn(mnist):
47 | return tf.train.shuffle_batch([tf.constant(mnist.train.images), tf.constant(mnist.train.labels)],
48 | batch_size=100, capacity=1100, min_after_dequeue=1000, enqueue_many=True)
49 |
50 |
51 | # Eval data is an in-memory constant here.
52 | def eval_data_input_fn(mnist):
53 | return tf.constant(mnist.test.images), tf.constant(mnist.test.labels)
54 |
55 |
56 | # Model loss (not needed in INFER mode)
57 | def conv_model_loss(Ylogits, Y_, mode):
58 | return tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=Ylogits, labels=tf.one_hot(Y_,10))) * 100 \
59 | if mode == learn.ModeKeys.TRAIN or mode == learn.ModeKeys.EVAL else None
60 |
61 |
62 | # Model optimiser (only needed in TRAIN mode)
63 | def conv_model_train_op(loss, mode):
64 | return layers.optimize_loss(loss, framework.get_global_step(), learning_rate=0.003, optimizer="Adam",
65 | # to remove learning rate decay, comment the next line
66 | learning_rate_decay_fn=lambda lr, step: 0.0001 + tf.train.exponential_decay(lr, step, -2000, math.e)
67 | ) if mode == learn.ModeKeys.TRAIN else None
68 |
69 |
70 |
71 | # Model evaluation metric (not needed in INFER mode)
72 | def conv_model_eval_metrics(classes, Y_, mode):
73 | # You can name the fields of your metrics dictionary as you like.
74 | return {'accuracy': metrics.accuracy(classes, Y_)} \
75 | if mode == learn.ModeKeys.TRAIN or mode == learn.ModeKeys.EVAL else None
76 |
77 | # Model
78 | def conv_model(X, Y_, mode):
79 | XX = tf.reshape(X, [-1, 28, 28, 1])
80 | biasInit = tf.constant_initializer(0.1, dtype=tf.float32)
81 | Y1 = layers.conv2d(XX, num_outputs=6, kernel_size=[6, 6], biases_initializer=biasInit)
82 | Y2 = layers.conv2d(Y1, num_outputs=12, kernel_size=[5, 5], stride=2, biases_initializer=biasInit)
83 | Y3 = layers.conv2d(Y2, num_outputs=24, kernel_size=[4, 4], stride=2, biases_initializer=biasInit)
84 | Y4 = layers.flatten(Y3)
85 | Y5 = layers.relu(Y4, 200, biases_initializer=biasInit)
86 | # to deactivate dropout on the dense layer, set keep_prob=1
87 | Y5d = layers.dropout(Y5, keep_prob=0.75, noise_shape=None, is_training=mode==learn.ModeKeys.TRAIN)
88 | Ylogits = layers.linear(Y5d, 10)
89 | predict = tf.nn.softmax(Ylogits)
90 | classes = tf.cast(tf.argmax(predict, 1), tf.uint8)
91 |
92 | loss = conv_model_loss(Ylogits, Y_, mode)
93 | train_op = conv_model_train_op(loss, mode)
94 | eval_metrics = conv_model_eval_metrics(classes, Y_, mode)
95 |
96 | return learn.ModelFnOps(
97 | mode=mode,
98 | # You can name the fields of your predictions dictionary as you like.
99 | predictions={"predictions": predict, "classes": classes},
100 | loss=loss,
101 | train_op=train_op,
102 | eval_metric_ops=eval_metrics
103 | )
104 |
105 | # Configuration to save a checkpoint every 1000 steps.
106 | training_config = tf.contrib.learn.RunConfig(save_checkpoints_secs=None, save_checkpoints_steps=1000, gpu_memory_fraction=0.9)
107 |
108 | # This will export a model at every checkpoint, including the transformations needed for online predictions.
109 | export_strategy=saved_model_export_utils.make_export_strategy(export_input_fn=serving_input_fn)
110 |
111 |
112 | # The Experiment is an Estimator with data loading functions and other parameters
113 | def experiment_fn_with_params(output_dir, data, **kwargs):
114 | ITERATIONS = 10000
115 | mnist = input_data.read_data_sets(data) # loads training and eval data in memory
116 | return learn.Experiment(
117 | estimator=learn.Estimator(model_fn=conv_model, model_dir=output_dir, config=training_config),
118 | train_input_fn=lambda: train_data_input_fn(mnist),
119 | eval_input_fn=lambda: eval_data_input_fn(mnist),
120 | train_steps=ITERATIONS,
121 | eval_steps=1,
122 | min_eval_frequency=1000,
123 | export_strategies=export_strategy
124 | )
125 |
126 |
127 | def main(argv):
128 | parser = argparse.ArgumentParser()
129 | # You must accept a --job-dir argument when running on Cloud ML Engine. It specifies where checkpoints should be saved.
130 | # You can define additional user arguments which will have to be specified after an empty arg -- on the command line:
131 | # gcloud ml-engine jobs submit training jobXXX --job-dir=... --ml-engine-args -- --user-args
132 | parser.add_argument('--job-dir', default="checkpoints", help='GCS or local path where to store training checkpoints')
133 | args = parser.parse_args()
134 | arguments = args.__dict__
135 | arguments['data'] = "data" # Hard-coded here: training data will be downloaded to folder 'data'.
136 |
137 | # learn_runner needs an experiment function with a single parameter: the output directory.
138 | # Here we pass additional command line arguments through a closure.
139 | output_dir = arguments.pop('job_dir')
140 | experiment_fn = lambda output_dir: experiment_fn_with_params(output_dir, **arguments)
141 | learn_runner.run(experiment_fn, output_dir)
142 |
143 |
144 | if __name__ == '__main__':
145 | main(sys.argv)
146 |
--------------------------------------------------------------------------------
/mnist_2.0_five_layers_sigmoid.py:
--------------------------------------------------------------------------------
1 | # encoding: UTF-8
2 | # Copyright 2016 Google.com
3 | #
4 | # Licensed under the Apache License, Version 2.0 (the "License");
5 | # you may not use this file except in compliance with the License.
6 | # You may obtain a copy of the License at
7 | #
8 | # http://www.apache.org/licenses/LICENSE-2.0
9 | #
10 | # Unless required by applicable law or agreed to in writing, software
11 | # distributed under the License is distributed on an "AS IS" BASIS,
12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 | # See the License for the specific language governing permissions and
14 | # limitations under the License.
15 |
16 | import tensorflow as tf
17 | import tensorflowvisu
18 | from tensorflow.contrib.learn.python.learn.datasets.mnist import read_data_sets
19 | tf.set_random_seed(0)
20 |
21 | # neural network with 5 layers
22 | #
23 | # · · · · · · · · · · (input data, flattened pixels) X [batch, 784] # 784 = 28*28
24 | # \x/x\x/x\x/x\x/x\x/ -- fully connected layer (sigmoid) W1 [784, 200] B1[200]
25 | # · · · · · · · · · Y1 [batch, 200]
26 | # \x/x\x/x\x/x\x/ -- fully connected layer (sigmoid) W2 [200, 100] B2[100]
27 | # · · · · · · · Y2 [batch, 100]
28 | # \x/x\x/x\x/ -- fully connected layer (sigmoid) W3 [100, 60] B3[60]
29 | # · · · · · Y3 [batch, 60]
30 | # \x/x\x/ -- fully connected layer (sigmoid) W4 [60, 30] B4[30]
31 | # · · · Y4 [batch, 30]
32 | # \x/ -- fully connected layer (softmax) W5 [30, 10] B5[10]
33 | # · Y5 [batch, 10]
34 |
35 | # Download images and labels into mnist.test (10K images+labels) and mnist.train (60K images+labels)
36 | mnist = read_data_sets("data", one_hot=True, reshape=False, validation_size=0)
37 |
38 | # input X: 28x28 grayscale images, the first dimension (None) will index the images in the mini-batch
39 | X = tf.placeholder(tf.float32, [None, 28, 28, 1])
40 | # correct answers will go here
41 | Y_ = tf.placeholder(tf.float32, [None, 10])
42 |
43 | # five layers and their number of neurons (the last layer has 10 softmax neurons)
44 | L = 200
45 | M = 100
46 | N = 60
47 | O = 30
48 | # Weights initialised with small random values between -0.2 and +0.2
49 | # When using RELUs, make sure biases are initialised with small *positive* values for example 0.1 = tf.ones([K])/10
50 | W1 = tf.Variable(tf.truncated_normal([784, L], stddev=0.1)) # 784 = 28 * 28
51 | B1 = tf.Variable(tf.zeros([L]))
52 | W2 = tf.Variable(tf.truncated_normal([L, M], stddev=0.1))
53 | B2 = tf.Variable(tf.zeros([M]))
54 | W3 = tf.Variable(tf.truncated_normal([M, N], stddev=0.1))
55 | B3 = tf.Variable(tf.zeros([N]))
56 | W4 = tf.Variable(tf.truncated_normal([N, O], stddev=0.1))
57 | B4 = tf.Variable(tf.zeros([O]))
58 | W5 = tf.Variable(tf.truncated_normal([O, 10], stddev=0.1))
59 | B5 = tf.Variable(tf.zeros([10]))
60 |
61 | # The model
62 | XX = tf.reshape(X, [-1, 784])
63 | Y1 = tf.nn.sigmoid(tf.matmul(XX, W1) + B1)
64 | Y2 = tf.nn.sigmoid(tf.matmul(Y1, W2) + B2)
65 | Y3 = tf.nn.sigmoid(tf.matmul(Y2, W3) + B3)
66 | Y4 = tf.nn.sigmoid(tf.matmul(Y3, W4) + B4)
67 | Ylogits = tf.matmul(Y4, W5) + B5
68 | Y = tf.nn.softmax(Ylogits)
69 |
70 | # cross-entropy loss function (= -sum(Y_i * log(Yi)) ), normalised for batches of 100 images
71 | # TensorFlow provides the softmax_cross_entropy_with_logits function to avoid numerical stability
72 | # problems with log(0) which is NaN
73 | cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=Ylogits, labels=Y_)
74 | cross_entropy = tf.reduce_mean(cross_entropy)*100
75 |
76 | # accuracy of the trained model, between 0 (worst) and 1 (best)
77 | correct_prediction = tf.equal(tf.argmax(Y, 1), tf.argmax(Y_, 1))
78 | accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
79 |
80 | # matplotlib visualisation
81 | allweights = tf.concat([tf.reshape(W1, [-1]), tf.reshape(W2, [-1]), tf.reshape(W3, [-1]), tf.reshape(W4, [-1]), tf.reshape(W5, [-1])], 0)
82 | allbiases = tf.concat([tf.reshape(B1, [-1]), tf.reshape(B2, [-1]), tf.reshape(B3, [-1]), tf.reshape(B4, [-1]), tf.reshape(B5, [-1])], 0)
83 | I = tensorflowvisu.tf_format_mnist_images(X, Y, Y_)
84 | It = tensorflowvisu.tf_format_mnist_images(X, Y, Y_, 1000, lines=25)
85 | datavis = tensorflowvisu.MnistDataVis()
86 |
87 | # training step, learning rate = 0.003
88 | learning_rate = 0.003
89 | train_step = tf.train.AdamOptimizer(learning_rate).minimize(cross_entropy)
90 |
91 | # init
92 | init = tf.global_variables_initializer()
93 | sess = tf.Session()
94 | sess.run(init)
95 |
96 |
97 | # You can call this function in a loop to train the model, 100 images at a time
98 | def training_step(i, update_test_data, update_train_data):
99 |
100 | # training on batches of 100 images with 100 labels
101 | batch_X, batch_Y = mnist.train.next_batch(100)
102 |
103 | # compute training values for visualisation
104 | if update_train_data:
105 | a, c, im, w, b = sess.run([accuracy, cross_entropy, I, allweights, allbiases], {X: batch_X, Y_: batch_Y})
106 | print(str(i) + ": accuracy:" + str(a) + " loss: " + str(c) + " (lr:" + str(learning_rate) + ")")
107 | datavis.append_training_curves_data(i, a, c)
108 | datavis.update_image1(im)
109 | datavis.append_data_histograms(i, w, b)
110 |
111 | # compute test values for visualisation
112 | if update_test_data:
113 | a, c, im = sess.run([accuracy, cross_entropy, It], {X: mnist.test.images, Y_: mnist.test.labels})
114 | print(str(i) + ": ********* epoch " + str(i*100//mnist.train.images.shape[0]+1) + " ********* test accuracy:" + str(a) + " test loss: " + str(c))
115 | datavis.append_test_curves_data(i, a, c)
116 | datavis.update_image2(im)
117 |
118 | # the backpropagation training step
119 | sess.run(train_step, {X: batch_X, Y_: batch_Y})
120 |
121 | datavis.animate(training_step, iterations=10000+1, train_data_update_freq=20, test_data_update_freq=100, more_tests_at_start=True)
122 |
123 | # to save the animation as a movie, add save_movie=True as an argument to datavis.animate
124 | # to disable the visualisation use the following line instead of the datavis.animate line
125 | # for i in range(10000+1): training_step(i, i % 100 == 0, i % 20 == 0)
126 |
127 | print("max test accuracy: " + str(datavis.get_max_test_accuracy()))
128 |
129 | # Some results to expect:
130 | # (In all runs, if sigmoids are used, all biases are initialised at 0, if RELUs are used,
131 | # all biases are initialised at 0.1 apart from the last one which is initialised at 0.)
132 |
133 | ## learning rate = 0.003, 10K iterations
134 | # final test accuracy = 0.9788 (sigmoid - slow start, training cross-entropy not stabilised in the end)
135 | # final test accuracy = 0.9825 (relu - above 0.97 in the first 1500 iterations but noisy curves)
136 |
137 | ## now with learning rate = 0.0001, 10K iterations
138 | # final test accuracy = 0.9722 (relu - slow but smooth curve, would have gone higher in 20K iterations)
139 |
140 | ## decaying learning rate from 0.003 to 0.0001 decay_speed 2000, 10K iterations
141 | # final test accuracy = 0.9746 (sigmoid - training cross-entropy not stabilised)
142 | # final test accuracy = 0.9824 (relu - training set fully learned, test accuracy stable)
143 |
--------------------------------------------------------------------------------
/mnist_2.1_five_layers_relu_lrdecay.py:
--------------------------------------------------------------------------------
1 | # encoding: UTF-8
2 | # Copyright 2016 Google.com
3 | #
4 | # Licensed under the Apache License, Version 2.0 (the "License");
5 | # you may not use this file except in compliance with the License.
6 | # You may obtain a copy of the License at
7 | #
8 | # http://www.apache.org/licenses/LICENSE-2.0
9 | #
10 | # Unless required by applicable law or agreed to in writing, software
11 | # distributed under the License is distributed on an "AS IS" BASIS,
12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 | # See the License for the specific language governing permissions and
14 | # limitations under the License.
15 |
16 | import tensorflow as tf
17 | import tensorflowvisu
18 | import math
19 | from tensorflow.contrib.learn.python.learn.datasets.mnist import read_data_sets
20 | tf.set_random_seed(0)
21 |
22 | # neural network with 5 layers
23 | #
24 | # · · · · · · · · · · (input data, flattened pixels) X [batch, 784] # 784 = 28*28
25 | # \x/x\x/x\x/x\x/x\x/ -- fully connected layer (relu) W1 [784, 200] B1[200]
26 | # · · · · · · · · · Y1 [batch, 200]
27 | # \x/x\x/x\x/x\x/ -- fully connected layer (relu) W2 [200, 100] B2[100]
28 | # · · · · · · · Y2 [batch, 100]
29 | # \x/x\x/x\x/ -- fully connected layer (relu) W3 [100, 60] B3[60]
30 | # · · · · · Y3 [batch, 60]
31 | # \x/x\x/ -- fully connected layer (relu) W4 [60, 30] B4[30]
32 | # · · · Y4 [batch, 30]
33 | # \x/ -- fully connected layer (softmax) W5 [30, 10] B5[10]
34 | # · Y5 [batch, 10]
35 |
36 | # Download images and labels into mnist.test (10K images+labels) and mnist.train (60K images+labels)
37 | mnist = read_data_sets("data", one_hot=True, reshape=False, validation_size=0)
38 |
39 | # input X: 28x28 grayscale images, the first dimension (None) will index the images in the mini-batch
40 | X = tf.placeholder(tf.float32, [None, 28, 28, 1])
41 | # correct answers will go here
42 | Y_ = tf.placeholder(tf.float32, [None, 10])
43 | # variable learning rate
44 | lr = tf.placeholder(tf.float32)
45 |
46 | # five layers and their number of neurons (the last layer has 10 softmax neurons)
47 | L = 200
48 | M = 100
49 | N = 60
50 | O = 30
51 | # Weights initialised with small random values between -0.2 and +0.2
52 | # When using RELUs, make sure biases are initialised with small *positive* values for example 0.1 = tf.ones([K])/10
53 | W1 = tf.Variable(tf.truncated_normal([784, L], stddev=0.1)) # 784 = 28 * 28
54 | B1 = tf.Variable(tf.ones([L])/10)
55 | W2 = tf.Variable(tf.truncated_normal([L, M], stddev=0.1))
56 | B2 = tf.Variable(tf.ones([M])/10)
57 | W3 = tf.Variable(tf.truncated_normal([M, N], stddev=0.1))
58 | B3 = tf.Variable(tf.ones([N])/10)
59 | W4 = tf.Variable(tf.truncated_normal([N, O], stddev=0.1))
60 | B4 = tf.Variable(tf.ones([O])/10)
61 | W5 = tf.Variable(tf.truncated_normal([O, 10], stddev=0.1))
62 | B5 = tf.Variable(tf.zeros([10]))
63 |
64 | # The model
65 | XX = tf.reshape(X, [-1, 784])
66 | Y1 = tf.nn.relu(tf.matmul(XX, W1) + B1)
67 | Y2 = tf.nn.relu(tf.matmul(Y1, W2) + B2)
68 | Y3 = tf.nn.relu(tf.matmul(Y2, W3) + B3)
69 | Y4 = tf.nn.relu(tf.matmul(Y3, W4) + B4)
70 | Ylogits = tf.matmul(Y4, W5) + B5
71 | Y = tf.nn.softmax(Ylogits)
72 |
73 | # cross-entropy loss function (= -sum(Y_i * log(Yi)) ), normalised for batches of 100 images
74 | # TensorFlow provides the softmax_cross_entropy_with_logits function to avoid numerical stability
75 | # problems with log(0) which is NaN
76 | cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=Ylogits, labels=Y_)
77 | cross_entropy = tf.reduce_mean(cross_entropy)*100
78 |
79 | # accuracy of the trained model, between 0 (worst) and 1 (best)
80 | correct_prediction = tf.equal(tf.argmax(Y, 1), tf.argmax(Y_, 1))
81 | accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
82 |
83 | # matplotlib visualisation
84 | allweights = tf.concat([tf.reshape(W1, [-1]), tf.reshape(W2, [-1]), tf.reshape(W3, [-1]), tf.reshape(W4, [-1]), tf.reshape(W5, [-1])], 0)
85 | allbiases = tf.concat([tf.reshape(B1, [-1]), tf.reshape(B2, [-1]), tf.reshape(B3, [-1]), tf.reshape(B4, [-1]), tf.reshape(B5, [-1])], 0)
86 | I = tensorflowvisu.tf_format_mnist_images(X, Y, Y_)
87 | It = tensorflowvisu.tf_format_mnist_images(X, Y, Y_, 1000, lines=25)
88 | datavis = tensorflowvisu.MnistDataVis()
89 |
90 | # training step, the learning rate is a placeholder
91 | train_step = tf.train.AdamOptimizer(lr).minimize(cross_entropy)
92 |
93 | # init
94 | init = tf.global_variables_initializer()
95 | sess = tf.Session()
96 | sess.run(init)
97 |
98 |
99 | # You can call this function in a loop to train the model, 100 images at a time
100 | def training_step(i, update_test_data, update_train_data):
101 |
102 | # training on batches of 100 images with 100 labels
103 | batch_X, batch_Y = mnist.train.next_batch(100)
104 |
105 | # learning rate decay
106 | max_learning_rate = 0.003
107 | min_learning_rate = 0.0001
108 | decay_speed = 2000.0 # 0.003-0.0001-2000=>0.9826 done in 5000 iterations
109 | learning_rate = min_learning_rate + (max_learning_rate - min_learning_rate) * math.exp(-i/decay_speed)
110 |
111 | # compute training values for visualisation
112 | if update_train_data:
113 | a, c, im, w, b = sess.run([accuracy, cross_entropy, I, allweights, allbiases], {X: batch_X, Y_: batch_Y})
114 | print(str(i) + ": accuracy:" + str(a) + " loss: " + str(c) + " (lr:" + str(learning_rate) + ")")
115 | datavis.append_training_curves_data(i, a, c)
116 | datavis.update_image1(im)
117 | datavis.append_data_histograms(i, w, b)
118 |
119 | # compute test values for visualisation
120 | if update_test_data:
121 | a, c, im = sess.run([accuracy, cross_entropy, It], {X: mnist.test.images, Y_: mnist.test.labels})
122 | print(str(i) + ": ********* epoch " + str(i*100//mnist.train.images.shape[0]+1) + " ********* test accuracy:" + str(a) + " test loss: " + str(c))
123 | datavis.append_test_curves_data(i, a, c)
124 | datavis.update_image2(im)
125 |
126 | # the backpropagation training step
127 | sess.run(train_step, {X: batch_X, Y_: batch_Y, lr: learning_rate})
128 |
129 | datavis.animate(training_step, iterations=10000+1, train_data_update_freq=20, test_data_update_freq=100, more_tests_at_start=True)
130 |
131 | # to save the animation as a movie, add save_movie=True as an argument to datavis.animate
132 | # to disable the visualisation use the following line instead of the datavis.animate line
133 | # for i in range(10000+1): training_step(i, i % 100 == 0, i % 20 == 0)
134 |
135 | print("max test accuracy: " + str(datavis.get_max_test_accuracy()))
136 |
137 | # Some results to expect:
138 | # (In all runs, if sigmoids are used, all biases are initialised at 0, if RELUs are used,
139 | # all biases are initialised at 0.1 apart from the last one which is initialised at 0.)
140 |
141 | ## learning rate = 0.003, 10K iterations
142 | # final test accuracy = 0.9788 (sigmoid - slow start, training cross-entropy not stabilised in the end)
143 | # final test accuracy = 0.9825 (relu - above 0.97 in the first 1500 iterations but noisy curves)
144 |
145 | ## now with learning rate = 0.0001, 10K iterations
146 | # final test accuracy = 0.9722 (relu - slow but smooth curve, would have gone higher in 20K iterations)
147 |
148 | ## decaying learning rate from 0.003 to 0.0001 decay_speed 2000, 10K iterations
149 | # final test accuracy = 0.9746 (sigmoid - training cross-entropy not stabilised)
150 | # final test accuracy = 0.9824 (relu - training set fully learned, test accuracy stable)
151 |
--------------------------------------------------------------------------------
/mnist_3.0_convolutional.py:
--------------------------------------------------------------------------------
1 | # encoding: UTF-8
2 | # Copyright 2016 Google.com
3 | #
4 | # Licensed under the Apache License, Version 2.0 (the "License");
5 | # you may not use this file except in compliance with the License.
6 | # You may obtain a copy of the License at
7 | #
8 | # http://www.apache.org/licenses/LICENSE-2.0
9 | #
10 | # Unless required by applicable law or agreed to in writing, software
11 | # distributed under the License is distributed on an "AS IS" BASIS,
12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 | # See the License for the specific language governing permissions and
14 | # limitations under the License.
15 |
16 | import tensorflow as tf
17 | import tensorflowvisu
18 | import math
19 | from tensorflow.contrib.learn.python.learn.datasets.mnist import read_data_sets
20 | tf.set_random_seed(0)
21 |
22 | # Download images and labels into mnist.test (10K images+labels) and mnist.train (60K images+labels)
23 | mnist = read_data_sets("data", one_hot=True, reshape=False, validation_size=0)
24 |
25 | # neural network structure for this sample:
26 | #
27 | # · · · · · · · · · · (input data, 1-deep) X [batch, 28, 28, 1]
28 | # @ @ @ @ @ @ @ @ @ @ -- conv. layer 5x5x1=>4 stride 1 W1 [5, 5, 1, 4] B1 [4]
29 | # ∶∶∶∶∶∶∶∶∶∶∶∶∶∶∶∶∶∶∶ Y1 [batch, 28, 28, 4]
30 | # @ @ @ @ @ @ @ @ -- conv. layer 5x5x4=>8 stride 2 W2 [5, 5, 4, 8] B2 [8]
31 | # ∶∶∶∶∶∶∶∶∶∶∶∶∶∶∶ Y2 [batch, 14, 14, 8]
32 | # @ @ @ @ @ @ -- conv. layer 4x4x8=>12 stride 2 W3 [4, 4, 8, 12] B3 [12]
33 | # ∶∶∶∶∶∶∶∶∶∶∶ Y3 [batch, 7, 7, 12] => reshaped to YY [batch, 7*7*12]
34 | # \x/x\x\x/ -- fully connected layer (relu) W4 [7*7*12, 200] B4 [200]
35 | # · · · · Y4 [batch, 200]
36 | # \x/x\x/ -- fully connected layer (softmax) W5 [200, 10] B5 [10]
37 | # · · · Y [batch, 10]
38 |
39 | # input X: 28x28 grayscale images, the first dimension (None) will index the images in the mini-batch
40 | X = tf.placeholder(tf.float32, [None, 28, 28, 1])
41 | # correct answers will go here
42 | Y_ = tf.placeholder(tf.float32, [None, 10])
43 | # variable learning rate
44 | lr = tf.placeholder(tf.float32)
45 |
46 | # three convolutional layers with their channel counts, and a
47 | # fully connected layer (the last layer has 10 softmax neurons)
48 | K = 4 # first convolutional layer output depth
49 | L = 8 # second convolutional layer output depth
50 | M = 12 # third convolutional layer
51 | N = 200 # fully connected layer
52 |
53 | W1 = tf.Variable(tf.truncated_normal([5, 5, 1, K], stddev=0.1)) # 5x5 patch, 1 input channel, K output channels
54 | B1 = tf.Variable(tf.ones([K])/10)
55 | W2 = tf.Variable(tf.truncated_normal([5, 5, K, L], stddev=0.1))
56 | B2 = tf.Variable(tf.ones([L])/10)
57 | W3 = tf.Variable(tf.truncated_normal([4, 4, L, M], stddev=0.1))
58 | B3 = tf.Variable(tf.ones([M])/10)
59 |
60 | W4 = tf.Variable(tf.truncated_normal([7 * 7 * M, N], stddev=0.1))
61 | B4 = tf.Variable(tf.ones([N])/10)
62 | W5 = tf.Variable(tf.truncated_normal([N, 10], stddev=0.1))
63 | B5 = tf.Variable(tf.ones([10])/10)
64 |
65 | # The model
66 | stride = 1 # output is 28x28
67 | Y1 = tf.nn.relu(tf.nn.conv2d(X, W1, strides=[1, stride, stride, 1], padding='SAME') + B1)
68 | stride = 2 # output is 14x14
69 | Y2 = tf.nn.relu(tf.nn.conv2d(Y1, W2, strides=[1, stride, stride, 1], padding='SAME') + B2)
70 | stride = 2 # output is 7x7
71 | Y3 = tf.nn.relu(tf.nn.conv2d(Y2, W3, strides=[1, stride, stride, 1], padding='SAME') + B3)
72 |
73 | # reshape the output from the third convolution for the fully connected layer
74 | YY = tf.reshape(Y3, shape=[-1, 7 * 7 * M])
75 |
76 | Y4 = tf.nn.relu(tf.matmul(YY, W4) + B4)
77 | Ylogits = tf.matmul(Y4, W5) + B5
78 | Y = tf.nn.softmax(Ylogits)
79 |
80 | # cross-entropy loss function (= -sum(Y_i * log(Yi)) ), normalised for batches of 100 images
81 | # TensorFlow provides the softmax_cross_entropy_with_logits function to avoid numerical stability
82 | # problems with log(0) which is NaN
83 | cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=Ylogits, labels=Y_)
84 | cross_entropy = tf.reduce_mean(cross_entropy)*100
85 |
86 | # accuracy of the trained model, between 0 (worst) and 1 (best)
87 | correct_prediction = tf.equal(tf.argmax(Y, 1), tf.argmax(Y_, 1))
88 | accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
89 |
90 | # matplotlib visualisation
91 | allweights = tf.concat([tf.reshape(W1, [-1]), tf.reshape(W2, [-1]), tf.reshape(W3, [-1]), tf.reshape(W4, [-1]), tf.reshape(W5, [-1])], 0)
92 | allbiases = tf.concat([tf.reshape(B1, [-1]), tf.reshape(B2, [-1]), tf.reshape(B3, [-1]), tf.reshape(B4, [-1]), tf.reshape(B5, [-1])], 0)
93 | I = tensorflowvisu.tf_format_mnist_images(X, Y, Y_)
94 | It = tensorflowvisu.tf_format_mnist_images(X, Y, Y_, 1000, lines=25)
95 | datavis = tensorflowvisu.MnistDataVis()
96 |
97 | # training step, the learning rate is a placeholder
98 | train_step = tf.train.AdamOptimizer(lr).minimize(cross_entropy)
99 |
100 | # init
101 | init = tf.global_variables_initializer()
102 | sess = tf.Session()
103 | sess.run(init)
104 |
105 | # You can call this function in a loop to train the model, 100 images at a time
106 | def training_step(i, update_test_data, update_train_data):
107 |
108 | # training on batches of 100 images with 100 labels
109 | batch_X, batch_Y = mnist.train.next_batch(100)
110 |
111 | # learning rate decay
112 | max_learning_rate = 0.003
113 | min_learning_rate = 0.0001
114 | decay_speed = 2000.0
115 | learning_rate = min_learning_rate + (max_learning_rate - min_learning_rate) * math.exp(-i/decay_speed)
116 |
117 | # compute training values for visualisation
118 | if update_train_data:
119 | a, c, im, w, b = sess.run([accuracy, cross_entropy, I, allweights, allbiases], {X: batch_X, Y_: batch_Y})
120 | print(str(i) + ": accuracy:" + str(a) + " loss: " + str(c) + " (lr:" + str(learning_rate) + ")")
121 | datavis.append_training_curves_data(i, a, c)
122 | datavis.update_image1(im)
123 | datavis.append_data_histograms(i, w, b)
124 |
125 | # compute test values for visualisation
126 | if update_test_data:
127 | a, c, im = sess.run([accuracy, cross_entropy, It], {X: mnist.test.images, Y_: mnist.test.labels})
128 | print(str(i) + ": ********* epoch " + str(i*100//mnist.train.images.shape[0]+1) + " ********* test accuracy:" + str(a) + " test loss: " + str(c))
129 | datavis.append_test_curves_data(i, a, c)
130 | datavis.update_image2(im)
131 |
132 | # the backpropagation training step
133 | sess.run(train_step, {X: batch_X, Y_: batch_Y, lr: learning_rate})
134 |
135 | datavis.animate(training_step, 10001, train_data_update_freq=10, test_data_update_freq=100)
136 |
137 | # to save the animation as a movie, add save_movie=True as an argument to datavis.animate
138 | # to disable the visualisation use the following line instead of the datavis.animate line
139 | # for i in range(10000+1): training_step(i, i % 100 == 0, i % 20 == 0)
140 |
141 | print("max test accuracy: " + str(datavis.get_max_test_accuracy()))
142 |
143 | # layers 4 8 12 200, patches 5x5str1 5x5str2 4x4str2 best 0.989 after 10000 iterations
144 | # layers 4 8 12 200, patches 5x5str1 4x4str2 4x4str2 best 0.9892 after 10000 iterations
145 | # layers 6 12 24 200, patches 5x5str1 4x4str2 4x4str2 best 0.9908 after 10000 iterations but going downhill from 5000 on
146 | # layers 6 12 24 200, patches 5x5str1 4x4str2 4x4str2 dropout=0.75 best 0.9922 after 10000 iterations (but above 0.99 after 1400 iterations only)
147 | # layers 4 8 12 200, patches 5x5str1 4x4str2 4x4str2 dropout=0.75, best 0.9914 at 13700 iterations
148 | # layers 9 16 25 200, patches 5x5str1 4x4str2 4x4str2 dropout=0.75, best 0.9918 at 10500 (but 0.99 at 1500 iterations already, 0.9915 at 5800)
149 | # layers 9 16 25 300, patches 5x5str1 4x4str2 4x4str2 dropout=0.75, best 0.9916 at 5500 iterations (but 0.9903 at 1200 iterations already)
150 | # attempts with 2 fully-connected layers: no better 300 and 100 neurons, dropout 0.75 and 0.5, 6x6 5x5 4x4 patches no better
151 | #*layers 6 12 24 200, patches 6x6str1 5x5str2 4x4str2 dropout=0.75 best 0.9928 after 12800 iterations (but consistently above 0.99 after 1300 iterations only, 0.9916 at 2300 iterations, 0.9921 at 5600, 0.9925 at 20000)
152 | # layers 6 12 24 200, patches 6x6str1 5x5str2 4x4str2 no dropout best 0.9906 after 3100 iterations (above 0.99 from iteration 1400)
153 |
--------------------------------------------------------------------------------
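The 28x28 -> 14x14 -> 7x7 shapes in the diagram above follow from the 'SAME' padding used in the conv2d calls, for which the output size is ceil(input_size / stride). A small check of that arithmetic:

```python
import math

# 'SAME' padding output size: ceil(input / stride)
def same_out(size, stride):
    return math.ceil(size / stride)

print(same_out(28, 1), same_out(28, 2), same_out(14, 2))  # 28 14 7
```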
/mnist_2.2_five_layers_relu_lrdecay_dropout.py:
--------------------------------------------------------------------------------
1 | # encoding: UTF-8
2 | # Copyright 2016 Google.com
3 | #
4 | # Licensed under the Apache License, Version 2.0 (the "License");
5 | # you may not use this file except in compliance with the License.
6 | # You may obtain a copy of the License at
7 | #
8 | # http://www.apache.org/licenses/LICENSE-2.0
9 | #
10 | # Unless required by applicable law or agreed to in writing, software
11 | # distributed under the License is distributed on an "AS IS" BASIS,
12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 | # See the License for the specific language governing permissions and
14 | # limitations under the License.
15 |
16 | import tensorflow as tf
17 | import tensorflowvisu
18 | import math
19 | from tensorflow.contrib.learn.python.learn.datasets.mnist import read_data_sets
20 | tf.set_random_seed(0)
21 |
22 | # neural network with 5 layers
23 | #
24 | # · · · · · · · · · · (input data, flattened pixels) X [batch, 784] # 784 = 28*28
25 | # \x/x\x/x\x/x\x/x\x/ ✞ -- fully connected layer (relu+dropout) W1 [784, 200] B1[200]
26 | # · · · · · · · · · Y1 [batch, 200]
27 | # \x/x\x/x\x/x\x/ ✞ -- fully connected layer (relu+dropout) W2 [200, 100] B2[100]
28 | # · · · · · · · Y2 [batch, 100]
29 | # \x/x\x/x\x/ ✞ -- fully connected layer (relu+dropout) W3 [100, 60] B3[60]
30 | # · · · · · Y3 [batch, 60]
31 | # \x/x\x/ ✞ -- fully connected layer (relu+dropout) W4 [60, 30] B4[30]
32 | # · · · Y4 [batch, 30]
33 | # \x/ -- fully connected layer (softmax) W5 [30, 10] B5[10]
34 | # · Y5 [batch, 10]
35 |
36 | # Download images and labels into mnist.test (10K images+labels) and mnist.train (60K images+labels)
37 | mnist = read_data_sets("data", one_hot=True, reshape=False, validation_size=0)
38 |
39 | # input X: 28x28 grayscale images, the first dimension (None) will index the images in the mini-batch
40 | X = tf.placeholder(tf.float32, [None, 28, 28, 1])
41 | # correct answers will go here
42 | Y_ = tf.placeholder(tf.float32, [None, 10])
43 | # variable learning rate
44 | lr = tf.placeholder(tf.float32)
45 | # Probability of keeping a node during dropout = 1.0 at test time (no dropout) and 0.75 at training time
46 | pkeep = tf.placeholder(tf.float32)
47 |
48 | # five layers and their number of neurons (the last layer has 10 softmax neurons)
49 | L = 200
50 | M = 100
51 | N = 60
52 | O = 30
53 | # Weights initialised with small random values between -0.2 and +0.2
54 | # When using RELUs, make sure biases are initialised with small *positive* values for example 0.1 = tf.ones([K])/10
55 | W1 = tf.Variable(tf.truncated_normal([784, L], stddev=0.1)) # 784 = 28 * 28
56 | B1 = tf.Variable(tf.ones([L])/10)
57 | W2 = tf.Variable(tf.truncated_normal([L, M], stddev=0.1))
58 | B2 = tf.Variable(tf.ones([M])/10)
59 | W3 = tf.Variable(tf.truncated_normal([M, N], stddev=0.1))
60 | B3 = tf.Variable(tf.ones([N])/10)
61 | W4 = tf.Variable(tf.truncated_normal([N, O], stddev=0.1))
62 | B4 = tf.Variable(tf.ones([O])/10)
63 | W5 = tf.Variable(tf.truncated_normal([O, 10], stddev=0.1))
64 | B5 = tf.Variable(tf.zeros([10]))
65 |
66 | # The model, with dropout at each layer
67 | XX = tf.reshape(X, [-1, 28*28])
68 |
69 | Y1 = tf.nn.relu(tf.matmul(XX, W1) + B1)
70 | Y1d = tf.nn.dropout(Y1, pkeep)
71 |
72 | Y2 = tf.nn.relu(tf.matmul(Y1d, W2) + B2)
73 | Y2d = tf.nn.dropout(Y2, pkeep)
74 |
75 | Y3 = tf.nn.relu(tf.matmul(Y2d, W3) + B3)
76 | Y3d = tf.nn.dropout(Y3, pkeep)
77 |
78 | Y4 = tf.nn.relu(tf.matmul(Y3d, W4) + B4)
79 | Y4d = tf.nn.dropout(Y4, pkeep)
80 |
81 | Ylogits = tf.matmul(Y4d, W5) + B5
82 | Y = tf.nn.softmax(Ylogits)
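# A note on the dropout calls above (standard tf.nn.dropout behaviour, not specific to
# this model): with keep_prob=pkeep each activation is zeroed with probability 1-pkeep
# and the survivors are scaled by 1/pkeep, so each layer's expected output is unchanged.
# Feeding pkeep=1.0 at evaluation time therefore makes dropout a no-op.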
83 |
84 | # cross-entropy loss function (= -sum(Y_i * log(Yi)) ), normalised for batches of 100 images
85 | # TensorFlow provides the softmax_cross_entropy_with_logits function to avoid numerical stability
86 | # problems with log(0), which is -infinity (and can make the loss infinite or NaN)
87 | cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=Ylogits, labels=Y_)
88 | cross_entropy = tf.reduce_mean(cross_entropy)*100
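# (reduce_mean averages the per-image cross-entropy over the batch; multiplying by 100
# turns that average back into the per-batch sum when the batch holds 100 images, which
# is the normalisation the comment above refers to.)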
89 |
90 | # accuracy of the trained model, between 0 (worst) and 1 (best)
91 | correct_prediction = tf.equal(tf.argmax(Y, 1), tf.argmax(Y_, 1))
92 | accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
93 |
94 | # matplotlib visualisation
95 | allweights = tf.concat([tf.reshape(W1, [-1]), tf.reshape(W2, [-1]), tf.reshape(W3, [-1]), tf.reshape(W4, [-1]), tf.reshape(W5, [-1])], 0)
96 | allbiases = tf.concat([tf.reshape(B1, [-1]), tf.reshape(B2, [-1]), tf.reshape(B3, [-1]), tf.reshape(B4, [-1]), tf.reshape(B5, [-1])], 0)
97 | I = tensorflowvisu.tf_format_mnist_images(X, Y, Y_)
98 | It = tensorflowvisu.tf_format_mnist_images(X, Y, Y_, 1000, lines=25)
99 | datavis = tensorflowvisu.MnistDataVis()
100 |
101 | # training step, the learning rate is a placeholder
102 | train_step = tf.train.AdamOptimizer(lr).minimize(cross_entropy)
103 |
104 | # init
105 | init = tf.global_variables_initializer()
106 | sess = tf.Session()
107 | sess.run(init)
108 |
109 |
110 | # You can call this function in a loop to train the model, 100 images at a time
111 | def training_step(i, update_test_data, update_train_data):
112 |
113 | # training on batches of 100 images with 100 labels
114 | batch_X, batch_Y = mnist.train.next_batch(100)
115 |
116 | # learning rate decay
117 | max_learning_rate = 0.003
118 | min_learning_rate = 0.0001
119 | decay_speed = 2000.0 # 0.003-0.0001-2000=>0.9826 done in 5000 iterations
120 | learning_rate = min_learning_rate + (max_learning_rate - min_learning_rate) * math.exp(-i/decay_speed)
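# A few sample values of this schedule, easy to check by hand:
# i=0 -> 0.003, i=2000 -> 0.0001 + 0.0029/e ~ 0.00117, i=4000 -> ~ 0.00049,
# and the rate tends towards min_learning_rate=0.0001 as i grows.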
121 |
122 | # compute training values for visualisation
123 | if update_train_data:
124 | a, c, im, w, b = sess.run([accuracy, cross_entropy, I, allweights, allbiases], {X: batch_X, Y_: batch_Y, pkeep: 1.0})
125 | print(str(i) + ": accuracy:" + str(a) + " loss: " + str(c) + " (lr:" + str(learning_rate) + ")")
126 | datavis.append_training_curves_data(i, a, c)
127 | datavis.update_image1(im)
128 | datavis.append_data_histograms(i, w, b)
129 |
130 | # compute test values for visualisation
131 | if update_test_data:
132 | a, c, im = sess.run([accuracy, cross_entropy, It], {X: mnist.test.images, Y_: mnist.test.labels, pkeep: 1.0})
133 | print(str(i) + ": ********* epoch " + str(i*100//mnist.train.images.shape[0]+1) + " ********* test accuracy:" + str(a) + " test loss: " + str(c))
134 | datavis.append_test_curves_data(i, a, c)
135 | datavis.update_image2(im)
136 |
137 | # the backpropagation training step
138 | sess.run(train_step, {X: batch_X, Y_: batch_Y, pkeep: 0.75, lr: learning_rate})
139 |
140 | datavis.animate(training_step, iterations=10000+1, train_data_update_freq=20, test_data_update_freq=100, more_tests_at_start=True)
141 |
142 | # to save the animation as a movie, add save_movie=True as an argument to datavis.animate
143 | # to disable the visualisation use the following line instead of the datavis.animate line
144 | # for i in range(10000+1): training_step(i, i % 100 == 0, i % 20 == 0)
145 |
146 | print("max test accuracy: " + str(datavis.get_max_test_accuracy()))
147 |
148 | # Some results to expect:
149 | # (In all runs, if sigmoids are used, all biases are initialised at 0; if RELUs are used,
150 | # all biases are initialised at 0.1 apart from the last one which is initialised at 0.)
151 |
152 | ## test with and without dropout, decaying learning rate from 0.003 to 0.0001 decay_speed 2000, 10K iterations
153 | # final test accuracy = 0.9817 (relu, dropout 0.75, training cross-entropy still a bit noisy, test cross-entropy stable, test accuracy stable just under 98.2)
154 | # final test accuracy = 0.9824 (relu, no dropout, training cross-entropy down to 0, test cross-entropy goes up significantly, test accuracy stable around 98.2)
155 |
156 | ## learning rate = 0.003, 10K iterations, no dropout
157 | # final test accuracy = 0.9788 (sigmoid - slow start, training cross-entropy not stabilised in the end)
158 | # final test accuracy = 0.9825 (relu - above 0.97 in the first 1500 iterations but noisy curves)
159 |
160 | ## now with learning rate = 0.0001, 10K iterations, no dropout
161 | # final test accuracy = 0.9722 (relu - slow but smooth curve, would have gone higher in 20K iterations)
162 |
163 | ## decaying learning rate from 0.003 to 0.0001 decay_speed 2000, 10K iterations, no dropout
164 | # final test accuracy = 0.9746 (sigmoid - training cross-entropy not stabilised)
165 | # final test accuracy = 0.9824 (relu, training cross-entropy down to 0, test cross-entropy goes up significantly, test accuracy stable around 98.2)
166 | # on another run, peak at 0.9836
167 |
--------------------------------------------------------------------------------
/mnist_3.1_convolutional_bigger_dropout.py:
--------------------------------------------------------------------------------
1 | # encoding: UTF-8
2 | # Copyright 2016 Google.com
3 | #
4 | # Licensed under the Apache License, Version 2.0 (the "License");
5 | # you may not use this file except in compliance with the License.
6 | # You may obtain a copy of the License at
7 | #
8 | # http://www.apache.org/licenses/LICENSE-2.0
9 | #
10 | # Unless required by applicable law or agreed to in writing, software
11 | # distributed under the License is distributed on an "AS IS" BASIS,
12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 | # See the License for the specific language governing permissions and
14 | # limitations under the License.
15 |
16 | import tensorflow as tf
17 | import tensorflowvisu
18 | import math
19 | from tensorflow.contrib.learn.python.learn.datasets.mnist import read_data_sets
20 | tf.set_random_seed(0)
21 |
22 | # Download images and labels into mnist.test (10K images+labels) and mnist.train (60K images+labels)
23 | mnist = read_data_sets("data", one_hot=True, reshape=False, validation_size=0)
24 |
25 | # neural network structure for this sample:
26 | #
27 | # · · · · · · · · · · (input data, 1-deep) X [batch, 28, 28, 1]
28 | # @ @ @ @ @ @ @ @ @ @ -- conv. layer 6x6x1=>6 stride 1 W1 [6, 6, 1, 6] B1 [6]
29 | # ∶∶∶∶∶∶∶∶∶∶∶∶∶∶∶∶∶∶∶ Y1 [batch, 28, 28, 6]
30 | # @ @ @ @ @ @ @ @ -- conv. layer 5x5x6=>12 stride 2 W2 [5, 5, 6, 12] B2 [12]
31 | # ∶∶∶∶∶∶∶∶∶∶∶∶∶∶∶ Y2 [batch, 14, 14, 12]
32 | # @ @ @ @ @ @ -- conv. layer 4x4x12=>24 stride 2 W3 [4, 4, 12, 24] B3 [24]
33 | # ∶∶∶∶∶∶∶∶∶∶∶ Y3 [batch, 7, 7, 24] => reshaped to YY [batch, 7*7*24]
34 | # \x/x\x\x/ ✞ -- fully connected layer (relu+dropout) W4 [7*7*24, 200] B4 [200]
35 | # · · · · Y4 [batch, 200]
36 | # \x/x\x/ -- fully connected layer (softmax) W5 [200, 10] B5 [10]
37 | # · · · Y [batch, 10]
38 |
39 | # input X: 28x28 grayscale images, the first dimension (None) will index the images in the mini-batch
40 | X = tf.placeholder(tf.float32, [None, 28, 28, 1])
41 | # correct answers will go here
42 | Y_ = tf.placeholder(tf.float32, [None, 10])
43 | # variable learning rate
44 | lr = tf.placeholder(tf.float32)
45 | # Probability of keeping a node during dropout = 1.0 at test time (no dropout) and 0.75 at training time
46 | pkeep = tf.placeholder(tf.float32)
47 |
48 | # three convolutional layers with their channel counts, and a
49 | # fully connected layer (the last layer has 10 softmax neurons)
50 | K = 6 # first convolutional layer output depth
51 | L = 12 # second convolutional layer output depth
52 | M = 24 # third convolutional layer output depth
53 | N = 200 # fully connected layer
54 |
55 | W1 = tf.Variable(tf.truncated_normal([6, 6, 1, K], stddev=0.1)) # 6x6 patch, 1 input channel, K output channels
56 | B1 = tf.Variable(tf.constant(0.1, tf.float32, [K]))
57 | W2 = tf.Variable(tf.truncated_normal([5, 5, K, L], stddev=0.1))
58 | B2 = tf.Variable(tf.constant(0.1, tf.float32, [L]))
59 | W3 = tf.Variable(tf.truncated_normal([4, 4, L, M], stddev=0.1))
60 | B3 = tf.Variable(tf.constant(0.1, tf.float32, [M]))
61 |
62 | W4 = tf.Variable(tf.truncated_normal([7 * 7 * M, N], stddev=0.1))
63 | B4 = tf.Variable(tf.constant(0.1, tf.float32, [N]))
64 | W5 = tf.Variable(tf.truncated_normal([N, 10], stddev=0.1))
65 | B5 = tf.Variable(tf.constant(0.1, tf.float32, [10]))
66 |
67 | # The model
68 | stride = 1 # output is 28x28
69 | Y1 = tf.nn.relu(tf.nn.conv2d(X, W1, strides=[1, stride, stride, 1], padding='SAME') + B1)
70 | stride = 2 # output is 14x14
71 | Y2 = tf.nn.relu(tf.nn.conv2d(Y1, W2, strides=[1, stride, stride, 1], padding='SAME') + B2)
72 | stride = 2 # output is 7x7
73 | Y3 = tf.nn.relu(tf.nn.conv2d(Y2, W3, strides=[1, stride, stride, 1], padding='SAME') + B3)
74 |
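# With padding='SAME', each convolution outputs ceil(input_size/stride) pixels per side:
# 28/1=28 after the first layer, 28/2=14 after the second, 14/2=7 after the third,
# which is where the 7*7*M size of the flattened tensor below comes from.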
75 | # reshape the output from the third convolution for the fully connected layer
76 | YY = tf.reshape(Y3, shape=[-1, 7 * 7 * M])
77 |
78 | Y4 = tf.nn.relu(tf.matmul(YY, W4) + B4)
79 | YY4 = tf.nn.dropout(Y4, pkeep)
80 | Ylogits = tf.matmul(YY4, W5) + B5
81 | Y = tf.nn.softmax(Ylogits)
82 |
83 | # cross-entropy loss function (= -sum(Y_i * log(Yi)) ), normalised for batches of 100 images
84 | # TensorFlow provides the softmax_cross_entropy_with_logits function to avoid numerical stability
85 | # problems with log(0), which is -infinity (and can make the loss infinite or NaN)
86 | cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=Ylogits, labels=Y_)
87 | cross_entropy = tf.reduce_mean(cross_entropy)*100
88 |
89 | # accuracy of the trained model, between 0 (worst) and 1 (best)
90 | correct_prediction = tf.equal(tf.argmax(Y, 1), tf.argmax(Y_, 1))
91 | accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
92 |
93 | # matplotlib visualisation
94 | allweights = tf.concat([tf.reshape(W1, [-1]), tf.reshape(W2, [-1]), tf.reshape(W3, [-1]), tf.reshape(W4, [-1]), tf.reshape(W5, [-1])], 0)
95 | allbiases = tf.concat([tf.reshape(B1, [-1]), tf.reshape(B2, [-1]), tf.reshape(B3, [-1]), tf.reshape(B4, [-1]), tf.reshape(B5, [-1])], 0)
96 | I = tensorflowvisu.tf_format_mnist_images(X, Y, Y_)
97 | It = tensorflowvisu.tf_format_mnist_images(X, Y, Y_, 1000, lines=25)
98 | datavis = tensorflowvisu.MnistDataVis()
99 |
100 | # training step, the learning rate is a placeholder
101 | train_step = tf.train.AdamOptimizer(lr).minimize(cross_entropy)
102 |
103 | # init
104 | init = tf.global_variables_initializer()
105 | sess = tf.Session()
106 | sess.run(init)
107 |
108 |
109 | # You can call this function in a loop to train the model, 100 images at a time
110 | def training_step(i, update_test_data, update_train_data):
111 |
112 | # training on batches of 100 images with 100 labels
113 | batch_X, batch_Y = mnist.train.next_batch(100)
114 |
115 | # learning rate decay
116 | max_learning_rate = 0.003
117 | min_learning_rate = 0.0001
118 | decay_speed = 2000.0
119 | learning_rate = min_learning_rate + (max_learning_rate - min_learning_rate) * math.exp(-i/decay_speed)
120 |
121 | # compute training values for visualisation
122 | if update_train_data:
123 | a, c, im, w, b = sess.run([accuracy, cross_entropy, I, allweights, allbiases], {X: batch_X, Y_: batch_Y, pkeep: 1.0})
124 | print(str(i) + ": accuracy:" + str(a) + " loss: " + str(c) + " (lr:" + str(learning_rate) + ")")
125 | datavis.append_training_curves_data(i, a, c)
126 | datavis.update_image1(im)
127 | datavis.append_data_histograms(i, w, b)
128 |
129 | # compute test values for visualisation
130 | if update_test_data:
131 | a, c, im = sess.run([accuracy, cross_entropy, It], {X: mnist.test.images, Y_: mnist.test.labels, pkeep: 1.0})
132 | print(str(i) + ": ********* epoch " + str(i*100//mnist.train.images.shape[0]+1) + " ********* test accuracy:" + str(a) + " test loss: " + str(c))
133 | datavis.append_test_curves_data(i, a, c)
134 | datavis.update_image2(im)
135 |
136 | # the backpropagation training step
137 | sess.run(train_step, {X: batch_X, Y_: batch_Y, lr: learning_rate, pkeep: 0.75})
138 |
139 | datavis.animate(training_step, 10001, train_data_update_freq=20, test_data_update_freq=100)
140 |
141 | # to save the animation as a movie, add save_movie=True as an argument to datavis.animate
142 | # to disable the visualisation use the following line instead of the datavis.animate line
143 | # for i in range(10000+1): training_step(i, i % 100 == 0, i % 20 == 0)
144 |
145 | print("max test accuracy: " + str(datavis.get_max_test_accuracy()))
146 |
147 | ## All runs 10K iterations:
148 | # layers 4 8 12 200, patches 5x5str1 5x5str2 4x4str2 best 0.989
149 | # layers 4 8 12 200, patches 5x5str1 4x4str2 4x4str2 best 0.9892
150 | # layers 6 12 24 200, patches 5x5str1 4x4str2 4x4str2 best 0.9908 after 10000 iterations but going downhill from 5000 on
151 | # layers 6 12 24 200, patches 5x5str1 4x4str2 4x4str2 dropout=0.75 best 0.9922 (but above 0.99 after 1400 iterations only)
152 | # layers 4 8 12 200, patches 5x5str1 4x4str2 4x4str2 dropout=0.75, best 0.9914 at 13700 iterations
153 | # layers 9 16 25 200, patches 5x5str1 4x4str2 4x4str2 dropout=0.75, best 0.9918 at 10500 (but 0.99 at 1500 iterations already, 0.9915 at 5800)
154 | # layers 9 16 25 300, patches 5x5str1 4x4str2 4x4str2 dropout=0.75, best 0.9916 at 5500 iterations (but 0.9903 at 1200 iterations already)
155 | # attempts with 2 fully-connected layers (300 and 100 neurons, dropout 0.75 and 0.5, 6x6 5x5 4x4 patches): no better
156 | # layers 6 12 24 200, patches 6x6str1 5x5str2 4x4str2 no dropout best 0.9906 after 3100 iterations (above 0.99 from iteration 1400)
157 | #*layers 6 12 24 200, patches 6x6str1 5x5str2 4x4str2 dropout=0.75 best 0.9928 after 12800 iterations (but consistently above 0.99 after 1300 iterations only, 0.9916 at 2300 iterations, 0.9921 at 5600, 0.9925 at 20000)
158 | #*same with decaying learning rate 0.003-0.0001-2000: best 0.9931 (on other runs max accuracy 0.9921, 0.9927, 0.9935, 0.9929, 0.9933)
159 |
160 |
--------------------------------------------------------------------------------
/mnist_4.0_batchnorm_five_layers_sigmoid.py:
--------------------------------------------------------------------------------
1 | # encoding: UTF-8
2 | # Copyright 2016 Google.com
3 | #
4 | # Licensed under the Apache License, Version 2.0 (the "License");
5 | # you may not use this file except in compliance with the License.
6 | # You may obtain a copy of the License at
7 | #
8 | # http://www.apache.org/licenses/LICENSE-2.0
9 | #
10 | # Unless required by applicable law or agreed to in writing, software
11 | # distributed under the License is distributed on an "AS IS" BASIS,
12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 | # See the License for the specific language governing permissions and
14 | # limitations under the License.
15 |
16 | import tensorflow as tf
17 | import tensorflowvisu
18 | import math
19 | from tensorflow.contrib.learn.python.learn.datasets.mnist import read_data_sets
20 | tf.set_random_seed(0)
21 |
22 | # neural network with 5 layers
23 | #
24 | # · · · · · · · · · · (input data, flattened pixels) X [batch, 784] # 784 = 28*28
25 | # \x/x\x/x\x/x\x/x\x/ -- fully connected layer (sigmoid+BN) W1 [784, 200] B1[200]
26 | # · · · · · · · · · Y1 [batch, 200]
27 | # \x/x\x/x\x/x\x/ -- fully connected layer (sigmoid+BN) W2 [200, 100] B2[100]
28 | # · · · · · · · Y2 [batch, 100]
29 | # \x/x\x/x\x/ -- fully connected layer (sigmoid+BN) W3 [100, 60] B3[60]
30 | # · · · · · Y3 [batch, 60]
31 | # \x/x\x/ -- fully connected layer (sigmoid+BN) W4 [60, 30] B4[30]
32 | # · · · Y4 [batch, 30]
33 | # \x/ -- fully connected layer (softmax+BN) W5 [30, 10] B5[10]
34 | # · Y5 [batch, 10]
35 |
36 | # Download images and labels into mnist.test (10K images+labels) and mnist.train (60K images+labels)
37 | mnist = read_data_sets("data", one_hot=True, reshape=False, validation_size=0)
38 |
39 | # input X: 28x28 grayscale images, the first dimension (None) will index the images in the mini-batch
40 | X = tf.placeholder(tf.float32, [None, 28, 28, 1])
41 | # correct answers will go here
42 | Y_ = tf.placeholder(tf.float32, [None, 10])
43 | # variable learning rate
44 | lr = tf.placeholder(tf.float32)
45 | # train/test selector for batch normalisation
46 | tst = tf.placeholder(tf.bool)
47 | # training iteration
48 | iter = tf.placeholder(tf.int32)
49 |
50 | # five layers and their number of neurons (the last layer has 10 softmax neurons)
51 | L = 200
52 | M = 100
53 | N = 60
54 | P = 30
55 | Q = 10
56 |
57 | # Weights initialised with small random values between -0.2 and +0.2
58 | # When using RELUs, make sure biases are initialised with small *positive* values, for example 0.1 = tf.ones([K])/10
59 | W1 = tf.Variable(tf.truncated_normal([784, L], stddev=0.1)) # 784 = 28 * 28
60 | S1 = tf.Variable(tf.ones([L]))
61 | O1 = tf.Variable(tf.zeros([L]))
62 | W2 = tf.Variable(tf.truncated_normal([L, M], stddev=0.1))
63 | S2 = tf.Variable(tf.ones([M]))
64 | O2 = tf.Variable(tf.zeros([M]))
65 | W3 = tf.Variable(tf.truncated_normal([M, N], stddev=0.1))
66 | S3 = tf.Variable(tf.ones([N]))
67 | O3 = tf.Variable(tf.zeros([N]))
68 | W4 = tf.Variable(tf.truncated_normal([N, P], stddev=0.1))
69 | S4 = tf.Variable(tf.ones([P]))
70 | O4 = tf.Variable(tf.zeros([P]))
71 | W5 = tf.Variable(tf.truncated_normal([P, Q], stddev=0.1))
72 | B5 = tf.Variable(tf.zeros([Q]))
73 |
74 | ## Batch normalisation conclusions with sigmoid activation function:
75 | # BN is applied between logits and the activation function
76 | # On sigmoids it is very clear that without BN the sigmoids saturate; with BN they output
77 | # a clean Gaussian distribution of values, especially with high initial learning rates.
78 |
79 | # sigmoid, no batch-norm, lr(0.003, 0.0001, 2000) => 97.5%
80 | # sigmoid, batch-norm lr(0.03, 0.0001, 1000) => 98%
81 | # sigmoid, batch-norm, no offsets => 97.3%
82 | # sigmoid, batch-norm, no scales => 98.1% but cannot hold fast learning rate at start
83 | # sigmoid, batch-norm, no scales, no offsets => 96%
84 |
85 | # Both scales and offsets are useful with sigmoids.
86 | # With RELUs, the scale variables can be omitted.
87 | # Biases are not useful with batch norm; offsets are used instead.
88 |
89 | # Steady 98.5% accuracy using these parameters:
90 | # moving average decay: 0.998 (equivalent to averaging over two epochs)
91 | # learning rate decay from 0.03 to 0.0001 speed 1000 => max 98.59 at 6500 iterations, 98.54 at 10K it, 98% at 1300it, 98.5% at 3200it
92 |
93 | def batchnorm(Ylogits, Offset, Scale, is_test, iteration):
94 | exp_moving_avg = tf.train.ExponentialMovingAverage(0.998, iteration) # passing the iteration prevents averaging across non-existent iterations
95 | bnepsilon = 1e-5
96 | mean, variance = tf.nn.moments(Ylogits, [0])
97 | update_moving_averages = exp_moving_avg.apply([mean, variance])
98 | m = tf.cond(is_test, lambda: exp_moving_avg.average(mean), lambda: mean)
99 | v = tf.cond(is_test, lambda: exp_moving_avg.average(variance), lambda: variance)
100 | Ybn = tf.nn.batch_normalization(Ylogits, m, v, Offset, Scale, bnepsilon)
101 | return Ybn, update_moving_averages
102 |
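# In formula form, the batch_normalization call above computes
# Ybn = (Ylogits - m) / sqrt(v + bnepsilon) * Scale + Offset,
# where the tf.cond lines select the batch statistics during training and the
# exponential moving averages of those statistics at test time (is_test=True).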
103 | def no_batchnorm(Ylogits, Offset, Scale, is_test, iteration):
104 | return Ylogits, tf.no_op()
105 |
106 | # The model
107 | XX = tf.reshape(X, [-1, 784])
108 |
109 | Y1l = tf.matmul(XX, W1)
110 | Y1bn, update_ema1 = batchnorm(Y1l, O1, S1, tst, iter)
111 | Y1 = tf.nn.sigmoid(Y1bn)
112 |
113 | Y2l = tf.matmul(Y1, W2)
114 | Y2bn, update_ema2 = batchnorm(Y2l, O2, S2, tst, iter)
115 | Y2 = tf.nn.sigmoid(Y2bn)
116 |
117 | Y3l = tf.matmul(Y2, W3)
118 | Y3bn, update_ema3 = batchnorm(Y3l, O3, S3, tst, iter)
119 | Y3 = tf.nn.sigmoid(Y3bn)
120 |
121 | Y4l = tf.matmul(Y3, W4)
122 | Y4bn, update_ema4 = batchnorm(Y4l, O4, S4, tst, iter)
123 | Y4 = tf.nn.sigmoid(Y4bn)
124 |
125 | Ylogits = tf.matmul(Y4, W5) + B5
126 | Y = tf.nn.softmax(Ylogits)
127 |
128 | update_ema = tf.group(update_ema1, update_ema2, update_ema3, update_ema4)
129 |
130 | # cross-entropy loss function (= -sum(Y_i * log(Yi)) ), normalised for batches of 100 images
131 | # TensorFlow provides the softmax_cross_entropy_with_logits function to avoid numerical stability
132 | # problems with log(0), which is -infinity (and can make the loss infinite or NaN)
133 | cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=Ylogits, labels=Y_)
134 | cross_entropy = tf.reduce_mean(cross_entropy)*100
135 |
136 | # accuracy of the trained model, between 0 (worst) and 1 (best)
137 | correct_prediction = tf.equal(tf.argmax(Y, 1), tf.argmax(Y_, 1))
138 | accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
139 |
140 | # matplotlib visualisation
141 | allweights = tf.concat([tf.reshape(W1, [-1]), tf.reshape(W2, [-1]), tf.reshape(W3, [-1]), tf.reshape(W4, [-1]), tf.reshape(W5, [-1])], 0)
142 | allbiases = tf.concat([tf.reshape(O1, [-1]), tf.reshape(O2, [-1]), tf.reshape(O3, [-1]), tf.reshape(O4, [-1]), tf.reshape(B5, [-1])], 0)
143 | # to use for sigmoid
144 | allactivations = tf.concat([tf.reshape(Y1, [-1]), tf.reshape(Y2, [-1]), tf.reshape(Y3, [-1]), tf.reshape(Y4, [-1])], 0)
145 | # to use for RELU
146 | #allactivations = tf.concat([tf.reduce_max(Y1, [0]), tf.reduce_max(Y2, [0]), tf.reduce_max(Y3, [0]), tf.reduce_max(Y4, [0])], 0)
147 | alllogits = tf.concat([tf.reshape(Y1l, [-1]), tf.reshape(Y2l, [-1]), tf.reshape(Y3l, [-1]), tf.reshape(Y4l, [-1])], 0)
148 | I = tensorflowvisu.tf_format_mnist_images(X, Y, Y_)
149 | It = tensorflowvisu.tf_format_mnist_images(X, Y, Y_, 1000, lines=25)
150 | datavis = tensorflowvisu.MnistDataVis(title4="Logits", title5="activations", histogram4colornum=2, histogram5colornum=2)
151 |
152 |
153 | # training step, the learning rate is a placeholder
154 | train_step = tf.train.AdamOptimizer(lr).minimize(cross_entropy)
155 |
156 | # init
157 | init = tf.global_variables_initializer()
158 | sess = tf.Session()
159 | sess.run(init)
160 |
161 |
162 | # You can call this function in a loop to train the model, 100 images at a time
163 | def training_step(i, update_test_data, update_train_data):
164 |
165 | # training on batches of 100 images with 100 labels
166 | batch_X, batch_Y = mnist.train.next_batch(100)
167 |
168 | # learning rate decay (without batch norm)
169 | #max_learning_rate = 0.003
170 | #min_learning_rate = 0.0001
171 | #decay_speed = 2000
172 | # learning rate decay (with batch norm)
173 | max_learning_rate = 0.03
174 | min_learning_rate = 0.0001
175 | decay_speed = 1000.0
176 | learning_rate = min_learning_rate + (max_learning_rate - min_learning_rate) * math.exp(-i/decay_speed)
177 |
178 | # compute training values for visualisation
179 | if update_train_data:
180 | a, c, im, al, ac = sess.run([accuracy, cross_entropy, I, alllogits, allactivations], {X: batch_X, Y_: batch_Y, tst: False})
181 | print(str(i) + ": accuracy:" + str(a) + " loss: " + str(c) + " (lr:" + str(learning_rate) + ")")
182 | datavis.append_training_curves_data(i, a, c)
183 | datavis.update_image1(im)
184 | datavis.append_data_histograms(i, al, ac)
185 |
186 | # compute test values for visualisation
187 | if update_test_data:
188 | a, c, im = sess.run([accuracy, cross_entropy, It], {X: mnist.test.images, Y_: mnist.test.labels, tst: True})
189 | print(str(i) + ": ********* epoch " + str(i*100//mnist.train.images.shape[0]+1) + " ********* test accuracy:" + str(a) + " test loss: " + str(c))
190 | datavis.append_test_curves_data(i, a, c)
191 | datavis.update_image2(im)
192 |
193 | # the backpropagation training step
194 | sess.run(train_step, {X: batch_X, Y_: batch_Y, lr: learning_rate, tst: False})
195 | sess.run(update_ema, {X: batch_X, Y_: batch_Y, tst: False, iter: i})
196 |
197 | datavis.animate(training_step, iterations=10000+1, train_data_update_freq=20, test_data_update_freq=100, more_tests_at_start=True)
198 |
199 | # to save the animation as a movie, add save_movie=True as an argument to datavis.animate
200 | # to disable the visualisation use the following line instead of the datavis.animate line
201 | # for i in range(10000+1): training_step(i, i % 100 == 0, i % 20 == 0)
202 |
203 | print("max test accuracy: " + str(datavis.get_max_test_accuracy()))
204 |
205 | # Some results to expect:
206 | # (In all runs, if sigmoids are used, all biases are initialised at 0; if RELUs are used,
207 | # all biases are initialised at 0.1 apart from the last one which is initialised at 0.)
208 |
209 | ## decaying learning rate from 0.003 to 0.0001 decay_speed 2000, 10K iterations
210 | # final test accuracy = 0.9813 (sigmoid - training cross-entropy not stabilised)
211 | # final test accuracy = 0.9842 (relu - training set fully learned, test accuracy stable)
212 |
--------------------------------------------------------------------------------
/mnist_4.1_batchnorm_five_layers_relu.py:
--------------------------------------------------------------------------------
1 | # encoding: UTF-8
2 | # Copyright 2016 Google.com
3 | #
4 | # Licensed under the Apache License, Version 2.0 (the "License");
5 | # you may not use this file except in compliance with the License.
6 | # You may obtain a copy of the License at
7 | #
8 | # http://www.apache.org/licenses/LICENSE-2.0
9 | #
10 | # Unless required by applicable law or agreed to in writing, software
11 | # distributed under the License is distributed on an "AS IS" BASIS,
12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 | # See the License for the specific language governing permissions and
14 | # limitations under the License.
15 |
16 | import tensorflow as tf
17 | import tensorflowvisu
18 | import math
19 | from tensorflow.contrib.learn.python.learn.datasets.mnist import read_data_sets
20 | tf.set_random_seed(0)
21 |
22 | # neural network with 5 layers
23 | #
24 | # · · · · · · · · · · (input data, flattened pixels) X [batch, 784] # 784 = 28*28
25 | # \x/x\x/x\x/x\x/x\x/ -- fully connected layer (relu+BN) W1 [784, 200] B1[200]
26 | # · · · · · · · · · Y1 [batch, 200]
27 | # \x/x\x/x\x/x\x/ -- fully connected layer (relu+BN) W2 [200, 100] B2[100]
28 | # · · · · · · · Y2 [batch, 100]
29 | # \x/x\x/x\x/ -- fully connected layer (relu+BN) W3 [100, 60] B3[60]
30 | # · · · · · Y3 [batch, 60]
31 | # \x/x\x/ -- fully connected layer (relu+BN) W4 [60, 30] B4[30]
32 | # · · · Y4 [batch, 30]
33 | # \x/ -- fully connected layer (softmax) W5 [30, 10] B5[10]
34 | # · Y5 [batch, 10]
35 |
36 | # Download images and labels into mnist.test (10K images+labels) and mnist.train (60K images+labels)
37 | mnist = read_data_sets("data", one_hot=True, reshape=False, validation_size=0)
38 |
39 | # input X: 28x28 grayscale images, the first dimension (None) will index the images in the mini-batch
40 | X = tf.placeholder(tf.float32, [None, 28, 28, 1])
41 | # correct answers will go here
42 | Y_ = tf.placeholder(tf.float32, [None, 10])
43 | # variable learning rate
44 | lr = tf.placeholder(tf.float32)
45 | # train/test selector for batch normalisation
46 | tst = tf.placeholder(tf.bool)
47 | # training iteration
48 | iter = tf.placeholder(tf.int32)
49 |
50 | # five layers and their number of neurons (the last layer has 10 softmax neurons)
51 | L = 200
52 | M = 100
53 | N = 60
54 | P = 30
55 | Q = 10
56 |
57 | # Weights initialised with small random values between -0.2 and +0.2
58 | # When using RELUs, make sure biases are initialised with small *positive* values, for example 0.1 = tf.ones([K])/10
59 | W1 = tf.Variable(tf.truncated_normal([784, L], stddev=0.1)) # 784 = 28 * 28
60 | B1 = tf.Variable(tf.ones([L])/10)
61 | W2 = tf.Variable(tf.truncated_normal([L, M], stddev=0.1))
62 | B2 = tf.Variable(tf.ones([M])/10)
63 | W3 = tf.Variable(tf.truncated_normal([M, N], stddev=0.1))
64 | B3 = tf.Variable(tf.ones([N])/10)
65 | W4 = tf.Variable(tf.truncated_normal([N, P], stddev=0.1))
66 | B4 = tf.Variable(tf.ones([P])/10)
67 | W5 = tf.Variable(tf.truncated_normal([P, Q], stddev=0.1))
68 | B5 = tf.Variable(tf.ones([Q])/10)
69 |
70 | ## Batch normalisation conclusions:
71 | # On RELUs, you have to display batch-max(activation) to see the nice effect on the distribution,
72 | # but then it is very visible.
73 | # With RELUs, the scale and offset variables can be omitted. They do not seem to do anything.
74 |
75 | # Steady 98.5% accuracy using these parameters:
76 | # moving average decay: 0.998 (equivalent to averaging over two epochs)
77 | # learning rate decay from 0.03 to 0.0001 speed 1000 => max 98.59 at 6500 iterations, 98.54 at 10K it, 98% at 1300it, 98.5% at 3200it
78 |
79 | # relu, no batch-norm, lr(0.003, 0.0001, 2000) => 98.2%
80 | # relu, batch-norm lr(0.03, 0.0001, 1000) => 98.5% - 98.55%
81 | # relu, batch-norm, no offsets => 98.5% - 98.55% (no change)
82 | # relu, batch-norm, no scales => 98.5% - 98.55% (no change)
83 | # relu, batch-norm, no scales, no offsets => 98.5% - 98.55% (no change) - even peak at 98.59% :-)
84 |
85 | # Correct usage of batch norm scale and offset parameters:
86 | # According to the BN paper, offsets should be kept and biases removed.
87 | # In practice, BN without offsets but with traditional biases seems to work just as well.
88 | # "When the next layer is linear (also e.g. `nn.relu`), scaling can be
89 | # disabled since the scaling can be done by the next layer."
90 | # So apparently no need of scaling before a RELU.
91 | # => Using neither scales nor offsets with RELUs.
92 |
93 | def batchnorm(Ylogits, is_test, iteration, offset, convolutional=False):
94 | exp_moving_avg = tf.train.ExponentialMovingAverage(0.999, iteration) # passing the iteration prevents averaging across non-existent iterations
95 | bnepsilon = 1e-5
96 | if convolutional:
97 | mean, variance = tf.nn.moments(Ylogits, [0, 1, 2])
98 | else:
99 | mean, variance = tf.nn.moments(Ylogits, [0])
100 | update_moving_averages = exp_moving_avg.apply([mean, variance])
101 | m = tf.cond(is_test, lambda: exp_moving_avg.average(mean), lambda: mean)
102 | v = tf.cond(is_test, lambda: exp_moving_avg.average(variance), lambda: variance)
103 | Ybn = tf.nn.batch_normalization(Ylogits, m, v, offset, None, bnepsilon)
104 | return Ybn, update_moving_averages
105 |
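# In the convolutional case, taking moments over axes [0, 1, 2] (batch, height, width)
# yields one mean/variance pair per output channel, so each channel is normalised with
# its own statistics; the dense case reduces over the batch axis only, one pair per neuron.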
106 | def no_batchnorm(Ylogits, is_test, iteration, offset, convolutional=False):
107 | return Ylogits, tf.no_op()
108 |
109 | # The model
110 | XX = tf.reshape(X, [-1, 784])
111 |
112 | # batch norm scaling is not useful with relus
113 | # batch norm offsets are used instead of biases
114 |
115 | Y1l = tf.matmul(XX, W1)
116 | Y1bn, update_ema1 = batchnorm(Y1l, tst, iter, B1)
117 | Y1 = tf.nn.relu(Y1bn)
118 |
119 | Y2l = tf.matmul(Y1, W2)
120 | Y2bn, update_ema2 = batchnorm(Y2l, tst, iter, B2)
121 | Y2 = tf.nn.relu(Y2bn)
122 |
123 | Y3l = tf.matmul(Y2, W3)
124 | Y3bn, update_ema3 = batchnorm(Y3l, tst, iter, B3)
125 | Y3 = tf.nn.relu(Y3bn)
126 |
127 | Y4l = tf.matmul(Y3, W4)
128 | Y4bn, update_ema4 = batchnorm(Y4l, tst, iter, B4)
129 | Y4 = tf.nn.relu(Y4bn)
130 |
131 | Ylogits = tf.matmul(Y4, W5) + B5
132 | Y = tf.nn.softmax(Ylogits)
133 |
134 | update_ema = tf.group(update_ema1, update_ema2, update_ema3, update_ema4)
135 |
136 | # cross-entropy loss function (= -sum(Y_i * log(Yi)) ), normalised for batches of 100 images
137 | # TensorFlow provides the softmax_cross_entropy_with_logits function to avoid numerical stability
138 | # problems with log(0), which is -infinity (and can make the loss infinite or NaN)
139 | cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=Ylogits, labels=Y_)
140 | cross_entropy = tf.reduce_mean(cross_entropy)*100
141 |
142 | # accuracy of the trained model, between 0 (worst) and 1 (best)
143 | correct_prediction = tf.equal(tf.argmax(Y, 1), tf.argmax(Y_, 1))
144 | accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
145 |
146 | # matplotlib visualisation
147 | allweights = tf.concat([tf.reshape(W1, [-1]), tf.reshape(W2, [-1]), tf.reshape(W3, [-1])], 0)
148 | allbiases = tf.concat([tf.reshape(B1, [-1]), tf.reshape(B2, [-1]), tf.reshape(B3, [-1])], 0)
149 | # to use for sigmoid
150 | #allactivations = tf.concat([tf.reshape(Y1, [-1]), tf.reshape(Y2, [-1]), tf.reshape(Y3, [-1]), tf.reshape(Y4, [-1])], 0)
151 | # to use for RELU
152 | allactivations = tf.concat([tf.reduce_max(Y1, [0]), tf.reduce_max(Y2, [0]), tf.reduce_max(Y3, [0]), tf.reduce_max(Y4, [0])], 0)
153 | alllogits = tf.concat([tf.reshape(Y1l, [-1]), tf.reshape(Y2l, [-1]), tf.reshape(Y3l, [-1]), tf.reshape(Y4l, [-1])], 0)
154 | I = tensorflowvisu.tf_format_mnist_images(X, Y, Y_)
155 | It = tensorflowvisu.tf_format_mnist_images(X, Y, Y_, 1000, lines=25)
156 | datavis = tensorflowvisu.MnistDataVis(title4="Logits", title5="Max activations across batch", histogram4colornum=2, histogram5colornum=2)
157 |
158 |
159 | # training step, the learning rate is a placeholder
160 | train_step = tf.train.AdamOptimizer(lr).minimize(cross_entropy)
161 |
162 | # init
163 | init = tf.global_variables_initializer()
164 | sess = tf.Session()
165 | sess.run(init)
166 |
167 |
168 | # You can call this function in a loop to train the model, 100 images at a time
169 | def training_step(i, update_test_data, update_train_data):
170 |
171 | # training on batches of 100 images with 100 labels
172 | batch_X, batch_Y = mnist.train.next_batch(100)
173 |
174 | # learning rate decay (without batch norm)
175 | #max_learning_rate = 0.003
176 | #min_learning_rate = 0.0001
177 | #decay_speed = 2000
178 | # learning rate decay (with batch norm)
179 | max_learning_rate = 0.03
180 | min_learning_rate = 0.0001
181 | decay_speed = 1000.0
182 | learning_rate = min_learning_rate + (max_learning_rate - min_learning_rate) * math.exp(-i/decay_speed)
183 |
184 | # compute training values for visualisation
185 | if update_train_data:
186 | a, c, im, al, ac = sess.run([accuracy, cross_entropy, I, alllogits, allactivations], {X: batch_X, Y_: batch_Y, tst: False})
187 | print(str(i) + ": accuracy:" + str(a) + " loss: " + str(c) + " (lr:" + str(learning_rate) + ")")
188 | datavis.append_training_curves_data(i, a, c)
189 | datavis.update_image1(im)
190 | datavis.append_data_histograms(i, al, ac)
191 |
192 | # compute test values for visualisation
193 | if update_test_data:
194 | a, c, im = sess.run([accuracy, cross_entropy, It], {X: mnist.test.images, Y_: mnist.test.labels, tst: True})
195 | print(str(i) + ": ********* epoch " + str(i*100//mnist.train.images.shape[0]+1) + " ********* test accuracy:" + str(a) + " test loss: " + str(c))
196 | datavis.append_test_curves_data(i, a, c)
197 | datavis.update_image2(im)
198 |
199 | # the backpropagation training step
200 | sess.run(train_step, {X: batch_X, Y_: batch_Y, lr: learning_rate, tst: False})
201 | sess.run(update_ema, {X: batch_X, Y_: batch_Y, tst: False, iter: i})
202 |
203 | datavis.animate(training_step, iterations=10000+1, train_data_update_freq=20, test_data_update_freq=100, more_tests_at_start=True)
204 |
205 | # to save the animation as a movie, add save_movie=True as an argument to datavis.animate
206 | # to disable the visualisation use the following line instead of the datavis.animate line
207 | # for i in range(10000+1): training_step(i, i % 100 == 0, i % 20 == 0)
208 |
209 | print("max test accuracy: " + str(datavis.get_max_test_accuracy()))
210 |
211 | # Some results to expect:
212 | # (In all runs, if sigmoids are used, all biases are initialised at 0; if RELUs are used,
213 | # all biases are initialised at 0.1 apart from the last one which is initialised at 0.)
214 |
215 | ## decaying learning rate from 0.003 to 0.0001 decay_speed 2000, 10K iterations
216 | # final test accuracy = 0.9813 (sigmoid - training cross-entropy not stabilised)
217 | # final test accuracy = 0.9842 (relu - training set fully learned, test accuracy stable)
218 |
--------------------------------------------------------------------------------
/mlengine/digits.py:
--------------------------------------------------------------------------------
1 | # Licensed under the Apache License, Version 2.0 (the "License");
2 | # you may not use this file except in compliance with the License.
3 | # You may obtain a copy of the License at
4 | #
5 | # http://www.apache.org/licenses/LICENSE-2.0
6 | #
7 | # Unless required by applicable law or agreed to in writing, software
8 | # distributed under the License is distributed on an "AS IS" BASIS,
9 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
10 | # See the License for the specific language governing permissions and
11 | # limitations under the License.
12 |
13 | import json
14 | import numpy as np
15 |
16 | digit8 = [[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
17 | [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
18 | [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
19 | [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
20 | [0,0,0,0,0,0,0,0,0,0,0,4,4,4,4,0,0,0,0,0,0,0,0,0,0,0,0,0],
21 | [0,0,0,0,0,0,0,0,0,4,4,0,0,0,0,4,0,0,0,0,0,0,0,0,0,0,0,0],
22 | [0,0,0,0,0,0,0,0,4,0,0,0,0,0,0,0,4,0,0,0,0,0,0,0,0,0,0,0],
23 | [0,0,0,0,0,0,0,4,0,0,0,0,0,0,0,0,0,4,0,0,0,0,0,0,0,0,0,0],
24 | [0,0,0,0,0,0,0,4,0,0,0,0,0,0,0,0,0,4,0,0,0,0,0,0,0,0,0,0],
25 | [0,0,0,0,0,0,0,4,0,0,0,0,0,0,0,0,0,4,2,0,0,0,0,0,0,0,0,0],
26 | [0,0,0,0,0,0,0,0,4,2,0,0,0,0,0,0,0,0,4,0,0,0,0,0,0,0,0,0],
27 | [0,0,0,0,0,0,0,0,0,4,4,0,0,0,0,0,0,0,4,0,0,0,0,0,0,0,0,0],
28 | [0,0,0,0,0,0,0,0,0,0,0,4,0,0,0,0,0,0,4,0,0,0,0,0,0,0,0,0],
29 | [0,0,0,0,0,0,0,0,0,0,0,0,4,0,0,0,0,4,0,0,0,0,0,0,0,0,0,0],
30 | [0,0,0,0,0,0,0,0,0,0,0,0,4,4,0,0,4,0,0,0,0,0,0,0,0,0,0,0],
31 | [0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,4,0,0,0,0,0,0,0,0,0,0,0,0],
32 | [0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,4,0,0,0,0,0,0,0,0,0,0,0,0],
33 | [0,0,0,0,0,0,0,0,0,0,0,0,0,4,0,0,4,0,0,0,0,0,0,0,0,0,0,0],
34 | [0,0,0,0,0,0,0,0,0,0,0,0,0,4,0,0,0,4,0,0,0,0,0,0,0,0,0,0],
35 | [0,0,0,0,0,0,0,0,0,0,0,0,4,0,0,0,0,0,4,0,0,0,0,0,0,0,0,0],
36 | [0,0,0,0,0,0,0,0,0,0,4,4,0,0,0,0,0,0,2,4,0,0,0,0,0,0,0,0],
37 | [0,0,0,0,0,0,0,0,0,2,4,0,0,0,0,0,0,0,0,4,0,0,0,0,0,0,0,0],
38 | [0,0,0,0,0,0,0,0,4,4,0,0,0,0,0,0,0,0,0,4,0,0,0,0,0,0,0,0],
39 | [0,0,0,0,0,0,0,0,4,0,0,0,0,0,0,0,0,4,4,0,0,0,0,0,0,0,0,0],
40 | [0,0,0,0,0,0,0,0,4,0,0,0,0,0,0,0,4,4,0,0,0,0,0,0,0,0,0,0],
41 | [0,0,0,0,0,0,0,0,0,4,0,0,0,0,0,4,4,0,0,0,0,0,0,0,0,0,0,0],
42 | [0,0,0,0,0,0,0,0,0,0,4,4,4,4,4,4,0,0,0,0,0,0,0,0,0,0,0,0],
43 | [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
44 | ]
45 |
46 | digit7a = [[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
47 | [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,4,4,4,2,0,0,0,0,0,0,0,0],
48 | [0,0,0,0,0,0,0,0,2,4,4,4,4,4,4,4,4,4,4,4,2,0,0,0,0,0,0,0],
49 | [0,0,0,0,0,2,4,4,4,4,4,4,4,4,4,0,0,0,0,4,4,0,0,0,0,0,0,0],
50 | [0,0,0,0,4,4,4,4,0,0,0,0,0,0,0,0,0,0,0,4,4,0,0,0,0,0,0,0],
51 | [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,4,0,0,0,0,0,0,0,0],
52 | [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,2,0,0,0,0,0,0,0,0],
53 | [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,2,0,0,0,0,0,0,0,0],
54 | [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,4,0,0,0,0,0,0,0,0,0],
55 | [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,4,0,0,0,0,0,0,0,0,0],
56 | [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,4,0,0,0,0,0,0,0,0,0],
57 | [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,4,0,0,0,0,0,0,0,0,0],
58 | [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,4,0,0,0,0,0,0,0,0,0],
59 | [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,4,4,4,4,0,0,0,0,0,0,0],
60 | [0,0,0,0,0,0,0,0,0,0,0,0,0,4,4,4,4,4,0,0,0,0,0,0,0,0,0,0],
61 | [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,4,0,0,0,0,0,0,0,0,0,0],
62 | [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,4,0,0,0,0,0,0,0,0,0,0],
63 | [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,4,0,0,0,0,0,0,0,0,0,0,0],
64 | [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,4,0,0,0,0,0,0,0,0,0,0,0],
65 | [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,4,0,0,0,0,0,0,0,0,0,0,0],
66 | [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,4,0,0,0,0,0,0,0,0,0,0,0],
67 | [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,4,0,0,0,0,0,0,0,0,0,0,0],
68 | [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,4,0,0,0,0,0,0,0,0,0,0,0],
69 | [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,4,0,0,0,0,0,0,0,0,0,0,0],
70 | [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,4,0,0,0,0,0,0,0,0,0,0,0],
71 | [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,4,0,0,0,0,0,0,0,0,0,0,0],
72 | [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0],
73 | [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
74 | ]
75 |
76 | digit7b = [[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
77 | [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,4,4,4,4,0,0,0,0,0,0,0,0],
78 | [0,0,0,0,0,0,0,0,4,4,4,4,4,4,4,4,4,4,4,4,0,0,0,0,0,0,0,0],
79 | [0,0,0,0,0,4,4,4,4,4,4,4,4,4,4,0,0,0,0,4,4,0,0,0,0,0,0,0],
80 | [0,0,0,0,4,4,4,4,0,0,0,0,0,0,0,0,0,0,0,4,4,0,0,0,0,0,0,0],
81 | [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,4,0,0,0,0,0,0,0,0],
82 | [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,2,0,0,0,0,0,0,0,0],
83 | [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,2,0,0,0,0,0,0,0,0],
84 | [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,4,0,0,0,0,0,0,0,0,0],
85 | [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,4,0,0,0,0,0,0,0,0,0],
86 | [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,4,0,0,0,0,0,0,0,0,0],
87 | [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,4,0,0,0,0,0,0,0,0,0],
88 | [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,4,0,0,0,0,0,0,0,0,0],
89 | [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,4,0,0,0,0,0,0,0,0,0,0],
90 | [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,4,0,0,0,0,0,0,0,0,0,0],
91 | [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,4,0,0,0,0,0,0,0,0,0,0],
92 | [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,2,0,0,0,0,0,0,0,0,0,0],
93 | [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,4,0,0,0,0,0,0,0,0,0,0,0],
94 | [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,4,0,0,0,0,0,0,0,0,0,0,0],
95 | [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,4,0,0,0,0,0,0,0,0,0,0,0],
96 | [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,4,0,0,0,0,0,0,0,0,0,0,0],
97 | [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,4,0,0,0,0,0,0,0,0,0,0,0],
98 | [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,4,0,0,0,0,0,0,0,0,0,0,0],
99 | [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,4,0,0,0,0,0,0,0,0,0,0,0],
100 | [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,4,0,0,0,0,0,0,0,0,0,0,0],
101 | [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,4,0,0,0,0,0,0,0,0,0,0,0],
102 | [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0],
103 | [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
104 | ]
105 |
106 | digit5a = [[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
107 | [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
108 | [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
109 | [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
110 | [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
111 | [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,4,2,0,0,0,0,0,0,0,0,0,0],
112 | [0,0,0,0,0,0,0,0,0,4,4,4,4,4,4,4,0,0,0,0,0,0,0,0,0,0,0,0],
113 | [0,0,0,0,0,0,0,0,2,4,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
114 | [0,0,0,0,0,0,0,0,4,4,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
115 | [0,0,0,0,0,0,0,0,0,4,4,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
116 | [0,0,0,0,0,0,0,0,0,0,4,4,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
117 | [0,0,0,0,0,0,0,0,0,0,0,0,4,4,4,0,0,0,0,0,0,0,0,0,0,0,0,0],
118 | [0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,4,0,0,0,0,0,0,0,0,0,0,0,0],
119 | [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,4,0,0,0,0,0,0,0,0,0,0,0],
120 | [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,2,0,0,0,0,0,0,0,0,0],
121 | [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,4,0,0,0,0,0,0,0,0,0],
122 | [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,4,0,0,0,0,0,0,0,0,0],
123 | [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,2,0,0,0,0,0,0,0,0,0],
124 | [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,4,0,0,0,0,0,0,0,0,0,0],
125 | [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,4,0,0,0,0,0,0,0,0,0,0,0],
126 | [0,0,0,0,0,0,0,0,0,0,0,0,4,4,4,0,0,0,0,0,0,0,0,0,0,0,0,0],
127 | [0,0,0,0,0,0,0,0,0,0,4,4,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
128 | [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
129 | [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
130 | [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
131 | [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
132 | [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
133 | [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
134 | ]
135 |
136 | digit5b = [[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
137 | [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
138 | [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
139 | [0,0,0,0,0,0,0,0,0,0,0,0,0,4,4,2,0,0,0,0,0,0,0,0,0,0,0,0],
140 | [0,0,0,0,0,0,0,4,4,4,4,4,4,4,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
141 | [0,0,0,0,0,0,2,4,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
142 | [0,0,0,0,0,0,4,4,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
143 | [0,0,0,0,0,0,4,4,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
144 | [0,0,0,0,0,0,4,4,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
145 | [0,0,0,0,0,0,4,4,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
146 | [0,0,0,0,0,0,4,4,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
147 | [0,0,0,0,0,0,0,4,4,4,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
148 | [0,0,0,0,0,0,0,0,0,0,4,4,4,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
149 | [0,0,0,0,0,0,0,0,0,0,0,0,0,4,4,0,0,0,0,0,0,0,0,0,0,0,0,0],
150 | [0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,4,0,0,0,0,0,0,0,0,0,0,0,0],
151 | [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,2,0,0,0,0,0,0,0,0,0,0,0],
152 | [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,4,0,0,0,0,0,0,0,0,0,0,0],
153 | [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,4,0,0,0,0,0,0,0,0,0,0,0],
154 | [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,2,0,0,0,0,0,0,0,0,0,0,0],
155 | [0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,4,0,0,0,0,0,0,0,0,0,0,0,0],
156 | [0,0,0,0,0,0,0,0,0,0,0,0,0,4,4,0,0,0,0,0,0,0,0,0,0,0,0,0],
157 | [0,0,0,0,0,0,4,4,0,0,4,4,4,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
158 | [0,0,0,0,0,0,0,4,4,4,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
159 | [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
160 | [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
161 | [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
162 | [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
163 | [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
164 | ]
165 |
166 | def scale(x): return (np.array(x)*63).tolist()
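# scale maps the hand-drawn sketch values 0..4 to 0..252, roughly the 0..255 range of
# 8-bit grayscale MNIST pixels. For example scale([[0, 2, 4]]) returns [[0, 126, 252]],
# the values visible in digits.json.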
167 |
168 | sdigit8 = scale(digit8)
169 | sdigit7a = scale(digit7a)
170 | sdigit7b = scale(digit7b)
171 | sdigit5a = scale(digit5a)
172 | sdigit5b = scale(digit5b)
173 | test_digits = [sdigit8, sdigit7a, sdigit7b, sdigit5a, sdigit5b]
174 |
175 | if __name__ == '__main__':
176 | # for online predictions, use this format
177 | print(json.dumps([sdigit8, sdigit7a, sdigit7b, sdigit5a, sdigit5b]))
178 |
179 | # for local predictions, use this format
180 | # print(json.dumps(sdigit8))
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | Apache License
2 | Version 2.0, January 2004
3 | http://www.apache.org/licenses/
4 |
5 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
6 |
7 | 1. Definitions.
8 |
9 | "License" shall mean the terms and conditions for use, reproduction,
10 | and distribution as defined by Sections 1 through 9 of this document.
11 |
12 | "Licensor" shall mean the copyright owner or entity authorized by
13 | the copyright owner that is granting the License.
14 |
15 | "Legal Entity" shall mean the union of the acting entity and all
16 | other entities that control, are controlled by, or are under common
17 | control with that entity. For the purposes of this definition,
18 | "control" means (i) the power, direct or indirect, to cause the
19 | direction or management of such entity, whether by contract or
20 | otherwise, or (ii) ownership of fifty percent (50%) or more of the
21 | outstanding shares, or (iii) beneficial ownership of such entity.
22 |
23 | "You" (or "Your") shall mean an individual or Legal Entity
24 | exercising permissions granted by this License.
25 |
26 | "Source" form shall mean the preferred form for making modifications,
27 | including but not limited to software source code, documentation
28 | source, and configuration files.
29 |
30 | "Object" form shall mean any form resulting from mechanical
31 | transformation or translation of a Source form, including but
32 | not limited to compiled object code, generated documentation,
33 | and conversions to other media types.
34 |
35 | "Work" shall mean the work of authorship, whether in Source or
36 | Object form, made available under the License, as indicated by a
37 | copyright notice that is included in or attached to the work
38 | (an example is provided in the Appendix below).
39 |
40 | "Derivative Works" shall mean any work, whether in Source or Object
41 | form, that is based on (or derived from) the Work and for which the
42 | editorial revisions, annotations, elaborations, or other modifications
43 | represent, as a whole, an original work of authorship. For the purposes
44 | of this License, Derivative Works shall not include works that remain
45 | separable from, or merely link (or bind by name) to the interfaces of,
46 | the Work and Derivative Works thereof.
47 |
48 | "Contribution" shall mean any work of authorship, including
49 | the original version of the Work and any modifications or additions
50 | to that Work or Derivative Works thereof, that is intentionally
51 | submitted to Licensor for inclusion in the Work by the copyright owner
52 | or by an individual or Legal Entity authorized to submit on behalf of
53 | the copyright owner. For the purposes of this definition, "submitted"
54 | means any form of electronic, verbal, or written communication sent
55 | to the Licensor or its representatives, including but not limited to
56 | communication on electronic mailing lists, source code control systems,
57 | and issue tracking systems that are managed by, or on behalf of, the
58 | Licensor for the purpose of discussing and improving the Work, but
59 | excluding communication that is conspicuously marked or otherwise
60 | designated in writing by the copyright owner as "Not a Contribution."
61 |
62 | "Contributor" shall mean Licensor and any individual or Legal Entity
63 | on behalf of whom a Contribution has been received by Licensor and
64 | subsequently incorporated within the Work.
65 |
66 | 2. Grant of Copyright License. Subject to the terms and conditions of
67 | this License, each Contributor hereby grants to You a perpetual,
68 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable
69 | copyright license to reproduce, prepare Derivative Works of,
70 | publicly display, publicly perform, sublicense, and distribute the
71 | Work and such Derivative Works in Source or Object form.
72 |
73 | 3. Grant of Patent License. Subject to the terms and conditions of
74 | this License, each Contributor hereby grants to You a perpetual,
75 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable
76 | (except as stated in this section) patent license to make, have made,
77 | use, offer to sell, sell, import, and otherwise transfer the Work,
78 | where such license applies only to those patent claims licensable
79 | by such Contributor that are necessarily infringed by their
80 | Contribution(s) alone or by combination of their Contribution(s)
81 | with the Work to which such Contribution(s) was submitted. If You
82 | institute patent litigation against any entity (including a
83 | cross-claim or counterclaim in a lawsuit) alleging that the Work
84 | or a Contribution incorporated within the Work constitutes direct
85 | or contributory patent infringement, then any patent licenses
86 | granted to You under this License for that Work shall terminate
87 | as of the date such litigation is filed.
88 |
89 | 4. Redistribution. You may reproduce and distribute copies of the
90 | Work or Derivative Works thereof in any medium, with or without
91 | modifications, and in Source or Object form, provided that You
92 | meet the following conditions:
93 |
94 | (a) You must give any other recipients of the Work or
95 | Derivative Works a copy of this License; and
96 |
97 | (b) You must cause any modified files to carry prominent notices
98 | stating that You changed the files; and
99 |
100 | (c) You must retain, in the Source form of any Derivative Works
101 | that You distribute, all copyright, patent, trademark, and
102 | attribution notices from the Source form of the Work,
103 | excluding those notices that do not pertain to any part of
104 | the Derivative Works; and
105 |
106 | (d) If the Work includes a "NOTICE" text file as part of its
107 | distribution, then any Derivative Works that You distribute must
108 | include a readable copy of the attribution notices contained
109 | within such NOTICE file, excluding those notices that do not
110 | pertain to any part of the Derivative Works, in at least one
111 | of the following places: within a NOTICE text file distributed
112 | as part of the Derivative Works; within the Source form or
113 | documentation, if provided along with the Derivative Works; or,
114 | within a display generated by the Derivative Works, if and
115 | wherever such third-party notices normally appear. The contents
116 | of the NOTICE file are for informational purposes only and
117 | do not modify the License. You may add Your own attribution
118 | notices within Derivative Works that You distribute, alongside
119 | or as an addendum to the NOTICE text from the Work, provided
120 | that such additional attribution notices cannot be construed
121 | as modifying the License.
122 |
123 | You may add Your own copyright statement to Your modifications and
124 | may provide additional or different license terms and conditions
125 | for use, reproduction, or distribution of Your modifications, or
126 | for any such Derivative Works as a whole, provided Your use,
127 | reproduction, and distribution of the Work otherwise complies with
128 | the conditions stated in this License.
129 |
130 | 5. Submission of Contributions. Unless You explicitly state otherwise,
131 | any Contribution intentionally submitted for inclusion in the Work
132 | by You to the Licensor shall be under the terms and conditions of
133 | this License, without any additional terms or conditions.
134 | Notwithstanding the above, nothing herein shall supersede or modify
135 | the terms of any separate license agreement you may have executed
136 | with Licensor regarding such Contributions.
137 |
138 | 6. Trademarks. This License does not grant permission to use the trade
139 | names, trademarks, service marks, or product names of the Licensor,
140 | except as required for reasonable and customary use in describing the
141 | origin of the Work and reproducing the content of the NOTICE file.
142 |
143 | 7. Disclaimer of Warranty. Unless required by applicable law or
144 | agreed to in writing, Licensor provides the Work (and each
145 | Contributor provides its Contributions) on an "AS IS" BASIS,
146 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
147 | implied, including, without limitation, any warranties or conditions
148 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
149 | PARTICULAR PURPOSE. You are solely responsible for determining the
150 | appropriateness of using or redistributing the Work and assume any
151 | risks associated with Your exercise of permissions under this License.
152 |
153 | 8. Limitation of Liability. In no event and under no legal theory,
154 | whether in tort (including negligence), contract, or otherwise,
155 | unless required by applicable law (such as deliberate and grossly
156 | negligent acts) or agreed to in writing, shall any Contributor be
157 | liable to You for damages, including any direct, indirect, special,
158 | incidental, or consequential damages of any character arising as a
159 | result of this License or out of the use or inability to use the
160 | Work (including but not limited to damages for loss of goodwill,
161 | work stoppage, computer failure or malfunction, or any and all
162 | other commercial damages or losses), even if such Contributor
163 | has been advised of the possibility of such damages.
164 |
165 | 9. Accepting Warranty or Additional Liability. While redistributing
166 | the Work or Derivative Works thereof, You may choose to offer,
167 | and charge a fee for, acceptance of support, warranty, indemnity,
168 | or other liability obligations and/or rights consistent with this
169 | License. However, in accepting such obligations, You may act only
170 | on Your own behalf and on Your sole responsibility, not on behalf
171 | of any other Contributor, and only if You agree to indemnify,
172 | defend, and hold each Contributor harmless for any liability
173 | incurred by, or claims asserted against, such Contributor by reason
174 | of your accepting any such warranty or additional liability.
175 |
176 | END OF TERMS AND CONDITIONS
177 |
178 | APPENDIX: How to apply the Apache License to your work.
179 |
180 | To apply the Apache License to your work, attach the following
181 | boilerplate notice, with the fields enclosed by brackets "{}"
182 | replaced with your own identifying information. (Don't include
183 | the brackets!) The text should be enclosed in the appropriate
184 | comment syntax for the file format. We also recommend that a
185 | file or class name and description of purpose be included on the
186 | same "printed page" as the copyright notice for easier
187 | identification within third-party archives.
188 |
189 | Copyright {yyyy} {name of copyright owner}
190 |
191 | Licensed under the Apache License, Version 2.0 (the "License");
192 | you may not use this file except in compliance with the License.
193 | You may obtain a copy of the License at
194 |
195 | http://www.apache.org/licenses/LICENSE-2.0
196 |
197 | Unless required by applicable law or agreed to in writing, software
198 | distributed under the License is distributed on an "AS IS" BASIS,
199 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
200 | See the License for the specific language governing permissions and
201 | limitations under the License.
202 |
--------------------------------------------------------------------------------
/mlengine/digits.json:
--------------------------------------------------------------------------------
1 | [[[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 252, 252, 252, 252, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 252, 252, 0, 0, 0, 0, 252, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 252, 0, 0, 0, 0, 0, 0, 0, 252, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 252, 0, 0, 0, 0, 0, 0, 0, 0, 0, 252, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 252, 0, 0, 0, 0, 0, 0, 0, 0, 0, 252, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 252, 0, 0, 0, 0, 0, 0, 0, 0, 0, 252, 126, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 252, 126, 0, 0, 0, 0, 0, 0, 0, 0, 252, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 252, 252, 0, 0, 0, 0, 0, 0, 0, 252, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 252, 0, 0, 0, 0, 0, 0, 252, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 252, 0, 0, 0, 0, 252, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 252, 252, 0, 0, 252, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 252, 252, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 252, 252, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 252, 0, 0, 252, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 252, 0, 0, 0, 252, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 252, 0, 0, 0, 0, 0, 252, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 252, 252, 0, 0, 0, 0, 0, 0, 126, 252, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 126, 252, 0, 0, 0, 0, 0, 0, 0, 0, 252, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 252, 252, 0, 0, 0, 0, 0, 0, 0, 0, 0, 252, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 252, 0, 0, 0, 0, 0, 0, 0, 0, 252, 252, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 252, 0, 0, 0, 0, 0, 0, 0, 252, 252, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 252, 0, 0, 0, 0, 0, 252, 252, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 252, 252, 252, 252, 252, 252, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 126, 252, 252, 252, 126, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 126, 252, 252, 252, 252, 252, 252, 252, 252, 252, 252, 252, 126, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 126, 252, 252, 252, 252, 252, 252, 252, 252, 252, 0, 0, 0, 0, 252, 252, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 252, 252, 252, 252, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 252, 252, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 252, 252, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 252, 126, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 252, 126, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 126, 252, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 252, 252, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 252, 
252, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 252, 252, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 252, 252, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 126, 252, 252, 252, 252, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 252, 252, 252, 252, 252, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 252, 252, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 252, 252, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 126, 252, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 252, 252, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 252, 252, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 252, 252, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 252, 252, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 252, 252, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 252, 252, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 252, 252, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 252, 252, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 126, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 252, 252, 252, 252, 252, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 252, 252, 252, 252, 252, 252, 252, 252, 252, 252, 252, 252, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 252, 252, 252, 252, 252, 252, 252, 252, 252, 252, 0, 0, 0, 0, 252, 252, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 252, 252, 252, 252, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 252, 252, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 252, 252, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 252, 126, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 252, 126, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 126, 252, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 252, 252, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 252, 252, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 252, 252, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 252, 252, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 126, 252, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 252, 252, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 252, 252, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 252, 126, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 126, 252, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 252, 252, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 252, 252, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 252, 252, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 
0, 252, 252, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 252, 252, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 252, 252, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 252, 252, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 252, 252, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 126, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 252, 252, 126, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 252, 252, 252, 252, 252, 252, 252, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 126, 252, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 252, 252, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 252, 252, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 252, 252, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 252, 252, 252, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 252, 252, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 252, 252, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 252, 126, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 126, 252, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 252, 252, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 252, 126, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 252, 252, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 252, 252, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 252, 252, 252, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 252, 252, 126, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 252, 252, 126, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 252, 252, 252, 252, 252, 252, 252, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 126, 
252, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 252, 252, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 252, 252, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 252, 252, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 252, 252, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 252, 252, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 252, 252, 252, 126, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 252, 252, 252, 126, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 252, 252, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 252, 252, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 252, 126, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 126, 252, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 252, 252, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 252, 126, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 252, 252, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 252, 252, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 252, 252, 0, 0, 252, 252, 252, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 252, 252, 252, 126, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]]
2 |
--------------------------------------------------------------------------------
/mnist_4.2_batchnorm_convolutional.py:
--------------------------------------------------------------------------------
1 | # encoding: UTF-8
2 | # Copyright 2016 Google.com
3 | #
4 | # Licensed under the Apache License, Version 2.0 (the "License");
5 | # you may not use this file except in compliance with the License.
6 | # You may obtain a copy of the License at
7 | #
8 | # http://www.apache.org/licenses/LICENSE-2.0
9 | #
10 | # Unless required by applicable law or agreed to in writing, software
11 | # distributed under the License is distributed on an "AS IS" BASIS,
12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 | # See the License for the specific language governing permissions and
14 | # limitations under the License.
15 |
16 | import tensorflow as tf
17 | from tensorflow.python.framework import tensor_util
18 | import tensorflowvisu
19 | import math
20 | from tensorflow.contrib.learn.python.learn.datasets.mnist import read_data_sets
21 | tf.set_random_seed(0)
22 |
23 | # Download images and labels into mnist.test (10K images+labels) and mnist.train (60K images+labels)
24 | mnist = read_data_sets("data", one_hot=True, reshape=False, validation_size=0)
25 |
26 | # neural network structure for this sample:
27 | #
28 | # · · · · · · · · · · (input data, 1-deep) X [batch, 28, 28, 1]
29 | # @ @ @ @ @ @ @ @ @ @ -- conv. layer +BN 6x6x1=>24 stride 1 W1 [6, 6, 1, 24] B1 [24]
30 | # ∶∶∶∶∶∶∶∶∶∶∶∶∶∶∶∶∶∶∶ Y1 [batch, 28, 28, 24]
31 | # @ @ @ @ @ @ @ @ -- conv. layer +BN 5x5x24=>48 stride 2 W2 [5, 5, 24, 48] B2 [48]
32 | # ∶∶∶∶∶∶∶∶∶∶∶∶∶∶∶ Y2 [batch, 14, 14, 48]
33 | # @ @ @ @ @ @ -- conv. layer +BN 4x4x48=>64 stride 2 W3 [4, 4, 48, 64] B3 [64]
34 | # ∶∶∶∶∶∶∶∶∶∶∶ Y3 [batch, 7, 7, 64] => reshaped to YY [batch, 7*7*64]
35 | # \x/x\x\x/ ✞ -- fully connected layer (relu+dropout+BN) W4 [7*7*64, 200] B4 [200]
36 | # · · · · Y4 [batch, 200]
37 | # \x/x\x/ -- fully connected layer (softmax) W5 [200, 10] B5 [10]
38 | # · · · Y [batch, 10]
39 |
40 | # input X: 28x28 grayscale images, the first dimension (None) will index the images in the mini-batch
41 | X = tf.placeholder(tf.float32, [None, 28, 28, 1])
42 | # correct answers will go here
43 | Y_ = tf.placeholder(tf.float32, [None, 10])
44 | # variable learning rate
45 | lr = tf.placeholder(tf.float32)
46 | # test flag for batch norm
47 | tst = tf.placeholder(tf.bool)
48 | iter = tf.placeholder(tf.int32) # training iteration counter, used by the batch norm moving averages
49 | # dropout probability
50 | pkeep = tf.placeholder(tf.float32)
51 | pkeep_conv = tf.placeholder(tf.float32)
52 |
53 | def batchnorm(Ylogits, is_test, iteration, offset, convolutional=False):
54 | exp_moving_avg = tf.train.ExponentialMovingAverage(0.999, iteration) # adding the iteration number prevents averaging across not-yet-existing iterations
55 | bnepsilon = 1e-5
56 | if convolutional:
57 | mean, variance = tf.nn.moments(Ylogits, [0, 1, 2])
58 | else:
59 | mean, variance = tf.nn.moments(Ylogits, [0])
60 | update_moving_averages = exp_moving_avg.apply([mean, variance])
61 | m = tf.cond(is_test, lambda: exp_moving_avg.average(mean), lambda: mean)
62 | v = tf.cond(is_test, lambda: exp_moving_avg.average(variance), lambda: variance)
63 | Ybn = tf.nn.batch_normalization(Ylogits, m, v, offset, None, bnepsilon)
64 | return Ybn, update_moving_averages
65 |
66 | def no_batchnorm(Ylogits, is_test, iteration, offset, convolutional=False):
67 | return Ylogits, tf.no_op()
68 |
69 | def compatible_convolutional_noise_shape(Y):
70 | noiseshape = tf.shape(Y)
71 | noiseshape = noiseshape * tf.constant([1,0,0,1]) + tf.constant([0,1,1,0]) # noise shape [batch, 1, 1, channels]: dropout then zeroes out entire feature maps
72 | return noiseshape
73 |
74 | # three convolutional layers with their channel counts, and a
75 | # fully connected layer (the last layer has 10 softmax neurons)
76 | K = 24 # first convolutional layer output depth
77 | L = 48 # second convolutional layer output depth
78 | M = 64 # third convolutional layer output depth
79 | N = 200 # fully connected layer
80 |
81 | W1 = tf.Variable(tf.truncated_normal([6, 6, 1, K], stddev=0.1)) # 6x6 patch, 1 input channel, K output channels
82 | B1 = tf.Variable(tf.constant(0.1, tf.float32, [K]))
83 | W2 = tf.Variable(tf.truncated_normal([5, 5, K, L], stddev=0.1))
84 | B2 = tf.Variable(tf.constant(0.1, tf.float32, [L]))
85 | W3 = tf.Variable(tf.truncated_normal([4, 4, L, M], stddev=0.1))
86 | B3 = tf.Variable(tf.constant(0.1, tf.float32, [M]))
87 |
88 | W4 = tf.Variable(tf.truncated_normal([7 * 7 * M, N], stddev=0.1))
89 | B4 = tf.Variable(tf.constant(0.1, tf.float32, [N]))
90 | W5 = tf.Variable(tf.truncated_normal([N, 10], stddev=0.1))
91 | B5 = tf.Variable(tf.constant(0.1, tf.float32, [10]))
92 |
93 | # The model
94 | # batch norm scaling is not useful with relus
95 | # batch norm offsets are used instead of biases
96 | stride = 1 # output is 28x28
97 | Y1l = tf.nn.conv2d(X, W1, strides=[1, stride, stride, 1], padding='SAME')
98 | Y1bn, update_ema1 = batchnorm(Y1l, tst, iter, B1, convolutional=True)
99 | Y1r = tf.nn.relu(Y1bn)
100 | Y1 = tf.nn.dropout(Y1r, pkeep_conv, compatible_convolutional_noise_shape(Y1r))
101 | stride = 2 # output is 14x14
102 | Y2l = tf.nn.conv2d(Y1, W2, strides=[1, stride, stride, 1], padding='SAME')
103 | Y2bn, update_ema2 = batchnorm(Y2l, tst, iter, B2, convolutional=True)
104 | Y2r = tf.nn.relu(Y2bn)
105 | Y2 = tf.nn.dropout(Y2r, pkeep_conv, compatible_convolutional_noise_shape(Y2r))
106 | stride = 2 # output is 7x7
107 | Y3l = tf.nn.conv2d(Y2, W3, strides=[1, stride, stride, 1], padding='SAME')
108 | Y3bn, update_ema3 = batchnorm(Y3l, tst, iter, B3, convolutional=True)
109 | Y3r = tf.nn.relu(Y3bn)
110 | Y3 = tf.nn.dropout(Y3r, pkeep_conv, compatible_convolutional_noise_shape(Y3r))
111 |
112 | # reshape the output from the third convolution for the fully connected layer
113 | YY = tf.reshape(Y3, shape=[-1, 7 * 7 * M])
114 |
115 | Y4l = tf.matmul(YY, W4)
116 | Y4bn, update_ema4 = batchnorm(Y4l, tst, iter, B4)
117 | Y4r = tf.nn.relu(Y4bn)
118 | Y4 = tf.nn.dropout(Y4r, pkeep)
119 | Ylogits = tf.matmul(Y4, W5) + B5
120 | Y = tf.nn.softmax(Ylogits)
121 |
122 | update_ema = tf.group(update_ema1, update_ema2, update_ema3, update_ema4)
123 |
124 | # cross-entropy loss function (= -sum(Y_i * log(Yi)) ), normalised for batches of 100 images
125 | # TensorFlow provides the softmax_cross_entropy_with_logits function to avoid numerical
126 | # instability problems with log(0), which would otherwise produce NaN values
127 | cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=Ylogits, labels=Y_)
128 | cross_entropy = tf.reduce_mean(cross_entropy)*100
129 |
130 | # accuracy of the trained model, between 0 (worst) and 1 (best)
131 | correct_prediction = tf.equal(tf.argmax(Y, 1), tf.argmax(Y_, 1))
132 | accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
133 |
134 | # matplotlib visualisation
135 | allweights = tf.concat([tf.reshape(W1, [-1]), tf.reshape(W2, [-1]), tf.reshape(W3, [-1]), tf.reshape(W4, [-1]), tf.reshape(W5, [-1])], 0)
136 | allbiases = tf.concat([tf.reshape(B1, [-1]), tf.reshape(B2, [-1]), tf.reshape(B3, [-1]), tf.reshape(B4, [-1]), tf.reshape(B5, [-1])], 0)
137 | conv_activations = tf.concat([tf.reshape(tf.reduce_max(Y1r, [0]), [-1]), tf.reshape(tf.reduce_max(Y2r, [0]), [-1]), tf.reshape(tf.reduce_max(Y3r, [0]), [-1])], 0)
138 | dense_activations = tf.reduce_max(Y4r, [0])
139 | I = tensorflowvisu.tf_format_mnist_images(X, Y, Y_)
140 | It = tensorflowvisu.tf_format_mnist_images(X, Y, Y_, 1000, lines=25)
141 | datavis = tensorflowvisu.MnistDataVis(title4="batch-max conv activation", title5="batch-max dense activations", histogram4colornum=2, histogram5colornum=2)
142 |
143 | # training step, the learning rate is a placeholder
144 | train_step = tf.train.AdamOptimizer(lr).minimize(cross_entropy)
145 |
146 | # init
147 | init = tf.global_variables_initializer()
148 | sess = tf.Session()
149 | sess.run(init)
150 |
151 |
152 | # You can call this function in a loop to train the model, 100 images at a time
153 | def training_step(i, update_test_data, update_train_data):
154 |
155 | # training on batches of 100 images with 100 labels
156 | batch_X, batch_Y = mnist.train.next_batch(100)
157 |
158 | # learning rate decay
159 | max_learning_rate = 0.02
160 | min_learning_rate = 0.0001
161 | decay_speed = 1600
162 | learning_rate = min_learning_rate + (max_learning_rate - min_learning_rate) * math.exp(-i/decay_speed)
163 |
164 | # compute training values for visualisation
165 | if update_train_data:
166 | a, c, im, ca, da = sess.run([accuracy, cross_entropy, I, conv_activations, dense_activations], {X: batch_X, Y_: batch_Y, tst: False, pkeep: 1.0, pkeep_conv: 1.0})
167 | print(str(i) + ": accuracy:" + str(a) + " loss: " + str(c) + " (lr:" + str(learning_rate) + ")")
168 | datavis.append_training_curves_data(i, a, c)
169 | datavis.update_image1(im)
170 | datavis.append_data_histograms(i, ca, da)
171 |
172 | # compute test values for visualisation
173 | if update_test_data:
174 | a, c, im = sess.run([accuracy, cross_entropy, It], {X: mnist.test.images, Y_: mnist.test.labels, tst: True, pkeep: 1.0, pkeep_conv: 1.0})
175 | print(str(i) + ": ********* epoch " + str(i*100//mnist.train.images.shape[0]+1) + " ********* test accuracy:" + str(a) + " test loss: " + str(c))
176 | datavis.append_test_curves_data(i, a, c)
177 | datavis.update_image2(im)
178 |
179 | # the backpropagation training step
180 | sess.run(train_step, {X: batch_X, Y_: batch_Y, lr: learning_rate, tst: False, pkeep: 0.75, pkeep_conv: 1.0})
181 | sess.run(update_ema, {X: batch_X, Y_: batch_Y, tst: False, iter: i, pkeep: 1.0, pkeep_conv: 1.0})
182 |
183 | datavis.animate(training_step, 10001, train_data_update_freq=20, test_data_update_freq=100)
184 |
185 | # to save the animation as a movie, add save_movie=True as an argument to datavis.animate
186 | # to disable the visualisation use the following line instead of the datavis.animate line
187 | # for i in range(10000+1): training_step(i, i % 100 == 0, i % 20 == 0)
188 |
189 | print("max test accuracy: " + str(datavis.get_max_test_accuracy()))
190 |
191 | ## All runs 10K iterations:
192 | # batch norm 0.998 lr 0.03-0.0001-1000 no BN offset or scale: best 0.9933 but most of the way under 0.993 and lots of variation. test loss under 2.2 though
193 | # batch norm 0.998 lr 0.03-0.0001-500 no BN offset or scale: best 0.9933 but really clean curves
194 | # batch norm 0.998 lr 0.03-0.0001-500 no BN offset or scale, dropout 0.8 on fully connected layer: max 0.9926
195 | # same as above but batch norm on fully connected layer only: max 0.9904
196 | # batch norm 0.998 lr 0.03-0.0001-500, without biases or BN offsets or scales at all: above 0.99 at 1200 iterations (!) but then max 0.9928 (record test loss though: 2.08193)
197 | # batch norm 0.998 lr 0.03-0.0001-500 dropout 0.75, with biases replaced with BN offsets as per the book: above 0.99 at 900 iterations (!), max 0.9931 at 10K iterations, maybe could have gone higher still (record test loss though: below 2.05)
198 | # batch norm 0.998 lr 0.03-0.0001-500 no dropout, with biases replaced with BN offsets as per the book: above 0.99 at 900 iterations (!), max 0.993 (best loss at 2.0879 at 2100 it and went up after that)
199 | # batch norm 0.998 lr 0.03-0.0001-500 no dropout, offsets and scales for BN, no biases: max 0.9935 at 2400 it but going down from there... also dense activations not so regular...
200 | # batch norm 0.999 + same as above: 0.9935 at 2400 iterations but downhill from there...
201 | # batch norm 0.999 lr 0.02-0.0002-2000 dropout 0.75, normal biases, no BN scales or offsets: max 0.9949 at 17K it (min test loss 1.64665 but cruising around 1.8) 0.994 at 3100 it, 0.9942 at 20K it, 0.99427 average on last 10K it
202 | # batch norm 0.999 lr 0.02-0.0001-1000 dropout 0.75, normal biases, no BN scales or offsets: max 0.9944 but oscillating in 0.9935-0.9940 region (test loss stable between 1.7 and 1.8 though)
203 | # batch norm 0.999 lr 0.02-0.0002-1000 dropout 0.75, normal biases, no BN scales or offsets: max 0.995, min test loss 1.49787 cruising below 1.6, then at 8K it something happens and cruise just above 1.6, 0.99436 average on last 10K it
204 | # => see which setting removes the weird event at 8K ?:
205 | # => in everything below batch norm 0.999 lr 0.02-0.0002-1000 dropout 0.75, normal biases, no BN scales or offsets, unless stated otherwise
206 | # remove n/n+1 in variation calculation: no good, max 0.994 but cruising around 0.993
207 | # bn 0.9955 for cutoff at 2K it: still something happens at 8K. Max 0.995 but cruising at 0.9942-0.9943 only and downward trend above 15K. Test loss: nice cruise below 1.6
208 | # bn epsilon e-10 => max 0.9947 cruise around 0.9939, test loss never went below 1.6, barely below 1.7,
209 | # bn epsilon e-10 run 2=> max 0.9945 cruise around 0.9937, test loss never went below 1.6, barely below 1.7,
210 | # baseline run 2: max 0.995 cruising around 0.9946 0.9947, test loss cruising between 1.6 and 1.7 (baseline confirmed)
211 | # bn 0.998 for cutoff at 5K it: max 0.9948, test loss cruising btw 1.6 and 1.8, last 10K avg 0.99421
212 | # lr 0.015-0.0001-1500: max 0.9938, cruise between 0.993 and 0.994, test loss above 2.0 most of the time (not good)
213 | # bn 0.9999: max 0.9952, cruise between 0.994 and 0.995 with upward trend, fall in last 2K it. test loss cruise just above 1.6. Avg on last 10K it 0.99441. Could be stopped at 7000 it. Quite noisy overall.
214 | # bn 0.99955 for cutoff at 20K it: max 0.9948, cruise around 0.9942, test loss cruise around 1.7. Avg on last 10K it 0.99415
215 | # batch norm 0.999 lr 0.015-0.00015-1500 dropout 0.75, normal biases, no BN scales or offsets: cruise around 0.9937-0.994, test loss cruise around 1.95-2.0 (not good)
216 | # batch norm 0.999 lr 0.03-0.0001-2000 dropout 0.75, normal biases, no BN scales or offsets: stable cruise around 0.9940, test loss cruise around 2.2, good stability in last 10K, bumpy slow start
217 | # batch norm 0.9999 lr 0.02-0.0001-1500 dropout 0.75, normal biases, no BN scales or offsets: max 0.995, stable btw 0.9940-0.9945, test loss stable around 1.7, good stability in last 4K, avg on last 10K: 0.99414, avg on last 4K
218 | # *batch norm 0.9999 lr 0.02-0.00015-1000 dropout 0.75, normal biases, no BN scales or offsets: max 0.9956 stable above 0.995!!! test loss stable around 1.6. Avg last 10K 0.99502. Avg 10K-13K 0.99526. Avg 8K-10K: 0.99514. Best example to run in 10K
219 | # same as above with different rnd seed: max 0.9938 only in 10K it, test loss in 1.9 region (very bad)
220 | # same as above with dropout 0.8: max 0.9937 only (bad)
221 | # same as above with dropout 0.66: max 0.9942 only, test loss between 1.7-1.8 (not good)
222 | # same as above with lr 0.015-0.0001-1200: max 0.9946 at 6500 it but something happens after that and it goes down (not good)
223 | # best * run 2 (lbl 5.1): max 0.9953, cruising around 0.995 until 12K it, went down a bit after that (still ok) avg 8-10K 0.99484
224 | # best * run 3 (lbl 5.2 video): max 0.9951, cruising just below 0.995, test loss cruising around 1.6
225 | # best * run 3-8: not good, usually in the 0.994 range
226 | # best * run 9: (lbl 5.3 video): max 0.9956, cruising above 0.995, test loss cruising around 1.6, avg 7K-10K it: 0.99518
227 | # added BN offsets instead of biases as per the BN theory. Offsets initialised to 0.025
228 | # lr 0.005-0.00015-1000 max accuracy 0.9944 not good
229 | # lr 0.015-0.0001-1200 max accuracy 0.9950 but it was really a peak
230 | # same with offsets initialised to -0.25: very bad, not even 0.993, BN offsets stabilise even lower than -0.25
231 | # same with offsets initialised to 0.1: max accuracy 0.9946 bad
232 | # same with batch norm and dropout on fully connected layer only: max accuracy 0.9935 very bad
233 | # BN with no offset but regular biases applied after the BN in convolutional layers. BN with offset on fully connected layer: max accuracy 0.9949
234 | # BN and dropout on all layers, as per the book: max accuracy 0.9918 very bad
235 | # back to basics: batch norm 0.9999 lr 0.02-0.00015-1000 dropout 0.75 on dense layer, normal biases, no BN scales or offsets: 0.9935 (bad)
236 | # by the book: batch norm 0.9999 lr 0.02-0.00015-1000 dropout 0.75 on dense layer, BN offsets init to 0.01, no BN scales: max accuracy 0.9943
237 | # smaller batch size (33): max accuracy 0.9925 (not good)
238 | #* by the book: 3 conv layers 24-48-64, batch norm 0.999 lr 0.02-0.0001-1700 dropout 0.5 on dense layer, BN offsets init to 0.01, no BN scales: max accuracy 0.9954, stable around 0.9950, test loss goes as low as 1.45! (on GPU)
239 | #* by the book: 3 conv layers 24-48-64, batch norm 0.999 lr 0.02-0.0001-1800 dropout 0.5 on dense layer, BN offsets init to 0.01, no BN scales: max accuracy 0.9952, stable around 0.9950 (on GPU)
240 | # by the book: 3 conv layers 24-48-64, batch norm 0.999 lr 0.02-0.0001-1500 dropout 0.5 on dense layer, BN offsets init to 0.01, no BN scales: max accuracy 0.9947 (on GPU)
241 | #* by the book: 3 conv layers 24-48-64, batch norm 0.999 lr 0.02-0.0001-1600 dropout 0.5 on dense layer, BN offsets init to 0.01, no BN scales: max accuracy 0.9956, stable around 0.9952 (on GPU)
242 | #* 2nd run: max accuracy 0.9954, stable around 0.9949, test loss stable around 1.7 (on GPU)
243 | #* 3rd run: max accuracy 0.9949, stable around 0.9947, test loss stable around 1.6 (on GPU)
244 | #* 4th run: max accuracy 0.9952, stable around 0.9948, test loss stable around 1.7, 0.9952 at 3200 iterations (on GPU)
245 | #* 5th run: max accuracy 0.9952, stable around 0.9952, test loss stable around 1.7 (on GPU)
246 | # same conditions without batch norm: max accuracy below 0.9900 ! (on GPU)
247 | # same conditions with dropout 0.75: max accuracy 0.9953, stable around 0.9950, test loss stable around 1.6 (on GPU)
248 | # 2nd run: max accuracy 0.9958 (!), stable around 0.9950, test loss stable around 1.65 (on GPU)
249 | # 3rd run: max accuracy 0.9955 (!), stable around 0.9951, test loss stable around 1.65 (on GPU)
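250 |
251 | # A minimal numpy sketch of the arithmetic batchnorm() performs above with scale=None
252 | # (tf.nn.batch_normalization); the batch size, neuron count and offset are illustrative values:
253 | #
254 | #   import numpy as np
255 | #   x = np.random.randn(100, 200).astype(np.float32)   # a batch of 100 dense activations
256 | #   m, v = x.mean(axis=0), x.var(axis=0)                # per-neuron stats, cf. tf.nn.moments(Ylogits, [0])
257 | #   ybn = (x - m) / np.sqrt(v + 1e-5) + 0.1             # the offset plays the role of the bias
258 | #   assert abs(ybn.mean() - 0.1) < 1e-3                 # normalised activations centre on the offset
259 | #
260 | # Note: with a num_updates argument, tf.train.ExponentialMovingAverage uses the effective decay
261 | # min(0.999, (1 + iteration) / (10 + iteration)), which is why feeding the iteration number
262 | # keeps the moving averages meaningful during the first few hundred training steps.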
--------------------------------------------------------------------------------
/tensorflowvisu.py:
--------------------------------------------------------------------------------
1 | # encoding: UTF-8
2 | # Copyright 2016 Google.com
3 | #
4 | # Licensed under the Apache License, Version 2.0 (the "License");
5 | # you may not use this file except in compliance with the License.
6 | # You may obtain a copy of the License at
7 | #
8 | # http://www.apache.org/licenses/LICENSE-2.0
9 | #
10 | # Unless required by applicable law or agreed to in writing, software
11 | # distributed under the License is distributed on an "AS IS" BASIS,
12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 | # See the License for the specific language governing permissions and
14 | # limitations under the License.
15 |
16 | import tensorflow as tf
17 | import numpy as np
18 | import matplotlib.pyplot as plt
19 | plt.style.use(["ggplot", "tensorflowvisu.mplstyle"])
20 | #import matplotlib
21 | #matplotlib.use('macosx') #this is the default on mac
22 | #print("matplotlib version: " + matplotlib.__version__)
23 | import matplotlib.animation as animation
24 | from matplotlib import rcParams
25 | import math
26 | import tensorflowvisu_digits
27 | tf.set_random_seed(0)
28 |
29 | # number of percentile slices for histogram visualisations
30 | HISTOGRAM_BUCKETS = 7
31 |
32 | # X: tensor of shape [100+, 28, 28, 1] containing a batch of images (float32)
33 | # Y: tensor of shape [100+, 10] containing recognised digits (one-hot vectors)
34 | # Y_: tensor of shape [100+, 10] containing correct digit labels (one-hot vectors)
35 | # return value: tensor of shape [280, 280, 3] containing the first 100 unrecognised images (rgb, uint8)
36 | # followed by other, recognised images. 100 images max, arranged as a 10x10 array. Unrecognised images
37 | # are displayed on a red background and labeled with the correct (left) and recognised digit (right).
38 | def tf_format_mnist_images(X, Y, Y_, n=100, lines=10):
39 | correct_prediction = tf.equal(tf.argmax(Y,1), tf.argmax(Y_,1))
40 | correctly_recognised_indices = tf.squeeze(tf.where(correct_prediction), [1]) # indices of correctly recognised images
41 | incorrectly_recognised_indices = tf.squeeze(tf.where(tf.logical_not(correct_prediction)), [1]) # indices of incorrectly recognised images
42 | everything_incorrect_first = tf.concat([incorrectly_recognised_indices, correctly_recognised_indices], 0) # images reordered with indices of unrecognised images first
43 | everything_incorrect_first = tf.slice(everything_incorrect_first, [0], [n]) # compute first 100 only - no space to display more anyway
44 | # compute n=100 digits to display only
45 | Xs = tf.gather(X, everything_incorrect_first)
46 | Ys = tf.gather(Y, everything_incorrect_first)
47 | Ys_ = tf.gather(Y_, everything_incorrect_first)
48 | correct_prediction_s = tf.gather(correct_prediction, everything_incorrect_first)
49 |
50 | digits_left = tf.image.grayscale_to_rgb(tensorflowvisu_digits.digits_left())
51 | correct_tags = tf.gather(digits_left, tf.argmax(Ys_, 1)) # correct digits to be printed on the images
52 | digits_right = tf.image.grayscale_to_rgb(tensorflowvisu_digits.digits_right())
53 | computed_tags = tf.gather(digits_right, tf.argmax(Ys, 1)) # computed digits to be printed on the images
54 | #superimposed_digits = correct_tags+computed_tags
55 | superimposed_digits = tf.where(correct_prediction_s, tf.zeros_like(correct_tags),correct_tags+computed_tags) # only print the correct and computed digits on unrecognised images
56 | correct_bkg = tf.reshape(tf.tile([1.3,1.3,1.3], [28*28]), [1, 28,28,3]) # white background
57 | incorrect_bkg = tf.reshape(tf.tile([1.3,1.0,1.0], [28*28]), [1, 28,28,3]) # red background
58 | recognised_bkg = tf.gather(tf.concat([incorrect_bkg, correct_bkg], 0), tf.cast(correct_prediction_s, tf.int32)) # pick either the red or the white background depending on recognised status
59 |
60 | I = tf.image.grayscale_to_rgb(Xs)
61 | I = ((1-(I+superimposed_digits))*recognised_bkg)/1.3 # stencil extra data on top of images and reorder them unrecognised first
62 | I = tf.image.convert_image_dtype(I, tf.uint8, saturate=True)
63 | Islices = [] # 100 images => 10x10 image block
64 | for imslice in range(lines):
65 | Islices.append(tf.concat(tf.unstack(tf.slice(I, [imslice*n//lines,0,0,0], [n//lines,28,28,3])), 1))
66 | I = tf.concat(Islices, 0)
67 | return I
68 |
69 | # n = HISTOGRAM_BUCKETS (global)
70 | # Buckets the data into n buckets so that there are an equal number of data points in
71 | each bucket. Returns n+1 bucket boundaries. Spreads the remainder data.size % n more
72 | # or less evenly among the central buckets.
73 | # data: 1-D ndarray containing float data, MUST BE SORTED in ascending order
74 | # n: integer, the number of desired output buckets
75 | # return value: ndarray, 1-D vector of size n+1 containing the bucket boundaries
76 | # the first value is the min of the data, the last value is the max
77 | def probability_distribution(data):
78 | n = HISTOGRAM_BUCKETS
79 | data.sort()
80 | bucketsize = data.size // n
81 | bucketrem = data.size % n
82 | buckets = np.zeros([n+1])
83 | buckets[0] = data[0] # min
84 | buckets[-1] = data[-1] # max
85 | buckn = 0
86 | rem = 0
87 | remn = 0
88 | k = 0
89 | cnt = 0 # only for assert
90 | lastval = data[0]
91 | for i in range(data.size):
92 | val = data[i]
93 | buckn += 1
94 | cnt += 1
95 | if buckn > bucketsize+rem : ## crossing bucket boundary
96 | cnt -= 1
97 | k += 1
98 | buckets[k] = (val + lastval) / 2
99 | if (k < n+1):
100 | cnt += 1
101 | buckn = 1 # val goes into the new bucket
102 | if k >= (n - bucketrem) // 2 and remn < bucketrem: # spread the remainder over the central buckets
103 | rem = 1
104 | remn += 1
105 | else:
106 | rem = 0
107 | lastval = val
108 | assert i+1 == cnt
109 | return buckets
110 |
111 | def _empty_collection(collection):
112 | tempcoll = []
113 | for a in (collection):
114 | tempcoll.append(a)
115 | for a in (tempcoll):
116 | collection.remove(a)
117 |
118 | def _display_time_histogram(ax, xdata, ydata, color):
119 | _empty_collection(ax.collections)
120 | midl = HISTOGRAM_BUCKETS//2
121 | midh = HISTOGRAM_BUCKETS//2
122 | for i in range(int(math.ceil(HISTOGRAM_BUCKETS/2.0))):
123 | ax.fill_between(xdata, ydata[:,midl-i], ydata[:,midh+1+i], facecolor=color, alpha=1.6/HISTOGRAM_BUCKETS)
124 | if HISTOGRAM_BUCKETS % 2 == 0 and i == 0:
125 | ax.fill_between(xdata, ydata[:,midl-1], ydata[:,midh], facecolor=color, alpha=1.6/HISTOGRAM_BUCKETS)
126 | midl = midl-1
127 |
128 | class MnistDataVis:
129 | xmax = 0
130 | y2max = 0
131 | x1 = []
132 | y1 = []
133 | z1 = []
134 | x2 = []
135 | y2 = []
136 | z2 = []
137 | x3 = []
138 | w3 = np.zeros([0,HISTOGRAM_BUCKETS+1])
139 | b3 = np.zeros([0,HISTOGRAM_BUCKETS+1])
140 | im1 = np.full((28*10,28*10,3),255, dtype='uint8')
141 | im2 = np.full((28*10,28*10,3),255, dtype='uint8')
142 | _animpause = False
143 | _animation = None
144 | _mpl_figure = None
145 | _mpl_init_func = None
146 | _mpl_update_func = None
147 | _color4 = None
148 | _color5 = None
149 |
150 | def __set_title(self, ax, title, default=""):
151 | if title is not None and title != "":
152 | ax.set_title(title, y=1.02) # adjustment for plot title bottom margin
153 | else:
154 | ax.set_title(default, y=1.02) # adjustment for plot title bottom margin
155 |
156 | # retrieve the color from the color cycle, default is 1
157 | def __get_histogram_cyclecolor(self, colornum):
158 | clist = rcParams['axes.prop_cycle']
159 | ccount = 1 if (colornum is None) else colornum
160 | colors = clist.by_key()['color']
161 | for i, c in enumerate(colors):
162 | if (i == ccount % 3):
163 | return c
164 |
165 | def __init__(self, title1=None, title2=None, title3=None, title4=None, title5=None, title6=None, histogram4colornum=None, histogram5colornum=None, dpi=70):
166 | self._color4 = self.__get_histogram_cyclecolor(histogram4colornum)
167 | self._color5 = self.__get_histogram_cyclecolor(histogram5colornum)
168 | fig = plt.figure(figsize=(19.20,10.80), dpi=dpi)
169 | plt.gcf().canvas.set_window_title("MNIST")
170 | fig.set_facecolor('#FFFFFF')
171 | ax1 = fig.add_subplot(231)
172 | ax2 = fig.add_subplot(232)
173 | ax3 = fig.add_subplot(233)
174 | ax4 = fig.add_subplot(234)
175 | ax5 = fig.add_subplot(235)
176 | ax6 = fig.add_subplot(236)
177 | #fig, ax = plt.subplots() # if you need only 1 graph
178 |
179 | self.__set_title(ax1, title1, default="Accuracy")
180 | self.__set_title(ax2, title2, default="Cross entropy loss")
181 | self.__set_title(ax3, title3, default="Training digits")
182 | self.__set_title(ax4, title4, default="Weights")
183 | self.__set_title(ax5, title5, default="Biases")
184 | self.__set_title(ax6, title6, default="Test digits")
185 |
186 | #ax1.set_figaspect(1.0)
187 |
188 | # TODO: finish exporting the style modifications into a stylesheet
189 | line1, = ax1.plot(self.x1, self.y1, label="training accuracy")
190 | line2, = ax1.plot(self.x2, self.y2, label="test accuracy")
191 | legend = ax1.legend(loc='lower right') # fancybox : slightly rounded corners
192 | legend.draggable(True)
193 |
194 | line3, = ax2.plot(self.x1, self.z1, label="training loss")
195 | line4, = ax2.plot(self.x2, self.z2, label="test loss")
196 | legend = ax2.legend(loc='upper right') # fancybox : slightly rounded corners
197 | legend.draggable(True)
198 |
199 | ax3.grid(False) # toggle grid off
200 | ax3.set_axis_off()
201 | imax1 = ax3.imshow(self.im1, animated=True, cmap='binary', vmin=0.0, vmax=1.0, interpolation='nearest', aspect=1.0)
202 |
203 | ax6.grid(False) # toggle grid off
204 | ax6.axes.get_xaxis().set_visible(False)
205 | imax2 = ax6.imshow(self.im2, animated=True, cmap='binary', vmin=0.0, vmax=1.0, interpolation='nearest', aspect=1.0)
206 | ax6.locator_params(axis='y', nbins=7)
207 | # hack...
208 | ax6.set_yticks([0, 280-4*56, 280-3*56, 280-2*56, 280-56, 280])
209 | ax6.set_yticklabels(["100%", "98%", "96%", "94%", "92%", "90%"])
210 |
211 | def _init():
212 | ax1.set_xlim(0, 10) # initial value only, autoscaled after that
213 | ax2.set_xlim(0, 10) # initial value only, autoscaled after that
214 | ax4.set_xlim(0, 10) # initial value only, autoscaled after that
215 | ax5.set_xlim(0, 10) # initial value only, autoscaled after that
216 | ax1.set_ylim(0, 1) # important: not autoscaled
217 | #ax1.autoscale(axis='y')
218 | ax2.set_ylim(0, 100) # important: not autoscaled
219 | return imax1, imax2, line1, line2, line3, line4
220 |
221 |
222 | def _update():
223 | # x scale: iterations
224 | ax1.set_xlim(0, self.xmax+1)
225 | ax2.set_xlim(0, self.xmax+1)
226 | ax4.set_xlim(0, self.xmax+1)
227 | ax5.set_xlim(0, self.xmax+1)
228 |
229 | # four curves: train and test accuracy, train and test loss
230 | line1.set_data(self.x1, self.y1)
231 | line2.set_data(self.x2, self.y2)
232 | line3.set_data(self.x1, self.z1)
233 | line4.set_data(self.x2, self.z2)
234 |
235 | #images
236 | imax1.set_data(self.im1)
237 | imax2.set_data(self.im2)
238 |
239 | # histograms
240 | _display_time_histogram(ax4, self.x3, self.w3, self._color4)
241 | _display_time_histogram(ax5, self.x3, self.b3, self._color5)
242 |
243 | #return changed artists
244 | return imax1, imax2, line1, line2, line3, line4
245 |
246 | def _key_event_handler(event):
247 | if len(event.key) == 0:
248 | return
249 | else:
250 | keycode = event.key
251 |
252 | # pause/resume with space bar
253 | if keycode == ' ':
254 | self._animpause = not self._animpause
255 | if not self._animpause:
256 | _update()
257 | return
258 |
259 | # [p, m, n] p is the #of the subplot, [n,m] is the subplot layout
260 | toggles = {'1':[1,1,1], # one plot
261 | '2':[2,1,1], # one plot
262 | '3':[3,1,1], # one plot
263 | '4':[4,1,1], # one plot
264 | '5':[5,1,1], # one plot
265 | '6':[6,1,1], # one plot
266 | '7':[12,1,2], # two plots
267 | '8':[45,1,2], # two plots
268 | '9':[36,1,2], # two plots
269 | 'escape':[123456,2,3], # six plots
270 | '0':[123456,2,3]} # six plots
271 |
272 | # other matplotlib keyboard shortcuts:
273 | # 'o' box zoom
274 | # 'p' mouse pan and zoom
275 | # 'h' or 'home' reset
276 | # 's' save
277 | # 'g' toggle grid (when mouse is over a plot)
278 | # 'k' toggle log/lin x axis
279 | # 'l' toggle log/lin y axis
280 |
281 | if not (keycode in toggles):
282 | return
283 |
284 | for i in range(6):
285 | fig.axes[i].set_visible(False)
286 |
287 | fignum = toggles[keycode][0]
288 | if fignum <= 6:
289 | fig.axes[fignum-1].set_visible(True)
290 | fig.axes[fignum-1].change_geometry(toggles[keycode][1], toggles[keycode][2], 1)
291 | ax6.set_aspect(25.0/40) # special case for test digits
292 | elif fignum < 100:
293 | fig.axes[fignum//10-1].set_visible(True)
294 | fig.axes[fignum//10-1].change_geometry(toggles[keycode][1], toggles[keycode][2], 1)
295 | fig.axes[fignum%10-1].set_visible(True)
296 | fig.axes[fignum%10-1].change_geometry(toggles[keycode][1], toggles[keycode][2], 2)
297 | ax6.set_aspect(1.0) # special case for test digits
298 | elif fignum == 123456:
299 | for i in range(6):
300 | fig.axes[i].set_visible(True)
301 | fig.axes[i].change_geometry(toggles[keycode][1], toggles[keycode][2], i+1)
302 | ax6.set_aspect(1.0) # special case for test digits
303 |
304 | plt.draw()
305 |
306 | fig.canvas.mpl_connect('key_press_event', _key_event_handler)
307 |
308 | self._mpl_figure = fig
309 | self._mpl_init_func = _init
310 | self._mpl_update_func = _update
311 |
312 | def _update_xmax(self, x):
313 | if (x > self.xmax):
314 | self.xmax = x
315 |
316 | def _update_y2max(self, y):
317 | if (y > self.y2max):
318 | self.y2max = y
319 |
320 | def append_training_curves_data(self, x, accuracy, loss):
321 | self.x1.append(x)
322 | self.y1.append(accuracy)
323 | self.z1.append(loss)
324 | self._update_xmax(x)
325 |
326 | def append_test_curves_data(self, x, accuracy, loss):
327 | self.x2.append(x)
328 | self.y2.append(accuracy)
329 | self.z2.append(loss)
330 | self._update_xmax(x)
331 | self._update_y2max(accuracy)
332 |
333 | def get_max_test_accuracy(self):
334 | return self.y2max
335 |
336 | def append_data_histograms(self, x, datavect1, datavect2, title1=None, title2=None):
337 | self.x3.append(x)
338 | datavect1.sort()
339 | self.w3 = np.concatenate((self.w3, np.expand_dims(probability_distribution(datavect1), 0)))
340 | datavect2.sort()
341 | self.b3 = np.concatenate((self.b3, np.expand_dims(probability_distribution(datavect2), 0)))
342 | self._update_xmax(x)
343 |
344 | def update_image1(self, im):
345 | self.im1 = im
346 |
347 | def update_image2(self, im):
348 | self.im2 = im
349 |
350 | def is_paused(self):
351 | return self._animpause
352 |
353 | def animate(self, compute_step, iterations, train_data_update_freq=20, test_data_update_freq=100, one_test_at_start=True, more_tests_at_start=False, save_movie=False):
354 |
355 | def animate_step(i):
356 | if (i == iterations // train_data_update_freq): #last iteration
357 | compute_step(iterations, True, True)
358 | else:
359 | for k in range(train_data_update_freq):
360 | n = i * train_data_update_freq + k
361 | request_data_update = (n % train_data_update_freq == 0)
362 | request_test_data_update = (n % test_data_update_freq == 0) and (n > 0 or one_test_at_start)
363 | if more_tests_at_start and n < test_data_update_freq: request_test_data_update = request_data_update
364 | compute_step(n, request_test_data_update, request_data_update)
365 | # makes the UI a little more responsive
366 | plt.pause(0.001)
367 | if not self.is_paused():
368 | return self._mpl_update_func()
369 |
370 | self._animation = animation.FuncAnimation(self._mpl_figure, animate_step, int(iterations // train_data_update_freq + 1), init_func=self._mpl_init_func, interval=16, repeat=False, blit=False)
371 |
372 | if save_movie:
373 | mywriter = animation.FFMpegWriter(fps=24, codec='libx264', extra_args=['-pix_fmt', 'yuv420p', '-profile:v', 'high', '-tune', 'animation', '-crf', '18'])
374 | self._animation.save("./tensorflowvisu_video.mp4", writer=mywriter)
375 | else:
376 | plt.show(block=True)
377 |
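378 | # A minimal usage sketch of the histogram helper above, on assumed random data:
379 | #
380 | #   import numpy as np
381 | #   data = np.random.randn(10000).astype(np.float32)
382 | #   data.sort()                                      # probability_distribution expects sorted 1-D data
383 | #   b = probability_distribution(data)               # HISTOGRAM_BUCKETS+1 = 8 boundaries
384 | #   assert b[0] == data[0] and b[-1] == data[-1]     # boundaries span the min..max of the data
385 | #
386 | # Consecutive boundaries each enclose roughly 1/7th of the data points, which is what
387 | # _display_time_histogram relies on to draw its stacked percentile bands.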
--------------------------------------------------------------------------------
/tensorflowvisu_digits.py:
--------------------------------------------------------------------------------
1 | # encoding: UTF-8
2 | # Copyright 2016 Google.com
3 | #
4 | # Licensed under the Apache License, Version 2.0 (the "License");
5 | # you may not use this file except in compliance with the License.
6 | # You may obtain a copy of the License at
7 | #
8 | # http://www.apache.org/licenses/LICENSE-2.0
9 | #
10 | # Unless required by applicable law or agreed to in writing, software
11 | # distributed under the License is distributed on an "AS IS" BASIS,
12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 | # See the License for the specific language governing permissions and
14 | # limitations under the License.
15 |
16 | import tensorflow as tf
17 |
18 | # helper to print expected and inferred digits on pictures.
19 |
20 | def digits_right():
21 | d = tf.convert_to_tensor(
22 | [[[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
23 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
24 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
25 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
26 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
27 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
28 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
29 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
30 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
31 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
32 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
33 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
34 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
35 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
36 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
37 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
38 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
39 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
40 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
41 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
42 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
43 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
44 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0],
45 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0],
46 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0],
47 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0],
48 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0],
49 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]],
50 | [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
51 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
52 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
53 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
54 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
55 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
56 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
57 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
58 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
59 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
60 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
61 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
62 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
63 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
64 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
65 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
66 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
67 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
68 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
69 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
70 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
71 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
72 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0],
73 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0],
74 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0],
75 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0],
76 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0],
77 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]],
78 | [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
79 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
80 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
81 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
82 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
83 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
84 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
85 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
86 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
87 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
88 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
89 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
90 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
91 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
92 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
93 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
94 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
95 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
96 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
97 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
98 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
99 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
100 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0],
101 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0],
102 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0],
103 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0],
104 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0],
105 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]],
106 | [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
107 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
108 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
109 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
110 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
111 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
112 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
113 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
114 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
115 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
116 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
117 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
118 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
119 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
120 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
121 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
122 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
123 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
124 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
125 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
126 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
127 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
128 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0],
129 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0],
130 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0],
131 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0],
132 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0],
133 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]],
134 | [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
135 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
136 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
137 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
138 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
139 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
140 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
141 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
142 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
143 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
144 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
145 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
146 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
147 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
148 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
149 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
150 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
151 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
152 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
153 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
154 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
155 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
156 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0],
157 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0],
158 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0],
159 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0],
160 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0],
161 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]],
162 | [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
163 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
164 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
165 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
166 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
167 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
168 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
169 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
170 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
171 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
172 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
173 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
174 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
175 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
176 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
177 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
178 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
179 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
180 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
181 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
182 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
183 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
184 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0],
185 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0],
186 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0],
187 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0],
188 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0],
189 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]],
190 | [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
191 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
192 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
193 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
194 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
195 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
196 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
197 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
198 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
199 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
200 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
201 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
202 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
203 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
204 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
205 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
206 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
207 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
208 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
209 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
210 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
211 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
212 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0],
213 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0],
214 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0],
215 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0],
216 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0],
217 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]],
218 | [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
219 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
220 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
221 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
222 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
223 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
224 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
225 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
226 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
227 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
228 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
229 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
230 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
231 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
232 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
233 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
234 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
235 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
236 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
237 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
238 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
239 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
240 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0],
241 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0],
242 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0],
243 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0],
244 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0],
245 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]],
246 | [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
247 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
248 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
249 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
250 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
251 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
252 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
253 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
254 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
255 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
256 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
257 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
258 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
259 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
260 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
261 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
262 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
263 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
264 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
265 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
266 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
267 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
268 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0],
269 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0],
270 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0],
271 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0],
272 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0],
273 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]],
274 | [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
275 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
276 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
277 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
278 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
279 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
280 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
281 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
282 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
283 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
284 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
285 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
286 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
287 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
288 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
289 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
290 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
291 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
292 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
293 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
294 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
295 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
296 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0],
297 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0],
298 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0],
299 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0],
300 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0],
301 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]
302 | ], tf.float32)
303 | return tf.reshape(d, [10, 28, 28, 1])  # ten digit glyphs (0-9) as 28x28 single-channel images
304 |
305 | def digits_left():  # small digit glyphs 0-9 drawn near the bottom-left corner of 28x28 bitmaps
306 | d = tf.convert_to_tensor(
307 | [[[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
308 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
309 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
310 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
311 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
312 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
313 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
314 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
315 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
316 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
317 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
318 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
319 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
320 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
321 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
322 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
323 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
324 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
325 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
326 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
327 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
328 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
329 | [0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
330 | [0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
331 | [0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
332 | [0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
333 | [0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
334 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]],
335 | [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
336 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
337 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
338 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
339 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
340 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
341 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
342 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
343 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
344 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
345 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
346 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
347 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
348 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
349 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
350 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
351 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
352 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
353 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
354 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
355 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
356 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
357 | [0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
358 | [0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
359 | [0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
360 | [0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
361 | [0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
362 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]],
363 | [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
364 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
365 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
366 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
367 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
368 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
369 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
370 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
371 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
372 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
373 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
374 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
375 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
376 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
377 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
378 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
379 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
380 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
381 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
382 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
383 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
384 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
385 | [0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
386 | [0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
387 | [0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
388 | [0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
389 | [0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
390 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]],
391 | [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
392 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
393 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
394 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
395 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
396 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
397 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
398 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
399 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
400 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
401 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
402 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
403 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
404 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
405 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
406 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
407 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
408 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
409 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
410 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
411 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
412 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
413 | [0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
414 | [0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
415 | [0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
416 | [0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
417 | [0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
418 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]],
419 | [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
420 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
421 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
422 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
423 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
424 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
425 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
426 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
427 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
428 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
429 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
430 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
431 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
432 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
433 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
434 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
435 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
436 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
437 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
438 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
439 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
440 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
441 | [0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
442 | [0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
443 | [0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
444 | [0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
445 | [0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
446 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]],
447 | [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
448 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
449 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
450 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
451 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
452 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
453 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
454 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
455 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
456 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
457 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
458 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
459 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
460 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
461 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
462 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
463 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
464 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
465 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
466 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
467 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
468 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
469 | [0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
470 | [0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
471 | [0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
472 | [0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
473 | [0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
474 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]],
475 | [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
476 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
477 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
478 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
479 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
480 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
481 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
482 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
483 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
484 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
485 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
486 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
487 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
488 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
489 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
490 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
491 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
492 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
493 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
494 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
495 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
496 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
497 | [0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
498 | [0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
499 | [0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
500 | [0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
501 | [0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
502 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]],
503 | [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
504 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
505 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
506 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
507 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
508 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
509 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
510 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
511 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
512 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
513 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
514 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
515 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
516 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
517 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
518 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
519 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
520 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
521 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
522 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
523 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
524 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
525 | [0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
526 | [0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
527 | [0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
528 | [0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
529 | [0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
530 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]],
531 | [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
532 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
533 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
534 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
535 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
536 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
537 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
538 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
539 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
540 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
541 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
542 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
543 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
544 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
545 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
546 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
547 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
548 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
549 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
550 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
551 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
552 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
553 | [0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
554 | [0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
555 | [0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
556 | [0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
557 | [0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
558 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]],
559 | [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
560 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
561 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
562 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
563 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
564 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
565 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
566 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
567 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
568 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
569 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
570 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
571 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
572 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
573 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
574 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
575 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
576 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
577 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
578 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
579 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
580 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
581 | [0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
582 | [0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
583 | [0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
584 | [0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
585 | [0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
586 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]
587 | ], tf.float32)
588 | return tf.reshape(d, [10, 28, 28, 1])
--------------------------------------------------------------------------------