├── .gitignore
├── README.md
├── datasets
│   ├── brown.lsi-index.npy
│   ├── brown.word-by-word.normalized.npy
│   ├── brown_pos_dataset.hdf5
│   ├── brown_pos_dataset.indices
│   └── brown_w2v_vectors.npy
├── dl4mt
│   └── __init__.py
├── notebooks
│   ├── data_preparation
│   │   ├── prep_pos_corpus.ipynb
│   │   ├── prep_text_classification_corpus.ipynb
│   │   └── word_window_vectors.ipynb
│   ├── day0
│   │   ├── Day 0 - Warming up to Python.ipynb
│   │   └── test.txt
│   ├── day1
│   │   ├── datasets
│   │   │   ├── brown.word-by-word.normalized.npy
│   │   │   ├── brown_pos_dataset.hdf5
│   │   │   ├── brown_pos_dataset.indices
│   │   │   ├── brown_w2v_model
│   │   │   ├── mnist2500_X.txt
│   │   │   └── mnist2500_labels.txt
│   │   ├── mlp.ipynb
│   │   ├── stacked_autoencoder.ipynb
│   │   ├── theano_autoencoder.ipynb
│   │   ├── theano_logistic_regression.ipynb
│   │   └── tsne_visulization.ipynb
│   └── day2
│       ├── accumulating_rnn.ipynb
│       ├── create_brown_w2v_index.ipynb
│       └── recurrent_transition_with_lookup.ipynb
└── trained_models
    └── logistic_regression_model.pkl
/.gitignore:
--------------------------------------------------------------------------------
1 | .ipynb_checkpoints
2 | *.swp
3 | *.swo
4 |
5 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | ### An Introduction to Deep Learning with Theano
2 |
3 | #### Fundamentals for DL4MT I
4 |
5 | - Quick review of MLPs and backpropagation
6 | - Autoencoders and stacked autoencoders
7 |
8 | Goal: preparation and setup.
9 | Compare logistic regression, an MLP, and stacked autoencoders on the same data.
10 | Challenges: see the bottom of each notebook.
11 |
12 |
13 | ### Day 1
14 | ***Please work through the notebooks in the following order:***
15 |
16 | If you are new to theano, please start with the excellent tutorials from the [Montreal Deep Learning Summer School 2015](https://github.com/mila-udem/summerschool2015), and complete the following notebooks first:
17 |
18 | - intro_theano/intro_theano.ipynb
19 | - intro_theano/logistic_regression.ipynb
20 |
21 | If you are already familiar with theano, please work through the notebooks in this repository in the following order:
22 | - theano_logistic_regression.ipynb
23 | - mlp.ipynb
24 | - theano_autoencoder.ipynb
25 | - stacked_autoencoder.ipynb
26 |
27 | ### Day 2
28 | ***Please work through the notebooks in the following order:***
29 |
30 | - accumulating_rnn.ipynb
31 | - create_brown_w2v_index.ipynb
32 | - recurrent_transition_with_lookup.ipynb
33 |
34 | ### Completing the labs
35 |
36 | As you work through the notebooks, you will notice that some of the cells use the `%%writefile` magic at the top of the cell to write the cell's content to a file. This is done for classes and functions which will be used in later notebooks. In order for the save paths to work correctly, you need to be running `ipython notebook` inside the directory for that day.
37 |
38 | Every time the cell is run, the file will be overwritten, so if you want to modify the behavior of a class or function, just edit the cell where it is created, and the corresponding file will automatically update.
39 |
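For example, a cell like the following writes its own body to disk each time it is run. This is only a minimal sketch -- the module and class names here are placeholders, not the actual lab files:

```
%%writefile example_layer.py
# Hypothetical module written out from a notebook cell; a later notebook
# could then do `from example_layer import ExampleLayer`.
import numpy as np

class ExampleLayer(object):
    """A toy layer that applies a fixed affine transform: y = Wx + b."""
    def __init__(self, W, b):
        self.W = np.asarray(W)
        self.b = np.asarray(b)

    def apply(self, x):
        return np.dot(self.W, x) + self.b
```
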
40 | ### Installation and Setup
41 |
42 | The notebooks need to be run from inside the corresponding day\*/ directory, for example:
43 |
44 | ```
45 | git clone https://github.com/chrishokamp/dl4mt_exercises.git
46 | cd dl4mt_exercises/notebooks/day1
47 | ipython notebook
48 | ```
49 |
50 | Please also make sure that you are using the bleeding-edge version of Theano from GitHub. Installation instructions are [here](http://deeplearning.net/software/theano/install_ubuntu.html#bleeding-edge-installs).
51 |
52 | These labs use [Fuel](http://fuel.readthedocs.org/en/latest/setup.html) to build, load, and iterate over datasets, so please install it first.
53 |
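If you have not used Fuel before, the pattern used throughout the labs looks roughly like this (a minimal sketch, assuming you are running from a day\*/ directory that contains `datasets/brown_pos_dataset.hdf5`, which is built by the data-preparation notebooks):

```
from fuel.datasets import H5PYDataset
from fuel.streams import DataStream
from fuel.schemes import SequentialScheme

# load the training split of the HDF5 dataset into memory
train_set = H5PYDataset('datasets/brown_pos_dataset.hdf5',
                        which_sets=('train',),
                        sources=['instances', 'targets'],
                        load_in_memory=True)

# iterate over it in minibatches of 500 examples
stream = DataStream.default_stream(
    train_set,
    iteration_scheme=SequentialScheme(train_set.num_examples, 500))

for instances, targets in stream.get_epoch_iterator():
    pass  # each batch is a pair of numpy arrays

```
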
54 | ### Memory and Disk Space Management
55 | If you are using the Virtual Machine provided for the exercises, you may find the available memory getting low. Once you have worked through a notebook, you can close it to free up the RAM used by that kernel.
56 |
57 | ### Dataset Description
58 | You can see how our toy POS tagging dataset is created (and create your own versions) by looking at these notebooks:
59 | - notebooks/data_preparation/prep_pos_corpus.ipynb
60 | - notebooks/data_preparation/word_window_vectors.ipynb
61 |
62 |
63 | ### Resources and Inspirations Used to Create these Tutorials
64 |
65 | Most of the Theano code in these tutorials was taken from the [excellent tutorials on deeplearning.net](https://github.com/lisa-lab/DeepLearningTutorials), and modified to be easy to use with an example Part-of-Speech tagging task using the [Brown Corpus](https://en.wikipedia.org/wiki/Brown_Corpus) with the [Universal Tagset POS tags](https://github.com/slavpetrov/universal-pos-tags).
66 |
67 |
68 |
--------------------------------------------------------------------------------
/datasets/brown.lsi-index.npy:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/chrishokamp/dl4mt_exercises/2ce7f3606cc19fb9a8ed18dff7dcf2995c17cc3f/datasets/brown.lsi-index.npy
--------------------------------------------------------------------------------
/datasets/brown.word-by-word.normalized.npy:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/chrishokamp/dl4mt_exercises/2ce7f3606cc19fb9a8ed18dff7dcf2995c17cc3f/datasets/brown.word-by-word.normalized.npy
--------------------------------------------------------------------------------
/datasets/brown_pos_dataset.hdf5:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/chrishokamp/dl4mt_exercises/2ce7f3606cc19fb9a8ed18dff7dcf2995c17cc3f/datasets/brown_pos_dataset.hdf5
--------------------------------------------------------------------------------
/datasets/brown_w2v_vectors.npy:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/chrishokamp/dl4mt_exercises/2ce7f3606cc19fb9a8ed18dff7dcf2995c17cc3f/datasets/brown_w2v_vectors.npy
--------------------------------------------------------------------------------
/dl4mt/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/chrishokamp/dl4mt_exercises/2ce7f3606cc19fb9a8ed18dff7dcf2995c17cc3f/dl4mt/__init__.py
--------------------------------------------------------------------------------
/notebooks/data_preparation/prep_pos_corpus.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "code",
5 | "execution_count": 1,
6 | "metadata": {
7 | "collapsed": false
8 | },
9 | "outputs": [],
10 | "source": [
11 | "%matplotlib inline\n",
12 | "%load_ext autoreload\n",
13 | "%autoreload 2\n",
14 | "\n",
15 | "from __future__ import division, print_function\n",
16 | "import codecs\n",
17 | "import os\n",
18 | "import cPickle\n",
19 | "from collections import OrderedDict, Counter\n",
20 | "\n",
21 | "import numpy as np\n",
22 | "import nltk\n",
23 | "import h5py\n",
24 | "from fuel.datasets import IndexableDataset, H5PYDataset\n"
25 | ]
26 | },
27 | {
28 | "cell_type": "code",
29 | "execution_count": 2,
30 | "metadata": {
31 | "collapsed": true
32 | },
33 | "outputs": [],
34 | "source": [
35 | "def extract_windows(seq, window_size):\n",
36 | " padded_seq = [u'_START_']*window_size + seq + [u'_END_']*window_size\n",
37 | " return [padded_seq[i-window_size:i+window_size+1] for i in range(window_size, window_size+len(seq))]"
38 | ]
39 | },
40 | {
41 | "cell_type": "code",
42 | "execution_count": 3,
43 | "metadata": {
44 | "collapsed": false
45 | },
46 | "outputs": [],
47 | "source": [
48 | "MAX_SENTENCES = 10000\n",
49 | "WINDOW_SIZE = 2\n",
50 | "UNKNOWN_THRESHOLD = 2\n",
51 | "UNKNOWN_TOKEN = u'_UNK_'\n",
52 | "\n",
53 | "tagged_sentences = nltk.corpus.brown.tagged_sents(tagset='universal')\n",
54 | "\n",
55 | "words_by_line, tags_by_line = zip(*[zip(*sen)\n",
56 | " for sen in list(tagged_sentences)[:MAX_SENTENCES]])\n",
57 | "\n",
58 | "# let's make a dictionary of words, and replace words which occur in < threshold sentences with u'_UNK_'\n",
59 | "word_counts = Counter([w for line in words_by_line for w in line])\n",
60 | "known_toks = set([k for k,v in word_counts.items() if v >= UNKNOWN_THRESHOLD])\n",
61 | "\n",
62 | "def map_token(tok):\n",
63 | " if tok in known_toks:\n",
64 | " return tok\n",
65 | " return UNKNOWN_TOKEN\n",
66 | "\n",
67 | "words_by_line = [[map_token(w) for w in line] for line in words_by_line]\n",
68 | "\n",
69 | "word_windows, tags = zip(*[(word, tag) for word_seq, tags in zip(words_by_line, tags_by_line) \n",
70 | " for word, tag in zip(extract_windows(list(word_seq), window_size=WINDOW_SIZE), tags)])"
71 | ]
72 | },
73 | {
74 | "cell_type": "code",
75 | "execution_count": 4,
76 | "metadata": {
77 | "collapsed": false
78 | },
79 | "outputs": [],
80 | "source": [
81 |     "# cool, let's make our train, dev, and test splits\n",
82 | "num_instances = len(word_windows)\n",
83 | "DEV_FRACTION = 0.2\n",
84 | "TEST_FRACTION = 0.2\n",
85 | "\n",
86 | "dev_size = int(np.floor(num_instances * DEV_FRACTION))\n",
87 | "DEV_SPLIT = num_instances - dev_size\n",
88 | "TEST_SPLIT = num_instances - int(dev_size + np.floor(num_instances * TEST_FRACTION))\n",
89 | "\n",
90 | "# X_train, y_train = zip(*all_instances[:-TEST_SPLIT])\n",
91 | "# X_test, y_test = zip(*all_instances[-TEST_SPLIT:-DEV_SPLIT])\n",
92 | "# X_dev, y_dev = zip(*all_instances[-DEV_SPLIT:])"
93 | ]
94 | },
95 | {
96 | "cell_type": "code",
97 | "execution_count": 5,
98 | "metadata": {
99 | "collapsed": false
100 | },
101 | "outputs": [
102 | {
103 | "data": {
104 | "text/plain": [
105 | "131862"
106 | ]
107 | },
108 | "execution_count": 5,
109 | "metadata": {},
110 | "output_type": "execute_result"
111 | }
112 | ],
113 | "source": [
114 | "# split our corpus and get the set of tokens which occur in the training data\n",
115 | "# do the same thing for tags\n",
116 | "# this is a necessary preprocessing step in most machine learning scenarios, since we need to be able\n",
117 | "# to handle the cases where the test data contains things that we never saw during training\n",
118 | "\n",
119 | "training_instances = word_windows[:TEST_SPLIT]\n",
120 | "training_tags = set(tags[:TEST_SPLIT]).union(set([UNKNOWN_TOKEN]))\n",
121 | "training_tokens = set([w for l in training_instances for w in l]).union(set([UNKNOWN_TOKEN]))\n",
122 | "len(training_instances)"
123 | ]
124 | },
125 | {
126 | "cell_type": "code",
127 | "execution_count": 6,
128 | "metadata": {
129 | "collapsed": true
130 | },
131 | "outputs": [],
132 | "source": [
133 | "# make a words dict, its reverse, and a tags dict and its reverse\n",
134 | "idx2word = dict(enumerate(training_tokens))\n",
135 | "word2idx = {v:k for k,v in idx2word.items()}\n",
136 | "\n",
137 | "idx2tag = dict(enumerate(set([t for l in tags_by_line for t in l])))\n",
138 | "tag2idx = {v:k for k,v in idx2tag.items()}\n",
139 | "\n",
140 | "def map_to_index(tok, index):\n",
141 | " if tok in index:\n",
142 | " return index[tok]\n",
143 | " else:\n",
144 | " return index[UNKNOWN_TOKEN]"
145 | ]
146 | },
147 | {
148 | "cell_type": "code",
149 | "execution_count": 7,
150 | "metadata": {
151 | "collapsed": false
152 | },
153 | "outputs": [],
154 | "source": [
155 |     "# This should end up in dl4mt_exercises/datasets/\n",
156 | "DATASET_LOCATION = '../../datasets/'\n",
157 | "try:\n",
158 | " os.mkdir(DATASET_LOCATION)\n",
159 | "except OSError:\n",
160 | " pass"
161 | ]
162 | },
163 | {
164 | "cell_type": "code",
165 | "execution_count": 8,
166 | "metadata": {
167 | "collapsed": false
168 | },
169 | "outputs": [],
170 | "source": [
171 | "# persist the indices \n",
172 | "corpus_indices = {'idx2word': idx2word, 'word2idx': word2idx, 'idx2tag': idx2tag, 'tag2idx': tag2idx}\n",
173 | "\n",
174 | "with open(os.path.join(DATASET_LOCATION, 'brown_pos_dataset.indices'), 'wb') as indices_file:\n",
175 | " cPickle.dump(corpus_indices, indices_file)"
176 | ]
177 | },
178 | {
179 | "cell_type": "code",
180 | "execution_count": 9,
181 | "metadata": {
182 | "collapsed": false
183 | },
184 | "outputs": [],
185 | "source": [
186 | "# now create the fuel dataset\n",
187 | "# TODO: add the original data in case we need to map back\n",
188 | "# TODO: right now we are loading the original data again in later notebooks \n",
189 | "# -- all of that preparation should be done here\n",
190 | "\n",
191 | "iwords = [[map_to_index(w, word2idx) for w in l] for l in word_windows]\n",
192 | "itags = [map_to_index(t, tag2idx) for t in tags]\n",
193 | "\n",
194 | "DATASET_NAME = 'brown_pos_dataset.hdf5'\n",
195 | "DATASET_PATH = os.path.join(DATASET_LOCATION, DATASET_NAME)\n",
196 | "\n",
197 | "f = h5py.File(DATASET_PATH, mode='w')\n",
198 | "\n",
199 | "instances = f.create_dataset('instances', (num_instances, WINDOW_SIZE*2+1), dtype='uint32')\n",
200 | "instances[...] = np.array(iwords)\n",
201 | "\n",
202 | "targets = f.create_dataset('targets', (num_instances, 1), dtype='uint32')\n",
203 | "targets[...] = np.array(itags).reshape((num_instances, 1))\n",
204 | "\n",
205 | "instances.dims[0].label = 'batch'\n",
206 | "instances.dims[1].label = 'features'\n",
207 | "\n",
208 | "targets.dims[0].label = 'batch'\n",
209 | "targets.dims[1].label = 'index'\n",
210 | "\n",
211 | "split_dict = {\n",
212 | " 'train': {'instances': (0, TEST_SPLIT), 'targets': (0, TEST_SPLIT)},\n",
213 | " 'test' : {'instances': (TEST_SPLIT, DEV_SPLIT), 'targets': (TEST_SPLIT, DEV_SPLIT)},\n",
214 | " 'dev' : {'instances': (DEV_SPLIT, num_instances), 'targets': (DEV_SPLIT, num_instances)}\n",
215 | "}\n",
216 | "\n",
217 | "f.attrs['split'] = H5PYDataset.create_split_array(split_dict)\n",
218 | "f.flush()\n",
219 | "f.close()"
220 | ]
221 | },
222 | {
223 | "cell_type": "code",
224 | "execution_count": 10,
225 | "metadata": {
226 | "collapsed": true
227 | },
228 | "outputs": [],
229 | "source": [
230 | "# NOW SOME QUICK TESTING"
231 | ]
232 | },
233 | {
234 | "cell_type": "code",
235 | "execution_count": 11,
236 | "metadata": {
237 | "collapsed": false
238 | },
239 | "outputs": [
240 | {
241 | "name": "stdout",
242 | "output_type": "stream",
243 | "text": [
244 | "131862\n",
245 | "43954\n",
246 | "43954\n"
247 | ]
248 | }
249 | ],
250 | "source": [
251 | "train_set = H5PYDataset(DATASET_PATH, which_sets=('train',))\n",
252 | "print(train_set.num_examples)\n",
253 | "\n",
254 | "test_set = H5PYDataset(DATASET_PATH, which_sets=('test',))\n",
255 | "print(test_set.num_examples)\n",
256 | "\n",
257 | "dev_set = H5PYDataset(DATASET_PATH, which_sets=('dev',))\n",
258 | "print(dev_set.num_examples)\n",
259 | "\n",
260 | "in_memory_train = H5PYDataset(\n",
261 | " DATASET_PATH, which_sets=('train',),\n",
262 | " sources=['instances', 'targets'], load_in_memory=True)\n",
263 | "\n",
264 | "# train_X, train_y = in_memory_train.data_sources"
265 | ]
266 | },
267 | {
268 | "cell_type": "code",
269 | "execution_count": 12,
270 | "metadata": {
271 | "collapsed": false
272 | },
273 | "outputs": [],
274 | "source": [
275 | "from fuel.streams import DataStream\n",
276 | "from fuel.schemes import ConstantScheme, SequentialScheme\n",
277 | "from fuel.transformers import Batch\n",
278 | "\n",
279 | "stream = DataStream.default_stream(in_memory_train,\n",
280 | " iteration_scheme=SequentialScheme(in_memory_train.num_examples, 500))"
281 | ]
282 | },
283 | {
284 | "cell_type": "code",
285 | "execution_count": 18,
286 | "metadata": {
287 | "collapsed": false
288 | },
289 | "outputs": [],
290 | "source": [
291 | "# to iterate over the examples, you would do something like:\n",
292 | "\n",
293 | "# test_iter = stream.get_epoch_iterator()\n",
294 | "# for e in list(test_iter):\n",
295 | "# print(e)"
296 | ]
297 | },
298 | {
299 | "cell_type": "code",
300 | "execution_count": null,
301 | "metadata": {
302 | "collapsed": false
303 | },
304 | "outputs": [],
305 | "source": [
306 | "# prep a corpus for POS tagging with DNNs -- use the Brown corpus from NLTK\n",
307 | "\n",
308 | "# the baseline taggers are window-based\n",
309 | "# data prep\n",
310 | "# separate training, dev, and test datasets\n",
311 | "\n",
312 | "\n",
313 | "# training pipeline\n",
314 | "# for every training sentence, segment into windows\n",
315 | "# \n",
316 | "\n",
317 | "# prediction pipeline:\n",
318 | "# input sentence\n",
319 | "# tokenize\n",
320 | "# pad left and right, then extract the windows for each token (parameterized by window size)\n",
321 | "# total vector width is feats*(window_size*2+1)\n",
322 | "\n"
323 | ]
324 | }
325 | ],
326 | "metadata": {
327 | "kernelspec": {
328 | "display_name": "Python 2",
329 | "language": "python",
330 | "name": "python2"
331 | },
332 | "language_info": {
333 | "codemirror_mode": {
334 | "name": "ipython",
335 | "version": 2
336 | },
337 | "file_extension": ".py",
338 | "mimetype": "text/x-python",
339 | "name": "python",
340 | "nbconvert_exporter": "python",
341 | "pygments_lexer": "ipython2",
342 | "version": "2.7.10"
343 | }
344 | },
345 | "nbformat": 4,
346 | "nbformat_minor": 0
347 | }
348 |
--------------------------------------------------------------------------------
/notebooks/data_preparation/word_window_vectors.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "code",
5 | "execution_count": 58,
6 | "metadata": {
7 | "collapsed": false
8 | },
9 | "outputs": [
10 | {
11 | "name": "stdout",
12 | "output_type": "stream",
13 | "text": [
14 | "The autoreload extension is already loaded. To reload it, use:\n",
15 | " %reload_ext autoreload\n"
16 | ]
17 | }
18 | ],
19 | "source": [
20 | "# use windows from hdf5 dataset -- extract the distributions around words to build our baseline representation\n",
21 | "# the idea is to encode each word by the words which occur around it\n",
22 | "# this should give us a good idea of the distributional tendencies of this word\n",
23 | "\n",
24 | "%load_ext autoreload\n",
25 | "%autoreload 2\n",
26 | "\n",
27 | "import os\n",
28 | "import cPickle\n",
29 | "from collections import Counter, defaultdict\n",
30 | "from itertools import chain\n",
31 | "\n",
32 | "import numpy as np\n",
33 | "from fuel.datasets import H5PYDataset"
34 | ]
35 | },
36 | {
37 | "cell_type": "code",
38 | "execution_count": 67,
39 | "metadata": {
40 | "collapsed": false
41 | },
42 | "outputs": [],
43 | "source": [
44 | "# the constants that we'll need in this notebook\n",
45 | "\n",
46 | "# let's find the N most frequent tokens in our corpus \n",
47 | "TOKEN_FREQ_CUTOFF = 25 # this is an important hyperparameter because it controls the sparsity \n",
48 | "# of the window representation\n",
49 | "TOKS_IN_WINDOW = 4 # window size - 1 because we delete the middle token\n",
50 | "\n",
51 | "DATASET_LOCATION = '../../datasets/' # the directory where we store datasets\n",
52 | "\n",
53 | "# the pos dataset consists of windows around words\n",
54 | "POS_DATASET_NAME = 'brown_pos_dataset.hdf5'\n",
55 | "POS_DATASET_PATH = os.path.join(DATASET_LOCATION, POS_DATASET_NAME)\n",
56 | "\n",
57 | "CORPUS_INDICES = 'brown_pos_dataset.indices' \n",
58 | "VECTOR_INDEX = 'brown.word-by-word.normalized.npy' # the name of the index we'll create\n",
59 | "\n",
60 | "# Indexes for mapping words <--> ints\n",
61 | "with open(os.path.join(DATASET_LOCATION, CORPUS_INDICES)) as indices_file:\n",
62 | " corpus_indices = cPickle.load(indices_file)\n",
63 | "\n",
64 | "UNKNOWN_TOKEN = u'_UNK_' \n",
65 | "UNKNOWN_TOKEN_IDX = corpus_indices['word2idx'][UNKNOWN_TOKEN]\n",
66 | "\n",
67 | "train_X, train_y = H5PYDataset(\n",
68 | " POS_DATASET_PATH, which_sets=('train',),\n",
69 | " sources=['instances', 'targets'], load_in_memory=True).data_sources"
70 | ]
71 | },
72 | {
73 | "cell_type": "code",
74 | "execution_count": 68,
75 | "metadata": {
76 | "collapsed": false
77 | },
78 | "outputs": [],
79 | "source": [
80 | "token_counts = Counter(chain(*[i for i in train_X]))\n",
81 | "vocab_size = len(token_counts)"
82 | ]
83 | },
84 | {
85 | "cell_type": "code",
86 | "execution_count": 69,
87 | "metadata": {
88 | "collapsed": true
89 | },
90 | "outputs": [],
91 | "source": [
92 | "top_N_tokens = set(k for k,v in token_counts.most_common()[:TOKEN_FREQ_CUTOFF])\n",
93 | "\n",
94 | "# map to new 0-based indices\n",
95 | "top_N_index = {old_idx: new_idx for new_idx,old_idx in enumerate(top_N_tokens)}"
96 | ]
97 | },
98 | {
99 | "cell_type": "code",
100 | "execution_count": 70,
101 | "metadata": {
102 | "collapsed": true
103 | },
104 | "outputs": [],
105 | "source": [
106 |     "# map a token's original index to its top-N index, falling back to the unknown token's index\n",
107 | "def idx_or_unk(index, token):\n",
108 | " if token in index:\n",
109 | " return index[token]\n",
110 | " return index[UNKNOWN_TOKEN_IDX]"
111 | ]
112 | },
113 | {
114 | "cell_type": "code",
115 | "execution_count": 71,
116 | "metadata": {
117 | "collapsed": false
118 | },
119 | "outputs": [],
120 | "source": [
121 | "# now initialize matrix of zeros\n",
122 | "# iterate through the window corpus and increment counts\n",
123 | "# in the slice of the features corresponding to the token's position in the window\n",
124 | "num_rows = vocab_size\n",
125 | "\n",
126 | "num_cols = TOKS_IN_WINDOW * TOKEN_FREQ_CUTOFF\n",
127 | "\n",
128 | "word_by_word_index = np.zeros((num_rows, num_cols), dtype=\"float32\")\n",
129 | "\n",
130 | "for instance in train_X:\n",
131 |     "    pos_token = instance[2] # TODO: parameterize getting the middle of the window -- this assumes a 5-token window (WINDOW_SIZE=2)\n",
132 | " new_idxs = [idx_or_unk(top_N_index, tok) for tok in instance]\n",
133 | " # delete the token itself\n",
134 | " del new_idxs[2]\n",
135 | " \n",
136 | " # now iterate over idx and create the one-hot encoding at the right place\n",
137 | " for i,idx in enumerate(new_idxs):\n",
138 | " word_by_word_index[pos_token, idx + (i * TOKEN_FREQ_CUTOFF)] += 1\n"
139 | ]
140 | },
141 | {
142 | "cell_type": "code",
143 | "execution_count": 72,
144 | "metadata": {
145 | "collapsed": false
146 | },
147 | "outputs": [
148 | {
149 | "data": {
150 | "text/plain": [
151 | "[u'_UNK_',\n",
152 | " u'of',\n",
153 | " u'in',\n",
154 | " u',',\n",
155 | " u'to',\n",
156 | " u'on',\n",
157 | " u'for',\n",
158 | " u'at',\n",
159 | " u'and',\n",
160 | " u'that',\n",
161 | " u'with',\n",
162 | " u'by',\n",
163 | " u'is',\n",
164 | " u'as',\n",
165 | " u'was',\n",
166 | " u'be',\n",
167 | " u'_START_',\n",
168 | " u'``',\n",
169 | " u\"''\",\n",
170 | " u'_END_',\n",
171 | " u'the',\n",
172 | " u'The',\n",
173 | " u'he',\n",
174 | " u'a',\n",
175 | " u'.']"
176 | ]
177 | },
178 | "execution_count": 72,
179 | "metadata": {},
180 | "output_type": "execute_result"
181 | }
182 | ],
183 | "source": [
184 | "# normalize counts by row to 0-1\n",
185 | "word_by_word_index = np.array([row / float(row.max()) for row in word_by_word_index]).astype('float32')\n",
186 | "\n",
187 | "# TESTING\n",
188 |     "# sanity check: the index of 'the' is 7524\n",
189 | "reverse_top_N = {v:k for k,v in top_N_index.items()}\n",
190 | "\n",
191 | "max_idxs = word_by_word_index[7524][TOKEN_FREQ_CUTOFF:TOKEN_FREQ_CUTOFF*2].argsort()[::-1]\n",
192 | "[corpus_indices['idx2word'][reverse_top_N[idx]] for idx in max_idxs]"
193 | ]
194 | },
195 | {
196 | "cell_type": "code",
197 | "execution_count": 73,
198 | "metadata": {
199 | "collapsed": true
200 | },
201 | "outputs": [],
202 | "source": [
203 | "# TODO: implement real-valued representation -- right now the autoencoder only works with binary\n",
204 | "# TODO: non-autoencoders can use real-valued features\n",
205 | "word_by_word_index[word_by_word_index.nonzero()] = 1"
206 | ]
207 | },
208 | {
209 | "cell_type": "code",
210 | "execution_count": 74,
211 | "metadata": {
212 | "collapsed": true
213 | },
214 | "outputs": [],
215 | "source": [
216 | "# persist the new index\n",
217 | "with open(os.path.join(DATASET_LOCATION, VECTOR_INDEX), 'wb') as outfile:\n",
218 | " np.save(outfile, word_by_word_index)"
219 | ]
220 | },
221 | {
222 | "cell_type": "code",
223 | "execution_count": null,
224 | "metadata": {
225 | "collapsed": true
226 | },
227 | "outputs": [],
228 | "source": []
229 | }
230 | ],
231 | "metadata": {
232 | "kernelspec": {
233 | "display_name": "Python 2",
234 | "language": "python",
235 | "name": "python2"
236 | },
237 | "language_info": {
238 | "codemirror_mode": {
239 | "name": "ipython",
240 | "version": 2
241 | },
242 | "file_extension": ".py",
243 | "mimetype": "text/x-python",
244 | "name": "python",
245 | "nbconvert_exporter": "python",
246 | "pygments_lexer": "ipython2",
247 | "version": "2.7.10"
248 | }
249 | },
250 | "nbformat": 4,
251 | "nbformat_minor": 0
252 | }
253 |
--------------------------------------------------------------------------------
/notebooks/day0/Day 0 - Warming up to Python.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "metadata": {},
6 | "source": [
7 |     "Congratulations!!! "
8 | ]
9 | },
10 | {
11 | "cell_type": "markdown",
12 | "metadata": {},
13 | "source": [
14 |     "If you have made it to this point and opened the IPython notebook, you have successfully installed the tools required for DL4MT. \n",
15 | "To test that you can import all the tools that you've just downloaded, click on the next box and then press `shift` + `enter`"
16 | ]
17 | },
18 | {
19 | "cell_type": "code",
20 | "execution_count": null,
21 | "metadata": {
22 | "collapsed": false
23 | },
24 | "outputs": [],
25 | "source": [
26 | "import h5py, cython, pydot, numpy, scipy, theano, fuel, sklearn, nltk, gensim"
27 | ]
28 | },
29 | {
30 | "cell_type": "markdown",
31 | "metadata": {},
32 | "source": [
33 |     "You should not see any error when you press `shift` + `enter` on the previous cell. \n",
34 |     "\n",
35 |     "In `ipython-notebook`, every \"clickable\" box is referred to as a cell, and you can switch between a text cell, where you can write text like this, and a code cell, where you can execute Python code on the fly (as above).\n",
36 | "\n",
37 | "If you're already a fluent Parseltongue (Python) coder, please go ahead with \n",
38 | "\n",
39 | " - the [summerschool exercises from MILA](https://github.com/mila-udem/summerschool2015) or\n",
40 | " - take a look at the nice introduction to [Neural Machine Translation](http://devblogs.nvidia.com/parallelforall/introduction-neural-machine-translation-with-gpus/) from Prof. Kyunghyun Cho or\n",
41 | " - read more about the [Unreasonable Effectiveness of Recurrent Neural Nets](http://karpathy.github.io/2015/05/21/rnn-effectiveness/) or\n",
42 | " - enjoy the nice [Neural Net NLP primer](http://u.cs.biu.ac.il/~yogo/nnlp.pdf) from Yoav Goldberg or\n",
43 | " - hone your snake charming, please do take a look at [Fluent Python](http://shop.oreilly.com/product/0636920032519.do) \n",
44 | " \n",
45 | " \n",
46 | "Pythonic Fun\n",
47 | "=====\n",
48 | " \n",
49 |     "The following cells show a few common Pythonic idioms that you will see over the next few days. \n",
50 | "To proceed, click on each ipython notebook code cell and press `shift` + `enter`...\n",
51 | "\n",
52 | "\n",
53 | "File I/O\n",
54 | "====\n",
55 | "\n",
56 |     "To open a text file in Python, it's best to use the [`io` module](https://docs.python.org/2/library/io.html) and the `with` statement; you can also specify the encoding for your file. Note that when using `with`, the file is closed automatically once you leave the `with` block, so there is no need to close it explicitly.\n",
57 | "\n",
58 | "\n",
59 | "For example:"
60 | ]
61 | },
62 | {
63 | "cell_type": "code",
64 | "execution_count": null,
65 | "metadata": {
66 | "collapsed": false
67 | },
68 | "outputs": [],
69 | "source": [
70 | "import io # This is the io module.\n",
71 | "# This is the with statement that controls the file I/O\n",
72 | "with io.open('test.txt', 'r', encoding='utf8') as fin: \n",
73 | " for line in fin: # Iterates through each line in the file.\n",
74 | " print line # prints the line to console."
75 | ]
76 | },
77 | {
78 | "cell_type": "markdown",
79 | "metadata": {},
80 | "source": [
81 | "Pickle (not Sauerkraut)\n",
82 | "----\n",
83 | "\n",
84 |     "Pickling lets you save a Python object in binary form and read it back later without the hassle of packing and unpacking it into a specific file format. The [`pickle` module](https://docs.python.org/2/library/pickle.html) is a nifty way to save your trained models and load them in the future. "
85 | ]
86 | },
87 | {
88 | "cell_type": "code",
89 | "execution_count": null,
90 | "metadata": {
91 | "collapsed": false
92 | },
93 | "outputs": [],
94 | "source": [
95 | "import io\n",
96 |     "import cPickle as pickle # Note that in python3, you simply do `import pickle`\n",
97 | "from collections import Counter # This is a counter object, see https://docs.python.org/2/library/collections.html\n",
98 | "\n",
99 | "word_counts = Counter() # Let's initialize a word counter object to keep count of words in a file.\n",
100 | "with io.open('test.txt', 'r', encoding='utf8') as fin:\n",
101 | " text = fin.read() # Reads the whole file as a single string object.\n",
102 | " text = text.split() # Let's split the text into a list of words.\n",
103 |     "    word_counts.update(text)\n",
104 | " \n",
105 | "with io.open('wordcounts.pk', 'wb') as fout: # Note that 'wb' means a file for writing objects as binary objects.\n",
106 | " pickle.dump(word_counts, fout) # Dumps the *word_counts* object into a pickle file named 'wordcounts.pk'"
107 | ]
108 | },
109 | {
110 | "cell_type": "markdown",
111 | "metadata": {},
112 | "source": [
113 |     "Now you should see a new file `wordcounts.pk` in the directory where this IPython notebook resides. You can now easily recover the `word_counts` object by loading the pickled file rather than recounting from the text file:"
114 | ]
115 | },
116 | {
117 | "cell_type": "code",
118 | "execution_count": null,
119 | "metadata": {
120 | "collapsed": false
121 | },
122 | "outputs": [],
123 | "source": [
124 | "import io\n",
125 | "import cPickle as pickle\n",
126 | "\n",
127 | "with io.open('wordcounts.pk', 'rb') as fin: # Opens a binary file with the 'rb' parameter/flag.\n",
128 | " word_counts = pickle.load(fin)\n",
129 | " \n",
130 | "for word, count in word_counts.items(): # Iterates through the newly loaded *word_counts* object.\n",
131 | " print word, count # prints the word and its count."
132 | ]
133 | },
134 | {
135 | "cell_type": "markdown",
136 | "metadata": {},
137 | "source": [
138 |     "Some NLP (at last)\n",
139 | "====\n",
140 | "\n",
141 |     "Let's get down to some NLP work, given our knowledge of pickles and file I/O in Python. First, let's start with the corpus access, Part-of-Speech (POS) tagging, and tokenization tools that `NLTK` provides:"
142 | ]
143 | },
144 | {
145 | "cell_type": "code",
146 | "execution_count": null,
147 | "metadata": {
148 | "collapsed": true
149 | },
150 | "outputs": [],
151 | "source": [
152 | "import io\n",
153 | "\n",
154 |     "from nltk.corpus import brown # This is the tagged Brown corpus (a version of it is also included in the Penn Treebank).\n",
155 |     "from nltk import word_tokenize, sent_tokenize, pos_tag # The default sentence tokenizer, word tokenizer, and POS tagger from NLTK\n"
156 | ]
157 | },
158 | {
159 | "cell_type": "code",
160 | "execution_count": null,
161 | "metadata": {
162 | "collapsed": false
163 | },
164 | "outputs": [],
165 | "source": [
166 | "#############################################################\n",
167 | "# TODO: Please fill in the code to read the `test.txt` file.\n",
168 | "# (remember to press `shift` + `enter` after the code)\n",
169 | "#############################################################\n",
170 | "\n",
171 | "with io.open('test.txt', 'r', encoding='utf8') as fin:\n",
172 | " text = fin.read()\n",
173 | " print text\n",
174 | "\n",
175 | " \n",
176 | "###########################\n",
177 | "# Answer: DON'T PEEK!!!!\n",
178 | "###########################\n",
179 | "#with io.open('test.txt', 'r', encoding='utf8') as fin:\n",
180 | "# text = fin.read()\n"
181 | ]
182 | },
183 | {
184 | "cell_type": "markdown",
185 | "metadata": {},
186 | "source": [
187 | "Now that you have read the file, let's try to tokenize the file:"
188 | ]
189 | },
190 | {
191 | "cell_type": "code",
192 | "execution_count": null,
193 | "metadata": {
194 | "collapsed": false
195 | },
196 | "outputs": [],
197 | "source": [
198 | "for sentence in sent_tokenize(text):\n",
199 | " for word in word_tokenize(sentence):\n",
200 | " print word\n",
201 | " print '# END of sentence'"
202 | ]
203 | },
204 | {
205 | "cell_type": "code",
206 | "execution_count": null,
207 | "metadata": {
208 | "collapsed": false
209 | },
210 | "outputs": [],
211 | "source": [
212 | "for sentence in sent_tokenize(text):\n",
213 | " for word, tag in pos_tag(word_tokenize(sentence)):\n",
214 | " print word, tag\n",
215 | " print '########'"
216 | ]
217 | },
218 | {
219 | "cell_type": "code",
220 | "execution_count": null,
221 | "metadata": {
222 | "collapsed": false
223 | },
224 | "outputs": [],
225 | "source": [
226 | "# To get the tagged words into a pickle-able object, \n",
227 |     "# you can keep the whole tagged text as a \n",
228 |     "# list of lists of (word, tag) tuples using a list comprehension, \n",
229 | "# see https://docs.python.org/2/tutorial/datastructures.html#list-comprehensions\n",
230 | "\n",
231 | "tagged_text = [pos_tag(word_tokenize(sentence)) for sentence in sent_tokenize(text)]\n",
232 | "print tagged_text"
233 | ]
234 | },
235 | {
236 | "cell_type": "code",
237 | "execution_count": null,
238 | "metadata": {
239 | "collapsed": false
240 | },
241 | "outputs": [],
242 | "source": [
243 | "##########################################################\n",
244 | "# Now try to pickle the *tagged_text* into a pickle file,\n",
245 | "# and then try to open the newly pickled file.\n",
246 | "##########################################################\n",
247 | "\n",
248 | "\n",
249 | "\n",
250 | "\n",
251 | "###########################\n",
252 | "# Answer: DON'T PEEK!!!!\n",
253 | "###########################\n",
254 | "\n",
255 | "#import pickle\n",
256 | "#with io.open('tagged_text.pk', 'wb') as fout:\n",
257 | "# tagged_text = [pos_tag(word_tokenize(sentence)) for sentence in sent_tokenize(text)]\n",
258 | "# pickle.dump(tagged_text, fout)\n",
259 | "# \n",
260 | "#with io.open('tagged_text.pk', 'rb') as fin:\n",
261 | "# pickled_tagged_text = pickle.load(fin)\n",
262 | "# \n",
263 | "#print pickled_tagged_text\n",
264 | "\n"
265 | ]
266 | },
267 | {
268 | "cell_type": "markdown",
269 | "metadata": {},
270 | "source": [
271 | "Some More NLP\n",
272 | "====\n",
273 | "\n",
274 |     "Alright, that's how NLTK tokenizes and POS-tags a corpus. Now let's skip all the hard work of tagging a corpus ourselves and simply read a pre-tagged one from NLTK:"
275 | ]
276 | },
277 | {
278 | "cell_type": "code",
279 | "execution_count": null,
280 | "metadata": {
281 | "collapsed": false
282 | },
283 | "outputs": [],
284 | "source": [
285 | "from nltk.corpus import brown\n",
286 | "tagged_text = brown.tagged_sents()\n",
287 | "print tagged_text # Note: this is the same data structure as the pickled tagged_text above."
288 | ]
289 | },
290 | {
291 | "cell_type": "code",
292 | "execution_count": null,
293 | "metadata": {
294 | "collapsed": true
295 | },
296 | "outputs": [],
297 | "source": []
298 | }
299 | ],
300 | "metadata": {
301 | "kernelspec": {
302 | "display_name": "Python 2",
303 | "language": "python",
304 | "name": "python2"
305 | },
306 | "language_info": {
307 | "codemirror_mode": {
308 | "name": "ipython",
309 | "version": 2
310 | },
311 | "file_extension": ".py",
312 | "mimetype": "text/x-python",
313 | "name": "python",
314 | "nbconvert_exporter": "python",
315 | "pygments_lexer": "ipython2",
316 | "version": "2.7.6"
317 | }
318 | },
319 | "nbformat": 4,
320 | "nbformat_minor": 0
321 | }
322 |
--------------------------------------------------------------------------------
/notebooks/day0/test.txt:
--------------------------------------------------------------------------------
1 | This is a test file. I am a foo bar text file. You have just open and read me with python...
2 |
--------------------------------------------------------------------------------
/notebooks/day1/datasets/brown.word-by-word.normalized.npy:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/chrishokamp/dl4mt_exercises/2ce7f3606cc19fb9a8ed18dff7dcf2995c17cc3f/notebooks/day1/datasets/brown.word-by-word.normalized.npy
--------------------------------------------------------------------------------
/notebooks/day1/datasets/brown_pos_dataset.hdf5:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/chrishokamp/dl4mt_exercises/2ce7f3606cc19fb9a8ed18dff7dcf2995c17cc3f/notebooks/day1/datasets/brown_pos_dataset.hdf5
--------------------------------------------------------------------------------
/notebooks/day1/datasets/brown_w2v_model:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/chrishokamp/dl4mt_exercises/2ce7f3606cc19fb9a8ed18dff7dcf2995c17cc3f/notebooks/day1/datasets/brown_w2v_model
--------------------------------------------------------------------------------
/notebooks/day1/datasets/mnist2500_labels.txt:
--------------------------------------------------------------------------------
1 | 5.0000000e+00
2 | 0.0000000e+00
3 | 4.0000000e+00
4 | 1.0000000e+00
5 | 9.0000000e+00
6 | 2.0000000e+00
7 | 1.0000000e+00
8 | 3.0000000e+00
9 | 1.0000000e+00
10 | 4.0000000e+00
11 | 3.0000000e+00
12 | 5.0000000e+00
13 | 3.0000000e+00
14 | 6.0000000e+00
15 | 1.0000000e+00
16 | 7.0000000e+00
17 | 2.0000000e+00
18 | 8.0000000e+00
19 | 6.0000000e+00
20 | 9.0000000e+00
21 | 4.0000000e+00
22 | 0.0000000e+00
23 | 9.0000000e+00
24 | 1.0000000e+00
25 | 1.0000000e+00
26 | 2.0000000e+00
27 | 4.0000000e+00
28 | 3.0000000e+00
29 | 2.0000000e+00
30 | 7.0000000e+00
31 | 3.0000000e+00
32 | 8.0000000e+00
33 | 6.0000000e+00
34 | 9.0000000e+00
35 | 0.0000000e+00
36 | 5.0000000e+00
37 | 6.0000000e+00
38 | 0.0000000e+00
39 | 7.0000000e+00
40 | 6.0000000e+00
41 | 1.0000000e+00
42 | 8.0000000e+00
43 | 7.0000000e+00
44 | 9.0000000e+00
45 | 3.0000000e+00
46 | 9.0000000e+00
47 | 8.0000000e+00
48 | 5.0000000e+00
49 | 9.0000000e+00
50 | 3.0000000e+00
51 | 3.0000000e+00
52 | 0.0000000e+00
53 | 7.0000000e+00
54 | 4.0000000e+00
55 | 9.0000000e+00
56 | 8.0000000e+00
57 | 0.0000000e+00
58 | 9.0000000e+00
59 | 4.0000000e+00
60 | 1.0000000e+00
61 | 4.0000000e+00
62 | 4.0000000e+00
63 | 6.0000000e+00
64 | 0.0000000e+00
65 | 4.0000000e+00
66 | 5.0000000e+00
67 | 6.0000000e+00
68 | 1.0000000e+00
69 | 0.0000000e+00
70 | 0.0000000e+00
71 | 1.0000000e+00
72 | 7.0000000e+00
73 | 1.0000000e+00
74 | 6.0000000e+00
75 | 3.0000000e+00
76 | 0.0000000e+00
77 | 2.0000000e+00
78 | 1.0000000e+00
79 | 1.0000000e+00
80 | 7.0000000e+00
81 | 9.0000000e+00
82 | 0.0000000e+00
83 | 2.0000000e+00
84 | 6.0000000e+00
85 | 7.0000000e+00
86 | 8.0000000e+00
87 | 3.0000000e+00
88 | 9.0000000e+00
89 | 0.0000000e+00
90 | 4.0000000e+00
91 | 6.0000000e+00
92 | 7.0000000e+00
93 | 4.0000000e+00
94 | 6.0000000e+00
95 | 8.0000000e+00
96 | 0.0000000e+00
97 | 7.0000000e+00
98 | 8.0000000e+00
99 | 3.0000000e+00
100 | 1.0000000e+00
101 | 5.0000000e+00
102 | 7.0000000e+00
103 | 1.0000000e+00
104 | 7.0000000e+00
105 | 1.0000000e+00
106 | 1.0000000e+00
107 | 6.0000000e+00
108 | 3.0000000e+00
109 | 0.0000000e+00
110 | 2.0000000e+00
111 | 9.0000000e+00
112 | 3.0000000e+00
113 | 1.0000000e+00
114 | 1.0000000e+00
115 | 0.0000000e+00
116 | 4.0000000e+00
117 | 9.0000000e+00
118 | 2.0000000e+00
119 | 0.0000000e+00
120 | 0.0000000e+00
121 | 2.0000000e+00
122 | 0.0000000e+00
123 | 2.0000000e+00
124 | 7.0000000e+00
125 | 1.0000000e+00
126 | 8.0000000e+00
127 | 6.0000000e+00
128 | 4.0000000e+00
129 | 1.0000000e+00
130 | 6.0000000e+00
131 | 3.0000000e+00
132 | 4.0000000e+00
133 | 5.0000000e+00
134 | 9.0000000e+00
135 | 1.0000000e+00
136 | 3.0000000e+00
137 | 3.0000000e+00
138 | 8.0000000e+00
139 | 5.0000000e+00
140 | 4.0000000e+00
141 | 7.0000000e+00
142 | 7.0000000e+00
143 | 4.0000000e+00
144 | 2.0000000e+00
145 | 8.0000000e+00
146 | 5.0000000e+00
147 | 8.0000000e+00
148 | 6.0000000e+00
149 | 7.0000000e+00
150 | 3.0000000e+00
151 | 4.0000000e+00
152 | 6.0000000e+00
153 | 1.0000000e+00
154 | 9.0000000e+00
155 | 9.0000000e+00
156 | 6.0000000e+00
157 | 0.0000000e+00
158 | 3.0000000e+00
159 | 7.0000000e+00
160 | 2.0000000e+00
161 | 8.0000000e+00
162 | 2.0000000e+00
163 | 9.0000000e+00
164 | 4.0000000e+00
165 | 4.0000000e+00
166 | 6.0000000e+00
167 | 4.0000000e+00
168 | 9.0000000e+00
169 | 7.0000000e+00
170 | 0.0000000e+00
171 | 9.0000000e+00
172 | 2.0000000e+00
173 | 9.0000000e+00
174 | 5.0000000e+00
175 | 1.0000000e+00
176 | 5.0000000e+00
177 | 9.0000000e+00
178 | 1.0000000e+00
179 | 2.0000000e+00
180 | 3.0000000e+00
181 | 2.0000000e+00
182 | 3.0000000e+00
183 | 5.0000000e+00
184 | 9.0000000e+00
185 | 1.0000000e+00
186 | 7.0000000e+00
187 | 6.0000000e+00
188 | 2.0000000e+00
189 | 8.0000000e+00
190 | 2.0000000e+00
191 | 2.0000000e+00
192 | 5.0000000e+00
193 | 0.0000000e+00
194 | 7.0000000e+00
195 | 4.0000000e+00
196 | 9.0000000e+00
197 | 7.0000000e+00
198 | 8.0000000e+00
199 | 3.0000000e+00
200 | 2.0000000e+00
201 | 1.0000000e+00
202 | 1.0000000e+00
203 | 8.0000000e+00
204 | 3.0000000e+00
205 | 6.0000000e+00
206 | 1.0000000e+00
207 | 0.0000000e+00
208 | 3.0000000e+00
209 | 1.0000000e+00
210 | 0.0000000e+00
211 | 0.0000000e+00
212 | 1.0000000e+00
213 | 7.0000000e+00
214 | 2.0000000e+00
215 | 7.0000000e+00
216 | 3.0000000e+00
217 | 0.0000000e+00
218 | 4.0000000e+00
219 | 6.0000000e+00
220 | 5.0000000e+00
221 | 2.0000000e+00
222 | 6.0000000e+00
223 | 4.0000000e+00
224 | 7.0000000e+00
225 | 1.0000000e+00
226 | 8.0000000e+00
227 | 9.0000000e+00
228 | 9.0000000e+00
229 | 3.0000000e+00
230 | 0.0000000e+00
231 | 7.0000000e+00
232 | 1.0000000e+00
233 | 0.0000000e+00
234 | 2.0000000e+00
235 | 0.0000000e+00
236 | 3.0000000e+00
237 | 5.0000000e+00
238 | 4.0000000e+00
239 | 6.0000000e+00
240 | 5.0000000e+00
241 | 8.0000000e+00
242 | 6.0000000e+00
243 | 3.0000000e+00
244 | 7.0000000e+00
245 | 5.0000000e+00
246 | 8.0000000e+00
247 | 0.0000000e+00
248 | 9.0000000e+00
249 | 1.0000000e+00
250 | 0.0000000e+00
251 | 3.0000000e+00
252 | 1.0000000e+00
253 | 2.0000000e+00
254 | 2.0000000e+00
255 | 3.0000000e+00
256 | 3.0000000e+00
257 | 6.0000000e+00
258 | 4.0000000e+00
259 | 7.0000000e+00
260 | 5.0000000e+00
261 | 0.0000000e+00
262 | 6.0000000e+00
263 | 2.0000000e+00
264 | 7.0000000e+00
265 | 9.0000000e+00
266 | 8.0000000e+00
267 | 5.0000000e+00
268 | 9.0000000e+00
269 | 2.0000000e+00
270 | 1.0000000e+00
271 | 1.0000000e+00
272 | 4.0000000e+00
273 | 4.0000000e+00
274 | 5.0000000e+00
275 | 6.0000000e+00
276 | 4.0000000e+00
277 | 1.0000000e+00
278 | 2.0000000e+00
279 | 5.0000000e+00
280 | 3.0000000e+00
281 | 9.0000000e+00
282 | 3.0000000e+00
283 | 9.0000000e+00
284 | 0.0000000e+00
285 | 5.0000000e+00
286 | 9.0000000e+00
287 | 6.0000000e+00
288 | 5.0000000e+00
289 | 7.0000000e+00
290 | 4.0000000e+00
291 | 1.0000000e+00
292 | 3.0000000e+00
293 | 4.0000000e+00
294 | 0.0000000e+00
295 | 4.0000000e+00
296 | 8.0000000e+00
297 | 0.0000000e+00
298 | 4.0000000e+00
299 | 3.0000000e+00
300 | 6.0000000e+00
301 | 8.0000000e+00
302 | 7.0000000e+00
303 | 6.0000000e+00
304 | 0.0000000e+00
305 | 9.0000000e+00
306 | 7.0000000e+00
307 | 5.0000000e+00
308 | 7.0000000e+00
309 | 2.0000000e+00
310 | 1.0000000e+00
311 | 1.0000000e+00
312 | 6.0000000e+00
313 | 8.0000000e+00
314 | 9.0000000e+00
315 | 4.0000000e+00
316 | 1.0000000e+00
317 | 5.0000000e+00
318 | 2.0000000e+00
319 | 2.0000000e+00
320 | 9.0000000e+00
321 | 0.0000000e+00
322 | 3.0000000e+00
323 | 9.0000000e+00
324 | 6.0000000e+00
325 | 7.0000000e+00
326 | 2.0000000e+00
327 | 0.0000000e+00
328 | 3.0000000e+00
329 | 5.0000000e+00
330 | 4.0000000e+00
331 | 3.0000000e+00
332 | 6.0000000e+00
333 | 5.0000000e+00
334 | 8.0000000e+00
335 | 9.0000000e+00
336 | 5.0000000e+00
337 | 4.0000000e+00
338 | 7.0000000e+00
339 | 4.0000000e+00
340 | 2.0000000e+00
341 | 7.0000000e+00
342 | 3.0000000e+00
343 | 4.0000000e+00
344 | 8.0000000e+00
345 | 9.0000000e+00
346 | 1.0000000e+00
347 | 9.0000000e+00
348 | 2.0000000e+00
349 | 8.0000000e+00
350 | 7.0000000e+00
351 | 9.0000000e+00
352 | 1.0000000e+00
353 | 8.0000000e+00
354 | 7.0000000e+00
355 | 4.0000000e+00
356 | 1.0000000e+00
357 | 3.0000000e+00
358 | 1.0000000e+00
359 | 1.0000000e+00
360 | 0.0000000e+00
361 | 2.0000000e+00
362 | 3.0000000e+00
363 | 9.0000000e+00
364 | 4.0000000e+00
365 | 9.0000000e+00
366 | 2.0000000e+00
367 | 1.0000000e+00
368 | 6.0000000e+00
369 | 8.0000000e+00
370 | 4.0000000e+00
371 | 7.0000000e+00
372 | 7.0000000e+00
373 | 4.0000000e+00
374 | 4.0000000e+00
375 | 9.0000000e+00
376 | 2.0000000e+00
377 | 5.0000000e+00
378 | 7.0000000e+00
379 | 2.0000000e+00
380 | 4.0000000e+00
381 | 4.0000000e+00
382 | 2.0000000e+00
383 | 1.0000000e+00
384 | 9.0000000e+00
385 | 7.0000000e+00
386 | 2.0000000e+00
387 | 8.0000000e+00
388 | 7.0000000e+00
389 | 6.0000000e+00
390 | 9.0000000e+00
391 | 2.0000000e+00
392 | 2.0000000e+00
393 | 3.0000000e+00
394 | 8.0000000e+00
395 | 1.0000000e+00
396 | 6.0000000e+00
397 | 5.0000000e+00
398 | 1.0000000e+00
399 | 1.0000000e+00
400 | 0.0000000e+00
401 | 2.0000000e+00
402 | 6.0000000e+00
403 | 4.0000000e+00
404 | 5.0000000e+00
405 | 8.0000000e+00
406 | 3.0000000e+00
407 | 1.0000000e+00
408 | 5.0000000e+00
409 | 1.0000000e+00
410 | 9.0000000e+00
411 | 2.0000000e+00
412 | 7.0000000e+00
413 | 4.0000000e+00
414 | 4.0000000e+00
415 | 4.0000000e+00
416 | 8.0000000e+00
417 | 1.0000000e+00
418 | 5.0000000e+00
419 | 8.0000000e+00
420 | 9.0000000e+00
421 | 5.0000000e+00
422 | 6.0000000e+00
423 | 7.0000000e+00
424 | 9.0000000e+00
425 | 9.0000000e+00
426 | 3.0000000e+00
427 | 7.0000000e+00
428 | 0.0000000e+00
429 | 9.0000000e+00
430 | 0.0000000e+00
431 | 6.0000000e+00
432 | 6.0000000e+00
433 | 2.0000000e+00
434 | 3.0000000e+00
435 | 9.0000000e+00
436 | 0.0000000e+00
437 | 7.0000000e+00
438 | 5.0000000e+00
439 | 4.0000000e+00
440 | 8.0000000e+00
441 | 0.0000000e+00
442 | 9.0000000e+00
443 | 4.0000000e+00
444 | 1.0000000e+00
445 | 2.0000000e+00
446 | 8.0000000e+00
447 | 7.0000000e+00
448 | 1.0000000e+00
449 | 2.0000000e+00
450 | 6.0000000e+00
451 | 1.0000000e+00
452 | 0.0000000e+00
453 | 3.0000000e+00
454 | 0.0000000e+00
455 | 1.0000000e+00
456 | 1.0000000e+00
457 | 8.0000000e+00
458 | 2.0000000e+00
459 | 0.0000000e+00
460 | 3.0000000e+00
461 | 9.0000000e+00
462 | 4.0000000e+00
463 | 0.0000000e+00
464 | 5.0000000e+00
465 | 0.0000000e+00
466 | 6.0000000e+00
467 | 1.0000000e+00
468 | 7.0000000e+00
469 | 7.0000000e+00
470 | 8.0000000e+00
471 | 1.0000000e+00
472 | 9.0000000e+00
473 | 2.0000000e+00
474 | 0.0000000e+00
475 | 5.0000000e+00
476 | 1.0000000e+00
477 | 2.0000000e+00
478 | 2.0000000e+00
479 | 7.0000000e+00
480 | 3.0000000e+00
481 | 5.0000000e+00
482 | 4.0000000e+00
483 | 9.0000000e+00
484 | 7.0000000e+00
485 | 1.0000000e+00
486 | 8.0000000e+00
487 | 3.0000000e+00
488 | 9.0000000e+00
489 | 6.0000000e+00
490 | 0.0000000e+00
491 | 3.0000000e+00
492 | 1.0000000e+00
493 | 1.0000000e+00
494 | 2.0000000e+00
495 | 6.0000000e+00
496 | 3.0000000e+00
497 | 5.0000000e+00
498 | 7.0000000e+00
499 | 6.0000000e+00
500 | 8.0000000e+00
501 | 3.0000000e+00
502 | 9.0000000e+00
503 | 5.0000000e+00
504 | 8.0000000e+00
505 | 5.0000000e+00
506 | 7.0000000e+00
507 | 6.0000000e+00
508 | 1.0000000e+00
509 | 1.0000000e+00
510 | 3.0000000e+00
511 | 1.0000000e+00
512 | 7.0000000e+00
513 | 5.0000000e+00
514 | 5.0000000e+00
515 | 5.0000000e+00
516 | 2.0000000e+00
517 | 5.0000000e+00
518 | 8.0000000e+00
519 | 7.0000000e+00
520 | 0.0000000e+00
521 | 9.0000000e+00
522 | 7.0000000e+00
523 | 7.0000000e+00
524 | 5.0000000e+00
525 | 0.0000000e+00
526 | 9.0000000e+00
527 | 0.0000000e+00
528 | 0.0000000e+00
529 | 8.0000000e+00
530 | 9.0000000e+00
531 | 2.0000000e+00
532 | 4.0000000e+00
533 | 8.0000000e+00
534 | 1.0000000e+00
535 | 6.0000000e+00
536 | 1.0000000e+00
537 | 6.0000000e+00
538 | 5.0000000e+00
539 | 1.0000000e+00
540 | 8.0000000e+00
541 | 3.0000000e+00
542 | 4.0000000e+00
543 | 0.0000000e+00
544 | 5.0000000e+00
545 | 5.0000000e+00
546 | 8.0000000e+00
547 | 3.0000000e+00
548 | 6.0000000e+00
549 | 2.0000000e+00
550 | 3.0000000e+00
551 | 9.0000000e+00
552 | 2.0000000e+00
553 | 1.0000000e+00
554 | 1.0000000e+00
555 | 5.0000000e+00
556 | 2.0000000e+00
557 | 1.0000000e+00
558 | 3.0000000e+00
559 | 2.0000000e+00
560 | 8.0000000e+00
561 | 7.0000000e+00
562 | 3.0000000e+00
563 | 7.0000000e+00
564 | 2.0000000e+00
565 | 4.0000000e+00
566 | 6.0000000e+00
567 | 9.0000000e+00
568 | 7.0000000e+00
569 | 2.0000000e+00
570 | 4.0000000e+00
571 | 2.0000000e+00
572 | 8.0000000e+00
573 | 1.0000000e+00
574 | 1.0000000e+00
575 | 3.0000000e+00
576 | 8.0000000e+00
577 | 4.0000000e+00
578 | 0.0000000e+00
579 | 6.0000000e+00
580 | 5.0000000e+00
581 | 9.0000000e+00
582 | 3.0000000e+00
583 | 0.0000000e+00
584 | 9.0000000e+00
585 | 2.0000000e+00
586 | 4.0000000e+00
587 | 7.0000000e+00
588 | 1.0000000e+00
589 | 2.0000000e+00
590 | 9.0000000e+00
591 | 4.0000000e+00
592 | 2.0000000e+00
593 | 6.0000000e+00
594 | 1.0000000e+00
595 | 8.0000000e+00
596 | 9.0000000e+00
597 | 0.0000000e+00
598 | 6.0000000e+00
599 | 6.0000000e+00
600 | 7.0000000e+00
601 | 9.0000000e+00
602 | 9.0000000e+00
603 | 8.0000000e+00
604 | 0.0000000e+00
605 | 1.0000000e+00
606 | 4.0000000e+00
607 | 4.0000000e+00
608 | 6.0000000e+00
609 | 7.0000000e+00
610 | 1.0000000e+00
611 | 5.0000000e+00
612 | 7.0000000e+00
613 | 0.0000000e+00
614 | 3.0000000e+00
615 | 5.0000000e+00
616 | 8.0000000e+00
617 | 4.0000000e+00
618 | 7.0000000e+00
619 | 1.0000000e+00
620 | 2.0000000e+00
621 | 5.0000000e+00
622 | 9.0000000e+00
623 | 5.0000000e+00
624 | 6.0000000e+00
625 | 7.0000000e+00
626 | 5.0000000e+00
627 | 9.0000000e+00
628 | 8.0000000e+00
629 | 8.0000000e+00
630 | 3.0000000e+00
631 | 6.0000000e+00
632 | 9.0000000e+00
633 | 7.0000000e+00
634 | 0.0000000e+00
635 | 7.0000000e+00
636 | 5.0000000e+00
637 | 7.0000000e+00
638 | 1.0000000e+00
639 | 1.0000000e+00
640 | 0.0000000e+00
641 | 7.0000000e+00
642 | 9.0000000e+00
643 | 2.0000000e+00
644 | 3.0000000e+00
645 | 7.0000000e+00
646 | 3.0000000e+00
647 | 2.0000000e+00
648 | 4.0000000e+00
649 | 1.0000000e+00
650 | 6.0000000e+00
651 | 2.0000000e+00
652 | 7.0000000e+00
653 | 5.0000000e+00
654 | 5.0000000e+00
655 | 7.0000000e+00
656 | 4.0000000e+00
657 | 0.0000000e+00
658 | 2.0000000e+00
659 | 6.0000000e+00
660 | 3.0000000e+00
661 | 6.0000000e+00
662 | 4.0000000e+00
663 | 0.0000000e+00
664 | 4.0000000e+00
665 | 2.0000000e+00
666 | 6.0000000e+00
667 | 0.0000000e+00
668 | 0.0000000e+00
669 | 0.0000000e+00
670 | 0.0000000e+00
671 | 3.0000000e+00
672 | 1.0000000e+00
673 | 6.0000000e+00
674 | 2.0000000e+00
675 | 2.0000000e+00
676 | 3.0000000e+00
677 | 1.0000000e+00
678 | 4.0000000e+00
679 | 1.0000000e+00
680 | 5.0000000e+00
681 | 4.0000000e+00
682 | 6.0000000e+00
683 | 4.0000000e+00
684 | 7.0000000e+00
685 | 2.0000000e+00
686 | 8.0000000e+00
687 | 7.0000000e+00
688 | 9.0000000e+00
689 | 2.0000000e+00
690 | 0.0000000e+00
691 | 5.0000000e+00
692 | 1.0000000e+00
693 | 4.0000000e+00
694 | 2.0000000e+00
695 | 8.0000000e+00
696 | 3.0000000e+00
697 | 2.0000000e+00
698 | 4.0000000e+00
699 | 1.0000000e+00
700 | 5.0000000e+00
701 | 4.0000000e+00
702 | 6.0000000e+00
703 | 0.0000000e+00
704 | 7.0000000e+00
705 | 9.0000000e+00
706 | 8.0000000e+00
707 | 4.0000000e+00
708 | 9.0000000e+00
709 | 8.0000000e+00
710 | 0.0000000e+00
711 | 1.0000000e+00
712 | 1.0000000e+00
713 | 0.0000000e+00
714 | 2.0000000e+00
715 | 2.0000000e+00
716 | 3.0000000e+00
717 | 2.0000000e+00
718 | 4.0000000e+00
719 | 4.0000000e+00
720 | 5.0000000e+00
721 | 8.0000000e+00
722 | 6.0000000e+00
723 | 5.0000000e+00
724 | 7.0000000e+00
725 | 7.0000000e+00
726 | 8.0000000e+00
727 | 8.0000000e+00
728 | 9.0000000e+00
729 | 7.0000000e+00
730 | 4.0000000e+00
731 | 7.0000000e+00
732 | 3.0000000e+00
733 | 2.0000000e+00
734 | 0.0000000e+00
735 | 8.0000000e+00
736 | 6.0000000e+00
737 | 8.0000000e+00
738 | 6.0000000e+00
739 | 1.0000000e+00
740 | 6.0000000e+00
741 | 8.0000000e+00
742 | 9.0000000e+00
743 | 4.0000000e+00
744 | 0.0000000e+00
745 | 9.0000000e+00
746 | 0.0000000e+00
747 | 4.0000000e+00
748 | 1.0000000e+00
749 | 5.0000000e+00
750 | 4.0000000e+00
751 | 7.0000000e+00
752 | 5.0000000e+00
753 | 3.0000000e+00
754 | 7.0000000e+00
755 | 4.0000000e+00
756 | 9.0000000e+00
757 | 8.0000000e+00
758 | 5.0000000e+00
759 | 8.0000000e+00
760 | 6.0000000e+00
761 | 3.0000000e+00
762 | 8.0000000e+00
763 | 6.0000000e+00
764 | 9.0000000e+00
765 | 9.0000000e+00
766 | 1.0000000e+00
767 | 8.0000000e+00
768 | 3.0000000e+00
769 | 5.0000000e+00
770 | 8.0000000e+00
771 | 6.0000000e+00
772 | 5.0000000e+00
773 | 9.0000000e+00
774 | 7.0000000e+00
775 | 2.0000000e+00
776 | 5.0000000e+00
777 | 0.0000000e+00
778 | 8.0000000e+00
779 | 5.0000000e+00
780 | 1.0000000e+00
781 | 1.0000000e+00
782 | 0.0000000e+00
783 | 9.0000000e+00
784 | 1.0000000e+00
785 | 8.0000000e+00
786 | 6.0000000e+00
787 | 7.0000000e+00
788 | 0.0000000e+00
789 | 9.0000000e+00
790 | 3.0000000e+00
791 | 0.0000000e+00
792 | 8.0000000e+00
793 | 8.0000000e+00
794 | 9.0000000e+00
795 | 6.0000000e+00
796 | 7.0000000e+00
797 | 8.0000000e+00
798 | 4.0000000e+00
799 | 7.0000000e+00
800 | 5.0000000e+00
801 | 9.0000000e+00
802 | 2.0000000e+00
803 | 6.0000000e+00
804 | 7.0000000e+00
805 | 4.0000000e+00
806 | 5.0000000e+00
807 | 9.0000000e+00
808 | 2.0000000e+00
809 | 3.0000000e+00
810 | 1.0000000e+00
811 | 6.0000000e+00
812 | 3.0000000e+00
813 | 9.0000000e+00
814 | 2.0000000e+00
815 | 2.0000000e+00
816 | 5.0000000e+00
817 | 6.0000000e+00
818 | 8.0000000e+00
819 | 0.0000000e+00
820 | 7.0000000e+00
821 | 7.0000000e+00
822 | 1.0000000e+00
823 | 9.0000000e+00
824 | 8.0000000e+00
825 | 7.0000000e+00
826 | 0.0000000e+00
827 | 9.0000000e+00
828 | 9.0000000e+00
829 | 4.0000000e+00
830 | 6.0000000e+00
831 | 2.0000000e+00
832 | 8.0000000e+00
833 | 5.0000000e+00
834 | 1.0000000e+00
835 | 4.0000000e+00
836 | 1.0000000e+00
837 | 5.0000000e+00
838 | 5.0000000e+00
839 | 1.0000000e+00
840 | 7.0000000e+00
841 | 3.0000000e+00
842 | 6.0000000e+00
843 | 4.0000000e+00
844 | 3.0000000e+00
845 | 2.0000000e+00
846 | 5.0000000e+00
847 | 6.0000000e+00
848 | 4.0000000e+00
849 | 4.0000000e+00
850 | 0.0000000e+00
851 | 4.0000000e+00
852 | 4.0000000e+00
853 | 6.0000000e+00
854 | 7.0000000e+00
855 | 2.0000000e+00
856 | 4.0000000e+00
857 | 3.0000000e+00
858 | 3.0000000e+00
859 | 8.0000000e+00
860 | 0.0000000e+00
861 | 0.0000000e+00
862 | 3.0000000e+00
863 | 2.0000000e+00
864 | 2.0000000e+00
865 | 9.0000000e+00
866 | 8.0000000e+00
867 | 2.0000000e+00
868 | 3.0000000e+00
869 | 7.0000000e+00
870 | 0.0000000e+00
871 | 1.0000000e+00
872 | 1.0000000e+00
873 | 0.0000000e+00
874 | 2.0000000e+00
875 | 3.0000000e+00
876 | 3.0000000e+00
877 | 8.0000000e+00
878 | 4.0000000e+00
879 | 3.0000000e+00
880 | 5.0000000e+00
881 | 7.0000000e+00
882 | 6.0000000e+00
883 | 4.0000000e+00
884 | 7.0000000e+00
885 | 7.0000000e+00
886 | 8.0000000e+00
887 | 5.0000000e+00
888 | 9.0000000e+00
889 | 7.0000000e+00
890 | 0.0000000e+00
891 | 3.0000000e+00
892 | 1.0000000e+00
893 | 6.0000000e+00
894 | 2.0000000e+00
895 | 4.0000000e+00
896 | 3.0000000e+00
897 | 4.0000000e+00
898 | 4.0000000e+00
899 | 7.0000000e+00
900 | 5.0000000e+00
901 | 9.0000000e+00
902 | 6.0000000e+00
903 | 9.0000000e+00
904 | 0.0000000e+00
905 | 7.0000000e+00
906 | 1.0000000e+00
907 | 4.0000000e+00
908 | 2.0000000e+00
909 | 7.0000000e+00
910 | 3.0000000e+00
911 | 6.0000000e+00
912 | 7.0000000e+00
913 | 5.0000000e+00
914 | 8.0000000e+00
915 | 4.0000000e+00
916 | 5.0000000e+00
917 | 5.0000000e+00
918 | 2.0000000e+00
919 | 7.0000000e+00
920 | 1.0000000e+00
921 | 1.0000000e+00
922 | 5.0000000e+00
923 | 6.0000000e+00
924 | 8.0000000e+00
925 | 5.0000000e+00
926 | 8.0000000e+00
927 | 4.0000000e+00
928 | 0.0000000e+00
929 | 7.0000000e+00
930 | 9.0000000e+00
931 | 9.0000000e+00
932 | 2.0000000e+00
933 | 9.0000000e+00
934 | 7.0000000e+00
935 | 7.0000000e+00
936 | 8.0000000e+00
937 | 7.0000000e+00
938 | 4.0000000e+00
939 | 2.0000000e+00
940 | 6.0000000e+00
941 | 9.0000000e+00
942 | 1.0000000e+00
943 | 7.0000000e+00
944 | 0.0000000e+00
945 | 6.0000000e+00
946 | 4.0000000e+00
947 | 2.0000000e+00
948 | 5.0000000e+00
949 | 7.0000000e+00
950 | 0.0000000e+00
951 | 7.0000000e+00
952 | 1.0000000e+00
953 | 0.0000000e+00
954 | 3.0000000e+00
955 | 7.0000000e+00
956 | 6.0000000e+00
957 | 5.0000000e+00
958 | 0.0000000e+00
959 | 6.0000000e+00
960 | 1.0000000e+00
961 | 5.0000000e+00
962 | 1.0000000e+00
963 | 7.0000000e+00
964 | 8.0000000e+00
965 | 5.0000000e+00
966 | 0.0000000e+00
967 | 3.0000000e+00
968 | 4.0000000e+00
969 | 7.0000000e+00
970 | 7.0000000e+00
971 | 5.0000000e+00
972 | 7.0000000e+00
973 | 8.0000000e+00
974 | 6.0000000e+00
975 | 9.0000000e+00
976 | 3.0000000e+00
977 | 8.0000000e+00
978 | 6.0000000e+00
979 | 1.0000000e+00
980 | 0.0000000e+00
981 | 9.0000000e+00
982 | 7.0000000e+00
983 | 1.0000000e+00
984 | 3.0000000e+00
985 | 0.0000000e+00
986 | 5.0000000e+00
987 | 6.0000000e+00
988 | 4.0000000e+00
989 | 4.0000000e+00
990 | 2.0000000e+00
991 | 4.0000000e+00
992 | 4.0000000e+00
993 | 3.0000000e+00
994 | 1.0000000e+00
995 | 7.0000000e+00
996 | 7.0000000e+00
997 | 6.0000000e+00
998 | 0.0000000e+00
999 | 3.0000000e+00
1000 | 6.0000000e+00
1001 | 0.0000000e+00
1002 | 7.0000000e+00
1003 | 1.0000000e+00
1004 | 1.0000000e+00
1005 | 4.0000000e+00
1006 | 9.0000000e+00
1007 | 4.0000000e+00
1008 | 3.0000000e+00
1009 | 4.0000000e+00
1010 | 8.0000000e+00
1011 | 2.0000000e+00
1012 | 2.0000000e+00
1013 | 1.0000000e+00
1014 | 8.0000000e+00
1015 | 7.0000000e+00
1016 | 0.0000000e+00
1017 | 8.0000000e+00
1018 | 1.0000000e+00
1019 | 0.0000000e+00
1020 | 7.0000000e+00
1021 | 6.0000000e+00
1022 | 3.0000000e+00
1023 | 7.0000000e+00
1024 | 7.0000000e+00
1025 | 5.0000000e+00
1026 | 8.0000000e+00
1027 | 8.0000000e+00
1028 | 9.0000000e+00
1029 | 0.0000000e+00
1030 | 0.0000000e+00
1031 | 4.0000000e+00
1032 | 1.0000000e+00
1033 | 5.0000000e+00
1034 | 2.0000000e+00
1035 | 2.0000000e+00
1036 | 3.0000000e+00
1037 | 9.0000000e+00
1038 | 4.0000000e+00
1039 | 9.0000000e+00
1040 | 5.0000000e+00
1041 | 0.0000000e+00
1042 | 6.0000000e+00
1043 | 7.0000000e+00
1044 | 7.0000000e+00
1045 | 1.0000000e+00
1046 | 8.0000000e+00
1047 | 0.0000000e+00
1048 | 2.0000000e+00
1049 | 2.0000000e+00
1050 | 0.0000000e+00
1051 | 4.0000000e+00
1052 | 1.0000000e+00
1053 | 1.0000000e+00
1054 | 2.0000000e+00
1055 | 7.0000000e+00
1056 | 3.0000000e+00
1057 | 9.0000000e+00
1058 | 7.0000000e+00
1059 | 2.0000000e+00
1060 | 8.0000000e+00
1061 | 1.0000000e+00
1062 | 9.0000000e+00
1063 | 5.0000000e+00
1064 | 8.0000000e+00
1065 | 8.0000000e+00
1066 | 1.0000000e+00
1067 | 9.0000000e+00
1068 | 8.0000000e+00
1069 | 3.0000000e+00
1070 | 1.0000000e+00
1071 | 6.0000000e+00
1072 | 5.0000000e+00
1073 | 7.0000000e+00
1074 | 4.0000000e+00
1075 | 2.0000000e+00
1076 | 7.0000000e+00
1077 | 0.0000000e+00
1078 | 3.0000000e+00
1079 | 0.0000000e+00
1080 | 4.0000000e+00
1081 | 1.0000000e+00
1082 | 1.0000000e+00
1083 | 7.0000000e+00
1084 | 9.0000000e+00
1085 | 1.0000000e+00
1086 | 1.0000000e+00
1087 | 8.0000000e+00
1088 | 5.0000000e+00
1089 | 7.0000000e+00
1090 | 5.0000000e+00
1091 | 0.0000000e+00
1092 | 6.0000000e+00
1093 | 6.0000000e+00
1094 | 0.0000000e+00
1095 | 4.0000000e+00
1096 | 1.0000000e+00
1097 | 2.0000000e+00
1098 | 3.0000000e+00
1099 | 4.0000000e+00
1100 | 4.0000000e+00
1101 | 6.0000000e+00
1102 | 8.0000000e+00
1103 | 0.0000000e+00
1104 | 9.0000000e+00
1105 | 5.0000000e+00
1106 | 8.0000000e+00
1107 | 7.0000000e+00
1108 | 0.0000000e+00
1109 | 3.0000000e+00
1110 | 5.0000000e+00
1111 | 4.0000000e+00
1112 | 5.0000000e+00
1113 | 9.0000000e+00
1114 | 6.0000000e+00
1115 | 7.0000000e+00
1116 | 1.0000000e+00
1117 | 9.0000000e+00
1118 | 6.0000000e+00
1119 | 1.0000000e+00
1120 | 3.0000000e+00
1121 | 8.0000000e+00
1122 | 3.0000000e+00
1123 | 9.0000000e+00
1124 | 1.0000000e+00
1125 | 2.0000000e+00
1126 | 7.0000000e+00
1127 | 7.0000000e+00
1128 | 7.0000000e+00
1129 | 0.0000000e+00
1130 | 2.0000000e+00
1131 | 3.0000000e+00
1132 | 1.0000000e+00
1133 | 1.0000000e+00
1134 | 4.0000000e+00
1135 | 2.0000000e+00
1136 | 5.0000000e+00
1137 | 6.0000000e+00
1138 | 0.0000000e+00
1139 | 9.0000000e+00
1140 | 6.0000000e+00
1141 | 2.0000000e+00
1142 | 8.0000000e+00
1143 | 9.0000000e+00
1144 | 2.0000000e+00
1145 | 3.0000000e+00
1146 | 3.0000000e+00
1147 | 6.0000000e+00
1148 | 9.0000000e+00
1149 | 1.0000000e+00
1150 | 4.0000000e+00
1151 | 3.0000000e+00
1152 | 3.0000000e+00
1153 | 0.0000000e+00
1154 | 7.0000000e+00
1155 | 7.0000000e+00
1156 | 1.0000000e+00
1157 | 7.0000000e+00
1158 | 7.0000000e+00
1159 | 3.0000000e+00
1160 | 6.0000000e+00
1161 | 4.0000000e+00
1162 | 9.0000000e+00
1163 | 5.0000000e+00
1164 | 4.0000000e+00
1165 | 4.0000000e+00
1166 | 2.0000000e+00
1167 | 7.0000000e+00
1168 | 9.0000000e+00
1169 | 0.0000000e+00
1170 | 9.0000000e+00
1171 | 8.0000000e+00
1172 | 4.0000000e+00
1173 | 4.0000000e+00
1174 | 9.0000000e+00
1175 | 1.0000000e+00
1176 | 2.0000000e+00
1177 | 4.0000000e+00
1178 | 9.0000000e+00
1179 | 3.0000000e+00
1180 | 0.0000000e+00
1181 | 4.0000000e+00
1182 | 1.0000000e+00
1183 | 6.0000000e+00
1184 | 2.0000000e+00
1185 | 6.0000000e+00
1186 | 3.0000000e+00
1187 | 7.0000000e+00
1188 | 4.0000000e+00
1189 | 2.0000000e+00
1190 | 6.0000000e+00
1191 | 6.0000000e+00
1192 | 7.0000000e+00
1193 | 1.0000000e+00
1194 | 8.0000000e+00
1195 | 9.0000000e+00
1196 | 0.0000000e+00
1197 | 4.0000000e+00
1198 | 1.0000000e+00
1199 | 4.0000000e+00
1200 | 2.0000000e+00
1201 | 1.0000000e+00
1202 | 3.0000000e+00
1203 | 6.0000000e+00
1204 | 4.0000000e+00
1205 | 6.0000000e+00
1206 | 7.0000000e+00
1207 | 5.0000000e+00
1208 | 8.0000000e+00
1209 | 7.0000000e+00
1210 | 0.0000000e+00
1211 | 5.0000000e+00
1212 | 1.0000000e+00
1213 | 4.0000000e+00
1214 | 2.0000000e+00
1215 | 8.0000000e+00
1216 | 4.0000000e+00
1217 | 7.0000000e+00
1218 | 7.0000000e+00
1219 | 3.0000000e+00
1220 | 8.0000000e+00
1221 | 4.0000000e+00
1222 | 9.0000000e+00
1223 | 5.0000000e+00
1224 | 8.0000000e+00
1225 | 6.0000000e+00
1226 | 7.0000000e+00
1227 | 3.0000000e+00
1228 | 4.0000000e+00
1229 | 6.0000000e+00
1230 | 7.0000000e+00
1231 | 1.0000000e+00
1232 | 7.0000000e+00
1233 | 4.0000000e+00
1234 | 3.0000000e+00
1235 | 3.0000000e+00
1236 | 9.0000000e+00
1237 | 8.0000000e+00
1238 | 8.0000000e+00
1239 | 1.0000000e+00
1240 | 8.0000000e+00
1241 | 6.0000000e+00
1242 | 3.0000000e+00
1243 | 1.0000000e+00
1244 | 1.0000000e+00
1245 | 3.0000000e+00
1246 | 5.0000000e+00
1247 | 2.0000000e+00
1248 | 8.0000000e+00
1249 | 4.0000000e+00
1250 | 2.0000000e+00
1251 | 9.0000000e+00
1252 | 7.0000000e+00
1253 | 1.0000000e+00
1254 | 4.0000000e+00
1255 | 8.0000000e+00
1256 | 2.0000000e+00
1257 | 9.0000000e+00
1258 | 6.0000000e+00
1259 | 4.0000000e+00
1260 | 1.0000000e+00
1261 | 3.0000000e+00
1262 | 4.0000000e+00
1263 | 2.0000000e+00
1264 | 5.0000000e+00
1265 | 2.0000000e+00
1266 | 5.0000000e+00
1267 | 6.0000000e+00
1268 | 8.0000000e+00
1269 | 0.0000000e+00
1270 | 6.0000000e+00
1271 | 2.0000000e+00
1272 | 4.0000000e+00
1273 | 9.0000000e+00
1274 | 4.0000000e+00
1275 | 9.0000000e+00
1276 | 4.0000000e+00
1277 | 5.0000000e+00
1278 | 1.0000000e+00
1279 | 5.0000000e+00
1280 | 8.0000000e+00
1281 | 4.0000000e+00
1282 | 7.0000000e+00
1283 | 9.0000000e+00
1284 | 5.0000000e+00
1285 | 9.0000000e+00
1286 | 5.0000000e+00
1287 | 9.0000000e+00
1288 | 1.0000000e+00
1289 | 5.0000000e+00
1290 | 8.0000000e+00
1291 | 3.0000000e+00
1292 | 9.0000000e+00
1293 | 9.0000000e+00
1294 | 1.0000000e+00
1295 | 8.0000000e+00
1296 | 3.0000000e+00
1297 | 8.0000000e+00
1298 | 6.0000000e+00
1299 | 5.0000000e+00
1300 | 2.0000000e+00
1301 | 7.0000000e+00
1302 | 2.0000000e+00
1303 | 7.0000000e+00
1304 | 6.0000000e+00
1305 | 0.0000000e+00
1306 | 9.0000000e+00
1307 | 7.0000000e+00
1308 | 9.0000000e+00
1309 | 4.0000000e+00
1310 | 6.0000000e+00
1311 | 0.0000000e+00
1312 | 5.0000000e+00
1313 | 3.0000000e+00
1314 | 5.0000000e+00
1315 | 7.0000000e+00
1316 | 3.0000000e+00
1317 | 9.0000000e+00
1318 | 3.0000000e+00
1319 | 6.0000000e+00
1320 | 8.0000000e+00
1321 | 3.0000000e+00
1322 | 1.0000000e+00
1323 | 7.0000000e+00
1324 | 6.0000000e+00
1325 | 5.0000000e+00
1326 | 5.0000000e+00
1327 | 7.0000000e+00
1328 | 6.0000000e+00
1329 | 5.0000000e+00
1330 | 8.0000000e+00
1331 | 2.0000000e+00
1332 | 1.0000000e+00
1333 | 7.0000000e+00
1334 | 9.0000000e+00
1335 | 2.0000000e+00
1336 | 7.0000000e+00
1337 | 3.0000000e+00
1338 | 6.0000000e+00
1339 | 7.0000000e+00
1340 | 8.0000000e+00
1341 | 5.0000000e+00
1342 | 3.0000000e+00
1343 | 7.0000000e+00
1344 | 7.0000000e+00
1345 | 8.0000000e+00
1346 | 4.0000000e+00
1347 | 0.0000000e+00
1348 | 7.0000000e+00
1349 | 3.0000000e+00
1350 | 0.0000000e+00
1351 | 6.0000000e+00
1352 | 3.0000000e+00
1353 | 9.0000000e+00
1354 | 7.0000000e+00
1355 | 1.0000000e+00
1356 | 9.0000000e+00
1357 | 5.0000000e+00
1358 | 3.0000000e+00
1359 | 6.0000000e+00
1360 | 0.0000000e+00
1361 | 9.0000000e+00
1362 | 2.0000000e+00
1363 | 8.0000000e+00
1364 | 0.0000000e+00
1365 | 9.0000000e+00
1366 | 1.0000000e+00
1367 | 6.0000000e+00
1368 | 0.0000000e+00
1369 | 0.0000000e+00
1370 | 1.0000000e+00
1371 | 9.0000000e+00
1372 | 0.0000000e+00
1373 | 0.0000000e+00
1374 | 4.0000000e+00
1375 | 2.0000000e+00
1376 | 1.0000000e+00
1377 | 7.0000000e+00
1378 | 0.0000000e+00
1379 | 3.0000000e+00
1380 | 4.0000000e+00
1381 | 4.0000000e+00
1382 | 7.0000000e+00
1383 | 5.0000000e+00
1384 | 9.0000000e+00
1385 | 8.0000000e+00
1386 | 2.0000000e+00
1387 | 0.0000000e+00
1388 | 0.0000000e+00
1389 | 8.0000000e+00
1390 | 6.0000000e+00
1391 | 2.0000000e+00
1392 | 2.0000000e+00
1393 | 7.0000000e+00
1394 | 6.0000000e+00
1395 | 1.0000000e+00
1396 | 2.0000000e+00
1397 | 9.0000000e+00
1398 | 2.0000000e+00
1399 | 6.0000000e+00
1400 | 9.0000000e+00
1401 | 7.0000000e+00
1402 | 9.0000000e+00
1403 | 5.0000000e+00
1404 | 0.0000000e+00
1405 | 8.0000000e+00
1406 | 1.0000000e+00
1407 | 5.0000000e+00
1408 | 2.0000000e+00
1409 | 4.0000000e+00
1410 | 3.0000000e+00
1411 | 9.0000000e+00
1412 | 4.0000000e+00
1413 | 7.0000000e+00
1414 | 5.0000000e+00
1415 | 6.0000000e+00
1416 | 6.0000000e+00
1417 | 7.0000000e+00
1418 | 7.0000000e+00
1419 | 6.0000000e+00
1420 | 8.0000000e+00
1421 | 5.0000000e+00
1422 | 9.0000000e+00
1423 | 7.0000000e+00
1424 | 0.0000000e+00
1425 | 6.0000000e+00
1426 | 1.0000000e+00
1427 | 9.0000000e+00
1428 | 2.0000000e+00
1429 | 3.0000000e+00
1430 | 3.0000000e+00
1431 | 5.0000000e+00
1432 | 4.0000000e+00
1433 | 3.0000000e+00
1434 | 5.0000000e+00
1435 | 8.0000000e+00
1436 | 6.0000000e+00
1437 | 3.0000000e+00
1438 | 7.0000000e+00
1439 | 2.0000000e+00
1440 | 8.0000000e+00
1441 | 4.0000000e+00
1442 | 9.0000000e+00
1443 | 5.0000000e+00
1444 | 0.0000000e+00
1445 | 2.0000000e+00
1446 | 1.0000000e+00
1447 | 4.0000000e+00
1448 | 2.0000000e+00
1449 | 4.0000000e+00
1450 | 3.0000000e+00
1451 | 1.0000000e+00
1452 | 7.0000000e+00
1453 | 1.0000000e+00
1454 | 8.0000000e+00
1455 | 0.0000000e+00
1456 | 9.0000000e+00
1457 | 6.0000000e+00
1458 | 8.0000000e+00
1459 | 1.0000000e+00
1460 | 9.0000000e+00
1461 | 4.0000000e+00
1462 | 4.0000000e+00
1463 | 9.0000000e+00
1464 | 1.0000000e+00
1465 | 8.0000000e+00
1466 | 9.0000000e+00
1467 | 6.0000000e+00
1468 | 5.0000000e+00
1469 | 5.0000000e+00
1470 | 3.0000000e+00
1471 | 3.0000000e+00
1472 | 0.0000000e+00
1473 | 1.0000000e+00
1474 | 4.0000000e+00
1475 | 3.0000000e+00
1476 | 8.0000000e+00
1477 | 3.0000000e+00
1478 | 4.0000000e+00
1479 | 2.0000000e+00
1480 | 0.0000000e+00
1481 | 7.0000000e+00
1482 | 5.0000000e+00
1483 | 5.0000000e+00
1484 | 1.0000000e+00
1485 | 8.0000000e+00
1486 | 5.0000000e+00
1487 | 3.0000000e+00
1488 | 4.0000000e+00
1489 | 6.0000000e+00
1490 | 0.0000000e+00
1491 | 5.0000000e+00
1492 | 7.0000000e+00
1493 | 2.0000000e+00
1494 | 6.0000000e+00
1495 | 6.0000000e+00
1496 | 0.0000000e+00
1497 | 1.0000000e+00
1498 | 1.0000000e+00
1499 | 4.0000000e+00
1500 | 7.0000000e+00
1501 | 9.0000000e+00
1502 | 0.0000000e+00
1503 | 0.0000000e+00
1504 | 6.0000000e+00
1505 | 6.0000000e+00
1506 | 8.0000000e+00
1507 | 6.0000000e+00
1508 | 9.0000000e+00
1509 | 4.0000000e+00
1510 | 5.0000000e+00
1511 | 2.0000000e+00
1512 | 4.0000000e+00
1513 | 0.0000000e+00
1514 | 7.0000000e+00
1515 | 5.0000000e+00
1516 | 6.0000000e+00
1517 | 5.0000000e+00
1518 | 0.0000000e+00
1519 | 9.0000000e+00
1520 | 8.0000000e+00
1521 | 6.0000000e+00
1522 | 1.0000000e+00
1523 | 9.0000000e+00
1524 | 7.0000000e+00
1525 | 5.0000000e+00
1526 | 7.0000000e+00
1527 | 5.0000000e+00
1528 | 1.0000000e+00
1529 | 1.0000000e+00
1530 | 3.0000000e+00
1531 | 0.0000000e+00
1532 | 2.0000000e+00
1533 | 0.0000000e+00
1534 | 3.0000000e+00
1535 | 8.0000000e+00
1536 | 1.0000000e+00
1537 | 6.0000000e+00
1538 | 4.0000000e+00
1539 | 6.0000000e+00
1540 | 2.0000000e+00
1541 | 6.0000000e+00
1542 | 4.0000000e+00
1543 | 8.0000000e+00
1544 | 8.0000000e+00
1545 | 1.0000000e+00
1546 | 4.0000000e+00
1547 | 4.0000000e+00
1548 | 7.0000000e+00
1549 | 1.0000000e+00
1550 | 2.0000000e+00
1551 | 2.0000000e+00
1552 | 3.0000000e+00
1553 | 9.0000000e+00
1554 | 6.0000000e+00
1555 | 4.0000000e+00
1556 | 9.0000000e+00
1557 | 5.0000000e+00
1558 | 6.0000000e+00
1559 | 2.0000000e+00
1560 | 3.0000000e+00
1561 | 9.0000000e+00
1562 | 2.0000000e+00
1563 | 6.0000000e+00
1564 | 2.0000000e+00
1565 | 7.0000000e+00
1566 | 4.0000000e+00
1567 | 3.0000000e+00
1568 | 6.0000000e+00
1569 | 4.0000000e+00
1570 | 9.0000000e+00
1571 | 7.0000000e+00
1572 | 0.0000000e+00
1573 | 2.0000000e+00
1574 | 2.0000000e+00
1575 | 9.0000000e+00
1576 | 5.0000000e+00
1577 | 4.0000000e+00
1578 | 5.0000000e+00
1579 | 0.0000000e+00
1580 | 1.0000000e+00
1581 | 4.0000000e+00
1582 | 3.0000000e+00
1583 | 6.0000000e+00
1584 | 3.0000000e+00
1585 | 2.0000000e+00
1586 | 9.0000000e+00
1587 | 7.0000000e+00
1588 | 5.0000000e+00
1589 | 3.0000000e+00
1590 | 7.0000000e+00
1591 | 0.0000000e+00
1592 | 9.0000000e+00
1593 | 5.0000000e+00
1594 | 8.0000000e+00
1595 | 3.0000000e+00
1596 | 2.0000000e+00
1597 | 0.0000000e+00
1598 | 1.0000000e+00
1599 | 8.0000000e+00
1600 | 3.0000000e+00
1601 | 0.0000000e+00
1602 | 1.0000000e+00
1603 | 2.0000000e+00
1604 | 3.0000000e+00
1605 | 4.0000000e+00
1606 | 0.0000000e+00
1607 | 0.0000000e+00
1608 | 1.0000000e+00
1609 | 7.0000000e+00
1610 | 2.0000000e+00
1611 | 9.0000000e+00
1612 | 3.0000000e+00
1613 | 9.0000000e+00
1614 | 4.0000000e+00
1615 | 2.0000000e+00
1616 | 5.0000000e+00
1617 | 8.0000000e+00
1618 | 6.0000000e+00
1619 | 7.0000000e+00
1620 | 7.0000000e+00
1621 | 9.0000000e+00
1622 | 8.0000000e+00
1623 | 9.0000000e+00
1624 | 9.0000000e+00
1625 | 2.0000000e+00
1626 | 0.0000000e+00
1627 | 0.0000000e+00
1628 | 1.0000000e+00
1629 | 4.0000000e+00
1630 | 2.0000000e+00
1631 | 4.0000000e+00
1632 | 3.0000000e+00
1633 | 9.0000000e+00
1634 | 4.0000000e+00
1635 | 3.0000000e+00
1636 | 5.0000000e+00
1637 | 7.0000000e+00
1638 | 6.0000000e+00
1639 | 5.0000000e+00
1640 | 7.0000000e+00
1641 | 1.0000000e+00
1642 | 8.0000000e+00
1643 | 6.0000000e+00
1644 | 9.0000000e+00
1645 | 3.0000000e+00
1646 | 0.0000000e+00
1647 | 4.0000000e+00
1648 | 1.0000000e+00
1649 | 2.0000000e+00
1650 | 2.0000000e+00
1651 | 5.0000000e+00
1652 | 3.0000000e+00
1653 | 7.0000000e+00
1654 | 4.0000000e+00
1655 | 1.0000000e+00
1656 | 7.0000000e+00
1657 | 7.0000000e+00
1658 | 8.0000000e+00
1659 | 1.0000000e+00
1660 | 9.0000000e+00
1661 | 2.0000000e+00
1662 | 3.0000000e+00
1663 | 2.0000000e+00
1664 | 4.0000000e+00
1665 | 0.0000000e+00
1666 | 1.0000000e+00
1667 | 8.0000000e+00
1668 | 4.0000000e+00
1669 | 3.0000000e+00
1670 | 6.0000000e+00
1671 | 5.0000000e+00
1672 | 6.0000000e+00
1673 | 4.0000000e+00
1674 | 7.0000000e+00
1675 | 9.0000000e+00
1676 | 3.0000000e+00
1677 | 1.0000000e+00
1678 | 3.0000000e+00
1679 | 0.0000000e+00
1680 | 2.0000000e+00
1681 | 1.0000000e+00
1682 | 1.0000000e+00
1683 | 0.0000000e+00
1684 | 9.0000000e+00
1685 | 9.0000000e+00
1686 | 4.0000000e+00
1687 | 6.0000000e+00
1688 | 7.0000000e+00
1689 | 6.0000000e+00
1690 | 3.0000000e+00
1691 | 5.0000000e+00
1692 | 5.0000000e+00
1693 | 4.0000000e+00
1694 | 4.0000000e+00
1695 | 6.0000000e+00
1696 | 9.0000000e+00
1697 | 1.0000000e+00
1698 | 1.0000000e+00
1699 | 3.0000000e+00
1700 | 1.0000000e+00
1701 | 1.0000000e+00
1702 | 0.0000000e+00
1703 | 5.0000000e+00
1704 | 1.0000000e+00
1705 | 4.0000000e+00
1706 | 4.0000000e+00
1707 | 6.0000000e+00
1708 | 6.0000000e+00
1709 | 6.0000000e+00
1710 | 0.0000000e+00
1711 | 1.0000000e+00
1712 | 2.0000000e+00
1713 | 0.0000000e+00
1714 | 8.0000000e+00
1715 | 2.0000000e+00
1716 | 2.0000000e+00
1717 | 1.0000000e+00
1718 | 1.0000000e+00
1719 | 3.0000000e+00
1720 | 7.0000000e+00
1721 | 9.0000000e+00
1722 | 5.0000000e+00
1723 | 3.0000000e+00
1724 | 0.0000000e+00
1725 | 2.0000000e+00
1726 | 0.0000000e+00
1727 | 6.0000000e+00
1728 | 2.0000000e+00
1729 | 9.0000000e+00
1730 | 0.0000000e+00
1731 | 7.0000000e+00
1732 | 6.0000000e+00
1733 | 9.0000000e+00
1734 | 9.0000000e+00
1735 | 1.0000000e+00
1736 | 2.0000000e+00
1737 | 9.0000000e+00
1738 | 3.0000000e+00
1739 | 4.0000000e+00
1740 | 7.0000000e+00
1741 | 9.0000000e+00
1742 | 6.0000000e+00
1743 | 0.0000000e+00
1744 | 9.0000000e+00
1745 | 4.0000000e+00
1746 | 8.0000000e+00
1747 | 7.0000000e+00
1748 | 7.0000000e+00
1749 | 9.0000000e+00
1750 | 8.0000000e+00
1751 | 6.0000000e+00
1752 | 9.0000000e+00
1753 | 5.0000000e+00
1754 | 2.0000000e+00
1755 | 2.0000000e+00
1756 | 2.0000000e+00
1757 | 3.0000000e+00
1758 | 9.0000000e+00
1759 | 8.0000000e+00
1760 | 8.0000000e+00
1761 | 8.0000000e+00
1762 | 6.0000000e+00
1763 | 4.0000000e+00
1764 | 4.0000000e+00
1765 | 4.0000000e+00
1766 | 4.0000000e+00
1767 | 2.0000000e+00
1768 | 4.0000000e+00
1769 | 6.0000000e+00
1770 | 0.0000000e+00
1771 | 7.0000000e+00
1772 | 0.0000000e+00
1773 | 7.0000000e+00
1774 | 8.0000000e+00
1775 | 2.0000000e+00
1776 | 0.0000000e+00
1777 | 8.0000000e+00
1778 | 8.0000000e+00
1779 | 3.0000000e+00
1780 | 6.0000000e+00
1781 | 8.0000000e+00
1782 | 6.0000000e+00
1783 | 6.0000000e+00
1784 | 8.0000000e+00
1785 | 6.0000000e+00
1786 | 5.0000000e+00
1787 | 1.0000000e+00
1788 | 1.0000000e+00
1789 | 8.0000000e+00
1790 | 7.0000000e+00
1791 | 8.0000000e+00
1792 | 3.0000000e+00
1793 | 6.0000000e+00
1794 | 8.0000000e+00
1795 | 9.0000000e+00
1796 | 5.0000000e+00
1797 | 0.0000000e+00
1798 | 0.0000000e+00
1799 | 0.0000000e+00
1800 | 3.0000000e+00
1801 | 2.0000000e+00
1802 | 6.0000000e+00
1803 | 6.0000000e+00
1804 | 7.0000000e+00
1805 | 8.0000000e+00
1806 | 3.0000000e+00
1807 | 5.0000000e+00
1808 | 1.0000000e+00
1809 | 4.0000000e+00
1810 | 3.0000000e+00
1811 | 5.0000000e+00
1812 | 9.0000000e+00
1813 | 4.0000000e+00
1814 | 5.0000000e+00
1815 | 4.0000000e+00
1816 | 1.0000000e+00
1817 | 1.0000000e+00
1818 | 5.0000000e+00
1819 | 4.0000000e+00
1820 | 0.0000000e+00
1821 | 9.0000000e+00
1822 | 7.0000000e+00
1823 | 1.0000000e+00
1824 | 2.0000000e+00
1825 | 5.0000000e+00
1826 | 7.0000000e+00
1827 | 9.0000000e+00
1828 | 4.0000000e+00
1829 | 0.0000000e+00
1830 | 3.0000000e+00
1831 | 6.0000000e+00
1832 | 1.0000000e+00
1833 | 7.0000000e+00
1834 | 7.0000000e+00
1835 | 5.0000000e+00
1836 | 6.0000000e+00
1837 | 3.0000000e+00
1838 | 0.0000000e+00
1839 | 1.0000000e+00
1840 | 1.0000000e+00
1841 | 4.0000000e+00
1842 | 2.0000000e+00
1843 | 4.0000000e+00
1844 | 3.0000000e+00
1845 | 6.0000000e+00
1846 | 4.0000000e+00
1847 | 7.0000000e+00
1848 | 5.0000000e+00
1849 | 7.0000000e+00
1850 | 6.0000000e+00
1851 | 9.0000000e+00
1852 | 7.0000000e+00
1853 | 2.0000000e+00
1854 | 8.0000000e+00
1855 | 4.0000000e+00
1856 | 9.0000000e+00
1857 | 8.0000000e+00
1858 | 0.0000000e+00
1859 | 7.0000000e+00
1860 | 1.0000000e+00
1861 | 1.0000000e+00
1862 | 2.0000000e+00
1863 | 3.0000000e+00
1864 | 3.0000000e+00
1865 | 5.0000000e+00
1866 | 4.0000000e+00
1867 | 6.0000000e+00
1868 | 5.0000000e+00
1869 | 0.0000000e+00
1870 | 6.0000000e+00
1871 | 3.0000000e+00
1872 | 7.0000000e+00
1873 | 3.0000000e+00
1874 | 8.0000000e+00
1875 | 2.0000000e+00
1876 | 9.0000000e+00
1877 | 0.0000000e+00
1878 | 0.0000000e+00
1879 | 9.0000000e+00
1880 | 1.0000000e+00
1881 | 2.0000000e+00
1882 | 2.0000000e+00
1883 | 9.0000000e+00
1884 | 3.0000000e+00
1885 | 9.0000000e+00
1886 | 4.0000000e+00
1887 | 6.0000000e+00
1888 | 5.0000000e+00
1889 | 2.0000000e+00
1890 | 6.0000000e+00
1891 | 6.0000000e+00
1892 | 7.0000000e+00
1893 | 6.0000000e+00
1894 | 8.0000000e+00
1895 | 2.0000000e+00
1896 | 9.0000000e+00
1897 | 2.0000000e+00
1898 | 0.0000000e+00
1899 | 6.0000000e+00
1900 | 7.0000000e+00
1901 | 2.0000000e+00
1902 | 8.0000000e+00
1903 | 6.0000000e+00
1904 | 9.0000000e+00
1905 | 0.0000000e+00
1906 | 9.0000000e+00
1907 | 6.0000000e+00
1908 | 0.0000000e+00
1909 | 3.0000000e+00
1910 | 1.0000000e+00
1911 | 1.0000000e+00
1912 | 4.0000000e+00
1913 | 9.0000000e+00
1914 | 5.0000000e+00
1915 | 8.0000000e+00
1916 | 1.0000000e+00
1917 | 0.0000000e+00
1918 | 6.0000000e+00
1919 | 6.0000000e+00
1920 | 7.0000000e+00
1921 | 2.0000000e+00
1922 | 3.0000000e+00
1923 | 4.0000000e+00
1924 | 2.0000000e+00
1925 | 3.0000000e+00
1926 | 9.0000000e+00
1927 | 0.0000000e+00
1928 | 0.0000000e+00
1929 | 4.0000000e+00
1930 | 5.0000000e+00
1931 | 0.0000000e+00
1932 | 6.0000000e+00
1933 | 4.0000000e+00
1934 | 7.0000000e+00
1935 | 4.0000000e+00
1936 | 3.0000000e+00
1937 | 1.0000000e+00
1938 | 9.0000000e+00
1939 | 3.0000000e+00
1940 | 9.0000000e+00
1941 | 9.0000000e+00
1942 | 3.0000000e+00
1943 | 1.0000000e+00
1944 | 8.0000000e+00
1945 | 7.0000000e+00
1946 | 1.0000000e+00
1947 | 7.0000000e+00
1948 | 2.0000000e+00
1949 | 8.0000000e+00
1950 | 9.0000000e+00
1951 | 9.0000000e+00
1952 | 6.0000000e+00
1953 | 2.0000000e+00
1954 | 7.0000000e+00
1955 | 2.0000000e+00
1956 | 5.0000000e+00
1957 | 0.0000000e+00
1958 | 6.0000000e+00
1959 | 7.0000000e+00
1960 | 3.0000000e+00
1961 | 9.0000000e+00
1962 | 5.0000000e+00
1963 | 9.0000000e+00
1964 | 0.0000000e+00
1965 | 8.0000000e+00
1966 | 5.0000000e+00
1967 | 8.0000000e+00
1968 | 4.0000000e+00
1969 | 9.0000000e+00
1970 | 0.0000000e+00
1971 | 5.0000000e+00
1972 | 3.0000000e+00
1973 | 2.0000000e+00
1974 | 4.0000000e+00
1975 | 8.0000000e+00
1976 | 7.0000000e+00
1977 | 4.0000000e+00
1978 | 1.0000000e+00
1979 | 5.0000000e+00
1980 | 4.0000000e+00
1981 | 6.0000000e+00
1982 | 3.0000000e+00
1983 | 7.0000000e+00
1984 | 7.0000000e+00
1985 | 9.0000000e+00
1986 | 6.0000000e+00
1987 | 6.0000000e+00
1988 | 2.0000000e+00
1989 | 2.0000000e+00
1990 | 1.0000000e+00
1991 | 6.0000000e+00
1992 | 5.0000000e+00
1993 | 4.0000000e+00
1994 | 5.0000000e+00
1995 | 3.0000000e+00
1996 | 0.0000000e+00
1997 | 5.0000000e+00
1998 | 5.0000000e+00
1999 | 2.0000000e+00
2000 | 0.0000000e+00
2001 | 5.0000000e+00
2002 | 8.0000000e+00
2003 | 8.0000000e+00
2004 | 4.0000000e+00
2005 | 2.0000000e+00
2006 | 6.0000000e+00
2007 | 9.0000000e+00
2008 | 7.0000000e+00
2009 | 1.0000000e+00
2010 | 0.0000000e+00
2011 | 2.0000000e+00
2012 | 3.0000000e+00
2013 | 1.0000000e+00
2014 | 8.0000000e+00
2015 | 7.0000000e+00
2016 | 1.0000000e+00
2017 | 8.0000000e+00
2018 | 2.0000000e+00
2019 | 7.0000000e+00
2020 | 2.0000000e+00
2021 | 6.0000000e+00
2022 | 3.0000000e+00
2023 | 2.0000000e+00
2024 | 3.0000000e+00
2025 | 7.0000000e+00
2026 | 6.0000000e+00
2027 | 8.0000000e+00
2028 | 7.0000000e+00
2029 | 5.0000000e+00
2030 | 3.0000000e+00
2031 | 8.0000000e+00
2032 | 4.0000000e+00
2033 | 4.0000000e+00
2034 | 9.0000000e+00
2035 | 4.0000000e+00
2036 | 6.0000000e+00
2037 | 4.0000000e+00
2038 | 5.0000000e+00
2039 | 5.0000000e+00
2040 | 1.0000000e+00
2041 | 1.0000000e+00
2042 | 9.0000000e+00
2043 | 3.0000000e+00
2044 | 8.0000000e+00
2045 | 6.0000000e+00
2046 | 1.0000000e+00
2047 | 4.0000000e+00
2048 | 1.0000000e+00
2049 | 7.0000000e+00
2050 | 5.0000000e+00
2051 | 4.0000000e+00
2052 | 0.0000000e+00
2053 | 4.0000000e+00
2054 | 2.0000000e+00
2055 | 7.0000000e+00
2056 | 7.0000000e+00
2057 | 5.0000000e+00
2058 | 8.0000000e+00
2059 | 0.0000000e+00
2060 | 6.0000000e+00
2061 | 9.0000000e+00
2062 | 6.0000000e+00
2063 | 6.0000000e+00
2064 | 8.0000000e+00
2065 | 7.0000000e+00
2066 | 2.0000000e+00
2067 | 0.0000000e+00
2068 | 8.0000000e+00
2069 | 8.0000000e+00
2070 | 4.0000000e+00
2071 | 8.0000000e+00
2072 | 4.0000000e+00
2073 | 4.0000000e+00
2074 | 2.0000000e+00
2075 | 3.0000000e+00
2076 | 2.0000000e+00
2077 | 3.0000000e+00
2078 | 8.0000000e+00
2079 | 2.0000000e+00
2080 | 0.0000000e+00
2081 | 5.0000000e+00
2082 | 0.0000000e+00
2083 | 0.0000000e+00
2084 | 1.0000000e+00
2085 | 0.0000000e+00
2086 | 2.0000000e+00
2087 | 1.0000000e+00
2088 | 3.0000000e+00
2089 | 3.0000000e+00
2090 | 4.0000000e+00
2091 | 7.0000000e+00
2092 | 5.0000000e+00
2093 | 3.0000000e+00
2094 | 6.0000000e+00
2095 | 1.0000000e+00
2096 | 7.0000000e+00
2097 | 6.0000000e+00
2098 | 8.0000000e+00
2099 | 3.0000000e+00
2100 | 9.0000000e+00
2101 | 0.0000000e+00
2102 | 0.0000000e+00
2103 | 7.0000000e+00
2104 | 1.0000000e+00
2105 | 2.0000000e+00
2106 | 2.0000000e+00
2107 | 4.0000000e+00
2108 | 3.0000000e+00
2109 | 7.0000000e+00
2110 | 4.0000000e+00
2111 | 1.0000000e+00
2112 | 5.0000000e+00
2113 | 0.0000000e+00
2114 | 6.0000000e+00
2115 | 3.0000000e+00
2116 | 7.0000000e+00
2117 | 9.0000000e+00
2118 | 8.0000000e+00
2119 | 7.0000000e+00
2120 | 9.0000000e+00
2121 | 5.0000000e+00
2122 | 0.0000000e+00
2123 | 1.0000000e+00
2124 | 1.0000000e+00
2125 | 8.0000000e+00
2126 | 2.0000000e+00
2127 | 6.0000000e+00
2128 | 3.0000000e+00
2129 | 3.0000000e+00
2130 | 4.0000000e+00
2131 | 3.0000000e+00
2132 | 5.0000000e+00
2133 | 3.0000000e+00
2134 | 7.0000000e+00
2135 | 1.0000000e+00
2136 | 8.0000000e+00
2137 | 1.0000000e+00
2138 | 9.0000000e+00
2139 | 6.0000000e+00
2140 | 8.0000000e+00
2141 | 9.0000000e+00
2142 | 9.0000000e+00
2143 | 7.0000000e+00
2144 | 5.0000000e+00
2145 | 0.0000000e+00
2146 | 7.0000000e+00
2147 | 3.0000000e+00
2148 | 0.0000000e+00
2149 | 6.0000000e+00
2150 | 3.0000000e+00
2151 | 8.0000000e+00
2152 | 1.0000000e+00
2153 | 6.0000000e+00
2154 | 6.0000000e+00
2155 | 1.0000000e+00
2156 | 8.0000000e+00
2157 | 6.0000000e+00
2158 | 4.0000000e+00
2159 | 6.0000000e+00
2160 | 1.0000000e+00
2161 | 0.0000000e+00
2162 | 7.0000000e+00
2163 | 1.0000000e+00
2164 | 6.0000000e+00
2165 | 4.0000000e+00
2166 | 5.0000000e+00
2167 | 1.0000000e+00
2168 | 6.0000000e+00
2169 | 7.0000000e+00
2170 | 4.0000000e+00
2171 | 8.0000000e+00
2172 | 2.0000000e+00
2173 | 5.0000000e+00
2174 | 7.0000000e+00
2175 | 3.0000000e+00
2176 | 8.0000000e+00
2177 | 4.0000000e+00
2178 | 1.0000000e+00
2179 | 6.0000000e+00
2180 | 3.0000000e+00
2181 | 3.0000000e+00
2182 | 4.0000000e+00
2183 | 5.0000000e+00
2184 | 3.0000000e+00
2185 | 2.0000000e+00
2186 | 4.0000000e+00
2187 | 5.0000000e+00
2188 | 7.0000000e+00
2189 | 1.0000000e+00
2190 | 2.0000000e+00
2191 | 4.0000000e+00
2192 | 0.0000000e+00
2193 | 0.0000000e+00
2194 | 5.0000000e+00
2195 | 8.0000000e+00
2196 | 0.0000000e+00
2197 | 6.0000000e+00
2198 | 1.0000000e+00
2199 | 1.0000000e+00
2200 | 9.0000000e+00
2201 | 3.0000000e+00
2202 | 2.0000000e+00
2203 | 1.0000000e+00
2204 | 3.0000000e+00
2205 | 4.0000000e+00
2206 | 2.0000000e+00
2207 | 4.0000000e+00
2208 | 3.0000000e+00
2209 | 4.0000000e+00
2210 | 5.0000000e+00
2211 | 8.0000000e+00
2212 | 5.0000000e+00
2213 | 6.0000000e+00
2214 | 7.0000000e+00
2215 | 6.0000000e+00
2216 | 8.0000000e+00
2217 | 9.0000000e+00
2218 | 4.0000000e+00
2219 | 0.0000000e+00
2220 | 9.0000000e+00
2221 | 4.0000000e+00
2222 | 9.0000000e+00
2223 | 4.0000000e+00
2224 | 7.0000000e+00
2225 | 4.0000000e+00
2226 | 1.0000000e+00
2227 | 6.0000000e+00
2228 | 1.0000000e+00
2229 | 3.0000000e+00
2230 | 7.0000000e+00
2231 | 3.0000000e+00
2232 | 8.0000000e+00
2233 | 3.0000000e+00
2234 | 3.0000000e+00
2235 | 2.0000000e+00
2236 | 4.0000000e+00
2237 | 4.0000000e+00
2238 | 8.0000000e+00
2239 | 7.0000000e+00
2240 | 6.0000000e+00
2241 | 4.0000000e+00
2242 | 3.0000000e+00
2243 | 6.0000000e+00
2244 | 8.0000000e+00
2245 | 7.0000000e+00
2246 | 0.0000000e+00
2247 | 7.0000000e+00
2248 | 9.0000000e+00
2249 | 5.0000000e+00
2250 | 6.0000000e+00
2251 | 5.0000000e+00
2252 | 2.0000000e+00
2253 | 3.0000000e+00
2254 | 0.0000000e+00
2255 | 4.0000000e+00
2256 | 1.0000000e+00
2257 | 4.0000000e+00
2258 | 0.0000000e+00
2259 | 5.0000000e+00
2260 | 6.0000000e+00
2261 | 1.0000000e+00
2262 | 2.0000000e+00
2263 | 6.0000000e+00
2264 | 3.0000000e+00
2265 | 4.0000000e+00
2266 | 8.0000000e+00
2267 | 5.0000000e+00
2268 | 9.0000000e+00
2269 | 5.0000000e+00
2270 | 0.0000000e+00
2271 | 2.0000000e+00
2272 | 7.0000000e+00
2273 | 5.0000000e+00
2274 | 2.0000000e+00
2275 | 9.0000000e+00
2276 | 3.0000000e+00
2277 | 4.0000000e+00
2278 | 4.0000000e+00
2279 | 0.0000000e+00
2280 | 5.0000000e+00
2281 | 2.0000000e+00
2282 | 5.0000000e+00
2283 | 5.0000000e+00
2284 | 2.0000000e+00
2285 | 9.0000000e+00
2286 | 8.0000000e+00
2287 | 3.0000000e+00
2288 | 5.0000000e+00
2289 | 2.0000000e+00
2290 | 4.0000000e+00
2291 | 4.0000000e+00
2292 | 6.0000000e+00
2293 | 7.0000000e+00
2294 | 6.0000000e+00
2295 | 4.0000000e+00
2296 | 6.0000000e+00
2297 | 6.0000000e+00
2298 | 7.0000000e+00
2299 | 0.0000000e+00
2300 | 9.0000000e+00
2301 | 6.0000000e+00
2302 | 1.0000000e+00
2303 | 8.0000000e+00
2304 | 8.0000000e+00
2305 | 5.0000000e+00
2306 | 2.0000000e+00
2307 | 1.0000000e+00
2308 | 1.0000000e+00
2309 | 5.0000000e+00
2310 | 2.0000000e+00
2311 | 0.0000000e+00
2312 | 6.0000000e+00
2313 | 9.0000000e+00
2314 | 5.0000000e+00
2315 | 2.0000000e+00
2316 | 3.0000000e+00
2317 | 1.0000000e+00
2318 | 4.0000000e+00
2319 | 1.0000000e+00
2320 | 7.0000000e+00
2321 | 6.0000000e+00
2322 | 9.0000000e+00
2323 | 2.0000000e+00
2324 | 4.0000000e+00
2325 | 6.0000000e+00
2326 | 0.0000000e+00
2327 | 1.0000000e+00
2328 | 0.0000000e+00
2329 | 5.0000000e+00
2330 | 5.0000000e+00
2331 | 5.0000000e+00
2332 | 9.0000000e+00
2333 | 2.0000000e+00
2334 | 0.0000000e+00
2335 | 4.0000000e+00
2336 | 1.0000000e+00
2337 | 1.0000000e+00
2338 | 2.0000000e+00
2339 | 4.0000000e+00
2340 | 3.0000000e+00
2341 | 0.0000000e+00
2342 | 4.0000000e+00
2343 | 2.0000000e+00
2344 | 5.0000000e+00
2345 | 7.0000000e+00
2346 | 6.0000000e+00
2347 | 2.0000000e+00
2348 | 7.0000000e+00
2349 | 7.0000000e+00
2350 | 8.0000000e+00
2351 | 3.0000000e+00
2352 | 9.0000000e+00
2353 | 0.0000000e+00
2354 | 0.0000000e+00
2355 | 2.0000000e+00
2356 | 1.0000000e+00
2357 | 1.0000000e+00
2358 | 2.0000000e+00
2359 | 8.0000000e+00
2360 | 3.0000000e+00
2361 | 4.0000000e+00
2362 | 4.0000000e+00
2363 | 1.0000000e+00
2364 | 5.0000000e+00
2365 | 9.0000000e+00
2366 | 6.0000000e+00
2367 | 1.0000000e+00
2368 | 7.0000000e+00
2369 | 5.0000000e+00
2370 | 8.0000000e+00
2371 | 9.0000000e+00
2372 | 9.0000000e+00
2373 | 6.0000000e+00
2374 | 0.0000000e+00
2375 | 1.0000000e+00
2376 | 1.0000000e+00
2377 | 7.0000000e+00
2378 | 2.0000000e+00
2379 | 8.0000000e+00
2380 | 3.0000000e+00
2381 | 7.0000000e+00
2382 | 7.0000000e+00
2383 | 8.0000000e+00
2384 | 8.0000000e+00
2385 | 8.0000000e+00
2386 | 9.0000000e+00
2387 | 7.0000000e+00
2388 | 6.0000000e+00
2389 | 8.0000000e+00
2390 | 2.0000000e+00
2391 | 4.0000000e+00
2392 | 7.0000000e+00
2393 | 4.0000000e+00
2394 | 1.0000000e+00
2395 | 8.0000000e+00
2396 | 5.0000000e+00
2397 | 0.0000000e+00
2398 | 6.0000000e+00
2399 | 9.0000000e+00
2400 | 2.0000000e+00
2401 | 8.0000000e+00
2402 | 5.0000000e+00
2403 | 2.0000000e+00
2404 | 0.0000000e+00
2405 | 0.0000000e+00
2406 | 5.0000000e+00
2407 | 4.0000000e+00
2408 | 4.0000000e+00
2409 | 3.0000000e+00
2410 | 2.0000000e+00
2411 | 1.0000000e+00
2412 | 0.0000000e+00
2413 | 5.0000000e+00
2414 | 8.0000000e+00
2415 | 5.0000000e+00
2416 | 1.0000000e+00
2417 | 4.0000000e+00
2418 | 7.0000000e+00
2419 | 2.0000000e+00
2420 | 1.0000000e+00
2421 | 4.0000000e+00
2422 | 8.0000000e+00
2423 | 6.0000000e+00
2424 | 3.0000000e+00
2425 | 4.0000000e+00
2426 | 8.0000000e+00
2427 | 1.0000000e+00
2428 | 7.0000000e+00
2429 | 5.0000000e+00
2430 | 1.0000000e+00
2431 | 3.0000000e+00
2432 | 7.0000000e+00
2433 | 1.0000000e+00
2434 | 7.0000000e+00
2435 | 9.0000000e+00
2436 | 0.0000000e+00
2437 | 0.0000000e+00
2438 | 1.0000000e+00
2439 | 9.0000000e+00
2440 | 4.0000000e+00
2441 | 0.0000000e+00
2442 | 4.0000000e+00
2443 | 8.0000000e+00
2444 | 4.0000000e+00
2445 | 8.0000000e+00
2446 | 5.0000000e+00
2447 | 1.0000000e+00
2448 | 7.0000000e+00
2449 | 1.0000000e+00
2450 | 4.0000000e+00
2451 | 2.0000000e+00
2452 | 4.0000000e+00
2453 | 4.0000000e+00
2454 | 9.0000000e+00
2455 | 2.0000000e+00
2456 | 7.0000000e+00
2457 | 3.0000000e+00
2458 | 2.0000000e+00
2459 | 4.0000000e+00
2460 | 6.0000000e+00
2461 | 9.0000000e+00
2462 | 5.0000000e+00
2463 | 1.0000000e+00
2464 | 3.0000000e+00
2465 | 2.0000000e+00
2466 | 9.0000000e+00
2467 | 4.0000000e+00
2468 | 4.0000000e+00
2469 | 3.0000000e+00
2470 | 3.0000000e+00
2471 | 6.0000000e+00
2472 | 9.0000000e+00
2473 | 2.0000000e+00
2474 | 0.0000000e+00
2475 | 6.0000000e+00
2476 | 8.0000000e+00
2477 | 8.0000000e+00
2478 | 7.0000000e+00
2479 | 5.0000000e+00
2480 | 6.0000000e+00
2481 | 6.0000000e+00
2482 | 3.0000000e+00
2483 | 8.0000000e+00
2484 | 0.0000000e+00
2485 | 4.0000000e+00
2486 | 4.0000000e+00
2487 | 6.0000000e+00
2488 | 6.0000000e+00
2489 | 7.0000000e+00
2490 | 3.0000000e+00
2491 | 4.0000000e+00
2492 | 7.0000000e+00
2493 | 7.0000000e+00
2494 | 0.0000000e+00
2495 | 6.0000000e+00
2496 | 1.0000000e+00
2497 | 6.0000000e+00
2498 | 1.0000000e+00
2499 | 6.0000000e+00
2500 | 2.0000000e+00
2501 |
--------------------------------------------------------------------------------
/notebooks/day1/mlp.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "code",
5 | "execution_count": 1,
6 | "metadata": {
7 | "collapsed": false
8 | },
9 | "outputs": [],
10 | "source": [
11 | "%matplotlib inline\n",
12 | "%load_ext autoreload\n",
13 | "%autoreload 2\n",
14 | "\n",
15 | "import pylab\n",
16 | "import sys\n",
17 | "# making sure the dl4mt module is on the path \n",
18 | "# -- this path depends upon the location where the notebook is running\n",
19 | "# here it is assumed to be the day1/ directory\n",
20 | "sys.path.append('../..')\n",
21 | "\n",
22 | "pylab.rcParams['figure.figsize'] = (10.0, 8.0)"
23 | ]
24 | },
25 | {
26 | "cell_type": "code",
27 | "execution_count": 2,
28 | "metadata": {
29 | "collapsed": false
30 | },
31 | "outputs": [
32 | {
33 | "name": "stdout",
34 | "output_type": "stream",
35 | "text": [
36 | "Overwriting ../../dl4mt/mlp.py\n"
37 | ]
38 | }
39 | ],
40 | "source": [
41 | "%%writefile ../../dl4mt/mlp.py\n",
 42 | "# this cell is written to a file (via %%writefile) so it can be imported in later notebooks\n",
43 | "\n",
44 | "# construct an MLP graph using theano, then train and evaluate using static features\n",
45 | "import numpy\n",
46 | "import theano\n",
47 | "import theano.tensor as T\n",
48 | "\n",
49 | "# this is our logistic regression class, created and saved in the LogisticRegression notebook\n",
50 | "from dl4mt.logistic_regression import LogisticRegression\n",
51 | "\n",
52 | "\"\"\"\n",
 53 | "    Most of this code was adapted from the lisa-lab deep learning tutorials\n",
54 | " -- https://github.com/lisa-lab/DeepLearningTutorials\n",
55 | "\n",
56 | "\"\"\"\n",
57 | "\n",
58 | "class HiddenLayer(object):\n",
59 | " def __init__(self, rng, input, n_in, n_out, W=None, b=None,\n",
60 | " activation=T.tanh):\n",
61 | " \"\"\"\n",
 62 | "        Typical hidden layer of an MLP: units are fully-connected and use a\n",
 63 | "        nonlinear activation (tanh by default). Weight matrix W is of shape (n_in, n_out)\n",
64 | " and the bias vector b is of shape (n_out,).\n",
65 | "\n",
66 | " Hidden unit activation is given by: tanh(dot(input,W) + b)\n",
67 | "\n",
68 | " :type rng: numpy.random.RandomState\n",
69 | " :param rng: a random number generator used to initialize weights\n",
70 | "\n",
71 | " :type input: theano.tensor.dmatrix\n",
72 | " :param input: a symbolic tensor of shape (n_examples, n_in)\n",
73 | "\n",
74 | " :type n_in: int\n",
75 | " :param n_in: dimensionality of input\n",
76 | "\n",
77 | " :type n_out: int\n",
78 | " :param n_out: number of hidden units\n",
79 | "\n",
80 | " :type activation: theano.Op or function\n",
81 | " :param activation: Non linearity to be applied in the hidden\n",
82 | " layer\n",
83 | " \"\"\"\n",
84 | " self.input = input\n",
85 | "\n",
 86 | "        # `W` is initialized with `W_values`, which is uniformly sampled\n",
 87 | "        # from the interval [-sqrt(6./(n_in+n_out)), sqrt(6./(n_in+n_out))]\n",
 88 | "        # for the tanh activation function\n",
 89 | "        # the output of uniform is converted using asarray to dtype\n",
 90 | "        # theano.config.floatX so that the code is runnable on GPU\n",
 91 | "        # Note : optimal initialization of weights is dependent on the\n",
 92 | "        #        activation function used (among other things).\n",
 93 | "        #        For example, results presented in [Xavier10] suggest that you\n",
 94 | "        #        should use initial weights 4 times larger for sigmoid\n",
 95 | "        #        than for tanh.\n",
 96 | "        #        We have no information for other functions, so we use the\n",
 97 | "        #        same scheme as for tanh.\n",
98 | " if W is None:\n",
99 | " W_values = numpy.asarray(\n",
100 | " rng.uniform(\n",
101 | " low=-numpy.sqrt(6. / (n_in + n_out)),\n",
102 | " high=numpy.sqrt(6. / (n_in + n_out)),\n",
103 | " size=(n_in, n_out)\n",
104 | " ),\n",
105 | " dtype=theano.config.floatX\n",
106 | " )\n",
107 | " if activation == theano.tensor.nnet.sigmoid:\n",
108 | " W_values *= 4\n",
109 | "\n",
110 | " W = theano.shared(value=W_values, name='W', borrow=True)\n",
111 | "\n",
112 | " if b is None:\n",
113 | " b_values = numpy.zeros((n_out,), dtype=theano.config.floatX)\n",
114 | " b = theano.shared(value=b_values, name='b', borrow=True)\n",
115 | "\n",
116 | " self.W = W\n",
117 | " self.b = b\n",
118 | "\n",
119 | " lin_output = T.dot(input, self.W) + self.b\n",
120 | " self.output = (\n",
121 | " lin_output if activation is None\n",
122 | " else activation(lin_output)\n",
123 | " )\n",
124 | " # parameters of the model\n",
125 | " self.params = [self.W, self.b]\n",
126 | "\n",
127 | "\n",
128 | "class MLP(object):\n",
129 | " \"\"\"Multi-Layer Perceptron Class\n",
130 | "\n",
131 | " A multilayer perceptron is a feedforward artificial neural network model\n",
 132 | "    that has one or more layers of hidden units with nonlinear activations.\n",
 133 | "    Intermediate layers usually use a tanh or sigmoid activation function\n",
 134 | "    (defined here by a ``HiddenLayer`` class), while the\n",
135 | " top layer is a softmax layer (defined here by a ``LogisticRegression``\n",
136 | " class).\n",
137 | " \"\"\"\n",
138 | "\n",
139 | " def __init__(self, rng, input, n_in, n_hidden, n_out):\n",
140 | " \"\"\"Initialize the parameters for the multilayer perceptron\n",
141 | "\n",
142 | " :type rng: numpy.random.RandomState\n",
143 | " :param rng: a random number generator used to initialize weights\n",
144 | "\n",
145 | " :type input: theano.tensor.TensorType\n",
146 | " :param input: symbolic variable that describes the input of the\n",
147 | " architecture (one minibatch)\n",
148 | "\n",
149 | " :type n_in: int\n",
150 | " :param n_in: number of input units, the dimension of the space in\n",
151 | " which the datapoints lie\n",
152 | "\n",
153 | " :type n_hidden: int\n",
154 | " :param n_hidden: number of hidden units\n",
155 | "\n",
156 | " :type n_out: int\n",
157 | " :param n_out: number of output units, the dimension of the space in\n",
158 | " which the labels lie\n",
159 | "\n",
160 | " \"\"\"\n",
161 | "\n",
162 | " # Since we are dealing with a one hidden layer MLP, this will translate\n",
163 | " # into a HiddenLayer with a tanh activation function connected to the\n",
164 | " # LogisticRegression layer; the activation function can be replaced by\n",
165 | " # sigmoid or any other nonlinear function\n",
166 | " self.hiddenLayer = HiddenLayer(\n",
167 | " rng=rng,\n",
168 | " input=input,\n",
169 | " n_in=n_in,\n",
170 | " n_out=n_hidden,\n",
171 | " activation=T.tanh\n",
172 | " )\n",
173 | "\n",
174 | " # The logistic regression layer gets as input the hidden units\n",
175 | " # of the hidden layer\n",
176 | " self.logRegressionLayer = LogisticRegression(\n",
177 | " input=self.hiddenLayer.output,\n",
178 | " n_in=n_hidden,\n",
179 | " n_out=n_out\n",
180 | " )\n",
181 | " \n",
182 | " # alias to logRegressionLayer.y_pred to provide a consistent interface to our models\n",
183 | " self.y_pred = self.logRegressionLayer.y_pred\n",
184 | "\n",
185 | " # L1 norm\n",
186 | " self.L1 = (\n",
187 | " abs(self.hiddenLayer.W).sum()\n",
188 | " + abs(self.logRegressionLayer.W).sum()\n",
189 | " )\n",
190 | "\n",
 191 | "        # squared L2 norm\n",
192 | " self.L2_sqr = (\n",
193 | " (self.hiddenLayer.W ** 2).sum()\n",
194 | " + (self.logRegressionLayer.W ** 2).sum()\n",
195 | " )\n",
196 | "\n",
197 | " # negative log likelihood of the MLP is given by the negative\n",
198 | " # log likelihood of the output of the model, computed in the\n",
199 | " # logistic regression layer\n",
200 | " self.negative_log_likelihood = (\n",
201 | " self.logRegressionLayer.negative_log_likelihood\n",
202 | " )\n",
203 | " # same holds for the function computing the number of errors\n",
204 | " self.errors = self.logRegressionLayer.errors\n",
205 | "\n",
 206 | "        # the parameters of the model are the parameters of the two layers it is\n",
207 | " # made out of\n",
208 | " self.params = self.hiddenLayer.params + self.logRegressionLayer.params\n",
209 | "\n",
210 | " # keep track of model input -- this will be useful for creating theano functions\n",
211 | " self.input = input"
212 | ]
213 | },
214 | {
215 | "cell_type": "code",
216 | "execution_count": 3,
217 | "metadata": {
218 | "collapsed": false
219 | },
220 | "outputs": [],
221 | "source": [
222 | "import numpy\n",
223 | "\n",
224 | "import theano\n",
225 | "import theano.tensor as T\n",
226 | "import timeit\n",
227 | "\n",
228 | "from dl4mt.mlp import MLP\n",
229 | "\n",
230 | "\n",
231 | "def initialize_mlp(train_dataset, dev_dataset, learning_rate=0.01, L1_reg=0.00, L2_reg=0.0001, \n",
232 | " batch_size=10, n_hidden=500):\n",
233 | " \n",
 234 | "    \"\"\"Initialize the functions and parameters needed to train a multilayer perceptron\n",
235 | "\n",
236 | " :type train_dataset: tuple\n",
237 | " :param train_dataset: a tuple consisting of (X,y) for the training dataset\n",
238 | " (X and y are theano shared variables)\n",
239 | " \n",
240 | " :type dev_dataset: tuple\n",
241 | " :param dev_dataset: a tuple consisting of (X,y) for the dev/validation dataset\n",
242 | " (X and y are theano shared variables) \n",
243 | " \n",
244 | " :type learning_rate: float\n",
245 | " :param learning_rate: learning rate used (factor for the stochastic\n",
246 | " gradient)\n",
247 | "\n",
248 | "\n",
249 | " :type n_out: int\n",
 250 | "    :param n_out: the number of output \"classes\"\n",
251 | "\n",
252 | " :type L1_reg: float\n",
253 | " :param L1_reg: L1-norm's weight when added to the cost (see\n",
254 | " regularization)\n",
255 | "\n",
256 | " :type L2_reg: float\n",
257 | " :param L2_reg: L2-norm's weight when added to the cost (see\n",
258 | " regularization)\n",
259 | "\n",
260 | " :type batch_size: int\n",
261 | " :param batch_size: the minibatch size for training and validation \n",
262 | "\n",
263 | " :type n_hidden: int\n",
 264 | "    :param n_hidden: the size of the hidden layer\n",
265 | "\n",
266 | " \"\"\"\n",
267 | "\n",
268 | " train_set_x, train_set_y = train_dataset\n",
269 | " valid_set_x, valid_set_y = dev_dataset\n",
270 | "\n",
 271 | "    # compute the number of minibatches for training and validation\n",
272 | " n_train_batches = train_set_x.get_value(borrow=True).shape[0] / batch_size\n",
273 | " n_valid_batches = valid_set_x.get_value(borrow=True).shape[0] / batch_size\n",
274 | "\n",
275 | " # allocate symbolic variables for the data\n",
276 | " index = T.lscalar() # index to a [mini]batch\n",
 277 | "    x = T.matrix('x')  # the data is presented as a matrix of feature vectors\n",
278 | " y = T.ivector('y') # the labels are presented as 1D vector of\n",
279 | " # [int] labels\n",
280 | "\n",
281 | " rng = numpy.random.RandomState(1234)\n",
282 | " \n",
283 | " # TODO: move these into the parameterization of the training function\n",
284 | " n_in = train_set_x.get_value().shape[1]\n",
285 | " n_out = 12\n",
286 | " \n",
287 | " # construct the MLP class\n",
288 | " classifier = MLP(\n",
289 | " rng=rng,\n",
290 | " input=x,\n",
291 | " n_in=n_in,\n",
292 | " n_hidden=n_hidden,\n",
293 | " n_out=n_out\n",
294 | " )\n",
295 | "\n",
296 | " # the cost we minimize during training is the negative log likelihood of\n",
297 | " # the model plus the regularization terms (L1 and L2); cost is expressed\n",
298 | " # here symbolically\n",
299 | " cost = (\n",
300 | " classifier.negative_log_likelihood(y)\n",
301 | " + L1_reg * classifier.L1\n",
302 | " + L2_reg * classifier.L2_sqr\n",
303 | " )\n",
304 | "\n",
305 | " validate_model_func = theano.function(\n",
306 | " inputs=[index],\n",
307 | " outputs=classifier.errors(y),\n",
308 | " givens={\n",
309 | " x: valid_set_x[index * batch_size:(index + 1) * batch_size],\n",
310 | " y: valid_set_y[index * batch_size:(index + 1) * batch_size]\n",
311 | " }\n",
312 | " )\n",
313 | "\n",
 314 | "    # compute the gradient of cost with respect to theta (stored in params)\n",
315 | " # the resulting gradients will be stored in a list gparams\n",
316 | " gparams = [T.grad(cost, param) for param in classifier.params]\n",
317 | "\n",
318 | " # specify how to update the parameters of the model as a list of\n",
319 | " # (variable, update expression) pairs\n",
320 | " updates = [\n",
321 | " (param, param - learning_rate * gparam)\n",
322 | " for param, gparam in zip(classifier.params, gparams)\n",
323 | " ]\n",
324 | "\n",
 325 | "    # compile a Theano function `train_model` that returns the cost and,\n",
 326 | "    # at the same time, updates the parameters of the model based on the rules\n",
327 | " # defined in `updates`\n",
328 | " train_model_func = theano.function(\n",
329 | " inputs=[index],\n",
330 | " outputs=cost,\n",
331 | " updates=updates,\n",
332 | " givens={\n",
333 | " x: train_set_x[index * batch_size: (index + 1) * batch_size],\n",
334 | " y: train_set_y[index * batch_size: (index + 1) * batch_size]\n",
335 | " }\n",
336 | " )\n",
337 | "\n",
338 | " return (classifier, train_model_func, validate_model_func, n_train_batches, n_valid_batches)"
339 | ]
340 | },
341 | {
342 | "cell_type": "code",
343 | "execution_count": 4,
344 | "metadata": {
345 | "collapsed": false
346 | },
347 | "outputs": [
348 | {
349 | "name": "stdout",
350 | "output_type": "stream",
351 | "text": [
352 | "epoch 1, minibatch 200/200, validation error 51.240000 %\n",
353 | "epoch 1, average training cost 1.70\n",
354 | "epoch 2, minibatch 200/200, validation error 43.420000 %\n",
355 | "epoch 2, average training cost 1.25\n",
356 | "epoch 3, minibatch 200/200, validation error 38.330000 %\n",
357 | "epoch 3, average training cost 1.04\n",
358 | "epoch 4, minibatch 200/200, validation error 34.620000 %\n",
359 | "epoch 4, average training cost 0.91\n",
360 | "epoch 5, minibatch 200/200, validation error 31.020000 %\n",
361 | "epoch 5, average training cost 0.82\n",
362 | "epoch 6, minibatch 200/200, validation error 28.330000 %\n",
363 | "epoch 6, average training cost 0.76\n",
364 | "epoch 7, minibatch 200/200, validation error 26.640000 %\n",
365 | "epoch 7, average training cost 0.71\n",
366 | "epoch 8, minibatch 200/200, validation error 25.290000 %\n",
367 | "epoch 8, average training cost 0.67\n",
368 | "epoch 9, minibatch 200/200, validation error 24.410000 %\n",
369 | "epoch 9, average training cost 0.63\n",
370 | "epoch 10, minibatch 200/200, validation error 23.690000 %\n",
371 | "epoch 10, average training cost 0.60\n",
372 | "Optimization complete!\n",
373 | "Best validation score: 23.690000 %\n"
374 | ]
375 | }
376 | ],
377 | "source": [
378 | "# TODO: allow user to load another distributed representation for comparison, such as PCA\n",
379 | "import os\n",
380 | "\n",
381 | "from dl4mt.datasets import prep_dataset\n",
382 | "from dl4mt.training import train_model\n",
383 | "\n",
384 | "# location of our datasets\n",
385 | "DATASET_LOCATION = '../../datasets/'\n",
386 | "\n",
387 | "# the pos dataset consists of windows around words\n",
388 | "POS_DATASET_NAME = 'brown_pos_dataset.hdf5'\n",
389 | "POS_DATASET_PATH = os.path.join(DATASET_LOCATION, POS_DATASET_NAME)\n",
390 | "WORD_BY_WORD_MATRIX = 'brown.word-by-word.normalized.npy'\n",
391 | "VECTOR_INDEX_PATH = os.path.join(DATASET_LOCATION, WORD_BY_WORD_MATRIX)\n",
392 | "\n",
393 | "CUTOFF = 10000\n",
394 | " \n",
395 | "train_dataset = prep_dataset(POS_DATASET_PATH, VECTOR_INDEX_PATH, which_sets=['train'], cutoff=CUTOFF)\n",
396 | "dev_dataset = prep_dataset(POS_DATASET_PATH, VECTOR_INDEX_PATH, which_sets=['dev'], cutoff=CUTOFF)\n",
397 | "\n",
398 | "# initialize the MLP\n",
399 | "initialization_data = initialize_mlp(train_dataset, dev_dataset, learning_rate=0.01, batch_size=50)\n",
400 | "\n",
401 | "classifier, train_model_func, validate_model_func, n_train_batches, n_valid_batches = initialization_data\n",
402 | "\n",
403 | "# train the MLP model\n",
404 | "train_model(train_model_func, n_train_batches, validate_model=validate_model_func,\n",
405 | " n_valid_batches=n_valid_batches, training_epochs=10)"
406 | ]
407 | },
408 | {
409 | "cell_type": "code",
410 | "execution_count": 5,
411 | "metadata": {
412 | "collapsed": false
413 | },
414 | "outputs": [
415 | {
416 | "data": {
417 | "text/plain": [
418 | "0.79869999999999997"
419 | ]
420 | },
421 | "execution_count": 5,
422 | "metadata": {},
423 | "output_type": "execute_result"
424 | }
425 | ],
426 | "source": [
427 | "import cPickle\n",
428 | "from dl4mt.predict import predict\n",
429 | "\n",
430 | "test_dataset = prep_dataset(POS_DATASET_PATH, VECTOR_INDEX_PATH, which_sets=['test'], cutoff=CUTOFF, cast_y=False)\n",
431 | "\n",
432 | "test_X, test_y = test_dataset\n",
433 | "test_y = test_y.get_value().astype('int32')\n",
434 | "predictions = predict(classifier, test_X.get_value())\n",
435 | "\n",
436 | "CORPUS_INDICES = 'brown_pos_dataset.indices'\n",
437 | "# Indexes for mapping words and tags <--> ints\n",
438 | "with open(os.path.join(DATASET_LOCATION, CORPUS_INDICES)) as indices_file:\n",
439 | " corpus_indices = cPickle.load(indices_file)\n",
440 | "\n",
441 | "# map tag ids back to strings\n",
442 | "y_test_actual = [corpus_indices['idx2tag'][tag_idx] for tag_idx in test_y]\n",
443 | "y_test_hat = [corpus_indices['idx2tag'][tag_idx] for tag_idx in predictions]\n",
444 | "\n",
445 | "# Quick Evaluation\n",
446 | "sum([y==p for y,p in zip(predictions, test_y)]) / float(len(predictions))"
447 | ]
448 | },
449 | {
450 | "cell_type": "code",
451 | "execution_count": 6,
452 | "metadata": {
453 | "collapsed": false
454 | },
455 | "outputs": [
456 | {
457 | "data": {
 458 | "image/png": "<base64-encoded PNG output omitted>"
kqwDTgeOnG+nJomSJEktWtLy05u/dd6X+dZ5Z8/VpEbZT1V9CvhUkoOADwD3mqu9nURJkqRF\n7L4Pfjj3ffDDt8yf+q7jpza5Clg2NL+MQZo4o6o6K8kOSW5fVT+drZ2nmyVJklqUjPc1g/OBeyRZ\nnuQWwKHAaTetKXdPcwPHJPsDzNVBBJNESZKkiVZVG5IcAZwBLAVOrqp1SZ7frD8J+FPgWUnWA9cD\nh823XzuJkiRJLUrLYxJHUVWnM7ggZXjZSUPTbwTeuDX79HSzJEmSpjFJlCRJalFPHt08eUlikmVJ\nvpfkds387Zr5PbuuTZIkqS8mrpNYVVcA7wRe3yx6PXBSVf2gu6okSZIGlpCxvhbKpJ5u/hfga0le\nAhwIvLDjeiRJknplIjuJzaXe/8DgKp7/VVUbu65JkiQJHJO4GDwO+CFw364LkSRJ6puJTBKTPAB4\nDPAw4EtJTq2qH09td9yxq7dMr1i5ihUrVy1UiZIkaYzOP+cszv/Kl7ouY0Z9SRJTNdIzoReN5pEy\nZwNHV9XnmzuMP7SqnjmlXd2wfjKO7c7P/mDXJYzkx6c8c/5GkqReWXfVL7ouYST7L9+Nquq8e5ak\nzrj4f8b6Ho/d5/cW5Fgn8XTzXwGXV9Xnm/l3AHsnOajDmiRJkoDBE1fG+d9CmbjTzVX1buDdQ/Ob\ngAd2V5EkSVL/TFwnUZIkaTFb0vlJ73ZM4ulmSZIkjZlJoiRJUosWctzgOJkkSpIkaRqTREmSpBb1\n5T6JJomSJEmaxiRRkiSpRY5JlCRJUm+ZJEqSJLXI+yRKkiSpt0wSJUmSWuSYREmSJPWWSaIkSVKL\nvE+iJEmSesskUZIkqUU9CRJNEiVJkjSdSaIkSVKLlvRkUKKdxEXgtne8bdclqCObNlXXJahD6zdu\n6rqEkd1yx6Vdl6CO3Lhhcn5P1S47iZIkSS3qR47omERJkiTNwCRRkiSpTT2JEk0SJUmSNI1JoiRJ\nUot8drMkSZJ6yyRRkiSpRT25TaJJoiRJkqYzSZQkSWpRT4JEk0RJkiRNZ5IoSZLUpp5EiSaJkiRJ\nmsYkUZIkqUXeJ1GSJEm9ZZIoSZLUIu+TKEmSpEUhycFJLklyaZKjZlj/jCTfSPLNJF9Ocr/59tlJ\nJzHJpiRvHpp/WZJXNtPvS/KnU9pf3/x/ebPtq4fW3SHJ+iQnLFT9kiRJs8mYX9PeL1kKnAgcDOwD\nHJ5k7ynNvgesqKr7Aa8G3j3fcXSVJN4I/HGS2zfz1bymTjO0bLPvA48fmv9/gItm2EaSJGl7cABw\nWVVdXlXrgVOBQ4YbVNU5VfXzZvZc4A/m22lXncT1DHqwfzu0LLNMT/VrYF2SBzbzTwM+Os82kiRJ\nC2Oho0TYA7hiaP7KZtlsngv8+3yH0eWYxHcAz0iy683Y9lTgsCR/AGwEfthqZZIkSZNj5LOpSR4J\n/AUwbdziVJ1d3VxVv0zyfuBI4IbhVTM1nzJ/BnAc8BPgX8dToSRJ0tZr+z6J559zFud/5ay5mlwF\nLBuaX8YgTbxpXYOLVd4DHFxV1833vl3fAuctwAXAe4eW/RS43eaZJLsD1wxvVFXrk3wN+DsGAzSf\nMtPOjzt29ZbpFStXsWLlqpbKliRJXfraV77EBed+qesyFsSDHnYQD3rYQVvm3/3W109tcj5wjyTL\nGZxdPRQ4fLhBkj2BTwDPrKrLRnnfTjuJVXVdko8yODd+crN4DfCSJKc0gy+fA3xhhs2PB9ZU1c8y\nyw2Jjj5mddslS5KkReCBD30ED3zoI7bMn3zCGzqs5qYW+j6JVbUhyREMzrQuBU6uqnVJnt+sPwk4\nhkEI986m37S+qg6Ya79ddRKHTx8fDxyxZUXVvzUXpXwtyUbgMuCvp25bVRcDFw8t8+pmSZK0Xaqq\n04HTpyw7aWj6L4G/3Jp9dtJJrKpdh6b/B7jNlPXHAsfOsN3lwLSbP1bVKcAprRcqSZK0lfpyuxWf\nuCJJkqRpur5wRZIkqV96EiWaJEqSJGkak0RJkqQWtX2fxK6YJEqSJGkak0RJkqQWLfR9EsfFJFGS\nJEnTmCRKkiS1qCdBokmiJEmSpjNJlCRJalNPokSTREmSJE1jkihJktQi75MoSZKk3jJJlCRJapH3\nSZQkSVJvmSRKkiS1qCdBokmiJEmSpjNJXAQufuMTui5hJD+87oauSxjZnXfbqesSRrJkyeT8e3PD\nxk1dlzCSpRP0mf76xo1dlzCyW+64tOsSemf5C/5v1yWM5PJ3PrXrEibP5HwNzckkUZIkSdOYJEqS\nJLXI+yRKkiSpt0wSJUmSWuR9EiVJktRbJomSJEkt6kmQaJIoSZKk6UwSJUmS2tSTKNEkUZIkSdOY\nJEqSJLXI+yRKkiSpt0wSJUmSWuR9EiVJktRbJomSJEkt6kmQaJIoSZKk6UwSJUmS2tSTKHHBk8Qk\nd05yapLLkpyf5N+S3CPJfZJ8IcklSf4rydFD2zwnycYk9x1adlGSPZvpy5PsvtDHIkmS1FcL2klM\nEuCTwBeq6g+r6kHAy4E7A58GXltV9wbuDxyY5IVDm18J/NPQfM0yLUmS1JmM+b+FstBJ4iOBG6vq\n3ZsXVNW3gHsCX6qq/2yW3QAcwaADCYNO4GeB+yS558KWLEmStP1Z6E7ivsDXZli+z9TlVfU9YOck\nuzSLNgFvBF4x1golSZK2QTLe10JZ6AtX5jotPNth19C6DwP/lGT5KG923LGrt0yvWLmKFStXjbKZ\nJEla5M5cu4Yz167puoxeW+hO4reBp86w/GJgxfCCJHsB11fV9Wm6zVW1Mcnx/O409JyOPmb1NhUr\nSZIWp6nhz2te/aruipmiJxc3L+zp5qr6AnDLJH+1eVmS+wHfAR6R5NHNslsBbwPeMMNu3gc8Brjj\n2AuWJEnaTnVxM+0/Bh7T3ALnIuA1wI+AQ4Cjk1wCfBM4t6re3mxTzYuqWg+8lZt2EncAfrtA9UuS\nJM0uY34tkAW/mXZV/Qg4dJbVj5xlm1OAU4bmTwBOAEhyRyBV9auWS5UkSdpuTfRj+ZI8GTiTEcco\nSpIkjZv3SVwEquq0qtq7qj7YdS2SJEldSXJw89S6S5McNcP6eyc5J8lvkrx0lH367GZJkqQWLeS9\nDAfvl6XAiQwu7L0KOC/JaVW1bqjZT4EXA08Zdb8TnSRKkiSJA4DLqury5gLfUxlcELxFVV1dVecD\n60fdqZ1ESZKkFnVwcfMewBVD81c2y7aJp5slSZIWsXO+tJZzvnzmXE3meqLdzWYnUZIkqUVtj0k8\n8KCVHHjQyi3zb3nja6Y2uQpYNjS/jEGauE083SxJkjTZzgfukWR5klswuB/1abO0HbkLa5IoSZLU\nqoW9vLmqNiQ5AjgDWAqcXFXrkjy/WX9SkjsD5wG7ApuS/A2wT1VdP9t+7SRKkiRNuKo6HTh9yrKT\nhqZ/zE1PSc/LTqIkSVKLFvo+iePimERJkiRNY5IoSZLUop4Ei
SaJkiRJms4kUZIkqUWOSZQkSVJv\nmSQuAn//2XVdlzCSNz9p765LGFkm5J9xVWN5ktJYLF0yGZ/ppPzsATZNzo9fY3D5O5/adQkj+ee1\nl3VdwsRJT0Yl2kmUJElqUz/6iJ5uliRJ0nQmiZIkSS3qSZBokihJkqTpTBIlSZJaNEHXz83JJFGS\nJEnTmCRKkiS1qC+3wDFJlCRJ0jQmiZIkSW3qR5BokihJkqTpTBIlSZJa1JMg0SRRkiRJ05kkSpIk\ntcj7JEqSJKm3TBIlSZJa5H0SJUmS1FuLrpOY5ClJNiW5VzO/PMkNSS5IcnGSc5M8e6j9c5Kc0F3F\nkiRJv5OM97VQFl0nETgc+Gzz/80uq6r9q2of4DDgJUme06yrBa5PkiSp9xZVJzHJzsBDgCOAQ2dq\nU1XfB/4OOHLzZgtTnSRJ0vZjUXUSgUOAz1XVD4Crk+w/S7sLgXsvXFmSJEnbl8XWSTwc+Fgz/bFm\nfqbTyaaHkiRpUerLmMRFcwucJLsDjwT2TVLAUmAT8PYZmu8HXDzfPo87dvWW6RUrV7Fi5ao2SpUk\nSR373tfP5fvfOLfrMnpt0XQSgacC76+qF2xekGQNsOdwoyTLgTcBb2sWzXrhytHHrG65REmStBjs\n9YCHsNcDHrJl/gsfWDw3OunLfRIXUyfxMOD1U5Z9HHg5sFeSC4CdgF8Cb62q9zdtdgB+u2BVSpIk\nbQcWTSexqh41w7ITgPn+abAv8J2xFCVJkrSV+vLs5kXTSbw5kpzO4BiO6boWSZKkPpnoTmJVPa7r\nGiRJkob1JEhcdLfAkSRJ0iIw0UmiJEnSotOTKNEkUZIkSdOYJEqSJLWoL/dJNEmUJEnSNCaJkiRJ\nLerLfRJNEiVJkjSNSaIkSVKLehIkmiRKkiRpOpNESZKkNvUkSjRJlCRJmnBJDk5ySZJLkxw1S5u3\nNeu/kWS/+fZpJ3Ernbl2TdcljOSqi77adQkjm5TPdFLqhMmpdVLqhMmp9ewvre26hJFNymc6KXXC\n5NT6va+f23UJY5Ux/zft/ZKlwInAwcA+wOFJ9p7S5vHAH1bVPYDnAe+c7zjsJG6lSfkDeNW3z+u6\nhJFNymc6KXXC5NQ6KXXC5NR6zpfO7LqEkU3KZzopdcLk1Pr9b/S7k9iBA4DLquryqloPnAocMqXN\nk4FTAKrqXOC2Se40107tJEqSJLUoGe9rBnsAVwzNX9ksm6/NH8x1HHYSJUmSJluN2G5qF3PO7VI1\n6n4nS5J+HpgkSZpRVXV+XfFC9T+GjzXJQ4HVVXVwM/+PwKaqesNQm3cBa6rq1Gb+EmBlVf1ktvfo\n7S1wFsMviiRJ2r501P84H7hHkuXAD4FDgcOntDkNOAI4telU/myuDiL0uJMoSZK0PaiqDUmOAM4A\nlgInV9W6JM9v1p9UVf+e5PFJLgN+Bfz5fPvt7elmSZIk3XxeuLINklmuMVqkktyh6xpGMSmfa5KH\nJ3lB13XMJskTkzy16zr6JMm+SXbuuo6bI8mi+L5P8qgkB3Zdx82V5GFJHtt1HaNIslPXNcwlyQ7N\n/yfiO397tCi+NCZNkgOT3KmqqrmB5aKWgTsDa5NMvW/SopDkzkn2BKjJibc3AX+f5HldFzJVkj8C\nXg9c3XUtfZHkYODTzHPLiMUkyd2SLE+yQ1VtWiQdxZXAn8GWGwBPhOZ79I7Al4Fjk/xJ1zXNJclt\nGXznL8oOeRNavDLJ3SboO3+7sxi+MCbRi4H/DVBVGzuuZV418GPgTQz+UD6h65qGJXkigwG1pyd5\nQ5Ldm+WL+l+XVXUO8EzgRUle2HU9mzWdmQ8A76yqtc2yRf1ZLnZNcvQu4JlVdcnmBGQxS/I44HTg\n1cCnkixdJB3FLwO7wWR8f27WfI9eDbwW+C9gVZI/67isGTX/KPgZ8AngPUkO6LqmGSwHdgVeuDkg\n0OLT9ZfFRBn6cn05cHWS/Zvli/Yv4CT3SnLbJKmq9wGvA17fdMw61/xF9mbg2cATgIcx6IQvykQx\nyf9K8vkkL0ly96o6m8FVZM9bDIli83M9HlgD7Jzk4TD4LBfz7+li1nQQT2LQMfgD2DJIfNF+fza/\nB68A/obB1Yw/Ae4A0HQUF/R3Icljkry4+X28DFie5C5D67OYfz+T3GZo9tvAMuAC4IFJntlNVTNL\n8nvAR5Pctbn9yUnABxdbR7Gqzgfew+A+fS+xo7g4LdovucVo6Mv1OmBH4EnN8kXXmQFIcndgHXA2\n8KHmtMPngZcAxyV5TMf17QL8MXApcGVVXc7gL7a9k9yiy9pm0qRHy4D7AC8CTk7yDuBBDNKav0ly\nWIf17QUczf/f3pnHW1WWe/z7AwFRGcIQpBRzANFwyHIMh3JCcSiHjPvRLFPU0sQhs/Sm5pBamOQU\nqDngRJpD1wkHVLzVRQlNtLxaig2aaXrR1Eh97h/Psz3LfQ4cMNzvOvZ8P5/1OWu96z17P3vttd/1\nvM/0wr7AQfgsfYykTSAVxXeDvEzE+cDBuCV+G7VlC9bBKvcOQtdaHrfMTzez24AVgb3wyeH9klYq\nECrTE1gfOBpXDEYA+1XDX2o8ju6AK11fADCzK4F78M/wIG5RLPa7b8bMngNeAX4U3/VE4IfUQFGU\ntJqkAyV9W9IX8bH/POANUlGsJZndvAiExfBLwOEAZjZf0kjgauBgM7u7oHgdImk5M3tF0vHASNyK\ncCewJ25p2gsYiBffvLGgnJ/A15fsjStaR4ZcB9bxoREWhd2ANaLpWuAkYDYwHlgG+IqZdbpw+hKW\nawywETDZzJ6OtrWAsfhk8GfhHiesyrW5tpJ6xFqjtUJtiV5rmtl9oXxtC2wJzDKzSdGvm5m9VUjM\nDpG0J3AOcAywB3CXmZ0Wk5ptgBFm9kYh2T4CnAz0AZYF/gYMwcel62p4LY/Bx6ZncK/HG8BTwCD8\nGbAbsANwjZldU0hMJC1rZn+vHE8E1gH2NrM/yMujHIqHTMxs9X0raU18vLwaWBlYDle0N8Gv5UF4\nWb4fNMawpDy1mgXXmG7AcOB64BhJ65rZw3jc15rQlqVVB+TFNK+StA5wGq4cPoTPfnfGf4h/BNYD\nJoVFr5XyDQk3+FBgFnAxPvDeCXzSzMaFpaMW96ekjSUdIOlzwEgzuxT4Hb4O5kAzGwMcj1sXzwHu\nbbF82+JxUjPM7GlJ3UMRfBSYgifY7Chpc6iXxUaefXmJpF6lZakSYRC349bDrSUNMbMXgJuB6bib\n8cvgFsVykrYh6cOS+khaxsymAgfirsY/N1ZdMLODgUeBoQXkU8jwJHA/8LyZfQoPL/kR8GBdriWA\npFGS+pjZqcB+uHWuV/w9GzgL+Chutb0Rj7UsJeuHgCckXRFu/WXM7FB8zL9I0spmdjZuUbxC0lot\nVhDXAi4Fvm1mJ5rZl81sL2AOMBMv/nwp8BpwkKRBrZIt6QQzy20BGz67/QjwoTgehc+A5+LZeWcC\nM4ABpWVtknsQbkH4
KT5TWw63cv0Y2LTSb/3GZ2uhbGNwJeoe/CF8K25F/DDuhjoL6Bt9VYNruSPw\nJK4snIdbE8bj4Qb7xjUdW+nfvcXybYcv2L5OHK+KW7x7VfoMj3v1P4Hepa9pB59hmdIyNMmzEx6i\nMQq3Ev0Qtw43zvfD41CvAL5QWt6QaXvc9XkJcCGwfLSPAeYBO8bxPviDeWBheYcCU0pft05knBy/\n9+Xi+Ku4x2AlYBgwDp80tvx334GsI/DM+1/ik4AJwE14KMx9wJXAKtF3PHAN0KNFsgmfBPyx0lYd\nny4HDov9T8bvbYvS339u8f2UFqCuW5MycxduQVg2zu2ClxdpWGmOrYNC0yT/CqEs3AishbtBx+Px\nVbsUkmk73HK4Fe5mGhwDxOMh3+q46/YC4MM1uIbDgIeBUZW2tYE/VQa1fYCrgD0LyNcDOAq3EveN\nazoTOLKDvsNLKwZ133CPQW/gWWBqpf1I4Pux3zP+9sPjaVesgdyj43v/FLAxcC5uWV4qzu8GvBAP\n6nuAtWsgc/9QZjYqLUsncp6LJyz1ieOj8ISVj8Zx0XEf6FbZH44nVf4AWBe3fp4U3/lbwG+i32dx\nhbJnC+Xsg09gricU6spv6WRgYqXvpcDRpb/73OL7KC1AHbcFKDNT8DiUhpWrH+5unASsVgOZNwdG\nN7UNwhXF64DVgAGVQWTZFsu3TgxUW8TxUpVzl4eMwuNTjgUG1eCaDgMuiv3ulUFtbeBp3F0/AI/7\nK6IsxH14CO6qfwKPP6qeX7H6IMltodey8f2uGt/vsXE8GVccrwem4srhCqXlDdl648lp51fa9gYm\nNM7H311wRXFkaZlDHsXvfEhpWZrkGgT0a2prZLY3LIqH4d6kdQvLuiY+GfhOjPNLR9t3cc/HgOg3\nBJ/MbhXHO+Fxtq2Q7/O0WTB7hQJ4A+9Ubr+IJyz2wJ+3U/GY2eL3Q26pJLa/IAtXZi4D/osWmekX\nU+6xeOHk7ZvaB4VieHQcr0IB9zhu6fppDBINC0ev+LsG7npeIY5r4RLFZ+aPAetV2hoyXwrsEPut\ndjEPAzbFJzH9ou3ruNVzjUq/fXHXTUsnBF1xw5M5rgSOw8MwhuCuxlnAT3BLyEa4Nf4qaqAk4h6C\n3sAGwG+Bg6L9AuAlPEHgTtw13h9YurTMTfIvVVqGJnkG4yEEY2mvKJ6LhyA0lO4DgFULyjoCV1yP\nwt25RFgAAA1PSURBVDPGL8Mts0NifPgeriiuHv2rStl7Pl7hk4AJeKjDFDzcpReeQHk2cFP0WyfG\n2E9V/rdlFs7cOt9qkRhQM57CLQb7yQuSvlEJqj8Rn+0MLCVcR0SW2hX4wHVelGxotP8Fd+eOADCz\np8zsby2UbXC87zx88O0OXBuJFf+IYPa/4rWyekTf11olXwfyDpG0VGQKPobfC6MrpRnmx9/XiYLA\n+KSiVfKNwZWUrwPfAuZIWg+YiCsHE6LMxGg8k3GSVTIek/bEtToZVwJ6AUfgrudN8YfuHDN7Gbjf\nzM4EvmheZqQY8oL4PwaGmtksvKj7EZLuwh/EH8GVh1uBzfC4z9dLydsRVii7uiOiVMyzePHxbYFt\nJTV+35gn/DwJfDqOJ5nZ7wvJ2hM4HTjLzM4ws9PMbG98QnA7npQ4GR9rD5HUGx9fgdYUMDfX9m4D\nHsDj44fiNXoPwz1Zj0h6AE9aPMLM7pLULZ4L8xfwskkJSmupddmAwZX9pXEX6A20lQkSPhufRg3i\nkEKm7fGH21TcnTQAH8SepGJRxMveXE3rXcwjcAXq+3hJG/Akmkm4RbZbtO2LD279WilfB/KOxrMu\np+ATgr548sIFuGtsw+i3N57d3FJLQsj3P1SCuvGs6qdpC6A/FHc7PwGsVfoerfsGfCDu0Z3jeOX4\nrewZx8PwGNRTSstakXn7uA+2i+P+8Xck7npuxMs2xq5aWRDrtuEWxHOAQ+N4rxgD9mhc22g/D/hs\nYVmXxpW/qUSYExXPCz5xOCX2NwGGF5b3OuCbsb8Pnhl+H5708wxtCVWiZnH9ufmWlkRA0gjgz5K+\nL+lA8xn3OHyVgp+FRc6AXfGb+dWC4gIgaWfchP8Q/mDYHM9qfBoPWD5Z0omSTsfjPb5jrbcovYIn\n//wO2FXSJXiNuZNxV8kU+WoFhwCHm9n/tVi+t5G0E+5qbMRwLo9nLc/A3eS9gKslXRZ9drUWWhIk\nfQDPVjzJzO6J0jGY2fH4g+H6KGV0GT5T39m8BE6yEMzsRTxG61RJfc3rs70BDAhPwv8CWwN7SPpg\no4xLKeQF8H+CP3hvk7Q6bplf37ws197AgZK+HmMWVjMLYg35K54VvFqM/1fhk9gxwG5hmR+Du/Uf\nLCWkpGG4q3Y9PPxhL3DPS8XbNT3OYWa/MPeGlJC18Ts5BeglaV3c+/E1fPwfjCvcN1VKI1mHL5YU\nJYtp464G/OE6Fa8j+Bd8IH4Yv6kH49nN44F9YzAuRhT1vQZ/UDQKJK+EB9RvAXwZj6cbjrvLro2H\nXQlZz8Td8/sCu+OBzP3wchI/xhMrti2l0EQtxl649fVeM9sz2o/EywONr/RdGfgn8Ja5G7/Vsu6I\nB6VvZWbPS1q6oQBIuht328ySr9HbZdbErQMRonEW7ikYgk8QXmsU+pbU0wq7waKu6Jm4hXsunrBw\nEXCLmZ3R+N7lBerPB7YOJTjpAElr4N6Mx2Ic2BEviv1rPFllO3yCsDZeW/ZwM/t1IVkbdQan4M+q\nLfGY5BvNbFql32dxS/MhwD+tcN1J+RKBU3AjxmFmdn60L2Nmr6aCWH9qUwC6JObV6GfjMUhjcGVm\nf96pzHyXgspME91xxea5iOGw+Aw34EkgHzOzO/HZcREacuFxURfjlrln8UDlO3Cr3ePAPmb2m1Jy\n4sHzr8mXLLxb0rFmdhKuYO8kX+ruddwF+YCZPVNK0Jh1vwXMlLSBmb1YUV7m4QosqSAuPmZ2s6T5\nuJI4OO6J3tYWH1uHFWH64mP2Yfik6/f4g3diRUHcHFdyNimt1NaZmGg/Brwg6QTgTTwMpj+e3X4w\nnjF+Syg6/yylcEvqi1sQzzOzC6PtdtyyuXMokFfiZW/OwN3m/yghazNm9pykb+EJdDfA27Hyr8b5\nVA5rzr+9u7liFj8avx5VZeZx2pSZbUoriJKGxgoAz+ExZ/3NzBSrvZjZXFyh2b2knCFL48cv/GE2\nAbdujDez/fHr+tWSCqKkbfDVPo7DJwQbA1+RNAt/WKyBu0t+icd1FkuoaWBmt+Aru8ySNMB8icgv\n4Fnsz5aVrmtjZnfg1qTpkgZVFMRaPMzCgzETj5GdSBTHj3NvytfCPQ0v1ZIK4kIwXz1nG3y8F57R\nfjUe0/0h3CMzLiYKzxW2yL6GJ6NcK6eneSLg6Xiy1Xa4te4YfNJwU+W5VgdmA48Am8dkpjar6iSd\nk+7mIGI6jsOzAjcAvmFm10ccyPPWwozgBcg3GI8tfBp3OR2DF8kdZWavVPqNx
wPVTy0iaAfI1+y8\nBzjHzE6sWBlLyjQaOAF33QzCSwN9A8+w/jluRTghZr1vNVmVihPyn44H0+8NHFA6DOL9gqRd8ISg\nDUo/0CQNAOY3fuOSlsMnXJfjmaOTcSvYNHwZvnFmNqeQuF2OmChOxK1wg/CC5J8HNsSXivukmb1U\nTsK345Fn4GXMboq2RhhEH7wG7lzgTTObV0cXboRA9DSzYksXJu+OVBIr1FGZaSCpO15C5uPAI2Y2\nSdKPgE/gsZLP4wHN3wD2MrNHignbAWHlGIq7Q14teV1j0H0BTz65MWINz8BjN6fGxGA6cImZfbOU\nnJ0RwfTX4XUca/V9d3UkLVedfBWSoT+eNDUbj5e9IRSAU/DSN2MjgalhAdso74PFJ2J9zwQ2NrO/\nxfjQA68G8WRZ6RxJ43BPx0Qzm10JL9gVj0U/FJgXnqXaPLeaqbNsScekkthEnZSZkKej4OodgV+F\nongUHj+3Cp6VebSZPVRM4AUQCvgZuAJbvG5fPBhOx2O35km6HJ+tX2BeG3MEvqThJsALpe+DBdEI\nAC8tR/LeEDGxm+Fu5IuBu/FlQqcBF5rZ5WFN6mtmfyolZ1cnEpd+gI8HL5SWpxlJA3FjwPJ4guV0\nPNRgEr4M580FxUvex6SS2ESdlJkIrv4rbvWqBlePxdc5fhYvlvymvPDrG6VlXhg1dNnWPqM1SeDt\n8ie7427QXnjYyStmdkRRwd5H1CnMoCMkDcJXzzkIty6vCnw3wqLSQpe8J6SS2AF1UmYkfRovNP01\nvFjuALz+4Hy8EPA9uEWhFvJ2NeR15xoZrc9Vv/sceJM6obYVoE7Cy58Mxwu6zyss2vuGOoQZdEbE\np7+Jx57/oY4xiMn7h1QSuwCdBFc/A2xWOri6KxNJIN/D1w9tef3DJFkUqpOWsCqR9+u/L437ISez\nyXtJKoldhK4QXN2VqburKUkgrdtJkrSWVBK7EHUPru7qdAVXU5IkSZK0ilxxpQthvipED+AO+Yob\nafFagqSCmCRJkiRtpCWxC5IWryRJkiRJ3mtSSUySJEmSJEna8W+/dnOSJEmSJEnSnlQSkyRJkiRJ\nknakkpgkSZIkSZK0I5XEJEmSJEmSpB2pJCZJ8i8j6U1JsyU9LGmqpN7/wmtdLGm32J8sacRC+m4h\naZN38R5PSRqwqO1NfRarsoCk4yXlGstJknQ5UklMkmRJ8KqZrW9mI/F1xQ+snpS0ODVZLTbMbH8z\n+81C+m4FbLq4wjZefzHaF7fPv9I/SZKkFqSSmCTJkmYGsHpY+WZIugGYI6mbpDMkzZT0kKQDwJea\nk3S2pN9Kuh1YofFCku6WtEHsby9plqQHJd0uaSgwDhgfVszNJA2UdE28x0xJm8b/Li9pmqQ5kiYD\n6uxDSLpO0gPxP/s3nZsQ7XdI+mC0rSbplvifeyUNXzKXM0mSpAy54kqSJEuMsBjuANwcTesDa5vZ\n3FAKXzKzDSX1Au6TNA34GDAMGAEMBh4FLoz/N8AkDQQmAaPitfqb2UuSzgdeNrMJ8f5XAGea2X9L\nWhm4FVgL+DZwr5mdFMtb7rcIH+dLZvZiuM5nSrrGzF4ElgXuN7PDJR0Xr31IyDfOzJ6QtBFwLvDp\nd3kpkyRJipNKYpIkS4LekmbH/r3ARcBmwEwzmxvt2wIjJe0ex32BNYBRwBXmlf2fkXRX02sL2BhX\n8uYCmNlLTecbbA2MkN5u6iNp2XiPz8T/3izpxUX4TF+TtGvsrxSyzgTeAq6O9inAT+M9NgV+Unnv\nnovwHkmSJLUllcQkSZYEr5nZ+tWGUJb+3tTvq2Z2e1O/Hejc/buocX0CNjKz+R3I0qmLudJ/S9wK\nuLGZvS5pOrD0At7P8NCdF5uvQZIkSVcmYxKTJGkVtwEHN5JYJA2TtAxuefxcxCyuiCejVDHgl8Dm\nklaJ/21kIL8M9Kn0nQYc2jiQtG7s3guMjbbRwAc6kbUvrvS9LmlN3JLZoBuwR+yPBWaY2cvAkw0r\nacRZrtPJeyRJktSaVBKTJFkSdGTps6b2C/B4w19Jehg4D+huZtcBj8e5S4Cft3shs+eBA3DX7oPA\nlXHqZ8BnGokruIL48UiMeQRPbAE4AVcy5+Bu57l0TEPeW4GlJD0KnAr8otLn78CG8Rm2BE6M9v8A\n9gv55gA7d3J9kiRJao08DChJkiRJkiRJ2khLYpIkSZIkSdKOVBKTJEmSJEmSdqSSmCRJkiRJkrQj\nlcQkSZIkSZKkHakkJkmSJEmSJO1IJTFJkiRJkiRpRyqJSZIkSZIkSTtSSUySJEmSJEna8f9OlUfY\nMzQbNAAAAABJRU5ErkJggg==\n",
459 | "text/plain": [
460 | ""
461 | ]
462 | },
463 | "metadata": {},
464 | "output_type": "display_data"
465 | }
466 | ],
467 | "source": [
468 | "from sklearn.metrics import confusion_matrix\n",
469 | "import numpy as np\n",
470 | "import matplotlib.pyplot as plt\n",
471 | "\n",
472 | "from dl4mt.confusion_matrix import plot_confusion_matrix\n",
473 | "\n",
474 | "# get class names\n",
475 | "class_names = list(set(y_test_actual))\n",
476 | "\n",
477 | "# Compute confusion matrix\n",
478 | "cm = confusion_matrix(y_test_actual, y_test_hat, labels=class_names)\n",
479 | "\n",
480 | "# Normalize the confusion matrix by row (i.e by the number of samples in each class)\n",
481 | "cm_normalized = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]\n",
482 | "\n",
483 | "plt.figure()\n",
484 | "plot_confusion_matrix(cm_normalized, class_names, title='Normalized confusion matrix')\n",
485 | "\n",
486 | "plt.show()"
487 | ]
488 | },
489 | {
490 | "cell_type": "markdown",
491 | "metadata": {
492 | "collapsed": true
493 | },
494 | "source": [
495 | "### Challenges: \n",
496 | "(1)\n",
497 | "- change the MLP class to allow adding multiple hidden layers\n",
498 | "- how can you leverage the HiddenLayer class to do that? \n",
499 | "- how does the performance of the model change as you add layers?\n",
500 | "- hint: run the code in theano_autoencoder.ipynb, then take a look at stacked_autoencoder.ipynb, and understand how multiple hidden layers are created there \n",
501 | "\n",
502 | "(2)\n",
503 | "- compare performance with different regularization weights\n",
504 | "- what effect does setting the L2 regularization weight very low or very high have on the model performance?\n",
505 | "\n",
506 | "(3) \n",
507 | "- make the final hidden layer (before the Logistic layer) of your model have only two dimensions. Add a function to the MLP class which outputs the 2D embedding for a given X, then make a scatter plot of some datapoints to see if you're getting good separation of the different classes. \n",
508 | "- hint -- take a look at theano_autoencoder.ipynb and stacked_autoencoder.ipynb for some plotting code\n"
509 | ]
510 | },
511 | {
512 | "cell_type": "code",
513 | "execution_count": null,
514 | "metadata": {
515 | "collapsed": true
516 | },
517 | "outputs": [],
518 | "source": []
519 | }
520 | ],
521 | "metadata": {
522 | "kernelspec": {
523 | "display_name": "Python 2",
524 | "language": "python",
525 | "name": "python2"
526 | },
527 | "language_info": {
528 | "codemirror_mode": {
529 | "name": "ipython",
530 | "version": 2
531 | },
532 | "file_extension": ".py",
533 | "mimetype": "text/x-python",
534 | "name": "python",
535 | "nbconvert_exporter": "python",
536 | "pygments_lexer": "ipython2",
537 | "version": "2.7.10"
538 | }
539 | },
540 | "nbformat": 4,
541 | "nbformat_minor": 0
542 | }
543 |
--------------------------------------------------------------------------------
/notebooks/day2/accumulating_rnn.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "code",
5 | "execution_count": 1,
6 | "metadata": {
7 | "collapsed": true
8 | },
9 | "outputs": [],
10 | "source": [
11 | "%matplotlib inline\n",
12 | "%load_ext autoreload\n",
13 | "%autoreload 2\n",
14 | "\n",
15 | "import pylab\n",
16 | "import sys\n",
17 | "\n",
18 | "pylab.rcParams['figure.figsize'] = (10.0, 8.0)"
19 | ]
20 | },
21 | {
22 | "cell_type": "code",
23 | "execution_count": 20,
24 | "metadata": {
25 | "collapsed": true
26 | },
27 | "outputs": [],
28 | "source": [
29 | "# verify that an RNN can accumulate its inputs\n",
30 | "import numpy\n",
31 | "import theano\n",
32 | "import theano.tensor as T\n",
33 | "from blocks.bricks.recurrent import SimpleRecurrent\n",
34 | "from blocks.graph import ComputationGraph\n",
35 | "from blocks import initialization\n",
36 | "from blocks.bricks import Identity, Logistic, Tanh, Rectifier"
37 | ]
38 | },
39 | {
40 | "cell_type": "code",
41 | "execution_count": 21,
42 | "metadata": {
43 | "collapsed": false
44 | },
45 | "outputs": [
46 | {
47 | "name": "stdout",
48 | "output_type": "stream",
49 | "text": [
50 | "[[[ 1. 1. 1.]]\n",
51 | "\n",
52 | " [[ 2. 2. 2.]]\n",
53 | "\n",
54 | " [[ 3. 3. 3.]]]\n"
55 | ]
56 | }
57 | ],
58 | "source": [
59 | "x = tensor.tensor3('x')\n",
60 | "rnn = SimpleRecurrent(\n",
61 | " dim=3, activation=Identity(), weights_init=initialization.Identity())\n",
62 | "rnn.initialize()\n",
63 | "h = rnn.apply(x) \n",
64 | " \n",
65 | "f = theano.function([x], h)\n",
66 | "print(f(numpy.ones((3, 1, 3), dtype=theano.config.floatX))) "
67 | ]
68 | },
69 | {
70 | "cell_type": "markdown",
71 | "metadata": {},
72 | "source": [
73 | "### Challenge:\n",
74 | " \n",
75 | "(1)\n",
76 | "- what do the 3 dimensions of the numpy.ones matrix represent? (hint: (batch, features, time), but in what order?\n",
77 | "- you can check the implementation of the SimpleRecurrent transition here: https://github.com/mila-udem/blocks/blob/master/blocks/bricks/recurrent.py#L322-L323\n",
78 | "\n",
79 | "(2) \n",
80 | "- why does a RNN with the \"Identity\" activation function accumulate its inputs? What happens when you change the activation to Logistic or Tanh?"
81 | ]
82 | },
83 | {
84 | "cell_type": "code",
85 | "execution_count": null,
86 | "metadata": {
87 | "collapsed": true
88 | },
89 | "outputs": [],
90 | "source": []
91 | }
92 | ],
93 | "metadata": {
94 | "kernelspec": {
95 | "display_name": "Python 2",
96 | "language": "python",
97 | "name": "python2"
98 | },
99 | "language_info": {
100 | "codemirror_mode": {
101 | "name": "ipython",
102 | "version": 2
103 | },
104 | "file_extension": ".py",
105 | "mimetype": "text/x-python",
106 | "name": "python",
107 | "nbconvert_exporter": "python",
108 | "pygments_lexer": "ipython2",
109 | "version": "2.7.10"
110 | }
111 | },
112 | "nbformat": 4,
113 | "nbformat_minor": 0
114 | }
115 |
--------------------------------------------------------------------------------
/notebooks/day2/create_brown_w2v_index.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "code",
5 | "execution_count": 1,
6 | "metadata": {
7 | "collapsed": true
8 | },
9 | "outputs": [],
10 | "source": [
11 | "# create a w2v index from a filtered version of the Brown corpus from NLTK\n",
12 | "# these vectors will be used as the initial input for our Deep Learning models"
13 | ]
14 | },
15 | {
16 | "cell_type": "code",
17 | "execution_count": 2,
18 | "metadata": {
19 | "collapsed": false
20 | },
21 | "outputs": [],
22 | "source": [
23 | "%matplotlib inline\n",
24 | "%load_ext autoreload\n",
25 | "%autoreload 2\n",
26 | "\n",
27 | "from __future__ import division, print_function\n",
28 | "import codecs\n",
29 | "import os\n",
30 | "import cPickle\n",
31 | "import logging\n",
32 | "from collections import Counter, defaultdict\n",
33 | "\n",
34 | "import nltk\n",
35 | "import numpy as np\n",
36 | "import matplotlib.pyplot as plt\n",
37 | "import pylab\n",
38 | "import pandas as pd\n",
39 | "from scipy.stats import norm\n",
40 | "from gensim.models import Word2Vec\n",
41 | "from fuel.datasets import H5PYDataset\n",
42 | "\n",
43 | "logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)"
44 | ]
45 | },
46 | {
47 | "cell_type": "code",
48 | "execution_count": 3,
49 | "metadata": {
50 | "collapsed": false
51 | },
52 | "outputs": [
53 | {
54 | "name": "stdout",
55 | "output_type": "stream",
56 | "text": [
57 | "15667\n"
58 | ]
59 | }
60 | ],
61 | "source": [
62 | "# load our version of the brown dataset, and get an iterator over all of the documents\n",
63 | "DATASET_LOCATION = '../../datasets/'\n",
64 | "DATASET_NAME = 'brown_pos_dataset.hdf5'\n",
65 | "DATASET_PATH = os.path.join(DATASET_LOCATION, DATASET_NAME)\n",
66 | "\n",
67 | "with open(os.path.join(DATASET_LOCATION, 'brown_pos_dataset.indices')) as indices_file:\n",
68 | " corpus_indices = cPickle.load(indices_file)\n",
69 | " \n",
70 | "# ok lets load the brown corpus, and use the indexes to convert it to ints,\n",
71 | "# then build the W2V index over this corpus\n",
72 | "UNKNOWN_TOKEN = u'_UNK_'\n",
73 | "\n",
74 | "def map_to_unknown(tok, index):\n",
75 | " if tok in index:\n",
76 | " return tok\n",
77 | " else:\n",
78 | " return UNKNOWN_TOKEN\n",
79 | "\n",
80 | "brown_documents = [[w for p in d for w in p] for d in nltk.corpus.brown.paras()]\n",
81 | "print(len(brown_documents))\n",
82 | "\n",
83 | "# Gensim expects strings, so we'll need to map back to indices to use our vectors consistently\n",
84 | "# let's also pad with start and end\n",
85 | "brown_documents = [[u'_START_'] + [map_to_unknown(w, corpus_indices['word2idx']) for w in d] + [u'_END_'] \n",
86 | " for d in brown_documents]"
87 | ]
88 | },
89 | {
90 | "cell_type": "code",
91 | "execution_count": 4,
92 | "metadata": {
93 | "collapsed": true
94 | },
95 | "outputs": [],
96 | "source": [
97 | "def train_word2vec(text_iterator, model_file='w2v_model', workers=6, vec_size=100, min_count=1):\n",
98 | " \"\"\"\n",
99 | " Trains word2vec model using the corpus contained in text_iterator\n",
100 | " \n",
101 | " Parameters:\n",
102 | " Model is stored in \n",
103 | " controls the number of processors Word2Vec can use\n",
104 | " min_count is the minimum number of occurences for a word to be included in the model\n",
105 | " \n",
106 | " Returns:\n",
107 | " The model contains vectors of dimensions (default 100)\n",
108 | " \"\"\"\n",
109 | "\n",
110 | " docs = text_iterator\n",
111 | "\n",
112 | " model = Word2Vec(docs, size=vec_size, workers=workers, iter=1, min_count=min_count) \n",
113 | " model.save(model_file)\n",
114 | " return model"
115 | ]
116 | },
117 | {
118 | "cell_type": "code",
119 | "execution_count": 5,
120 | "metadata": {
121 | "collapsed": false
122 | },
123 | "outputs": [
124 | {
125 | "name": "stderr",
126 | "output_type": "stream",
127 | "text": [
128 | "WARNING:gensim.models.word2vec:consider setting layer size to a multiple of 4 for greater performance\n"
129 | ]
130 | }
131 | ],
132 | "source": [
133 | "EMBEDDING_SIZE = 50\n",
134 | "\n",
135 | "w2v_model = train_word2vec(brown_documents,\n",
136 | " model_file=os.path.join(DATASET_LOCATION, 'brown_w2v_model'),\n",
137 | " vec_size=EMBEDDING_SIZE)\n",
138 | "\n",
139 | "orig_w2v_vectors = w2v_model.syn0"
140 | ]
141 | },
142 | {
143 | "cell_type": "code",
144 | "execution_count": 6,
145 | "metadata": {
146 | "collapsed": false
147 | },
148 | "outputs": [
149 | {
150 | "data": {
151 | "text/plain": [
152 | "set()"
153 | ]
154 | },
155 | "execution_count": 6,
156 | "metadata": {},
157 | "output_type": "execute_result"
158 | }
159 | ],
160 | "source": [
161 | "set(set(corpus_indices['word2idx'].keys())).difference(w2v_model.vocab.keys())"
162 | ]
163 | },
164 | {
165 | "cell_type": "code",
166 | "execution_count": 10,
167 | "metadata": {
168 | "collapsed": false
169 | },
170 | "outputs": [],
171 | "source": [
172 | "# reindex w2v_vectors to correspond to our index\n",
173 | "w2v_index_order = [w2v_model.vocab[w].index \n",
174 | " for w,v in sorted(corpus_indices['word2idx'].items(), key=lambda x: x[1])]\n",
175 | "\n",
176 | "final_w2v_vectors = orig_w2v_vectors[w2v_index_order]\n",
177 | "\n",
178 | "# now persist this version for later\n",
179 | "with open(os.path.join(DATASET_LOCATION, 'brown_w2v_vectors.npy'), 'wb') as outfile:\n",
180 | " np.save(outfile, final_w2v_vectors)\n"
181 | ]
182 | },
183 | {
184 | "cell_type": "markdown",
185 | "metadata": {
186 | "collapsed": true
187 | },
188 | "source": [
189 | "### Challenges: \n",
190 | " \n",
191 | "(1)\n",
192 | "- try using the word2vec index in some of the experiments from day1 (there is a pre-trained index in dl4mt_exercises/datasets)\n",
193 | "- how does the performance compare with the simple embedding we were previously using?"
194 | ]
195 | }
196 | ],
197 | "metadata": {
198 | "kernelspec": {
199 | "display_name": "Python 2",
200 | "language": "python",
201 | "name": "python2"
202 | },
203 | "language_info": {
204 | "codemirror_mode": {
205 | "name": "ipython",
206 | "version": 2
207 | },
208 | "file_extension": ".py",
209 | "mimetype": "text/x-python",
210 | "name": "python",
211 | "nbconvert_exporter": "python",
212 | "pygments_lexer": "ipython2",
213 | "version": "2.7.10"
214 | }
215 | },
216 | "nbformat": 4,
217 | "nbformat_minor": 0
218 | }
219 |
--------------------------------------------------------------------------------
/notebooks/day2/recurrent_transition_with_lookup.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "code",
5 | "execution_count": 1,
6 | "metadata": {
7 | "collapsed": false
8 | },
9 | "outputs": [],
10 | "source": [
11 | "%matplotlib inline\n",
12 | "%load_ext autoreload\n",
13 | "%autoreload 2\n",
14 | "\n",
15 | "import pylab\n",
16 | "import sys\n",
17 | "\n",
18 | "pylab.rcParams['figure.figsize'] = (10.0, 8.0)"
19 | ]
20 | },
21 | {
22 | "cell_type": "code",
23 | "execution_count": 2,
24 | "metadata": {
25 | "collapsed": false
26 | },
27 | "outputs": [
28 | {
29 | "name": "stdout",
30 | "output_type": "stream",
31 | "text": [
32 | "The autoreload extension is already loaded. To reload it, use:\n",
33 | " %reload_ext autoreload\n"
34 | ]
35 | }
36 | ],
37 | "source": [
38 | "from __future__ import print_function, division\n",
39 | "import logging\n",
40 | "import pprint\n",
41 | "import math\n",
42 | "import numpy\n",
43 | "import numpy as np\n",
44 | "import os\n",
45 | "import operator\n",
46 | "import theano\n",
47 | "\n",
48 | "from theano import tensor\n",
49 | "\n",
50 | "from collections import OrderedDict\n",
51 | "\n",
52 | "from theano import function\n",
53 | "from fuel.datasets import IndexableDataset\n",
54 | "from fuel.transformers import Mapping, Batch, Padding, Filter\n",
55 | "from fuel.schemes import (ConstantScheme, ShuffledScheme,\n",
56 | " ShuffledExampleScheme, SequentialExampleScheme,\n",
57 | " BatchSizeScheme)\n",
58 | "from fuel.transformers import Flatten\n",
59 | "from fuel.streams import DataStream\n",
60 | "from blocks.config import config\n",
61 | "from blocks.bricks import Tanh, Initializable, Logistic, Identity\n",
62 | "from blocks.bricks.base import application\n",
63 | "from blocks.bricks import Linear, Rectifier, Softmax\n",
64 | "from blocks.graph import ComputationGraph\n",
65 | "from blocks.bricks.lookup import LookupTable\n",
66 | "from blocks.bricks.recurrent import SimpleRecurrent, GatedRecurrent, LSTM, Bidirectional\n",
67 | "from blocks.bricks.parallel import Fork\n",
68 | "from blocks.bricks.sequence_generators import (\n",
69 | " SequenceGenerator, Readout, SoftmaxEmitter, LookupFeedback)\n",
70 | "from blocks.algorithms import (GradientDescent, Scale,\n",
71 | " StepClipping, CompositeRule, AdaDelta, Adam)\n",
72 | "from blocks.initialization import Orthogonal, IsotropicGaussian, Constant\n",
73 | "from blocks.bricks.cost import CategoricalCrossEntropy\n",
74 | "from blocks.bricks.cost import SquaredError\n",
75 | "from blocks.serialization import load_parameter_values\n",
76 | "from blocks.model import Model\n",
77 | "from blocks.monitoring import aggregation\n",
78 | "from blocks.extensions import FinishAfter, Printing, Timing\n",
79 | "from blocks.extensions.saveload import Checkpoint\n",
80 | "from blocks.extensions.monitoring import TrainingDataMonitoring\n",
81 | "from blocks.main_loop import MainLoop\n",
82 | "from blocks.bricks import WEIGHT\n",
83 | "from blocks.roles import INPUT\n",
84 | "from blocks.filter import VariableFilter\n",
85 | "from blocks.graph import apply_dropout\n",
86 | "from blocks.utils import named_copy, dict_union, shared_floatx_nans, shared_floatx_zeros, shared_floatx\n",
87 | "from blocks.utils import dict_union, shared_floatx_nans, shared_floatx_zeros, shared_floatx\n",
88 | "from blocks.bricks.recurrent import BaseRecurrent\n",
89 | "from blocks.bricks.wrappers import As2D\n",
90 | "from blocks.bricks.base import lazy\n",
91 | "from blocks.bricks.recurrent import recurrent\n",
92 | "from blocks.roles import add_role, INITIAL_STATE\n",
93 | "from blocks.extensions import SimpleExtension\n",
94 | "from blocks.bricks import MLP\n",
95 | "\n",
96 | "%load_ext autoreload\n",
97 | "%autoreload 2"
98 | ]
99 | },
100 | {
101 | "cell_type": "code",
102 | "execution_count": 3,
103 | "metadata": {
104 | "collapsed": false
105 | },
106 | "outputs": [],
107 | "source": [
108 | "# SOME USEFUL STUFF FOR THEANO DEBUGGING\n",
109 | "\n",
110 | "config.recursion_limit = 100000\n",
111 | "floatX = theano.config.floatX\n",
112 | "logger = logging.getLogger(__name__)\n",
113 | "# this is to let the log print in the notebook\n",
114 | "logger.setLevel(logging.DEBUG)\n",
115 | "\n",
116 | "# theano debugging stuff\n",
117 | "theano.config.optimizer='fast_compile'\n",
118 | "# theano.config.optimizer='None'\n",
119 | "theano.config.exception_verbosity='high'\n",
120 | "\n",
121 | "# compute_test_value is 'off' by default, meaning this feature is inactive\n",
122 | "# theano.config.compute_test_value = 'off' # Use 'warn' to activate this feature\n",
123 | "# theano.config.compute_test_value = 'warn' # Use 'warn' to activate this feature"
124 | ]
125 | },
126 | {
127 | "cell_type": "code",
128 | "execution_count": 4,
129 | "metadata": {
130 | "collapsed": true
131 | },
132 | "outputs": [],
133 | "source": [
134 | "# some filter and mapping functions for fuel to use\n",
135 | "def _transpose(data):\n",
136 | " return tuple(array.T for array in data)\n",
137 | "\n",
138 | "# swap the (batch, time) axes to make the shape (time, batch, ...)\n",
139 | "def _swapaxes(data):\n",
140 | " return tuple(array.swapaxes(0,1) for array in data)\n",
141 | "\n",
142 | "def _filter_long(data):\n",
143 | " return len(data[0]) <= 100"
144 | ]
145 | },
146 | {
147 | "cell_type": "code",
148 | "execution_count": 5,
149 | "metadata": {
150 | "collapsed": false
151 | },
152 | "outputs": [],
153 | "source": [
154 | "# this is how you could load a bigger slice of the corpus\n",
155 | "# from nltk.corpus import brown\n",
156 | "# words_by_line, tags_by_line = zip(*[zip(*sen) for sen in list(brown.tagged_sents())[:10000]])\n",
157 | "# for these examples, we need to make sure every seq is the same length to avoid padding/mask issues\n",
158 | "\n",
159 | "# a tiny toy dataset for learning the POS of the ambiguous word \"calls\"\n",
160 | "words_by_line = [['she', 'calls', 'me', 'every', 'day', '.'], \n",
161 | " ['I', 'received', 'two', 'calls', 'yesterday', '.']] *10\n",
162 | "\n",
163 | "\n",
164 | "tags_by_line = [[u'PPS', u'VBZ', u'PPO', u'AT', u'NN', u'.'],\n",
165 | " [u'PPSS', u'VBN', u'CD', u'NNS', u'NR', u'.']] *10\n",
166 | "\n",
167 | "idx2word = dict(enumerate(set([w for l in words_by_line for w in l])))\n",
168 | "word2idx = {v:k for k,v in idx2word.items()}\n",
169 | "\n",
170 | "idx2tag = dict(enumerate(set([t for l in tags_by_line for t in l])))\n",
171 | "tag2idx = {v:k for k,v in idx2tag.items()}\n",
172 | "\n",
173 | "iwords = [[word2idx[w] for w in l] for l in words_by_line]\n",
174 | "itags = [[tag2idx[t] for t in l] for l in tags_by_line]\n",
175 | "\n",
176 | "# now create the fuel dataset\n",
177 | "qe_dataset = IndexableDataset(\n",
178 | " indexables=OrderedDict([('words', iwords), ('tags', itags)]))\n",
179 | "\n",
180 | "# now we're going to progressively wrap data streams with other streams that transform the stream somehow\n",
181 | "qe_dataset.example_iteration_scheme = ShuffledExampleScheme(qe_dataset.num_examples)\n",
182 | "data_stream = qe_dataset.get_example_stream()\n",
183 | "data_stream = Batch(data_stream, iteration_scheme=ConstantScheme(1))\n",
184 | "\n",
185 | "# add padding and masks to the dataset\n",
186 | "# data_stream = Padding(data_stream, mask_sources=('words','tags'))\n",
187 | "data_stream = Padding(data_stream, mask_sources=('words'))\n",
188 | "data_stream = Mapping(data_stream, _swapaxes)\n",
189 | "\n",
190 | "# Example of how the iterator works\n",
191 | "# for batch in list(data_stream.get_epoch_iterator()):\n",
192 | "# print([source.shape for source in batch])"
193 | ]
194 | },
195 | {
196 | "cell_type": "code",
197 | "execution_count": 6,
198 | "metadata": {
199 | "collapsed": false
200 | },
201 | "outputs": [],
202 | "source": [
203 | "tagset_size=len(tag2idx.keys())\n",
204 | "vocab_size=len(word2idx.keys())\n",
205 | "dimension=5\n",
206 | "\n",
207 | "class LookupRecurrent(BaseRecurrent, Initializable):\n",
208 | " \"\"\"The recurrent transition with lookup and feedback \n",
209 | "\n",
210 | " The most well-known recurrent transition: a matrix multiplication,\n",
211 | " optionally followed by a non-linearity.\n",
212 | "\n",
213 | " Parameters\n",
214 | " ----------\n",
215 | " dim : int\n",
216 | " The dimension of the hidden state\n",
217 | " activation : :class:`.Brick`\n",
218 | " The brick to apply as activation.\n",
219 | "\n",
220 | " Notes\n",
221 | " -----\n",
222 | " See :class:`.Initializable` for initialization parameters.\n",
223 | "\n",
224 | " \"\"\"\n",
225 | " @lazy(allocation=['dim'])\n",
226 | " def __init__(self, dim, activation, **kwargs):\n",
227 | " super(LookupRecurrent, self).__init__(**kwargs)\n",
228 | " self.dim = dim\n",
229 | " \n",
230 | " word_lookup = LookupTable(vocab_size, dimension)\n",
231 | " word_lookup.weights_init = IsotropicGaussian(0.01)\n",
232 | " word_lookup.initialize()\n",
233 | " self.word_lookup = word_lookup\n",
234 | "\n",
235 | " # There will be a Softmax on top of this layer\n",
236 | " state_to_output = Linear(name='state_to_output', input_dim=dimension, output_dim=tagset_size)\n",
237 | " state_to_output.weights_init = IsotropicGaussian(0.01)\n",
238 | " state_to_output.biases_init = Constant(0.0)\n",
239 | " state_to_output.initialize()\n",
240 | " self.state_to_output = state_to_output\n",
241 | "\n",
242 | " # note - As2D won't work with masks\n",
243 | " nonlinearity = Softmax()\n",
244 | " wrapper_2D = As2D(nonlinearity.apply)\n",
245 | " wrapper_2D.initialize()\n",
246 | " self.wrapper_2D = wrapper_2D\n",
247 | " \n",
248 | " # a non-linear activation (i.e. Sigmoid, Tanh, ReLU, ...)\n",
249 | " self.activation = activation\n",
250 | " \n",
251 | " self.children = [activation, state_to_output, nonlinearity, \n",
252 | " wrapper_2D, word_lookup]\n",
253 | "\n",
254 | " @property\n",
255 | " def W(self):\n",
256 | " return self.parameters[0]\n",
257 | "\n",
258 | " def get_dim(self, name):\n",
259 | " if name == 'mask':\n",
260 | " return 0\n",
261 | " if name in (LookupRecurrent.apply.sequences +\n",
262 | " LookupRecurrent.apply.states):\n",
263 | " return self.dim\n",
264 | " return super(LookupRecurrent, self).get_dim(name)\n",
265 | "\n",
266 | " # the initial state is the 'original' lookup+feedback\n",
267 | " # the initial state is combined with the first input to produce the first output\n",
268 | " def _allocate(self):\n",
269 | " self.parameters.append(shared_floatx_nans((self.dim, self.dim),\n",
270 | " name=\"W\"))\n",
271 | " add_role(self.parameters[0], WEIGHT)\n",
272 | " \n",
273 | " self.parameters.append(shared_floatx(np.random.random(self.dim,), name=\"initial_state\"))\n",
274 | " add_role(self.parameters[1], INITIAL_STATE)\n",
275 | " \n",
276 | " def _initialize(self):\n",
277 | " self.weights_init.initialize(self.W, self.rng)\n",
278 | " \n",
279 | " def get_predictions(self, inputs):\n",
280 | " linear_mapping = self.state_to_output.apply(inputs)\n",
281 | " readouts = self.wrapper_2D.apply(linear_mapping)\n",
282 | " return readouts\n",
283 | " \n",
284 | " # TODO: change inputs-->states or something more clear\n",
285 | " def get_feedback(self, inputs):\n",
286 | " linear_mapping = self.state_to_output.apply(inputs)\n",
287 | " readouts = self.wrapper_2D.apply(linear_mapping)\n",
288 | " predictions = readouts.argmax(axis=1)\n",
289 | " return self.lookup.feedback(predictions)\n",
290 | " \n",
291 | " @recurrent(sequences=['inputs', 'mask'], states=['states'], outputs=['states'], contexts=[])\n",
292 | " def apply(self, inputs=None, states=None, mask=None):\n",
293 | " \"\"\"Apply the transition.\n",
294 | "\n",
295 | " Parameters\n",
296 | " ----------\n",
297 | " inputs : :class:`~tensor.TensorVariable`\n",
298 | " The 2D inputs, in the shape (batch, features).\n",
299 | " states : :class:`~tensor.TensorVariable`\n",
300 | " The 2D states, in the shape (batch, features).\n",
301 | " mask : :class:`~tensor.TensorVariable`\n",
302 | " A 1D binary array in the shape (batch,) which is 1 if\n",
303 | " there is data available, 0 if not. Assumed to be 1-s\n",
304 | " only if not given.\n",
305 | "\n",
306 | " \"\"\"\n",
307 | " # first compute the current representation (_not_ state) via the standard recurrent transition\n",
308 | " current_representation = self.word_lookup.apply(inputs) + tensor.dot(states, self.W)\n",
309 | " next_states = self.children[0].apply(current_representation)\n",
310 | " \n",
311 | " if mask:\n",
312 | " next_states = (mask[:, None] * next_states +\n",
313 | " (1 - mask[:, None]) * states)\n",
314 | "\n",
315 | " return next_states\n",
316 | "\n",
317 | " # trainable initial state\n",
318 | " @application(outputs=apply.states)\n",
319 | " def initial_states(self, batch_size, *args, **kwargs):\n",
320 | " return tensor.repeat(self.parameters[1][None, :], batch_size, 0)\n",
321 | " "
322 | ]
323 | },
324 | {
325 | "cell_type": "code",
326 | "execution_count": 7,
327 | "metadata": {
328 | "collapsed": false
329 | },
330 | "outputs": [],
331 | "source": [
332 | "# test applying our transition to a batch, and see what we get back\n",
333 | "transition = LookupRecurrent(dim=dimension, activation=Tanh())"
334 | ]
335 | },
336 | {
337 | "cell_type": "code",
338 | "execution_count": 8,
339 | "metadata": {
340 | "collapsed": false
341 | },
342 | "outputs": [],
343 | "source": [
344 | "# this is the cost function that we'll use to train our model\n",
345 | "def get_cost(words,words_mask,targets):\n",
346 | "\n",
347 | "# comment this out if you are using the GatedRecurrent transition\n",
348 | " states = transition.apply(\n",
349 | " **dict_union(inputs=words, mask=words_mask, return_initial_states=True))\n",
350 | " \n",
351 | " output = states[1:]\n",
352 | " output_shape = output.shape\n",
353 | "\n",
354 | " dim1 = output_shape[0] * output_shape[1]\n",
355 | " dim2 = output_shape[2]\n",
356 | " \n",
357 | " y_hat = Softmax().apply(\n",
358 | " transition.state_to_output.apply(\n",
359 | " output.reshape((dim1, dim2))))\n",
360 | " \n",
361 | " # try the blocks crossentropy\n",
362 | " y = targets.flatten()\n",
363 | " costs = theano.tensor.nnet.categorical_crossentropy(y_hat,y)\n",
364 | " \n",
365 | " final_cost = costs.mean()\n",
366 | " \n",
367 | "# return final_cost\n",
368 | " return (final_cost, y_hat, y, costs, final_cost)"
369 | ]
370 | },
371 | {
372 | "cell_type": "code",
373 | "execution_count": 9,
374 | "metadata": {
375 | "collapsed": false
376 | },
377 | "outputs": [],
378 | "source": [
379 | "def get_prediction(words, words_mask):\n",
380 | " \n",
381 | " states = transition.apply(\n",
382 | " **dict_union(inputs=words, mask=words_mask, return_initial_states=True))\n",
383 | " \n",
384 | " # we only care about the RNN states, which are the first and only output\n",
385 | " output = states[1:]\n",
386 | " output_shape = output.shape\n",
387 | " dim1 = output_shape[0] * output_shape[1]\n",
388 | " dim2 = output_shape[2]\n",
389 | " \n",
390 | " y_hat = Softmax().apply(\n",
391 | " transition.state_to_output.apply(\n",
392 | " output.reshape((dim1, dim2))))\n",
393 | "\n",
394 | " predictions = y_hat\n",
395 | "\n",
396 | " return predictions"
397 | ]
398 | },
399 | {
400 | "cell_type": "code",
401 | "execution_count": 10,
402 | "metadata": {
403 | "collapsed": false
404 | },
405 | "outputs": [
406 | {
407 | "name": "stderr",
408 | "output_type": "stream",
409 | "text": [
410 | "/home/chris/programs/anaconda/lib/python2.7/site-packages/theano/scan_module/scan_perform_ext.py:135: RuntimeWarning: numpy.ndarray size changed, may indicate binary incompatibility\n",
411 | " from scan_perform.scan_perform import *\n"
412 | ]
413 | }
414 | ],
415 | "source": [
416 | "words=tensor.lmatrix(\"words\")\n",
417 | "words_mask=tensor.matrix(\"words_mask\")\n",
418 | "targets=tensor.lmatrix(\"tags\")\n",
419 | "# targets_mask=tensor.matrix(\"tags_mask\")\n",
420 | "\n",
421 | "# let's get some feedback from the cost function so we can monitor it\n",
422 | "cost, yhat, y, raw_costs, true_costs = get_cost(words, words_mask, targets)\n",
423 | "\n",
424 | "yhat.name = 'yyhat'\n",
425 | "y.name = 'y_inside'\n",
426 | "raw_costs.name = 'raw_costs'\n",
427 | "true_costs.name = 'true_costs'\n",
428 | "# mlp_cost.name = 'mlp_cost'\n",
429 | "\n",
430 | " \n",
431 | "# can we just get ther computation graph directly here? (without Model)\n",
432 | "cost_cg = ComputationGraph(cost)\n",
433 | "weights = VariableFilter(roles=[WEIGHT])(cost_cg.variables)\n",
434 | "\n",
435 | "cost.name = \"sequence_log_likelihood_cost_regularized\"\n",
436 | "prediction_model = Model(get_prediction(words, words_mask)).get_theano_function()"
437 | ]
438 | },
439 | {
440 | "cell_type": "code",
441 | "execution_count": 11,
442 | "metadata": {
443 | "collapsed": false
444 | },
445 | "outputs": [
446 | {
447 | "name": "stderr",
448 | "output_type": "stream",
449 | "text": [
450 | "INFO:__main__:Parameters:\n",
451 | "[('/lookuprecurrent/state_to_output.b', (11,)),\n",
452 | " ('/lookuprecurrent.initial_state', (5,)),\n",
453 | " ('/lookuprecurrent/lookuptable.W', (10, 5)),\n",
454 | " ('/lookuprecurrent.W', (5, 5)),\n",
455 | " ('/lookuprecurrent/state_to_output.W', (5, 11))]\n"
456 | ]
457 | },
458 | {
459 | "name": "stdout",
460 | "output_type": "stream",
461 | "text": [
462 | "\n",
463 | "-------------------------------------------------------------------------------\n",
464 | "BEFORE FIRST EPOCH\n",
465 | "-------------------------------------------------------------------------------\n",
466 | "Training status:\n",
467 | "\t batch_interrupt_received: False\n",
468 | "\t epoch_interrupt_received: False\n",
469 | "\t epoch_started: True\n",
470 | "\t epochs_done: 0\n",
471 | "\t iterations_done: 0\n",
472 | "\t received_first_batch: False\n",
473 | "\t resumed_from: None\n",
474 | "\t training_started: True\n",
475 | "Log records from the iteration 0:\n",
476 | "\t time_initialization: 0.361103773117\n",
477 | "\n",
478 | "\n",
479 | "-------------------------------------------------------------------------------\n",
480 | "-------------------------------------------------------------------------------\n",
481 | "Training status:\n",
482 | "\t batch_interrupt_received: False\n",
483 | "\t epoch_interrupt_received: False\n",
484 | "\t epoch_started: True\n",
485 | "\t epochs_done: 49\n",
486 | "\t iterations_done: 1000\n",
487 | "\t received_first_batch: True\n",
488 | "\t resumed_from: None\n",
489 | "\t training_started: True\n",
490 | "Log records from the iteration 1000:\n",
491 | "\t average_sequence_log_likelihood_cost_regularized: 2.30121779442\n",
492 | "\t sequence_log_likelihood_cost_regularized: 2.13266158104\n",
493 | "\n",
494 | "\n",
495 | "-------------------------------------------------------------------------------\n",
496 | "-------------------------------------------------------------------------------\n",
497 | "Training status:\n",
498 | "\t batch_interrupt_received: False\n",
499 | "\t epoch_interrupt_received: False\n",
500 | "\t epoch_started: True\n",
501 | "\t epochs_done: 99\n",
502 | "\t iterations_done: 2000\n",
503 | "\t received_first_batch: True\n",
504 | "\t resumed_from: None\n",
505 | "\t training_started: True\n",
506 | "Log records from the iteration 2000:\n",
507 | "\t average_sequence_log_likelihood_cost_regularized: 1.82736539841\n",
508 | "\t sequence_log_likelihood_cost_regularized: 1.3605761528\n",
509 | "\n",
510 | "\n",
511 | "-------------------------------------------------------------------------------\n",
512 | "-------------------------------------------------------------------------------\n",
513 | "Training status:\n",
514 | "\t batch_interrupt_received: False\n",
515 | "\t epoch_interrupt_received: False\n",
516 | "\t epoch_started: True\n",
517 | "\t epochs_done: 149\n",
518 | "\t iterations_done: 3000\n",
519 | "\t received_first_batch: True\n",
520 | "\t resumed_from: None\n",
521 | "\t training_started: True\n",
522 | "Log records from the iteration 3000:\n",
523 | "\t average_sequence_log_likelihood_cost_regularized: 1.14914631844\n",
524 | "\t sequence_log_likelihood_cost_regularized: 0.801165759563\n",
525 | "\n",
526 | "\n",
527 | "-------------------------------------------------------------------------------\n",
528 | "-------------------------------------------------------------------------------\n",
529 | "Training status:\n",
530 | "\t batch_interrupt_received: False\n",
531 | "\t epoch_interrupt_received: False\n",
532 | "\t epoch_started: True\n",
533 | "\t epochs_done: 199\n",
534 | "\t iterations_done: 4000\n",
535 | "\t received_first_batch: True\n",
536 | "\t resumed_from: None\n",
537 | "\t training_started: True\n",
538 | "Log records from the iteration 4000:\n",
539 | "\t average_sequence_log_likelihood_cost_regularized: 0.692778944969\n",
540 | "\t sequence_log_likelihood_cost_regularized: 0.567239224911\n",
541 | "\n",
542 | "\n",
543 | "-------------------------------------------------------------------------------\n",
544 | "-------------------------------------------------------------------------------\n",
545 | "Training status:\n",
546 | "\t batch_interrupt_received: False\n",
547 | "\t epoch_interrupt_received: False\n",
548 | "\t epoch_started: True\n",
549 | "\t epochs_done: 249\n",
550 | "\t iterations_done: 5000\n",
551 | "\t received_first_batch: True\n",
552 | "\t resumed_from: None\n",
553 | "\t training_started: True\n",
554 | "Log records from the iteration 5000:\n",
555 | "\t average_sequence_log_likelihood_cost_regularized: 0.429486840963\n",
556 | "\t sequence_log_likelihood_cost_regularized: 0.340382009745\n",
557 | "\t training_finish_requested: True\n",
558 | "\n",
559 | "\n",
560 | "-------------------------------------------------------------------------------\n",
561 | "TRAINING HAS BEEN FINISHED:\n",
562 | "-------------------------------------------------------------------------------\n",
563 | "Training status:\n",
564 | "\t batch_interrupt_received: False\n",
565 | "\t epoch_interrupt_received: False\n",
566 | "\t epoch_started: True\n",
567 | "\t epochs_done: 249\n",
568 | "\t iterations_done: 5000\n",
569 | "\t received_first_batch: True\n",
570 | "\t resumed_from: None\n",
571 | "\t training_started: True\n",
572 | "Log records from the iteration 5000:\n",
573 | "\t average_sequence_log_likelihood_cost_regularized: 0.429486840963\n",
574 | "\t sequence_log_likelihood_cost_regularized: 0.340382009745\n",
575 | "\t training_finish_requested: True\n",
576 | "\t training_finished: True\n",
577 | "\n"
578 | ]
579 | }
580 | ],
581 | "source": [
582 | "# Construct the main loop and start training!\n",
583 | "\n",
584 | "transition.weights_init = IsotropicGaussian(0.1)\n",
585 | "transition.biases_init = Constant(0.)\n",
586 | "transition.initialize()\n",
587 | "\n",
588 | "# Go through this many batches\n",
589 | "num_batches=5000\n",
590 | "\n",
591 | "from blocks.monitoring import aggregation\n",
592 | "\n",
593 | "batch_cost = cost\n",
594 | "final_cost = aggregation.mean(batch_cost, 1)\n",
595 | "final_cost.name = 'final_cost'\n",
596 | "test_model = Model(final_cost)\n",
597 | "\n",
598 | "cg = ComputationGraph(final_cost)\n",
599 | "\n",
600 | "# note that you must explicitly provide the cost function `cost=...`\n",
601 | "algorithm = GradientDescent(\n",
602 | " cost=final_cost, parameters=cg.parameters,\n",
603 | " step_rule=CompositeRule([StepClipping(10.0), Scale(0.01)])) \n",
604 | "\n",
605 | "# CompositeRule([StepClipping(10.0), Scale(0.01)])) \n",
606 | "\n",
607 | "# CompositeRule([StepClipping(10.0), Adam()]) \n",
608 | "# step_rule=AdaDelta())\n",
609 | "# step_rule=Scale(learning_rate=1e-3)\n",
610 | "# step_rule=AdaDelta()) \n",
611 | "# step_rule=CompositeRule([StepClipping(10.0), Scale(0.01)]))\n",
612 | "\n",
613 | "parameters = test_model.get_parameter_dict()\n",
614 | "logger.info(\"Parameters:\\n\" +\n",
615 | " pprint.pformat(\n",
616 | " [(key, value.get_value().shape) for key, value in parameters.items()],\n",
617 | " width=120))\n",
618 | "\n",
619 | "observables = [cost]\n",
620 | "\n",
621 | "# Some other things you could observe during training\n",
622 | "# algorithm.total_step_norm, algorithm.total_gradient_norm]\n",
623 | "\n",
624 | "# for name, parameter in parameters.items():\n",
625 | "# observables.append(named_copy(\n",
626 | "# parameter.norm(2), name + \"_norm\"))\n",
627 | "# observables.append(named_copy(\n",
628 | "# algorithm.gradients[parameter].norm(2), name + \"_grad_norm\"))\n",
629 | "\n",
630 | "# this will be the prefix of the saved model and log\n",
631 | "save_path='test-lookup-recurrent-model'\n",
632 | "\n",
633 | "average_monitoring = TrainingDataMonitoring(\n",
634 | " observables, prefix=\"average\", every_n_batches=1000)\n",
635 | "\n",
636 | "main_loop = MainLoop(\n",
637 | " model=test_model,\n",
638 | " data_stream=data_stream,\n",
639 | " algorithm=algorithm,\n",
640 | " extensions=[\n",
641 | " Timing(),\n",
642 | " TrainingDataMonitoring(observables, after_batch=True),\n",
643 | " average_monitoring,\n",
644 | " FinishAfter(after_n_batches=num_batches),\n",
645 | " # This is a hook to handle NaN emerging during training -- finish if you see it\n",
646 | "# .add_condition([\"after_batch\"], _is_nan),\n",
647 | " # hook to save the model \n",
648 | "# Checkpoint(save_path, every_n_batches=1000,\n",
649 | "# save_separately=[\"model\", \"log\"]),\n",
650 | " Printing(every_n_batches=1000, after_epoch=False)])\n",
651 | "main_loop.run()\n",
652 | "\n"
653 | ]
654 | },
655 | {
656 | "cell_type": "code",
657 | "execution_count": 13,
658 | "metadata": {
659 | "collapsed": false
660 | },
661 | "outputs": [
662 | {
663 | "data": {
664 | "text/plain": [
665 | "[u'PPS', u'VBN', u'CD', u'NNS']"
666 | ]
667 | },
668 | "execution_count": 13,
669 | "metadata": {},
670 | "output_type": "execute_result"
671 | }
672 | ],
673 | "source": [
674 | "# manually test some examples to get an idea what your model learned\n",
675 | "\n",
676 | "# new_ex = [['she', 'calls', 'me', 'every', 'day']]\n",
677 | "new_ex = [['she', 'received', 'two', 'calls']]\n",
678 | "new_ex_int = [[word2idx[w] for w in l] for l in new_ex]\n",
679 | "example = np.array(new_ex_int).swapaxes(0,1)\n",
680 | "o = prediction_model(example, np.ones(example.shape).astype(theano.config.floatX))[0]\n",
681 | "predictions = [idx2tag[i] for i in o.argmax(axis=1)]\n",
682 | "predictions"
683 | ]
684 | },
685 | {
686 | "cell_type": "markdown",
687 | "metadata": {},
688 | "source": [
689 | "### Challenges:\n",
690 | "\n",
691 | "(1)\n",
692 | "How many batches do you need before the model learns something useful? Any ideas on how to make the training faster?\n",
693 | "\n",
694 | "(2) \n",
695 | "Play with the hyperparameters, embedding size, recurrent transition size, etc... -- how does this affect the model's performance?\n",
696 | "\n",
697 | "(2)\n",
698 | "- come up with your own small training set of ambiguous examples (maybe in your native language)\n",
699 | "- do a qualitative analysis of what the recurrent model learns\n",
700 | "- Note: for this notebook, you need to make sure that your examples are the same length, because we haven't\n",
701 | " because we haven't implemented masking \n",
702 | "\n",
703 | " \n",
704 | " "
705 | ]
706 | }
707 | ],
708 | "metadata": {
709 | "kernelspec": {
710 | "display_name": "Python 2",
711 | "language": "python",
712 | "name": "python2"
713 | },
714 | "language_info": {
715 | "codemirror_mode": {
716 | "name": "ipython",
717 | "version": 2
718 | },
719 | "file_extension": ".py",
720 | "mimetype": "text/x-python",
721 | "name": "python",
722 | "nbconvert_exporter": "python",
723 | "pygments_lexer": "ipython2",
724 | "version": "2.7.10"
725 | }
726 | },
727 | "nbformat": 4,
728 | "nbformat_minor": 0
729 | }
730 |
--------------------------------------------------------------------------------