├── .gitignore ├── theanorc ├── check_data.py ├── pgn_splitter.py ├── README.md ├── reinforcement.py ├── book_parser.py ├── play.py ├── book_builder.py ├── parse_game.py ├── book_play.py ├── train.py ├── book_train.py ├── sunfish.py └── training_1Kx7_2.5M_zero-mean_2016-04-08.log /.gitignore: -------------------------------------------------------------------------------- 1 | 2 | *.pyc 3 | -------------------------------------------------------------------------------- /theanorc: -------------------------------------------------------------------------------- 1 | [global] 2 | floatX = float32 3 | device = gpu 4 | 5 | [nvcc] 6 | fastmath = True 7 | 8 | -------------------------------------------------------------------------------- /check_data.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | import h5py, os, sys 4 | 5 | for r, d, files in os.walk('data') : 6 | for f in sorted(files) : 7 | if not f.endswith('.hdf5') : continue 8 | 9 | print f, 10 | data = h5py.File( os.path.join( r, f ), 'r') 11 | 12 | if sum(data['m'][0]) > 28 : 13 | print 'scaled', 14 | else : 15 | print 'normal', 16 | 17 | if min(data['x'][0]) < 0 : 18 | print 'zero-mean' 19 | else : 20 | print 'positive' 21 | -------------------------------------------------------------------------------- /pgn_splitter.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # 3 | # pgn_splitter.py -- splits a large .pgn file into a bunch of smaller ones 4 | # to make use of multiprocessing, keeping all CPU cores loaded evenly 5 | # instead of one core maxed at 100% processing a single large file. 
6 | 7 | import sys, os 8 | 9 | WAYS_TO_SPLIT = 7 # should load up to 7 cores 10 | 11 | if len(sys.argv) < 2 : 12 | sys.exit('USAGE: pgn_splitter file.pgn') 13 | 14 | if not sys.argv[1].endswith( '.pgn' ) : 15 | sys.exit('Invalid file: ' + sys.argv[1] ) 16 | 17 | num = 0 18 | events = 0 19 | name = sys.argv[1][:-4] 20 | line = '' 21 | 22 | with open( sys.argv[1] ) as fin : 23 | while True : 24 | line = fin.readline() 25 | if len(line) == 0 : break 26 | if line.startswith( '[Event' ) : 27 | events += 1 28 | 29 | split_threshold = events / WAYS_TO_SPLIT + 4 30 | print events, 'events, splitting', WAYS_TO_SPLIT, 'ways by', split_threshold 31 | 32 | events = 0 33 | fin.seek( 0 ) # rewind 34 | 35 | while True : 36 | new_name = '%s.%03d.pgn' % (name, num) 37 | print new_name 38 | with open( new_name, 'wb' ) as fout : 39 | if len(line) : 40 | fout.write(line) 41 | num += 1 42 | while True : 43 | line = fin.readline() 44 | if len(line) == 0 : break 45 | 46 | if line.startswith( '[Event' ) : 47 | events += 1 48 | if events > split_threshold : 49 | events = 0 50 | break 51 | 52 | fout.write(line) 53 | 54 | if events : break 55 | 56 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | ### deep-murasaki 2 | 3 | A deep learning chess engine that has no idea about chess rules, but watches and learns. 4 | 5 | Heavily based on [deep-pink](https://github.com/erikbern/deep-pink) by [Erik Bernhardsson](http://erikbern.com). The main idea was to improve the network configuration from 3 very costly (in terms of memory size and training time) FC layers to multiple, less densely connected layers that could give the same or better results using only a fraction of the memory and converge much faster during training. 
6 | 7 | I could have started from scratch, but was too busy/lazy to reimplement a chess framework for move generation and evaluation myself, especially when it's already done and generously open-sourced (thanks, Erik!). 8 | 9 | There's no pretrained model in the repository, because the model configuration has not been finalized yet. Training is a three-step process: 10 | 11 | 1. use `parse_game.py` on a bunch of .PGN formatted games to extract the data and save the results in HDF5 format. 12 | 2. use `train.py` to learn from the saved data as much as possible (a GPU is a must for this step). 13 | 3. (optionally) use `reinforcement.py` to learn while playing against another chess program (sunfish). 14 | 15 | ### The requirements 16 | 17 | * [Keras](http://keras.io/): `sudo pip install keras`, which gives us a choice between Theano and TensorFlow as the backend. I used Theano; if you prefer TensorFlow, there's a [backend configuration guide](http://keras.io/backend/) 18 | * [Theano](https://github.com/Theano/Theano): `git clone https://github.com/Theano/Theano; cd Theano; python setup.py install` to get the newest version; old versions have various compatibility issues, and the latest available on PyPI is 0.7 (quite dated). 19 | * [Sunfish](https://github.com/thomasahle/sunfish): `git clone https://github.com/thomasahle/sunfish`. I have already included `sunfish.py` in this project, but you might want to get a newer version. 20 | * [python-chess](https://pypi.python.org/pypi/python-chess): `pip install python-chess` 21 | * [scikit-learn](http://scikit-learn.org/stable/install.html) (only needed for training) 22 | * [h5py](http://www.h5py.org/): can be installed using `apt-get install python-h5py` or `pip install h5py` (only needed for training) 23 | * A relatively recent NVIDIA card, the bigger the better. Training could be a major pain without it. 
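The reinforcement step samples each move from a softmax over the network's scores of all legal successor positions (see `weighted_random_sample` in `reinforcement.py`). A minimal self-contained sketch of that sampling scheme in plain Python, with no Theano dependency; the raw scores below are made-up placeholders, not real network outputs:

```python
import math, random

def softmax(scores):
    # numerically stable softmax: subtract the max before exponentiating
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def weighted_random_sample(ps):
    # walk the cumulative probability mass, as reinforcement.py does
    r = random.random()
    for i, p in enumerate(ps):
        r -= p
        if r < 0:
            return i
    return len(ps) - 1  # guard against floating-point round-off

scores = [0.1, 1.3, -0.7]   # one made-up score per legal move
ps = softmax(scores)
move_index = weighted_random_sample(ps)
```

Note the round-off guard on the last line of the sampler: the version in `reinforcement.py` falls through the loop and returns `None` when the accumulated probabilities come up just short of 1.0.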
24 | -------------------------------------------------------------------------------- /reinforcement.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | import train 4 | import theano 5 | import theano.tensor as T 6 | # import chess, chess.pgn 7 | from parse_game import bb2array 8 | import numpy 9 | import random 10 | import pickle 11 | from theano.tensor.nnet import sigmoid 12 | import sunfish 13 | from play import sf2array 14 | import os 15 | import time 16 | import math 17 | 18 | 19 | def dump(Ws_s, bs_s): 20 | f = open('model_reinforcement.pickle', 'w') 21 | def values(zs): 22 | return [z.get_value(borrow=True) for z in zs] 23 | pickle.dump((values(Ws_s), values(bs_s)), f) 24 | 25 | 26 | def get_params(fns): 27 | for fn in fns: 28 | if os.path.exists(fn): 29 | print 'loading', fn 30 | f = open(fn) 31 | Ws, bs = pickle.load(f) 32 | return Ws, bs 33 | 34 | 35 | def get_predict(Ws_s, bs_s): 36 | x, p = train.get_model(Ws_s, bs_s) 37 | 38 | predict = theano.function( 39 | inputs=[x], 40 | outputs=p) 41 | 42 | return predict 43 | 44 | 45 | def get_update(Ws_s, bs_s): 46 | x, fx = train.get_model(Ws_s, bs_s) 47 | 48 | # Ground truth (who won) 49 | y = T.vector('y') 50 | 51 | # Compute loss (just log likelihood of a sigmoid fit) 52 | y_pred = sigmoid(fx) 53 | loss = -( y * T.log(y_pred) + (1 - y) * T.log(1 - y_pred)).mean() 54 | 55 | # Metrics on the number of correctly predicted ones 56 | frac_correct = ((fx > 0) * y + (fx < 0) * (1 - y)).mean() 57 | 58 | # Updates 59 | learning_rate_s = T.scalar(dtype=theano.config.floatX) 60 | momentum_s = T.scalar(dtype=theano.config.floatX) 61 | updates = train.nesterov_updates(loss, Ws_s + bs_s, learning_rate_s, momentum_s) 62 | 63 | f_update = theano.function( 64 | inputs=[x, y, learning_rate_s, momentum_s], 65 | outputs=[loss, frac_correct], 66 | updates=updates, 67 | ) 68 | 69 | return f_update 70 | 71 | 72 | def weighted_random_sample(ps): 73 | r = 
random.random() 74 | for i, p in enumerate(ps): 75 | r -= p 76 | if r < 0: 77 | return i 78 | 79 | 80 | def game(f_pred, f_train, learning_rate, momentum=0.9): 81 | pos = sunfish.Position(sunfish.initial, 0, (True,True), (True,True), 0, 0) 82 | 83 | data = [] 84 | 85 | max_turns = 100 86 | 87 | for turn in xrange(max_turns): 88 | # Generate all possible moves 89 | Xs = [] 90 | new_poss = [] 91 | for move in pos.gen_moves(): 92 | new_pos = pos.move(move) 93 | Xs.append(sf2array(new_pos, False)) 94 | new_poss.append(new_pos) 95 | 96 | # Calculate softmax probabilities 97 | ys = f_pred(Xs) 98 | zs = numpy.exp(ys) 99 | Z = sum(zs) 100 | ps = zs / Z 101 | i = weighted_random_sample(ps) 102 | 103 | # Append moves 104 | data.append((turn % 2, Xs[i])) 105 | pos = new_poss[i] 106 | 107 | if pos.board.find('K') == -1: 108 | break 109 | 110 | if turn == 0 and random.random() < 0.01: 111 | print ys 112 | 113 | if turn == max_turns - 1: 114 | return 115 | 116 | # White moves all even turns 117 | # If turn is even, it means white just moved, and black is up next 118 | # That means if turn is even, all even (black) boards are losses 119 | # If turn is odd, all odd (white) boards are losses 120 | win = (turn % 2) # 0 = white, 1 = black 121 | 122 | X = numpy.array([x for t, x in data], dtype=theano.config.floatX) 123 | Y = numpy.array([(t ^ win) for t, x in data], dtype=theano.config.floatX) 124 | 125 | loss, frac_correct = f_train(X, Y, learning_rate, momentum) 126 | 127 | return len(data), loss, frac_correct 128 | 129 | 130 | def main(): 131 | Ws, bs = get_params(['model_reinforcement.pickle', 'model.pickle']) 132 | Ws_s, bs_s = train.get_parameters(Ws=Ws, bs=bs) 133 | f_pred = get_predict(Ws_s, bs_s) 134 | f_train = get_update(Ws_s, bs_s) 135 | 136 | i, n, l, c = 0, 0.0, 0.0, 0.0 137 | 138 | base_learning_rate = 1e-2 139 | t0 = time.time() 140 | 141 | while True: 142 | learning_rate = base_learning_rate * math.exp(-(time.time() - t0) / 86400) 143 | r = game(f_pred, f_train, 
learning_rate) 144 | if r is None: 145 | continue 146 | i += 1 147 | n_t, l_t, c_t = r 148 | n = n*0.999 + n_t 149 | l = l*0.999 + l_t*n_t 150 | c = c*0.999 + c_t*n_t 151 | print '%6d %9.5f %9.5f %9.5f' % (i, learning_rate, l / n, c / n) 152 | 153 | if i % 100 == 0: 154 | print 'dumping model...' 155 | dump(Ws_s, bs_s) 156 | 157 | 158 | if __name__ == '__main__': 159 | main() 160 | 161 | -------------------------------------------------------------------------------- /book_parser.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | import chess, chess.pgn 4 | import numpy 5 | import sys 6 | import os, time 7 | import multiprocessing 8 | import itertools 9 | import random 10 | import h5py 11 | 12 | DATA_FOLDER = 'data' 13 | 14 | if not os.path.isdir( DATA_FOLDER ) : 15 | sys.exit(DATA_FOLDER + ' is not accessible') 16 | 17 | def read_games(fn): 18 | data = [i.strip() for i in open(fn).readlines() if not i.startswith('#')] 19 | 20 | for g in data : 21 | yield g 22 | 23 | def bb2array(b, flip=False): 24 | x = numpy.zeros(64, dtype=numpy.int8) 25 | 26 | # for pos, piece in enumerate(b.pieces()): # broken in pychess v0.13.2, hence the next two lines 27 | for pos in range(64) : 28 | piece = b.piece_type_at(pos) 29 | 30 | # don't do that for bitboards 31 | # if piece == 2 : piece = 3 32 | # if piece == 5 : piece = 10 # increase values for q/k 33 | # if piece == 6 : piece = 100 34 | 35 | if piece : 36 | color = int(bool(b.occupied_co[chess.BLACK] & chess.BB_SQUARES[pos])) 37 | col = int(pos % 8) 38 | row = int(pos / 8) 39 | if flip: 40 | row = 7-row 41 | color = 1 - color 42 | 43 | #piece = color*7 + piece 44 | 45 | #x[row * 8 + col] = piece 46 | # x[row * 8 + col] = -piece if color else piece 47 | x[row * 8 + col] = (piece+6) if color else piece 48 | 49 | return numpy.array(x).reshape((8,8)) 50 | 51 | def spread( b, limit ) : 52 | x = numpy.zeros(64 * limit, dtype=numpy.bool) 53 | bb = b.reshape(64) 54 | 
for i in range(64) : 55 | val = bb[i] 56 | assert( val >= 0 ) 57 | if val == 0 : continue 58 | if val > limit : val = limit 59 | x[i + 64 * (val-1)] = True 60 | 61 | return x.reshape((-1,8,8)) 62 | 63 | def attacks( b, side ) : 64 | x = [] 65 | for i in range(64) : 66 | x.append( bin(b.attackers( side, i)).count('1') ) 67 | 68 | return numpy.array(x).reshape((8,8)) 69 | 70 | letters = { 'a' : 1, 'b' : 2, 'c' : 3, 'd' : 4, 'e' : 5, 'f' : 6, 'g' : 7, 'h' : 8 } 71 | numbers = { '1' : 1, '2' : 2, '3' : 3, '4' : 4, '5' : 5, '6' : 6, '7' : 7, '8' : 8 } 72 | 73 | def numeric_notation( move ) : 74 | m = numpy.zeros( 4, dtype=numpy.int8) 75 | m[0] = letters[move[0]] 76 | m[1] = numbers[move[1]] 77 | m[2] = letters[move[2]] 78 | m[3] = numbers[move[3]] 79 | return m 80 | 81 | def parse_game(g): 82 | 83 | fen, moves = g.split('{') 84 | 85 | board = chess.Board( fen ) 86 | moves = [m.split(':') for m in moves[:-1].split(', ') if len(m) > 1] 87 | 88 | result = [] 89 | for m in moves : 90 | board.push_san( m[0] ) 91 | brd = spread( bb2array(board), 12 ) 92 | aw = spread( attacks( board, chess.WHITE), 8) 93 | ab = spread( attacks( board, chess.BLACK), 8) 94 | result.append( (numpy.vstack( [brd, aw, ab] ), float(m[1]) / 1000.0 / 100.0) ) 95 | # result.append( (numpy.array( [bb2array(board), attacks( board, chess.WHITE), -attacks( board, chess.BLACK)]), float(m[1]) / 1000.0) ) 96 | board.pop() 97 | 98 | return result 99 | 100 | def read_all_games(fn_in, fn_out): 101 | g = h5py.File(fn_out, 'w') 102 | X = g.create_dataset('x', (0, 28 * 8), dtype='b', maxshape=(None, 28 * 8), chunks=True) # dtype='b' 103 | M = g.create_dataset('m', (0, 1), dtype='float32', maxshape=(None, 1), chunks=True) 104 | size = 0 105 | line = 0 106 | for game in read_games(fn_in): 107 | game = parse_game(game) 108 | if game is None: 109 | continue 110 | 111 | for x, m in game : 112 | if line + 1 >= size: 113 | g.flush() 114 | size = 2 * size + 1 115 | print 'resizing to', size 116 | [d.resize(size=size, 
axis=0) for d in (X, M)] 117 | 118 | X[line] = numpy.packbits(x) 119 | M[line] = m 120 | 121 | line += 1 122 | 123 | print 'shrink to', line 124 | [d.resize(size=line, axis=0) for d in (X, M)] # shrink to fit 125 | g.close() 126 | 127 | def read_all_games_2(a): 128 | return read_all_games(*a) 129 | 130 | def parse_dir(): 131 | files = [] 132 | 133 | for fn_in in os.listdir('.'): 134 | #print fn_in, 135 | if not fn_in.endswith('.txt'): 136 | continue 137 | if not fn_in.startswith('ficsgamesdb_'): 138 | continue 139 | 140 | fn_out = os.path.join(DATA_FOLDER, fn_in.replace('.txt', '.hdf5')) 141 | if not os.path.exists(fn_out) : 142 | files.append((fn_in, fn_out)) 143 | 144 | print files 145 | if len(files) : 146 | pool = multiprocessing.Pool() 147 | pool.map(read_all_games_2, files) 148 | pool.close() 149 | 150 | #read_all_games( files[0][0], files[0][1] ) 151 | 152 | def pretty_time( t ) : 153 | if t > 86400 : 154 | return '%.2fd' % (t / 86400) 155 | if t > 3600 : 156 | return '%.2fh' % (t / 3600) 157 | if t > 60 : 158 | return '%.2fm' % (t / 60) 159 | return '%.2fs' % t 160 | 161 | if __name__ == '__main__': 162 | start = time.time() 163 | parse_dir() 164 | print 'done in', pretty_time(time.time() - start) 165 | 166 | -------------------------------------------------------------------------------- /play.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | import train 4 | import pickle 5 | import math 6 | import chess, chess.pgn 7 | from parse_game import numeric_notation 8 | from parse_game import bb2array 9 | import heapq 10 | import time 11 | import re 12 | import string 13 | import numpy 14 | import sunfish 15 | import pickle 16 | import random 17 | import traceback 18 | 19 | def create_move(board, crdn): 20 | # workaround for pawn promotions 21 | move = chess.Move.from_uci(crdn) 22 | if board.piece_at(move.from_square).piece_type == chess.PAWN: 23 | if int(move.to_square/8) in [0, 7]: 24 | 
move.promotion = chess.QUEEN # always promote to queen 25 | return move 26 | 27 | class Player(object): 28 | def move(self, gn_current): 29 | raise NotImplementedError() 30 | 31 | 32 | class Murasaki(Player): 33 | def __init__(self): 34 | self._model = train.make_model() 35 | self._model.compile(loss='mean_squared_error', optimizer='adadelta') 36 | 37 | def move(self, gn_current): 38 | assert(gn_current.board().turn == True) 39 | 40 | color = 0 41 | 42 | # X = numpy.array([sf2array(self._pos, flip=(color==1)),]) 43 | X = numpy.array([bb2array( gn_current.board(), flip=(color==1) )]) 44 | #print X 45 | predicted = self._model.predict( X ) 46 | print predicted 47 | 48 | best_move = "" 49 | best_value = 1e6 50 | for move in gn_current.board().generate_legal_moves() : 51 | notation = numeric_notation(str(move)) 52 | value = sum([(i-j)*(i-j) for i,j in zip(predicted[0],notation)]) 53 | #print value, best_value 54 | if best_value > value : 55 | best_value = value 56 | best_move = move 57 | #print 58 | 59 | print 'best:', best_value, str(best_move) 60 | 61 | move = create_move(gn_current.board(), str(best_move)) # consider promotions 62 | 63 | gn_new = chess.pgn.GameNode() 64 | gn_new.parent = gn_current 65 | gn_new.move = move 66 | 67 | return gn_new 68 | 69 | 70 | class Human(Player): 71 | def move(self, gn_current): 72 | bb = gn_current.board() 73 | 74 | print bb 75 | 76 | def get_move(move_str): 77 | try: 78 | move = chess.Move.from_uci(move_str) 79 | except: 80 | print 'cant parse' 81 | return False 82 | if move not in bb.legal_moves: 83 | print 'not a legal move' 84 | return False 85 | else: 86 | return move 87 | 88 | while True: 89 | print 'your turn:' 90 | move = get_move(raw_input()) 91 | if move: 92 | break 93 | 94 | gn_new = chess.pgn.GameNode() 95 | gn_new.parent = gn_current 96 | gn_new.move = move 97 | 98 | return gn_new 99 | 100 | 101 | class Sunfish(Player): 102 | def __init__(self, maxn=1e4): 103 | self._pos = sunfish.Position(sunfish.initial, 0, 
(True,True), (True,True), 0, 0) 104 | self._maxn = maxn 105 | 106 | def move(self, gn_current): 107 | import sunfish 108 | 109 | assert(gn_current.board().turn == False) 110 | 111 | # Apply last_move 112 | crdn = str(gn_current.move) 113 | move = (sunfish.parse(crdn[0:2]), sunfish.parse(crdn[2:4])) 114 | self._pos = self._pos.move(move) 115 | 116 | t0 = time.time() 117 | move, score = sunfish.search(self._pos, maxn=self._maxn) 118 | print time.time() - t0, move, score 119 | self._pos = self._pos.move(move) 120 | 121 | crdn = sunfish.render(119-move[0]) + sunfish.render(119 - move[1]) 122 | move = create_move(gn_current.board(), crdn) 123 | 124 | gn_new = chess.pgn.GameNode() 125 | gn_new.parent = gn_current 126 | gn_new.move = move 127 | 128 | return gn_new 129 | 130 | def game(): 131 | gn_current = chess.pgn.Game() 132 | 133 | maxn = 10 ** (2.0 + random.random() * 1.0) # max nodes for sunfish 134 | 135 | print 'maxn %f' % maxn 136 | 137 | player_a = Murasaki() 138 | # player_b = Human() 139 | player_b = Sunfish(maxn=maxn) 140 | 141 | times = {'A': 0.0, 'B': 0.0} 142 | 143 | while True: 144 | for side, player in [('A', player_a), ('B', player_b)]: 145 | t0 = time.time() 146 | try: 147 | gn_current = player.move(gn_current) 148 | except KeyboardInterrupt: 149 | return 150 | except: 151 | traceback.print_exc() 152 | return side + '-exception', times 153 | 154 | times[side] += time.time() - t0 155 | print '=========== Player %s: %s' % (side, gn_current.move) 156 | s = str(gn_current.board()) 157 | print s 158 | if gn_current.board().is_checkmate(): 159 | return side, times 160 | elif gn_current.board().is_stalemate(): 161 | return '-', times 162 | elif gn_current.board().can_claim_fifty_moves(): 163 | return '-', times 164 | elif s.find('K') == -1 or s.find('k') == -1: 165 | # Both AI's suck at checkmating, so also detect capturing the king 166 | return side, times 167 | 168 | def play(): 169 | while True: 170 | side, times = game() 171 | f = open('stats.txt', 'a') 
172 | f.write('%s %f %f\n' % (side, times['A'], times['B'])) 173 | f.close() 174 | 175 | if __name__ == '__main__': 176 | # play() 177 | game() 178 | 179 | -------------------------------------------------------------------------------- /book_builder.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | import sys, os, random, json, time, datetime 4 | 5 | import hashlib 6 | 7 | import chess 8 | import chess.uci 9 | import chess.pgn 10 | 11 | from chess.polyglot import zobrist_hash # used to be Board.zobrist_hash() 12 | 13 | board = chess.Board() 14 | 15 | engine = chess.uci.popen_engine('stockfish') 16 | engine.uci() 17 | engine.isready() 18 | engine.setoption( {'Threads' : 24 } ) 19 | engine.isready() 20 | engine.setoption( {'Hash' : 8192, 'Threads' : 24 } ) 21 | #print engine.options 22 | #sys.exit(1) 23 | 24 | info_handler = chess.uci.InfoHandler() 25 | engine.info_handlers.append(info_handler) 26 | 27 | already_played = set() 28 | if os.path.isfile( 'already_played.json' ) : 29 | with open( 'already_played.json' ) as fin : 30 | already_played = set( json.load(fin) ) 31 | 32 | already_have = set() 33 | _, _, files = os.walk('.').next() 34 | for f in files : 35 | if f.startswith( 'already_have' ) and f.endswith( '.json' ) : 36 | print 'loading', f 37 | with open( f ) as fin : 38 | already_have.update( json.load( fin )) 39 | 40 | def evaluation( board ) : 41 | engine.position( board ) 42 | engine.go( movetime = 400 ) 43 | 44 | try : 45 | if info_handler.info['score'][1].cp == None : 46 | mate = info_handler.info['score'][1].mate 47 | return (-60000 if mate > 0 else 60000) + mate * 1000 48 | else : 49 | return -info_handler.info["score"][1].cp 50 | except : 51 | print info_handler.info 52 | 53 | def generate_evaluation( board ) : 54 | moves = [] 55 | for m in board.legal_moves : 56 | board.push( m ) 57 | #print m, evaluation(board), 58 | moves.append( (evaluation(board), m) ) 59 | board.pop() 60 | 61 
| moves = sorted(moves, reverse = True) 62 | m_list = [] 63 | for m in moves : 64 | #if moves[0][0] - m[0] > 30 : break # 0.3 pawn limit 65 | m_list.append( '%s:%d' % (board.san(m[1]), m[0]) ) 66 | return m_list 67 | 68 | #b = chess.Board('3R4/6RQ/1q3p2/4p3/1P2Pk2/6rP/p4P2/5K2 b - - 0 1') 69 | #b = chess.Board('R7/8/8/8/8/1K6/8/6k1 w - - 0 1') 70 | #print evaluation( b ) 71 | #sys.exit(1) 72 | 73 | def normalize( fen, m_eval ) : 74 | parts = fen.split() 75 | 76 | parts[4], parts[5] = '0', '1' 77 | 78 | if parts[1] == 'b' : 79 | parts[0] = '/'.join(reversed(parts[0].split('/'))).swapcase() 80 | parts[1] = 'w' 81 | parts[2] = ''.join( sorted(parts[2].swapcase()) ) # castling 82 | if parts[3] != '-' : 83 | parts[3] = parts[3][0] + str(9-int(parts[3][1])) # enpassant 84 | 85 | b_eval = [] 86 | for m in m_eval : 87 | vals = m.split(':') 88 | vals[0] = ''.join([str(9-int(c)) if c.isdigit() else c for c in vals[0]]) 89 | b_eval.append( ':'.join(vals) ) 90 | else : 91 | b_eval = m_eval 92 | 93 | #print parts 94 | fen = ' '.join( parts ) 95 | 96 | return fen, b_eval 97 | 98 | if __name__ == '__main__' : 99 | 100 | if len(sys.argv) < 2 : 101 | print 'USAGE: book_builder file.pgn' 102 | sys.exit(1) 103 | 104 | counter = 0 105 | with open( sys.argv[1] ) as pgn : 106 | while True : 107 | game = chess.pgn.read_game( pgn ) 108 | if game == None : break 109 | counter += 1 110 | 111 | principal = chess.Board().variation_san( game.main_line() ) 112 | p_hash = hashlib.md5(principal).hexdigest() 113 | if p_hash in already_played : continue 114 | already_played.add( p_hash ) 115 | 116 | engine.ucinewgame() 117 | print principal 118 | 119 | cnt = 0 120 | fen_data = [ '# ' + chess.Board().variation_san( game.main_line() ) ] 121 | board = chess.Board() 122 | for m in game.main_line() : 123 | print m, 124 | board.push(m) 125 | if board.is_game_over() : 126 | print 'game over' 127 | break 128 | 129 | #cnt += 1 130 | #if cnt < 50 : continue 131 | 132 | #print '\n', board 133 | zobrist = 
zobrist_hash(board) 134 | if zobrist not in already_have : 135 | already_have.add( zobrist ) 136 | 137 | fen, b_eval = normalize( board.fen(), generate_evaluation( board ) ) 138 | print fen, '{' + ', '.join( b_eval ) + '}' 139 | fen_data.append( fen + ' {' + ', '.join( b_eval ) + '}' ) 140 | 141 | else : 142 | print board.fen(), 'already have' 143 | 144 | #break 145 | 146 | if len(fen_data) > 1 : 147 | with open( os.path.basename(sys.argv[1]) + '.txt', 'a' ) as fout : 148 | fout.write( '\n'.join( fen_data ) ) 149 | fout.write( '\n' ) 150 | 151 | now = datetime.datetime.now() 152 | suffix = str(now.strftime('%Y-%m-%d')) 153 | with open( 'already_have_%s.json' % suffix, 'w' ) as fout : 154 | json.dump( list(already_have), fout ) 155 | 156 | with open( 'already_played.json', 'w' ) as fout : 157 | json.dump( list(already_played), fout ) 158 | 159 | print 'read:', counter 160 | 161 | # for i in range(5) : 162 | # fen, b_eval = normalize( board.fen(), generate_evaluation( board ) ) 163 | # print fen, '{' + ', '.join( b_eval ) + '}' 164 | # for m in board.legal_moves : 165 | # board.push(m) 166 | # break 167 | 168 | -------------------------------------------------------------------------------- /parse_game.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | import chess, chess.pgn 4 | import numpy 5 | import sys 6 | import os, time 7 | import multiprocessing 8 | import itertools 9 | import random 10 | import h5py 11 | 12 | DATA_FOLDER = 'data' 13 | 14 | if not os.path.isdir( DATA_FOLDER ) : 15 | sys.exit(DATA_FOLDER + ' is not accessible') 16 | 17 | def read_games(fn): 18 | f = open(fn) 19 | 20 | while True: 21 | try: 22 | g = chess.pgn.read_game(f) 23 | except KeyboardInterrupt: 24 | raise 25 | except: 26 | continue 27 | 28 | if not g: 29 | break 30 | 31 | yield g 32 | 33 | 34 | def bb2array(b, flip=False): 35 | x = numpy.zeros(64, dtype=numpy.int8) 36 | 37 | # for pos, piece in enumerate(b.pieces()): # 
broken in pychess v0.13.2, hence the next two lines 38 | for pos in range(64) : 39 | piece = b.piece_type_at(pos) 40 | if piece : 41 | color = int(bool(b.occupied_co[chess.BLACK] & chess.BB_SQUARES[pos])) 42 | col = int(pos % 8) 43 | row = int(pos / 8) 44 | if flip: 45 | row = 7-row 46 | color = 1 - color 47 | 48 | #piece = color*7 + piece 49 | 50 | #x[row * 8 + col] = piece 51 | x[row * 8 + col] = -piece if color else piece 52 | 53 | return x 54 | 55 | 56 | letters = { 'a' : 1, 'b' : 2, 'c' : 3, 'd' : 4, 'e' : 5, 'f' : 6, 'g' : 7, 'h' : 8 } 57 | numbers = { '1' : 1, '2' : 2, '3' : 3, '4' : 4, '5' : 5, '6' : 6, '7' : 7, '8' : 8 } 58 | 59 | def numeric_notation( move ) : 60 | m = numpy.zeros( 4, dtype=numpy.int8) 61 | m[0] = letters[move[0]] 62 | m[1] = numbers[move[1]] 63 | m[2] = letters[move[2]] 64 | m[3] = numbers[move[3]] 65 | return m 66 | 67 | def parse_game(g): 68 | rm = {'1-0': 1, '0-1': -1, '1/2-1/2': 0} 69 | r = g.headers['Result'] 70 | if r not in rm: 71 | return None 72 | y = rm[r] 73 | # print >> sys.stderr, 'result:', y 74 | 75 | # Generate all boards 76 | gn = g.end() 77 | # if not gn.board().is_game_over(): 78 | # return None 79 | 80 | gns = [] 81 | moves_left = 0 82 | while gn: 83 | gns.append((moves_left, gn, gn.board().turn == 0)) 84 | gn = gn.parent 85 | moves_left += 1 86 | 87 | # print len(gns) 88 | # if len(gns) < 10: 89 | # print g.end() 90 | 91 | if len(gns) > 10 : 92 | gns = gns[-10:] 93 | # num = random.randint(0,5) 94 | # for i in range(num) : 95 | # gns.pop() # remove first N positions to lessen repetitions 96 | gns.pop() 97 | 98 | #moves_left, gn, flip = random.choice(gns) 99 | 100 | result = [] 101 | for moves_left, gn, flip in gns : 102 | 103 | b = gn.board() 104 | x = bb2array(b, flip=flip) 105 | b_parent = gn.parent.board() 106 | 107 | # print b_parent 108 | # print b_parent.parse_san(gn.san()).uci() 109 | # if len(result) > 6 : 110 | # return None 111 | 112 | x_parent = bb2array(b_parent, flip=(not flip)) 113 | if flip: 114 | y = 
- rm[r] 115 | 116 | move = b_parent.parse_san(gn.san()).uci() 117 | 118 | # generate a random board 119 | # moves = list(b_parent.legal_moves) 120 | # move = random.choice(moves) 121 | # b_parent.push(move) 122 | # x_random = bb2array(b_parent, flip=flip) 123 | 124 | #if moves_left < 3: 125 | # print moves_left, 'moves left' 126 | # print 'winner:', y 127 | # print g.headers 128 | # print b 129 | # print 'checkmate:', g.end().board().is_checkmate() 130 | 131 | # print x 132 | # print x_parent 133 | # print x_random 134 | 135 | # result.append( (x, x_parent, x_random, moves_left, y) ) 136 | # result.append( (x, x_parent, x_random) ) 137 | result.append( (x_parent, numeric_notation(move)) ) 138 | 139 | return result 140 | 141 | def read_all_games(fn_in, fn_out): 142 | g = h5py.File(fn_out, 'w') 143 | X = g.create_dataset('x', (0, 64), dtype='b', maxshape=(None, 64), chunks=True) 144 | M = g.create_dataset('m', (0, 4), dtype='b', maxshape=(None, 4), chunks=True) 145 | size = 0 146 | line = 0 147 | for game in read_games(fn_in): 148 | game = parse_game(game) 149 | if game is None: 150 | continue 151 | 152 | for x, m in game : 153 | if line + 1 >= size: 154 | g.flush() 155 | size = 2 * size + 1 156 | print 'resizing to', size 157 | [d.resize(size=size, axis=0) for d in (X, M)] 158 | 159 | X[line] = x 160 | M[line] = m 161 | 162 | line += 1 163 | 164 | [d.resize(size=line, axis=0) for d in (X, M)] # shrink to fit 165 | g.close() 166 | 167 | def read_all_games_2(a): 168 | return read_all_games(*a) 169 | 170 | def parse_dir(): 171 | files = [] 172 | 173 | for fn_in in os.listdir(DATA_FOLDER): 174 | if not fn_in.endswith('.pgn'): 175 | continue 176 | fn_in = os.path.join(DATA_FOLDER, fn_in) 177 | fn_out = fn_in.replace('.pgn', '.hdf5') 178 | if not os.path.exists(fn_out) : 179 | files.append((fn_in, fn_out)) 180 | 181 | print files 182 | if len(files) : 183 | pool = multiprocessing.Pool() 184 | pool.map(read_all_games_2, files) 185 | pool.close() 186 | 187 | def 
pretty_time( t ) : 188 | if t > 86400 : 189 | return '%.2fd' % (t / 86400) 190 | if t > 3600 : 191 | return '%.2fh' % (t / 3600) 192 | if t > 60 : 193 | return '%.2fm' % (t / 60) 194 | return '%.2fs' % t 195 | 196 | if __name__ == '__main__': 197 | start = time.time() 198 | parse_dir() 199 | print 'done in', pretty_time(time.time() - start) 200 | 201 | -------------------------------------------------------------------------------- /book_play.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | import pickle 4 | import math 5 | import chess, chess.pgn 6 | 7 | import heapq 8 | import time 9 | import re 10 | import string 11 | import numpy 12 | import sunfish 13 | import pickle 14 | import random 15 | import traceback 16 | 17 | from book_train import make_model 18 | from book_parser import bb2array, attacks, numeric_notation 19 | 20 | def create_move(board, crdn): 21 | # workaround for pawn promotions 22 | move = chess.Move.from_uci(crdn) 23 | if board.piece_at(move.from_square).piece_type == chess.PAWN: 24 | if int(move.to_square/8) in [0, 7]: 25 | move.promotion = chess.QUEEN # always promote to queen 26 | return move 27 | 28 | class Player(object): 29 | def move(self, gn_current): 30 | raise NotImplementedError() 31 | 32 | MODEL_DATA = 'conv3x64_4096_2048_1024_2017-07-26_195636.model' 33 | 34 | class Murasaki(Player): 35 | def __init__(self): 36 | self._model, _ = make_model() 37 | self._model.compile(loss='mean_squared_error', optimizer='adadelta') 38 | self._model.load_weights( MODEL_DATA ) 39 | 40 | def move(self, gn_current): 41 | assert(gn_current.board().turn == True) 42 | 43 | color = 0 44 | 45 | ## X = numpy.array([sf2array(self._pos, flip=(color==1)),]) 46 | # X = numpy.array([bb2array( gn_current.board(), flip=(color==1) )]) 47 | #print X 48 | 49 | board = gn_current.board() 50 | moves = board.legal_moves 51 | X = [] 52 | for m in moves : 53 | board.push( m ) 54 | X.append( numpy.array( 
[bb2array(board), attacks( board, chess.WHITE), -attacks( board, chess.BLACK)]) ) 55 | board.pop() 56 | 57 | predicted = sorted( zip( [i[0] for i in self._model.predict( numpy.array(X))], moves ), reverse=True) 58 | print predicted 59 | 60 | best_value, best_move = predicted[0] 61 | 62 | ''' 63 | best_move = "" 64 | best_value = 1e6 65 | for move in gn_current.board().generate_legal_moves() : 66 | notation = numeric_notation(str(move)) 67 | value = sum([(i-j)*(i-j) for i,j in zip(predicted[0],notation)]) 68 | #print value, best_value 69 | if best_value > value : 70 | best_value = value 71 | best_move = move 72 | #print 73 | ''' 74 | 75 | print 'best:', best_value, str(best_move) 76 | 77 | move = create_move(gn_current.board(), str(best_move)) # consider promotions 78 | 79 | gn_new = chess.pgn.GameNode() 80 | gn_new.parent = gn_current 81 | gn_new.move = move 82 | 83 | return gn_new 84 | 85 | 86 | class Human(Player): 87 | def move(self, gn_current): 88 | bb = gn_current.board() 89 | 90 | print bb 91 | 92 | def get_move(move_str): 93 | try: 94 | move = chess.Move.from_uci(move_str) 95 | except: 96 | print 'cant parse' 97 | return False 98 | if move not in bb.legal_moves: 99 | print 'not a legal move' 100 | return False 101 | else: 102 | return move 103 | 104 | while True: 105 | print 'your turn:' 106 | move = get_move(raw_input()) 107 | if move: 108 | break 109 | 110 | gn_new = chess.pgn.GameNode() 111 | gn_new.parent = gn_current 112 | gn_new.move = move 113 | 114 | return gn_new 115 | 116 | 117 | class Sunfish(Player): 118 | def __init__(self, maxn=1e4): 119 | self._pos = sunfish.Position(sunfish.initial, 0, (True,True), (True,True), 0, 0) 120 | self._maxn = maxn 121 | 122 | def move(self, gn_current): 123 | import sunfish 124 | 125 | assert(gn_current.board().turn == False) 126 | 127 | # Apply last_move 128 | crdn = str(gn_current.move) 129 | move = (sunfish.parse(crdn[0:2]), sunfish.parse(crdn[2:4])) 130 | self._pos = self._pos.move(move) 131 | 132 | t0 = 
time.time() 133 | move, score = sunfish.search(self._pos, maxn=self._maxn) 134 | print time.time() - t0, move, score 135 | self._pos = self._pos.move(move) 136 | 137 | crdn = sunfish.render(119-move[0]) + sunfish.render(119 - move[1]) 138 | move = create_move(gn_current.board(), crdn) 139 | 140 | gn_new = chess.pgn.GameNode() 141 | gn_new.parent = gn_current 142 | gn_new.move = move 143 | 144 | return gn_new 145 | 146 | def game(): 147 | gn_current = chess.pgn.Game() 148 | 149 | maxn = 10 ** (2.0 + random.random() * 1.0) # max nodes for sunfish 150 | 151 | print 'maxn %f' % maxn 152 | 153 | player_a = Murasaki() 154 | player_b = Human() 155 | # player_b = Sunfish(maxn=maxn) 156 | 157 | times = {'A': 0.0, 'B': 0.0} 158 | 159 | while True: 160 | for side, player in [('A', player_a), ('B', player_b)]: 161 | t0 = time.time() 162 | try: 163 | gn_current = player.move(gn_current) 164 | except KeyboardInterrupt: 165 | return 166 | except: 167 | traceback.print_exc() 168 | return side + '-exception', times 169 | 170 | times[side] += time.time() - t0 171 | print '=========== Player %s: %s' % (side, gn_current.move) 172 | s = str(gn_current.board()) 173 | print s 174 | if gn_current.board().is_checkmate(): 175 | return side, times 176 | elif gn_current.board().is_stalemate(): 177 | return '-', times 178 | elif gn_current.board().can_claim_fifty_moves(): 179 | return '-', times 180 | elif s.find('K') == -1 or s.find('k') == -1: 181 | # Both AI's suck at checkmating, so also detect capturing the king 182 | return side, times 183 | 184 | def play(): 185 | while True: 186 | side, times = game() 187 | f = open('stats.txt', 'a') 188 | f.write('%s %f %f\n' % (side, times['A'], times['B'])) 189 | f.close() 190 | 191 | if __name__ == '__main__': 192 | # play() 193 | game() 194 | 195 | -------------------------------------------------------------------------------- /train.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 
3 | import numpy 4 | import theano 5 | import pickle 6 | import itertools 7 | import scipy.sparse 8 | import h5py 9 | import math 10 | 11 | from keras.models import Sequential 12 | 13 | try : 14 | # old imports (v0.3.1) 15 | from keras.layers import Dense, Dropout, Activation 16 | from keras.layers import Convolution2D, Reshape, MaxPooling2D, Flatten 17 | except ImportError : 18 | # new keras imports (v0.3.3) 19 | from keras.layers.core import Dense, Dropout, Activation, Reshape, Flatten 20 | from keras.layers.convolutional import Convolution2D, MaxPooling2D 21 | 22 | from keras.callbacks import EarlyStopping, Callback 23 | from keras.optimizers import SGD 24 | 25 | from numpy import array 26 | 27 | import os, sys, time, random 28 | 29 | BATCH_SIZE = 2000 30 | 31 | rng = numpy.random 32 | 33 | DATA_FOLDER = 'data' 34 | 35 | def floatX(x): 36 | return numpy.asarray(x, dtype=theano.config.floatX) 37 | 38 | def load_data(dir = DATA_FOLDER): 39 | for fn in os.listdir(dir): 40 | if not fn.endswith('.hdf5'): 41 | continue 42 | 43 | fn = os.path.join(dir, fn) 44 | try: 45 | yield h5py.File(fn, 'r') 46 | except: 47 | print 'could not read', fn 48 | 49 | 50 | def get_data(series=['x', 'm']): 51 | data = [[] for s in series] 52 | for f in load_data(): 53 | try: 54 | for i, s in enumerate(series): 55 | data[i].append(f[s].value) 56 | except: 57 | print 'failed reading from', f # report the file before re-raising 58 | raise 59 | 60 | def stack(vectors): 61 | if len(vectors[0].shape) > 1: 62 | return numpy.vstack(vectors) 63 | else: 64 | return numpy.hstack(vectors) 65 | 66 | data = [stack(d) for d in data] 67 | 68 | # #test_size = 10000.0 / len(data[0]) # does not work for small data sets (<10k entries) 69 | # test_size = 0.05 # let's make it fixed 5% instead 70 | # print 'Splitting', len(data[0]), 'entries into train/test set' 71 | # data = train_test_split(*data, test_size=test_size) 72 | # 73 | # print data[0].shape[0], 'train set', data[1].shape[0], 'test set' 74 | return data 75 | 76 | def show_board( board
) : 77 | for row in xrange(8): 78 | print ' '.join('%2d' % x for x in board[(row*8):((row+1)*8)]) 79 | print 80 | 81 | def make_model(data = None) : 82 | global MODEL_DATA 83 | MODEL_SIZE = [1024, 1024, 1024, 1024, 1024, 1024, 1024] 84 | # MODEL_SIZE = [8192, 8192, 4096, 2048, 2048, 1024, 1024] 85 | # MODEL_SIZE = [4096, 4096, 2048, 2048, 1024, 512, 256] 86 | # MODEL_SIZE = [512, 512, 512, 512, 512, 512, 512] 87 | # MODEL_SIZE = [256, 256, 256, 256, 256, 256, 256] 88 | 89 | MODEL_SIZE = [4096, 2048, 1024, 1024] # 45M @ AWS 90 | MODEL_SIZE = [4096, 4096, 2048, 1024] # 45M (1999-2001) 91 | MODEL_SIZE = [3072, 2048, 2048, 1024] # 19M @ work (1999-2000) 92 | MODEL_SIZE = [2048, 1024, 1024, 1024] # 5M, 1.011 @ E250 93 | MODEL_SIZE = [2048, 2048, 1024, 1024] # 5M, 0.7122 @ E350 94 | MODEL_SIZE = [2048, 2048, 2048, 1024] # 5M, 0.7673 @ E100, 0.6638 @ E150 95 | MODEL_SIZE = [3072, 2048, 2048, 1024] # 5M, 0.6818 @ E100, 0.6153 @ E125 96 | # MODEL_SIZE = [8192, 4096, 2048, 1024] # 19M @ work (1999-2000) 97 | MODEL_SIZE = [2048, 2048, 1024, 1024] # 287k 10moves, 0.8181 @ E100, 0.8054 @ E200 98 | MODEL_SIZE = [3072, 2048, 2048, 1024] # 287k 10moves, 0.8174 @ E100 99 | MODEL_SIZE = [1024, 1024, 1024, 1024] # 287k 10moves 100 | 101 | CONVOLUTION = min( 64, MODEL_SIZE[0] / 64 ) # 64 for 4096 first layer, 32 for 2048 layer 102 | 103 | if data : 104 | MODEL_DATA = data 105 | else : 106 | MODEL_DATA = 'new_%s.model' % ('_'.join(['%d' % i for i in MODEL_SIZE])) 107 | MODEL_DATA = 'conv%d_%s.model' % (CONVOLUTION, '_'.join(['%d' % i for i in MODEL_SIZE])) 108 | 109 | model = Sequential() 110 | # model.add(Reshape( dims = (1, 8, 8), input_shape = (64,))) 111 | model.add(Reshape( (1, 8, 8), input_shape = (64,))) 112 | model.add(Convolution2D( CONVOLUTION, 3, 3, border_mode='valid')) 113 | model.add(Activation('relu')) 114 | # model.add(Convolution2D(8, 3, 3)) 115 | # model.add(Activation('relu')) 116 | # model.add(MaxPooling2D(pool_size=(2, 2))) 117 | 118 | model.add(Flatten()) 119 | 
for i in MODEL_SIZE : 120 | model.add(Dense( i, init='uniform', activation='relu')) 121 | 122 | model.add(Dense( 4, init='uniform', activation='relu')) 123 | 124 | # model.add(Dense(MODEL_SIZE[0], input_dim = 64, init='uniform', activation='relu' )) 125 | ## model.add(Dropout(0.2)) 126 | # for i in MODEL_SIZE[1:] : 127 | # model.add(Dense( i, init='uniform', activation='relu')) 128 | ## model.add(Dropout(0.2)) 129 | # model.add(Dense(4, init='uniform', activation='relu')) 130 | 131 | if os.path.isfile( MODEL_DATA ) : # saved model exists, load it 132 | model.load_weights( MODEL_DATA ) 133 | 134 | return model 135 | 136 | def train(): 137 | X, m = get_data(['x', 'm']) 138 | # X_train, X_test, m_train, m_test = get_data(['x', 'm']) 139 | # for board in X_train[:2] : 140 | # show_board( board ) 141 | 142 | model = make_model() 143 | 144 | print 'compiling...' 145 | # sgd = SGD(lr=0.1, decay=1e-6, momentum=0.9, nesterov=True) 146 | # model.compile(loss='squared_hinge', optimizer='adadelta') 147 | model.compile(loss='mean_squared_error', optimizer='adadelta') 148 | 149 | early_stopping = EarlyStopping( monitor = 'loss', patience = 50 ) # monitor='val_loss', verbose=0, mode='auto' 150 | #print 'fitting...' 151 | history = model.fit( X, m, nb_epoch = 100, batch_size = BATCH_SIZE) #, callbacks = [early_stopping]) #, validation_split=0.05) #, verbose=2) #, show_accuracy = True ) 152 | 153 | # print 'evaluating...' 
154 | # score = model.evaluate(X_test, m_test, batch_size = BATCH_SIZE ) 155 | # print 'score:', score 156 | 157 | model.save_weights( MODEL_DATA, overwrite = True ) 158 | 159 | #print X_train[:10] 160 | # print m_train[:20] 161 | # print model.predict( X_train[:20], batch_size = 5 ) 162 | print m[:20] 163 | print model.predict( X[:20], batch_size = 5 ) 164 | 165 | # print m_test[:20] 166 | # print model.predict( X_test[:20], batch_size = 5 ) 167 | 168 | # with open( MODEL_DATA + '.history', 'w') as fout : 169 | # print >>fout, history.losses 170 | 171 | 172 | if __name__ == '__main__': 173 | train() 174 | -------------------------------------------------------------------------------- /book_train.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | import numpy 4 | import theano 5 | import pickle 6 | import itertools 7 | import scipy.sparse 8 | import h5py 9 | import math 10 | import random 11 | 12 | import time, datetime 13 | 14 | from keras.models import Sequential 15 | 16 | #try : 17 | # # old imports (v0.3.1) 18 | # from keras.layers import Dense, Dropout, Activation 19 | # from keras.layers import Convolution2D, Reshape, MaxPooling2D, Flatten 20 | #except ImportError : 21 | # # new keras imports (v0.3.3) 22 | # from keras.layers.core import Dense, Dropout, Activation, Reshape, Flatten 23 | # from keras.layers.convolutional import Convolution2D, MaxPooling2D 24 | 25 | from keras.layers.core import Dense, Reshape, Flatten, Activation 26 | from keras.layers.convolutional import Conv2D 27 | 28 | from keras.callbacks import EarlyStopping, Callback 29 | from keras.optimizers import SGD 30 | 31 | from keras.layers.advanced_activations import LeakyReLU 32 | 33 | from numpy import array 34 | 35 | import os, sys, time, random 36 | 37 | BATCH_SIZE = 2048 38 | 39 | rng = numpy.random 40 | 41 | DATA_FOLDER = 'data' 42 | 43 | def floatX(x): 44 | return numpy.asarray(x, dtype=theano.config.floatX) 45 | 46 | 
def load_data(dir = DATA_FOLDER): 47 | for fn in os.listdir(dir): 48 | if not fn.endswith('.hdf5'): 49 | continue 50 | 51 | fn = os.path.join(dir, fn) 52 | try: 53 | yield h5py.File(fn, 'r') 54 | except: 55 | print 'could not read', fn 56 | break 57 | 58 | def get_data(series=['x', 'm']): 59 | data = [[] for s in series] 60 | for f in load_data(): 61 | try: 62 | for i, s in enumerate(series): 63 | data[i].append(f[s].value) 64 | except: 65 | print 'failed reading from', f # report the file before re-raising 66 | raise 67 | 68 | def stack(vectors): 69 | if len(vectors[0].shape) > 1: 70 | return numpy.vstack(vectors) 71 | else: 72 | return numpy.hstack(vectors) 73 | 74 | data = [stack(d) for d in data] 75 | 76 | # #test_size = 10000.0 / len(data[0]) # does not work for small data sets (<10k entries) 77 | # test_size = 0.05 # let's make it fixed 5% instead 78 | # print 'Splitting', len(data[0]), 'entries into train/test set' 79 | # data = train_test_split(*data, test_size=test_size) 80 | # 81 | # print data[0].shape[0], 'train set', data[1].shape[0], 'test set' 82 | return data 83 | 84 | def show_board( board ) : 85 | for row in xrange(8): 86 | print ' '.join('%2d' % x for x in board[(row*8):((row+1)*8)]) 87 | print 88 | 89 | CONV_LAYERS = 3 90 | 91 | MODEL_DATA = None 92 | 93 | def make_model(data = None) : 94 | global MODEL_DATA 95 | 96 | MODEL_SIZE = [8192, 8192, 4096, 2048, 2048, 1024, 1024] 97 | MODEL_SIZE = [4096, 4096, 2048, 2048, 1024, 512, 256] 98 | # MODEL_SIZE = [4096, 2048, 1024, 512, 256] 99 | MODEL_SIZE = [1024, 1024, 1024, 1024] 100 | MODEL_SIZE = [1024, 1024] # 42 101 | # MODEL_SIZE = [512] # 50 @ 36 102 | MODEL_SIZE = [4096, 2048, 1024] # 38 @ 70/2layers, 36.9 @ 100 /3layers 103 | # MODEL_SIZE = [8192, 4096, 2048, 1024] 104 | 105 | CONVOLUTION = min( 64, MODEL_SIZE[0] * 4 / 64 ) # capped at 64 feature maps 106 | print 'convolution', CONVOLUTION, 'layers', CONV_LAYERS 107 | 108 | # if data : 109 | # MODEL_DATA = data 110 | # else : 111 | # MODEL_DATA = 'new_%s.model' %
('_'.join(['%d' % i for i in MODEL_SIZE])) 112 | # MODEL_DATA = 'conv%d_%s.model' % (CONVOLUTION, '_'.join(['%d' % i for i in MODEL_SIZE])) 113 | 114 | name = 'conv%dx%d_%s.model' % (CONV_LAYERS, CONVOLUTION, '_'.join(['%d' % i for i in MODEL_SIZE])) 115 | 116 | model = Sequential() 117 | ## model.add(Reshape( dims = (1, 8, 8), input_shape = (64,))) 118 | # model.add(Reshape( (1, 8, 8), input_shape = (64,))) 119 | model.add(Conv2D( CONVOLUTION, 1, 1, border_mode='same', dim_ordering='th', input_shape = (28,8,8,))) 120 | model.add(LeakyReLU(alpha=0.3)) 121 | 122 | # model.add(Conv2D( CONVOLUTION, 3, 3, border_mode='same', dim_ordering='th', input_shape = (3,8,8,))) 123 | # model.add(LeakyReLU(alpha=0.3)) 124 | 125 | for i in range( CONV_LAYERS ) : 126 | model.add(Conv2D( CONVOLUTION, 3, 3, border_mode='same', dim_ordering='th')) # 'valid' shrinks, 'same' keeps size 127 | model.add(LeakyReLU(alpha=0.3)) 128 | 129 | # model.add(Convolution2D(8, 3, 3)) 130 | # model.add(Activation('relu')) 131 | # model.add(MaxPooling2D(pool_size=(2, 2))) 132 | 133 | model.add(Flatten()) 134 | for i in MODEL_SIZE[:-1] : 135 | print i 136 | # model.add(Dense( i, init='uniform', activation='relu')) 137 | model.add(Dense( i, init='uniform')) 138 | model.add(LeakyReLU(alpha=0.3)) 139 | 140 | model.add(Dense( MODEL_SIZE[-1], init='uniform', activation='tanh')) 141 | model.add(Dense( 1, init='uniform')) 142 | 143 | # model.add(Dense(MODEL_SIZE[0], input_dim = 64, init='uniform', activation='relu' )) 144 | ## model.add(Dropout(0.2)) 145 | # for i in MODEL_SIZE[1:] : 146 | # model.add(Dense( i, init='uniform', activation='relu')) 147 | ## model.add(Dropout(0.2)) 148 | # model.add(Dense(4, init='uniform', activation='relu')) 149 | 150 | # if os.path.isfile( MODEL_DATA ) : # saved model exists, load it 151 | # model.load_weights( MODEL_DATA ) 152 | 153 | return model, name 154 | 155 | def train(): 156 | X, m = get_data(['x', 'm']) 157 | # X_train, X_test, m_train, m_test = get_data(['x', 'm']) 
158 | # for board in X_train[:2] : 159 | # show_board( board ) 160 | 161 | start = time.time() 162 | print 'shuffling...', 163 | idx = range(len(X)) 164 | random.shuffle(idx) 165 | X, m = X[idx], m[idx] 166 | print '%.2f sec' % (time.time() - start) 167 | 168 | # unpack the bits 169 | start = time.time() 170 | print 'unpacking...', 171 | X = numpy.array([numpy.unpackbits(x).reshape(28, 8, 8).astype(numpy.bool) for x in X]) # module is imported as numpy, not np 172 | print '%.2f sec' % (time.time() - start) 173 | 174 | model, name = make_model() 175 | 176 | print 'compiling...' # 5e5 too high on 2017-09-06 177 | sgd = SGD(lr=3e-5, decay=1e-6, momentum=0.9, nesterov=True) # 1e-4 : nan, 1e-5 loss 137 epoch1, 5e-5 loss 121 epoch1 178 | # model.compile(loss='squared_hinge', optimizer='adadelta') 179 | # model.compile(loss='mean_squared_error', optimizer='adadelta') 180 | model.compile(loss='mean_squared_error', optimizer=sgd) 181 | 182 | early_stopping = EarlyStopping( monitor = 'loss', patience = 50 ) # monitor='val_loss', verbose=0, mode='auto' 183 | #print 'fitting...' 184 | history = model.fit( X, m, nb_epoch = 10, batch_size = BATCH_SIZE, validation_split=0.05) #, callbacks = [early_stopping]) #, validation_split=0.05) #, verbose=2) #, show_accuracy = True ) 185 | 186 | # print 'evaluating...
187 | # score = model.evaluate(X_test, m_test, batch_size = BATCH_SIZE ) 188 | # print 'score:', score 189 | 190 | now = datetime.datetime.now() 191 | suffix = str(now.strftime("%Y-%m-%d_%H%M%S")) 192 | model.save_weights( name.replace( '.model', '_%s.model' % suffix), overwrite = True ) 193 | 194 | #print X_train[:10] 195 | # print m_train[:20] 196 | # print model.predict( X_train[:20], batch_size = 5 ) 197 | # print m[:20] 198 | # print model.predict( X[:20], batch_size = 5 ) 199 | 200 | result = zip( m[-20:] * 100.0, model.predict( X[-20:], batch_size = 5 ) * 100.0) 201 | for a, b in result : 202 | print '%.4f %.4f %.2f%%' % (a, b, abs(a-b) * 100.0 / max(abs(a),abs(b))) 203 | 204 | # print m_test[:20] 205 | # print model.predict( X_test[:20], batch_size = 5 ) 206 | 207 | # with open( MODEL_DATA + '.history', 'w') as fout : 208 | # print >>fout, history.losses 209 | 210 | 211 | if __name__ == '__main__': 212 | train() 213 | -------------------------------------------------------------------------------- /sunfish.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env pypy 2 | # -*- coding: utf-8 -*- 3 | 4 | from __future__ import print_function 5 | import re 6 | import sys 7 | from itertools import count 8 | from collections import OrderedDict, namedtuple 9 | 10 | # The table size is the maximum number of elements in the transposition table. 11 | TABLE_SIZE = 1e6 12 | 13 | # This constant controls how much time we spend on looking for optimal moves. 14 | NODES_SEARCHED = 1e4 15 | 16 | # Mate value must be greater than 8*queen + 2*(rook+knight+bishop) 17 | # King value is set to twice this value such that if the opponent is 18 | # 8 queens up, but we got the king, we still exceed MATE_VALUE. 19 | MATE_VALUE = 30000 20 | 21 | # Our board is represented as a 120 character string. The padding allows for 22 | # fast detection of moves that don't stay within the board. 
23 | A1, H1, A8, H8 = 91, 98, 21, 28 24 | initial = ( 25 | ' \n' # 0 - 9 26 | ' \n' # 10 - 19 27 | ' rnbqkbnr\n' # 20 - 29 28 | ' pppppppp\n' # 30 - 39 29 | ' ........\n' # 40 - 49 30 | ' ........\n' # 50 - 59 31 | ' ........\n' # 60 - 69 32 | ' ........\n' # 70 - 79 33 | ' PPPPPPPP\n' # 80 - 89 34 | ' RNBQKBNR\n' # 90 - 99 35 | ' \n' # 100 -109 36 | ' ' # 110 -119 37 | ) 38 | 39 | ############################################################################### 40 | # Move and evaluation tables 41 | ############################################################################### 42 | 43 | N, E, S, W = -10, 1, 10, -1 44 | directions = { 45 | 'P': (N, 2*N, N+W, N+E), 46 | 'N': (2*N+E, N+2*E, S+2*E, 2*S+E, 2*S+W, S+2*W, N+2*W, 2*N+W), 47 | 'B': (N+E, S+E, S+W, N+W), 48 | 'R': (N, E, S, W), 49 | 'Q': (N, E, S, W, N+E, S+E, S+W, N+W), 50 | 'K': (N, E, S, W, N+E, S+E, S+W, N+W) 51 | } 52 | 53 | pst = { 54 | 'P': (0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 55 | 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 56 | 0, 198, 198, 198, 198, 198, 198, 198, 198, 0, 57 | 0, 178, 198, 198, 198, 198, 198, 198, 178, 0, 58 | 0, 178, 198, 198, 198, 198, 198, 198, 178, 0, 59 | 0, 178, 198, 208, 218, 218, 208, 198, 178, 0, 60 | 0, 178, 198, 218, 238, 238, 218, 198, 178, 0, 61 | 0, 178, 198, 208, 218, 218, 208, 198, 178, 0, 62 | 0, 178, 198, 198, 198, 198, 198, 198, 178, 0, 63 | 0, 198, 198, 198, 198, 198, 198, 198, 198, 0, 64 | 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 65 | 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), 66 | 'B': (0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 67 | 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 68 | 0, 797, 824, 817, 808, 808, 817, 824, 797, 0, 69 | 0, 814, 841, 834, 825, 825, 834, 841, 814, 0, 70 | 0, 818, 845, 838, 829, 829, 838, 845, 818, 0, 71 | 0, 824, 851, 844, 835, 835, 844, 851, 824, 0, 72 | 0, 827, 854, 847, 838, 838, 847, 854, 827, 0, 73 | 0, 826, 853, 846, 837, 837, 846, 853, 826, 0, 74 | 0, 817, 844, 837, 828, 828, 837, 844, 817, 0, 75 | 0, 792, 819, 812, 803, 803, 812, 819, 792, 0, 76 | 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 77 | 0, 0, 0, 0, 
0, 0, 0, 0, 0, 0), 78 | 'N': (0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 79 | 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 80 | 0, 627, 762, 786, 798, 798, 786, 762, 627, 0, 81 | 0, 763, 798, 822, 834, 834, 822, 798, 763, 0, 82 | 0, 817, 852, 876, 888, 888, 876, 852, 817, 0, 83 | 0, 797, 832, 856, 868, 868, 856, 832, 797, 0, 84 | 0, 799, 834, 858, 870, 870, 858, 834, 799, 0, 85 | 0, 758, 793, 817, 829, 829, 817, 793, 758, 0, 86 | 0, 739, 774, 798, 810, 810, 798, 774, 739, 0, 87 | 0, 683, 718, 742, 754, 754, 742, 718, 683, 0, 88 | 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 89 | 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), 90 | 'R': (0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 91 | 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 92 | 0, 1258, 1263, 1268, 1272, 1272, 1268, 1263, 1258, 0, 93 | 0, 1258, 1263, 1268, 1272, 1272, 1268, 1263, 1258, 0, 94 | 0, 1258, 1263, 1268, 1272, 1272, 1268, 1263, 1258, 0, 95 | 0, 1258, 1263, 1268, 1272, 1272, 1268, 1263, 1258, 0, 96 | 0, 1258, 1263, 1268, 1272, 1272, 1268, 1263, 1258, 0, 97 | 0, 1258, 1263, 1268, 1272, 1272, 1268, 1263, 1258, 0, 98 | 0, 1258, 1263, 1268, 1272, 1272, 1268, 1263, 1258, 0, 99 | 0, 1258, 1263, 1268, 1272, 1272, 1268, 1263, 1258, 0, 100 | 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 101 | 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), 102 | 'Q': (0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 103 | 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 104 | 0, 2529, 2529, 2529, 2529, 2529, 2529, 2529, 2529, 0, 105 | 0, 2529, 2529, 2529, 2529, 2529, 2529, 2529, 2529, 0, 106 | 0, 2529, 2529, 2529, 2529, 2529, 2529, 2529, 2529, 0, 107 | 0, 2529, 2529, 2529, 2529, 2529, 2529, 2529, 2529, 0, 108 | 0, 2529, 2529, 2529, 2529, 2529, 2529, 2529, 2529, 0, 109 | 0, 2529, 2529, 2529, 2529, 2529, 2529, 2529, 2529, 0, 110 | 0, 2529, 2529, 2529, 2529, 2529, 2529, 2529, 2529, 0, 111 | 0, 2529, 2529, 2529, 2529, 2529, 2529, 2529, 2529, 0, 112 | 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 113 | 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), 114 | 'K': (0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 115 | 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 116 | 0, 60098, 60132, 60073, 60025, 60025, 60073, 60132, 60098, 0, 117 | 0, 60119, 60153, 
60094, 60046, 60046, 60094, 60153, 60119, 0, 118 | 0, 60146, 60180, 60121, 60073, 60073, 60121, 60180, 60146, 0, 119 | 0, 60173, 60207, 60148, 60100, 60100, 60148, 60207, 60173, 0, 120 | 0, 60196, 60230, 60171, 60123, 60123, 60171, 60230, 60196, 0, 121 | 0, 60224, 60258, 60199, 60151, 60151, 60199, 60258, 60224, 0, 122 | 0, 60287, 60321, 60262, 60214, 60214, 60262, 60321, 60287, 0, 123 | 0, 60298, 60332, 60273, 60225, 60225, 60273, 60332, 60298, 0, 124 | 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 125 | 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) 126 | } 127 | 128 | 129 | ############################################################################### 130 | # Chess logic 131 | ############################################################################### 132 | 133 | class Position(namedtuple('Position', 'board score wc bc ep kp')): 134 | """ A state of a chess game 135 | board -- a 120 char representation of the board 136 | score -- the board evaluation 137 | wc -- the castling rights 138 | bc -- the opponent castling rights 139 | ep - the en passant square 140 | kp - the king passant square 141 | """ 142 | 143 | def gen_moves(self): 144 | # For each of our pieces, iterate through each possible 'ray' of moves, 145 | # as defined in the 'directions' map. The rays are broken e.g. by 146 | # captures or immediately in case of pieces such as knights. 147 | for i, p in enumerate(self.board): 148 | if not p.isupper(): continue 149 | for d in directions[p]: 150 | for j in count(i+d, d): 151 | q = self.board[j] 152 | # Stay inside the board 153 | if self.board[j].isspace(): break 154 | # Castling 155 | if i == A1 and q == 'K' and self.wc[0]: yield (j, j-2) 156 | if i == H1 and q == 'K' and self.wc[1]: yield (j, j+2) 157 | # No friendly captures 158 | if q.isupper(): break 159 | # Special pawn stuff 160 | if p == 'P' and d in (N+W, N+E) and q == '.' 
and j not in (self.ep, self.kp): break 161 | if p == 'P' and d in (N, 2*N) and q != '.': break 162 | if p == 'P' and d == 2*N and (i < A1+N or self.board[i+N] != '.'): break 163 | # Move it 164 | yield (i, j) 165 | # Stop crawlers from sliding 166 | if p in ('P', 'N', 'K'): break 167 | # No sliding after captures 168 | if q.islower(): break 169 | 170 | def rotate(self): 171 | return Position( 172 | self.board[::-1].swapcase(), -self.score, 173 | self.bc, self.wc, 119-self.ep, 119-self.kp) 174 | 175 | def move(self, move): 176 | i, j = move 177 | p, q = self.board[i], self.board[j] 178 | put = lambda board, i, p: board[:i] + p + board[i+1:] 179 | # Copy variables and reset ep and kp 180 | board = self.board 181 | wc, bc, ep, kp = self.wc, self.bc, 0, 0 182 | score = self.score + self.value(move) 183 | # Actual move 184 | board = put(board, j, board[i]) 185 | board = put(board, i, '.') 186 | # Castling rights 187 | if i == A1: wc = (False, wc[1]) 188 | if i == H1: wc = (wc[0], False) 189 | if j == A8: bc = (bc[0], False) 190 | if j == H8: bc = (False, bc[1]) 191 | # Castling 192 | if p == 'K': 193 | wc = (False, False) 194 | if abs(j-i) == 2: 195 | kp = (i+j)//2 196 | board = put(board, A1 if j < i else H1, '.') 197 | board = put(board, kp, 'R') 198 | # Special pawn stuff 199 | if p == 'P': 200 | if A8 <= j <= H8: 201 | board = put(board, j, 'Q') 202 | if j - i == 2*N: 203 | ep = i + N 204 | if j - i in (N+W, N+E) and q == '.': 205 | board = put(board, j+S, '.') 206 | # We rotate the returned position, so it's ready for the next player 207 | return Position(board, score, wc, bc, ep, kp).rotate() 208 | 209 | def value(self, move): 210 | i, j = move 211 | p, q = self.board[i], self.board[j] 212 | # Actual move 213 | score = pst[p][j] - pst[p][i] 214 | # Capture 215 | if q.islower(): 216 | score += pst[q.upper()][j] 217 | # Castling check detection 218 | if abs(j-self.kp) < 2: 219 | score += pst['K'][j] 220 | # Castling 221 | if p == 'K' and abs(i-j) == 2: 222 | score 
+= pst['R'][(i+j)//2] 223 | score -= pst['R'][A1 if j < i else H1] 224 | # Special pawn stuff 225 | if p == 'P': 226 | if A8 <= j <= H8: 227 | score += pst['Q'][j] - pst['P'][j] 228 | if j == self.ep: 229 | score += pst['P'][j+S] 230 | return score 231 | 232 | Entry = namedtuple('Entry', 'depth score gamma move') 233 | tp = OrderedDict() 234 | 235 | 236 | ############################################################################### 237 | # Search logic 238 | ############################################################################### 239 | 240 | nodes = 0 241 | def bound(pos, gamma, depth): 242 | """ returns s(pos) <= r < gamma if s(pos) < gamma 243 | returns s(pos) >= r >= gamma if s(pos) >= gamma """ 244 | global nodes; nodes += 1 245 | 246 | # Look in the table if we have already searched this position before. 247 | # We use the table value if it was done with at least as deep a search 248 | # as ours, and the gamma value is compatible. 249 | entry = tp.get(pos) 250 | if entry is not None and entry.depth >= depth and ( 251 | entry.score < entry.gamma and entry.score < gamma or 252 | entry.score >= entry.gamma and entry.score >= gamma): 253 | return entry.score 254 | 255 | # Stop searching if we have won/lost. 256 | if abs(pos.score) >= MATE_VALUE: 257 | return pos.score 258 | 259 | # Null move. Is also used for stalemate checking 260 | nullscore = -bound(pos.rotate(), 1-gamma, depth-3) if depth > 0 else pos.score 261 | #nullscore = -MATE_VALUE*3 if depth > 0 else pos.score 262 | if nullscore >= gamma: 263 | return nullscore 264 | 265 | # We generate all possible, pseudo legal moves and order them to provoke 266 | # cuts. At the next level of the tree we are going to minimize the score. 267 | # This can be shown equal to maximizing the negative score, with a slightly 268 | # adjusted gamma value. 
269 | best, bmove = -3*MATE_VALUE, None 270 | for move in sorted(pos.gen_moves(), key=pos.value, reverse=True): 271 | # We check captures with the value function, as it also contains ep and kp 272 | if depth <= 0 and pos.value(move) < 150: 273 | break 274 | score = -bound(pos.move(move), 1-gamma, depth-1) 275 | if score > best: 276 | best = score 277 | bmove = move 278 | if score >= gamma: 279 | break 280 | 281 | # If there are no captures, or just not any good ones, stand pat 282 | if depth <= 0 and best < nullscore: 283 | return nullscore 284 | # Check for stalemate. If best move loses king, but not doing anything 285 | # would save us. Not at all a perfect check. 286 | if depth > 0 and best <= -MATE_VALUE and nullscore > -MATE_VALUE: 287 | best = 0 288 | 289 | # We save the found move together with the score, so we can retrieve it in 290 | # the play loop. We also trim the transposition table in FILO order. 291 | # We prefer fail-high moves, as they are the ones we can build our pv from. 292 | if entry is None or depth >= entry.depth and best >= gamma: 293 | tp[pos] = Entry(depth, best, gamma, bmove) 294 | if len(tp) > TABLE_SIZE: 295 | tp.popitem() 296 | return best 297 | 298 | 299 | def search(pos, maxn=NODES_SEARCHED): 300 | """ Iterative deepening MTD-bi search """ 301 | global nodes; nodes = 0 302 | 303 | # We limit the depth to some constant, so we don't get a stack overflow in 304 | # the end game. 305 | for depth in range(1, 99): 306 | # The inner loop is a binary search on the score of the position. 307 | # Inv: lower <= score <= upper 308 | # However this may be broken by values from the transposition table, 309 | # as they don't have the same concept of p(score). Hence we just use 310 | # 'lower < upper - margin' as the loop condition. 
311 | lower, upper = -3*MATE_VALUE, 3*MATE_VALUE 312 | while lower < upper - 3: 313 | gamma = (lower+upper+1)//2 314 | score = bound(pos, gamma, depth) 315 | if score >= gamma: 316 | lower = score 317 | if score < gamma: 318 | upper = score 319 | 320 | # We stop deepening if the global N counter shows we have spent too 321 | # long, or if we have already won the game. 322 | if nodes >= maxn or abs(score) >= MATE_VALUE: 323 | break 324 | 325 | # If the game hasn't finished we can retrieve our move from the 326 | # transposition table. 327 | entry = tp.get(pos) 328 | if entry is not None: 329 | return entry.move, score 330 | return None, score 331 | 332 | 333 | ############################################################################### 334 | # User interface 335 | ############################################################################### 336 | 337 | # Python 2 compatibility 338 | if sys.version_info[0] == 2: 339 | input = raw_input 340 | 341 | 342 | def parse(c): 343 | fil, rank = ord(c[0]) - ord('a'), int(c[1]) - 1 344 | return A1 + fil - 10*rank 345 | 346 | 347 | def render(i): 348 | rank, fil = divmod(i - A1, 10) 349 | return chr(fil + ord('a')) + str(-rank + 1) 350 | 351 | 352 | def print_pos(pos): 353 | print() 354 | uni_pieces = {'R':'♜', 'N':'♞', 'B':'♝', 'Q':'♛', 'K':'♚', 'P':'♟', 355 | 'r':'♖', 'n':'♘', 'b':'♗', 'q':'♕', 'k':'♔', 'p':'♙', '.':'·'} 356 | for i, row in enumerate(pos.board.strip().split('\n ')): 357 | print(' ', 8-i, ' '.join(uni_pieces.get(p, p) for p in row)) 358 | print(' a b c d e f g h \n\n') 359 | 360 | 361 | def main(): 362 | pos = Position(initial, 0, (True,True), (True,True), 0, 0) 363 | while True: 364 | print_pos(pos) 365 | 366 | # We query the user until she enters a legal move.
367 | move = None 368 | while move not in pos.gen_moves(): 369 | match = re.match('([a-h][1-8])'*2, input('Your move: ')) 370 | if match: 371 | move = parse(match.group(1)), parse(match.group(2)) 372 | else: 373 | # Inform the user when invalid input (e.g. "help") is entered 374 | print("Please enter a move like g8f6") 375 | pos = pos.move(move) 376 | 377 | # After our move we rotate the board and print it again. 378 | # This allows us to see the effect of our move. 379 | print_pos(pos.rotate()) 380 | 381 | # Fire up the engine to look for a move. 382 | move, score = search(pos) 383 | if score <= -MATE_VALUE: 384 | print("You won") 385 | break 386 | if score >= MATE_VALUE: 387 | print("You lost") 388 | break 389 | 390 | # The black player moves from a rotated position, so we have to 391 | # 'back rotate' the move before printing it. 392 | print("My move:", render(119-move[0]) + render(119-move[1])) 393 | pos = pos.move(move) 394 | 395 | 396 | if __name__ == '__main__': 397 | main() 398 | -------------------------------------------------------------------------------- /training_1Kx7_2.5M_zero-mean_2016-04-08.log: -------------------------------------------------------------------------------- 1 | lenik@drifter~/Desktop/deep-murasaki$ ./train_keras.py 2 | Using gpu device 0: GeForce GTX 970 (CNMeM is disabled) 3 | Using Theano backend. 4 | compiling... 
5 | Epoch 1/500 6 | 2576998/2576998 [==============================] - 57s - loss: 58.3720 7 | Epoch 2/500 8 | 2576998/2576998 [==============================] - 59s - loss: 3.2944 9 | Epoch 3/500 10 | 2576998/2576998 [==============================] - 62s - loss: 3.1499 11 | Epoch 4/500 12 | 2576998/2576998 [==============================] - 58s - loss: 3.0634 13 | Epoch 5/500 14 | 2576998/2576998 [==============================] - 59s - loss: 2.9920 15 | Epoch 6/500 16 | 2576998/2576998 [==============================] - 58s - loss: 2.9313 17 | Epoch 7/500 18 | 2576998/2576998 [==============================] - 58s - loss: 2.8819 19 | Epoch 8/500 20 | 2576998/2576998 [==============================] - 58s - loss: 2.8355 21 | Epoch 9/500 22 | 2576998/2576998 [==============================] - 59s - loss: 2.7854 23 | Epoch 10/500 24 | 2576998/2576998 [==============================] - 58s - loss: 2.7397 25 | Epoch 11/500 26 | 2576998/2576998 [==============================] - 58s - loss: 2.6917 27 | Epoch 12/500 28 | 2576998/2576998 [==============================] - 59s - loss: 2.6462 29 | Epoch 13/500 30 | 2576998/2576998 [==============================] - 58s - loss: 2.5989 31 | Epoch 14/500 32 | 2576998/2576998 [==============================] - 58s - loss: 2.5525 33 | Epoch 15/500 34 | 2576998/2576998 [==============================] - 58s - loss: 2.5030 35 | Epoch 16/500 36 | 2576998/2576998 [==============================] - 58s - loss: 2.4572 37 | Epoch 17/500 38 | 2576998/2576998 [==============================] - 58s - loss: 2.4101 39 | Epoch 18/500 40 | 2576998/2576998 [==============================] - 58s - loss: 2.3614 41 | Epoch 19/500 42 | 2576998/2576998 [==============================] - 58s - loss: 2.3138 43 | Epoch 20/500 44 | 2576998/2576998 [==============================] - 58s - loss: 2.2679 45 | Epoch 21/500 46 | 2576998/2576998 [==============================] - 58s - loss: 2.2213 47 | Epoch 22/500 48 | 2576998/2576998
[==============================] - 58s - loss: 2.1760 49 | Epoch 23/500 50 | 2576998/2576998 [==============================] - 58s - loss: 2.1326 51 | Epoch 24/500 52 | 2576998/2576998 [==============================] - 58s - loss: 2.0880 53 | Epoch 25/500 54 | 2576998/2576998 [==============================] - 59s - loss: 2.0436 55 | Epoch 26/500 56 | 2576998/2576998 [==============================] - 58s - loss: 2.0037 57 | Epoch 27/500 58 | 2576998/2576998 [==============================] - 58s - loss: 1.9618 59 | Epoch 28/500 60 | 2576998/2576998 [==============================] - 58s - loss: 1.9228 61 | Epoch 29/500 62 | 2576998/2576998 [==============================] - 58s - loss: 1.8828 63 | Epoch 30/500 64 | 2576998/2576998 [==============================] - 58s - loss: 1.8439 65 | Epoch 31/500 66 | 2576998/2576998 [==============================] - 58s - loss: 1.8054 67 | Epoch 32/500 68 | 2576998/2576998 [==============================] - 58s - loss: 1.7698 69 | Epoch 33/500 70 | 2576998/2576998 [==============================] - 58s - loss: 1.7345 71 | Epoch 34/500 72 | 2576998/2576998 [==============================] - 58s - loss: 1.6997 73 | Epoch 35/500 74 | 2576998/2576998 [==============================] - 58s - loss: 1.6637 75 | Epoch 36/500 76 | 2576998/2576998 [==============================] - 58s - loss: 1.6305 77 | Epoch 37/500 78 | 2576998/2576998 [==============================] - 58s - loss: 1.6022 79 | Epoch 38/500 80 | 2576998/2576998 [==============================] - 58s - loss: 1.5705 81 | Epoch 39/500 82 | 2576998/2576998 [==============================] - 58s - loss: 1.5371 83 | Epoch 40/500 84 | 2576998/2576998 [==============================] - 58s - loss: 1.5085 85 | Epoch 41/500 86 | 2576998/2576998 [==============================] - 58s - loss: 1.4796 87 | Epoch 42/500 88 | 2576998/2576998 [==============================] - 58s - loss: 1.4503 89 | Epoch 43/500 90 | 2576998/2576998 [==============================] - 58s - loss: 
1.4242 91 | Epoch 44/500 92 | 2576998/2576998 [==============================] - 58s - loss: 1.3969 93 | Epoch 45/500 94 | 2576998/2576998 [==============================] - 58s - loss: 1.3719 95 | Epoch 46/500 96 | 2576998/2576998 [==============================] - 58s - loss: 1.3463 97 | Epoch 47/500 98 | 2576998/2576998 [==============================] - 58s - loss: 1.3211 99 | Epoch 48/500 100 | 2576998/2576998 [==============================] - 58s - loss: 1.3007 101 | Epoch 49/500 102 | 2576998/2576998 [==============================] - 58s - loss: 1.2782 103 | Epoch 50/500 104 | 2576998/2576998 [==============================] - 58s - loss: 1.2575 105 | Epoch 51/500 106 | 2576998/2576998 [==============================] - 58s - loss: 1.2349 107 | Epoch 52/500 108 | 2576998/2576998 [==============================] - 61s - loss: 1.2122 109 | Epoch 53/500 110 | 2576998/2576998 [==============================] - 62s - loss: 1.1940 111 | Epoch 54/500 112 | 2576998/2576998 [==============================] - 62s - loss: 1.1762 113 | Epoch 55/500 114 | 2576998/2576998 [==============================] - 62s - loss: 1.1578 115 | Epoch 56/500 116 | 2576998/2576998 [==============================] - 62s - loss: 1.1412 117 | Epoch 57/500 118 | 2576998/2576998 [==============================] - 62s - loss: 1.1211 119 | Epoch 58/500 120 | 2576998/2576998 [==============================] - 62s - loss: 1.1074 121 | Epoch 59/500 122 | 2576998/2576998 [==============================] - 62s - loss: 1.0892 123 | Epoch 60/500 124 | 2576998/2576998 [==============================] - 62s - loss: 1.0746 125 | Epoch 61/500 126 | 2576998/2576998 [==============================] - 62s - loss: 1.0622 127 | Epoch 62/500 128 | 2576998/2576998 [==============================] - 62s - loss: 1.0486 129 | Epoch 63/500 130 | 2576998/2576998 [==============================] - 62s - loss: 1.0371 131 | Epoch 64/500 132 | 2576998/2576998 [==============================] - 62s - loss: 1.0231 133 | 
Epoch 65/500 134 | 2576998/2576998 [==============================] - 62s - loss: 1.0078 135 | Epoch 66/500 136 | 2576998/2576998 [==============================] - 62s - loss: 0.9978 137 | Epoch 67/500 138 | 2576998/2576998 [==============================] - 62s - loss: 0.9862 139 | Epoch 68/500 140 | 2576998/2576998 [==============================] - 62s - loss: 0.9752 141 | Epoch 69/500 142 | 2576998/2576998 [==============================] - 61s - loss: 0.9644 143 | Epoch 70/500 144 | 2576998/2576998 [==============================] - 61s - loss: 0.9553 145 | Epoch 71/500 146 | 2576998/2576998 [==============================] - 62s - loss: 0.9430 147 | Epoch 72/500 148 | 2576998/2576998 [==============================] - 61s - loss: 0.9318 149 | Epoch 73/500 150 | 2576998/2576998 [==============================] - 61s - loss: 0.9258 151 | Epoch 74/500 152 | 2576998/2576998 [==============================] - 62s - loss: 0.9163 153 | Epoch 75/500 154 | 2576998/2576998 [==============================] - 61s - loss: 0.9065 155 | Epoch 76/500 156 | 2576998/2576998 [==============================] - 61s - loss: 0.8984 157 | Epoch 77/500 158 | 2576998/2576998 [==============================] - 61s - loss: 0.8908 159 | Epoch 78/500 160 | 2576998/2576998 [==============================] - 61s - loss: 0.8791 161 | Epoch 79/500 162 | 2576998/2576998 [==============================] - 61s - loss: 0.8751 163 | Epoch 80/500 164 | 2576998/2576998 [==============================] - 61s - loss: 0.8658 165 | Epoch 81/500 166 | 2576998/2576998 [==============================] - 61s - loss: 0.8588 167 | Epoch 82/500 168 | 2576998/2576998 [==============================] - 62s - loss: 0.8540 169 | Epoch 83/500 170 | 2576998/2576998 [==============================] - 62s - loss: 0.8505 171 | Epoch 84/500 172 | 2576998/2576998 [==============================] - 62s - loss: 0.8423 173 | Epoch 85/500 174 | 2576998/2576998 [==============================] - 62s - loss: 0.8341 175 | 
Epoch 86/500 176 | 2576998/2576998 [==============================] - 61s - loss: 0.8261 177 | Epoch 87/500 178 | 2576998/2576998 [==============================] - 61s - loss: 0.8224 179 | Epoch 88/500 180 | 2576998/2576998 [==============================] - 61s - loss: 0.8146 181 | Epoch 89/500 182 | 2576998/2576998 [==============================] - 61s - loss: 0.8079 183 | Epoch 90/500 184 | 2576998/2576998 [==============================] - 61s - loss: 0.8048 185 | Epoch 91/500 186 | 2576998/2576998 [==============================] - 61s - loss: 0.7988 187 | Epoch 92/500 188 | 2576998/2576998 [==============================] - 61s - loss: 0.7943 189 | Epoch 93/500 190 | 2576998/2576998 [==============================] - 61s - loss: 0.7902 191 | Epoch 94/500 192 | 2576998/2576998 [==============================] - 61s - loss: 0.7820 193 | Epoch 95/500 194 | 2576998/2576998 [==============================] - 64s - loss: 0.7812 195 | Epoch 96/500 196 | 2576998/2576998 [==============================] - 65s - loss: 0.7774 197 | Epoch 97/500 198 | 2576998/2576998 [==============================] - 63s - loss: 0.7707 199 | Epoch 98/500 200 | 2576998/2576998 [==============================] - 64s - loss: 0.7666 201 | Epoch 99/500 202 | 2576998/2576998 [==============================] - 64s - loss: 0.7602 203 | Epoch 100/500 204 | 2576998/2576998 [==============================] - 65s - loss: 0.7590 205 | Epoch 101/500 206 | 2576998/2576998 [==============================] - 65s - loss: 0.7542 207 | Epoch 102/500 208 | 2576998/2576998 [==============================] - 65s - loss: 0.7515 209 | Epoch 103/500 210 | 2576998/2576998 [==============================] - 63s - loss: 0.7458 211 | Epoch 104/500 212 | 2576998/2576998 [==============================] - 62s - loss: 0.7422 213 | Epoch 105/500 214 | 2576998/2576998 [==============================] - 62s - loss: 0.7427 215 | Epoch 106/500 216 | 2576998/2576998 [==============================] - 62s - loss: 0.7401 217 
| Epoch 107/500 218 | 2576998/2576998 [==============================] - 62s - loss: 0.7340 219 | Epoch 108/500 220 | 2576998/2576998 [==============================] - 61s - loss: 0.7303 221 | Epoch 109/500 222 | 2576998/2576998 [==============================] - 61s - loss: 0.7276 223 | Epoch 110/500 224 | 2576998/2576998 [==============================] - 62s - loss: 0.7222 225 | Epoch 111/500 226 | 2576998/2576998 [==============================] - 61s - loss: 0.7201 227 | Epoch 112/500 228 | 2576998/2576998 [==============================] - 61s - loss: 0.7184 229 | Epoch 113/500 230 | 2576998/2576998 [==============================] - 61s - loss: 0.7116 231 | Epoch 114/500 232 | 2576998/2576998 [==============================] - 62s - loss: 0.7108 233 | Epoch 115/500 234 | 2576998/2576998 [==============================] - 62s - loss: 0.7077 235 | Epoch 116/500 236 | 2576998/2576998 [==============================] - 62s - loss: 0.7022 237 | Epoch 117/500 238 | 2576998/2576998 [==============================] - 61s - loss: 0.6988 239 | Epoch 118/500 240 | 2576998/2576998 [==============================] - 61s - loss: 0.6989 241 | Epoch 119/500 242 | 2576998/2576998 [==============================] - 62s - loss: 0.6981 243 | Epoch 120/500 244 | 2576998/2576998 [==============================] - 62s - loss: 0.6961 245 | Epoch 121/500 246 | 2576998/2576998 [==============================] - 62s - loss: 0.6923 247 | Epoch 122/500 248 | 2576998/2576998 [==============================] - 62s - loss: 0.6905 249 | Epoch 123/500 250 | 2576998/2576998 [==============================] - 62s - loss: 0.6899 251 | Epoch 124/500 252 | 2576998/2576998 [==============================] - 62s - loss: 0.6866 253 | Epoch 125/500 254 | 2576998/2576998 [==============================] - 62s - loss: 0.6802 255 | Epoch 126/500 256 | 2576998/2576998 [==============================] - 62s - loss: 0.6788 257 | Epoch 127/500 258 | 2576998/2576998 [==============================] - 61s - 
loss: 0.6756 259 | Epoch 128/500 260 | 2576998/2576998 [==============================] - 61s - loss: 0.6750 261 | Epoch 129/500 262 | 2576998/2576998 [==============================] - 61s - loss: 0.6750 263 | Epoch 130/500 264 | 2576998/2576998 [==============================] - 61s - loss: 0.6724 265 | Epoch 131/500 266 | 2576998/2576998 [==============================] - 62s - loss: 0.6712 267 | Epoch 132/500 268 | 2576998/2576998 [==============================] - 61s - loss: 0.6687 269 | Epoch 133/500 270 | 2576998/2576998 [==============================] - 61s - loss: 0.6672 271 | Epoch 134/500 272 | 2576998/2576998 [==============================] - 61s - loss: 0.6658 273 | Epoch 135/500 274 | 2576998/2576998 [==============================] - 61s - loss: 0.6651 275 | Epoch 136/500 276 | 2576998/2576998 [==============================] - 61s - loss: 0.6619 277 | Epoch 137/500 278 | 2576998/2576998 [==============================] - 61s - loss: 0.6631 279 | Epoch 138/500 280 | 2576998/2576998 [==============================] - 61s - loss: 0.6576 281 | Epoch 139/500 282 | 2576998/2576998 [==============================] - 61s - loss: 0.6537 283 | Epoch 140/500 284 | 2576998/2576998 [==============================] - 61s - loss: 0.6516 285 | Epoch 141/500 286 | 2576998/2576998 [==============================] - 61s - loss: 0.6552 287 | Epoch 142/500 288 | 2576998/2576998 [==============================] - 62s - loss: 0.6490 289 | Epoch 143/500 290 | 2576998/2576998 [==============================] - 61s - loss: 0.6484 291 | Epoch 144/500 292 | 2576998/2576998 [==============================] - 61s - loss: 0.6465 293 | Epoch 145/500 294 | 2576998/2576998 [==============================] - 61s - loss: 0.6447 295 | Epoch 146/500 296 | 2576998/2576998 [==============================] - 61s - loss: 0.6399 297 | Epoch 147/500 298 | 2576998/2576998 [==============================] - 61s - loss: 0.6424 299 | Epoch 148/500 300 | 2576998/2576998 
[==============================] - 61s - loss: 0.6396 301 | Epoch 149/500 302 | 2576998/2576998 [==============================] - 62s - loss: 0.6331 303 | Epoch 150/500 304 | 2576998/2576998 [==============================] - 61s - loss: 0.6344 305 | Epoch 151/500 306 | 2576998/2576998 [==============================] - 61s - loss: 0.6316 307 | Epoch 152/500 308 | 2576998/2576998 [==============================] - 61s - loss: 0.6321 309 | Epoch 153/500 310 | 2576998/2576998 [==============================] - 61s - loss: 0.6295 311 | Epoch 154/500 312 | 2576998/2576998 [==============================] - 61s - loss: 0.6287 313 | Epoch 155/500 314 | 2576998/2576998 [==============================] - 61s - loss: 0.6269 315 | Epoch 156/500 316 | 2576998/2576998 [==============================] - 61s - loss: 0.6263 317 | Epoch 157/500 318 | 2576998/2576998 [==============================] - 61s - loss: 0.6256 319 | Epoch 158/500 320 | 2576998/2576998 [==============================] - 61s - loss: 0.6275 321 | Epoch 159/500 322 | 2576998/2576998 [==============================] - 61s - loss: 0.6231 323 | Epoch 160/500 324 | 2576998/2576998 [==============================] - 61s - loss: 0.6234 325 | Epoch 161/500 326 | 2576998/2576998 [==============================] - 61s - loss: 0.6214 327 | Epoch 162/500 328 | 2576998/2576998 [==============================] - 61s - loss: 0.6188 329 | Epoch 163/500 330 | 2576998/2576998 [==============================] - 61s - loss: 0.6169 331 | Epoch 164/500 332 | 2576998/2576998 [==============================] - 61s - loss: 0.6179 333 | Epoch 165/500 334 | 2576998/2576998 [==============================] - 61s - loss: 0.6121 335 | Epoch 166/500 336 | 2576998/2576998 [==============================] - 61s - loss: 0.6148 337 | Epoch 167/500 338 | 2576998/2576998 [==============================] - 62s - loss: 0.6126 339 | Epoch 168/500 340 | 2576998/2576998 [==============================] - 61s - loss: 0.6099 341 | Epoch 169/500 342 | 
2576998/2576998 [==============================] - 61s - loss: 0.6074 343 | Epoch 170/500 344 | 2576998/2576998 [==============================] - 61s - loss: 0.6063 345 | Epoch 171/500 346 | 2576998/2576998 [==============================] - 61s - loss: 0.6110 347 | Epoch 172/500 348 | 2576998/2576998 [==============================] - 61s - loss: 0.6099 349 | Epoch 173/500 350 | 2576998/2576998 [==============================] - 61s - loss: 0.6074 351 | Epoch 174/500 352 | 2576998/2576998 [==============================] - 61s - loss: 0.6081 353 | Epoch 175/500 354 | 2576998/2576998 [==============================] - 61s - loss: 0.6098 355 | Epoch 176/500 356 | 2576998/2576998 [==============================] - 61s - loss: 0.6076 357 | Epoch 177/500 358 | 2576998/2576998 [==============================] - 61s - loss: 0.6076 359 | Epoch 178/500 360 | 2576998/2576998 [==============================] - 61s - loss: 0.6015 361 | Epoch 179/500 362 | 2576998/2576998 [==============================] - 61s - loss: 0.5953 363 | Epoch 180/500 364 | 2576998/2576998 [==============================] - 61s - loss: 0.5989 365 | Epoch 181/500 366 | 2576998/2576998 [==============================] - 61s - loss: 0.5952 367 | Epoch 182/500 368 | 2576998/2576998 [==============================] - 61s - loss: 0.5923 369 | Epoch 183/500 370 | 2576998/2576998 [==============================] - 61s - loss: 0.5960 371 | Epoch 184/500 372 | 2576998/2576998 [==============================] - 61s - loss: 0.5946 373 | Epoch 185/500 374 | 2576998/2576998 [==============================] - 61s - loss: 0.5896 375 | Epoch 186/500 376 | 2576998/2576998 [==============================] - 62s - loss: 0.5922 377 | Epoch 187/500 378 | 2576998/2576998 [==============================] - 62s - loss: 0.5898 379 | Epoch 188/500 380 | 2576998/2576998 [==============================] - 62s - loss: 0.5906 381 | Epoch 189/500 382 | 2576998/2576998 [==============================] - 62s - loss: 0.5862 383 | 
Epoch 190/500 384 | 2576998/2576998 [==============================] - 62s - loss: 0.5923 385 | Epoch 191/500 386 | 2576998/2576998 [==============================] - 62s - loss: 0.5873 387 | Epoch 192/500 388 | 2576998/2576998 [==============================] - 62s - loss: 0.5821 389 | Epoch 193/500 390 | 2576998/2576998 [==============================] - 62s - loss: 0.5852 391 | Epoch 194/500 392 | 2576998/2576998 [==============================] - 62s - loss: 0.5885 393 | Epoch 195/500 394 | 2576998/2576998 [==============================] - 62s - loss: 0.5832 395 | Epoch 196/500 396 | 2576998/2576998 [==============================] - 61s - loss: 0.5860 397 | Epoch 197/500 398 | 2576998/2576998 [==============================] - 61s - loss: 0.5831 399 | Epoch 198/500 400 | 2576998/2576998 [==============================] - 62s - loss: 0.5808 401 | Epoch 199/500 402 | 2576998/2576998 [==============================] - 62s - loss: 0.5808 403 | Epoch 200/500 404 | 2576998/2576998 [==============================] - 62s - loss: 0.5791 405 | Epoch 201/500 406 | 2576998/2576998 [==============================] - 62s - loss: 0.5779 407 | Epoch 202/500 408 | 2576998/2576998 [==============================] - 62s - loss: 0.5768 409 | Epoch 203/500 410 | 2576998/2576998 [==============================] - 62s - loss: 0.5771 411 | Epoch 204/500 412 | 2576998/2576998 [==============================] - 62s - loss: 0.5780 413 | Epoch 205/500 414 | 2576998/2576998 [==============================] - 62s - loss: 0.5790 415 | Epoch 206/500 416 | 2576998/2576998 [==============================] - 62s - loss: 0.5717 417 | Epoch 207/500 418 | 2576998/2576998 [==============================] - 61s - loss: 0.5743 419 | Epoch 208/500 420 | 2576998/2576998 [==============================] - 62s - loss: 0.5736 421 | Epoch 209/500 422 | 2576998/2576998 [==============================] - 62s - loss: 0.5745 423 | Epoch 210/500 424 | 2576998/2576998 [==============================] - 62s - 
loss: 0.5723 425 | Epoch 211/500 426 | 2576998/2576998 [==============================] - 62s - loss: 0.5757 427 | Epoch 212/500 428 | 2576998/2576998 [==============================] - 61s - loss: 0.5748 429 | Epoch 213/500 430 | 2576998/2576998 [==============================] - 62s - loss: 0.5691 431 | Epoch 214/500 432 | 2576998/2576998 [==============================] - 62s - loss: 0.5672 433 | Epoch 215/500 434 | 2576998/2576998 [==============================] - 61s - loss: 0.5684 435 | Epoch 216/500 436 | 2576998/2576998 [==============================] - 62s - loss: 0.5702 437 | Epoch 217/500 438 | 2576998/2576998 [==============================] - 62s - loss: 0.5695 439 | Epoch 218/500 440 | 2576998/2576998 [==============================] - 62s - loss: 0.5654 441 | Epoch 219/500 442 | 2576998/2576998 [==============================] - 62s - loss: 0.5632 443 | Epoch 220/500 444 | 2576998/2576998 [==============================] - 62s - loss: 0.5699 445 | Epoch 221/500 446 | 2576998/2576998 [==============================] - 62s - loss: 0.5693 447 | Epoch 222/500 448 | 2576998/2576998 [==============================] - 62s - loss: 0.5642 449 | Epoch 223/500 450 | 2576998/2576998 [==============================] - 61s - loss: 0.5636 451 | Epoch 224/500 452 | 2576998/2576998 [==============================] - 62s - loss: 0.5607 453 | Epoch 225/500 454 | 2576998/2576998 [==============================] - 62s - loss: 0.5643 455 | Epoch 226/500 456 | 2576998/2576998 [==============================] - 62s - loss: 0.5631 457 | Epoch 227/500 458 | 2576998/2576998 [==============================] - 61s - loss: 0.5603 459 | Epoch 228/500 460 | 2576998/2576998 [==============================] - 62s - loss: 0.5578 461 | Epoch 229/500 462 | 2576998/2576998 [==============================] - 62s - loss: 0.5628 463 | Epoch 230/500 464 | 2576998/2576998 [==============================] - 62s - loss: 0.5604 465 | Epoch 231/500 466 | 2576998/2576998 
[==============================] - 62s - loss: 0.5639 467 | Epoch 232/500 468 | 2576998/2576998 [==============================] - 62s - loss: 0.5628 469 | Epoch 233/500 470 | 2576998/2576998 [==============================] - 62s - loss: 0.5605 471 | Epoch 234/500 472 | 2576998/2576998 [==============================] - 62s - loss: 0.5584 473 | Epoch 235/500 474 | 2576998/2576998 [==============================] - 62s - loss: 0.5624 475 | Epoch 236/500 476 | 2576998/2576998 [==============================] - 62s - loss: 0.5575 477 | Epoch 237/500 478 | 2576998/2576998 [==============================] - 62s - loss: 0.5553 479 | Epoch 238/500 480 | 2576998/2576998 [==============================] - 62s - loss: 0.5555 481 | Epoch 239/500 482 | 2576998/2576998 [==============================] - 61s - loss: 0.5522 483 | Epoch 240/500 484 | 2576998/2576998 [==============================] - 61s - loss: 0.5500 485 | Epoch 241/500 486 | 2576998/2576998 [==============================] - 62s - loss: 0.5516 487 | Epoch 242/500 488 | 2576998/2576998 [==============================] - 61s - loss: 0.5538 489 | Epoch 243/500 490 | 2576998/2576998 [==============================] - 62s - loss: 0.5502 491 | Epoch 244/500 492 | 2576998/2576998 [==============================] - 62s - loss: 0.5504 493 | Epoch 245/500 494 | 2576998/2576998 [==============================] - 62s - loss: 0.5531 495 | Epoch 246/500 496 | 2576998/2576998 [==============================] - 61s - loss: 0.5520 497 | Epoch 247/500 498 | 2576998/2576998 [==============================] - 61s - loss: 0.5529 499 | Epoch 248/500 500 | 2576998/2576998 [==============================] - 61s - loss: 0.5506 501 | Epoch 249/500 502 | 2576998/2576998 [==============================] - 61s - loss: 0.5510 503 | Epoch 250/500 504 | 2576998/2576998 [==============================] - 62s - loss: 0.5487 505 | Epoch 251/500 506 | 2576998/2576998 [==============================] - 61s - loss: 0.5495 507 | Epoch 252/500 508 | 
2576998/2576998 [==============================] - 61s - loss: 0.5498 509 | Epoch 253/500 510 | 2576998/2576998 [==============================] - 62s - loss: 0.5455 511 | Epoch 254/500 512 | 2576998/2576998 [==============================] - 62s - loss: 0.5438 513 | Epoch 255/500 514 | 2576998/2576998 [==============================] - 61s - loss: 0.5434 515 | Epoch 256/500 516 | 2576998/2576998 [==============================] - 61s - loss: 0.5502 517 | Epoch 257/500 518 | 2576998/2576998 [==============================] - 62s - loss: 0.5447 519 | Epoch 258/500 520 | 2576998/2576998 [==============================] - 61s - loss: 0.5483 521 | Epoch 259/500 522 | 2576998/2576998 [==============================] - 62s - loss: 0.5462 523 | Epoch 260/500 524 | 2576998/2576998 [==============================] - 62s - loss: 0.5413 525 | Epoch 261/500 526 | 2576998/2576998 [==============================] - 61s - loss: 0.5417 527 | Epoch 262/500 528 | 2576998/2576998 [==============================] - 61s - loss: 0.5422 529 | Epoch 263/500 530 | 2576998/2576998 [==============================] - 61s - loss: 0.5450 531 | Epoch 264/500 532 | 2576998/2576998 [==============================] - 62s - loss: 0.5400 533 | Epoch 265/500 534 | 2576998/2576998 [==============================] - 62s - loss: 0.5389 535 | Epoch 266/500 536 | 2576998/2576998 [==============================] - 62s - loss: 0.5412 537 | Epoch 267/500 538 | 2576998/2576998 [==============================] - 62s - loss: 0.5429 539 | Epoch 268/500 540 | 2576998/2576998 [==============================] - 62s - loss: 0.5412 541 | Epoch 269/500 542 | 2576998/2576998 [==============================] - 61s - loss: 0.5444 543 | Epoch 270/500 544 | 2576998/2576998 [==============================] - 62s - loss: 0.5443 545 | Epoch 271/500 546 | 2576998/2576998 [==============================] - 62s - loss: 0.5388 547 | Epoch 272/500 548 | 2576998/2576998 [==============================] - 60s - loss: 0.5388 549 | 
Epoch 273/500 550 | 2576998/2576998 [==============================] - 59s - loss: 0.5397 551 | Epoch 274/500 552 | 2576998/2576998 [==============================] - 58s - loss: 0.5396 553 | Epoch 275/500 554 | 2576998/2576998 [==============================] - 59s - loss: 0.5364 555 | Epoch 276/500 556 | 2576998/2576998 [==============================] - 59s - loss: 0.5383 557 | Epoch 277/500 558 | 2576998/2576998 [==============================] - 59s - loss: 0.5378 559 | Epoch 278/500 560 | 2576998/2576998 [==============================] - 58s - loss: 0.5369 561 | Epoch 279/500 562 | 2576998/2576998 [==============================] - 58s - loss: 0.5372 563 | Epoch 280/500 564 | 2576998/2576998 [==============================] - 58s - loss: 0.5376 565 | Epoch 281/500 566 | 2576998/2576998 [==============================] - 58s - loss: 0.5350 567 | Epoch 282/500 568 | 2576998/2576998 [==============================] - 58s - loss: 0.5343 569 | Epoch 283/500 570 | 2576998/2576998 [==============================] - 58s - loss: 0.5334 571 | Epoch 284/500 572 | 2576998/2576998 [==============================] - 59s - loss: 0.5305 573 | Epoch 285/500 574 | 2576998/2576998 [==============================] - 58s - loss: 0.5347 575 | Epoch 286/500 576 | 2576998/2576998 [==============================] - 58s - loss: 0.5319 577 | Epoch 287/500 578 | 2576998/2576998 [==============================] - 58s - loss: 0.5298 579 | Epoch 288/500 580 | 2576998/2576998 [==============================] - 58s - loss: 0.5319 581 | Epoch 289/500 582 | 2576998/2576998 [==============================] - 58s - loss: 0.5327 583 | Epoch 290/500 584 | 2576998/2576998 [==============================] - 58s - loss: 0.5321 585 | Epoch 291/500 586 | 2576998/2576998 [==============================] - 58s - loss: 0.5273 587 | Epoch 292/500 588 | 2576998/2576998 [==============================] - 58s - loss: 0.5270 589 | Epoch 293/500 590 | 2576998/2576998 [==============================] - 58s - 
loss: 0.5300 591 | Epoch 294/500 592 | 2576998/2576998 [==============================] - 59s - loss: 0.5327 593 | Epoch 295/500 594 | 2576998/2576998 [==============================] - 58s - loss: 0.5252 595 | Epoch 296/500 596 | 2576998/2576998 [==============================] - 58s - loss: 0.5284 597 | Epoch 297/500 598 | 2576998/2576998 [==============================] - 58s - loss: 0.5292 599 | Epoch 298/500 600 | 2576998/2576998 [==============================] - 58s - loss: 0.5301 601 | Epoch 299/500 602 | 2576998/2576998 [==============================] - 58s - loss: 0.5317 603 | Epoch 300/500 604 | 2576998/2576998 [==============================] - 58s - loss: 0.5316 605 | Epoch 301/500 606 | 2576998/2576998 [==============================] - 58s - loss: 0.5298 607 | Epoch 302/500 608 | 2576998/2576998 [==============================] - 58s - loss: 0.5264 609 | Epoch 303/500 610 | 2576998/2576998 [==============================] - 58s - loss: 0.5221 611 | Epoch 304/500 612 | 2576998/2576998 [==============================] - 58s - loss: 0.5195 613 | Epoch 305/500 614 | 2576998/2576998 [==============================] - 58s - loss: 0.5216 615 | Epoch 306/500 616 | 2576998/2576998 [==============================] - 58s - loss: 0.5220 617 | Epoch 307/500 618 | 2576998/2576998 [==============================] - 58s - loss: 0.5211 619 | Epoch 308/500 620 | 2576998/2576998 [==============================] - 58s - loss: 0.5220 621 | Epoch 309/500 622 | 2576998/2576998 [==============================] - 58s - loss: 0.5237 623 | Epoch 310/500 624 | 2576998/2576998 [==============================] - 58s - loss: 0.5231 625 | Epoch 311/500 626 | 2576998/2576998 [==============================] - 58s - loss: 0.5191 627 | Epoch 312/500 628 | 2576998/2576998 [==============================] - 58s - loss: 0.5218 629 | Epoch 313/500 630 | 2576998/2576998 [==============================] - 58s - loss: 0.5244 631 | Epoch 314/500 632 | 2576998/2576998 
[==============================] - 58s - loss: 0.5244 633 | Epoch 315/500 634 | 2576998/2576998 [==============================] - 58s - loss: 0.5240 635 | Epoch 316/500 636 | 2576998/2576998 [==============================] - 58s - loss: 0.5247 637 | Epoch 317/500 638 | 2576998/2576998 [==============================] - 58s - loss: 0.5221 639 | Epoch 318/500 640 | 2576998/2576998 [==============================] - 58s - loss: 0.5243 641 | Epoch 319/500 642 | 2576998/2576998 [==============================] - 58s - loss: 0.5221 643 | Epoch 320/500 644 | 2576998/2576998 [==============================] - 58s - loss: 0.5226 645 | Epoch 321/500 646 | 2576998/2576998 [==============================] - 58s - loss: 0.5238 647 | Epoch 322/500 648 | 2576998/2576998 [==============================] - 58s - loss: 0.5227 649 | Epoch 323/500 650 | 2576998/2576998 [==============================] - 58s - loss: 0.5184 651 | Epoch 324/500 652 | 2576998/2576998 [==============================] - 58s - loss: 0.5175 653 | Epoch 325/500 654 | 2576998/2576998 [==============================] - 58s - loss: 0.5203 655 | Epoch 326/500 656 | 2576998/2576998 [==============================] - 58s - loss: 0.5173 657 | Epoch 327/500 658 | 2576998/2576998 [==============================] - 58s - loss: 0.5206 659 | Epoch 328/500 660 | 2576998/2576998 [==============================] - 58s - loss: 0.5186 661 | Epoch 329/500 662 | 2576998/2576998 [==============================] - 58s - loss: 0.5165 663 | Epoch 330/500 664 | 2576998/2576998 [==============================] - 58s - loss: 0.5140 665 | Epoch 331/500 666 | 2576998/2576998 [==============================] - 58s - loss: 0.5143 667 | Epoch 332/500 668 | 2576998/2576998 [==============================] - 58s - loss: 0.5160 669 | Epoch 333/500 670 | 2576998/2576998 [==============================] - 58s - loss: 0.5171 671 | Epoch 334/500 672 | 2576998/2576998 [==============================] - 58s - loss: 0.5169 673 | Epoch 335/500 674 | 
2576998/2576998 [==============================] - 58s - loss: 0.5118
Epoch 336/500
2576998/2576998 [==============================] - 58s - loss: 0.5146
Epoch 337/500
2576998/2576998 [==============================] - 58s - loss: 0.5164
Epoch 338/500
2576998/2576998 [==============================] - 58s - loss: 0.5183
Epoch 339/500
2576998/2576998 [==============================] - 58s - loss: 0.5119
Epoch 340/500
2576998/2576998 [==============================] - 58s - loss: 0.5148
Epoch 341/500
2576998/2576998 [==============================] - 58s - loss: 0.5174
Epoch 342/500
2576998/2576998 [==============================] - 58s - loss: 0.5172
Epoch 343/500
2576998/2576998 [==============================] - 58s - loss: 0.5153
Epoch 344/500
2576998/2576998 [==============================] - 58s - loss: 0.5166
Epoch 345/500
2576998/2576998 [==============================] - 58s - loss: 0.5191
Epoch 346/500
2576998/2576998 [==============================] - 58s - loss: 0.5187
Epoch 347/500
2576998/2576998 [==============================] - 58s - loss: 0.5146
Epoch 348/500
2576998/2576998 [==============================] - 58s - loss: 0.5125
Epoch 349/500
2576998/2576998 [==============================] - 58s - loss: 0.5136
Epoch 350/500
2576998/2576998 [==============================] - 58s - loss: 0.5185
Epoch 351/500
2576998/2576998 [==============================] - 58s - loss: 0.5151
Epoch 352/500
2576998/2576998 [==============================] - 58s - loss: 0.5123
Epoch 353/500
2576998/2576998 [==============================] - 58s - loss: 0.5114
Epoch 354/500
2576998/2576998 [==============================] - 58s - loss: 0.5100
Epoch 355/500
2576998/2576998 [==============================] - 58s - loss: 0.5070
Epoch 356/500
2576998/2576998 [==============================] - 58s - loss: 0.5093
Epoch 357/500
2576998/2576998 [==============================] - 58s - loss: 0.5105
Epoch 358/500
2576998/2576998 [==============================] - 58s - loss: 0.5118
Epoch 359/500
2576998/2576998 [==============================] - 58s - loss: 0.5096
Epoch 360/500
2576998/2576998 [==============================] - 58s - loss: 0.5047
Epoch 361/500
2576998/2576998 [==============================] - 58s - loss: 0.5116
Epoch 362/500
2576998/2576998 [==============================] - 58s - loss: 0.5125
Epoch 363/500
2576998/2576998 [==============================] - 59s - loss: 0.5087
Epoch 364/500
2576998/2576998 [==============================] - 58s - loss: 0.5074
Epoch 365/500
2576998/2576998 [==============================] - 58s - loss: 0.5081
Epoch 366/500
2576998/2576998 [==============================] - 58s - loss: 0.5068
Epoch 367/500
2576998/2576998 [==============================] - 58s - loss: 0.5083
Epoch 368/500
2576998/2576998 [==============================] - 58s - loss: 0.5106
Epoch 369/500
2576998/2576998 [==============================] - 58s - loss: 0.5095
Epoch 370/500
2576998/2576998 [==============================] - 58s - loss: 0.5052
Epoch 371/500
2576998/2576998 [==============================] - 58s - loss: 0.5031
Epoch 372/500
2576998/2576998 [==============================] - 58s - loss: 0.5030
Epoch 373/500
2576998/2576998 [==============================] - 58s - loss: 0.5021
Epoch 374/500
2576998/2576998 [==============================] - 58s - loss: 0.5019
Epoch 375/500
2576998/2576998 [==============================] - 58s - loss: 0.5035
Epoch 376/500
2576998/2576998 [==============================] - 58s - loss: 0.5045
Epoch 377/500
2576998/2576998 [==============================] - 58s - loss: 0.5051
Epoch 378/500
2576998/2576998 [==============================] - 58s - loss: 0.5052
Epoch 379/500
2576998/2576998 [==============================] - 58s - loss: 0.5071
Epoch 380/500
2576998/2576998 [==============================] - 58s - loss: 0.5068
Epoch 381/500
2576998/2576998 [==============================] - 58s - loss: 0.5051
Epoch 382/500
2576998/2576998 [==============================] - 58s - loss: 0.5058
Epoch 383/500
2576998/2576998 [==============================] - 58s - loss: 0.5034
Epoch 384/500
2576998/2576998 [==============================] - 58s - loss: 0.5013
Epoch 385/500
2576998/2576998 [==============================] - 58s - loss: 0.5010
Epoch 386/500
2576998/2576998 [==============================] - 58s - loss: 0.5045
Epoch 387/500
2576998/2576998 [==============================] - 58s - loss: 0.5063
Epoch 388/500
2576998/2576998 [==============================] - 58s - loss: 0.5082
Epoch 389/500
2576998/2576998 [==============================] - 58s - loss: 0.5025
Epoch 390/500
2576998/2576998 [==============================] - 58s - loss: 0.5051
Epoch 391/500
2576998/2576998 [==============================] - 58s - loss: 0.5008
Epoch 392/500
2576998/2576998 [==============================] - 58s - loss: 0.5022
Epoch 393/500
2576998/2576998 [==============================] - 58s - loss: 0.4991
Epoch 394/500
2576998/2576998 [==============================] - 58s - loss: 0.4959
Epoch 395/500
2576998/2576998 [==============================] - 58s - loss: 0.4996
Epoch 396/500
2576998/2576998 [==============================] - 58s - loss: 0.5038
Epoch 397/500
2576998/2576998 [==============================] - 58s - loss: 0.5018
Epoch 398/500
2576998/2576998 [==============================] - 58s - loss: 0.5005
Epoch 399/500
2576998/2576998 [==============================] - 58s - loss: 0.5034
Epoch 400/500
2576998/2576998 [==============================] - 58s - loss: 0.4999
Epoch 401/500
2576998/2576998 [==============================] - 58s - loss: 0.4979
Epoch 402/500
2576998/2576998 [==============================] - 58s - loss: 0.4943
Epoch 403/500
2576998/2576998 [==============================] - 58s - loss: 0.4943
Epoch 404/500
2576998/2576998 [==============================] - 58s - loss: 0.4968
Epoch 405/500
2576998/2576998 [==============================] - 58s - loss: 0.4984
Epoch 406/500
2576998/2576998 [==============================] - 58s - loss: 0.5001
Epoch 407/500
2576998/2576998 [==============================] - 58s - loss: 0.5001
Epoch 408/500
2576998/2576998 [==============================] - 58s - loss: 0.5011
Epoch 409/500
2576998/2576998 [==============================] - 58s - loss: 0.5042
Epoch 410/500
2576998/2576998 [==============================] - 58s - loss: 0.4996
Epoch 411/500
2576998/2576998 [==============================] - 58s - loss: 0.5010
Epoch 412/500
2576998/2576998 [==============================] - 58s - loss: 0.4971
Epoch 413/500
2576998/2576998 [==============================] - 58s - loss: 0.4964
Epoch 414/500
2576998/2576998 [==============================] - 58s - loss: 0.4988
Epoch 415/500
2576998/2576998 [==============================] - 58s - loss: 0.5005
Epoch 416/500
2576998/2576998 [==============================] - 59s - loss: 0.4981
Epoch 417/500
2576998/2576998 [==============================] - 59s - loss: 0.4986
Epoch 418/500
2576998/2576998 [==============================] - 59s - loss: 0.5017
Epoch 419/500
2576998/2576998 [==============================] - 58s - loss: 0.5022
Epoch 420/500
2576998/2576998 [==============================] - 58s - loss: 0.5011
Epoch 421/500
2576998/2576998 [==============================] - 59s - loss: 0.5030
Epoch 422/500
2576998/2576998 [==============================] - 58s - loss: 0.4990
Epoch 423/500
2576998/2576998 [==============================] - 58s - loss: 0.4973
Epoch 424/500
2576998/2576998 [==============================] - 58s - loss: 0.4948
Epoch 425/500
2576998/2576998 [==============================] - 58s - loss: 0.4984
Epoch 426/500
2576998/2576998 [==============================] - 58s - loss: 0.5000
Epoch 427/500
2576998/2576998 [==============================] - 59s - loss: 0.4980
Epoch 428/500
2576998/2576998 [==============================] - 59s - loss: 0.5030
Epoch 429/500
2576998/2576998 [==============================] - 58s - loss: 0.5046
Epoch 430/500
2576998/2576998 [==============================] - 58s - loss: 0.4956
Epoch 431/500
2576998/2576998 [==============================] - 58s - loss: 0.4946
Epoch 432/500
2576998/2576998 [==============================] - 58s - loss: 0.4997
Epoch 433/500
2576998/2576998 [==============================] - 58s - loss: 0.4957
Epoch 434/500
2576998/2576998 [==============================] - 58s - loss: 0.4930
Epoch 435/500
2576998/2576998 [==============================] - 58s - loss: 0.4911
Epoch 436/500
2576998/2576998 [==============================] - 58s - loss: 0.4926
Epoch 437/500
2576998/2576998 [==============================] - 58s - loss: 0.4956
Epoch 438/500
2576998/2576998 [==============================] - 58s - loss: 0.4979
Epoch 439/500
2576998/2576998 [==============================] - 58s - loss: 0.4931
Epoch 440/500
2576998/2576998 [==============================] - 58s - loss: 0.4915
Epoch 441/500
2576998/2576998 [==============================] - 58s - loss: 0.4899
Epoch 442/500
2576998/2576998 [==============================] - 58s - loss: 0.4889
Epoch 443/500
2576998/2576998 [==============================] - 58s - loss: 0.4911
Epoch 444/500
2576998/2576998 [==============================] - 58s - loss: 0.4903
Epoch 445/500
2576998/2576998 [==============================] - 58s - loss: 0.4919
Epoch 446/500
2576998/2576998 [==============================] - 58s - loss: 0.4857
Epoch 447/500
2576998/2576998 [==============================] - 58s - loss: 0.4844
Epoch 448/500
2576998/2576998 [==============================] - 58s - loss: 0.4912
Epoch 449/500
2576998/2576998 [==============================] - 58s - loss: 0.4922
Epoch 450/500
2576998/2576998 [==============================] - 58s - loss: 0.4945
Epoch 451/500
2576998/2576998 [==============================] - 58s - loss: 0.4958
Epoch 452/500
2576998/2576998 [==============================] - 58s - loss: 0.4967
Epoch 453/500
2576998/2576998 [==============================] - 59s - loss: 0.4982
Epoch 454/500
2576998/2576998 [==============================] - 58s - loss: 0.4924
Epoch 455/500
2576998/2576998 [==============================] - 59s - loss: 0.4874
Epoch 456/500
2576998/2576998 [==============================] - 58s - loss: 0.4907
Epoch 457/500
2576998/2576998 [==============================] - 58s - loss: 0.4886
Epoch 458/500
2576998/2576998 [==============================] - 58s - loss: 0.4908
Epoch 459/500
2576998/2576998 [==============================] - 58s - loss: 0.4902
Epoch 460/500
2576998/2576998 [==============================] - 58s - loss: 0.4872
Epoch 461/500
2576998/2576998 [==============================] - 58s - loss: 0.4909
Epoch 462/500
2576998/2576998 [==============================] - 58s - loss: 0.4871
Epoch 463/500
2576998/2576998 [==============================] - 59s - loss: 0.4892
Epoch 464/500
2576998/2576998 [==============================] - 58s - loss: 0.4918
Epoch 465/500
2576998/2576998 [==============================] - 58s - loss: 0.4950
Epoch 466/500
2576998/2576998 [==============================] - 58s - loss: 0.4918
Epoch 467/500
2576998/2576998 [==============================] - 58s - loss: 0.4934
Epoch 468/500
2576998/2576998 [==============================] - 58s - loss: 0.4907
Epoch 469/500
2576998/2576998 [==============================] - 58s - loss: 0.4895
Epoch 470/500
2576998/2576998 [==============================] - 58s - loss: 0.4919
Epoch 471/500
2576998/2576998 [==============================] - 58s - loss: 0.4923
Epoch 472/500
2576998/2576998 [==============================] - 58s - loss: 0.4928
Epoch 473/500
2576998/2576998 [==============================] - 58s - loss: 0.4924
Epoch 474/500
2576998/2576998 [==============================] - 58s - loss: 0.4901
Epoch 475/500
2576998/2576998 [==============================] - 58s - loss: 0.4874
Epoch 476/500
2576998/2576998 [==============================] - 58s - loss: 0.4886
Epoch 477/500
2576998/2576998 [==============================] - 58s - loss: 0.4871
Epoch 478/500
2576998/2576998 [==============================] - 59s - loss: 0.4825
Epoch 479/500
2576998/2576998 [==============================] - 58s - loss: 0.4838
Epoch 480/500
2576998/2576998 [==============================] - 59s - loss: 0.4868
Epoch 481/500
2576998/2576998 [==============================] - 59s - loss: 0.4893
Epoch 482/500
2576998/2576998 [==============================] - 58s - loss: 0.4873
Epoch 483/500
2576998/2576998 [==============================] - 58s - loss: 0.4896
Epoch 484/500
2576998/2576998 [==============================] - 58s - loss: 0.4938
Epoch 485/500
2576998/2576998 [==============================] - 58s - loss: 0.4910
Epoch 486/500
2576998/2576998 [==============================] - 58s - loss: 0.4888
Epoch 487/500
2576998/2576998 [==============================] - 58s - loss: 0.4859
Epoch 488/500
2576998/2576998 [==============================] - 58s - loss: 0.4862
Epoch 489/500
2576998/2576998 [==============================] - 59s - loss: 0.4878
Epoch 490/500
2576998/2576998 [==============================] - 58s - loss: 0.4846
Epoch 491/500
2576998/2576998 [==============================] - 58s - loss: 0.4852
Epoch 492/500
2576998/2576998 [==============================] - 58s - loss: 0.4857
Epoch 493/500
2576998/2576998 [==============================] - 58s - loss: 0.4827
Epoch 494/500
2576998/2576998 [==============================] - 58s - loss: 0.4850
Epoch 495/500
2576998/2576998 [==============================] - 58s - loss: 0.4893
Epoch 496/500
2576998/2576998 [==============================] - 58s - loss: 0.4896
Epoch 497/500
2576998/2576998 [==============================] - 58s - loss: 0.4900
Epoch 498/500
2576998/2576998 [==============================] - 58s - loss: 0.4911
Epoch 499/500
2576998/2576998 [==============================] - 58s - loss: 0.4903
Epoch 500/500
2576998/2576998 [==============================] - 58s - loss: 0.4910
[[4 4 3 5]
 [3 8 3 6]
 [5 2 2 5]
 [5 8 4 7]
 [3 4 4 6]
 [7 8 5 7]
 [4 2 3 4]
 [4 7 3 5]
 [6 3 4 2]
 [2 4 4 2]
 [1 4 3 5]
 [2 6 2 4]
 [3 3 1 4]
 [6 8 3 5]
 [5 3 4 4]
 [3 5 4 4]
 [3 1 4 2]
 [4 5 3 4]
 [5 1 7 1]
 [1 8 3 8]]
[[ 2.55091333  3.51815438  3.33235216  4.14515448]
 [ 3.04355812  6.78390598  3.09988165  6.06519842]
 [ 5.09476757  1.12761307  2.786587    2.63562441]
 [ 4.60391521  8.14026928  3.66352987  6.11040592]
 [ 2.82292604  4.0883646   2.4582777   4.91905451]
 [ 6.28122568  6.91842461  5.02637959  5.20572853]
 [ 3.36769629  2.87295437  2.61717319  3.57078314]
 [ 2.91338015  7.01736259  3.0866518   4.89686441]
 [ 5.97147083  2.63921213  5.20363235  2.39807796]
 [ 2.10871553  3.27787232  3.19258189  2.06204343]
 [ 1.09986639  2.95643473  2.01480126  4.70332623]
 [ 2.57313395  6.01060057  2.7264204   4.46761847]
 [ 4.0098753   2.04631567  1.90991449  2.78482008]
 [ 4.86909485  6.8880887   4.2459197   5.32041407]
 [ 3.79327583  2.73349094  3.91592026  4.08679867]
 [ 2.60472393  5.01332664  3.53149104  3.668365  ]
 [ 3.83422637  1.99209118  3.86059237  3.03478909]
 [ 2.68492556  4.99119568  3.03257418  4.01871777]
 [ 4.97045088  0.80353022  7.07414579  0.82435894]
 [ 0.91905761  7.53552532  2.73575354  7.50677681]]
lenik@drifter~/Desktop/deep-murasaki$