├── rrl
│   ├── __init__.py
│   ├── utils.py
│   ├── models.py
│   └── components.py
├── appendix
│   ├── RRL.png
│   ├── TensorBoard.png
│   └── tensorboard_olaf.png
├── dataset
│   ├── tic-tac-toe.info
│   ├── README.md
│   └── tic-tac-toe.data
├── LICENSE
├── args.py
├── experiment.py
└── README.md
/rrl/__init__.py:
--------------------------------------------------------------------------------
1 |
--------------------------------------------------------------------------------
/appendix/RRL.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/12wang3/rrl/HEAD/appendix/RRL.png
--------------------------------------------------------------------------------
/appendix/TensorBoard.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/12wang3/rrl/HEAD/appendix/TensorBoard.png
--------------------------------------------------------------------------------
/appendix/tensorboard_olaf.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/12wang3/rrl/HEAD/appendix/tensorboard_olaf.png
--------------------------------------------------------------------------------
/dataset/tic-tac-toe.info:
--------------------------------------------------------------------------------
1 | 1 discrete
2 | 2 discrete
3 | 3 discrete
4 | 4 discrete
5 | 5 discrete
6 | 6 discrete
7 | 7 discrete
8 | 8 discrete
9 | 9 discrete
10 | class discrete
11 | LABEL_POS -1
12 |
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) 2021 Zhuo Wang
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
--------------------------------------------------------------------------------
/dataset/README.md:
--------------------------------------------------------------------------------
1 | ## Data set description
2 | The descriptions of the data sets used in the paper are listed in the table below:
3 |
4 | |Dataset (Links) | \#Instances | \#Classes | \#Features | Feature type|
5 | | --- | --- | --- | --- | --- |
6 | |[adult](https://archive.ics.uci.edu/ml/datasets/Adult)| 32561 | 2 | 14 | mixed|
7 | |[bank-marketing](http://archive.ics.uci.edu/ml/datasets/Bank+Marketing) | 45211 | 2 | 16 | mixed|
8 | |[banknote](https://archive.ics.uci.edu/ml/datasets/banknote+authentication) | 1372 | 2 | 4 | continuous|
9 | |[chess](https://archive.ics.uci.edu/ml/datasets/Chess+%28King-Rook+vs.+King%29) | 28056 | 18 | 6 | discrete|
10 | |[connect-4](http://archive.ics.uci.edu/ml/datasets/connect-4) | 67557 | 3 | 42 | discrete|
11 | |[letRecog](https://archive.ics.uci.edu/ml/datasets/Letter+Recognition) | 20000 | 26 | 16 | continuous|
12 | |[magic04](https://archive.ics.uci.edu/ml/datasets/magic+gamma+telescope) | 19020 | 2 | 10 | continuous|
13 | |[tic-tac-toe](https://archive.ics.uci.edu/ml/datasets/Tic-Tac-Toe+Endgame) | 958 | 2 | 9 | discrete|
14 | |[wine](https://archive.ics.uci.edu/ml/datasets/wine) | 178 | 3 | 13 | continuous|
15 | |[activity](https://archive.ics.uci.edu/ml/datasets/human+activity+recognition+using+smartphones) | 10299 | 6 | 561 | continuous|
16 | |[dota2](https://archive.ics.uci.edu/ml/datasets/Dota2+Games+Results) | 102944 | 2 | 116 | discrete|
17 | |[facebook](https://archive.ics.uci.edu/ml/datasets/Facebook+Large+Page-Page+Network) | 22470 | 4 | 4714 | discrete|
18 | |[fashion](https://github.com/zalandoresearch/fashion-mnist) | 70000 | 10 | 784 | continuous|
19 |
20 | For more information about these data sets, follow the corresponding links.
21 |
22 | ## Data set format
23 | Each data set corresponds to two files, `*.data` and `*.info`. The `*.data` file stores the data for each instance. The `*.info` file stores the information for each feature.
24 |
25 | #### *.data
26 | One row in `*.data` corresponds to one instance and one column corresponds to one feature (including the class label).
27 |
28 | For example, the `adult.data`:
29 |
30 | | | | | | | | | | | | | | | | |
31 | | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
32 | | 37 | Private | 284582 | Masters | 14 | Married-civ-spouse | Exec-managerial | Wife | White | Female | 0 | 0 | 40 | United-States | <=50K |
33 | | 49 | Private | 160187 | 9th | 5 | Married-spouse-absent | Other-service | Not-in-family | Black | Female | 0 | 0 | 16 | Jamaica | <=50K |
34 | | 52 | Self-emp-not-inc | 209642 | HS-grad | 9 | Married-civ-spouse | Exec-managerial | Husband | White | Male | 0 | 0 | 45 | United-States | >50K |
35 | | 31 | Private | 45781 | Masters | 14 | Never-married | Prof-specialty | Not-in-family | White | Female | 14084 | 0 | 50 | United-States | >50K |
36 | | ......|
37 |
38 | #### *.info
39 | One row (except the last row) in `*.info` corresponds to one feature (including the class label). The order of these features must be the same as the columns in `*.data`. The first column is the feature name, and the second column indicates whether the feature is continuous or discrete. The last row does not correspond to a feature; it specifies the position of the class label column (e.g., `-1` for the last column).
40 |
41 | For example, the `adult.info`:
42 |
43 | | | |
44 | | --- | --- |
45 | |age | continuous |
46 | |workclass | discrete |
47 | |fnlwgt | continuous |
48 | |education | discrete |
49 | |education-num | continuous |
50 | |...... | |
51 | |hours-per-week | continuous |
52 | |native-country | discrete |
53 | |class | discrete |
54 | |LABEL_POS | -1 |
55 |
56 | ## Add a new data set
57 | You can run the demo on your own data sets by putting the corresponding `*.data` and `*.info` files into the `dataset` folder. The formats of these two files are described above.
58 |
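59 | For example, a minimal Python sketch that writes a new data set in this format (the `my-data` name is just a placeholder):
60 | 
61 | ```python
62 | import pandas as pd
63 | 
64 | # Two toy instances with one continuous feature, one discrete feature, and a class label.
65 | df = pd.DataFrame({'f1': [1.2, 3.4], 'f2': ['a', 'b'], 'class': ['pos', 'neg']})
66 | df.to_csv('dataset/my-data.data', header=False, index=False)  # comma-separated, no header
67 | with open('dataset/my-data.info', 'w') as f:
68 |     f.write('f1 continuous\nf2 discrete\nclass discrete\nLABEL_POS -1\n')
69 | ```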
--------------------------------------------------------------------------------
/rrl/utils.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import pandas as pd
3 | from sklearn import preprocessing
4 | from sklearn.impute import SimpleImputer
5 |
6 |
7 | def read_info(info_path):
8 | with open(info_path) as f:
9 | f_list = []
10 | for line in f:
11 | tokens = line.strip().split()
12 |             if tokens: f_list.append(tokens)  # skip blank lines, e.g., a trailing empty line
13 | return f_list[:-1], int(f_list[-1][-1])
14 |
15 |
16 | def read_csv(data_path, info_path, shuffle=False):
17 | D = pd.read_csv(data_path, header=None)
18 | if shuffle:
19 | D = D.sample(frac=1, random_state=0).reset_index(drop=True)
20 | f_list, label_pos = read_info(info_path)
21 | f_df = pd.DataFrame(f_list)
22 | D.columns = f_df.iloc[:, 0]
23 | y_df = D.iloc[:, [label_pos]]
24 | X_df = D.drop(D.columns[label_pos], axis=1)
25 | f_df = f_df.drop(f_df.index[label_pos])
26 | return X_df, y_df, f_df, label_pos
27 |
28 |
29 | class DBEncoder:
30 | """Encoder used for data discretization and binarization."""
31 |
32 | def __init__(self, f_df, discrete=False, y_one_hot=True, drop='first'):
33 | self.f_df = f_df
34 | self.discrete = discrete
35 | self.y_one_hot = y_one_hot
36 | self.label_enc = preprocessing.OneHotEncoder(categories='auto') if y_one_hot else preprocessing.LabelEncoder()
37 | self.feature_enc = preprocessing.OneHotEncoder(categories='auto', drop=drop)
38 | self.imp = SimpleImputer(missing_values=np.nan, strategy='mean')
39 | self.X_fname = None
40 | self.y_fname = None
41 | self.discrete_flen = None
42 | self.continuous_flen = None
43 | self.mean = None
44 | self.std = None
45 |
46 | def split_data(self, X_df):
47 | discrete_data = X_df[self.f_df.loc[self.f_df[1] == 'discrete', 0]]
48 | continuous_data = X_df[self.f_df.loc[self.f_df[1] == 'continuous', 0]]
49 | if not continuous_data.empty:
50 | continuous_data = continuous_data.replace(to_replace=r'.*\?.*', value=np.nan, regex=True)
51 |             continuous_data = continuous_data.astype(float)  # np.float is removed in NumPy >= 1.24
52 | return discrete_data, continuous_data
53 |
54 | def fit(self, X_df, y_df):
55 | X_df = X_df.reset_index(drop=True)
56 | y_df = y_df.reset_index(drop=True)
57 | discrete_data, continuous_data = self.split_data(X_df)
58 | self.label_enc.fit(y_df)
59 | self.y_fname = list(self.label_enc.get_feature_names(y_df.columns)) if self.y_one_hot else y_df.columns
60 |
61 | if not continuous_data.empty:
62 |             # Use mean as missing value for continuous columns if we do not discretize them.
63 | self.imp.fit(continuous_data.values)
64 | if not discrete_data.empty:
65 | # One-hot encoding
66 | self.feature_enc.fit(discrete_data)
67 | feature_names = discrete_data.columns
68 | self.X_fname = list(self.feature_enc.get_feature_names(feature_names))
69 | self.discrete_flen = len(self.X_fname)
70 | if not self.discrete:
71 | self.X_fname.extend(continuous_data.columns)
72 | else:
73 | self.X_fname = continuous_data.columns
74 | self.discrete_flen = 0
75 | self.continuous_flen = continuous_data.shape[1]
76 |
77 | def transform(self, X_df, y_df, normalized=False, keep_stat=False):
78 | X_df = X_df.reset_index(drop=True)
79 | y_df = y_df.reset_index(drop=True)
80 | discrete_data, continuous_data = self.split_data(X_df)
81 | # Encode string value to int index.
82 | y = self.label_enc.transform(y_df.values.reshape(-1, 1))
83 | if self.y_one_hot:
84 | y = y.toarray()
85 |
86 | if not continuous_data.empty:
87 | # Use mean as missing value for continuous columns if we do not discretize them.
88 | continuous_data = pd.DataFrame(self.imp.transform(continuous_data.values),
89 | columns=continuous_data.columns)
90 | if normalized:
91 | if keep_stat:
92 | self.mean = continuous_data.mean()
93 | self.std = continuous_data.std()
94 | continuous_data = (continuous_data - self.mean) / self.std
95 | if not discrete_data.empty:
96 | # One-hot encoding
97 | discrete_data = self.feature_enc.transform(discrete_data)
98 | if not self.discrete:
99 | X_df = pd.concat([pd.DataFrame(discrete_data.toarray()), continuous_data], axis=1)
100 | else:
101 | X_df = pd.DataFrame(discrete_data.toarray())
102 | else:
103 | X_df = continuous_data
104 | return X_df.values, y
105 |
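106 | 
107 | if __name__ == '__main__':
108 |     # Minimal usage sketch, assuming this script is run from the repository root and
109 |     # dataset/tic-tac-toe.{data,info} exist (as in this repository).
110 |     X_df, y_df, f_df, label_pos = read_csv('dataset/tic-tac-toe.data', 'dataset/tic-tac-toe.info', shuffle=True)
111 |     db_enc = DBEncoder(f_df, discrete=False)
112 |     db_enc.fit(X_df, y_df)
113 |     X, y = db_enc.transform(X_df, y_df, normalized=True, keep_stat=True)
114 |     print(X.shape, y.shape)  # binarized/normalized features and one-hot labels as numpy arrays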
--------------------------------------------------------------------------------
/args.py:
--------------------------------------------------------------------------------
1 | import os
2 | import argparse
3 |
4 |
5 | parser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter)
6 | parser.add_argument('-d', '--data_set', type=str, default='tic-tac-toe',
7 | help='Set the data set for training. All the data sets in the dataset folder are available.')
8 | parser.add_argument('-i', '--device_ids', type=str, default=None, help='Set the device (GPU ids). Split by @.'
9 | ' E.g., 0@2@3.')
10 | parser.add_argument('-nr', '--nr', default=0, type=int, help='ranking within the nodes')
11 | parser.add_argument('-e', '--epoch', type=int, default=41, help='Set the total epoch.')
12 | parser.add_argument('-bs', '--batch_size', type=int, default=64, help='Set the batch size.')
13 | parser.add_argument('-lr', '--learning_rate', type=float, default=0.01, help='Set the initial learning rate.')
14 | parser.add_argument('-lrdr', '--lr_decay_rate', type=float, default=0.75, help='Set the learning rate decay rate.')
15 | parser.add_argument('-lrde', '--lr_decay_epoch', type=int, default=10, help='Set the learning rate decay epoch.')
16 | parser.add_argument('-wd', '--weight_decay', type=float, default=0.0, help='Set the weight decay (L2 penalty).')
17 | parser.add_argument('-ki', '--ith_kfold', type=int, default=0, help='Do the i-th 5-fold validation, 0 <= ki < 5.')
18 | parser.add_argument('-rc', '--round_count', type=int, default=0, help='Count the round of experiments.')
19 | parser.add_argument('-ma', '--master_address', type=str, default='127.0.0.1', help='Set the master address.')
20 | parser.add_argument('-mp', '--master_port', type=str, default='0', help='Set the master port.')
21 | parser.add_argument('-li', '--log_iter', type=int, default=500, help='The number of iterations (batches) to log once.')
22 |
23 | parser.add_argument('--nlaf', action="store_true",
24 |                     help='Use novel logical activation functions to reduce training time and GPU memory usage. We recommend trying (alpha, beta, gamma) in {(0.999, 8, 1), (0.999, 8, 3), (0.9, 3, 3)}.')
25 | parser.add_argument('--alpha', type=float, default=0.999, help='Set the alpha for NLAF.')
26 | parser.add_argument('--beta', type=int, default=8, help='Set the beta for NLAF.')
27 | parser.add_argument('--gamma', type=int, default=1, help='Set the gamma for NLAF.')
28 |
29 | parser.add_argument('--temp', type=float, default=1.0, help='Set the temperature.')
30 |
31 | parser.add_argument('--use_not', action="store_true",
32 | help='Use the NOT (~) operator in logical rules. '
33 | 'It will enhance model capability but make the RRL more complex.')
34 | parser.add_argument('--save_best', action="store_true",
35 | help='Save the model with best performance on the validation set.')
36 | parser.add_argument('--skip', action="store_true",
37 | help='Use skip connections when the number of logical layers is greater than 2.')
38 | parser.add_argument('--estimated_grad', action="store_true",
39 | help='Use estimated gradient.')
40 | parser.add_argument('--weighted', action="store_true",
41 | help='Use weighted loss for imbalanced data.')
42 | parser.add_argument('--print_rule', action="store_true",
43 | help='Print the rules.')
44 | parser.add_argument('-s', '--structure', type=str, default='5@64',
45 | help='Set the number of nodes in the binarization layer and logical layers. '
46 | 'E.g., 10@64, 10@64@32@16.')
47 |
48 | rrl_args = parser.parse_args()
49 | rrl_args.folder_name = '{}_e{}_bs{}_lr{}_lrdr{}_lrde{}_wd{}_ki{}_rc{}_useNOT{}_saveBest{}_useNLAF{}_estimatedGrad{}_useSkip{}_alpha{}_beta{}_gamma{}_temp{}'.format(
50 | rrl_args.data_set, rrl_args.epoch, rrl_args.batch_size, rrl_args.learning_rate, rrl_args.lr_decay_rate,
51 | rrl_args.lr_decay_epoch, rrl_args.weight_decay, rrl_args.ith_kfold, rrl_args.round_count, rrl_args.use_not,
52 | rrl_args.save_best, rrl_args.nlaf, rrl_args.estimated_grad, rrl_args.skip, rrl_args.alpha, rrl_args.beta, rrl_args.gamma, rrl_args.temp)
53 |
54 | if not os.path.exists('log_folder'):
55 | os.mkdir('log_folder')
56 | rrl_args.folder_name = rrl_args.folder_name + '_L' + rrl_args.structure
57 | rrl_args.set_folder_path = os.path.join('log_folder', rrl_args.data_set)
58 | if not os.path.exists(rrl_args.set_folder_path):
59 | os.mkdir(rrl_args.set_folder_path)
60 | rrl_args.folder_path = os.path.join(rrl_args.set_folder_path, rrl_args.folder_name)
61 | if not os.path.exists(rrl_args.folder_path):
62 | os.mkdir(rrl_args.folder_path)
63 | rrl_args.model = os.path.join(rrl_args.folder_path, 'model.pth')
64 | rrl_args.rrl_file = os.path.join(rrl_args.folder_path, 'rrl.txt')
65 | rrl_args.plot_file = os.path.join(rrl_args.folder_path, 'plot_file.pdf')
66 | rrl_args.log = os.path.join(rrl_args.folder_path, 'log.txt')
67 | rrl_args.test_res = os.path.join(rrl_args.folder_path, 'test_res.txt')
68 | rrl_args.device_ids = list(map(int, rrl_args.device_ids.strip().split('@'))) if rrl_args.device_ids else [0]  # fall back to GPU 0 when -i is not given
69 | rrl_args.gpus = len(rrl_args.device_ids)
70 | rrl_args.nodes = 1
71 | rrl_args.world_size = rrl_args.gpus * rrl_args.nodes
72 | rrl_args.batch_size = int(rrl_args.batch_size / rrl_args.gpus)
73 |
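74 | # Example of the derived values above: `python3 experiment.py -d tic-tac-toe -bs 32 -s 1@16 -i 0`
75 | # yields device_ids=[0], gpus=1, nodes=1, world_size=1, and a per-GPU batch_size of 32.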
--------------------------------------------------------------------------------
/experiment.py:
--------------------------------------------------------------------------------
1 | import os
2 | import logging
3 | import numpy as np
4 | import torch
5 | torch.set_num_threads(2)
6 | from torch.utils.data.dataset import random_split
7 | from torch.utils.data import DataLoader, TensorDataset
8 | from torch.utils.tensorboard import SummaryWriter
9 | import torch.multiprocessing as mp
10 | import torch.distributed as dist
11 | from sklearn.model_selection import KFold, train_test_split
12 | from collections import defaultdict
13 |
14 | from rrl.utils import read_csv, DBEncoder
15 | from rrl.models import RRL
16 |
17 | DATA_DIR = './dataset'
18 |
19 |
20 | def get_data_loader(dataset, world_size, rank, batch_size, k=0, pin_memory=False, save_best=True):
21 | data_path = os.path.join(DATA_DIR, dataset + '.data')
22 | info_path = os.path.join(DATA_DIR, dataset + '.info')
23 | X_df, y_df, f_df, label_pos = read_csv(data_path, info_path, shuffle=True)
24 |
25 | db_enc = DBEncoder(f_df, discrete=False)
26 | db_enc.fit(X_df, y_df)
27 |
28 | X, y = db_enc.transform(X_df, y_df, normalized=True, keep_stat=True)
29 |
30 | kf = KFold(n_splits=5, shuffle=True, random_state=0)
31 | train_index, test_index = list(kf.split(X_df))[k]
32 | X_train = X[train_index]
33 | y_train = y[train_index]
34 | X_test = X[test_index]
35 | y_test = y[test_index]
36 |
37 | train_set = TensorDataset(torch.tensor(X_train.astype(np.float32)), torch.tensor(y_train.astype(np.float32)))
38 | test_set = TensorDataset(torch.tensor(X_test.astype(np.float32)), torch.tensor(y_test.astype(np.float32)))
39 |
40 | train_len = int(len(train_set) * 0.95)
41 | train_sub, valid_set = random_split(train_set, [train_len, len(train_set) - train_len])
42 |
43 |     if save_best:  # use the validation set for model selection
44 | train_set = train_sub
45 |
46 | train_sampler = torch.utils.data.distributed.DistributedSampler(train_set, num_replicas=world_size, rank=rank)
47 |
48 | train_loader = DataLoader(train_set, batch_size=batch_size, shuffle=False, pin_memory=pin_memory, sampler=train_sampler)
49 | valid_loader = DataLoader(valid_set, batch_size=batch_size, shuffle=False, pin_memory=pin_memory)
50 | test_loader = DataLoader(test_set, batch_size=batch_size, shuffle=False, pin_memory=pin_memory)
51 |
52 | return db_enc, train_loader, valid_loader, test_loader
53 |
54 |
55 | def train_model(gpu, args):
56 | rank = args.nr * args.gpus + gpu
57 | dist.init_process_group(backend='nccl', init_method='env://', world_size=args.world_size, rank=rank)
58 | torch.manual_seed(42)
59 | device_id = args.device_ids[gpu]
60 | torch.cuda.set_device(device_id)
61 |
62 | if gpu == 0:
63 | writer = SummaryWriter(args.folder_path)
64 | is_rank0 = True
65 | else:
66 | writer = None
67 | is_rank0 = False
68 |
69 | dataset = args.data_set
70 | db_enc, train_loader, valid_loader, _ = get_data_loader(dataset, args.world_size, rank, args.batch_size,
71 | k=args.ith_kfold, pin_memory=True, save_best=args.save_best)
72 |
73 | X_fname = db_enc.X_fname
74 | y_fname = db_enc.y_fname
75 | discrete_flen = db_enc.discrete_flen
76 | continuous_flen = db_enc.continuous_flen
77 |
78 | rrl = RRL(dim_list=[(discrete_flen, continuous_flen)] + list(map(int, args.structure.split('@'))) + [len(y_fname)],
79 | device_id=device_id,
80 | use_not=args.use_not,
81 | is_rank0=is_rank0,
82 | log_file=args.log,
83 | writer=writer,
84 | save_best=args.save_best,
85 | estimated_grad=args.estimated_grad,
86 | use_skip=args.skip,
87 | save_path=args.model,
88 | use_nlaf=args.nlaf,
89 | alpha=args.alpha,
90 | beta=args.beta,
91 | gamma=args.gamma,
92 | temperature=args.temp)
93 |
94 | rrl.train_model(
95 | data_loader=train_loader,
96 | valid_loader=valid_loader,
97 | lr=args.learning_rate,
98 | epoch=args.epoch,
99 | lr_decay_rate=args.lr_decay_rate,
100 | lr_decay_epoch=args.lr_decay_epoch,
101 | weight_decay=args.weight_decay,
102 | log_iter=args.log_iter)
103 |
104 |
105 | def load_model(path, device_id, log_file=None, distributed=True):
106 | checkpoint = torch.load(path, map_location='cpu')
107 | saved_args = checkpoint['rrl_args']
108 | rrl = RRL(
109 | dim_list=saved_args['dim_list'],
110 | device_id=device_id,
111 | is_rank0=True,
112 | use_not=saved_args['use_not'],
113 | log_file=log_file,
114 | distributed=distributed,
115 | estimated_grad=saved_args['estimated_grad'],
116 | use_skip=saved_args['use_skip'],
117 | use_nlaf=saved_args['use_nlaf'],
118 | alpha=saved_args['alpha'],
119 | beta=saved_args['beta'],
120 | gamma=saved_args['gamma'])
121 | stat_dict = checkpoint['model_state_dict']
122 | for key in list(stat_dict.keys()):
123 | # remove 'module.' prefix
124 | stat_dict[key[7:]] = stat_dict.pop(key)
125 | rrl.net.load_state_dict(checkpoint['model_state_dict'])
126 | return rrl
127 |
128 |
129 | def test_model(args):
130 | rrl = load_model(args.model, args.device_ids[0], log_file=args.test_res, distributed=False)
131 | dataset = args.data_set
132 | db_enc, train_loader, _, test_loader = get_data_loader(dataset, 4, 0, args.batch_size, args.ith_kfold, save_best=False)
133 | rrl.test(test_loader=test_loader, set_name='Test')
134 | if args.print_rule:
135 | with open(args.rrl_file, 'w') as rrl_file:
136 | rule2weights = rrl.rule_print(db_enc.X_fname, db_enc.y_fname, train_loader, file=rrl_file, mean=db_enc.mean, std=db_enc.std)
137 | else:
138 | rule2weights = rrl.rule_print(db_enc.X_fname, db_enc.y_fname, train_loader, mean=db_enc.mean, std=db_enc.std, display=False)
139 |
140 | metric = 'Log(#Edges)'
141 | edge_cnt = 0
142 |     connected_rid = defaultdict(set)
143 | ln = len(rrl.net.layer_list) - 1
144 | for rid, w in rule2weights:
145 | connected_rid[ln - abs(rid[0])].add(rid[1])
146 | while ln > 1:
147 | ln -= 1
148 | layer = rrl.net.layer_list[ln]
149 | for r in connected_rid[ln]:
150 | con_len = len(layer.rule_list[0])
151 | if r >= con_len:
152 | opt_id = 1
153 | r -= con_len
154 | else:
155 | opt_id = 0
156 | rule = layer.rule_list[opt_id][r]
157 | edge_cnt += len(rule)
158 | for rid in rule:
159 | connected_rid[ln - abs(rid[0])].add(rid[1])
160 | logging.info('\n\t{} of RRL Model: {}'.format(metric, np.log(edge_cnt)))
161 |
162 |
163 |
164 | def train_main(args):
165 | os.environ['MASTER_ADDR'] = args.master_address
166 | os.environ['MASTER_PORT'] = args.master_port
167 | mp.spawn(train_model, nprocs=args.gpus, args=(args,))
168 |
169 |
170 | if __name__ == '__main__':
171 | from args import rrl_args
172 | # for arg in vars(rrl_args):
173 | # print(arg, getattr(rrl_args, arg))
174 | train_main(rrl_args)
175 | test_model(rrl_args)
176 |
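177 | # Note: train_main spawns one training process per GPU via mp.spawn, and each process
178 | # computes its global rank as args.nr * args.gpus + gpu inside train_model. To evaluate a
179 | # finished run without retraining, calling test_model(rrl_args) alone is enough, provided
180 | # the checkpoint at rrl_args.model already exists.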
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Rule-based Representation Learner
2 | ## Updates
3 | *The following updates are summarized in a paper ([Learning Interpretable Rules for Scalable Data Representation and Classification](https://arxiv.org/abs/2310.14336)), which has been accepted by TPAMI.* :tada::tada::tada:
4 |
5 | Compared with the previous version, we have made the following significant updates to enhance RRL:
6 | ### Hierarchical Gradient Grafting
7 | - The gradient-based discrete model training method proposed in the conference version, i.e., Single Gradient Grafting, is more likely to fail when the RRL goes deeper.
8 | - To tackle this problem and further improve the performance of deep RRL, we propose Hierarchical Gradient Grafting, which avoids the side effects caused by multiple layers during training.
9 | ### Novel Logical Activation Functions (NLAF)
10 | - NLAFs can not only handle high-dimensional features that the original logical activation functions cannot, but are also faster and require less GPU memory. Therefore, NLAFs are more scalable.
11 | - Unfortunately, NLAFs bring three additional hyperparameters, i.e., alpha, beta, and gamma. We recommend trying (alpha, beta, gamma) in {(0.999, 8, 1), (0.999, 8, 3), (0.9, 3, 3)}.
12 | - To use NLAFs, set the "--nlaf" option and set the hyperparameters via "--alpha", "--beta", and "--gamma". For example:
13 | ```bash
14 | # train on the tic-tac-toe data set with NLAFs.
15 | python3 experiment.py -d tic-tac-toe -bs 32 -s 1@64 -e401 -lrde 200 -lr 0.002 -ki 0 -i 0 -wd 0.001 --nlaf --alpha 0.9 --beta 3 --gamma 3 --temp 0.01 --print_rule &
16 | ```
17 |
18 | ## Introduction
19 | This is a PyTorch implementation of Rule-based Representation Learner (RRL) as described in NeurIPS 2021 paper
20 | [Scalable Rule-Based Representation Learning for Interpretable Classification](https://arxiv.org/abs/2109.15103) and TPAMI paper [Learning Interpretable Rules for Scalable Data Representation and Classification](https://arxiv.org/abs/2310.14336).
21 |
22 | ![RRL architecture](appendix/RRL.png)
23 |
24 | RRL aims to obtain both good scalability and interpretability, and it automatically learns interpretable non-fuzzy rules for data representation and classification. Moreover, RRL can be easily adjusted to obtain a trade-off between classification accuracy and model complexity for different scenarios.
25 |
26 | ## Requirements
27 |
28 | * torch>=1.8.0
29 | * torchvision>=0.9.0
30 | * tensorboard>=1.15.0
31 | * scikit-learn>=0.23.2
32 | * numpy>=1.19.2
33 | * pandas>=1.1.3
34 | * matplotlib>=3.3.2
35 | * CUDA>=11.1
36 |
37 | ## Tuning Suggestions
38 | 1. Initially test an RRL with a single logical layer. If the loss converges, then consider increasing the number of layers.
39 | 2. Start with a logical layer width of 1024 to check for loss convergence, then reduce width based on interpretability needs.
40 | 3. Temperature (--temp) significantly affects performance. We suggest trying each of the following values: {1, 0.1, 0.01}.
41 | 4. For NLAF, we suggest testing each of the following combinations: (alpha, beta, gamma) in {(0.999, 8, 1), (0.999, 8, 3), (0.9, 3, 3)}.
42 | 5. Begin with learning rates of 0.002 and 0.0002, and then fine-tune as necessary.
43 | 6. Don't forget to try the --save_best option.
44 |
45 | ## Run the demo
46 | Put the data sets in the `dataset` folder. Then you can specify one of them and train the model as follows:
47 |
48 | ```bash
49 | # trained on the tic-tac-toe data set with one GPU.
50 | python3 experiment.py -d tic-tac-toe -bs 32 -s 1@16 -e401 -lrde 200 -lr 0.002 -ki 0 -i 0 -wd 0.0001 --print_rule &
51 | ```
52 | The demo reads the data set and data set information first, then trains the RRL on the training set.
53 | During the training, you can check the training loss and the evaluation result on the validation set by:
54 |
55 | ```bash
56 | tensorboard --logdir=log_folder
57 | ```
58 |
59 |
60 | ![TensorBoard screenshot](appendix/TensorBoard.png)
61 |
62 | The training log file (`log.txt`) can be found in a folder created in `log_folder`. In this example, the folder path is
63 | ```
64 | log_folder/tic-tac-toe/tic-tac-toe_e401_bs32_lr0.002_lrdr0.75_lrde200_wd0.0001_ki0_rc0_useNOTFalse_saveBestFalse_useNLAFFalse_estimatedGradFalse_useSkipFalse_alpha0.999_beta8_gamma1_temp1.0_L1@16
65 | ```
66 | After training, the evaluation result on the test set is shown in the file `test_res.txt`:
67 | ```
68 | [INFO] - On Test Set:
69 | Accuracy of RRL Model: 1.0
70 | F1 Score of RRL Model: 1.0
71 | ```
72 |
73 | Moreover, the trained RRL model is saved in `model.pth`, and the discrete RRL is printed in `rrl.txt`:
74 |
75 | |RID|class_negative(b=-0.3224)|class_positive(b=-0.1306)|Support|Rule|
76 | | ---- | ---- | ---- | ---- | ---- |
77 | |(-1, 3)|-0.7756|0.9354|0.0885|3_x & 6_x & 9_x|
78 | |(-1, 0)|-0.7257|0.8921|0.1146|1_x & 2_x & 3_x|
79 | |(-1, 5)|-0.6162|0.4967|0.0677|2_x & 5_x & 8_x|
80 | | ......| ...... | ...... | ...... | ...... |
81 |
82 | #### Your own data sets
83 |
84 | You can use the demo to train RRL on your own data set by putting the data and data information files in the `dataset` folder. Please read [DataSetDesc](dataset/README.md) for a more specific guideline.
85 |
86 | #### Available arguments
87 | List all the available arguments and their default values by:
88 | ```bash
89 | $ python3 experiment.py --help
90 | usage: experiment.py [-h] [-d DATA_SET] [-i DEVICE_IDS] [-nr NR] [-e EPOCH] [-bs BATCH_SIZE] [-lr LEARNING_RATE] [-lrdr LR_DECAY_RATE]
91 | [-lrde LR_DECAY_EPOCH] [-wd WEIGHT_DECAY] [-ki ITH_KFOLD] [-rc ROUND_COUNT] [-ma MASTER_ADDRESS] [-mp MASTER_PORT]
92 | [-li LOG_ITER] [--nlaf] [--alpha ALPHA] [--beta BETA] [--gamma GAMMA] [--temp TEMP] [--use_not] [--save_best] [--skip]
93 | [--estimated_grad] [--weighted] [--print_rule] [-s STRUCTURE]
94 |
95 | optional arguments:
96 | -h, --help show this help message and exit
97 | -d DATA_SET, --data_set DATA_SET
98 | Set the data set for training. All the data sets in the dataset folder are available. (default: tic-tac-toe)
99 | -i DEVICE_IDS, --device_ids DEVICE_IDS
100 | Set the device (GPU ids). Split by @. E.g., 0@2@3. (default: None)
101 | -nr NR, --nr NR ranking within the nodes (default: 0)
102 | -e EPOCH, --epoch EPOCH
103 | Set the total epoch. (default: 41)
104 | -bs BATCH_SIZE, --batch_size BATCH_SIZE
105 | Set the batch size. (default: 64)
106 | -lr LEARNING_RATE, --learning_rate LEARNING_RATE
107 | Set the initial learning rate. (default: 0.01)
108 | -lrdr LR_DECAY_RATE, --lr_decay_rate LR_DECAY_RATE
109 | Set the learning rate decay rate. (default: 0.75)
110 | -lrde LR_DECAY_EPOCH, --lr_decay_epoch LR_DECAY_EPOCH
111 | Set the learning rate decay epoch. (default: 10)
112 | -wd WEIGHT_DECAY, --weight_decay WEIGHT_DECAY
113 | Set the weight decay (L2 penalty). (default: 0.0)
114 | -ki ITH_KFOLD, --ith_kfold ITH_KFOLD
115 | Do the i-th 5-fold validation, 0 <= ki < 5. (default: 0)
116 | -rc ROUND_COUNT, --round_count ROUND_COUNT
117 | Count the round of experiments. (default: 0)
118 | -ma MASTER_ADDRESS, --master_address MASTER_ADDRESS
119 | Set the master address. (default: 127.0.0.1)
120 | -mp MASTER_PORT, --master_port MASTER_PORT
121 | Set the master port. (default: 0)
122 | -li LOG_ITER, --log_iter LOG_ITER
123 | The number of iterations (batches) to log once. (default: 500)
124 |   --nlaf                Use novel logical activation functions to reduce training time and GPU memory usage. We recommend trying (alpha, beta, gamma) in {(0.999, 8, 1), (0.999, 8, 3), (0.9, 3, 3)} (default: False)
125 | --alpha ALPHA Set the alpha for NLAF. (default: 0.999)
126 | --beta BETA Set the beta for NLAF. (default: 8)
127 | --gamma GAMMA Set the gamma for NLAF. (default: 1)
128 | --temp TEMP Set the temperature. (default: 1.0)
129 | --use_not Use the NOT (~) operator in logical rules. It will enhance model capability but make the RRL more complex. (default: False)
130 | --save_best Save the model with best performance on the validation set. (default: False)
131 | --skip Use skip connections when the number of logical layers is greater than 2. (default: False)
132 | --estimated_grad Use estimated gradient. (default: False)
133 | --weighted Use weighted loss for imbalanced data. (default: False)
134 | --print_rule Print the rules. (default: False)
135 | -s STRUCTURE, --structure STRUCTURE
136 | Set the number of nodes in the binarization layer and logical layers. E.g., 10@64, 10@64@32@16. (default: 5@64)
137 | ```
138 | ## Citation
139 |
140 | If our work is helpful to you, please cite our papers as:
141 |
142 | ```
143 | @article{wang2021scalable,
144 | title={Scalable Rule-Based Representation Learning for Interpretable Classification},
145 | author={Wang, Zhuo and Zhang, Wei and Liu, Ning and Wang, Jianyong},
146 | journal={Advances in Neural Information Processing Systems},
147 | volume={34},
148 | year={2021}
149 | }
150 | @article{wang2024learning,
151 | title={Learning Interpretable Rules for Scalable Data Representation and Classification},
152 | author={Wang, Zhuo and Zhang, Wei and Liu, Ning and Wang, Jianyong},
153 | journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
154 | volume={46},
155 | number={02},
156 | pages={1121--1133},
157 | year={2024},
158 | publisher={IEEE Computer Society}
159 | }
160 | ```
161 |
162 | ## License
163 |
164 | [MIT license](LICENSE)
165 |
--------------------------------------------------------------------------------
/rrl/models.py:
--------------------------------------------------------------------------------
1 | import sys
2 | import logging
3 | import numpy as np
4 | import torch
5 | import torch.nn as nn
6 | from sklearn import metrics
7 | from collections import defaultdict
8 |
9 | from rrl.components import BinarizeLayer
10 | from rrl.components import UnionLayer, LRLayer
11 |
12 | TEST_CNT_MOD = 500
13 |
14 |
15 | class Net(nn.Module):
16 | def __init__(self, dim_list, use_not=False, left=None, right=None, use_nlaf=False, estimated_grad=False, use_skip=True, alpha=0.999, beta=8, gamma=1, temperature=0.01):
17 | super(Net, self).__init__()
18 |
19 | self.dim_list = dim_list
20 | self.use_not = use_not
21 | self.left = left
22 | self.right = right
23 | self.layer_list = nn.ModuleList([])
24 | self.use_skip = use_skip
25 | self.t = nn.Parameter(torch.log(torch.tensor([temperature])))
26 |
27 | prev_layer_dim = dim_list[0]
28 | for i in range(1, len(dim_list)):
29 | num = prev_layer_dim
30 |
31 | skip_from_layer = None
32 | if self.use_skip and i >= 4:
33 | skip_from_layer = self.layer_list[-2]
34 | num += skip_from_layer.output_dim
35 |
36 | if i == 1:
37 | layer = BinarizeLayer(dim_list[i], num, self.use_not, self.left, self.right)
38 | layer_name = 'binary{}'.format(i)
39 | elif i == len(dim_list) - 1:
40 | layer = LRLayer(dim_list[i], num)
41 | layer_name = 'lr{}'.format(i)
42 | else:
43 |                 # The first logical layer (i == 2) does not use NOT, since the binarization layer has already applied NOT when enabled.
44 |                 layer_use_not = i != 2
45 | layer = UnionLayer(dim_list[i], num, use_nlaf=use_nlaf, estimated_grad=estimated_grad, use_not=layer_use_not, alpha=alpha, beta=beta, gamma=gamma)
46 | layer_name = 'union{}'.format(i)
47 |
48 |         layer.conn = lambda: None  # use a lambda as a lightweight attribute container for the layer connections
49 | layer.conn.prev_layer = self.layer_list[-1] if len(self.layer_list) > 0 else None
50 | layer.conn.is_skip_to_layer = False
51 | layer.conn.skip_from_layer = skip_from_layer
52 | if skip_from_layer is not None:
53 | skip_from_layer.conn.is_skip_to_layer = True
54 |
55 | prev_layer_dim = layer.output_dim
56 | self.add_module(layer_name, layer)
57 | self.layer_list.append(layer)
58 |
59 | def forward(self, x):
60 | for layer in self.layer_list:
61 | if layer.conn.skip_from_layer is not None:
62 | x = torch.cat((x, layer.conn.skip_from_layer.x_res), dim=1)
63 | del layer.conn.skip_from_layer.x_res
64 | x = layer(x)
65 | if layer.conn.is_skip_to_layer:
66 | layer.x_res = x
67 | return x
68 |
69 | def bi_forward(self, x, count=False):
70 | for layer in self.layer_list:
71 | if layer.conn.skip_from_layer is not None:
72 | x = torch.cat((x, layer.conn.skip_from_layer.x_res), dim=1)
73 | del layer.conn.skip_from_layer.x_res
74 | x = layer.binarized_forward(x)
75 | if layer.conn.is_skip_to_layer:
76 | layer.x_res = x
77 | if count and layer.layer_type != 'linear':
78 | layer.node_activation_cnt += torch.sum(x, dim=0)
79 | layer.forward_tot += x.shape[0]
80 | return x
81 |
82 |
83 | class MyDistributedDataParallel(torch.nn.parallel.DistributedDataParallel):
84 | @property
85 | def layer_list(self):
86 | return self.module.layer_list
87 |
88 | @property
89 | def t(self):
90 | return self.module.t
91 |
92 |
93 | class RRL:
94 | def __init__(self, dim_list, device_id, use_not=False, is_rank0=False, log_file=None, writer=None, left=None,
95 | right=None, save_best=False, estimated_grad=False, save_path=None, distributed=True, use_skip=False,
96 | use_nlaf=False, alpha=0.999, beta=8, gamma=1, temperature=0.01):
97 | super(RRL, self).__init__()
98 | self.dim_list = dim_list
99 | self.use_not = use_not
100 | self.use_skip = use_skip
101 | self.use_nlaf = use_nlaf
102 |         self.alpha = alpha
103 | self.beta = beta
104 | self.gamma = gamma
105 | self.best_f1 = -1.
106 | self.best_loss = 1e20
107 |
108 | self.device_id = device_id
109 | self.is_rank0 = is_rank0
110 | self.save_best = save_best
111 | self.estimated_grad = estimated_grad
112 | self.save_path = save_path
113 | if self.is_rank0:
114 | for handler in logging.root.handlers[:]:
115 | logging.root.removeHandler(handler)
116 |
117 | log_format = '%(asctime)s - [%(levelname)s] - %(message)s'
118 | if log_file is None:
119 | logging.basicConfig(level=logging.DEBUG, stream=sys.stdout, format=log_format)
120 | else:
121 | logging.basicConfig(level=logging.DEBUG, filename=log_file, filemode='w', format=log_format)
122 | self.writer = writer
123 |
124 | self.net = Net(dim_list, use_not=use_not, left=left, right=right, use_nlaf=use_nlaf, estimated_grad=estimated_grad, use_skip=use_skip, alpha=alpha, beta=beta, gamma=gamma, temperature=temperature)
125 | self.net.cuda(self.device_id)
126 | if distributed:
127 | self.net = MyDistributedDataParallel(self.net, device_ids=[self.device_id])
128 |
129 | def clip(self):
130 | """Clip the weights into the range [0, 1]."""
131 | for layer in self.net.layer_list[: -1]:
132 | layer.clip()
133 |
134 | def edge_penalty(self):
135 | edge_penalty = 0.0
136 | for layer in self.net.layer_list[1: -1]:
137 | edge_penalty += layer.edge_count()
138 | return edge_penalty
139 |
140 | def l1_penalty(self):
141 | l1_penalty = 0.0
142 | for layer in self.net.layer_list[1: ]:
143 | l1_penalty += layer.l1_norm()
144 | return l1_penalty
145 |
146 | def l2_penalty(self):
147 | l2_penalty = 0.0
148 | for layer in self.net.layer_list[1: ]:
149 | l2_penalty += layer.l2_norm()
150 | return l2_penalty
151 |
152 | def mixed_penalty(self):
153 | penalty = 0.0
154 | for layer in self.net.layer_list[1: -1]:
155 | penalty += layer.l2_norm()
156 | penalty += self.net.layer_list[-1].l1_norm()
157 | return penalty
158 |
159 | @staticmethod
160 | def exp_lr_scheduler(optimizer, epoch, init_lr=0.001, lr_decay_rate=0.9, lr_decay_epoch=7):
161 | """Decay learning rate by a factor of lr_decay_rate every lr_decay_epoch epochs."""
162 | lr = init_lr * (lr_decay_rate ** (epoch // lr_decay_epoch))
163 | for param_group in optimizer.param_groups:
164 | param_group['lr'] = lr
165 | return optimizer
166 |
167 | def train_model(self, data_loader=None, valid_loader=None, epoch=50, lr=0.01, lr_decay_epoch=100,
168 | lr_decay_rate=0.75, weight_decay=0.0, log_iter=50):
169 |
170 | if data_loader is None:
171 | raise Exception("Data loader is unavailable!")
172 |
173 | accuracy_b = []
174 | f1_score_b = []
175 |
176 | criterion = nn.CrossEntropyLoss().cuda(self.device_id)
177 |         optimizer = torch.optim.Adam(self.net.parameters(), lr=lr, weight_decay=0.0)  # weight decay is applied manually via self.l2_penalty() below
178 |
179 | cnt = -1
180 | avg_batch_loss_rrl = 0.0
181 | epoch_histc = defaultdict(list)
182 | for epo in range(epoch):
183 | optimizer = self.exp_lr_scheduler(optimizer, epo, init_lr=lr, lr_decay_rate=lr_decay_rate,
184 | lr_decay_epoch=lr_decay_epoch)
185 |
186 | epoch_loss_rrl = 0.0
187 | abs_gradient_max = 0.0
188 | abs_gradient_avg = 0.0
189 |
190 | ba_cnt = 0
191 | for X, y in data_loader:
192 | ba_cnt += 1
193 | X = X.cuda(self.device_id, non_blocking=True)
194 | y = y.cuda(self.device_id, non_blocking=True)
195 | optimizer.zero_grad() # Zero the gradient buffers.
196 |
197 | # trainable softmax temperature
198 | y_bar = self.net.forward(X) / torch.exp(self.net.t)
199 | y_arg = torch.argmax(y, dim=1)
200 |
201 | loss_rrl = criterion(y_bar, y_arg) + weight_decay * self.l2_penalty()
202 |
203 | ba_loss_rrl = loss_rrl.item()
204 | epoch_loss_rrl += ba_loss_rrl
205 | avg_batch_loss_rrl += ba_loss_rrl
206 |
207 | loss_rrl.backward()
208 |
209 | cnt += 1
210 | with torch.no_grad():
211 | if self.is_rank0 and cnt % log_iter == 0 and cnt != 0 and self.writer is not None:
212 | self.writer.add_scalar('Avg_Batch_Loss_GradGrafting', avg_batch_loss_rrl / log_iter, cnt)
213 | edge_p = self.edge_penalty().item()
214 | self.writer.add_scalar('Edge_penalty/Log', np.log(edge_p), cnt)
215 | self.writer.add_scalar('Edge_penalty/Origin', edge_p, cnt)
216 | avg_batch_loss_rrl = 0.0
217 |
218 | optimizer.step()
219 |
220 | if self.is_rank0:
221 | for i, param in enumerate(self.net.parameters()):
222 | abs_gradient_max = max(abs_gradient_max, abs(torch.max(param.grad)))
223 | abs_gradient_avg += torch.sum(torch.abs(param.grad)) / (param.grad.numel())
224 | self.clip()
225 |
226 | if self.is_rank0 and (cnt % (TEST_CNT_MOD * (1 if self.save_best else 10)) == 0):
227 | if valid_loader is not None:
228 | acc_b, f1_b = self.test(test_loader=valid_loader, set_name='Validation')
229 | else: # use the data_loader as the valid loader
230 | acc_b, f1_b = self.test(test_loader=data_loader, set_name='Training')
231 |
232 | if self.save_best and (f1_b > self.best_f1 or (np.abs(f1_b - self.best_f1) < 1e-10 and self.best_loss > epoch_loss_rrl)):
233 | self.best_f1 = f1_b
234 | self.best_loss = epoch_loss_rrl
235 | self.save_model()
236 |
237 | accuracy_b.append(acc_b)
238 | f1_score_b.append(f1_b)
239 | if self.writer is not None:
240 | self.writer.add_scalar('Accuracy_RRL', acc_b, cnt // TEST_CNT_MOD)
241 | self.writer.add_scalar('F1_Score_RRL', f1_b, cnt // TEST_CNT_MOD)
242 | if self.is_rank0:
243 | logging.info('epoch: {}, loss_rrl: {}'.format(epo, epoch_loss_rrl))
244 | if self.writer is not None:
245 | self.writer.add_scalar('Training_Loss_RRL', epoch_loss_rrl, epo)
246 | self.writer.add_scalar('Abs_Gradient_Max', abs_gradient_max, epo)
247 | self.writer.add_scalar('Abs_Gradient_Avg', abs_gradient_avg / ba_cnt, epo)
248 | if self.is_rank0 and not self.save_best:
249 | self.save_model()
250 | return epoch_histc
251 |
252 | @torch.no_grad()
253 | def test(self, test_loader=None, set_name='Validation'):
254 | if test_loader is None:
255 | raise Exception("Data loader is unavailable!")
256 |
257 | y_list = []
258 | for X, y in test_loader:
259 | y_list.append(y)
260 | y_true = torch.cat(y_list, dim=0)
261 |         y_true = y_true.cpu().numpy().astype(int)
262 | y_true = np.argmax(y_true, axis=1)
263 | data_num = y_true.shape[0]
264 |
265 | slice_step = data_num // 40 if data_num >= 40 else 1
266 | logging.debug('y_true: {} {}'.format(y_true.shape, y_true[:: slice_step]))
267 |
268 | y_pred_b_list = []
269 | for X, y in test_loader:
270 | X = X.cuda(self.device_id, non_blocking=True)
271 | output = self.net.forward(X)
272 | y_pred_b_list.append(output)
273 |
274 | y_pred_b = torch.cat(y_pred_b_list).cpu().numpy()
275 | y_pred_b_arg = np.argmax(y_pred_b, axis=1)
276 | logging.debug('y_rrl_: {} {}'.format(y_pred_b_arg.shape, y_pred_b_arg[:: slice_step]))
277 |         logging.debug('y_rrl: {} {}'.format(y_pred_b.shape, y_pred_b[::slice_step]))
278 |
279 | accuracy_b = metrics.accuracy_score(y_true, y_pred_b_arg)
280 | f1_score_b = metrics.f1_score(y_true, y_pred_b_arg, average='macro')
281 |
282 | logging.info('-' * 60)
283 | logging.info('On {} Set:\n\tAccuracy of RRL Model: {}'
284 | '\n\tF1 Score of RRL Model: {}'.format(set_name, accuracy_b, f1_score_b))
285 | logging.info('On {} Set:\nPerformance of RRL Model: \n{}\n{}'.format(
286 | set_name, metrics.confusion_matrix(y_true, y_pred_b_arg), metrics.classification_report(y_true, y_pred_b_arg)))
287 | logging.info('-' * 60)
288 |
289 | return accuracy_b, f1_score_b
290 |
291 | def save_model(self):
292 | rrl_args = {'dim_list': self.dim_list, 'use_not': self.use_not, 'use_skip': self.use_skip, 'estimated_grad': self.estimated_grad,
293 | 'use_nlaf': self.use_nlaf, 'alpha': self.alpha, 'beta': self.beta, 'gamma': self.gamma}
294 | torch.save({'model_state_dict': self.net.state_dict(), 'rrl_args': rrl_args}, self.save_path)
295 |
296 | def detect_dead_node(self, data_loader=None):
297 | with torch.no_grad():
298 | for layer in self.net.layer_list[:-1]:
299 | layer.node_activation_cnt = torch.zeros(layer.output_dim, dtype=torch.double, device=self.device_id)
300 | layer.forward_tot = 0
301 |
302 | for x, y in data_loader:
303 | x_bar = x.cuda(self.device_id)
304 | self.net.bi_forward(x_bar, count=True)
305 |
306 | def rule_print(self, feature_name, label_name, train_loader, file=sys.stdout, mean=None, std=None, display=True):
307 |         if self.net.layer_list[1].node_activation_cnt is None and train_loader is None:
308 | raise Exception("Need train_loader for the dead nodes detection.")
309 |
310 | # detect dead nodes first
311 | if self.net.layer_list[1].node_activation_cnt is None:
312 | self.detect_dead_node(train_loader)
313 |
314 | # for Binarize Layer
315 | self.net.layer_list[0].get_bound_name(feature_name, mean, std) # layer_list[0].rule_name == bound_name
316 |
317 | # for Union Layer
318 | for i in range(1, len(self.net.layer_list) - 1):
319 | layer = self.net.layer_list[i]
320 | layer.get_rules(layer.conn.prev_layer, layer.conn.skip_from_layer)
321 | skip_rule_name = None if layer.conn.skip_from_layer is None else layer.conn.skip_from_layer.rule_name
322 |             wrap_prev_rule = i != 1  # do not wrap the bound_name of the binarization layer
323 | layer.get_rule_description((skip_rule_name, layer.conn.prev_layer.rule_name), wrap=wrap_prev_rule)
324 |
325 |         # for LR Layer
326 | layer = self.net.layer_list[-1]
327 | layer.get_rule2weights(layer.conn.prev_layer, layer.conn.skip_from_layer)
328 |
329 | if not display:
330 | return layer.rule2weights
331 |
332 | print('RID', end='\t', file=file)
333 | for i, ln in enumerate(label_name):
334 | print('{}(b={:.4f})'.format(ln, layer.bl[i]), end='\t', file=file)
335 | print('Support\tRule', file=file)
336 | for rid, w in layer.rule2weights:
337 | print(rid, end='\t', file=file)
338 | for li in range(len(label_name)):
339 | print('{:.4f}'.format(w[li]), end='\t', file=file)
340 | now_layer = self.net.layer_list[-1 + rid[0]]
341 | print('{:.4f}'.format((now_layer.node_activation_cnt[layer.rid2dim[rid]] / now_layer.forward_tot).item()),
342 | end='\t', file=file)
343 | print(now_layer.rule_name[rid[1]], end='\n', file=file)
344 | print('#' * 60, file=file)
345 | return layer.rule2weights
346 |
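347 | # Usage sketch (single GPU, no DistributedDataParallel; the dimensions below are
348 | # hypothetical). dim_list starts with a (discrete, continuous) feature-count tuple,
349 | # followed by the binarization/logical layer widths and the number of classes:
350 | #
351 | #   rrl = RRL(dim_list=[(18, 0), 1, 16, 2], device_id=0, is_rank0=True,
352 | #             distributed=False, save_path='model.pth')
353 | #   rrl.train_model(data_loader=train_loader, valid_loader=valid_loader, lr=0.002, epoch=41)
354 | #   rrl.test(test_loader=test_loader, set_name='Test')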
--------------------------------------------------------------------------------
/rrl/components.py:
--------------------------------------------------------------------------------
1 | import torch
2 | import torch.nn as nn
3 | from collections import defaultdict
4 |
5 | THRESHOLD = 0.5
6 | INIT_RANGE = 0.5
7 | EPSILON = 1e-10
8 | INIT_L = 0.0
9 |
10 |
11 | class GradGraft(torch.autograd.Function):
12 | """Implement the Gradient Grafting."""
13 | @staticmethod
14 | def forward(ctx, X, Y):
15 | return X
16 |
17 | @staticmethod
18 | def backward(ctx, grad_output):
19 | return None, grad_output.clone()
20 |
21 |
22 | class Binarize(torch.autograd.Function):
23 | """Deterministic binarization."""
24 | @staticmethod
25 | def forward(ctx, X):
26 | y = torch.where(X > 0, torch.ones_like(X), torch.zeros_like(X))
27 | return y
28 |
29 | @staticmethod
30 | def backward(ctx, grad_output):
31 | grad_input = grad_output.clone()
32 | return grad_input
33 |
34 |
35 | class BinarizeLayer(nn.Module):
36 | """Implement the feature discretization and binarization."""
37 |
38 | def __init__(self, n, input_dim, use_not=False, left=None, right=None):
39 | super(BinarizeLayer, self).__init__()
40 | self.n = n
41 | self.input_dim = input_dim
42 | self.disc_num = input_dim[0]
43 | self.use_not = use_not
44 | if self.use_not:
45 | self.disc_num *= 2
46 | self.output_dim = self.disc_num + self.n * self.input_dim[1] * 2
47 | self.layer_type = 'binarization'
48 | self.dim2id = {i: i for i in range(self.output_dim)}
49 | self.rule_name = None
50 |
51 | self.register_buffer('left', left)
52 | self.register_buffer('right', right)
53 |
54 | if self.input_dim[1] > 0:
55 | if self.left is not None and self.right is not None:
56 | cl = self.left + torch.rand(self.n, self.input_dim[1]) * (self.right - self.left)
57 | else:
58 | cl = torch.randn(self.n, self.input_dim[1])
59 | self.register_buffer('cl', cl)
60 |
61 | def forward(self, x):
62 | if self.input_dim[1] > 0:
63 | x_disc, x = x[:, 0: self.input_dim[0]], x[:, self.input_dim[0]:]
64 | x = x.unsqueeze(-1)
65 | if self.use_not:
66 | x_disc = torch.cat((x_disc, 1 - x_disc), dim=1)
67 | binarize_res = Binarize.apply(x - self.cl.t()).view(x.shape[0], -1)
68 | return torch.cat((x_disc, binarize_res, 1. - binarize_res), dim=1)
69 | if self.use_not:
70 | x = torch.cat((x, 1 - x), dim=1)
71 | return x
72 |
73 | @torch.no_grad()
74 | def binarized_forward(self, x):
75 | return self.forward(x)
76 |
77 | def clip(self):
78 | if self.input_dim[1] > 0 and self.left is not None and self.right is not None:
79 | self.cl.data = torch.where(self.cl.data > self.right, self.right, self.cl.data)
80 | self.cl.data = torch.where(self.cl.data < self.left, self.left, self.cl.data)
81 |
82 | def get_bound_name(self, feature_name, mean=None, std=None):
83 | bound_name = []
84 | for i in range(self.input_dim[0]):
85 | bound_name.append(feature_name[i])
86 | if self.use_not:
87 | for i in range(self.input_dim[0]):
88 | bound_name.append('~' + feature_name[i])
89 | if self.input_dim[1] > 0:
90 | for c, op in [(self.cl, '>'), (self.cl, '<=')]:
91 | c = c.detach().cpu().numpy()
92 | for i, ci in enumerate(c.T):
93 | fi_name = feature_name[self.input_dim[0] + i]
94 | for j in ci:
95 | if mean is not None and std is not None:
96 | j = j * std[fi_name] + mean[fi_name]
97 | bound_name.append('{} {} {:.3f}'.format(fi_name, op, j))
98 | self.rule_name = bound_name
99 |
100 |
101 | class Product(torch.autograd.Function):
102 | """Tensor product function."""
103 | @staticmethod
104 | def forward(ctx, X):
105 | y = (-1. / (-1. + torch.sum(torch.log(X), dim=1)))
106 | ctx.save_for_backward(X, y)
107 | return y
108 |
109 | @staticmethod
110 | def backward(ctx, grad_output):
111 |         X, y = ctx.saved_tensors
112 | grad_input = grad_output.unsqueeze(1) * (y.unsqueeze(1) ** 2 / (X + EPSILON))
113 | return grad_input
114 |
115 |
116 | class EstimatedProduct(torch.autograd.Function):
117 | """Tensor product function with a estimated derivative."""
118 | @staticmethod
119 | def forward(ctx, X):
120 | y = (-1. / (-1. + torch.sum(torch.log(X), dim=1)))
121 | ctx.save_for_backward(X, y)
122 | return y
123 |
124 | @staticmethod
125 | def backward(ctx, grad_output):
126 |         X, y = ctx.saved_tensors
127 | grad_input = grad_output.unsqueeze(1) * ((-1. / (-1. + torch.log(y.unsqueeze(1) ** 2))) / (X + EPSILON))
128 | return grad_input
129 |
130 |
131 | class LRLayer(nn.Module):
132 | """The LR layer is used to learn the linear part of the data."""
133 |
134 | def __init__(self, n, input_dim):
135 | super(LRLayer, self).__init__()
136 | self.n = n
137 | self.input_dim = input_dim
138 | self.output_dim = self.n
139 | self.layer_type = 'linear'
140 | self.rid2dim = None
141 | self.rule2weights = None
142 |
143 | self.fc1 = nn.Linear(self.input_dim, self.output_dim)
144 |
145 | def forward(self, x):
146 | return self.fc1(x)
147 |
148 | @torch.no_grad()
149 | def binarized_forward(self, x):
150 | return self.forward(x)
151 |
152 | def clip(self):
153 | for param in self.fc1.parameters():
154 | param.data.clamp_(-1.0, 1.0)
155 |
156 | def l1_norm(self):
157 | return torch.norm(self.fc1.weight, p=1)
158 |
159 | def l2_norm(self):
160 | return torch.sum(self.fc1.weight ** 2)
161 |
162 | def get_rule2weights(self, prev_layer, skip_connect_layer):
163 | prev_layer = self.conn.prev_layer
164 | skip_connect_layer = self.conn.skip_from_layer
165 |
166 | always_act_pos = (prev_layer.node_activation_cnt == prev_layer.forward_tot)
167 | merged_dim2id = prev_dim2id = {k: (-1, v) for k, v in prev_layer.dim2id.items()}
168 | if skip_connect_layer is not None:
169 | shifted_dim2id = {(k + prev_layer.output_dim): (-2, v) for k, v in skip_connect_layer.dim2id.items()}
170 | merged_dim2id = defaultdict(lambda: -1, {**shifted_dim2id, **prev_dim2id})
171 | always_act_pos = torch.cat(
172 | [always_act_pos, (skip_connect_layer.node_activation_cnt == skip_connect_layer.forward_tot)])
173 |
174 | Wl, bl = list(self.fc1.parameters())
175 | bl = torch.sum(Wl.T[always_act_pos], dim=0) + bl
176 | Wl = Wl.cpu().detach().numpy()
177 | self.bl = bl.cpu().detach().numpy()
178 |
179 | marked = defaultdict(lambda: defaultdict(float))
180 | rid2dim = {}
181 | for label_id, wl in enumerate(Wl):
182 | for i, w in enumerate(wl):
183 | rid = merged_dim2id[i]
184 | if rid == -1 or rid[1] == -1:
185 | continue
186 | marked[rid][label_id] += w
187 | rid2dim[rid] = i % prev_layer.output_dim
188 |
189 | self.rid2dim = rid2dim
190 | self.rule2weights = sorted(marked.items(), key=lambda x: max(map(abs, x[1].values())), reverse=True)
191 |
192 |
193 | class ConjunctionLayer(nn.Module):
194 | """The novel conjunction layer is used to learn the conjunction of nodes with less time and GPU memory usage."""
195 |
196 | def __init__(self, n, input_dim, use_not=False, alpha=0.999, beta=8, gamma=1):
197 | super(ConjunctionLayer, self).__init__()
198 | self.n = n
199 | self.use_not = use_not
200 | self.input_dim = input_dim if not use_not else input_dim * 2
201 | self.output_dim = self.n
202 | self.layer_type = 'conjunction'
203 |
204 | self.W = nn.Parameter(INIT_L + (0.5 - INIT_L) * torch.rand(self.input_dim, self.n))
205 |
206 | self.node_activation_cnt = None
207 |
208 | self.alpha = alpha
209 | self.beta = beta
210 | self.gamma = gamma
211 |
212 | def forward(self, x):
213 | res_tilde = self.continuous_forward(x)
214 | res_bar = self.binarized_forward(x)
215 | return GradGraft.apply(res_bar, res_tilde)
216 |
217 | def continuous_forward(self, x):
218 | if self.use_not:
219 | x = torch.cat((x, 1 - x), dim=1)
220 | x = 1. - x
221 | xl = (1. - 1. / (1. - (x * self.alpha) ** self.beta))
222 | wl = (1. - 1. / (1. - (self.W * self.alpha) ** self.beta))
223 | return 1. / (1. + xl @ wl) ** self.gamma
224 |
225 | @torch.no_grad()
226 | def binarized_forward(self, x):
227 | if self.use_not:
228 | x = torch.cat((x, 1 - x), dim=1)
229 | Wb = Binarize.apply(self.W - THRESHOLD)
230 | res = (1 - x) @ Wb
231 | return torch.where(res > 0, torch.zeros_like(res), torch.ones_like(res))
232 |
233 | def clip(self):
234 | self.W.data.clamp_(INIT_L, 1.0)
235 |
236 |
237 | class DisjunctionLayer(nn.Module):
238 | """The novel disjunction layer is used to learn the disjunction of nodes with less time and GPU memory usage."""
239 |
240 | def __init__(self, n, input_dim, use_not=False, alpha=0.999, beta=8, gamma=1):
241 | super(DisjunctionLayer, self).__init__()
242 | self.n = n
243 | self.use_not = use_not
244 | self.input_dim = input_dim if not use_not else input_dim * 2
245 | self.output_dim = self.n
246 | self.layer_type = 'disjunction'
247 |
248 | self.W = nn.Parameter(INIT_L + (0.5 - INIT_L) * torch.rand(self.input_dim, self.n))
249 |
250 | self.node_activation_cnt = None
251 |
252 | self.alpha = alpha
253 | self.beta = beta
254 | self.gamma = gamma
255 |
256 | def forward(self, x):
257 | res_tilde = self.continuous_forward(x)
258 | res_bar = self.binarized_forward(x)
259 | return GradGraft.apply(res_bar, res_tilde)
260 |
261 | def continuous_forward(self, x):
262 | if self.use_not:
263 | x = torch.cat((x, 1 - x), dim=1)
264 | xl = (1. - 1. / (1. - (x * self.alpha) ** self.beta))
265 | wl = (1. - 1. / (1. - (self.W * self.alpha) ** self.beta))
266 | return 1. - 1. / (1. + xl @ wl) ** self.gamma
267 |
268 | @torch.no_grad()
269 | def binarized_forward(self, x):
270 | if self.use_not:
271 | x = torch.cat((x, 1 - x), dim=1)
272 | Wb = Binarize.apply(self.W - THRESHOLD)
273 | res = x @ Wb
274 | return torch.where(res > 0, torch.ones_like(res), torch.zeros_like(res))
275 |
276 | def clip(self):
277 | self.W.data.clamp_(INIT_L, 1.0)
278 |
279 |
280 | class OriginalConjunctionLayer(nn.Module):
281 | """The conjunction layer is used to learn the conjunction of nodes."""
282 |
283 | def __init__(self, n, input_dim, use_not=False, estimated_grad=False):
284 | super(OriginalConjunctionLayer, self).__init__()
285 | self.n = n
286 | self.use_not = use_not
287 | self.input_dim = input_dim if not use_not else input_dim * 2
288 | self.output_dim = self.n
289 | self.layer_type = 'conjunction'
290 |
291 | self.W = nn.Parameter(INIT_RANGE * torch.rand(self.input_dim, self.n))
292 | self.Product = EstimatedProduct if estimated_grad else Product
293 | self.node_activation_cnt = None
294 |
295 | def forward(self, x):
296 | res_tilde = self.continuous_forward(x)
297 | res_bar = self.binarized_forward(x)
298 | return GradGraft.apply(res_bar, res_tilde)
299 |
300 | def continuous_forward(self, x):
301 | if self.use_not:
302 | x = torch.cat((x, 1 - x), dim=1)
303 | return self.Product.apply(1 - (1 - x).unsqueeze(-1) * self.W)
304 |
305 | @torch.no_grad()
306 | def binarized_forward(self, x):
307 | if self.use_not:
308 | x = torch.cat((x, 1 - x), dim=1)
309 | Wb = Binarize.apply(self.W - THRESHOLD)
310 | return torch.prod(1 - (1 - x).unsqueeze(-1) * Wb, dim=1)
311 |
312 | def clip(self):
313 | self.W.data.clamp_(0.0, 1.0)
314 |
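# Note on cost (illustrative): `(1 - x).unsqueeze(-1) * self.W` above
# broadcasts to a (batch_size, input_dim, n) tensor that the product over
# dim=1 then reduces, i.e. O(batch_size * input_dim * n) memory. The novel
# ConjunctionLayer/DisjunctionLayer avoid this 3-D intermediate by mapping
# inputs and weights elementwise and reducing with a single matmul (xl @ wl),
# which is where their time and GPU memory savings come from.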
315 |
316 | class OriginalDisjunctionLayer(nn.Module):
317 | """The disjunction layer is used to learn the disjunction of nodes."""
318 |
319 | def __init__(self, n, input_dim, use_not=False, estimated_grad=False):
320 | super(OriginalDisjunctionLayer, self).__init__()
321 | self.n = n
322 | self.use_not = use_not
323 | self.input_dim = input_dim if not use_not else input_dim * 2
324 | self.output_dim = self.n
325 | self.layer_type = 'disjunction'
326 |
327 | self.W = nn.Parameter(INIT_RANGE * torch.rand(self.input_dim, self.n))
328 | self.Product = EstimatedProduct if estimated_grad else Product
329 | self.node_activation_cnt = None
330 |
331 | def forward(self, x):
332 | res_tilde = self.continuous_forward(x)
333 | res_bar = self.binarized_forward(x)
334 | return GradGraft.apply(res_bar, res_tilde)
335 |
336 | def continuous_forward(self, x):
337 | if self.use_not:
338 | x = torch.cat((x, 1 - x), dim=1)
339 | return 1 - self.Product.apply(1 - x.unsqueeze(-1) * self.W)
340 |
341 | @torch.no_grad()
342 | def binarized_forward(self, x):
343 | if self.use_not:
344 | x = torch.cat((x, 1 - x), dim=1)
345 | Wb = Binarize.apply(self.W - THRESHOLD)
346 | return 1 - torch.prod(1 - x.unsqueeze(-1) * Wb, dim=1)
347 |
348 | def clip(self):
349 | self.W.data.clamp_(0.0, 1.0)
350 |
351 |
352 | def extract_rules(prev_layer, skip_connect_layer, layer, pos_shift=0):
353 |     # dim2id = {dimension: rule_id}; rule_id == -1 marks a dead node (or an unmapped dimension)
354 | dim2id = defaultdict(lambda: -1)
355 | rules = {}
356 | tmp = 0
357 | rule_list = []
358 |
359 | # Wb.shape = (n, input_dim)
360 | Wb = (layer.W.t() > 0.5).type(torch.int).detach().cpu().numpy()
361 |
362 |     # merged_dim2id is the dim2id of the inputs (the prev_layer and the skip_connect_layer)
363 | merged_dim2id = prev_dim2id = {k: (-1, v) for k, v in prev_layer.dim2id.items()}
364 | if skip_connect_layer is not None:
365 | shifted_dim2id = {(k + prev_layer.output_dim): (-2, v) for k, v in skip_connect_layer.dim2id.items()}
366 | merged_dim2id = defaultdict(lambda: -1, {**shifted_dim2id, **prev_dim2id})
367 |
368 | for ri, row in enumerate(Wb):
369 |         # skip dead nodes, i.e. nodes that never activate or always activate
370 | if layer.node_activation_cnt[ri + pos_shift] == 0 or layer.node_activation_cnt[ri + pos_shift] == layer.forward_tot:
371 | dim2id[ri + pos_shift] = -1
372 | continue
373 | rule = {}
374 |         # rule is keyed by (k, rule_id) pairs:
375 | # k == -1: connects to a rule in prev_layer,
376 | # k == 1: connects to a rule in prev_layer (NOT),
377 | # k == -2: connects to a rule in skip_connect_layer,
378 | # k == 2: connects to a rule in skip_connect_layer (NOT).
379 | bound = {}
380 | if prev_layer.layer_type == 'binarization' and prev_layer.input_dim[1] > 0:
381 | c = torch.cat((prev_layer.cl.t().reshape(-1), prev_layer.cl.t().reshape(-1))).detach().cpu().numpy()
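            # (assumed semantics: cl holds the continuous-feature bin boundaries; it
            #  appears twice because the binarization layer emits both a lower-bound
            #  and an upper-bound feature for each boundary, cf. the c.shape[0] // 2
            #  split below)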
382 | for i, w in enumerate(row):
383 |             # handle "use NOT": use_not_mul = -1 if this input dimension is a negated (NOT) copy
384 | use_not_mul = 1
385 | if layer.use_not:
386 | if i >= layer.input_dim // 2:
387 | use_not_mul = -1
388 | i = i % (layer.input_dim // 2)
389 |
390 | if w > 0 and merged_dim2id[i][1] != -1:
391 | if prev_layer.layer_type == 'binarization' and i >= prev_layer.disc_num:
392 | ci = i - prev_layer.disc_num
393 | bi = ci // prev_layer.n
394 | if bi not in bound:
395 | bound[bi] = [i, c[ci]]
396 | rule[(-1, i)] = 1 # since dim2id[i] == i in the BinarizeLayer
397 | else: # merge the bounds for one feature
398 | if (ci < c.shape[0] // 2 and layer.layer_type == 'conjunction') or \
399 | (ci >= c.shape[0] // 2 and layer.layer_type == 'disjunction'):
400 | func = max
401 | else:
402 | func = min
403 | bound[bi][1] = func(bound[bi][1], c[ci])
404 |                         if bound[bi][1] == c[ci]:  # the new bound is tighter, so it replaces the previous one
405 | del rule[(-1, bound[bi][0])]
406 | rule[(-1, i)] = 1
407 | bound[bi][0] = i
408 | else:
409 | rid = merged_dim2id[i]
410 | rule[(rid[0] * use_not_mul, rid[1])] = 1
411 |
412 | # give each unique rule an id, and save this id in dim2id
413 | rule = tuple(sorted(rule.keys()))
414 | if rule not in rules:
415 | rules[rule] = tmp
416 | rule_list.append(rule)
417 | dim2id[ri + pos_shift] = tmp
418 | tmp += 1
419 | else:
420 | dim2id[ri + pos_shift] = rules[rule]
421 | return dim2id, rule_list
422 |
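# Illustrative example of the deduplication above (assumed values): nodes
# whose binarized rows reduce to the same set of input rules share one id.
#
#   rule = tuple(sorted({(-1, 0): 1, (-1, 3): 1}.keys()))
#   # rule == ((-1, 0), (-1, 3))
#   # first node producing this tuple: rules[rule] = tmp; dim2id[ri + pos_shift] = tmp
#   # later nodes producing the same tuple: dim2id[ri + pos_shift] = rules[rule]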
423 |
424 | class UnionLayer(nn.Module):
425 | """The union layer is used to learn the rule-based representation."""
426 |
427 | def __init__(self, n, input_dim, use_not=False, use_nlaf=False, estimated_grad=False, alpha=0.999, beta=8, gamma=1):
428 | super(UnionLayer, self).__init__()
429 | self.n = n
430 | self.use_not = use_not
431 | self.input_dim = input_dim
432 | self.output_dim = self.n * 2
433 | self.layer_type = 'union'
434 | self.forward_tot = None
435 | self.node_activation_cnt = None
436 | self.dim2id = None
437 | self.rule_list = None
438 | self.rule_name = None
439 |
440 | if use_nlaf: # use novel logical activation functions
441 | self.con_layer = ConjunctionLayer(self.n, self.input_dim, use_not=use_not, alpha=alpha, beta=beta, gamma=gamma)
442 | self.dis_layer = DisjunctionLayer(self.n, self.input_dim, use_not=use_not, alpha=alpha, beta=beta, gamma=gamma)
443 | else: # use original logical activation functions
444 | self.con_layer = OriginalConjunctionLayer(self.n, self.input_dim, use_not=use_not, estimated_grad=estimated_grad)
445 | self.dis_layer = OriginalDisjunctionLayer(self.n, self.input_dim, use_not=use_not, estimated_grad=estimated_grad)
446 |
447 | def forward(self, x):
448 | return torch.cat([self.con_layer(x), self.dis_layer(x)], dim=1)
449 |
450 | def binarized_forward(self, x):
451 | return torch.cat([self.con_layer.binarized_forward(x),
452 | self.dis_layer.binarized_forward(x)], dim=1)
453 |
454 | def edge_count(self):
455 | con_Wb = Binarize.apply(self.con_layer.W - THRESHOLD)
456 | dis_Wb = Binarize.apply(self.dis_layer.W - THRESHOLD)
457 | return torch.sum(con_Wb) + torch.sum(dis_Wb)
458 |
459 | def l1_norm(self):
460 | return torch.sum(self.con_layer.W) + torch.sum(self.dis_layer.W)
461 |
462 | def l2_norm(self):
463 | return torch.sum(self.con_layer.W ** 2) + torch.sum(self.dis_layer.W ** 2)
464 |
465 | def clip(self):
466 | self.con_layer.clip()
467 | self.dis_layer.clip()
468 |
469 | def get_rules(self, prev_layer, skip_connect_layer):
470 | self.con_layer.forward_tot = self.dis_layer.forward_tot = self.forward_tot
471 | self.con_layer.node_activation_cnt = self.dis_layer.node_activation_cnt = self.node_activation_cnt
472 |
473 | # get dim2id and rule lists of the conjunction layer and the disjunction layer
474 |         # dim2id: dimension --> rule id (disjunction ids are shifted past the conjunction ids)
475 | con_dim2id, con_rule_list = extract_rules(prev_layer, skip_connect_layer, self.con_layer)
476 | dis_dim2id, dis_rule_list = extract_rules(prev_layer, skip_connect_layer, self.dis_layer, self.con_layer.W.shape[1])
477 |
478 | shift = max(con_dim2id.values()) + 1
479 | dis_dim2id = {k: (-1 if v == -1 else v + shift) for k, v in dis_dim2id.items()}
480 | dim2id = defaultdict(lambda: -1, {**con_dim2id, **dis_dim2id})
481 |
482 | rule_list = (con_rule_list, dis_rule_list)
483 |
484 | self.dim2id = dim2id
485 | self.rule_list = rule_list
486 |
487 | def get_rule_description(self, input_rule_name, wrap=False):
488 | """
489 | input_rule_name: (skip_connect_rule_name, prev_rule_name)
490 | """
491 | self.rule_name = []
492 | for rl, op in zip(self.rule_list, ('&', '|')):
493 | for rule in rl:
494 | name = ''
495 | for i, ri in enumerate(rule):
496 | op_str = ' {} '.format(op) if i != 0 else ''
497 | layer_shift = ri[0]
498 | not_str = ''
499 | if ri[0] > 0: # ri[0] == 1 or ri[0] == 2
500 | layer_shift *= -1
501 | not_str = '~'
502 | var_str = ('({})' if (wrap or not_str == '~') else '{}').format(input_rule_name[2 + layer_shift][ri[1]])
503 | name += op_str + not_str + var_str
504 | self.rule_name.append(name)
505 |
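# Illustrative trace of get_rule_description (assumed inputs): with
# input_rule_name = (['s0'], ['r0', 'r1']) and a conjunction rule
# ((-1, 0), (1, 1)), the loop yields the string
#
#   'r0 & ~(r1)'
#
# because ri[0] == 1 flips layer_shift back to -1 (the prev layer) and
# prepends '~', while negated terms are always wrapped in parentheses.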
--------------------------------------------------------------------------------
/dataset/tic-tac-toe.data:
--------------------------------------------------------------------------------
1 | x,x,x,x,o,o,x,o,o,positive
2 | x,x,x,x,o,o,o,x,o,positive
3 | x,x,x,x,o,o,o,o,x,positive
4 | x,x,x,x,o,o,o,b,b,positive
5 | x,x,x,x,o,o,b,o,b,positive
6 | x,x,x,x,o,o,b,b,o,positive
7 | x,x,x,x,o,b,o,o,b,positive
8 | x,x,x,x,o,b,o,b,o,positive
9 | x,x,x,x,o,b,b,o,o,positive
10 | x,x,x,x,b,o,o,o,b,positive
11 | x,x,x,x,b,o,o,b,o,positive
12 | x,x,x,x,b,o,b,o,o,positive
13 | x,x,x,o,x,o,x,o,o,positive
14 | x,x,x,o,x,o,o,x,o,positive
15 | x,x,x,o,x,o,o,o,x,positive
16 | x,x,x,o,x,o,o,b,b,positive
17 | x,x,x,o,x,o,b,o,b,positive
18 | x,x,x,o,x,o,b,b,o,positive
19 | x,x,x,o,x,b,o,o,b,positive
20 | x,x,x,o,x,b,o,b,o,positive
21 | x,x,x,o,x,b,b,o,o,positive
22 | x,x,x,o,o,x,x,o,o,positive
23 | x,x,x,o,o,x,o,x,o,positive
24 | x,x,x,o,o,x,o,o,x,positive
25 | x,x,x,o,o,x,o,b,b,positive
26 | x,x,x,o,o,x,b,o,b,positive
27 | x,x,x,o,o,x,b,b,o,positive
28 | x,x,x,o,o,b,x,o,b,positive
29 | x,x,x,o,o,b,x,b,o,positive
30 | x,x,x,o,o,b,o,x,b,positive
31 | x,x,x,o,o,b,o,b,x,positive
32 | x,x,x,o,o,b,b,x,o,positive
33 | x,x,x,o,o,b,b,o,x,positive
34 | x,x,x,o,o,b,b,b,b,positive
35 | x,x,x,o,b,x,o,o,b,positive
36 | x,x,x,o,b,x,o,b,o,positive
37 | x,x,x,o,b,x,b,o,o,positive
38 | x,x,x,o,b,o,x,o,b,positive
39 | x,x,x,o,b,o,x,b,o,positive
40 | x,x,x,o,b,o,o,x,b,positive
41 | x,x,x,o,b,o,o,b,x,positive
42 | x,x,x,o,b,o,b,x,o,positive
43 | x,x,x,o,b,o,b,o,x,positive
44 | x,x,x,o,b,o,b,b,b,positive
45 | x,x,x,o,b,b,x,o,o,positive
46 | x,x,x,o,b,b,o,x,o,positive
47 | x,x,x,o,b,b,o,o,x,positive
48 | x,x,x,o,b,b,o,b,b,positive
49 | x,x,x,o,b,b,b,o,b,positive
50 | x,x,x,o,b,b,b,b,o,positive
51 | x,x,x,b,x,o,o,o,b,positive
52 | x,x,x,b,x,o,o,b,o,positive
53 | x,x,x,b,x,o,b,o,o,positive
54 | x,x,x,b,o,x,o,o,b,positive
55 | x,x,x,b,o,x,o,b,o,positive
56 | x,x,x,b,o,x,b,o,o,positive
57 | x,x,x,b,o,o,x,o,b,positive
58 | x,x,x,b,o,o,x,b,o,positive
59 | x,x,x,b,o,o,o,x,b,positive
60 | x,x,x,b,o,o,o,b,x,positive
61 | x,x,x,b,o,o,b,x,o,positive
62 | x,x,x,b,o,o,b,o,x,positive
63 | x,x,x,b,o,o,b,b,b,positive
64 | x,x,x,b,o,b,x,o,o,positive
65 | x,x,x,b,o,b,o,x,o,positive
66 | x,x,x,b,o,b,o,o,x,positive
67 | x,x,x,b,o,b,o,b,b,positive
68 | x,x,x,b,o,b,b,o,b,positive
69 | x,x,x,b,o,b,b,b,o,positive
70 | x,x,x,b,b,o,x,o,o,positive
71 | x,x,x,b,b,o,o,x,o,positive
72 | x,x,x,b,b,o,o,o,x,positive
73 | x,x,x,b,b,o,o,b,b,positive
74 | x,x,x,b,b,o,b,o,b,positive
75 | x,x,x,b,b,o,b,b,o,positive
76 | x,x,x,b,b,b,o,o,b,positive
77 | x,x,x,b,b,b,o,b,o,positive
78 | x,x,x,b,b,b,b,o,o,positive
79 | x,x,o,x,x,o,o,o,x,positive
80 | x,x,o,x,o,x,x,o,o,positive
81 | x,x,o,x,o,o,x,o,x,positive
82 | x,x,o,x,o,o,x,b,b,positive
83 | x,x,o,x,o,b,x,o,b,positive
84 | x,x,o,x,o,b,x,b,o,positive
85 | x,x,o,x,b,o,x,o,b,positive
86 | x,x,o,x,b,b,x,o,o,positive
87 | x,x,o,o,x,x,o,x,o,positive
88 | x,x,o,o,x,x,o,o,x,positive
89 | x,x,o,o,x,o,x,o,x,positive
90 | x,x,o,o,x,o,o,x,x,positive
91 | x,x,o,o,x,o,b,x,b,positive
92 | x,x,o,o,x,o,b,b,x,positive
93 | x,x,o,o,x,b,o,x,b,positive
94 | x,x,o,o,x,b,o,b,x,positive
95 | x,x,o,o,x,b,b,x,o,positive
96 | x,x,o,o,x,b,b,o,x,positive
97 | x,x,o,b,x,o,o,x,b,positive
98 | x,x,o,b,x,o,o,b,x,positive
99 | x,x,o,b,x,o,b,o,x,positive
100 | x,x,o,b,x,b,o,x,o,positive
101 | x,x,o,b,x,b,o,o,x,positive
102 | x,x,b,x,o,o,x,o,b,positive
103 | x,x,b,x,o,o,x,b,o,positive
104 | x,x,b,x,o,b,x,o,o,positive
105 | x,x,b,x,b,o,x,o,o,positive
106 | x,x,b,o,x,o,o,x,b,positive
107 | x,x,b,o,x,o,o,b,x,positive
108 | x,x,b,o,x,o,b,x,o,positive
109 | x,x,b,o,x,o,b,o,x,positive
110 | x,x,b,o,x,b,o,x,o,positive
111 | x,x,b,o,x,b,o,o,x,positive
112 | x,x,b,b,x,o,o,x,o,positive
113 | x,x,b,b,x,o,o,o,x,positive
114 | x,o,x,x,x,o,x,o,o,positive
115 | x,o,x,x,x,o,o,o,x,positive
116 | x,o,x,x,o,o,x,x,o,positive
117 | x,o,x,x,o,o,x,b,b,positive
118 | x,o,x,x,o,b,x,b,o,positive
119 | x,o,x,x,b,o,x,o,b,positive
120 | x,o,x,x,b,o,x,b,o,positive
121 | x,o,x,x,b,b,x,o,o,positive
122 | x,o,x,o,x,x,x,o,o,positive
123 | x,o,x,o,x,x,o,o,x,positive
124 | x,o,x,o,x,o,x,x,o,positive
125 | x,o,x,o,x,o,x,o,x,positive
126 | x,o,x,o,x,o,x,b,b,positive
127 | x,o,x,o,x,o,o,x,x,positive
128 | x,o,x,o,x,o,b,b,x,positive
129 | x,o,x,o,x,b,x,o,b,positive
130 | x,o,x,o,x,b,x,b,o,positive
131 | x,o,x,o,x,b,o,b,x,positive
132 | x,o,x,o,x,b,b,o,x,positive
133 | x,o,x,o,o,x,o,x,x,positive
134 | x,o,x,o,o,x,b,b,x,positive
135 | x,o,x,o,b,x,o,b,x,positive
136 | x,o,x,o,b,x,b,o,x,positive
137 | x,o,x,b,x,o,x,o,b,positive
138 | x,o,x,b,x,o,x,b,o,positive
139 | x,o,x,b,x,o,o,b,x,positive
140 | x,o,x,b,x,o,b,o,x,positive
141 | x,o,x,b,x,b,x,o,o,positive
142 | x,o,x,b,x,b,o,o,x,positive
143 | x,o,x,b,o,x,o,b,x,positive
144 | x,o,x,b,b,x,o,o,x,positive
145 | x,o,o,x,x,x,x,o,o,positive
146 | x,o,o,x,x,x,o,x,o,positive
147 | x,o,o,x,x,x,o,o,x,positive
148 | x,o,o,x,x,x,o,b,b,positive
149 | x,o,o,x,x,x,b,o,b,positive
150 | x,o,o,x,x,x,b,b,o,positive
151 | x,o,o,x,x,o,x,o,x,positive
152 | x,o,o,x,x,o,x,b,b,positive
153 | x,o,o,x,x,o,o,x,x,positive
154 | x,o,o,x,x,o,b,b,x,positive
155 | x,o,o,x,x,b,x,o,b,positive
156 | x,o,o,x,x,b,x,b,o,positive
157 | x,o,o,x,x,b,o,b,x,positive
158 | x,o,o,x,x,b,b,o,x,positive
159 | x,o,o,x,o,x,x,x,o,positive
160 | x,o,o,x,o,x,x,b,b,positive
161 | x,o,o,x,o,o,x,x,x,positive
162 | x,o,o,x,o,b,x,x,b,positive
163 | x,o,o,x,o,b,x,b,x,positive
164 | x,o,o,x,b,x,x,o,b,positive
165 | x,o,o,x,b,x,x,b,o,positive
166 | x,o,o,x,b,o,x,x,b,positive
167 | x,o,o,x,b,o,x,b,x,positive
168 | x,o,o,x,b,b,x,x,o,positive
169 | x,o,o,x,b,b,x,o,x,positive
170 | x,o,o,x,b,b,x,b,b,positive
171 | x,o,o,o,x,x,x,o,x,positive
172 | x,o,o,o,x,x,o,x,x,positive
173 | x,o,o,o,x,x,b,b,x,positive
174 | x,o,o,o,x,o,x,x,x,positive
175 | x,o,o,o,x,b,x,b,x,positive
176 | x,o,o,o,x,b,b,x,x,positive
177 | x,o,o,o,o,x,x,x,x,positive
178 | x,o,o,o,b,b,x,x,x,positive
179 | x,o,o,b,x,x,o,b,x,positive
180 | x,o,o,b,x,x,b,o,x,positive
181 | x,o,o,b,x,o,x,b,x,positive
182 | x,o,o,b,x,o,b,x,x,positive
183 | x,o,o,b,x,b,x,o,x,positive
184 | x,o,o,b,x,b,o,x,x,positive
185 | x,o,o,b,x,b,b,b,x,positive
186 | x,o,o,b,o,b,x,x,x,positive
187 | x,o,o,b,b,o,x,x,x,positive
188 | x,o,b,x,x,x,o,o,b,positive
189 | x,o,b,x,x,x,o,b,o,positive
190 | x,o,b,x,x,x,b,o,o,positive
191 | x,o,b,x,x,o,x,o,b,positive
192 | x,o,b,x,x,o,x,b,o,positive
193 | x,o,b,x,x,o,o,b,x,positive
194 | x,o,b,x,x,o,b,o,x,positive
195 | x,o,b,x,x,b,x,o,o,positive
196 | x,o,b,x,x,b,o,o,x,positive
197 | x,o,b,x,o,x,x,b,o,positive
198 | x,o,b,x,o,o,x,x,b,positive
199 | x,o,b,x,o,o,x,b,x,positive
200 | x,o,b,x,o,b,x,x,o,positive
201 | x,o,b,x,o,b,x,b,b,positive
202 | x,o,b,x,b,x,x,o,o,positive
203 | x,o,b,x,b,o,x,x,o,positive
204 | x,o,b,x,b,o,x,o,x,positive
205 | x,o,b,x,b,o,x,b,b,positive
206 | x,o,b,x,b,b,x,o,b,positive
207 | x,o,b,x,b,b,x,b,o,positive
208 | x,o,b,o,x,x,o,b,x,positive
209 | x,o,b,o,x,x,b,o,x,positive
210 | x,o,b,o,x,o,x,b,x,positive
211 | x,o,b,o,x,o,b,x,x,positive
212 | x,o,b,o,x,b,x,o,x,positive
213 | x,o,b,o,x,b,o,x,x,positive
214 | x,o,b,o,x,b,b,b,x,positive
215 | x,o,b,o,o,b,x,x,x,positive
216 | x,o,b,o,b,o,x,x,x,positive
217 | x,o,b,b,x,x,o,o,x,positive
218 | x,o,b,b,x,o,x,o,x,positive
219 | x,o,b,b,x,o,o,x,x,positive
220 | x,o,b,b,x,o,b,b,x,positive
221 | x,o,b,b,x,b,o,b,x,positive
222 | x,o,b,b,x,b,b,o,x,positive
223 | x,o,b,b,o,o,x,x,x,positive
224 | x,b,x,x,o,o,x,o,b,positive
225 | x,b,x,x,o,o,x,b,o,positive
226 | x,b,x,x,o,b,x,o,o,positive
227 | x,b,x,x,b,o,x,o,o,positive
228 | x,b,x,o,x,o,x,o,b,positive
229 | x,b,x,o,x,o,x,b,o,positive
230 | x,b,x,o,x,o,o,b,x,positive
231 | x,b,x,o,x,o,b,o,x,positive
232 | x,b,x,o,x,b,x,o,o,positive
233 | x,b,x,o,x,b,o,o,x,positive
234 | x,b,x,o,o,x,o,b,x,positive
235 | x,b,x,o,o,x,b,o,x,positive
236 | x,b,x,o,b,x,o,o,x,positive
237 | x,b,x,b,x,o,x,o,o,positive
238 | x,b,x,b,x,o,o,o,x,positive
239 | x,b,x,b,o,x,o,o,x,positive
240 | x,b,o,x,x,x,o,o,b,positive
241 | x,b,o,x,x,x,o,b,o,positive
242 | x,b,o,x,x,x,b,o,o,positive
243 | x,b,o,x,x,o,x,o,b,positive
244 | x,b,o,x,x,o,o,b,x,positive
245 | x,b,o,x,x,o,b,o,x,positive
246 | x,b,o,x,x,b,x,o,o,positive
247 | x,b,o,x,x,b,o,o,x,positive
248 | x,b,o,x,o,x,x,o,b,positive
249 | x,b,o,x,o,x,x,b,o,positive
250 | x,b,o,x,o,o,x,x,b,positive
251 | x,b,o,x,o,o,x,b,x,positive
252 | x,b,o,x,o,b,x,x,o,positive
253 | x,b,o,x,o,b,x,o,x,positive
254 | x,b,o,x,o,b,x,b,b,positive
255 | x,b,o,x,b,x,x,o,o,positive
256 | x,b,o,x,b,o,x,o,x,positive
257 | x,b,o,x,b,o,x,b,b,positive
258 | x,b,o,x,b,b,x,o,b,positive
259 | x,b,o,x,b,b,x,b,o,positive
260 | x,b,o,o,x,x,o,b,x,positive
261 | x,b,o,o,x,x,b,o,x,positive
262 | x,b,o,o,x,o,x,b,x,positive
263 | x,b,o,o,x,o,b,x,x,positive
264 | x,b,o,o,x,b,x,o,x,positive
265 | x,b,o,o,x,b,o,x,x,positive
266 | x,b,o,o,x,b,b,b,x,positive
267 | x,b,o,o,o,b,x,x,x,positive
268 | x,b,o,o,b,o,x,x,x,positive
269 | x,b,o,b,x,x,o,o,x,positive
270 | x,b,o,b,x,o,x,o,x,positive
271 | x,b,o,b,x,o,o,x,x,positive
272 | x,b,o,b,x,o,b,b,x,positive
273 | x,b,o,b,x,b,o,b,x,positive
274 | x,b,o,b,x,b,b,o,x,positive
275 | x,b,o,b,o,o,x,x,x,positive
276 | x,b,b,x,x,o,x,o,o,positive
277 | x,b,b,x,x,o,o,o,x,positive
278 | x,b,b,x,o,x,x,o,o,positive
279 | x,b,b,x,o,o,x,x,o,positive
280 | x,b,b,x,o,o,x,o,x,positive
281 | x,b,b,x,o,o,x,b,b,positive
282 | x,b,b,x,o,b,x,o,b,positive
283 | x,b,b,x,o,b,x,b,o,positive
284 | x,b,b,x,b,o,x,o,b,positive
285 | x,b,b,x,b,o,x,b,o,positive
286 | x,b,b,x,b,b,x,o,o,positive
287 | x,b,b,o,x,x,o,o,x,positive
288 | x,b,b,o,x,o,x,o,x,positive
289 | x,b,b,o,x,o,o,x,x,positive
290 | x,b,b,o,x,o,b,b,x,positive
291 | x,b,b,o,x,b,o,b,x,positive
292 | x,b,b,o,x,b,b,o,x,positive
293 | x,b,b,b,x,o,o,b,x,positive
294 | x,b,b,b,x,o,b,o,x,positive
295 | x,b,b,b,x,b,o,o,x,positive
296 | o,x,x,x,x,o,x,o,o,positive
297 | o,x,x,x,x,o,o,x,o,positive
298 | o,x,x,x,o,x,o,o,x,positive
299 | o,x,x,o,x,x,x,o,o,positive
300 | o,x,x,o,x,o,x,x,o,positive
301 | o,x,x,o,x,o,x,o,x,positive
302 | o,x,x,o,x,o,x,b,b,positive
303 | o,x,x,o,x,o,b,x,b,positive
304 | o,x,x,o,x,b,x,o,b,positive
305 | o,x,x,o,x,b,x,b,o,positive
306 | o,x,x,o,x,b,b,x,o,positive
307 | o,x,x,o,o,x,x,o,x,positive
308 | o,x,x,o,o,x,b,b,x,positive
309 | o,x,x,o,b,x,b,o,x,positive
310 | o,x,x,b,x,o,x,o,b,positive
311 | o,x,x,b,x,o,x,b,o,positive
312 | o,x,x,b,x,o,o,x,b,positive
313 | o,x,x,b,x,o,b,x,o,positive
314 | o,x,x,b,x,b,x,o,o,positive
315 | o,x,x,b,x,b,o,x,o,positive
316 | o,x,x,b,o,x,o,b,x,positive
317 | o,x,x,b,o,x,b,o,x,positive
318 | o,x,x,b,b,x,o,o,x,positive
319 | o,x,o,x,x,x,x,o,o,positive
320 | o,x,o,x,x,x,o,x,o,positive
321 | o,x,o,x,x,x,o,o,x,positive
322 | o,x,o,x,x,x,o,b,b,positive
323 | o,x,o,x,x,x,b,o,b,positive
324 | o,x,o,x,x,x,b,b,o,positive
325 | o,x,o,x,x,o,o,x,x,positive
326 | o,x,o,x,x,o,b,x,b,positive
327 | o,x,o,x,x,b,o,x,b,positive
328 | o,x,o,x,x,b,b,x,o,positive
329 | o,x,o,x,o,o,x,x,x,positive
330 | o,x,o,o,x,x,x,x,o,positive
331 | o,x,o,o,x,x,b,x,b,positive
332 | o,x,o,o,x,o,x,x,x,positive
333 | o,x,o,o,x,b,x,x,b,positive
334 | o,x,o,o,x,b,b,x,x,positive
335 | o,x,o,o,o,x,x,x,x,positive
336 | o,x,o,o,b,b,x,x,x,positive
337 | o,x,o,b,x,x,o,x,b,positive
338 | o,x,o,b,x,x,b,x,o,positive
339 | o,x,o,b,x,o,x,x,b,positive
340 | o,x,o,b,x,o,b,x,x,positive
341 | o,x,o,b,x,b,x,x,o,positive
342 | o,x,o,b,x,b,o,x,x,positive
343 | o,x,o,b,x,b,b,x,b,positive
344 | o,x,o,b,o,b,x,x,x,positive
345 | o,x,o,b,b,o,x,x,x,positive
346 | o,x,b,x,x,x,o,o,b,positive
347 | o,x,b,x,x,x,o,b,o,positive
348 | o,x,b,x,x,x,b,o,o,positive
349 | o,x,b,x,x,o,o,x,b,positive
350 | o,x,b,x,x,o,b,x,o,positive
351 | o,x,b,x,x,b,o,x,o,positive
352 | o,x,b,o,x,x,b,x,o,positive
353 | o,x,b,o,x,o,x,x,b,positive
354 | o,x,b,o,x,o,b,x,x,positive
355 | o,x,b,o,x,b,x,x,o,positive
356 | o,x,b,o,x,b,b,x,b,positive
357 | o,x,b,o,o,b,x,x,x,positive
358 | o,x,b,o,b,o,x,x,x,positive
359 | o,x,b,b,x,x,o,x,o,positive
360 | o,x,b,b,x,o,x,x,o,positive
361 | o,x,b,b,x,o,o,x,x,positive
362 | o,x,b,b,x,o,b,x,b,positive
363 | o,x,b,b,x,b,o,x,b,positive
364 | o,x,b,b,x,b,b,x,o,positive
365 | o,x,b,b,o,o,x,x,x,positive
366 | o,o,x,x,x,x,x,o,o,positive
367 | o,o,x,x,x,x,o,x,o,positive
368 | o,o,x,x,x,x,o,o,x,positive
369 | o,o,x,x,x,x,o,b,b,positive
370 | o,o,x,x,x,x,b,o,b,positive
371 | o,o,x,x,x,x,b,b,o,positive
372 | o,o,x,x,x,o,x,x,o,positive
373 | o,o,x,x,x,o,x,o,x,positive
374 | o,o,x,x,x,o,x,b,b,positive
375 | o,o,x,x,x,b,x,o,b,positive
376 | o,o,x,x,x,b,x,b,o,positive
377 | o,o,x,x,o,x,o,x,x,positive
378 | o,o,x,x,o,x,b,b,x,positive
379 | o,o,x,x,o,o,x,x,x,positive
380 | o,o,x,x,b,x,o,b,x,positive
381 | o,o,x,x,b,x,b,o,x,positive
382 | o,o,x,o,x,x,x,x,o,positive
383 | o,o,x,o,x,x,x,o,x,positive
384 | o,o,x,o,x,x,x,b,b,positive
385 | o,o,x,o,x,x,b,b,x,positive
386 | o,o,x,o,x,o,x,x,x,positive
387 | o,o,x,o,x,b,x,x,b,positive
388 | o,o,x,o,x,b,x,b,x,positive
389 | o,o,x,o,o,x,x,x,x,positive
390 | o,o,x,o,b,x,x,b,x,positive
391 | o,o,x,o,b,x,b,x,x,positive
392 | o,o,x,o,b,b,x,x,x,positive
393 | o,o,x,b,x,x,x,o,b,positive
394 | o,o,x,b,x,x,x,b,o,positive
395 | o,o,x,b,x,x,o,b,x,positive
396 | o,o,x,b,x,x,b,o,x,positive
397 | o,o,x,b,x,o,x,x,b,positive
398 | o,o,x,b,x,o,x,b,x,positive
399 | o,o,x,b,x,b,x,x,o,positive
400 | o,o,x,b,x,b,x,o,x,positive
401 | o,o,x,b,x,b,x,b,b,positive
402 | o,o,x,b,o,x,x,b,x,positive
403 | o,o,x,b,o,x,b,x,x,positive
404 | o,o,x,b,o,b,x,x,x,positive
405 | o,o,x,b,b,x,x,o,x,positive
406 | o,o,x,b,b,x,o,x,x,positive
407 | o,o,x,b,b,x,b,b,x,positive
408 | o,o,x,b,b,o,x,x,x,positive
409 | o,o,b,x,x,x,x,o,b,positive
410 | o,o,b,x,x,x,x,b,o,positive
411 | o,o,b,x,x,x,o,x,b,positive
412 | o,o,b,x,x,x,o,b,x,positive
413 | o,o,b,x,x,x,b,x,o,positive
414 | o,o,b,x,x,x,b,o,x,positive
415 | o,o,b,x,x,x,b,b,b,positive
416 | o,o,b,x,o,b,x,x,x,positive
417 | o,o,b,x,b,o,x,x,x,positive
418 | o,o,b,o,x,b,x,x,x,positive
419 | o,o,b,o,b,x,x,x,x,positive
420 | o,o,b,b,x,o,x,x,x,positive
421 | o,o,b,b,o,x,x,x,x,positive
422 | o,o,b,b,b,b,x,x,x,positive
423 | o,b,x,x,x,x,o,o,b,positive
424 | o,b,x,x,x,x,o,b,o,positive
425 | o,b,x,x,x,x,b,o,o,positive
426 | o,b,x,x,x,o,x,o,b,positive
427 | o,b,x,x,x,o,x,b,o,positive
428 | o,b,x,x,x,b,x,o,o,positive
429 | o,b,x,x,o,x,o,b,x,positive
430 | o,b,x,x,o,x,b,o,x,positive
431 | o,b,x,x,b,x,o,o,x,positive
432 | o,b,x,o,x,x,x,o,b,positive
433 | o,b,x,o,x,x,x,b,o,positive
434 | o,b,x,o,x,x,b,o,x,positive
435 | o,b,x,o,x,o,x,x,b,positive
436 | o,b,x,o,x,o,x,b,x,positive
437 | o,b,x,o,x,b,x,x,o,positive
438 | o,b,x,o,x,b,x,o,x,positive
439 | o,b,x,o,x,b,x,b,b,positive
440 | o,b,x,o,o,x,x,b,x,positive
441 | o,b,x,o,o,x,b,x,x,positive
442 | o,b,x,o,o,b,x,x,x,positive
443 | o,b,x,o,b,x,x,o,x,positive
444 | o,b,x,o,b,x,b,b,x,positive
445 | o,b,x,o,b,o,x,x,x,positive
446 | o,b,x,b,x,x,x,o,o,positive
447 | o,b,x,b,x,x,o,o,x,positive
448 | o,b,x,b,x,o,x,x,o,positive
449 | o,b,x,b,x,o,x,o,x,positive
450 | o,b,x,b,x,o,x,b,b,positive
451 | o,b,x,b,x,b,x,o,b,positive
452 | o,b,x,b,x,b,x,b,o,positive
453 | o,b,x,b,o,x,x,o,x,positive
454 | o,b,x,b,o,x,o,x,x,positive
455 | o,b,x,b,o,x,b,b,x,positive
456 | o,b,x,b,o,o,x,x,x,positive
457 | o,b,x,b,b,x,o,b,x,positive
458 | o,b,x,b,b,x,b,o,x,positive
459 | o,b,o,x,x,x,x,o,b,positive
460 | o,b,o,x,x,x,x,b,o,positive
461 | o,b,o,x,x,x,o,x,b,positive
462 | o,b,o,x,x,x,o,b,x,positive
463 | o,b,o,x,x,x,b,x,o,positive
464 | o,b,o,x,x,x,b,o,x,positive
465 | o,b,o,x,x,x,b,b,b,positive
466 | o,b,o,x,o,b,x,x,x,positive
467 | o,b,o,x,b,o,x,x,x,positive
468 | o,b,o,o,x,b,x,x,x,positive
469 | o,b,o,o,b,x,x,x,x,positive
470 | o,b,o,b,x,o,x,x,x,positive
471 | o,b,o,b,o,x,x,x,x,positive
472 | o,b,o,b,b,b,x,x,x,positive
473 | o,b,b,x,x,x,x,o,o,positive
474 | o,b,b,x,x,x,o,x,o,positive
475 | o,b,b,x,x,x,o,o,x,positive
476 | o,b,b,x,x,x,o,b,b,positive
477 | o,b,b,x,x,x,b,o,b,positive
478 | o,b,b,x,x,x,b,b,o,positive
479 | o,b,b,x,o,o,x,x,x,positive
480 | o,b,b,o,x,o,x,x,x,positive
481 | o,b,b,o,o,x,x,x,x,positive
482 | o,b,b,o,b,b,x,x,x,positive
483 | o,b,b,b,o,b,x,x,x,positive
484 | o,b,b,b,b,o,x,x,x,positive
485 | b,x,x,o,x,o,x,o,b,positive
486 | b,x,x,o,x,o,x,b,o,positive
487 | b,x,x,o,x,o,o,x,b,positive
488 | b,x,x,o,x,o,b,x,o,positive
489 | b,x,x,o,x,b,x,o,o,positive
490 | b,x,x,o,x,b,o,x,o,positive
491 | b,x,x,o,o,x,o,b,x,positive
492 | b,x,x,o,o,x,b,o,x,positive
493 | b,x,x,o,b,x,o,o,x,positive
494 | b,x,x,b,x,o,x,o,o,positive
495 | b,x,x,b,x,o,o,x,o,positive
496 | b,x,x,b,o,x,o,o,x,positive
497 | b,x,o,x,x,x,o,o,b,positive
498 | b,x,o,x,x,x,o,b,o,positive
499 | b,x,o,x,x,x,b,o,o,positive
500 | b,x,o,x,x,o,o,x,b,positive
501 | b,x,o,x,x,b,o,x,o,positive
502 | b,x,o,o,x,x,o,x,b,positive
503 | b,x,o,o,x,x,b,x,o,positive
504 | b,x,o,o,x,o,x,x,b,positive
505 | b,x,o,o,x,o,b,x,x,positive
506 | b,x,o,o,x,b,x,x,o,positive
507 | b,x,o,o,x,b,o,x,x,positive
508 | b,x,o,o,x,b,b,x,b,positive
509 | b,x,o,o,o,b,x,x,x,positive
510 | b,x,o,o,b,o,x,x,x,positive
511 | b,x,o,b,x,x,o,x,o,positive
512 | b,x,o,b,x,o,o,x,x,positive
513 | b,x,o,b,x,o,b,x,b,positive
514 | b,x,o,b,x,b,o,x,b,positive
515 | b,x,o,b,x,b,b,x,o,positive
516 | b,x,o,b,o,o,x,x,x,positive
517 | b,x,b,x,x,o,o,x,o,positive
518 | b,x,b,o,x,x,o,x,o,positive
519 | b,x,b,o,x,o,x,x,o,positive
520 | b,x,b,o,x,o,o,x,x,positive
521 | b,x,b,o,x,o,b,x,b,positive
522 | b,x,b,o,x,b,o,x,b,positive
523 | b,x,b,o,x,b,b,x,o,positive
524 | b,x,b,b,x,o,o,x,b,positive
525 | b,x,b,b,x,o,b,x,o,positive
526 | b,x,b,b,x,b,o,x,o,positive
527 | b,o,x,x,x,x,o,o,b,positive
528 | b,o,x,x,x,x,o,b,o,positive
529 | b,o,x,x,x,x,b,o,o,positive
530 | b,o,x,x,x,o,x,o,b,positive
531 | b,o,x,x,x,o,x,b,o,positive
532 | b,o,x,x,x,b,x,o,o,positive
533 | b,o,x,x,o,x,o,b,x,positive
534 | b,o,x,x,b,x,o,o,x,positive
535 | b,o,x,o,x,x,x,o,b,positive
536 | b,o,x,o,x,x,x,b,o,positive
537 | b,o,x,o,x,x,o,b,x,positive
538 | b,o,x,o,x,x,b,o,x,positive
539 | b,o,x,o,x,o,x,x,b,positive
540 | b,o,x,o,x,o,x,b,x,positive
541 | b,o,x,o,x,b,x,x,o,positive
542 | b,o,x,o,x,b,x,o,x,positive
543 | b,o,x,o,x,b,x,b,b,positive
544 | b,o,x,o,o,x,x,b,x,positive
545 | b,o,x,o,o,x,b,x,x,positive
546 | b,o,x,o,o,b,x,x,x,positive
547 | b,o,x,o,b,x,x,o,x,positive
548 | b,o,x,o,b,x,o,x,x,positive
549 | b,o,x,o,b,x,b,b,x,positive
550 | b,o,x,o,b,o,x,x,x,positive
551 | b,o,x,b,x,x,x,o,o,positive
552 | b,o,x,b,x,x,o,o,x,positive
553 | b,o,x,b,x,o,x,x,o,positive
554 | b,o,x,b,x,o,x,o,x,positive
555 | b,o,x,b,x,o,x,b,b,positive
556 | b,o,x,b,x,b,x,o,b,positive
557 | b,o,x,b,x,b,x,b,o,positive
558 | b,o,x,b,o,x,o,x,x,positive
559 | b,o,x,b,o,x,b,b,x,positive
560 | b,o,x,b,o,o,x,x,x,positive
561 | b,o,x,b,b,x,o,b,x,positive
562 | b,o,x,b,b,x,b,o,x,positive
563 | b,o,o,x,x,x,x,o,b,positive
564 | b,o,o,x,x,x,x,b,o,positive
565 | b,o,o,x,x,x,o,x,b,positive
566 | b,o,o,x,x,x,o,b,x,positive
567 | b,o,o,x,x,x,b,x,o,positive
568 | b,o,o,x,x,x,b,o,x,positive
569 | b,o,o,x,x,x,b,b,b,positive
570 | b,o,o,x,o,b,x,x,x,positive
571 | b,o,o,x,b,o,x,x,x,positive
572 | b,o,o,o,x,b,x,x,x,positive
573 | b,o,o,o,b,x,x,x,x,positive
574 | b,o,o,b,x,o,x,x,x,positive
575 | b,o,o,b,o,x,x,x,x,positive
576 | b,o,o,b,b,b,x,x,x,positive
577 | b,o,b,x,x,x,x,o,o,positive
578 | b,o,b,x,x,x,o,x,o,positive
579 | b,o,b,x,x,x,o,o,x,positive
580 | b,o,b,x,x,x,o,b,b,positive
581 | b,o,b,x,x,x,b,o,b,positive
582 | b,o,b,x,x,x,b,b,o,positive
583 | b,o,b,x,o,o,x,x,x,positive
584 | b,o,b,o,x,o,x,x,x,positive
585 | b,o,b,o,o,x,x,x,x,positive
586 | b,o,b,o,b,b,x,x,x,positive
587 | b,o,b,b,o,b,x,x,x,positive
588 | b,o,b,b,b,o,x,x,x,positive
589 | b,b,x,x,x,o,x,o,o,positive
590 | b,b,x,x,o,x,o,o,x,positive
591 | b,b,x,o,x,x,x,o,o,positive
592 | b,b,x,o,x,x,o,o,x,positive
593 | b,b,x,o,x,o,x,x,o,positive
594 | b,b,x,o,x,o,x,o,x,positive
595 | b,b,x,o,x,o,x,b,b,positive
596 | b,b,x,o,x,b,x,o,b,positive
597 | b,b,x,o,x,b,x,b,o,positive
598 | b,b,x,o,o,x,x,o,x,positive
599 | b,b,x,o,o,x,o,x,x,positive
600 | b,b,x,o,o,x,b,b,x,positive
601 | b,b,x,o,b,x,o,b,x,positive
602 | b,b,x,o,b,x,b,o,x,positive
603 | b,b,x,b,x,o,x,o,b,positive
604 | b,b,x,b,x,o,x,b,o,positive
605 | b,b,x,b,x,b,x,o,o,positive
606 | b,b,x,b,o,x,o,b,x,positive
607 | b,b,x,b,o,x,b,o,x,positive
608 | b,b,x,b,b,x,o,o,x,positive
609 | b,b,o,x,x,x,x,o,o,positive
610 | b,b,o,x,x,x,o,x,o,positive
611 | b,b,o,x,x,x,o,o,x,positive
612 | b,b,o,x,x,x,o,b,b,positive
613 | b,b,o,x,x,x,b,o,b,positive
614 | b,b,o,x,x,x,b,b,o,positive
615 | b,b,o,x,o,o,x,x,x,positive
616 | b,b,o,o,x,o,x,x,x,positive
617 | b,b,o,o,o,x,x,x,x,positive
618 | b,b,o,o,b,b,x,x,x,positive
619 | b,b,o,b,o,b,x,x,x,positive
620 | b,b,o,b,b,o,x,x,x,positive
621 | b,b,b,x,x,x,o,o,b,positive
622 | b,b,b,x,x,x,o,b,o,positive
623 | b,b,b,x,x,x,b,o,o,positive
624 | b,b,b,o,o,b,x,x,x,positive
625 | b,b,b,o,b,o,x,x,x,positive
626 | b,b,b,b,o,o,x,x,x,positive
627 | x,x,o,x,x,o,o,b,o,negative
628 | x,x,o,x,x,o,b,o,o,negative
629 | x,x,o,x,x,b,o,o,o,negative
630 | x,x,o,x,o,x,o,o,b,negative
631 | x,x,o,x,o,x,o,b,o,negative
632 | x,x,o,x,o,o,o,x,b,negative
633 | x,x,o,x,o,o,o,b,x,negative
634 | x,x,o,x,o,o,b,x,o,negative
635 | x,x,o,x,o,b,o,x,o,negative
636 | x,x,o,x,o,b,o,o,x,negative
637 | x,x,o,x,o,b,o,b,b,negative
638 | x,x,o,x,b,x,o,o,o,negative
639 | x,x,o,x,b,o,o,x,o,negative
640 | x,x,o,x,b,o,b,b,o,negative
641 | x,x,o,o,x,o,x,b,o,negative
642 | x,x,o,o,o,x,o,x,b,negative
643 | x,x,o,o,o,x,o,b,x,negative
644 | x,x,o,o,o,o,x,x,b,negative
645 | x,x,o,o,o,o,x,b,x,negative
646 | x,x,o,o,o,o,b,x,x,negative
647 | x,x,o,o,o,b,o,x,x,negative
648 | x,x,o,o,b,o,x,x,o,negative
649 | x,x,o,b,x,x,o,o,o,negative
650 | x,x,o,b,x,o,x,o,o,negative
651 | x,x,o,b,x,o,b,b,o,negative
652 | x,x,o,b,o,x,o,x,o,negative
653 | x,x,o,b,o,x,o,o,x,negative
654 | x,x,o,b,o,x,o,b,b,negative
655 | x,x,o,b,o,o,x,x,o,negative
656 | x,x,o,b,o,o,o,x,x,negative
657 | x,x,o,b,o,b,o,x,b,negative
658 | x,x,o,b,o,b,o,b,x,negative
659 | x,x,o,b,b,o,x,b,o,negative
660 | x,x,o,b,b,o,b,x,o,negative
661 | x,x,b,x,x,o,o,o,o,negative
662 | x,x,b,x,o,x,o,o,o,negative
663 | x,x,b,x,b,b,o,o,o,negative
664 | x,x,b,o,x,x,o,o,o,negative
665 | x,x,b,o,o,o,x,x,o,negative
666 | x,x,b,o,o,o,x,o,x,negative
667 | x,x,b,o,o,o,x,b,b,negative
668 | x,x,b,o,o,o,o,x,x,negative
669 | x,x,b,o,o,o,b,x,b,negative
670 | x,x,b,o,o,o,b,b,x,negative
671 | x,x,b,b,x,b,o,o,o,negative
672 | x,x,b,b,b,x,o,o,o,negative
673 | x,o,x,x,x,b,o,o,o,negative
674 | x,o,x,x,o,x,o,o,b,negative
675 | x,o,x,x,o,x,b,o,o,negative
676 | x,o,x,x,o,o,b,o,x,negative
677 | x,o,x,x,o,b,o,o,x,negative
678 | x,o,x,x,o,b,b,o,b,negative
679 | x,o,x,x,b,x,o,o,o,negative
680 | x,o,x,o,o,x,x,o,b,negative
681 | x,o,x,o,o,o,x,x,b,negative
682 | x,o,x,o,o,o,x,b,x,negative
683 | x,o,x,o,o,o,b,x,x,negative
684 | x,o,x,o,o,b,x,o,x,negative
685 | x,o,x,b,x,x,o,o,o,negative
686 | x,o,x,b,o,x,x,o,o,negative
687 | x,o,x,b,o,x,b,o,b,negative
688 | x,o,x,b,o,o,x,o,x,negative
689 | x,o,x,b,o,b,x,o,b,negative
690 | x,o,x,b,o,b,b,o,x,negative
691 | x,o,o,x,x,o,b,x,o,negative
692 | x,o,o,x,o,x,o,x,b,negative
693 | x,o,o,x,o,x,o,b,x,negative
694 | x,o,o,x,o,x,b,o,x,negative
695 | x,o,o,x,o,b,o,x,x,negative
696 | x,o,o,b,x,o,x,x,o,negative
697 | x,o,o,b,o,x,x,o,x,negative
698 | x,o,o,b,o,x,o,x,x,negative
699 | x,o,b,x,o,x,o,o,x,negative
700 | x,o,b,x,o,x,b,o,b,negative
701 | x,o,b,x,o,b,b,o,x,negative
702 | x,o,b,o,o,x,x,o,x,negative
703 | x,o,b,b,o,x,x,o,b,negative
704 | x,o,b,b,o,x,b,o,x,negative
705 | x,o,b,b,o,b,x,o,x,negative
706 | x,b,x,x,x,o,o,o,o,negative
707 | x,b,x,x,o,x,o,o,o,negative
708 | x,b,x,x,b,b,o,o,o,negative
709 | x,b,x,o,x,x,o,o,o,negative
710 | x,b,x,o,o,o,x,x,o,negative
711 | x,b,x,o,o,o,x,o,x,negative
712 | x,b,x,o,o,o,x,b,b,negative
713 | x,b,x,o,o,o,o,x,x,negative
714 | x,b,x,o,o,o,b,x,b,negative
715 | x,b,x,o,o,o,b,b,x,negative
716 | x,b,x,b,x,b,o,o,o,negative
717 | x,b,x,b,b,x,o,o,o,negative
718 | x,b,o,x,x,o,o,x,o,negative
719 | x,b,o,x,x,o,b,b,o,negative
720 | x,b,o,x,o,x,o,x,o,negative
721 | x,b,o,x,o,x,o,o,x,negative
722 | x,b,o,x,o,x,o,b,b,negative
723 | x,b,o,x,o,o,o,x,x,negative
724 | x,b,o,x,o,b,o,x,b,negative
725 | x,b,o,x,o,b,o,b,x,negative
726 | x,b,o,x,b,o,b,x,o,negative
727 | x,b,o,o,x,o,x,x,o,negative
728 | x,b,o,o,o,x,o,x,x,negative
729 | x,b,o,b,x,o,x,b,o,negative
730 | x,b,o,b,x,o,b,x,o,negative
731 | x,b,o,b,o,x,o,x,b,negative
732 | x,b,o,b,o,x,o,b,x,negative
733 | x,b,o,b,o,b,o,x,x,negative
734 | x,b,o,b,b,o,x,x,o,negative
735 | x,b,b,x,x,b,o,o,o,negative
736 | x,b,b,x,b,x,o,o,o,negative
737 | x,b,b,o,o,o,x,x,b,negative
738 | x,b,b,o,o,o,x,b,x,negative
739 | x,b,b,o,o,o,b,x,x,negative
740 | x,b,b,b,x,x,o,o,o,negative
741 | o,x,x,x,x,b,o,o,o,negative
742 | o,x,x,x,o,x,o,b,o,negative
743 | o,x,x,x,o,x,b,o,o,negative
744 | o,x,x,x,o,o,x,b,o,negative
745 | o,x,x,x,o,o,b,x,o,negative
746 | o,x,x,x,o,b,x,o,o,negative
747 | o,x,x,x,o,b,o,x,o,negative
748 | o,x,x,x,o,b,b,b,o,negative
749 | o,x,x,x,b,x,o,o,o,negative
750 | o,x,x,o,x,x,o,o,b,negative
751 | o,x,x,o,x,x,o,b,o,negative
752 | o,x,x,o,x,o,o,b,x,negative
753 | o,x,x,o,x,b,o,o,x,negative
754 | o,x,x,o,x,b,o,b,b,negative
755 | o,x,x,o,o,x,x,b,o,negative
756 | o,x,x,o,o,x,o,x,b,negative
757 | o,x,x,o,o,x,b,x,o,negative
758 | o,x,x,o,o,o,x,x,b,negative
759 | o,x,x,o,o,o,x,b,x,negative
760 | o,x,x,o,o,o,b,x,x,negative
761 | o,x,x,o,o,b,x,x,o,negative
762 | o,x,x,o,o,b,o,x,x,negative
763 | o,x,x,o,b,x,o,x,o,negative
764 | o,x,x,o,b,x,o,b,b,negative
765 | o,x,x,o,b,o,o,x,x,negative
766 | o,x,x,o,b,b,o,x,b,negative
767 | o,x,x,o,b,b,o,b,x,negative
768 | o,x,x,b,x,x,o,o,o,negative
769 | o,x,x,b,o,x,x,o,o,negative
770 | o,x,x,b,o,x,o,x,o,negative
771 | o,x,x,b,o,x,b,b,o,negative
772 | o,x,x,b,o,o,x,x,o,negative
773 | o,x,x,b,o,b,x,b,o,negative
774 | o,x,x,b,o,b,b,x,o,negative
775 | o,x,o,x,x,o,x,b,o,negative
776 | o,x,o,x,o,x,x,b,o,negative
777 | o,x,o,x,o,x,o,x,b,negative
778 | o,x,o,x,o,x,o,b,x,negative
779 | o,x,o,x,o,x,b,x,o,negative
780 | o,x,o,x,o,b,x,x,o,negative
781 | o,x,o,x,o,b,o,x,x,negative
782 | o,x,o,x,b,o,x,x,o,negative
783 | o,x,o,o,x,x,o,b,x,negative
784 | o,x,o,o,b,x,o,x,x,negative
785 | o,x,o,b,o,x,x,x,o,negative
786 | o,x,o,b,o,x,o,x,x,negative
787 | o,x,b,x,o,x,x,o,o,negative
788 | o,x,b,x,o,x,o,x,o,negative
789 | o,x,b,x,o,x,b,b,o,negative
790 | o,x,b,x,o,o,x,x,o,negative
791 | o,x,b,x,o,b,x,b,o,negative
792 | o,x,b,x,o,b,b,x,o,negative
793 | o,x,b,o,x,x,o,o,x,negative
794 | o,x,b,o,x,x,o,b,b,negative
795 | o,x,b,o,x,b,o,b,x,negative
796 | o,x,b,o,o,x,x,x,o,negative
797 | o,x,b,o,o,x,o,x,x,negative
798 | o,x,b,o,b,x,o,x,b,negative
799 | o,x,b,o,b,x,o,b,x,negative
800 | o,x,b,o,b,b,o,x,x,negative
801 | o,x,b,b,o,x,x,b,o,negative
802 | o,x,b,b,o,x,b,x,o,negative
803 | o,x,b,b,o,b,x,x,o,negative
804 | o,o,x,x,o,x,x,o,b,negative
805 | o,o,x,x,o,x,x,b,o,negative
806 | o,o,x,x,o,x,b,x,o,negative
807 | o,o,x,x,o,b,x,x,o,negative
808 | o,o,x,x,o,b,x,o,x,negative
809 | o,o,x,o,x,x,o,x,b,negative
810 | o,o,x,o,x,b,o,x,x,negative
811 | o,o,x,b,o,x,x,x,o,negative
812 | o,o,o,x,x,o,x,x,b,negative
813 | o,o,o,x,x,o,x,b,x,negative
814 | o,o,o,x,x,o,b,x,x,negative
815 | o,o,o,x,x,b,x,x,o,negative
816 | o,o,o,x,x,b,x,o,x,negative
817 | o,o,o,x,x,b,x,b,b,negative
818 | o,o,o,x,x,b,o,x,x,negative
819 | o,o,o,x,x,b,b,x,b,negative
820 | o,o,o,x,x,b,b,b,x,negative
821 | o,o,o,x,o,x,x,x,b,negative
822 | o,o,o,x,o,x,x,b,x,negative
823 | o,o,o,x,o,x,b,x,x,negative
824 | o,o,o,x,b,x,x,x,o,negative
825 | o,o,o,x,b,x,x,o,x,negative
826 | o,o,o,x,b,x,x,b,b,negative
827 | o,o,o,x,b,x,o,x,x,negative
828 | o,o,o,x,b,x,b,x,b,negative
829 | o,o,o,x,b,x,b,b,x,negative
830 | o,o,o,x,b,b,x,x,b,negative
831 | o,o,o,x,b,b,x,b,x,negative
832 | o,o,o,x,b,b,b,x,x,negative
833 | o,o,o,o,x,x,x,x,b,negative
834 | o,o,o,o,x,x,x,b,x,negative
835 | o,o,o,o,x,x,b,x,x,negative
836 | o,o,o,b,x,x,x,x,o,negative
837 | o,o,o,b,x,x,x,o,x,negative
838 | o,o,o,b,x,x,x,b,b,negative
839 | o,o,o,b,x,x,o,x,x,negative
840 | o,o,o,b,x,x,b,x,b,negative
841 | o,o,o,b,x,x,b,b,x,negative
842 | o,o,o,b,x,b,x,x,b,negative
843 | o,o,o,b,x,b,x,b,x,negative
844 | o,o,o,b,x,b,b,x,x,negative
845 | o,o,o,b,b,x,x,x,b,negative
846 | o,o,o,b,b,x,x,b,x,negative
847 | o,o,o,b,b,x,b,x,x,negative
848 | o,o,b,x,o,x,x,x,o,negative
849 | o,o,b,x,o,x,x,o,x,negative
850 | o,o,b,o,x,x,o,x,x,negative
851 | o,b,x,x,o,x,x,o,o,negative
852 | o,b,x,x,o,x,o,x,o,negative
853 | o,b,x,x,o,x,b,b,o,negative
854 | o,b,x,x,o,o,x,x,o,negative
855 | o,b,x,x,o,b,x,b,o,negative
856 | o,b,x,x,o,b,b,x,o,negative
857 | o,b,x,o,x,x,o,x,o,negative
858 | o,b,x,o,x,x,o,b,b,negative
859 | o,b,x,o,x,o,o,x,x,negative
860 | o,b,x,o,x,b,o,x,b,negative
861 | o,b,x,o,x,b,o,b,x,negative
862 | o,b,x,o,o,x,x,x,o,negative
863 | o,b,x,o,b,x,o,x,b,negative
864 | o,b,x,o,b,b,o,x,x,negative
865 | o,b,x,b,o,x,x,b,o,negative
866 | o,b,x,b,o,x,b,x,o,negative
867 | o,b,x,b,o,b,x,x,o,negative
868 | o,b,o,x,x,o,x,x,o,negative
869 | o,b,o,x,o,x,x,x,o,negative
870 | o,b,o,x,o,x,o,x,x,negative
871 | o,b,o,o,x,x,o,x,x,negative
872 | o,b,b,x,o,x,x,b,o,negative
873 | o,b,b,x,o,x,b,x,o,negative
874 | o,b,b,x,o,b,x,x,o,negative
875 | o,b,b,o,x,x,o,x,b,negative
876 | o,b,b,o,x,x,o,b,x,negative
877 | o,b,b,o,x,b,o,x,x,negative
878 | o,b,b,o,b,x,o,x,x,negative
879 | o,b,b,b,o,x,x,x,o,negative
880 | b,x,x,x,x,o,o,o,o,negative
881 | b,x,x,x,o,x,o,o,o,negative
882 | b,x,x,x,b,b,o,o,o,negative
883 | b,x,x,o,x,x,o,o,o,negative
884 | b,x,x,o,o,o,x,x,o,negative
885 | b,x,x,o,o,o,x,o,x,negative
886 | b,x,x,o,o,o,x,b,b,negative
887 | b,x,x,o,o,o,o,x,x,negative
888 | b,x,x,o,o,o,b,x,b,negative
889 | b,x,x,o,o,o,b,b,x,negative
890 | b,x,x,b,x,b,o,o,o,negative
891 | b,x,x,b,b,x,o,o,o,negative
892 | b,x,o,x,x,o,x,o,o,negative
893 | b,x,o,x,x,o,b,b,o,negative
894 | b,x,o,x,o,x,o,x,o,negative
895 | b,x,o,x,o,x,o,o,x,negative
896 | b,x,o,x,o,x,o,b,b,negative
897 | b,x,o,x,o,o,x,x,o,negative
898 | b,x,o,x,o,o,o,x,x,negative
899 | b,x,o,x,o,b,o,x,b,negative
900 | b,x,o,x,o,b,o,b,x,negative
901 | b,x,o,x,b,o,x,b,o,negative
902 | b,x,o,x,b,o,b,x,o,negative
903 | b,x,o,o,o,x,o,x,x,negative
904 | b,x,o,b,x,o,x,b,o,negative
905 | b,x,o,b,o,x,o,x,b,negative
906 | b,x,o,b,o,x,o,b,x,negative
907 | b,x,o,b,o,b,o,x,x,negative
908 | b,x,o,b,b,o,x,x,o,negative
909 | b,x,b,x,x,b,o,o,o,negative
910 | b,x,b,x,b,x,o,o,o,negative
911 | b,x,b,o,o,o,x,x,b,negative
912 | b,x,b,o,o,o,x,b,x,negative
913 | b,x,b,o,o,o,b,x,x,negative
914 | b,x,b,b,x,x,o,o,o,negative
915 | b,o,x,x,o,x,x,o,o,negative
916 | b,o,x,x,o,x,b,o,b,negative
917 | b,o,x,x,o,o,x,o,x,negative
918 | b,o,x,x,o,b,x,o,b,negative
919 | b,o,x,x,o,b,b,o,x,negative
920 | b,o,x,b,o,x,x,o,b,negative
921 | b,o,x,b,o,b,x,o,x,negative
922 | b,o,o,x,x,o,x,x,o,negative
923 | b,o,o,x,o,x,x,o,x,negative
924 | b,o,o,x,o,x,o,x,x,negative
925 | b,o,b,x,o,x,x,o,b,negative
926 | b,o,b,x,o,x,b,o,x,negative
927 | b,o,b,x,o,b,x,o,x,negative
928 | b,o,b,b,o,x,x,o,x,negative
929 | b,b,x,x,x,b,o,o,o,negative
930 | b,b,x,x,b,x,o,o,o,negative
931 | b,b,x,o,o,o,x,x,b,negative
932 | b,b,x,o,o,o,x,b,x,negative
933 | b,b,x,o,o,o,b,x,x,negative
934 | b,b,x,b,x,x,o,o,o,negative
935 | b,b,o,x,x,o,x,b,o,negative
936 | b,b,o,x,x,o,b,x,o,negative
937 | b,b,o,x,o,x,o,x,b,negative
938 | b,b,o,x,o,x,o,b,x,negative
939 | b,b,o,x,o,b,o,x,x,negative
940 | b,b,o,x,b,o,x,x,o,negative
941 | b,b,o,b,x,o,x,x,o,negative
942 | b,b,o,b,o,x,o,x,x,negative
943 | x,x,o,o,x,x,x,o,o,negative
944 | x,x,o,o,o,x,x,x,o,negative
945 | x,x,o,o,o,x,x,o,x,negative
946 | x,o,x,x,x,o,o,x,o,negative
947 | x,o,x,x,o,x,o,x,o,negative
948 | x,o,x,x,o,o,o,x,x,negative
949 | x,o,x,o,x,x,o,x,o,negative
950 | x,o,x,o,o,x,x,x,o,negative
951 | x,o,o,o,x,x,x,x,o,negative
952 | o,x,x,x,x,o,o,o,x,negative
953 | o,x,x,x,o,o,x,o,x,negative
954 | o,x,x,x,o,o,o,x,x,negative
955 | o,x,o,x,x,o,x,o,x,negative
956 | o,x,o,x,o,x,x,o,x,negative
957 | o,x,o,o,x,x,x,o,x,negative
958 | o,o,x,x,x,o,o,x,x,negative
--------------------------------------------------------------------------------